# Mastering Micro Frontends: A Deep Dive into Next-Gen Front-End Architecture

*id 1882219 · published 2024-06-09T16:41:42 · tags: microfrontends, frontendarchitecture, scalabledevelopment, modularwebapplications*

https://dev.to/yelethe1st/mastering-micro-frontends-a-deep-dive-into-next-gen-front-end-architecture-4mm3
In the rapidly evolving landscape of front-end development, architects and engineers are continually seeking innovative solutions to the challenges of building large-scale, enterprise-grade applications. As a senior front-end engineer at a Fortune 500 company, I've had the privilege of leading teams in adopting cutting-edge technologies and architectural patterns. In this comprehensive guide, we'll explore the intricacies of Micro Frontends: their fundamental principles, real-world applications, implementation strategies, and considerations for success.

## Unraveling the Concept of Micro Frontends

At its core, Micro Frontends is a paradigm shift in front-end architecture, inspired by the principles of microservices but tailored to the unique challenges of user interface development. It entails decomposing monolithic front-end applications into smaller, self-contained units, each responsible for a specific feature or functionality. This modular approach empowers teams to work independently, enabling faster iteration cycles, improved scalability, and enhanced maintainability.

### Delving into Key Components

1. **Decomposition**: Micro Frontends promote a granular approach to UI development, breaking down complex user interfaces into modular components that align with business domains or user journeys. This modularization fosters a clear separation of concerns, enabling teams to focus on their respective areas of expertise without being hindered by dependencies or conflicts with other teams.
2. **Independence**: Each Micro Frontend operates autonomously, with its own codebase, dependencies, and deployment pipeline. This independence fosters a culture of ownership and accountability, empowering teams to innovate and iterate at their own pace. By decoupling front-end components from each other, Micro Frontends minimize the risk of unintended side effects and enable teams to release updates with confidence.
3. **Integration**: While Micro Frontends are developed and deployed independently, they must seamlessly integrate at runtime to provide a cohesive user experience. This integration can be achieved through client-side or server-side composition techniques, each offering unique trade-offs in performance, flexibility, and complexity. Client-side composition loads and renders Micro Frontends in the browser using JavaScript frameworks like single-spa or Web Components, while server-side composition stitches Micro Frontends together on the server and serves them as a single page to the client.

## Elevating the Business Value of Micro Frontends

As a senior front-end engineer, I've witnessed firsthand the transformative impact of Micro Frontends on enterprise-grade applications. Let's explore how this architectural pattern unlocks new possibilities across various industries and use cases.

### Driving Agility in E-Commerce Platforms

In the fiercely competitive landscape of e-commerce, agility is paramount. By adopting Micro Frontends, e-commerce companies can accelerate innovation cycles, introduce new features, and personalize user experiences at scale. From product discovery to checkout, each aspect of the shopping journey can be modularized and optimized independently, resulting in higher conversion rates and customer satisfaction.

### Empowering Collaboration in Enterprise Applications

Enterprise applications often span multiple business units, departments, and geographies, presenting unique challenges in collaboration and alignment. Micro Frontends provide a pragmatic solution by enabling cross-functional teams to work in parallel, focusing on their respective domains without being constrained by centralized governance. Whether it's CRM systems, HR portals, or supply chain management tools, Micro Frontends foster collaboration, reduce dependencies, and accelerate time-to-market.
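To make the client-side composition idea concrete, here is a minimal, framework-agnostic sketch of a runtime registry that maps URL prefixes to independently deployed micro frontends. All names here are illustrative (not from any specific library); a real project would typically reach for single-spa or Webpack Module Federation instead.

```javascript
// Minimal sketch of client-side composition: a registry mapping URL
// prefixes to micro frontend lifecycles. Each "team" ships its own
// bundle that registers a mount/unmount pair; the shell only routes.
const registry = [];

function registerMicroFrontend(prefix, lifecycle) {
  // lifecycle: { mount(), unmount() } — in a browser these would
  // render into and tear down a DOM container element.
  registry.push({ prefix, lifecycle });
}

function route(path) {
  // Pick the micro frontend whose prefix matches the requested path.
  const match = registry.find((m) => path.startsWith(m.prefix));
  return match ? match.lifecycle : null;
}

// Two hypothetical teams register their units independently:
registerMicroFrontend('/cart', { mount: () => 'cart mounted', unmount: () => {} });
registerMicroFrontend('/search', { mount: () => 'search mounted', unmount: () => {} });

console.log(route('/cart/items').mount()); // cart mounted
console.log(route('/unknown')); // null — no micro frontend owns this path
```

The key property is that the shell knows only prefixes and lifecycles, never the internals of any unit, which is what lets teams deploy independently.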
### Enabling Customization in Content Management Systems

Content management systems (CMS) serve as the backbone of digital experiences, empowering content creators to publish, manage, and distribute content seamlessly. With Micro Frontends, CMS providers can offer customizable interfaces tailored to the unique needs of their customers. Whether it's a corporate intranet, a news publishing platform, or an e-learning portal, Micro Frontends enable fine-grained customization, extensibility, and interoperability, freeing content creators from rigid templates and workflows.

## Navigating the Implementation Maze

Implementing Micro Frontends is not without its challenges, especially in enterprise-scale applications. Let's explore some of the key considerations and strategies for success.

### Architectural Decisions

- **Granularity vs. Coherence**: Striking the right balance between granularity and coherence is crucial when designing Micro Frontends. Fine-grained Micro Frontends offer flexibility and autonomy, but may lead to fragmentation and inconsistency in the user experience. Conversely, coarse-grained Micro Frontends may sacrifice autonomy for consistency, resulting in tight coupling and slower iteration cycles. Finding the sweet spot requires thoughtful analysis of business requirements, user journeys, and technical constraints.
- **Communication and Data Sharing**: Establishing clear communication channels and data-sharing mechanisms between Micro Frontends is essential for a cohesive user experience. Whether it's inter-frame communication, shared state management, or event-driven architectures, architects must carefully evaluate the trade-offs and choose the most appropriate approach based on performance, scalability, and complexity.
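As one concrete option from the list above, event-driven communication can be sketched as a tiny publish/subscribe bus: micro frontends emit and listen to named events without importing each other. This is an illustrative sketch (in the browser you would typically use `window.dispatchEvent` with `CustomEvent` instead of a hand-rolled bus).

```javascript
// Sketch of event-driven communication between micro frontends:
// a minimal pub/sub bus keeps the units decoupled — the emitter
// never knows who (if anyone) is listening.
function createEventBus() {
  const handlers = new Map();
  return {
    on(topic, fn) {
      if (!handlers.has(topic)) handlers.set(topic, []);
      handlers.get(topic).push(fn);
    },
    emit(topic, payload) {
      (handlers.get(topic) || []).forEach((fn) => fn(payload));
    },
  };
}

const bus = createEventBus();
const received = [];

// The hypothetical "cart" unit subscribes without importing "catalog":
bus.on('product:added', (p) => received.push(p.sku));

// The "catalog" unit announces an event; the cart reacts.
bus.emit('product:added', { sku: 'ABC-123' });
console.log(received); // ['ABC-123']
```

The trade-off noted in the text applies: events keep coupling low, but debugging "who reacted to what" gets harder as the number of topics grows.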
### Deployment Strategies

- **Continuous Delivery Pipeline**: A robust continuous delivery pipeline is critical for deploying Micro Frontends independently and reliably. This includes automated testing, versioning, dependency management, and rollback mechanisms to ensure smooth and seamless releases.
- **Progressive Rollouts**: Progressive rollout strategies such as canary deployments, feature toggles, and A/B testing help mitigate risk and validate changes before they reach a wider audience. This iterative approach lets teams experiment, gather feedback, and iterate based on real-world usage data, ultimately leading to better outcomes and higher user satisfaction.

## Embracing the Future of Front-End Development

As we embark on this journey into the realm of Micro Frontends, it's essential to embrace a mindset of continuous learning, experimentation, and collaboration. By harnessing the power of Micro Frontends, we can unlock new possibilities, drive innovation, and shape the future of front-end development in the digital age. I'm excited to be at the forefront of this transformation, and I hope you'll join me in mastering the art of Micro Frontends.
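The progressive rollout idea above can be sketched with a deterministic user-bucketing check: hash the user id into a bucket from 0–99 and enable the flag for users below the rollout percentage, so each user consistently sees the same variant. Everything here (the hash, the flag shape) is an illustrative assumption, not a specific product's API.

```javascript
// Sketch of a canary / feature-toggle check for progressive rollouts.
// A deterministic (non-cryptographic) hash buckets each user into 0..99.
function bucket(userId) {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) % 100;
  return h;
}

function isEnabled(flag, userId) {
  // flag: { rolloutPercent } — enable for the first N% of buckets.
  return bucket(userId) < flag.rolloutPercent;
}

const newCheckout = { rolloutPercent: 10 }; // canary: 10% of traffic
console.log(isEnabled(newCheckout, 'user-42')); // false — this id hashes to bucket 44
```

Because the bucketing is deterministic, ramping `rolloutPercent` from 10 to 50 to 100 only ever adds users to the enabled set, which keeps the rollout (and any rollback) predictable.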
*— yelethe1st*
---

# Go lang app containerized in Docker with Chainguard image

*id 1880790 · published 2024-06-09T16:41:11 · tags: go, docker, chainguard, containers*

https://dev.to/bekbrace/go-lang-app-containerized-in-docker-with-chainguard-image-12bf
# Welcome to Books CLI: Securely Search Books Using Chainguard Images

Hey everyone, Amir here! 🎉 I'm excited to introduce my latest project, **Books CLI**. This command-line application, built in Go, allows you to search for books by your favorite authors directly from your terminal. What sets this tool apart is its use of **Chainguard Images**, which ensure that every search is not only fast but also secure.

## Why Chainguard Images?

Security is a top priority in today's software development landscape. Chainguard Images provide a secure, minimal base that is specifically designed to reduce vulnerabilities in Docker containers. This makes **Books CLI** not just powerful, but also a safer choice for developers.

## Key Features

- **Search by Author**: Simply enter an author's name to retrieve a list of their books.
- **Simple CLI**: Easy-to-use interface that requires minimal setup.
- **Immediate Results**: Get book results instantly without the need for a GUI.
- **Secure**: Runs in Docker containers powered by Chainguard Images for enhanced security.

## Getting Started

Make sure you have Go installed on your machine. If not, you can download it from [here](https://golang.org/dl/).

```bash
# Clone the repository
git clone https://github.com/yourusername/books-cli.git
cd books-cli

# Build the application using Docker and Chainguard Images
docker build -t books-cli .

# Run the application
docker run books-cli search "Author Name"
```

# Video Tutorial

For a more detailed guide, check out my video tutorial where I cover everything from setting up Go to using Books CLI with Chainguard Images. This is a great resource for those who prefer to learn visually.

{%youtube rx7TCPPgM10%}

# Example Usage

To search for books by J.K. Rowling:

`docker run books-cli search "J.K. Rowling"`

The output will display a list of books by J.K. Rowling, showcasing the tool's quick and secure functionality.

# Contribute

Contributions are welcome!
If you have ideas for new features or improvements, please fork the project and submit a pull request.

# License

Distributed under the MIT License. See LICENSE for more information.

# Contact

Feel free to reach out to me on Twitter - https://x.com/BekBrace

Thank you, and I'll see you next time.
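As a footnote for readers who want to try the build themselves: a minimal multi-stage Dockerfile along these lines could pair Chainguard's Go builder image with its near-empty `static` runtime base. The image tags, binary name, and paths below are assumptions for illustration, not taken from the actual repository.

```dockerfile
# Build stage: Chainguard's Go image provides a minimal, regularly
# rebuilt Go toolchain (assumed tag; check cgr.dev for current ones).
FROM cgr.dev/chainguard/go AS builder
WORKDIR /app
COPY . .
# Static build so the binary runs on a base with no libc expectations.
RUN CGO_ENABLED=0 go build -o books-cli .

# Runtime stage: chainguard/static is a near-empty base — no shell,
# no package manager — which shrinks the attack surface considerably.
FROM cgr.dev/chainguard/static
COPY --from=builder /app/books-cli /usr/bin/books-cli
ENTRYPOINT ["/usr/bin/books-cli"]
```

The two-stage split keeps the toolchain out of the shipped image, which is the main reason the final container stays both small and low-vulnerability.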
*— bekbrace*
---

# 12 React Best Practices Every Developer Should Know

*id 1882216 · published 2024-06-09T16:34:05 · tags: react, coding, programming, webdev*

https://dev.to/mayank_tamrkar/12-best-practices-every-developer-should-know-dle
## Boost Your React Skills with These 12 Best Practices

1. **Use Ternary Operator for Conditional Rendering:**

```jsx
import React from 'react';

const MyComponent = ({ isLoggedIn }) => {
  return (
    <div>
      {isLoggedIn ? <p>Welcome, User!</p> : <p>Please login.</p>}
    </div>
  );
};

export default MyComponent;
```

2. **Use JSX Spread Attributes for Props:**

Using spread attributes is generally preferred as it makes the code cleaner and more maintainable, but it's essential to ensure that only necessary props are passed down to avoid unnecessary re-renders or potential bugs.

**Example without spread attributes:**

```jsx
import React from 'react';

const MyComponentWithoutSpread = (props) => {
  return <ChildComponent prop1={props.prop1} prop2={props.prop2} />;
};

export default MyComponentWithoutSpread;
```

**Example with spread attributes:**

```jsx
import React from 'react';

const MyComponentWithSpread = (props) => {
  return <ChildComponent {...props} />;
};

export default MyComponentWithSpread;
```

3. **Use Fragment to Return Multiple Elements:**

When returning multiple elements, use Fragment or the shorthand syntax (`<>...</>`).

```jsx
import React from 'react';

const MyComponent = () => {
  return (
    <>
      <h1>Title</h1>
      <p>Content</p>
    </>
  );
};

export default MyComponent;
```

4. **Keep Component State Local:**

Keeping component state local ensures that each component manages its own state independently, for better encapsulation and easier maintenance.

```jsx
import React, { useState } from 'react';

const Parent = () => {
  return (
    <div>
      <Child />
      <Child />
    </div>
  );
};

const Child = () => {
  const [count, setCount] = useState(0);
  return (
    <div>
      <p>{count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
};

export default Parent;
```

5. **Use Memoization for Performance Optimization:**

Memoization optimizes performance by caching expensive function results, reducing unnecessary recalculations and re-renders for faster applications.
```jsx
import React, { useMemo } from 'react';

const ExpensiveComponent = ({ num }) => {
  const computedValue = useMemo(() => {
    // Expensive computation
    return num * 2;
  }, [num]);

  return <div>{computedValue}</div>;
};

export default React.memo(ExpensiveComponent);
```

6. **Use Custom Hooks to Encapsulate Logic:**

Custom hooks encapsulate and reuse stateful logic across components, improving code reuse, organization, testing, and readability — a best practice for maintainable and clean code.

```jsx
import { useState, useEffect } from 'react';

const useFetchData = (url) => {
  const [data, setData] = useState(null);
  const [loading, setLoading] = useState(true);

  useEffect(() => {
    fetch(url)
      .then((response) => response.json())
      .then((data) => {
        setData(data);
        setLoading(false);
      });
  }, [url]);

  return { data, loading };
};

const DataDisplay = ({ url }) => {
  const { data, loading } = useFetchData(url);

  if (loading) return <p>Loading...</p>;
  return <pre>{JSON.stringify(data, null, 2)}</pre>;
};

export default DataDisplay;
```

7. **Avoid Inline Functions in JSX:**

Avoid inline functions in JSX because they create new function instances on every render, which can hurt performance. Instead, define functions outside JSX to improve efficiency and prevent unnecessary re-renders.

```jsx
const Button = ({ onClick }) => (
  <button onClick={onClick}>Click me</button>
);

const App = () => {
  const handleClick = () => {
    console.log('Button clicked');
  };

  return <Button onClick={handleClick} />;
};

export default App;
```

8. **Passing Default Props:**

Passing default props ensures that your component has a fallback value if a prop is not provided. This prevents errors due to missing props and reduces the need for explicit checks or conditional rendering within the component.

```jsx
function Avatar({ person, size = 100 }) {
  // ...
}
```

9. **Use Component Composition:**

Component composition means building complex components by combining simpler, reusable components, for better maintainability and readability.

```jsx
const Header = ({ title }) => <h1>{title}</h1>;
const Content = ({ children }) => <div>{children}</div>;

const App = () => (
  <div>
    <Header title="Welcome to My App" />
    <Content>
      <p>This is the content of the app.</p>
    </Content>
  </div>
);

export default App;
```

10. **Use Cleanup Function in useEffect:**

A cleanup function in useEffect ensures proper resource cleanup, preventing memory leaks and keeping components tidy.

```jsx
import React, { useState, useEffect } from 'react';

const Timer = () => {
  const [count, setCount] = useState(0);

  useEffect(() => {
    const interval = setInterval(() => {
      setCount(count => count + 1);
    }, 1000);

    // Cleanup function
    return () => clearInterval(interval);
  }, []); // Empty dependency array for componentDidMount behavior

  return <div>Timer: {count}</div>;
};

export default Timer;
```

11. **Use Error Boundary:**

Error boundaries catch and handle rendering errors in React components, providing a better user experience, preventing crashes, and improving application robustness. Note that error boundaries must be implemented as class components (or via a library such as `react-error-boundary`), because there is no hook equivalent of `componentDidCatch`:

```jsx
import React from 'react';

const ErrorFallback = () => (
  <div>
    <h2>Something went wrong!</h2>
    <p>Please try again later.</p>
  </div>
);

class ErrorBoundary extends React.Component {
  state = { hasError: false };

  static getDerivedStateFromError() {
    // Switch to the fallback UI on the next render
    return { hasError: true };
  }

  componentDidCatch(error, errorInfo) {
    console.error('Error caught by Error Boundary:', error, errorInfo);
  }

  render() {
    return this.state.hasError ? <ErrorFallback /> : this.props.children;
  }
}

export default ErrorBoundary;
```

12. **Lazy Load Components:**

Lazy loading improves performance by loading components only when they are needed, reducing the initial load time of your application.
```jsx
import React, { Suspense, lazy } from 'react';

const LazyComponent = lazy(() => import('./LazyComponent'));

const App = () => {
  return (
    <div>
      <h1>My App</h1>
      <Suspense fallback={<div>Loading...</div>}>
        <LazyComponent />
      </Suspense>
    </div>
  );
};

export default App;
```

---
*— mayank_tamrkar*
---

# Searching with Umbraco Examine: Avoid these common filtering mistakes

*id 1882112 · published 2024-06-09T16:22:55 · tags: umbraco, lucene*

https://dev.to/jemayn/searching-with-umbraco-examine-avoid-these-common-filtering-mistakes-1oin
With Umbraco's Codegarden conference mere days away I am starting to feel the Codegarden spirit! And that has helped motivate me to blog a bit again. At the top of my blogpost ideas list is something interesting I found a few months back about how you can filter in Examine - Umbraco's API layer on top of Lucene.

# Setup

The starting point of this blogpost is an Umbraco 13 site with [The Starter Kit](https://marketplace.umbraco.com/package/umbraco.thestarterkit), and a simple search setup inspired by [this docs article](https://docs.umbraco.com/umbraco-cms/v/13.latest-lts/reference/searching/examine/quick-start).

The starter kit has a People section, where people have tags assigned in a property called "department". I've added my own "Person" with a nice AI generated image:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ityq9po63gxqfxz9r2tb.png)

Here is a quick overview of the starting code:

{% details PeopleController.cs %}

```csharp
using ExamineTesting.Models;
using ExamineTesting.Services;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.ViewEngines;
using Umbraco.Cms.Core.Models.PublishedContent;
using Umbraco.Cms.Core.PublishedCache;
using Umbraco.Cms.Core.Web;
using Umbraco.Cms.Web.Common.Controllers;

namespace ExamineTesting.Controllers;

public class PeopleController : RenderController
{
    private readonly IPublishedValueFallback _publishedValueFallback;
    private readonly ISearchService _searchService;
    private readonly ITagQuery _tagQuery;

    public PeopleController(ILogger<RenderController> logger, ICompositeViewEngine compositeViewEngine,
        IUmbracoContextAccessor umbracoContextAccessor, IPublishedValueFallback publishedValueFallback,
        ISearchService searchService, ITagQuery tagQuery)
        : base(logger, compositeViewEngine, umbracoContextAccessor)
    {
        _publishedValueFallback = publishedValueFallback;
        _searchService = searchService;
        _tagQuery = tagQuery;
    }

    public override IActionResult Index()
    {
        var tags = HttpContext.Request.Query["tags"];
        var allTags = _tagQuery.GetAllContentTags();
        var searchResults = _searchService.SearchContentByTag(tags);

        // Create the view model and pass it to the view
        SearchViewModel viewModel = new(CurrentPage!, _publishedValueFallback)
        {
            SearchResults = searchResults.results,
            Tags = allTags,
            Query = searchResults.query
        };

        return CurrentTemplate(viewModel);
    }
}
```

{% enddetails %}

{% details SearchService.cs %}

```csharp
using Examine;
using Microsoft.Extensions.Primitives;
using Umbraco.Cms.Core.Models.PublishedContent;
using Umbraco.Cms.Web.Common;

namespace ExamineTesting.Services;

public class SearchService : ISearchService
{
    private readonly IExamineManager _examineManager;
    private readonly UmbracoHelper _umbracoHelper;

    public SearchService(IExamineManager examineManager, UmbracoHelper umbracoHelper)
    {
        _examineManager = examineManager;
        _umbracoHelper = umbracoHelper;
    }

    public (IEnumerable<IPublishedContent> results, string query) SearchContentByTag(StringValues tags)
    {
        IEnumerable<string> ids = Array.Empty<string>();
        var queryString = string.Empty;

        if (_examineManager.TryGetIndex("ExternalIndex", out IIndex? index))
        {
            var q = index
                .Searcher
                .CreateQuery("content")
                .NodeTypeAlias("person");

            if (tags.Any())
            {
                q.And().Field("department", tags.FirstOrDefault());
            }

            ids = q
                .Execute()
                .Select(x => x.Id);

            queryString = q.ToString();
        }

        var results = new List<IPublishedContent>();
        foreach (var id in ids)
        {
            results.Add(_umbracoHelper.Content(id));
        }

        return (results, queryString);
    }
}
```

{% enddetails %}

{% details people.cshtml %}

```csharp
@using Microsoft.AspNetCore.Mvc.TagHelpers
@inherits Umbraco.Cms.Web.Common.Views.UmbracoViewPage<ExamineTesting.Models.SearchViewModel>
@{
    Layout = "master.cshtml";
}
@{
    void SocialLink(string content, string service)
    {
        if (!string.IsNullOrEmpty(content))
        {
            ; //semicolon needed otherwise <a> cannot be resolved
            <a class="employee-grid__item__contact-item" href="http://@(service).com/@content">@service</a>
        }
    }
}
@Html.Partial("~/Views/Partials/SectionHeader.cshtml")

<section class="section">
    <div class="container">
        <div>
            <span>@Model.Query</span>
            <form action="@Model.Url()" method="get">
                <label for="tags">Choose a tag:</label>
                <select name="tags" id="tags">
                    @foreach (var tag in Model.Tags)
                    {
                        <option value="@tag?.Text">@tag?.Text</option>
                    }
                </select>
                <button>Search</button>
            </form>
        </div>
        <div class="employee-grid">
            @foreach (Person person in Model.SearchResults)
            {
                <div class="employee-grid__item">
                    <div class="employee-grid__item__image" style="background-image: url('@person.Photo?.Url()')"></div>
                    <div class="employee-grid__item__details">
                        <h3 class="employee-grid__item__name">@person.Name</h3>
                        @if (!string.IsNullOrEmpty(person.Email))
                        {
                            <a href="mailto:@person.Email" class="employee-grid__item__email">@person.Email</a>
                        }
                        <div class="employee-grid__item__contact">
                            @{ SocialLink(person.FacebookUsername, "Facebook"); }
                            @{ SocialLink(person.TwitterUsername, "Twitter"); }
                            @{ SocialLink(person.LinkedInUsername, "LinkedIn"); }
                            @{ SocialLink(person.InstagramUsername, "Instagram"); }
                        </div>
                    </div>
                </div>
            }
        </div>
    </div>
</section>
```

{% enddetails %}

So at this point we have a dropdown with all available tags; we can choose one, click search, and it filters to that specific tag. (To help debugging I've also added the Lucene query string output):

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mf94adiy9rabacmp854m.png)

## The problem

Now, this is where I would often have stopped in the past. As the image above shows, it works: the query adds `+department:denmark` (the `+` in Lucene queries means AND), and it returns the only person in our set of people that has the tag "denmark".

However, what if I try to find the test person - King Arthur - who I added earlier, based on his tag "Fairytale kingdom":

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n974ie5rnbvw40i1frcb.png)

Suddenly a bunch of unexpected results are added, and while these are all great people - none of them have the "Fairytale kingdom" tag. We can also see how Examine apparently treats a query string with a space: it splits it in two and does an OR search: `+(department:fairytale department:kingdom)` (the space in Lucene means OR).

After a bit of digging in the data, it turns out that all of these extra people have the tag "United Kingdom", so they are treated as a hit based on the "kingdom" part.

### Handling multiword phrases in filters

The fix for this problem is actually quite easy - in our code we have this, which is what adds the tag filter:

```csharp
q.And().Field("department", tags.FirstOrDefault());
```

Now, without an IDE it may be a bit tough to understand what `tags` is - in this case the var `tags` is of the type `StringValues`, which is what we get back from the HTTP query. It allows you to e.g. have a query like this: `domain.com?q=searchterm&color=red&color=blue` - if you then tried to get the color query value, you would get a StringValues with 2 values: red and blue.
In our case we only ever pass along one tag, so adding a `.FirstOrDefault()` to it will return it as a string.

The Examine `.Field()` method has two versions: one that takes the query as a string, and one that takes it as an `IExamineValue`:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg14n8fuw5bqc6ps0zvs.png)

An `IExamineValue` is basically a search term with additional logic applied. So if we e.g. wanted to boost this specific part of our query's importance, we could add the boost extension, which turns our string into an IExamineValue: `tags.FirstOrDefault().Boost(10)`.

But there is another string extension that takes a search string and turns it into a "phrase match" string, where the string will basically need to be an exact match, otherwise it won't work. We can achieve this by changing it to `tags.FirstOrDefault().Escape()`.

A new search at this point shows the query now has it in quotes: `+department:"Fairytale Kingdom"`.

However, it also doesn't return King Arthur as it should:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0lxg9fdwnlxjqvr33ha1.png)

It does in the Examine dashboard in the backoffice though:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fccf8qzhs9w9d0tplj2o.png)

The problem is reported and discussed here: https://github.com/Shazwazza/Examine/issues/329

So rather than going into it, I will just ensure it works by making sure it is indexed and searched for in lowercase.

We add a quick TransformingIndexValues event where we take the tags values from the "department" property, lowercase them, and save them into a new "departmentLower" field:

```csharp
using Examine;
using Umbraco.Cms.Core.Events;
using Umbraco.Cms.Core.Notifications;
using Umbraco.Cms.Web.Common.PublishedModels;

namespace ExamineTesting.Notifications;

public class ExternalIndexTransformations : INotificationHandler<UmbracoApplicationStartedNotification>
{
    private readonly IExamineManager _examineManager;

    public ExternalIndexTransformations(IExamineManager examineManager)
    {
        _examineManager = examineManager;
    }

    public void Handle(UmbracoApplicationStartedNotification notification)
    {
        if (!_examineManager.TryGetIndex(Umbraco.Cms.Core.Constants.UmbracoIndexes.ExternalIndexName, out var index))
        {
            throw new InvalidOperationException(
                $"No index found by name {Umbraco.Cms.Core.Constants.UmbracoIndexes.ExternalIndexName}");
        }

        index.TransformingIndexValues += IndexOnTransformingIndexValues;
    }

    private void IndexOnTransformingIndexValues(object? sender, IndexingItemEventArgs e)
    {
        if (e.ValueSet.ItemType is not Person.ModelTypeAlias) return;

        var hasDepartment = e.ValueSet.Values.TryGetValue("department", out var values);
        if (!hasDepartment) return;

        var tagsLowerCase = values!.Select(x => x.ToString()?.ToLowerInvariant());
        var valuesDictionary = e.ValueSet.Values.ToDictionary(x => x.Key, x => x.Value.ToList());

        var newValues = new List<object>();
        foreach (var tag in tagsLowerCase)
        {
            if (!string.IsNullOrWhiteSpace(tag))
            {
                newValues.Add(tag);
            }
        }

        valuesDictionary.Add("departmentLower", newValues);
        e.SetValues(valuesDictionary.ToDictionary(x => x.Key, x => (IEnumerable<object>)x.Value));
    }
}
```

After a quick reindex we can see the new field:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wvxf1x3zqkgp4l5rpn3p.png)

Now we change our tags filter in the search service to look into that field and also lowercase the search term:

```csharp
q.And().Field("departmentLower", tags.FirstOrDefault()?.ToLowerInvariant().Escape());
```

Now we get the search result that we expected:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33jimkii3ezcoleid7oy.png)

Woohoo, now the filtering works just as expected!! 🎉 Or does it...
🤔

### Partial word matches

What if you were to add a dragon, and that dragon is tagged with "Fairytale":

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fah4g9vcr3yiuoyo6d7w.png)

The exact phrase "fairytale" matches the value "fairytale kingdom", so now we get extra results again:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sr94v58rmrepwipj8kbs.png)

In some cases that may be exactly what we want - however, in others it will be considered a wrong match. So how can we handle this case?

In my search for a solution I stumbled upon an old blogpost about Lucene phrase matching from 2012, so shout out to Mark Leighton Fisher, who describes an easy workaround in [this blogpost](https://blogs.perl.org/users/mark_leighton_fisher/2012/01/stupid-lucene-tricks-exact-match-starts-with-ends-with.html)!

Basically what he suggests is to wrap your indexed value and search term in a delimiter word - in his example he uses the word `lucenematch`. The reason this works is that we go from having the indexed values:

    fairytale
    fairytale kingdom

to

    lucenematch fairytale lucenematch
    lucenematch fairytale kingdom lucenematch

And while `fairytale` will match `fairytale kingdom`, `lucenematch fairytale lucenematch` will not match `lucenematch fairytale kingdom lucenematch`.

So I'll add a quick string extension to add the delimiter word:

```csharp
namespace ExamineTesting.Extensions;

public static class SearchExtensions
{
    private const string DelimiterWord = "lucenematch";

    public static string AddLuceneDelimiterWord(this string value)
    {
        return $"{DelimiterWord} {value} {DelimiterWord}";
    }
}
```

And in the TransformingIndexValues event I will ensure that we add it to the values before indexing:

```csharp
foreach (var tag in tagsLowerCase)
{
    if (!string.IsNullOrWhiteSpace(tag))
    {
        newValues.Add(tag.AddLuceneDelimiterWord());
    }
}
```

If I then reindex the ExternalIndex, I can see it is added to our departmentLower field:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/92evoa6h7r66dg2j7zn3.png)

Then we can add it to our search service as well:

```csharp
if (tags.Any())
{
    q.And().Field("departmentLower", tags.FirstOrDefault()?.AddLuceneDelimiterWord().ToLowerInvariant().Escape());
}
```

And when trying the search again we can see the query now adds the delimiter word, and the results are back to only the specific one we wanted:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iecib4evk4buc97lh2hv.png)

## Outro

_Note: [Joe](https://umbracocommunity.social/@joe) has written a similar blogpost that solves the same problem with a different approach and goes a bit more in-depth with explaining the underlying Examine/Lucene parts._
_Please check it out here: https://joe.gl/ombek/blog/tag-style-exact-matching-with-examine/_

Thanks for following along! Please let me know if this was helpful to you 🙂

Also feel free to reach out to me on Mastodon: https://umbracocommunity.social/@Jmayn
*— jemayn*
---

# AWSKRUG Community Chronicles: Insights from a Community Hero (2/2)

*id 1882206 · collection 27649 · published 2024-06-09T16:18:31*

https://dev.to/aws-heroes/awskrug-community-chronicles-insights-from-a-community-hero-22-2h0b
# Organizing Meetup Schedules

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0w8v973lya9qmmfypi9l.png)

Initially, meetups were primarily organized based on location (gudi, gangnam). However, as meetups emerged based on technology categories (serverless, container, architecture, etc.), multiple meetups began to occur simultaneously on the same day. To address this, organizers share the AWSKRUG Google Calendar to register schedules in advance and avoid overlapping meetup schedules. Despite these efforts, with over 20 active meetups currently, occasional schedule conflicts still occur. In such cases, schedules are adjusted to avoid overlapping events and minimize dilemmas for participants.

One day, a gudi meetup event was held with only four participants, including myself (no photo was taken at that time, so a photo of a meetup with six participants, including myself, is used as a replacement). Perhaps topics from other meetups held on the same day were more popular at that time. However, despite the small number of participants at the gudi event, I realized the advantage of its atmosphere: thanks to the small group, we could listen deeply to each other's work, specialized skills, and challenges, and quickly become closer. So now, even when the number of participants is small, there is no sense of urgency. Of course, there is still a desire for more people to participate.

# Participant Payment Confirmation

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/128xsbzq1zu29gp5zgfv.png)

Our meetup collects participation fees, mostly 5000 won (as of June 8, 2024, $3.62). Participation fees are collected to prevent no-shows and to purchase snacks. Each meetup has a treasurer responsible for managing participation fees, and each meetup has its own account.
When participation fees are received in the meetup account, the treasurer checks the payment status by comparing the names of the participants who paid the fee with those who applied for the meetup. However, sometimes there are situations where the participant who paid the fee does not match the meetup ID. In such cases, asking participants to enter their real names in the meetup registration form usually resolves the issue. If the payer is still not identified, DMs are sent to the meetup participants or inquiries are made in the Slack meetup channel. We have been operating in this way for seven years, but we have not yet found a perfect way to confirm 100% of participant payments. Therefore, we personally verify the payment status of participants until the day of the meetup. The inability to directly collect participation fees through meetup applications seems to be the main reason for this. # Participant Entry Registration (Optional) According to the security regulations of buildings like AWS Korea, meetup organizers must submit a visitor entry list to the building security team three days before the event. Meetup organizers provide the list of participants to the venue rental manager, who then submits the list to the building security team. After submitting the visitor entry list, meetup registrations are closed. If new participants wish to join after that, they are sent a rejection message saying, "Sorry, please apply for the next meetup without being late." Meetup organizers maintain this stance because they find it difficult to spend time and energy on this process as they are working in their respective companies. # Snacks ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fjjmm7571bcl71pqe4m0.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1j180j3ydx9nt59u7kz3.jpeg) The snacks we prepare have evolved over time. 
When I first joined the community, it was mainly one roll of kimbap and carbonated drinks. Later, we ordered pizza and divided it into 2-3 slices each, and now we prepare various snacks such as toast, sandwiches, and hamburgers. Due to price increases after the pandemic, the snack order often costs more than the 5000 won participation fee ($3.62 as of 06.08), putting upward pressure on meetup fees. Organizers are currently split between keeping the fee and raising it, and some meetups have already raised their fees to provide better-quality snacks. I'm curious whether snacks are prepared at meetups in other countries, so please leave a comment.

# Icebreaking & Networking

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0cmrzd3af6100v7vfb4r.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8alsaof5895odn0jf0c0.png)

My fellow organizers and I strive to give participants a positive experience of this user group: we want them to feel welcomed, gain a sense of belonging, find the courage to present their experiences in front of others, and become people who speak up first.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ezt2beg3guulnrq37eb6.png)

Like me when I first attended a meetup event, typical Korean participants feel awkward at events and are often embarrassed to showcase their skills to others. Therefore, we start the event by introducing the meetup and having each participant, starting with myself and the organizers, introduce themselves. When participants introduce themselves by talking about what they do and what technologies they are interested in, they gain the experience of speaking up in this space, and that experience increases the likelihood of them speaking up during Q&A sessions or networking time after the main presentation. This small first step is much easier than having to overcome the awkwardness and embarrassment of speaking up for the first time later on. 
During networking time, we make sure to have snacks available. We set up snacks at each table so that people can gather around and have conversations while eating. There was a time when the networking atmosphere was so good that the meetup ran past its end time. The next day, there was a complaint from a worker in an adjacent office who was working overtime, stating that the noise from the meetup continued past the scheduled end time. I feel sorry for that person who had to work overtime that day. Still, I consider the atmosphere of that event a success.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4bytnvzn9inmebg5p240.jpeg)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/196rmte97icjjja3hifc.jpeg)

Sometimes, after the meetup event ends, we arrange a time to go to a nearby pub for drinks and networking. Personally, I think networking outside the event venue is a much better opportunity for participants to get closer to each other, and the satisfaction of participants attending these sessions is much higher. However, from the perspective of the organizers preparing the event, it requires a lot of energy. There is no guarantee that everyone who attended the event will join the networking session, and if more than 20 people suddenly try to visit a pub, it often becomes cumbersome: many pubs may not have enough space, requiring us to visit several places to find a suitable one.

# Reflections as a User Group Leader

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ajmke7ryek0bkhv60j4h.jpeg)

The process of preparing meetup events is not easy, especially for those with little experience, and if even one participant feels uncomfortable at an event, it can weigh heavily on the organizers. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l6wo7bar9com20hpol1j.jpeg) However, after the event, receiving feedback from even one person saying that this meeting was very beneficial brings great satisfaction and joy. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g9do3899phmzeahuzd2n.jpeg) Furthermore, there is an opportunity to expand one's network by directly communicating with presenters and receiving technical hints, as well as getting to know passionate meetup organizers who contribute to the operation of the community and have a great passion for AWS technology. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/82ol0s42ojfqc720u60v.jpeg) Being recognized for my consistent activities as a user group leader and being selected as the first member of AWS Community Builder, invited to the 2022 Thailand APJ Summit to meet overseas Community Builders, and being selected as an AWS Community Hero and invited to re:Invent events are experiences that I am extremely grateful for. I consider it an honor to be part of a community where passionate people from all over the world can become friends, and I want to continue spreading the positive influence of the AWS community to many people in the future.
taeseong_park
1,881,674
Functional-programming tier list
📹 Hate reading articles? Check out the complementary video, which covers the same content. If you...
0
2024-06-09T16:18:04
https://dev.to/zelenya/functional-programming-tier-list-4acl
functional, haskell, scala, ocaml
📹 Hate reading articles? Check out [the complementary video](https://youtu.be/vhIQZ0px-Lc), which covers the same content. --- If you have ever wondered which functional programming language is better than others, which one to pick, or which to learn — here is my subjective tier list. > ⚠️ Note that I’m a bit biased towards statically typed languages. > > 💡 If you haven’t seen one of these, we go from D (bad/terrible) to S (the best/excellent). ## Haskell Let’s start with [Haskell](https://www.haskell.org/). If you can only take one functional language to the desert island, you should take Haskell because it offers so much. Sure, it’s not for everyone — it has a steep learning curve and can feel like throwing someone into the water to teach them how to swim — but it can be well worth it. Laziness — a major reason to use Haskell, also a major source of frustration. Also, Haskell provides great concurrency mechanisms and powerful type-level machinery with great type inference. All of this has been there and stable for more than 20 years. Nevertheless, Haskell keeps growing — there is always more stuff to learn and experiment with. Seems fair that Haskell is an **S-tier** fp language. ## Scala [Scala](https://www.scala-lang.org/), on the other hand, offers a gentler learning curve and you can dial up the functional programming at your own pace. You can’t go as “far” as Haskell, but you can go pretty far. I have a special place for Scala in my heart. I think Scala is an ideal fp language for Java developers and has [an excellent free course](https://www.coursera.org/learn/scala-functional-program-design) for functional programming beginners. Without doing any research, I’m pretty sure Scala is the most used fp language in production. Scala also deserves an **S-tier.** ## OCaml [OCaml](https://ocaml.org/) is another language that offers OOP/FP flexibility. OCaml 5 was a strong release, which not only brought multicore but also a lot of attention to the language. 
When people talk about OCaml, they often mention its powerful module system, performance, and strong developer productivity (e.g., the reversible debugger) — but I can’t speak to that because I haven’t used it in proper projects. I can talk about my favorite features: [polymorphic variants](https://ocaml.org/manual/5.2/polyvariant.html) and [effect handlers](https://ocaml.org/manual/5.2/effects.html). **Polymorphic variants** are my current preferred way of [dealing with errors](https://youtu.be/O3V_lc3oifs), and **effect handlers** seem to me like [the future of control flow](https://youtu.be/GaAe7zGq1zM). These are two features I wish every language had (or will have). Overall, I don’t have much experience with OCaml, but **S-tier** feels fair.

## PureScript

Going back to a language I have used: [PureScript](https://www.purescript.org/). I think it’s highly underrated. How do I put it? It’s like a tidier version of Haskell — in my opinion, it’s easier to learn and get into. It has solid interop with the JavaScript world — so you can slowly introduce it to your project and at the same time have access to the sea of js libraries. On top of that, PureScript offers great records and row polymorphism. *I know I keep bringing this up every time, but* any time I have to manipulate some data, especially json, I miss PureScript. Unquestionably **S-tier**.

## Elm

[Elm](https://elm-lang.org/) is a sibling of PureScript, and it’s even more focused. It’s focused both in the “features” it provides and in its beginner friendliness. I bring this up in [my video on values](https://youtu.be/co-Vg7M4yKw): it’s not for everyone, but it has a solid niche. Elm is another great way to tap into fp from the frontend direction. Because it has clear values and focus, it’s a solid **S-tier**.

## Roc

If I understand it correctly, [Roc](https://www.roc-lang.org/) is bringing Elm’s values and mindset to the backend. Or at least extending on those. 
It has an attractive approach to balancing (or navigating) between prototyping and reliability — they promise the flexibility of dynamically typed languages with a seamless switch to the other mode, where you actually handle both happy and error paths.

> 💡 I don’t have time to get into this, but if this sounds vague but exciting, I’d recommend looking into it (either watch [a full talk on this](https://www.youtube.com/watch?v=7R204VUlzGc) or actually [try it yourself](https://www.roc-lang.org/tutorial#tasks))

On top of that, I think their anonymous sum types look fun (together with all the open and closed records and unions). And I’m also curious where their ideas around different [platforms](https://www.roc-lang.org/platforms) are going to lead. Roc doesn’t have a “stable” release yet, but I say it’s an upcoming **S-tier**.

## Unison

[Unison](https://www.unison-lang.org/) is another new kid in town. And I think Unison can compete with Haskell in the number of mind-bending concepts. I don’t even know where to begin. Two of my favorite things: 1) [Everything is a function](https://youtu.be/F6S7NHnK1vA): deployment is done with a function call, calling another service is a function call, and accessing storage is a function call. 2) [Abilities](https://youtu.be/GaAe7zGq1zM) — Unison’s implementation of direct-style algebraic effects, similar to OCaml’s **effect handlers.**

```
safeDiv3 : '{IO, Exception, Store Text} Nat
safeDiv3 = do
  use Nat / == toText
  use Text ++
  a = !randomNat
  b = !randomNat
  Store.put (toText a ++ "/" ++ toText b)
  if b == 0
  then Exception.raise (Generic.failure "Oops. Zero" b)
  else a / b
```

Did I mention that they promise to eliminate yaml? **S-tier**.

## Gleam

[Gleam](https://gleam.run/) had a v1 release a couple of months ago. It’s a friendly language on top of the Erlang runtime. It’s a simple sales pitch, but quite a strong one. And they have the sweetest mascot. What other reasons do you need? 
**S-tier.**

## F-sharp

Once again speaking of interop, [F#](https://fsharp.org/) might be an obvious fp choice for those on .NET: either you’re already using C#, or you want to make games using popular engines. And I keep seeing people bring up [F# for Fun and Profit](https://fsharpforfunandprofit.com/) — it looks really fun. I haven’t touched F# — I’m a bit biased. So, **S-tier**.

## Takeaway

As you might have guessed, I’m pretty excited about functional programming. There are a lot of options, targeting different platforms, audiences, and values. Just pick your poison.

---

{% embed https://youtu.be/vhIQZ0px-Lc %}
zelenya
1,882,205
TronFc Cloud Mining Platform
TRONFC Cloud Mining Website Certification Platform✅ Sign up and get 38,000TRX 🎁 The minimum...
0
2024-06-09T16:17:09
https://dev.to/tronfcfun/tronfc-cloud-mining-platform-ikn
mining, usdt, tron
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g8l177vhq4snmqko421i.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/modhwtgy5w0a1euk2kcr.jpg) TRONFC Cloud Mining Website Certification Platform✅ Sign up and get 38,000TRX 🎁 The minimum deposit to activate the mining machine is 20TRX, and the minimum withdrawal amount is 1TRX. Daily income can be reinvested or withdrawn 💰 Created on 2022/11/24 👉 📊 Profit rate: 8% per day 👉 🍬Minimum deposit: 20 TRX 👉 🔴Minimum withdrawal amount is 1 TRX 👉 No withdrawal fee Basic investment After the basic account is recharged, it will automatically join the mining. After recharging, the income will be issued once every 24 hours. You need to enter the "income" page. Manually collect profits every day. (1️⃣) Deposit 20 TRX and get 1.6TRX income per day👉 (2️⃣) Deposit 50TRX and get 4TRX income per day👉 3️⃣ Deposit 100 TRX and you will get 8TRX income for life per day, the more you invest, the more income you will get 🌐Website link: copy the link and paste it into your browser👇👇 https://tronfc.cc/index.html#/register/178029
tronfcfun
1,882,203
Why You Should Try Low-Code/No-Code
Work smarter, not harder In a world where speed and flexibility are becoming essential in...
0
2024-06-09T16:16:06
https://dev.to/crossnetics/why-you-should-try-low-codeno-code-11kk
nocode, webdev, beginners, marketing
## Work smarter, not harder In a world where speed and flexibility are becoming essential in business, how can marketers, influencers, and entrepreneurs use the latest technology to amplify their campaigns? Have you ever wondered how much easier and more effective your marketing efforts can become using innovative Low-Code and No-Code platforms? In this article, we will discuss how these technologies are changing the rules of the marketing game by providing tools to implement marketing strategies quickly and effectively without in-depth technical knowledge. ## What are Low-Code and No-Code Platforms **"Low-code" and "No-Code"** have recently become increasingly popular in technology and business. But what do they mean? These tools are changing how we build websites and apps and automate tasks, making it easier and faster, even if you don't know how to code. ## Benefits of Use in Marketing **Low-code and No-Code platforms offer several benefits for marketing that can help you improve your strategy and speed up processes:** **Rapid Development and Testing:** These platforms allow you to develop marketing campaigns and tools quickly, saving you development time and allowing you to run tests quickly. **Simplifying Complex Tasks:** Complex tasks such as data analysis and marketing automation become accessible without deep technical knowledge. **Flexibility and Adaptability:** Platforms enable you to adapt to changes in marketing trends and audience needs quickly. **Cost Reduction:** Using Low-Code and No-Code platforms can reduce the cost of developing and maintaining marketing tools. **Improved Customer Engagement:** Automating and personalizing customer communications becomes more affordable and effective. These benefits make Low-Code and No-Code platforms a powerful tool in the arsenal of any marketer seeking efficiency and innovation. 
## Practical Application Examples Let's look at a few real-world examples of how Low-Code/No-Code platforms are being used in marketing that illustrate their potential and flexibility: **Email Marketing Automation:** Companies use these platforms to automate newsletters, segment audiences, and personalize messages to improve engagement and make campaigns more effective. **Creating Interactive Landings:** Quickly create and test landing pages with different designs and content to see what best attracts and retains visitors' attention. **Social Media Management:** Automate publications, track brand mentions, and analyze audience engagement through social media integration. **User Data Collection and Analysis:** Forms and surveys will collect customer data to help understand customer preferences and behavior. **Conducting Marketing Research:** Quickly gather and process feedback from customers to improve products and services. ## Case Study: Real-Life Examples of Low-Code/No-Code Use Cases In this section, we will look at actual case studies of companies that have successfully implemented Low-Code/No-Code platforms: **Work4Labs: Simplifying Data Processing** Work4Labs, a company that helps Fortune 500 businesses with social media and analytics, used the Integrate.io platform to simplify big data processing. This reduced the time and resources spent on complex programming and system support. **Bendigo Bank: Creating Customer-Centric Applications** Bendigo Bank, one of Australia's largest banks, used the Appian platform to create 25 customer-centric applications in just one and a half years, significantly reducing the cost and time required to develop similar applications manually. **North Carolina State University: Developing Applications for the Learning Process** North Carolina State University used the Mendix platform to build critical applications, including a lab management and course registration system, significantly improving administrative efficiency. 
**The Spur Group: Onboarding Process Automation** Consulting firm The Spur Group used low-code/no-code applications to simplify and accelerate onboarding processes for new hires, saving the company time and improving retention. **Apex Imaging: Optimising Spreadsheet Workflow** Apex Imaging, a company that offers rebranding services for companies such as Home Depot and Starbucks, used low-code solutions to increase spreadsheet efficiency, reducing time and streamlining processes. ## Step-by-Step Instructions for Low-Code/No-Code Implementation Implementing Low-Code/No-Code platforms into your marketing strategy can dramatically change how you approach your business. Here are actionable step-by-step instructions to get you started: **Step 1: Define Your Goals** Set Clear Goals: What do you want to improve with these platforms? It could be automation, increased campaign efficiency, or better customer interaction. **Step 2: Choose the Right Platform for You** Research and Choose: Analyze the platforms available. Look for those that offer the features you need and integrate with your current tools. **Step 3: Conduct a Test Project** Start Small: Implement a small project to evaluate the features and benefits of your chosen platform. **Step 4: Train Your Team** Develop Skills: Give your team access to training materials and resources. The more familiar they are with the tools, the more effective they will be in using them. **Step 5: Analyse and Optimise** Measure Results: Use analytics to measure project performance and adjust to optimize processes. **Step 6: Expand Usage** Scale Success: After a successful pilot project, use the platforms widely in different marketing activities. ## Future and Trends The future of marketing is inextricably linked to technological advances, and Low-Code/No-Code platforms play a vital role in this. 
Here are the trends and developments we can expect shortly: - Increased Integration and Connectivity: Platforms will offer even more opportunities to integrate with other tools and services, allowing for more complex and feature-rich marketing systems. - Increased Functionality: The future will bring more advanced features and functionality for users without technical skills, making platforms even more powerful and versatile. - Adaptation to Changing Trends: Platforms will quickly adapt to new marketing trends, offering solutions to the latest marketing challenges and opportunities. - The Democratization of Technologies: Low-code and No-Code will continue to make technology accessible to a wide range of users, fostering innovation and creativity in marketing. - Increased Efficiency and Cost Reduction: Using these platforms will continue to reduce the time and cost of developing and implementing marketing tools and campaigns. ## Conclusion: Time to Act Low-code and No-Code platforms open new horizons for marketers, influencers, and entrepreneurs by offering powerful yet affordable tools for implementing marketing ideas. These technologies demonstrate that creating effective digital marketing no longer requires deep technical expertise or huge budgets. We live in an era where flexibility, speed, and innovation are critical success factors. Incorporating Low-Code and No-Code platforms into your marketing strategy is a step towards building a more agile and adaptable business that can respond quickly to changes in market and audience needs. Now that you're aware of the features and benefits of these platforms, it's time to take action. Start small, experiment, and gradually integrate these tools into your processes. **Platforms like [Crossnetics](https://crossnetics.io/) and others can provide a great starting point, offering free trials to help you find the perfect fit for your needs. 
Your future in digital marketing promises to be bright and full of new opportunities.**
crossnetics
1,882,204
I spent the last 6 months building LiveAPI Proxy: Here are 10 HARD-EARNED Engineering Lessons you can use now
How LiveAPI Taught me some important Lessons in engineering I have been working on a...
0
2024-06-09T16:15:13
https://dev.to/hexmos/i-spent-the-last-6-months-building-liveapi-proxy-here-are-10-hard-earned-engineering-lessons-you-can-use-now-1kc6
webdev, proxy, api, apache2
## How LiveAPI Taught Me Some Important Lessons in Engineering

I have been working on a product named **LiveAPI**. Let me give you an idea of what this product does.

![](https://journal-wa6509js.s3.ap-south-1.amazonaws.com/a4de9a479329ec18346643e0123e766e31a30954947bba0288378b77b790174c.png)

The above API doc is a static one: users can't execute and change things by themselves. Static API docs like these often lose customer attention before the developers even try the APIs.

![](https://journal-wa6509js.s3.ap-south-1.amazonaws.com/5233b5551958c997b1fd23067bda5ed6b6a5d6692205a3e53dfa3d2b5096d688.png)

The above API doc uses LiveAPI: here, developers can execute the APIs instantly, right within their browser, so that developer attention can be captured within the first 30 seconds of their visit.

LiveAPI uses a WASM binary and a language core for executing the APIs. With these pieces already built, we started testing on some httpbin URLs, and everything seemed fine. But when we tried a GET request to www.google.com, it failed. We investigated further and found a **CORS** error. A CORS error blocks requests from one site to another, and that is a vital problem for us, because we are always requesting from one site (the API docs) to another site (the target API URL). So we thought about this issue for a while, and an idea popped up: **how about we use proxy servers?** This is a potential solution to the problem that would get us back up and running. Let's see how proxy servers can be a useful approach.

## Learning about Proxies: Engineering a Solution for CORS-Free Browser Requests

### What is a proxy server?

![alt text](https://journal-wa6509js.s3.ap-south-1.amazonaws.com/8b239a5a001991d26e69ece9c98f28651559027493bd900e19cf92bbc34eff1f.png)

Consider this example. Here you can see two people, Alice and Bob, with a proxy in the middle. Alice asks the proxy to forward a message to Bob, and Bob does the same. 
The proxy acts as the middleman here, passing information between the two people. This is how proxy servers work: **a proxy server acts as a middleman between a client and a server.** There are three parts: the client request, the proxy server, and the response.

**Client Request:** When you send a request to a website, the proxy server receives it first instead of the website.

**Proxy Server:** The proxy server then forwards your request to the actual website. It’s the middleman that handles the communication.

**Response:** The website responds to the proxy server, which then forwards the response back to you.

### How proxies help solve the CORS problem

The proxy server makes the request to the target API on behalf of our LiveAPI tool. Since CORS is enforced by the browser and not by servers, the proxy can fetch the target API freely; the browser only talks to the proxy, whose responses can carry permissive CORS headers, so the restriction is bypassed.

### Figuring out how to build a proxy server: the approach I took

Since we had an idea of what the solution looked like, we thought about which technologies to use. In our case, we already had an **apache2 server** up and running, and since Apache is a widely used server with a lot of module support, we felt it was the better choice for building our proxy server.

## Putting the Solution into Action: Building an Apache2 Proxy and Getting LiveAPI Working

Continue reading [the article](https://journal.hexmos.com/liveapi-engineering-lessons/)
rijultp
1,882,202
3D Tshirt Configurator With Three.js and Fabric.js
Hello Dev Community, I'm a non-code developer working on a pet project to create a 3D T-shirt...
0
2024-06-09T16:13:10
https://dev.to/apcliff/3d-tshirt-configurator-with-threejs-and-fabricjs-31j9
Hello Dev Community, I'm a non-code developer working on a pet project to create a 3D T-shirt configurator that allows users to customize a T-shirt model in real-time. The goal is to let users add text, images, and colors to a 3D T-shirt model, manipulate these elements, and interact with the 3D model itself. My target audience is primarily mobile users, so responsiveness and ease of use on smaller screens are critical. Current Implementation I've been using Three.js for rendering the 3D model and Fabric.js for 2D canvas interactions. However, I've encountered several issues that I haven't been able to resolve. Here’s what I have so far: 1. Loading the 3D Model: Successfully loading and displaying the 3D T-shirt model. 2. Fabric.js Canvas: Overlaid on top of the 3D model to add and manipulate text and images. 3. Basic Controls: Buttons for adding text and images, changing colors, rotating the model, and moving the model. Issues I'm Facing 1. Interaction with Added Elements: Users cannot edit, resize, or move added text and images around the 3D model. 2. Toolbar Usability: The toolbar is not user-friendly on mobile devices. 3. Model Positioning: The 3D model is not centered correctly and often appears misplaced. 4. Element Manipulation: Added elements (text, images) are not properly aligned with the 3D model and sometimes cover the entire model instead of specific areas. 5. Responsiveness: The overall configurator lacks responsiveness for mobile devices. Requirements I need help with the following: 1. Interactivity: Ensuring that users can interact with added elements (text, images) on the 3D model—move, resize, change colors, and edit text in real-time. 2. Mobile-Friendly Toolbar: Making the toolbar responsive and user-friendly for mobile devices, possibly with gesture controls. 3. Correct Model Centering: Ensuring the 3D model is centered and properly scaled on load. 4. 
Accurate Element Placement: Properly mapping text and images to specific areas of the 3D model without them covering the entire model. 5. Overall Responsiveness: Making sure the configurator is fully responsive and functional across different screen sizes.

Sample Code

Here is the current version of my code:

```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>3D T-shirt Configurator</title>
  <style>
    body { margin: 0; }
    #container { display: flex; flex-direction: column; height: 100vh; }
    #3d-view { flex: 1; }
    #toolbar { position: fixed; bottom: 0; width: 100%; background-color: #eee; display: flex; justify-content: center; padding: 10px; }
    canvas { display: block; }
  </style>
</head>
<body>
  <div id="container">
    <div id="3d-view"></div>
    <!-- Design surface for Fabric.js: initFabricJS() looks up this id, so the element must exist -->
    <canvas id="fabricCanvas"></canvas>
    <div id="toolbar">
      <!-- Toolbar buttons here -->
      <input type="file" id="uploadTexture" />
      <input type="color" id="colorPicker" />
      <button id="addText">Add Text</button>
      <button id="moveUp">Up</button>
      <button id="moveDown">Down</button>
      <button id="moveLeft">Left</button>
      <button id="moveRight">Right</button>
    </div>
  </div>

  <script src="https://cdn.jsdelivr.net/npm/three@0.138.3/build/three.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/three@0.138.3/examples/js/loaders/GLTFLoader.min.js"></script>
  <!-- OrbitControls is used below but was not loaded; path assumes the three r138 examples/js layout -->
  <script src="https://cdn.jsdelivr.net/npm/three@0.138.3/examples/js/controls/OrbitControls.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/fabric@4.5.0/dist/fabric.min.js"></script>
  <script>
    let scene, camera, renderer, model, fabricCanvas;

    function initThreeJS() {
      const container = document.getElementById('3d-view');
      scene = new THREE.Scene();
      camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 1000);
      renderer = new THREE.WebGLRenderer({ antialias: true });
      renderer.setSize(window.innerWidth, window.innerHeight * 0.8);
      container.appendChild(renderer.domElement);

      const light = new THREE.AmbientLight(0xffffff, 1);
      scene.add(light);

      const loader = new THREE.GLTFLoader();
      loader.load('path/to/your/model.gltf', function(gltf) {
        model = gltf.scene;
        scene.add(model);
        camera.position.z = 3;
      });

      const controls = new THREE.OrbitControls(camera, renderer.domElement);
      controls.enableZoom = true;
      controls.enableRotate = true;
      controls.enablePan = false;

      animate();
    }

    // animate() is called above but was missing; a minimal render loop:
    function animate() {
      requestAnimationFrame(animate);
      renderer.render(scene, camera);
    }

    function initFabricJS() {
      fabricCanvas = new fabric.Canvas('fabricCanvas', {
        width: window.innerWidth,
        height: window.innerHeight * 0.2,
        backgroundColor: 'transparent',
      });

      fabricCanvas.on('object:modified', generateTexture);
      fabricCanvas.on('object:added', generateTexture);
    }

    function generateTexture() {
      fabricCanvas.renderAll();
      fabricCanvas.getElement().toBlob(function(blob) {
        const texture = new THREE.TextureLoader().load(URL.createObjectURL(blob));
        texture.flipY = false;
        model.traverse(function(child) {
          if (child.isMesh) {
            child.material.map = texture;
            child.material.needsUpdate = true;
          }
        });
      });
    }

    document.getElementById('addText').addEventListener('click', function() {
      const text = new fabric.Text('Sample Text', { left: 100, top: 100, fill: 'black', fontSize: 24 });
      fabricCanvas.add(text);
    });

    document.getElementById('uploadTexture').addEventListener('change', function(event) {
      const file = event.target.files[0];
      const reader = new FileReader();
      reader.onload = function(e) {
        fabric.Image.fromURL(e.target.result, function(img) {
          img.scaleToWidth(200);
          fabricCanvas.add(img);
        });
      };
      reader.readAsDataURL(file);
    });

    document.getElementById('colorPicker').addEventListener('input', function(event) {
      const activeObject = fabricCanvas.getActiveObject();
      if (activeObject) {
        activeObject.set({ fill: event.target.value });
        fabricCanvas.renderAll();
        generateTexture();
      }
    });

    document.getElementById('moveUp').addEventListener('click', function() { model.position.y += 0.1; });
    document.getElementById('moveDown').addEventListener('click', function() { model.position.y -= 0.1; });
    document.getElementById('moveLeft').addEventListener('click', function() { model.position.x -= 0.1; });
    document.getElementById('moveRight').addEventListener('click', function() { model.position.x += 0.1; });

    window.addEventListener('resize', function() {
      camera.aspect = window.innerWidth / window.innerHeight;
      camera.updateProjectionMatrix();
      renderer.setSize(window.innerWidth, window.innerHeight * 0.8);
    });

    initThreeJS();
    initFabricJS();
  </script>
</body>
</html>
```

Features Needed

1. Editable Text: Users should be able to add, edit, resize, and move text around the 3D model.
2. Image Manipulation: Users should be able to upload images, resize, move, and apply them to specific areas of the 3D model.
3. Color Customization: Users should be able to change the color of the text and images applied to the 3D model.
4. Model Controls: Users should be able to move, rotate, and zoom in/out the 3D model.
5. Mobile Responsiveness: The entire configurator should be optimized for mobile use, with easy-to-use controls and a responsive design.

Request for Help

I would appreciate any guidance, code snippets, or resources that could help me achieve the desired functionality. Specifically, I'm looking for:

1. Improved User Interactivity: How can I enable users to interact with added elements (text, images) directly on the 3D model?
2. Mobile Optimization: Tips or best practices for making the toolbar and overall interface more mobile-friendly.
3. Accurate Texture Mapping: How can I ensure that the added text and images are correctly mapped to specific areas of the 3D model?

Thank you for taking the time to read this. Any help or pointers would be greatly appreciated!
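On the accurate texture mapping question: a commonly suggested approach (a sketch, not tested against this particular model) is to raycast from the click point with `THREE.Raycaster` and read the `uv` property of the intersection; the only subtle part is converting UV space (origin bottom-left, range 0–1) into Fabric canvas pixels (origin top-left). The coordinate math alone, independent of three.js:

```javascript
// Convert a UV coordinate (as returned on a THREE.Raycaster intersection)
// into pixel coordinates on the Fabric.js design canvas.
// UV origin is bottom-left; canvas origin is top-left, hence the y flip.
function uvToCanvasCoords(uv, canvasWidth, canvasHeight) {
  return {
    x: uv.x * canvasWidth,
    y: (1 - uv.y) * canvasHeight,
  };
}
```

With `intersects[0].uv` from `raycaster.intersectObject(model, true)`, these coordinates could be used as the `left`/`top` of a `fabric.Text` or image so the element appears at the clicked spot rather than covering the whole model. This assumes the model's UV layout covers the design canvas 1:1, which depends on how the GLTF was unwrapped.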
apcliff
1,882,201
Timeless June: The Animated Wall Clock in My Study Room
This is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration For the...
0
2024-06-09T16:08:10
https://dev.to/niketmishra/timeless-june-the-animated-wall-clock-in-my-study-room-2h2o
frontendchallenge, devchallenge, css, javascript
_This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._

## Inspiration

For the month of June, I drew inspiration from the long summer days and the importance of time in our daily lives. This led me to create an **_animated wall clock that hangs in my study room_**, symbolizing both productivity and the passage of time.

## Demo

{% codepen https://codepen.io/niketmishra/pen/YzbxONb %}

## Journey

Creating this animated wall clock was an exciting and educational experience. I started with a simple HTML structure to define the clock's face and hands. Using CSS, I styled the clock to give it a modern and clean look, employing transformations and animations for smooth movement. JavaScript was then integrated to ensure the clock hands move accurately in real-time. This project enhanced my skills in CSS animations and JavaScript integration, and it now serves as a functional and decorative piece in my study room.

MIT License

Copyright (c) 2024 Niket Kumar Mishra

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
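The real-time hand movement described in the journey boils down to converting the current time into rotation angles. A minimal sketch of that conversion (the `*-hand` element ids are illustrative, not taken from the actual pen):

```javascript
// Convert a time of day into rotation angles (in degrees) for the three hands.
function handAngles(hours, minutes, seconds) {
  return {
    second: seconds * 6,                      // 360° / 60 ticks
    minute: minutes * 6 + seconds * 0.1,      // drifts slightly as seconds pass
    hour: (hours % 12) * 30 + minutes * 0.5,  // 360° / 12, plus minute drift
  };
}

// In the browser, apply the angles via CSS transforms once per second.
function tick() {
  const now = new Date();
  const a = handAngles(now.getHours(), now.getMinutes(), now.getSeconds());
  for (const [hand, deg] of Object.entries(a)) {
    const el = document.getElementById(hand + '-hand');
    if (el) el.style.transform = `rotate(${deg}deg)`;
  }
}
if (typeof document !== 'undefined') setInterval(tick, 1000);
```

The fractional terms (`seconds * 0.1`, `minutes * 0.5`) are what make the minute and hour hands creep forward continuously instead of jumping once per minute or hour.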
niketmishra
1,881,906
AWSKRUG Community Chronicles: Insights from a Community Hero (1/2)
Currently, there are 24 meetups in the AWS Korea User Group(AWSKRUG). Most of the meetups are held...
27,649
2024-06-09T16:06:26
https://dev.to/aws-heroes/awskrug-community-chronicles-insights-from-a-community-hero-12-2c4h
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6x0efsruf7iuonnfmuq.png) Currently, there are 24 meetups in the AWS Korea User Group(AWSKRUG). Most of the meetups are held in Seoul, with some also in Pangyo, Gyeonggi-do, and Busan. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/el9x5srq8l6rrwwya7tn.png) Currently, I am running the gudi, frontend, and gametech meetups with several other organizers. All three meetups are held in Seoul, with gudi being specifically located in the Guro Digital Complex area. Starting in 2018, when I first began running meetups, I experienced preparing for the events alone, the atmosphere of the meetups when there were only four participants, and even complaints from employees working overtime during networking sessions. I want to share the community management methods and various experiences I have had so far. # Shy guy Suddenly Becoming a Meetup Organizer ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5qe0lslk0pazvyr7c3i.jpeg) In August 2017, shortly after I joined my current company, our COO suggested attending a meetup near the company in Guro Digital Complex. The experience of attending the first meetup felt like entering a new world. Active discussions, no hiding of shortcomings, an attitude of listening to others' opinions and seeking improvement, and a welcoming atmosphere even for first-time attendees were all impressive. After attending several meetups, many participants recognized and greeted me warmly. I felt welcomed and acknowledged as part of the community. I also hoped to quickly build my skills so I could present in front of these people someday. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hs790s3wilu0h5uy6rrq.jpeg) One day, [the organizer](https://www.facebook.com/pouu69) of the Guro Digital meetup next to our company left for another job. 
As he couldn't dedicate time to the community anymore, he announced the meetup's closure. Feeling sad about the potential end of the meetup, which provided great networking and skill-building opportunities just three minutes away from our office, I volunteered to take over to keep the community alive. Despite not having the personality to speak in public or any experience presenting at meetups at that time, I stepped up to lead it. Thus, I started operating the meetup with a sense of responsibility and pressure.

# Overview of Meetup Event Preparation

The process of opening a meetup event is as follows:

1. Find speakers
2. Announce the meetup
3. Confirm attendance fees
4. Choose snacks for the event
5. Arrive at the venue 30 minutes before the meetup to set up snacks
6. Check attendees' names upon entry

Typically, 1 to 10 people prepare for each meetup, handling roles such as venue reservation, microphone operation during events, assisting attendees, bringing snacks, and managing attendance fees. Initially, 1-2 people prepared meetup events, but as meetup culture matured, these roles became established. Some meetups still have one person handling everything. Currently, I am one of those single-handed organizers: I run the gudi meetup alone.

Running a meetup event alone is challenging. I usually reserve the venue at my workplace, announce the meetup, monitor attendance fees, order pizza on the event day, start and finish the event, dismiss attendees, and lock the office door afterward. Although I've simplified the process as much as possible, there's still a lot to manage.

# Meetup Event Content Preparation

Meetup content is typically organized in the following formats:

- Keynote
- Discussions
- Hands-on sessions

## Keynote

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/seoncurf44trh4ke39ui.jpeg)

The most common and manageable format is speaker-led sessions resembling lectures or seminars. A Q&A session after the keynote also satisfies many participants.

To add a personal experience: in the early days of the Guro Digital (#gudi) meetup, I felt pressured to hold monthly events regularly (some other organizers felt the same). Sometimes, there were no speakers for the next month's event. To maintain the meetup, I volunteered to prepare and present my experiences with AWS. This became my first AWS community presentation, fulfilling my aspiration since joining the meetup.

## Discussions

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dvrvkpjptf1fvyqll9kv.jpeg)

At my first meetup event, the topic was "Your thoughts on DevOps?" I shared a blank Google Slide for participants to edit and add their thoughts on DevOps. Each person briefly spoke about their thoughts, followed by a discussion. As meetup operations matured, we organized several discussions without speakers. We shared a Google Slide with a common topic for participants to discuss.

During the pandemic, we hosted an Amazon Chime online meetup with engaging topics like:

- Tools or furniture recommendations for improving work experiences
- Tips for staying productive while working remotely
- Motivation for study groups
- Job recruiting
- Development problem consultations

Participants shared spontaneous ideas and enjoyed the sessions. After the pandemic, we organized discussions on AWS challenges, advice for newcomers, and advice for new AWSKRUG community members. Before these events, I invited experienced AWS users and frequent community participants as mentors via Slack DM. They helped newcomers with technical questions, and I rewarded them with community hero credits. After the events, we received feedback about cost-saving, minimizing downtime during updates, RDS issues, and satisfying answers to participants' questions.
## Hands-On Sessions ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zmkcr0pkq9yv6c6k93tl.jpeg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/avddxsufgg6jypngiyeq.jpeg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ga55ortibvapfjvewm5r.jpeg) Before the pandemic, we held hands-on sessions on AWS basics that even beginners could follow for about four hours. We invited participants on weekends. Multiple community members passionately contributed to preparing these sessions, assisting participants during the events, and resolving unexpected situations. Despite the energy required, successfully completing these events brought immense satisfaction. ## Event Content Summary ![AWSKRUG's Github organization. there are some repositories](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yor3q7rgo20ndpg2x0ys.png) After each event, presentation slides or discussion history are organized and uploaded to the [AWSKRUG meetup GitHub repository](https://github.com/awskrug/). Additionally, some meetups record the presentations during the event or upload them to the [AWSKRUG YouTube](https://www.youtube.com/@awskrug) via live streaming. ... I think this content is too long for one page, so let's continue in the next post!
taeseong_park
1,882,200
AWS CloudWatch: The Gatekeeper for Your AWS Environment
Introduction Maintaining your infrastructure’s operational health and peak performance is...
0
2024-06-09T16:03:08
https://dev.to/subodh_bagde/aws-cloudwatch-the-gatekeeper-for-your-aws-environment-4257
aws, cloudwatch, cloudcomputing, devops
## Introduction Maintaining your infrastructure’s operational health and peak performance is crucial in the world of cloud computing. AWS CloudWatch provides robust monitoring, alerting, reporting, and logging features, acting as a gatekeeper for your AWS environment. This article explores AWS CloudWatch, its benefits, and a real-world example of setting up an alarm to track CPU usage. ## What is AWS CloudWatch? Amazon Web Services (AWS) offers a flexible monitoring and management solution called AWS CloudWatch. You may use it to trigger alarms, monitor and evaluate metrics, and get real-time insights about your AWS apps and resources. By serving as a central repository for all monitoring data, CloudWatch enables you to keep your infrastructure operating efficiently and in good condition. ## Advantages of AWS CloudWatch AWS CloudWatch is a vital tool for managing AWS environments because of its many important benefits, which include: - Monitoring: Continuously observes your AWS resources and applications to ensure they are functioning correctly. - Real-Time Metrics: Provides up-to-date data on resource utilization, enabling informed decision-making. - Alarms: Automatically notifies you when specific metrics exceed predefined thresholds, allowing for timely intervention. - Log Insights: Centralizes and manages logs, simplifying troubleshooting and application behavior monitoring. - Custom Metrics: Tracks specific metrics relevant to your application or business needs. - Cost Optimization: Monitors resource usage and sets up billing alarms to help manage and optimize AWS costs. ![AWS CloudWatch Diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqcij9audrgwv3b932sw.png) ## Creating an Alarm in CloudWatch Let’s explore a practical use case where we create an alarm in CloudWatch to notify us via email when the CPU utilization of an instance spikes to 50% or above. - Log in to the AWS Management Console and navigate to the CloudWatch service. 
- Click on “Alarms” in the left-hand menu and select “Create alarm.”
- Choose “Select metric” and pick “EC2” from the metric source.
- Under “Namespace,” select “AWS/EC2.”
- For “Metric Name,” choose “CPUUtilization.”
- Select the specific EC2 instance you want to monitor from the “Instance ID” dropdown. I have created a specific EC2 instance for this purpose called “cloud-watch-demo”.

![EC2 Instance](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i66wrw5tf9ihukrtmhuw.png)

- Select your EC2 instance and then click on the “Monitoring” tab to see a graphical analysis of your instance based on various parameters, such as CPU utilization.

![CPU Utilization under Monitoring tab](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kvq4thshbthnlot4cyry.png)

- Under “Statistic,” select “Average” to monitor the average CPU utilization over a period.
- In the “Period” field, enter the desired time window for averaging CPU usage (e.g., 5 minutes).
- For “Comparison operator,” choose “Greater than (>).”
- In the “Threshold” value, enter “50” to trigger the alarm when CPU utilization exceeds 50%.
- Leave the “Evaluation periods” set to “1” for the alarm to trigger if the average CPU utilization is above 50% for the chosen time period.
- Under “Alarm name,” enter a descriptive name for your alarm (e.g., “High CPU Utilization on [Instance ID]”).
- Now you need to configure the notification for the alarm. Click on “Add action” and choose “SNS topic.”
- If you haven’t already, create a new SNS topic or select an existing one where you want to receive notifications.
- Click “Next” and review the alarm configuration.
- Finally, click “Create alarm” to set up your CloudWatch alarm.
- Once the alarm is created you can click on it to see a detailed view.
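The same alarm can also be created non-interactively; a sketch using the AWS CLI's `put-metric-alarm`, mirroring the console values above (the SNS topic ARN and instance ID below are placeholders, not real resources):

```shell
# Create the CPU alarm from the command line. Substitute your own
# instance ID and SNS topic ARN before running.
aws cloudwatch put-metric-alarm \
  --alarm-name "High-CPU-cloud-watch-demo" \
  --namespace AWS/EC2 \
  --metric-name CPUUtilization \
  --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
  --statistic Average \
  --period 300 \
  --evaluation-periods 1 \
  --threshold 50 \
  --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:my-alarm-topic
```

The alarm's state can be inspected afterwards with `aws cloudwatch describe-alarms`.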
![Cloud Watch alarm](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dux4lpyzxkp0gnv2ymau.png)

- Click on the Metrics tab in AWS CloudWatch and then search for the metric name “CPUUtilization”. After this, select your EC2 instance to see the graph. (In my case, I’ve selected “cloud-watch-demo”.)

![CPU Utilization Metrics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzhoxvklissu155gr51n.png)

- Now, to check that the alarm works, I’ve used a Python program that generates CPU spikes, which will in turn affect the CPU utilization of our instance.

**Credits for the python script:- Abhishek Veeramalla**

```
import time

def simulate_cpu_spike(duration=30, cpu_percent=80):
    print(f"Simulating CPU spike at {cpu_percent}%...")
    start_time = time.time()

    # Calculate the number of iterations needed to achieve the desired CPU utilization
    target_percent = cpu_percent / 100
    total_iterations = int(target_percent * 5_000_000)  # Adjust the number as needed

    # Perform simple arithmetic operations to spike CPU utilization
    for _ in range(total_iterations):
        result = 0
        for i in range(1, 1001):
            result += i

    # Wait for the rest of the time interval
    elapsed_time = time.time() - start_time
    remaining_time = max(0, duration - elapsed_time)
    time.sleep(remaining_time)

    print("CPU spike simulation completed.")

if __name__ == '__main__':
    # Simulate a CPU spike for 30 seconds with 80% CPU utilization
    simulate_cpu_spike(duration=30, cpu_percent=80)
```

- Run this Python script on your EC2 instance. After this, you will have to wait 2–5 minutes, then check the email you provided for the SNS topic. You will see the results on your AWS CloudWatch alarm dashboard.
![CPU-Spike python script](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/io59p7dt2i1s9eqw51od.png) ![AWS SNS](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dofy7fih6r1mf4q4aupz.png) ![Alarm Mail](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d8eye7oxo9rs90ro3d05.png) ![Alarm Dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x2vjzgy4u7lho2gf2xul.png) ## Conclusion AWS CloudWatch is a crucial tool for monitoring and managing your AWS resources and applications. It offers real-time insights, automated alarms, centralized logging, and cost management features, enabling you to maintain the health and performance of your infrastructure. By setting up alarms, such as the one for monitoring CPU utilization, you can proactively address issues and ensure your applications run smoothly. Thank you for reading, and I hope you found this blog post helpful in your AWS journey!
subodh_bagde
1,882,197
Terabox video link saver
Terabox (formerly known as Dubox) is a cloud storage service that allows users to save, store, and...
0
2024-06-09T16:00:56
https://dev.to/teraboxdownloader/terabox-video-link-saver-4m6m
terabox, video, videosaver, teraboxlink
Terabox (formerly known as Dubox) is a cloud storage service that allows users to save, store, and manage their files online. If you want to save video links or videos from Terabox, follow these steps:

## Saving Video Links

1. **Log in to Terabox**: Open the Terabox website or app and log in to your account.
2. **Upload Video**: Click on the "Upload" button and select the video file you want to upload from your device.
3. **Get Shareable Link**: Once the video is uploaded, find the video file in your Terabox account. Right-click on the video file (or use the options menu) and select "Share" or "Get Link." [Terabox will generate a shareable link for your video](https://www.pokoreel.com/terabox). Copy this link.
4. **Save the Link**: You can save the link in a document, note-taking app, or any other place where you can easily access it later.

## Saving Videos from Terabox

1. **Download Video**: Navigate to the video you want to save from Terabox. Click on the download button (usually represented by a downward arrow) to download the video to your device.
2. **Save Locally**: Once downloaded, the video will be saved to your device’s default download location. Move the video file to your preferred directory for easy access.

## Managing Your Videos

- **Organize Files**: Create folders within Terabox to organize your videos and other files. Move your video files to specific folders for better management.
- **Backup Important Videos**: Consider keeping a backup of important videos either on another cloud storage service or an external hard drive.

## Additional Tips

- **Use the Terabox App**: The Terabox mobile app can be handy for uploading and accessing your videos on the go. Enable automatic backup for your photos and videos to ensure they are always saved in the cloud.
- **Check Storage Limits**: Be aware of your storage limits on Terabox. Free accounts have limited storage, while premium plans offer more space.

If you have specific questions or run into issues, Terabox’s help center or customer support can provide further assistance.
teraboxdownloader
1,882,196
Dynamically execute Tailwind CSS on multiple files with multiple outputs
Code snippet to dynamically execute Tailwind CSS in a given folder with multiple outputs for each file.
0
2024-06-09T16:00:50
https://dev.to/cbillowes/dynamically-execute-tailwind-css-on-multiple-files-with-multiple-outputs-38f2
---
title: Dynamically execute Tailwind CSS on multiple files with multiple outputs
published: true
description: Code snippet to dynamically execute Tailwind CSS in a given folder with multiple outputs for each file.
tags:
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-09 15:58 +0000
---

Finds all CSS files in a given directory, iterates through them, and executes the tailwindcss CLI with the Tailwind config, writing each file's output to a `dist/` dir.

```
find . -name "*.css" -exec npx tailwindcss -c tailwind.config.js -i {} -o dist/{} \;
```
cbillowes
1,882,194
Node.js vs. NestJS: A Comparative Analysis
In the world of web development, choosing the right framework or runtime can significantly impact the...
0
2024-06-09T15:58:36
https://dev.to/yelethe1st/nodejs-vs-nestjs-a-comparative-analysis-3f7o
webdev, javascript, node, nestjs
In the world of web development, choosing the right framework or runtime can significantly impact the efficiency, scalability, and maintainability of your application. Node.js, a popular JavaScript runtime, and NestJS, a progressive Node.js framework, are two powerful options that developers often consider. This article provides a comparative analysis of Node.js and NestJS, highlighting their features, use cases, and key differences to help you make an informed decision. ## Understanding Node.js Node.js is an open-source, cross-platform JavaScript runtime environment that executes JavaScript code outside of a web browser. It is built on Chrome's V8 JavaScript engine and uses an event-driven, non-blocking I/O model, making it ideal for building scalable network applications. ### Key Features of Node.js 1. **Asynchronous and Event-Driven**: Node.js uses a single-threaded event loop, which allows it to handle multiple connections concurrently. This non-blocking architecture is perfect for real-time applications like chat apps and online games. 2. **Fast Execution**: Powered by the V8 engine, Node.js compiles JavaScript to native machine code, providing high performance and speed. 3. **NPM Ecosystem**: Node.js has a rich ecosystem of libraries and modules available through the Node Package Manager (NPM), making it easy to add functionality to your applications. 4. **Cross-Platform**: Node.js can run on various operating systems, including Windows, macOS, and Linux. ### Common Use Cases for Node.js - Real-time applications (e.g., chat applications, online gaming) - API development and microservices - Single Page Applications (SPA) - Streaming services - Server-side scripting and command-line tools ## Understanding NestJS NestJS is a progressive Node.js framework designed to build efficient, reliable, and scalable server-side applications. 
It is built with TypeScript and leverages the powerful features of Node.js while providing an out-of-the-box application architecture. ### Key Features of NestJS 1. **Modular Architecture**: NestJS uses a modular architecture, allowing developers to organize their code into modules, controllers, and services. This promotes reusability and maintainability. 2. **TypeScript Support**: Built with TypeScript, NestJS offers strong typing, which helps in catching errors during the development phase and improves code quality. 3. **Dependency Injection**: NestJS has a powerful dependency injection system, making it easier to manage dependencies and create testable, maintainable code. 4. **Built-in Middleware**: NestJS provides a comprehensive set of built-in middleware, including support for routing, validation, and exception handling. 5. **Extensible**: NestJS can be easily extended with various plugins and supports a wide range of libraries, including those from the Express and Fastify ecosystems. ### Common Use Cases for NestJS - Enterprise applications - Complex server-side applications - API development with strong typing and modularity - Applications requiring a scalable and maintainable architecture ## Node.js vs. NestJS: Key Differences ### 1. **Architecture** - **Node.js**: Node.js provides a minimalistic approach, giving developers the flexibility to design their architecture. It requires developers to manually organize their code and integrate various libraries and tools as needed. - **NestJS**: NestJS offers a well-defined application architecture out of the box, including modules, controllers, and services. This structured approach reduces the time spent on setting up the project and ensures consistency across the codebase. ### 2. **Language and Typing** - **Node.js**: Node.js primarily uses JavaScript, but it also supports TypeScript. Developers need to set up TypeScript manually if they choose to use it. 
- **NestJS**: NestJS is built with TypeScript from the ground up, providing strong typing and modern JavaScript features by default. This leads to better code quality and easier debugging. ### 3. **Dependency Injection** - **Node.js**: Node.js does not provide a built-in dependency injection system. Developers need to use third-party libraries if they want to implement dependency injection. - **NestJS**: NestJS has a robust dependency injection system built-in, simplifying the management of dependencies and enhancing the modularity and testability of the application. ### 4. **Learning Curve** - **Node.js**: Node.js has a moderate learning curve, especially for developers familiar with JavaScript. However, designing and maintaining a large-scale application requires a deep understanding of best practices and architectural patterns. - **NestJS**: NestJS has a steeper learning curve due to its use of TypeScript and the additional concepts it introduces, such as modules and decorators. However, its comprehensive documentation and structured approach can help developers quickly become productive. ### 5. **Performance** - **Node.js**: Node.js is known for its high performance, especially in real-time applications and scenarios involving heavy I/O operations. - **NestJS**: NestJS builds on top of Node.js, so it inherits its performance characteristics. However, the additional abstraction layers may introduce slight overhead, which is usually negligible in most applications. ## When to Choose Node.js - You need a lightweight and flexible runtime for a small to medium-sized application. - You prefer using JavaScript without the additional setup of TypeScript. - Your project involves building real-time applications or command-line tools. - You want to leverage the vast NPM ecosystem for various libraries and tools. ## When to Choose NestJS - You are building a large-scale, enterprise-level application that requires a scalable and maintainable architecture. 
- You prefer using TypeScript for its strong typing and modern JavaScript features. - You want to benefit from a modular architecture and built-in dependency injection. - You need a framework with comprehensive documentation and a structured approach to development. ## Conclusion Both Node.js and NestJS are powerful tools for building server-side applications. Node.js offers flexibility and simplicity, making it suitable for a wide range of applications, especially those requiring real-time capabilities. NestJS, on the other hand, provides a more structured and opinionated framework, ideal for complex, large-scale applications that benefit from strong typing and a modular architecture. Ultimately, the choice between Node.js and NestJS depends on the specific requirements of your project, your team's familiarity with JavaScript and TypeScript, and your preferred development approach. By understanding the strengths and use cases of each, you can make an informed decision that aligns with your application's goals and your development workflow.
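To make the dependency-injection contrast discussed above concrete: NestJS resolves constructor dependencies automatically through decorators and its DI container, whereas in plain Node.js you wire them by hand. A minimal plain-TypeScript sketch of the underlying constructor-injection pattern (all class and method names here are invented for illustration, not NestJS APIs):

```typescript
// Callers depend on a contract, not a concrete class.
interface UserRepository {
  findName(id: number): string | undefined;
}

// One concrete implementation; a database-backed repository could be
// swapped in later without touching UserService.
class InMemoryUserRepository implements UserRepository {
  private users = new Map<number, string>([[1, "Ada"], [2, "Grace"]]);
  findName(id: number): string | undefined {
    return this.users.get(id);
  }
}

// The service receives its dependency through the constructor.
// NestJS automates exactly this wiring via @Injectable() and module providers.
class UserService {
  constructor(private readonly repo: UserRepository) {}
  greet(id: number): string {
    const name = this.repo.findName(id);
    return name ? `Hello, ${name}!` : "User not found";
  }
}

// Manual wiring, as you would do in plain Node.js:
const service = new UserService(new InMemoryUserRepository());
console.log(service.greet(1)); // → "Hello, Ada!"
```

Because `UserService` only sees the `UserRepository` interface, a test can pass in a stub repository — the testability benefit both frameworks' proponents cite, which NestJS simply makes the default.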
yelethe1st
1,882,192
Buy Verified Paxful Account
https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are...
0
2024-06-09T15:55:47
https://dev.to/gabrialmillse432/buy-verified-paxful-account-27d6
react, python, ai, devops
ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-paxful-account/\n![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/om2fro3d5zu7kwoi3t1v.png)\n\n\n\nBuy Verified Paxful Account\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, Buy verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to Buy Verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with. Buy Verified Paxful Account.\n\nBuy US verified paxful account from the best place dmhelpshop\nWhy we declared this website as the best place to buy US verified paxful account? Because, our company is established for providing the all account services in the USA (our main target) and even in the whole world. With this in mind we create paxful account and customize our accounts as professional with the real documents. Buy Verified Paxful Account.\n\nIf you want to buy US verified paxful account you should have to contact fast with us. Because our accounts are-\n\nEmail verified\nPhone number verified\nSelfie and KYC verified\nSSN (social security no.) 
verified\nTax ID and passport verified\nSometimes driving license verified\nMasterCard attached and verified\nUsed only genuine and real documents\n100% access of the account\nAll documents provided for customer security\nWhat is Verified Paxful Account?\nIn today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.\n\nIn light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.\n\nFor individuals and businesses alike, Buy verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions. Buy Verified Paxful Account.\n\nVerified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.\n\nBut what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function. Buy verified Paxful account.\n\n \n\nWhy should to Buy Verified Paxful Account?\nThere are several compelling reasons to consider purchasing a verified Paxful account. 
Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence. Buy Verified Paxful Account.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.\n\n \n\nWhat is a Paxful Account\nPaxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features. Buy Verified Paxful Account.\n\nIn line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old buy Verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.\n\n \n\nIs it safe to buy Paxful Verified Accounts?\nBuying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Buy verified Paxful account, you are automatically designated as a verified account. 
Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability. Buy Verified Paxful Account.\n\nPAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces. Buy Verified Paxful Account.\n\nThis brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.\n\n \n\nHow Do I Get 100% Real Verified Paxful Accoun?\nPaxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.\n\nHowever, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.\n\nIn this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.\n\nMoreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. 
Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.\n\nWhether you are new to Paxful or an experienced user, this engaging paragraph aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.\n\nBenefits Of Verified Paxful Accounts\nVerified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.\n\nVerification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly. Buy Verified Paxful Account.\n\nPaxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.\n\nPaxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. By leveraging Paxful’s escrow system, users can trade securely and confidently.\n\nWhat sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. 
With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience. Buy Verified Paxful Account.\n\n \n\nHow paxful ensure risk-free transaction and trading?\nEngage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxfu implement stringent identity and address verification measures to protect users from scammers and ensure credibility.\n\nWith verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users. Buy Verified Paxful Account.\n\nExperience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.\n\nIn the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains. Buy Verified Paxful Account.\n\nExamining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. 
Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.\n\n \n\nHow Old Paxful ensures a lot of Advantages?\n\nExplore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.\n\nBusinesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.\n\nExperience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.\n\nPaxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.\n\n \n\nWhy paxful keep the security measures at the top priority?\nIn today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.\n\nSafeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. 
Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.\n\nConclusion\nInvesting in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.\n\nThe initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.\n\nIn conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.\n\nMoreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.\n\n \n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 ‪(980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com"
gabrialmillse432
1,882,191
Glam Markup Beaches
This is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration My love...
0
2024-06-09T15:55:24
https://dev.to/thabangrammitlwa/glam-markup-beaches-3ln3
frontendchallenge, devchallenge, css
_This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._ ## Inspiration My love for summer and water ## Demo I created an under-the-sea feeling with the markup of a list of the top beaches in the world; as you click on each beach, it reveals information about that beach. Info about "Take me to the beach" is hidden at first and revealed when the tab is clicked. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k8i5wwc80glk8ywkhp24.png) https://glam-markup-beach.netlify.app/
thabangrammitlwa
1,882,190
CSS BORDERS
CSS Borders The CSS border properties allow you to specify the style, width, and color of...
0
2024-06-09T15:54:04
https://www.devwares.com/blog/css-borders/
webdev, beginners, css, programming
## CSS Borders The CSS border properties allow you to specify the style, width, and color of an element's border. ## Border Width The [border-width](https://www.devwares.com/tailwindcss/classes/tailwind-border-width/) property specifies the width of the four borders. The property can have from one to four values. ```css div { border-width: 10px; } ``` In this example, all four borders will be 10px wide. ## Border Style The [border-style](https://www.devwares.com/tailwindcss/classes/tailwind-border-style/) property specifies what kind of border to [display](https://www.devwares.com/tailwindcss/classes/tailwind-display/). The property can have from one to four values (solid, dotted, dashed, double, groove, ridge, inset, outset, none, hidden). ```css div { border-style: solid; } ``` In this example, all four borders will be solid. ## Border Color The [border-color](https://www.devwares.com/tailwindcss/classes/tailwind-border-color/) property sets the color of an element's four borders. The property can have from one to four values. Note that `border-color` has no visible effect unless a `border-style` is also set. ```css div { border-color: red; } ``` In this example, all four borders will be red. ## Border - Individual Sides You can set the border on individual sides of an element: ```css div { border-left: 6px solid red; border-right: 6px solid blue; } ``` In this example, the element will have a 6px solid red border on the left side and a 6px solid blue border on the right side. ## Shorthand Property: border The `border` property is a shorthand property for `border-width`, `border-style` (required), and `border-color`. ```css div { border: 5px solid red; } ``` In this example, all four borders will be 5px wide, solid, and red. ## Border Radius The [border-radius](https://www.devwares.com/tailwindcss/classes/tailwind-border-radius/) property is used to add rounded borders to an element: ```css div { border: 2px solid; border-radius: 25px; } ``` In this example, the element will have a 2px solid border with a 25px radius, making it rounded. 
## Border Collapse The [border-collapse](https://www.devwares.com/tailwindcss/classes/tailwind-border-collapse/) property is specifically used for table elements. It controls how table borders collapse into a single border when adjacent cells have borders: - collapse: Borders collapse into a single border. - separate: Borders remain separate (this is the default behavior). ```css table { border-collapse: collapse; /* Borders collapse into one */ } ``` ## Border Spacing The [border-spacing](https://www.devwares.com/tailwindcss/classes/tailwind-border-spacing/) property sets the space between adjacent cell borders in a table. It only takes effect when `border-collapse` is `separate`: ```css table { border-spacing: 10px; /* Space between cell borders */ } ```
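Putting the sections above together, a short stylesheet might look like this (the `.card`, `.merged`, and `.spaced` selectors are illustrative examples, not part of the original article):

```css
/* Illustrative selectors combining the border properties covered above */
.card {
  border: 5px solid red;        /* shorthand: width, style, color */
  border-radius: 25px;          /* rounded corners */
  border-left: 6px double blue; /* an individual side can override the shorthand */
}

table.merged {
  border-collapse: collapse;    /* adjacent cell borders merge into one */
}

table.spaced {
  border-collapse: separate;    /* keep cell borders apart... */
  border-spacing: 10px;         /* ...so the spacing between them applies */
}
```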
hypercode
1,877,693
Advice for Intermediate developers
Prologue I wrote this blog five years ago when I was a junior developer. The tips I shared...
0
2024-06-09T15:50:55
https://dev.to/rampa2510/advice-for-intermediate-developers-4777
software, community, developer, career
# Prologue I wrote [this blog](https://dev.to/rampa2510/3-tips-for-new-developers-49hj) five years ago when I was a junior developer. The tips I shared back then are still rules I follow today and have become an integral part of me. I've grown a lot as a developer, so now I want to give back to the community as an intermediate developer. The advice mentioned here is for people who love their craft and want to get better at it, not for the sake of better compensation but for the joy of programming. ## 1) Love your job I've seen people treat programming as just a job, doing it only for the money. They program to earn a living and go about their daily lives. This lifestyle is fine; it's your choice. But don't be surprised if your skills don't improve and you become stagnant. To become great at programming, you have to love your job. You spend most of your day programming at your day job, and if you don't love it, you won't take the initiative to improve your skills while working. I have a personal story to share. I once worked at a company I hated. I didn't take any initiative to improve the codebase or learn new things to enhance the application architecture. Now, I work at a job I love and treat it as my own product. This often leads me to learn new things and develop the codebase in a well-structured manner because I don't want to ruin it. If you do what you don't love, you'll do more harm than good. You can learn after work, but you would have wasted around six hours of your day and accomplished very little. ## 2) Be a generalist Never put yourself in a box. Don’t think of yourself as just a frontend developer or backend developer. Think of yourself as a software developer. Great developers don't limit themselves to specific technologies; they focus on solving problems, not just parts of a problem. If you limit yourself to a certain stack, you won't become a great problem solver. 
Software development is all about problem-solving, and if you don’t understand how to build an end-to-end product, you won’t be a good problem solver. At the start of your career, you might have to choose a specific stack to prove yourself as a great software developer. But don't let that limit you. If you work at a good company, talk with a senior or other developers to gain insights into different teams and learn new things. Start taking responsibility for other parts of your company's codebase to transition into a more full-stack developer role. This way, you'll start thinking more about solving whole problems rather than just parts. If you are not welcome to work with other stacks, I would recommend working at another job. A company should never limit the learning of its engineers. So, be a generalist. Don’t limit yourself to one part of the stack. Learn to solve problems as a software developer. Generalists find it easier to be good at solving specific problems because they can pick up new technologies faster since they already have a broad understanding. ## 3) Never stop learning new tech (Be a tinkerer) This is a crucial point that many developers overlook. To be a good problem solver, you must keep yourself up to date with the latest advancements in your technology. I find a lot of joy in my hobby projects, which help me develop many skills. When you tinker with new stuff, you learn a lot, and you never know when it will become useful. For example, imagine you've been tasked with creating a blogging application for your company. They want a custom solution, not something that uses Webflow and other similar services. If you've kept up with the latest advancements, you can use modern CMS tools like Supabase or Pocketbase to develop the backend quickly. It might take just 30 minutes to set up a CMS for your blogging site, saving you from creating and managing the database and backend code. 
Then you can focus on the frontend according to your company's needs. Here's a personal example: I’ve been learning Go for a month on the side. Recently, I had to write a cron job to update user metrics every 30 minutes. Knowing that Go is great and very fast for such tasks, I created the cron job in Go, built the binary, and scheduled a system daemon task with a timer for every 30 minutes. It works efficiently and consumes fewer resources. If I hadn't been tinkering in my spare time and had only written code at my day job, I wouldn’t have come up with the best solution in a reasonable time. The cron job would have been written in Node, which would take more time as the user base grows. So, never stop learning and creating on the side. The best way to learn is by creating and tinkering. I’ve been learning Ruby on Rails and Go on the side, and I’ve come to appreciate the different features that various ecosystems offer. This has helped me integrate new ideas into my workflow. ## 4) Take ownership I recently watched a [video](https://www.youtube.com/watch?v=5i_O6NLXYsM&t=1586s) by ThePrimeagen that inspired me to write this blog. He mentioned that the best way to solve a problem or become a great software developer is to take ownership of the product. He talked about how Doom was created by just four guys who delivered such a good product because they took ownership. They knew they had no one else to rely on, so they made it their responsibility to develop the best possible software. There was no Plan B. They never felt burnout or gave up because they owned the product, not just the tasks. To improve your skills as a software developer, you need to start taking ownership of the product you are building, not just features or tasks. You will find it much more enjoyable to work on a product when you see any feature or bug as your problem to solve, not just another task for someone else. This is the best way to beat burnout. 
When you take ownership, you will find joy in improving and making the product more efficient. If you are working on a product, you can't blame others for any bugs that users find. You are part of the problem if things go wrong, so you have to take ownership to fix them and make a great product. Good, scalable products are built by teams, and if you don't take ownership, you are not a good team member. When you take ownership, you write the best possible code to create the best possible software, not just another software product. The four guys who made Doom put in an insane amount of time to create something that was theirs, and they never settled for just another game; they created an era-defining game. The rest, as they say, is history. The same applies to you: if you want to make the best possible software, you have to start taking ownership and think of the product as your own. # Epilogue I feel good after writing this blog and sharing my thoughts with the community. We might argue about frameworks, languages, and tools, but these debates help us improve. They push technology forward, making our community very competitive. Let's keep the passion alive!
rampa2510
1,882,187
HIRE FAST SWIFT CYBER SERVICES TODAY
I, David , a businessman from Colorado Springs , am forever indebted to , a team of exceptionally...
0
2024-06-09T15:50:24
https://dev.to/my_office_1803aad4fece944/hire-fast-swift-cyber-services-today-2a6e
cryptocurrency
I, David , a businessman from Colorado Springs , am forever indebted to ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/llpapw7hl2ym2oovxcen.jpg), a team of exceptionally skilled and efficient hackers, for their remarkable work in recovering my cryptocurrency that was mistakenly sent to the wrong address. The value of the crypto amounted to 166,464.18 Euro, and I had nearly given up hope of ever retrieving it until a trusted friend suggested FAST SWIFT CYBER SERVICES to me. Prior to engaging FAST SWIFT CYBER SERVICES, I had tried numerous other recovery companies, all of which proved to be ineffective. However, my friend, who had faced a similar predicament, assured me that FAST SWIFT CYBER SERVICES were the real deal and would not disappoint. He had even heard success stories of others who had reclaimed their funds through their expertise. The process with FAST SWIFT CYBER SERVICES was surprisingly swift and seamless. Upon reaching out to them via email, I received a response from one of their representatives within the hour. They meticulously guided me through each step, providing detailed explanations and keeping me informed of their progress. To my astonishment, they successfully traced and recovered my cryptocurrency within a mere ten days! Throughout the recovery journey, FAST SWIFT CYBER SERVICES exhibited unwavering transparency and professionalism. They were readily available to address any inquiries or concerns I had, offering clear explanations whenever necessary. Their advanced technical abilities and profound understanding of the blockchain network were truly commendable. I am immensely grateful to FAST SWIFT CYBER SERVICES for their relentless dedication and for rescuing me from the brink of financial devastation. Not only did they retrieve my lost funds, but they also reinstated my faith in the positive use of hackers' skills. 
I wholeheartedly recommend FAST SWIFT CYBER SERVICES to anyone facing a similar situation, as I am confident they will deliver exceptional results. Undoubtedly, they are the best in their field. Email: fastswift@cyberservices.com Telephone: +1 303-945-3891 WhatsApp: +1 401 219-5530
my_office_1803aad4fece944
1,882,186
How to Use NgRx Selectors in Angular
In NgRx, when we want to get data from the store, the easiest way is by using store.select. It allows...
0
2024-06-09T15:48:42
https://www.danywalls.com/how-to-use-ngrx-selectors-in-angular
angular, ngrx, typescript, frontend
In [NgRx](https://ngrx.io/), when we want to get data from the store, the easiest way is by using `store.select`. It allows us to get any slice of the state. Yes, it sounds funny, but any slice returns an `Observable<any>`. For example: ![xcc](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/16k8ps1vb106ugpb7696.png) It is flexible but also risky: what happens if our `state` structure changes? The best way to fix this is by using `Selectors`. Let's play with them! ## **NgRx Selectors** Selectors help us get slices of the store state with type safety. They act like a mirror of our state, making it easy to use anywhere and allowing us to avoid repeating the same code in our application. NgRx provides two functions, `createFeatureSelector()` and `createSelector()`, to create selectors. The `createFeatureSelector` function allows us to define a type-safe slice of our state using our state definition. ```typescript export const selectHomeState = createFeatureSelector<HomeState>('home'); ``` The `createSelector` function takes the feature selector as the first parameter and a projector function that receives the feature state and picks the slice. ```typescript export const selectLoading = createSelector( selectHomeState, (homeState) => homeState.loading ) ``` We already know the NgRx selector functions, so let's use them and have some fun 🔥! ## Creating Selectors It's time to start using `createFeatureSelector` and `createSelector` in our project. We continue with the initial [NgRx project](https://www.danywalls.com/understanding-when-and-why-to-implement-ngrx-in-angular); clone it and switch to the `action-creators` branch. ```shell git clone https://github.com/danywalls/start-with-ngrx.git git switch action-creators ``` Open the project with your favorite editor, and create a new file `src/app/pages/about/state/home.selectors.ts`. 
Next, import the `createFeatureSelector` and `createSelector` functions. Use `createFeatureSelector` with the `HomeState` interface to create `selectHomeState`. ```typescript export const selectHomeState = createFeatureSelector<HomeState>('home'); ``` After that, use `selectHomeState` to create selectors for `players`, `loading`, and `acceptTerms`. ```typescript export const selectLoading = createSelector( selectHomeState, (homeState) => homeState.loading ) export const selectPlayers = createSelector( selectHomeState, (homeState) => homeState.players ) export const selectAcceptTerms = createSelector( selectHomeState, (homeState) => homeState.acceptTerms, ) ``` We can also compose selectors. For example, if we want to know when the `players` have data and the user has accepted the terms (`acceptTerms`), we can create `selectAllTaskDone`. This combines the `selectPlayers` and `selectAcceptTerms` selectors to check if all tasks are done. ```typescript export const selectAllTaskDone = createSelector( selectPlayers, selectAcceptTerms, (players, acceptTerms) => { return acceptTerms && players.length > 0; } ) ``` The final code in `home.selectors.ts` looks like this: ```typescript import {createFeatureSelector, createSelector} from "@ngrx/store"; import {HomeState} from "./home.state"; export const selectHomeState = createFeatureSelector<HomeState>('home'); export const selectLoading = createSelector( selectHomeState, (homeState) => homeState.loading ) export const selectPlayers = createSelector( selectHomeState, (homeState) => homeState.players ) export const selectAcceptTerms = createSelector( selectHomeState, (homeState) => homeState.acceptTerms, ) export const selectAllTaskDone = createSelector( selectPlayers, selectAcceptTerms, (players, acceptTerms) => { return acceptTerms && players.length > 0; } ) ``` Okay, with the selectors ready, it's time to refactor `home.component.ts` to use them. Import each selector from `home.selectors.ts`. 
> Note: Remove the `toSignal` function and use `store.selectSignal` to automatically transform the selectors' observables into signals. ```typescript public $loading = this._store.selectSignal(selectLoading); public $players = this._store.selectSignal(selectPlayers); public $acceptTerms = this._store.selectSignal(selectAcceptTerms); ``` Finally, create a new variable to use the composed selector `selectAllTaskDone`. ```typescript public $allTasksDone = this._store.selectSignal(selectAllTaskDone); ``` Update the `home.component.html` markup to use the `$allTasksDone` signal in the template. ```html @if (!$allTasksDone()) { Wait for the players and accept terms } @else { <h2>Everything done!🥳</h2> } ``` Save the changes, and everything will continue to work 😄. To test the composed selector: when `playersLoadSuccess` is triggered and you click on `acceptTerms`, the message "Everything done!" will be shown! ![xcc](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0iovjeepg5cox037jqq7.gif) ## Conclusion We learned how to use `selectors` to retrieve and manage state slices in NgRx instead of directly using `store.select`, getting the benefits of type-safe selectors with `createFeatureSelector` and `createSelector` and composing selectors. * **Source:** [feature/using-selectors](https://github.com/danywalls/start-with-ngrx/tree/feature/using-selectors) * [NgRx Selectors Guide](https://ngrx.io/guide/store/selectors) * [Understanding NgRx](https://www.danywalls.com/understanding-when-and-why-to-implement-ngrx-in-angular)
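One reason selectors are cheap to call everywhere is memoization: the projector only re-runs when an input selector returns a new reference. The following self-contained sketch is NOT NgRx's implementation — `createMemoSelector` and the state shape are made up for illustration — but it shows the core idea behind `createSelector`:

```typescript
// Minimal sketch of createSelector-style memoization (illustrative only).
type Selector<S, R> = (state: S) => R;

function createMemoSelector<S, A, B, R>(
  a: Selector<S, A>,
  b: Selector<S, B>,
  project: (a: A, b: B) => R,
): Selector<S, R> & { recomputations: () => number } {
  let lastA!: A;
  let lastB!: B;
  let lastResult!: R;
  let hasRun = false;
  let count = 0;

  const selector = (state: S): R => {
    const nextA = a(state);
    const nextB = b(state);
    // Reference-equality check: only re-project when an input changed.
    if (!hasRun || nextA !== lastA || nextB !== lastB) {
      lastResult = project(nextA, nextB);
      count += 1;
      hasRun = true;
      lastA = nextA;
      lastB = nextB;
    }
    return lastResult;
  };

  return Object.assign(selector, { recomputations: () => count });
}

// Hypothetical state shape mirroring the article's HomeState.
interface HomeState {
  players: string[];
  acceptTerms: boolean;
}

const selectPlayers = (s: HomeState) => s.players;
const selectAcceptTerms = (s: HomeState) => s.acceptTerms;

const selectAllTaskDone = createMemoSelector(
  selectPlayers,
  selectAcceptTerms,
  (players, acceptTerms) => acceptTerms && players.length > 0,
);

const state: HomeState = { players: ['dany'], acceptTerms: true };
console.log(selectAllTaskDone(state));           // true (projector runs)
console.log(selectAllTaskDone(state));           // true (cached, projector skipped)
console.log(selectAllTaskDone.recomputations()); // 1
```

This reference-based memoization is also why reducers must return new object references when state changes: without a new reference, selectors assume nothing changed and serve the cached value.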
danywalls
1,882,184
HTML tables, why we use and how to use?
HTML Tables: HTML tables are used to arrange the data into rows and columns. A table in...
0
2024-06-09T15:47:23
https://dev.to/wasifali/today-i-learned-about-html-tables-cc5
html, webdev, css, learning
## **HTML Tables:** HTML tables are used to arrange data into rows and columns. A table in HTML consists of table cells inside rows and columns. To create a table we will use different tags, where the main one is `<table></table>`. `<table>` defines the table. `<tr>` defines a row within the table. `<th>` defines header cells within the row. `<td>` defines data cells within the row. ## **Table cells:** Each table cell is defined by a `<td>` and a `</td>` tag. `td` stands for table data. ## **HTML Table border:** HTML tables can have borders of different styles and shapes. ## **How To Add a Border** To add a border, use the CSS `border` property on the `table`, `th`, and `td` elements: ## **Example** ```css table, th, td { border: 1px solid black; } ```
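Tying the tags above together, here is a minimal complete page; the cell data is invented purely for illustration:

```html
<!DOCTYPE html>
<html>
<head>
  <style>
    /* Bordered table as described above */
    table, th, td {
      border: 1px solid black;
    }
  </style>
</head>
<body>
  <table>
    <tr>            <!-- a row of header cells -->
      <th>Name</th>
      <th>Score</th>
    </tr>
    <tr>            <!-- a row of data cells -->
      <td>Ada</td>
      <td>95</td>
    </tr>
  </table>
</body>
</html>
```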
wasifali
1,882,183
Dulces Suenos Spanish Pop (Sample Packs)Download
** How to Download: Dulces Sueños Spanish Pop (Sample Packs) Elevate Your Music...
0
2024-06-09T15:44:59
https://dev.to/kala_plugins_7b320d218402/dulces-suenos-spanish-pop-sample-packsdownload-5d2f
music, production, vstplugins, download
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hbzrmnpj524ah4edpzy2.png) ** ## How to Download: Dulces Sueños Spanish Pop (Sample Packs) **Elevate Your Music Production with Authentic Latin Vocals .Unlock the essence of Latin pop with the **Dulces Sueños: Spanish Pop** sample pack, a meticulously crafted collection of all-female Spanish language toplines, phrases, and spoken word vocal samples. Produced in collaboration with a world-class Spanish-speaking vocalist, this pack captures the vibrant energy of Latin vocals dominating Billboard and worldwide charts. High-Quality Samples Inspired by Latin and American Pop Icons Drawing inspiration from renowned collaborators like DJ Snake and Selena Gomez, Shawn Mendez and Camilla Cabello, and Spanish pop sensations Rosalía, JLo, and Shakira, this sample pack bridges the gap between acoustic singer-songwriter pop and electronic dance music. Whether you're an acoustic artist or an EDM producer, **Dulces Sueños** provides the perfect blend of elements to enhance your tracks. Comprehensive Musical Elements for Diverse Production Needs In addition to the stunning vocal samples, **Dulces Sueños** offers a rich selection of accompanying guitar chord progressions, steel-stringed acoustic rhythms, synth plucks, and song-starters. Every sample is guaranteed original, written, and recorded in-house using a high-end all-analog chain with specialized vocal treatment and processing by producer Charlie McClean. Easy Download and Access Ready to infuse your music with authentic Latin flair? Download the **Dulces Sueños: Spanish Pop** sample pack now and start creating hit tracks that resonate with global audiences. Download Now: [Dulces Sueños: Spanish Pop Sample Packs](https://kalaplugins.com/dulces-suenos-spanish-pop-sample-packs/) Enhance your sound with **Dulces Sueños** – where Latin pop meets top-tier production quality. 
#DulcesSueños #SpanishPop #SamplePacks #MusicProduction #PopMusic #AudioSamples #SoundDesign #BeatMaking #HomeStudio #MusicCreation #Loops #MusicInspiration #ProducerTools #AudioEngineering #MusicLoops #ProductionTools #Songwriting #MusicSamples
kala_plugins_7b320d218402
1,882,181
LOST BITCOIN RECOVERY SERVICE DIGITAL HACK RECOVERY
Digital Hack Recovery has emerged as a leading force in the intricate landscape of Bitcoin recovery,...
0
2024-06-09T15:43:09
https://dev.to/liam_jones_9ae8cbbf5c29e7/lost-bitcoin-recovery-service-digital-hack-recovery-1ejo
Digital Hack Recovery has emerged as a leading force in the intricate landscape of Bitcoin recovery, offering invaluable assistance to individuals and companies grappling with the loss or theft of their digital assets. In an era where the adoption of virtual currencies like Bitcoin is on the rise, the need for reliable recovery services has never been more pressing. This review aims to delve into the multifaceted approach and remarkable efficacy of Digital Hack Recovery in navigating the challenges of Bitcoin recovery. At the heart of Digital Hack Recovery's methodology lies a meticulous and systematic procedure designed to uncover the intricacies of each loss scenario. The first step in their methodical approach is the detection of the loss and the acquisition of crucial evidence. Recognizing that every situation is unique, the team at Digital Hack Recovery invests considerable time and effort in comprehending the nature of the loss before devising a tailored recovery strategy. Whether the loss stems from a compromised exchange, a forgotten password, or a hacked account, they collaborate closely with clients to gather pertinent information, including account details, transaction histories, and any supporting documentation. This meticulous data collection forms the foundation for an all-encompassing recovery plan, ensuring that no stone is left unturned in the pursuit of lost bitcoins. What sets Digital Hack Recovery apart is not only its commitment to thoroughness but also its utilization of sophisticated tactics and cutting-edge technologies. With a wealth of experience and expertise at their disposal, the team employs state-of-the-art tools and techniques to expedite the recovery process without compromising on accuracy or reliability. By staying abreast of the latest developments in the field of cryptocurrency forensics, they can unravel complex cases and overcome seemingly insurmountable obstacles with ease. 
Moreover, their success is underscored by a portfolio of case studies that showcase their ability to deliver results consistently. It is not just their technical prowess that makes Digital Hack Recovery a standout player in the industry; it is also their unwavering commitment to client satisfaction. Throughout the recovery journey, clients can expect unparalleled support and guidance from a team of dedicated professionals who prioritize transparency, communication, and integrity. From the initial consultation to the final resolution, Digital Hack Recovery endeavors to provide a seamless and stress-free experience, ensuring that clients feel empowered and informed every step of the way. Digital Hack Recovery stands as a beacon of hope for those who have fallen victim to the perils of the digital age. With their unparalleled expertise, innovative approach, and unwavering dedication, they have cemented their reputation as the go-to destination for Bitcoin recovery services. Whether you find yourself grappling with a compromised exchange, a forgotten password, or a hacked account, you can trust Digital Hack Recovery to deliver results with efficiency and precision. With their help, lost bitcoins are not merely a thing of the past but a valuable asset waiting to be reclaimed. Talk to Digital Hack Recovery Team for any crypto recovery assistance via their Email; digitalhackrecovery @techie. com or visit their website; https:// digitalhackrecovery. com
liam_jones_9ae8cbbf5c29e7
1,882,178
Buy verified cash app account
https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash...
0
2024-06-09T15:42:33
https://dev.to/gabrialmillse432/buy-verified-cash-app-account-32ap
webdev, javascript, beginners, programming
https://dmhelpshop.com/product/buy-verified-cash-app-account/
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/618mgdjkjp5b5yy0pv0w.png)

Buy verified cash app account
Cash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.

Our commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.

Why dmhelpshop is the best place to buy USA cash app accounts?
It’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.

Clearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.

Our account verification process includes the submission of the following documents: [List of specific documents required for verification].

Genuine and activated email verified
Registered phone number (USA)
Selfie verified
SSN (social security number) verified
Driving license
BTC enable or not enable (BTC enable best)
100% replacement guaranteed
100% customer satisfaction
When it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.

Clearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.

Additionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.

How to use the Cash Card to make purchases?
To activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. How To Buy Verified Cash App Accounts.

After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.

Why we suggest to unchanged the Cash App account username?
Selecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.

Buy verified cash app accounts quickly and easily for all your financial needs.
As the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. How To Buy Verified Cash App Accounts.

For entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.

When it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.

This article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.

Is it safe to buy Cash App Verified Accounts?
Cash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.

Unfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts.

Cash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.

Leveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.

Why you need to buy verified Cash App accounts personal or business?
The Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.

To address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.

If you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.

Improper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.

A Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.

This accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.

How to verify Cash App accounts
To ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.

As part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.

How cash used for international transaction?
Experience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.

No matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.

Understanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.

As we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.

Offers and advantage to buy cash app accounts cheap?
With Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.

We deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.

Enhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.

Trustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.

How Customizable are the Payment Options on Cash App for Businesses?
Discover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.

Explore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.

Discover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.

Where To Buy Verified Cash App Accounts
When considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.

Equally important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.

The Importance Of Verified Cash App Accounts
In today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.

By acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.

Conclusion
Enhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.

Choose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.

Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com
gabrialmillse432
1,882,173
Cloud Scalability: The Key to Business Agility in a Changing World
The business world today is a whirlwind of change. New technologies emerge constantly, customer...
0
2024-06-09T15:34:07
https://dev.to/marufhossain/cloud-scalability-the-key-to-business-agility-in-a-changing-world-2iad
The business world today is a whirlwind of change. New technologies emerge constantly, customer expectations shift rapidly, and competition is fiercer than ever. To survive and thrive in this dynamic environment, businesses need to be agile. Agility means being adaptable, responsive, and innovative. It's about being able to quickly adjust to new opportunities and challenges. **Why Agility Matters** Think of agility like a race car compared to a clunky truck. The race car can zip through corners and navigate unexpected obstacles, while the truck struggles to keep up. Agile businesses are like the race car. They can react quickly to changing market trends, launch new products and services faster, and adapt their strategies on the fly. This agility translates to a significant competitive advantage: * **Faster time-to-market:** Get your ideas to market before the competition. * **Improved customer experience:** Respond to customer needs and preferences in real-time. * **Stronger competitive edge:** Stay ahead of the curve and disrupt the market yourself. **The Struggles of Traditional IT** Many businesses, however, are held back by their traditional IT infrastructure. On-premises servers are expensive to set up and maintain. Scaling them up or down to meet changing demands is a slow and cumbersome process. This inflexibility becomes a major roadblock to agility. Imagine a race car stuck with the truck's engine – it simply can't reach its full potential. **Cloud Scalability: The Engine of Agility** Cloud scalability is the game-changer for businesses seeking agility. Cloud computing offers a pool of virtual resources like storage, processing power, and databases. Businesses can access these resources on-demand, just like turning on a light switch. Here's how cloud scalability empowers businesses: * **Respond to changing demands:** Need more processing power for a busy season? The cloud can easily scale up your resources. Need to scale back after the rush? 
No problem, you only pay for what you use. * **Optimize costs:** No more massive upfront investments in hardware. You only pay for the resources you use, making cloud computing a cost-effective solution. * **Fuel innovation:** The cloud removes technical barriers to experimentation. Businesses can test new ideas and launch [SaaS developments](https://www.clickittech.com/saas/saas-development/?utm_source=backlinks&utm_medium=referral) quickly and cheaply. * **Streamline IT operations:** Cloud providers manage the infrastructure, freeing up your IT team to focus on core business functions. **Real-World Agility with Cloud** Take the example of a company called Acorn Fitness. They used to struggle to handle surges in traffic during peak membership signup times. Their on-premises servers would overload, leading to website crashes and frustrated customers. By migrating to the cloud, Acorn Fitness gained the scalability they needed. Now, they can automatically scale their resources up during peak times and down during slower periods. This ensures a seamless user experience and allows them to focus on growing their business. **Unlocking Your Agility** Moving to the cloud isn't a one-size-fits-all solution. Choosing the right cloud provider, developing a solid migration strategy, and ensuring data security are all crucial considerations. However, the potential benefits of cloud scalability are undeniable. By embracing the cloud, businesses can shed the limitations of traditional IT and become agile race cars, ready to navigate the ever-changing world of business.
marufhossain
1,871,634
JavaScript Essentials
Introduction JavaScript is a versatile programming language essential for adding...
27,559
2024-06-09T15:32:00
https://dev.to/suhaspalani/javascript-essentials-5e44
webdev, javascript, beginners, programming
#### Introduction JavaScript is a versatile programming language essential for adding interactivity to web pages. It is one of the core technologies of the web, alongside HTML and CSS. Learning JavaScript fundamentals is crucial for any web developer. #### JavaScript Basics **Data Types and Variables:** - **Primitive Data Types**: `string`, `number`, `boolean`, `null`, `undefined`, `symbol`, and `bigint`. - **Variables**: - **Declaring Variables**: Using `var`, `let`, and `const`. - **Variable Scope**: Understanding global and local scope. **Operators and Expressions:** - **Arithmetic Operators**: `+`, `-`, `*`, `/`, `%`, `**` (exponentiation). - **Assignment Operators**: `=`, `+=`, `-=`, `*=`, `/=`, `%=`. - **Comparison Operators**: `==`, `===`, `!=`, `!==`, `>`, `<`, `>=`, `<=`. - **Logical Operators**: `&&`, `||`, `!`. - **String Operators**: Concatenation using `+`. #### Control Flow **Conditional Statements:** - **If Statements**: Basic if-else syntax. - **Else If Statements**: Handling multiple conditions. - **Switch Statements**: Alternative to multiple if-else statements for comparing a single variable against multiple values. **Loops:** - **For Loop**: Iterating with a counter. - **While Loop**: Looping based on a condition. - **Do-While Loop**: Looping at least once before checking the condition. #### Functions **Defining and Invoking Functions:** - **Function Declarations**: Using the `function` keyword. - **Function Expressions**: Assigning functions to variables. - **Arrow Functions**: Concise syntax introduced in ES6. **Parameters and Arguments:** - **Default Parameters**: Setting default values for function parameters. - **Rest Parameters**: Using `...` to handle an indefinite number of arguments. #### Objects and Arrays **Introduction to Objects:** - **Creating Objects**: Using object literals and the `new Object()` syntax. - **Accessing Properties**: Dot notation and bracket notation. - **Methods**: Functions defined within objects. 
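The object basics above can be sketched in a few lines; the `user` object and its values are invented purely for illustration:

```typescript
// A small object with properties and a method (illustrative names only).
const user = {
  name: 'Ada',
  greet(): string {
    // `this` refers to the object the method was called on
    return `Hello, ${this.name}!`;
  },
};

console.log(user.name);     // dot notation
console.log(user['name']);  // bracket notation
console.log(user.greet());  // method call
```

Methods are just function-valued properties; inside `greet`, `this` points at `user` because the function is called as `user.greet()`.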
**Introduction to Arrays:** - **Creating Arrays**: Using array literals and the `new Array()` syntax. - **Accessing Elements**: Index-based access. - **Common Methods**: `push()`, `pop()`, `shift()`, `unshift()`, `forEach()`, `map()`, `filter()`, `reduce()`. #### Event Handling **Basics of Event Handling:** - **Adding Event Listeners**: Using `addEventListener()`. - **Common Events**: `click`, `mouseover`, `keydown`, `submit`. **Event Handling Examples:** - **Button Click**: Displaying an alert on button click. - **Form Submission**: Validating form input before submission. #### Conclusion Understanding JavaScript fundamentals is key to creating dynamic and interactive web applications. Mastering these basics will provide a strong foundation for more advanced JavaScript topics and frameworks. #### Resources for Further Learning - **Online Courses**: Platforms like Codecademy, Udemy, and freeCodeCamp offer interactive JavaScript courses. - **Books**: "Eloquent JavaScript" by Marijn Haverbeke, "JavaScript: The Good Parts" by Douglas Crockford. - **Documentation and References**: MDN Web Docs (Mozilla Developer Network) provides comprehensive documentation and examples for JavaScript. - **Communities**: Engage with developer communities on platforms like Stack Overflow, GitHub, and Reddit for support and networking.
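As a self-contained recap, the array methods and function parameter features listed above can be tried directly; all names and values here are invented for illustration:

```typescript
const nums = [1, 2, 3, 4];

// Common array methods from the list above
const doubled = nums.map((n) => n * 2);          // transform each element
const evens = nums.filter((n) => n % 2 === 0);   // keep matching elements
const sum = nums.reduce((acc, n) => acc + n, 0); // fold into a single value

// Default and rest parameters from the Functions section
function tally(label: string = 'total', ...values: number[]): string {
  return `${label}: ${values.reduce((a, v) => a + v, 0)}`;
}

console.log(doubled, evens, sum);
console.log(tally('score', 1, 2, 3));
```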
suhaspalani
1,882,170
How to Handle Side Effects in Angular Using NgRx Effects
Side-effects! They are one of the most common tasks in our applications. In Angular, but build...
0
2024-06-09T15:31:28
https://www.danywalls.com/how-to-handle-side-effects-in-angular-using-ngrx-effects
angular, ngrx, frontend, typescript
Side effects! They are one of the most common tasks in our applications. In Angular, if we are not careful when building an application, the component ends up with too many responsibilities: fetching, processing, and rendering the data. Most of the time, when we need to get data from an API, instead of putting all the HTTP logic in the component, we create services that hold that logic; but our components still need to use those services and subscribe to them. When we use [NgRx](https://ngrx.io/), the main idea is for components to trigger actions. These actions then cause the reducer to make the necessary changes in the state, and the component reads the updated data through selectors. But how can we handle side effects? For example, who starts an HTTP request, gets the data, and triggers the action with the result? Who is responsible for fetching the data, processing it, and updating the state? Let's look at a scenario: I need to show a list of players from my state, and the players come from an API. We have two actions to start this process: `Players Load` and `Player Loaded Success`.

```typescript
export const HomePageActions = createActionGroup({
  source: 'Home Page',
  events: {
    'Accept Terms': emptyProps(),
    'Reject Terms': emptyProps(),
    'Players Load': emptyProps(),
    'Player Loaded Success': props<{ players: Array<any> }>(),
  },
});
```

To keep a separation of concerns, we create `players.service.ts` with the responsibility of getting the data. 
```typescript
import { inject, Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { delay, map } from 'rxjs';
import { environment } from '../../environments/environment';
import { Player } from '../entities/player';

@Injectable({ providedIn: 'root' })
export class PlayersService {
  private _http = inject(HttpClient);

  public getPlayers() {
    return this._http
      .get<{ data: Array<Player> }>(`${environment.apiUrl}/players`, {
        headers: {
          Authorization: `${environment.token}`,
        },
      })
      .pipe(
        map((response) => response.data),
        delay(5000),
      );
  }
}
```

However, I can't change the `state` in the reducer, because the reducer is a pure function: I can't run async tasks or dispatch actions from there. Without effects, the only place left is the component itself, which must dispatch the actions when the data arrives. Open `home.component.ts`: we inject the `PlayersService`, and in the `ngOnInit` lifecycle hook we dispatch the `HomePageActions.playersLoad()` action to set the loading flag to true, then subscribe to `this._playersService.getPlayers()`; once the data arrives, we dispatch the `playerLoadedSuccess` action with the response. The code looks like:

```typescript
export class HomeComponent implements OnInit {
  private _store = inject(Store);
  private _playersService = inject(PlayersService);
  public $loading = this._store.selectSignal(selectLoading);
  public $players = this._store.selectSignal(selectPlayers);
  /** other properties removed to keep it simple. **/

  public ngOnInit(): void {
    this._store.dispatch(HomePageActions.playersLoad());
    this._playersService.getPlayers().subscribe((players) => {
      this._store.dispatch(HomePageActions.playerLoadedSuccess({ players }));
    });
  }
}
```

The previous code works, but why does the `home.component` have to `subscribe` to the service and also dispatch the action when the data arrives? 
Why does the `home.component` need to know who is responsible for loading the data? The home component only needs to trigger actions and react to state changes. This is where [NgRx Effects](https://ngrx.io/guide/effects) are useful. They take actions, perform the necessary tasks, and dispatch other actions.

## **The Effects**

What is an effect? It is a class, like a service, with the `@Injectable` decorator and the `Actions` service injected. The [`Actions`](https://ngrx.io/api/effects/Actions) service lets us listen to *each* action dispatched, *after* the reducer has run.

```typescript
@Injectable()
export class HomeEffects {
  private _actions = inject(Actions);
}
```

We declare a field using the [`createEffect` function](https://ngrx.io/api/effects/createEffect); any action returned from the effect stream is then dispatched back to the [`Store`](https://ngrx.io/api/store/Store). The actions are filtered using the [`ofType`](https://ngrx.io/api/effects/ofType) operator, which takes one or more action types. The matching actions are then flattened and mapped into a new observable using a higher-order operator like `concatMap`, `exhaustMap`, `switchMap`, or `mergeMap`.

```typescript
loadPlayers = createEffect(() =>
  this._actions.pipe(
    ofType(HomePageActions.playersLoad)
  ));
```

Since version 15.2 we also have [functional effects](https://ngrx.io/guide/effects): instead of using a class, we use the same `createEffect` function to create the effects.

```typescript
export const loadPlayersEffect = createEffect(
  (actions$ = inject(Actions)) => {
  });
```

But how does it work? Well, the component triggers the load action, then the effect listens for this action.
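Before wiring this up with NgRx, the dataflow can be shown as a minimal, framework-free sketch (hypothetical names, and Promises instead of observables): filter for one action type, do the async work, and emit a follow-up action.

```typescript
// Framework-free sketch of the effect dataflow (hypothetical names):
// action in -> filter by type -> async work -> result action out.
type Action = { type: string; players?: string[]; message?: string };

async function loadPlayersEffect(
  action: Action,
  getPlayers: () => Promise<string[]>,
): Promise<Action | null> {
  // ofType: ignore every action except the one we care about.
  if (action.type !== '[Home Page] Players Load') return null;
  try {
    // concatMap + service call: run the async work.
    const players = await getPlayers();
    // map: turn the result into a success action.
    return { type: '[Home Page] Player Loaded Success', players };
  } catch (error) {
    // catchError: turn the failure into a failure action.
    return { type: '[Home Page] Player Load Failure', message: String(error) };
  }
}
```

The real effect does the same thing, except the "loop" over incoming actions and the re-dispatching of the returned action are handled by the NgRx Effects runtime and RxJS.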
Next, we inject the service to get the data and trigger an action with the data. The reducer then listens for this action and makes the change. Does it seem like too many steps? Let me show you how to refactor our code to use Effects!

## **Moving To Effects**

It's time to start using `effects` in our project. We continue with the initial [**NgRx project**](https://www.danywalls.com/understanding-when-and-why-to-implement-ngrx-in-angular): clone it and switch to the `feature/using-selectors` branch.

```bash
git clone https://github.com/danywalls/start-with-ngrx.git
git switch feature/using-selectors
```

Next, install the `@ngrx/effects` package from the terminal.

```bash
npm i @ngrx/effects
```

Next, open the project with your favorite editor and create a new file, `src/app/pages/home/state/home.effects.ts`. Declare a `loadPlayersEffect` using the `createEffect` function, inject `Actions` and `PlayersService`, and then pipe the actions.

```typescript
import { inject } from '@angular/core';
import { Actions, createEffect, ofType } from '@ngrx/effects';
import { PlayersService } from '../../../services/players.service';

export const loadPlayersEffect = createEffect(
  (actions$ = inject(Actions), playersService = inject(PlayersService)) => {
    return actions$.pipe();
  },
);
```

Use `ofType` to filter the actions by the `HomePageActions.playersLoad` action type.

```typescript
return actions$.pipe(
  ofType(HomePageActions.playersLoad),
);
```

Use the `concatMap` operator to switch from the action stream to the `playersService.getPlayers()` call, and use `map` to dispatch `HomePageActions.playerLoadedSuccess({ players })`.
```typescript
concatMap(() =>
  playersService
    .getPlayers()
    .pipe(
      map((players) => HomePageActions.playerLoadedSuccess({ players })),
    ),
),
```

After the `map`, handle errors using the `catchError` operator. Use the `of` function to transform the error into a `HomePageActions.playerLoadFailure` action that carries the error message.

```typescript
catchError((error: { message: string }) =>
  of(HomePageActions.playerLoadFailure({ message: error.message })),
),
```

The final code looks like:

```typescript
import { inject } from '@angular/core';
import { Actions, createEffect, ofType } from '@ngrx/effects';
import { PlayersService } from '../../../services/players.service';
import { HomePageActions } from './home.actions';
import { catchError, concatMap, map, of } from 'rxjs';

export const loadPlayersEffect = createEffect(
  (actions$ = inject(Actions), playersService = inject(PlayersService)) => {
    return actions$.pipe(
      ofType(HomePageActions.playersLoad),
      concatMap(() =>
        playersService.getPlayers().pipe(
          map((players) => HomePageActions.playerLoadedSuccess({ players })),
          catchError((error: { message: string }) =>
            of(HomePageActions.playerLoadFailure({ message: error.message })),
          ),
        ),
      ),
    );
  },
  { functional: true },
);
```

We have the effect ready, so it's time to register it in `app.config`: import the home effects and pass them to the `provideEffects` function. The `app.config` looks like:

```typescript
import * as homeEffects from './pages/home/state/home.effects';

export const appConfig = {
  providers: [
    provideRouter(routes),
    provideStore({
      home: homeReducer,
    }),
    provideStoreDevtools({
      name: 'nba-app',
      maxAge: 30,
      trace: true,
      connectInZone: true,
    }),
    provideEffects(homeEffects), // provide the effects
    provideAnimationsAsync(),
    provideHttpClient(withInterceptors([authorizationInterceptor])),
  ],
};
```

We have registered the effect, so it's time to refactor the code in the `HomeComponent`.
Remove the injection of the players service, as we no longer need to subscribe to it. The home component looks like:

```typescript
export class HomeComponent implements OnInit {
  private _store = inject(Store);
  public $loading = this._store.selectSignal(selectLoading);
  public $players = this._store.selectSignal(selectPlayers);
  public $acceptTerms = this._store.selectSignal(selectAcceptTerms);
  public $allTasksDone = this._store.selectSignal(selectAllTaskDone);

  public ngOnInit(): void {
    this._store.dispatch(HomePageActions.playersLoad());
  }

  onChange() {
    this._store.dispatch(HomePageActions.acceptTerms());
  }

  onRejectTerms() {
    this._store.dispatch(HomePageActions.rejectTerms());
  }
}
```

Done! Our app is now using effects, and our components are clean and organized!

## **Recap**

We learned how to handle side effects like HTTP requests and clean up components that had too many responsibilities, using actions, reducers, and effects to manage state and side effects. We refactored our component to use NgRx Effects for fetching data from an API. By moving the data-fetching logic into effects, components only need to dispatch actions and react to state changes, resulting in cleaner and more maintainable code.

* Source Code: [https://github.com/danywalls/start-with-ngrx/tree/using-effects](https://github.com/danywalls/start-with-ngrx/tree/using-effects)
* [NgRx Effects Documentation](https://ngrx.io/guide/effects)
* [NgRx createEffect](https://ngrx.io/api/effects/createEffect)
* [ofType](https://ngrx.io/guide/effects/operators#oftype)
danywalls
1,882,169
Unlocking the Power of the Cloud: A Comprehensive Guide to Cloud Computing
What is Cloud Computing? Cloud computing is a transformative technology that allows businesses and...
0
2024-06-09T15:29:50
https://dev.to/mcckeown/unlocking-the-power-of-the-cloud-a-comprehensive-guide-to-cloud-computing-4cc4
beginners, webdev
**What is Cloud Computing?**

Cloud computing is a transformative technology that allows businesses and individuals to access and use computing resources over the internet. Instead of owning and maintaining physical servers and data centers, users can leverage cloud services to store data, run applications, and perform various IT tasks. This model provides on-demand access to a shared pool of resources like servers, storage, and applications, which can be rapidly provisioned and released with minimal management effort.

**Brief History of Cloud Computing**

The concept of cloud computing dates back to the 1960s with the idea of time-sharing, which allowed multiple users to access a single computer system. However, the modern era of cloud computing began in the late 1990s and early 2000s. In 1999, Salesforce introduced the idea of delivering enterprise applications via a simple website, and in 2006, Amazon Web Services (AWS) launched its cloud-based infrastructure services, revolutionizing the way businesses operated by offering scalable and affordable computing resources. Since then, other tech giants like Google, Microsoft, and IBM have also entered the cloud market, making cloud computing a cornerstone of the digital economy.

**Benefits of Cloud Computing**

Cloud computing offers numerous benefits, summarized as **CSFRAS**: Cost Efficiency, Scalability, Flexibility, Reliability, Agility, Security, and Sustainability.

**Cost Efficiency:** Cloud computing eliminates the need for large capital investments in hardware and software. Instead, businesses can pay for what they use, reducing operational costs.

**Scalability:** Cloud services can scale up or down based on demand, ensuring that businesses can handle peak loads without over-provisioning resources.

**Flexibility:** Cloud computing supports a wide range of applications and services, providing businesses with the flexibility to choose the best solutions for their needs.
**Reliability:** Major cloud providers offer high levels of reliability with guaranteed uptime and disaster recovery capabilities, ensuring continuous business operations.

**Agility:** Cloud computing allows for rapid deployment of new applications and services, enabling businesses to innovate faster and respond quickly to market changes.

**Security:** Leading cloud providers invest heavily in security measures, offering robust protection against cyber threats and ensuring compliance with various regulations.

**Sustainability:** Cloud providers optimize their data centers for energy efficiency and sustainability, helping reduce the carbon footprint of IT operations.

**Cloud Computing Models**

There are three primary cloud computing models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

**IaaS:** Provides virtualized computing resources over the internet. Examples include AWS EC2 and Google Compute Engine. Businesses use IaaS for flexible, scalable infrastructure without the need for physical hardware.

**PaaS:** Offers hardware and software tools over the internet, mainly for application development. Examples include Google App Engine and Microsoft Azure. PaaS simplifies the development process by providing a complete environment for development, testing, and deployment.

**SaaS:** Delivers software applications over the internet, on a subscription basis. Examples include Google Workspace and Microsoft Office 365. SaaS provides ready-to-use applications that are accessible from anywhere, making it easy for businesses to collaborate and operate efficiently.

**Cloud Deployment Models**

Cloud deployment models determine how cloud services are provided and utilized. The main models are Private, Public, and Hybrid clouds.

**Private Cloud:** Used exclusively by a single organization. It offers greater control and security, making it ideal for businesses with sensitive data or regulatory requirements.
Example: A financial institution running its own private cloud to safeguard customer data.

**Public Cloud:** Services are delivered over the public internet and shared across multiple organizations. It is cost-effective and scalable, suitable for startups and businesses with fluctuating demands. Example: A retail company using AWS to handle seasonal traffic spikes.

**Hybrid Cloud:** Combines private and public clouds, allowing data and applications to be shared between them. This model offers the best of both worlds—flexibility and security. Example: A healthcare provider using a private cloud for patient records and a public cloud for running their website.

Organizations choose deployment models based on factors like data sensitivity, cost, performance requirements, and regulatory compliance.

**The Future of Cloud Computing**

Cloud computing is poised for tremendous growth and innovation in the coming years. According to industry forecasts, the global cloud computing market is expected to reach $832.1 billion by 2025. Emerging technologies like artificial intelligence (AI), machine learning (ML), and the Internet of Things (IoT) are driving further adoption of cloud services.

**Job Prospects and Projections**

The demand for cloud computing professionals is skyrocketing. Roles such as cloud architects, cloud engineers, DevOps engineers, and cloud security experts are in high demand. Companies are looking for skilled individuals who can design, manage, and secure cloud infrastructure.

**Why Learning Cloud Computing is a Good Decision**

Learning cloud computing equips you with the skills to work with cutting-edge technology, opening doors to exciting career opportunities. With businesses across all industries migrating to the cloud, knowledge of cloud platforms and services will be invaluable. Additionally, certifications from leading cloud providers like AWS, Google Cloud, and Microsoft Azure can significantly enhance your career prospects and earning potential.
As businesses continue to embrace digital transformation, the reliance on cloud computing will only increase. Innovations in cloud services, enhanced security measures, and sustainable practices will shape the future of the industry. By investing in cloud computing skills today, you are preparing for a future where cloud expertise will be indispensable.

In conclusion, cloud computing is not just a trend but a fundamental shift in how we manage and deliver IT resources. Its benefits, versatility, and future potential make it a critical area for both businesses and IT professionals. Whether you are a seasoned professional or just starting, understanding and mastering cloud computing will undoubtedly position you for success in the digital age.
mcckeown
1,882,168
How To Add HTTP Headers to Requests with Functional Interceptors in Angular
When we work with request data in Angular to an external API, sometimes we need to add or send...
0
2024-06-09T15:29:21
https://www.danywalls.com/how-to-add-http-headers-to-requests-with-functional-interceptors-in-angular
javascript, angular, frontend
When we request data from an external API in Angular, we sometimes need to add or send headers, and repeating that code in every request is nothing to be proud of. For example, the [**ball don't lie API**](https://www.balldontlie.io/) requires sending the `Authorization` header with the API key. One simple solution is to create an object with my headers:

```typescript
private _ballDontLieAuthHeader = {
  Authorization: `MY_AMAZING_TOKEN`,
};
```

Next, add the `_ballDontLieAuthHeader` header to every request. The code looks like this:

```typescript
import { inject, Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { delay, map } from 'rxjs';
import { Player } from '../entities/player';

@Injectable({ providedIn: 'root' })
export class PlayersService {
  private _http = inject(HttpClient);

  private _ballDontLieAuthHeader = {
    Authorization: `MY_AMAZING_TOKEN`,
  };

  public getPlayers() {
    return this._http
      .get<{ data: Array<Player> }>(`/players`, {
        headers: this._ballDontLieAuthHeader,
      })
      .pipe(
        map((response) => response.data),
        delay(5000),
      );
  }

  public getPlayerById(id: string) {
    // ...
  }

  public deletePlayer(id: string) {
    // ...
  }
}
```

Maybe it works, but what happens if the headers are needed in other services? I would have to import the `_ballDontLieAuthHeader` everywhere, and every request method would need to add them 😿. A better alternative is to use interceptors for this. Let's explore how to do that.

## **The Interceptors**

Functional interceptors are functions that run on every request. They help us add headers, retry failed requests, cache responses, and do [**more**](https://angular.dev/guide/http/interceptors#intercepting-response-events). Creating an interceptor is simple: it's just a function that receives the `HttpRequest` and a `next` function to invoke the next step in the interceptor chain.
```typescript
export function monitorInterceptor(req: HttpRequest<unknown>, next: HttpHandlerFn): Observable<HttpEvent<unknown>> {
  console.log(`🐒 hi!`);
  return next(req);
}
```

The interceptors must be registered with `provideHttpClient()`, for example in `app.config` or `bootstrapApplication`.

```typescript
bootstrapApplication(AppComponent, {
  providers: [
    provideHttpClient(
      withInterceptors([monitorInterceptor, myOtherInterceptor]),
    ),
  ],
});
```

Now that we know how easy it is to create an interceptor, let's update our code by moving the API URL and key to the environment file. Then we can create and register the interceptor.

## **Configure The Environments**

First, starting with Angular 15, the environment files are not included by default. However, we can easily generate them using the Angular CLI. Run the command `ng g environments` in the terminal, which creates the `environment.ts` and `environment.development.ts` files.

```bash
ng g environments
CREATE src/environments/environment.ts (31 bytes)
CREATE src/environments/environment.development.ts (31 bytes)
```

Open `environment.ts` and add the API URL and token:

```typescript
export const environment = {
  production: true,
  apiUrl: 'https://api.github.com/repos',
  token: 'your-api-key',
};
```

The environment is ready! Let's create and register our interceptor.

## **Create and Register Interceptor**

Using the Angular CLI, create an interceptor by running the `ng g interceptor interceptors/authorization` command.

```bash
ng g interceptor interceptors/authorization
CREATE src/app/interceptors/authorization.interceptor.spec.ts (512 bytes)
CREATE src/app/interceptors/authorization.interceptor.ts (158 bytes)
```

Open the `authorization.interceptor.ts` file. In the `authorizationInterceptor` function, we get the `req` and `next` parameters.
```typescript
export const authorizationInterceptor: HttpInterceptorFn = (req, next) => {
  return next(req);
};
```

We clone the request using the `.clone()` method and set the properties to change on the new instance. Use `req.headers.set('Authorization', environment.token)` to add the `Authorization` header to the request, and pass the cloned request to `next` so the change is included in the request. The final code looks like this:

```typescript
import { HttpInterceptorFn } from '@angular/common/http';
import { environment } from '../../environments/environment';

export const authorizationInterceptor: HttpInterceptorFn = (req, next) => {
  const requestWithAuthorization = req.clone({
    headers: req.headers.set('Authorization', `${environment.token}`),
  });
  return next(requestWithAuthorization);
};
```

Finally, open the `app.config` file and import `provideHttpClient`. Then register the `authorizationInterceptor` using the `withInterceptors` function:

```typescript
import { provideHttpClient, withInterceptors } from '@angular/common/http';

export const appConfig = {
  providers: [
    provideHttpClient(withInterceptors([authorizationInterceptor])),
  ],
};
```

Save the changes and voilà! Every request now includes the `Authorization` header with the token 🎉!
danywalls
1,882,167
Mastering Angular: A Complete Guide for Beginners
Angular is one of the...
0
2024-06-09T15:27:59
https://dev.to/mayra_machado_f50e69498d7/dominando-o-angular-guia-completo-para-iniciantes-3l5p
Angular is one of the most popular frameworks for web application development. In this guide, we will explore the fundamentals of Angular and how you can start building robust, scalable applications. Developed by Google, Angular was released in 2010 and has evolved significantly since then, becoming a reliable choice for web developers.

Environment Setup: how to install and configure the Angular CLI.

Basic Components: creating and managing components in Angular.

Services and Dependency Injection: using services for business logic and communication between components.

Routing and Navigation: implementing routing to build a SPA (Single Page Application).

Discussion: a comparison between Angular and other frameworks such as React and Vue.js, and an analysis of the advantages and disadvantages of using Angular in different types of projects.

Angular is a powerful tool for web developers. With this guide, you are ready to start exploring the possibilities Angular offers. "Want to learn more about Angular development? Check out our complete tutorials and get started today!"
mayra_machado_f50e69498d7
1,882,166
A crash course in using Bunjs instead of Node.js on Linux
Transitioning from Node.js to Bun on Linux: A Complete Guide Introduction In...
0
2024-06-09T15:23:54
https://dev.to/chovy/a-crash-course-in-using-bunjs-instead-of-nodejs-on-linux-56bi
linux, bunjs, node, tutorial
# Transitioning from Node.js to Bun on Linux: A Complete Guide

## Introduction

In the evolving world of JavaScript, Bun is quickly making a name for itself as a high-performance runtime environment that's compatible with Node.js but significantly faster. This guide will walk you through the basics of getting started with Bun on a Linux system, and how it compares to the traditional Node.js setup.

## Installation

### Step 1: Install Bun

To install Bun on your Linux system, open your terminal and run the following command:

```bash
curl -fsSL https://bun.sh/install | bash
```

This command downloads and installs Bun, automatically adding it to your path for immediate use.

## Basic Commands

Here are some basic Bun commands to get you started:

- **Run a script**: `bun run app.js`
- **Install a package**: `bun add express`

## Creating a Web Server with Express

Let's create a simple web server using Express. First, install Express using Bun:

```bash
bun add express
```

Then, create a file named `app.js` and add the following code:

```javascript
import express from 'express';

const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello World with Bun!');
});

app.listen(port, () => {
  console.log(`Server listening at http://localhost:${port}`);
});
```

Run your server with:

```bash
bun run app.js
```

## Differences from Node.js

Bun is designed to be faster and more efficient than Node.js:

- **Performance**: Uses a high-speed JavaScript engine.
- **Built-in package manager**: Simplifies your workflow by integrating package management.

## Conclusion

Transitioning to Bun can significantly enhance your development workflow and application performance. Give it a try and see the difference for yourself!

---
chovy
1,882,131
Web accessibility, how to design web pages for everyone
As developers, how can we make at least the web more accessible for all? That’s the main question of...
0
2024-06-09T15:18:43
https://dionarodrigues.dev/blog/web-accessibility-how-to-design-web-pages-for-everyone
a11y, webacessibility, webdev, inclusion
**As developers, how can we make at least the web more accessible for all? That's the main question of this article, and here we'll explore some ideas to make our websites more inclusive: a place where people with disabilities can also use the internet to learn something new, acquire a different skill, buy products, and do many other things that most of us do in our daily digital life and maybe have never thought about before.**

> "Accessibility: It's About People" - [W3C](https://www.w3.org/WAI/people/)

Did you know that **1.3 billion people (16% of the world's population) are estimated to live with some form of disability nowadays** and that they face many issues related not only to stigma and discrimination, but also to exclusion from education and employment? Information from the [World Health Organization (WHO) report on disability](https://www.who.int/news-room/fact-sheets/detail/disability-and-health).

## What is accessibility when it comes to the web

In short, **web accessibility, more specifically, is about diversity, equity and inclusion**: everyone can perceive, understand, navigate and interact with the web no matter what their ability or circumstances. Some examples of disabilities that affect access to the web are:

- **Visual**: includes people with blindness, low-level vision, and color blindness. Following WHO reports, [2.2 billion people in the world have a near or distance vision impairment](https://www.who.int/en/news-room/fact-sheets/detail/blindness-and-visual-impairment).
- **Auditory**: deaf and hard-of-hearing (DHH) people. The WHO states that [5% of the world's population needs rehabilitation to deal with their disabling hearing loss](https://www.who.int/en/news-room/fact-sheets/detail/deafness-and-hearing-loss).
- **Cognitive**: refers to a broad range of disabilities that include intellectual disability, dyslexia, autism, brain injury, stroke, Alzheimer's disease…
- **Speech**: difficulty in a person's ability to produce sounds that create words.
- **Mobility**: difficulty performing movements, ranging from walking to manipulating objects with your hands.

It is important to note that **some disabilities are permanent, but many of them are also temporary**, such as not being able to use your smartphone with your hands while driving, where voice command solutions can be useful, for example; or not being able to hear sound at the moment, in which case subtitles in videos can help the user better understand the content.

> "Web accessibility is about creating web content, design, and tools that can be used by everyone regardless of ability." - [Monsido](https://monsido.com/web-accessibility)

## How to improve user experience with web accessibility

Accessibility standards, in addition to [WCAG (Web Content Accessibility Guidelines provided by W3C)](https://www.w3.org/WAI/standards-guidelines/wcag/), also involve [local/country laws and policies](https://www.w3.org/WAI/policies/), as well as the type of product and industry, and a combination of all of them will definitely lead to digital design being more accessible for everyone. In this article, we will focus on some well-known web accessibility principles for improving user experience on web pages, and most of them are certainly included in your country's government requirements as well.

### Semantic HTML

The most basic principle of web accessibility is also a bit generic, but here it's important for you to know that **you should use HTML tags correctly to help [screen readers](https://en.wikipedia.org/wiki/Screen_reader)** understand and navigate the content, identifying titles, subtitles and paragraphs, for example.

- Set the language of the document by using the `lang` attribute in the opening `<html>` tag.
- Replace `<div>` tags with semantic HTML elements, such as `<main>`, `<article>`, `<section>`, `<header>`, `<nav>` and `<footer>`.
- Be sure to use heading tags to achieve logical levels of content structure, not because you want to display larger font sizes: `<h1>`, `<h2>`, `<h3>`, `<h4>`, `<h5>` and `<h6>`.
- Nest HTML elements correctly, otherwise browsers and [assistive technologies](https://www.w3.org/WAI/people-use-web/tools-techniques/#at) may not be able to understand the content as intended.
- Use lists whenever you need to display a list of items, like a menu or the ingredients in a recipe, for example: `<ul>` and `<ol>`.
- Use `<table>` only when you need to display structured tabular data, not for layout or anything else.

Read more about [semantic HTML in this MDN article](https://developer.mozilla.org/en-US/curriculum/core/semantic-html/).

### Document Title

One of the main HTML tags, **`<title>` (the document title element, located in the `<head>`) is the first piece of information that screen readers say when users navigate between pages**. This is also important because it appears in the browser tab, helping the user to know where they are and navigate between the pages open in their browser.

- The title should be unique for every page of your site and any other related site.
- It should be a descriptive phrase related to the content of the page.
- If the title is long, try to put the most important words in the first 55-60 characters, as search engines usually display around that.

Read more on [Page Title in MDN](https://developer.mozilla.org/en-US/docs/Web/HTML/Element/title).

### Alt Text (Image Alternative Text)

**Images that are part of the content, and not just decorative, must have alternative text to describe them**, so that blind people (or those with low vision) and screen readers can understand them. Users who disable images to save bandwidth also take advantage of this feature.
Alt text is added via the `alt` attribute in the `<img>` HTML tag; see the example below:

`<img src="logo.png" alt="Diona Rodrigues logo" />`

It's important to use short, concise and appropriate alternative text for images:

- Alt text must always be associated with the image content.
- If the image is just decorative, leave the `alt` attribute blank.
- For functional images, such as images used as buttons, the text should start with action words like "submit" and "go to", for example.
- If there is text in the image, such as a logo, for example, that text must be in the alternative text.

Read more about [alt text on web.dev](https://web.dev/learn/accessibility/images).

### Keyboard navigation and focus

Although some users, for several reasons, prefer to **navigate a web page using only the keyboard**, people with low vision or blindness can use the keyboard combined with a screen reader for this purpose, and browsers by default apply a visual style to the elements receiving focus. You can play around with this by going to this [MDN project to see how native keyboard accessibility works](https://mdn.github.io/learning-area/tools-testing/cross-browser-testing/accessibility/native-keyboard-accessibility.html) and pressing the tab key.

- Normally the navigation order when pressing the tab key depends on the HTML structure; however, you can change it using the [HTML attribute called "tabindex"](https://developer.mozilla.org/en-US/docs/Web/HTML/Global_attributes/tabindex). This attribute receives numbers, where 0 (`tabindex="0"`) means that the element follows the natural order as it appears in the DOM. Negative numbers cause elements to be removed from the natural focus order. The priority focus order is then based on positive numbers, where the smallest has priority over the largest (1, 2, 3...).
- [Skip links](https://developer.mozilla.org/en-US/docs/Learn/Accessibility/HTML#skip_links) are very useful for allowing users to skip the header and navigate to the main content of a page. They are usually visually hidden and can be accessed by pressing the "tab" key. It is normally the first element that will receive focus on a page.
- Although we can disable styling for focused HTML elements, you should never do this. However, you can create your own style like the example below.

See how to change the style of a focused HTML element by changing the outline CSS property:

```css
:focus {
  outline: 3px solid hsla(220, 100%, 50%, 80%);
}
```

Alternatively, you can replace the outline CSS property with the box-shadow:

```css
:focus {
  outline: none;
  box-shadow: 0 0 0 3px hsla(220, 100%, 50%, 80%);
}
```

Read more about [keyboard navigation and focus on web.dev](https://web.dev/learn/accessibility/focus).

### Color and contrast

Following the [WCAG Four Accessibility Principles](https://www.w3.org/WAI/WCAG22/Understanding/intro#understanding-the-four-principles-of-accessibility), **all users must be able to perceive the content on the page, and therefore color and contrast are vital to achieving this**. Color should not be used as the sole visual means of conveying information, indicating an action, or distinguishing a visual element, as color may not be seen by colour-deficient users. And the contrast between the text and the background should be enough for users with moderately low vision to read it. To measure contrast, WCAG uses a technique called "contrast ratio," which takes the difference in luminance (or brightness) between the colours of foreground and background content (i.e., usually text) and checks its legibility. I really recommend you read ["Contrast and Color Accessibility" by WebAIM](https://webaim.org/articles/contrast/) to understand more about it.
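The contrast-ratio measurement can be expressed directly in code. The sketch below follows the WCAG 2.x definition (sRGB relative luminance, then `(L1 + 0.05) / (L2 + 0.05)`); the function names are mine, not from any library:

```typescript
// WCAG 2.x contrast ratio between two sRGB colors, each given as [r, g, b]
// with channels in 0..255.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  const [R, G, B] = [r, g, b].map((channel) => {
    const c = channel / 255; // normalize to 0..1
    // Undo sRGB gamma encoding, per WCAG's relative-luminance formula.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

function contrastRatio(a: [number, number, number], b: [number, number, number]): number {
  // The lighter luminance always goes on top, so the ratio is >= 1.
  const [lighter, darker] = [relativeLuminance(a), relativeLuminance(b)].sort((x, y) => y - x);
  return (lighter + 0.05) / (darker + 0.05);
}
```

Pure white on pure black yields the maximum ratio of 21:1, while a mid-grey like `#777` on white lands just under the 4.5:1 minimum for body text, which is why tools like Contrast Checker flag it.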
Check out a list of suggestions to make content more accessible with color and contrast: - WCAG defines some types of text as “incidental,” meaning they have no contrast requirements: inactive elements (such as disabled buttons), decorative elements (such as text in images used only for background decoration purposes), invisible elements (like a [skip link](https://developer.mozilla.org/en-US/docs/Learn/Accessibility/HTML#skip_links)) and part of an image that contains other significant visual content (like a license plate in an image showing city traffic, for example). - Make sure the contrast between text (and also images of text) and background has a contrast ratio of at least 4.5:1. Larger text (minimum size of 18pt, or 14pt bold) should have a ratio of at least 3:1. You can use tools like [Contrast Checker](https://webaim.org/resources/contrastchecker/) to measure it. - Avoid an overly high-contrast colour scheme for your site, as it can make reading difficult for people with dyslexia, as [this study](https://www.w3.org/WAI/RD/2012/text-customization/r11) shows. - Don’t rely solely on colours to convey information, as some people will not be able to see these colours. So, instead of using only the colour red to mark required form fields, mark them with both an asterisk and the colour red, for example. Read more about [color and contrast in this WebAIM article](https://webaim.org/articles/contrast/). ### Typography and Text Resizing **Typography plays a big role on web pages and it is essential to choose the correct font family, font size, as well as properties such as letter and line spacing to make texts readable.
Additionally, some users with low vision may need to zoom in to read content better, so relative rather than absolute sizes are very important for web accessibility.** Some tips for better typography when it comes to web accessibility: - Studies show that [people with disabilities find it easier to read texts using common fonts](http://dyslexiahelp.umich.edu/sites/default/files/good_fonts_for_dyslexia_study.pdf) such as Helvetica, Arial and Times New Roman, for example. Therefore, avoid choosing fonts with cursive designs or artistic shapes. - [Avoid using too many different typefaces](https://webaim.org/techniques/fonts/#limited), as this forces our brain to work harder and spend more time building a map of their characters and patterns to parse words when reading a text. - [Line length should be between 50 and 120 characters](https://webaim.org/techniques/textlayout/#line) to provide comfort when returning to the beginning of the next line. - Font sizes should be based on relative values (%, rem or em) so they can easily be resized when needed (using browser zoom, for example). - Since screen readers cannot read text embedded in images, use real text in your markup instead. - Especially in long texts, use elements such as headings, subheadings, lists and quotes, for example, to break the linearity of the content and make reading more comfortable. - [WCAG defines how text spacing should be applied](https://www.w3.org/WAI/WCAG21/Understanding/text-spacing), with some exceptions, and shows that the spacing between letters must be at least 0.12 times the font size; line height/spacing, 1.5 times; spacing between paragraphs, 2 times; and spacing between words, 0.16 times. Be careful, because spacing that is too small or too large can also affect readability. Read more about [typography in web.dev](https://web.dev/learn/accessibility/typography).
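As a rough starting point, the WCAG spacing minimums above can be expressed in CSS relative to the font size (the selectors and exact values here are just an illustration):

```css
body {
  font-size: 100%; /* relative size, so browser zoom and user settings still work */
}

p {
  line-height: 1.5;       /* at least 1.5 × font size */
  letter-spacing: 0.12em; /* at least 0.12 × font size */
  word-spacing: 0.16em;   /* at least 0.16 × font size */
  margin-bottom: 2em;     /* paragraph spacing, at least 2 × font size */
}
```

Using `em` units keeps the spacing proportional if the user resizes the text.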
### More web accessibility improvements Above you saw many aspects of a web page that you can improve, providing a better experience for all users, especially those with permanent or temporary disabilities. And of course there are other elements that you can also take into consideration, like ARIA, forms, animation, video and audio, for example. So I strongly suggest you take a look at [web.dev](https://web.dev/learn/accessibility) and [MDN documentation](https://developer.mozilla.org/en-US/docs/Learn/Accessibility) to learn more about it. ## How to measure web accessibility The sooner accessibility is assessed the better, so if you are starting a new project I suggest you plan to apply at least the web accessibility best practices I mentioned in the previous section. Otherwise, it's best to group the improvements you need to apply and find the best way to do so based on the specifics of your existing project. There are many ways to measure web accessibility, from checklists to online tools and browser extensions: - [Web Accessibility Evaluation Tools List by W3C](https://www.w3.org/WAI/test-evaluate/tools/list/) - [Web Accessibility Checklist by A11Y Project](https://www.a11yproject.com/checklist/) - [W3C Web Accessibility Checklist](https://www.w3.org/WAI/test-evaluate/preliminary/) - [Automated Tools for Testing Accessibility by Harvard University](https://accessibility.huit.harvard.edu/auto-tools-testing) ## Resources for learning about web accessibility There are many ways to build a successful accessible website, and a good start is to follow the [international Web Content Accessibility Guidelines (WCAG)](https://www.w3.org/WAI/standards-guidelines/wcag/) created and maintained by the W3C. Because the guidelines are extensive, you can start with [WCAG 2 at a Glance](https://www.w3.org/WAI/standards-guidelines/wcag/glance/), which summarises the guidelines in groups.
[Mozilla's MDN](https://developer.mozilla.org/en-US/docs/Learn/Accessibility) is another great resource for learning about web accessibility and is sure to provide all the knowledge you need to improve websites and applications, making them accessible to everyone. The Google team, through [web.dev](https://web.dev/learn/accessibility), also offers an easy-to-understand course on web accessibility, where you will find several examples and practical suggestions on how to apply them. Utah State University also has a great project called [WebAIM](https://webaim.org/articles/) full of articles to learn about web accessibility. Last but not least, I found a website called [Monsido](https://monsido.com/web-accessibility), which also has good explanations on the subject. ## References on web accessibility - [Disability by World Health Organisation](https://www.who.int/news-room/fact-sheets/detail/disability-and-health) - [Accessibility Fundamentals Overview by W3C](https://www.w3.org/WAI/fundamentals/) - [What is accessibility? by MDN](https://developer.mozilla.org/en-US/docs/Learn/Accessibility/What_is_accessibility) - [Introduction to Web Accessibility by Monsido](https://monsido.com/web-accessibility) - [Learn Accessibility by web.dev](https://web.dev/learn/accessibility) - [Accessibility Principles](https://www.w3.org/WAI/fundamentals/accessibility-principles/) ### Videos on YouTube - [Enhancing visual design with web accessibility by WebFlow](https://www.youtube.com/watch?v=pj4lVJIjRZ0&ab_channel=Webflow) - [Accessibility vs. Inclusive Design by NNGroup](https://www.youtube.com/watch?v=hE83Qn-PTGA&ab_channel=NNgroup) - [The Internet's Accessibility Problem — and How To Fix It by Clive Loseby on TED](https://www.youtube.com/watch?v=QWPWgaDqbZI&ab_channel=TED) - [Web Accessibility: ADA Compliance Tips to Design for All Users (FREE Checklist!)
by HubSpot Marketing](https://www.youtube.com/watch?v=zoAFBJl9DHQ&ab_channel=HubSpotMarketing) - [Accessibility - The State of the Web by Chrome for Developers](https://www.youtube.com/watch?v=TomOQYxFnrU&ab_channel=ChromeforDevelopers) ## Conclusion **Web accessibility is about creating websites for everyone and should not be an option; on the contrary, it is non-negotiable when it comes to web pages.** > “Accessibility is essential for developers and organisations that want to create high-quality websites and web tools, and not exclude people from using their products and services.” - [W3C](https://www.w3.org/WAI/fundamentals/accessibility-intro/) There are several types of disabilities, such as low vision, blindness, deafness, autism, dyslexia and difficulty producing sound, among many others, and all of them must be taken into consideration when developing a web page. The right set of strategies for improving web accessibility will also depend on the type of project, industry and government laws. By following at least some of the suggestions I left in this article, your web project will certainly provide a much better user experience for everyone in different contexts. Also, I really recommend that you take a look at all the links I've added throughout this article as they provide more information that can guide you through this process. See you next time! 😁
dionarodrigues
1,882,127
How to configure Server-Side Encryption (SSE-S3) in Amazon S3?
Introduction Amazon S3 offers various encryption options to secure your data at rest....
0
2024-06-09T15:15:56
https://dev.to/siddhantkcode/how-to-configure-server-side-encryption-sse-s3-in-amazon-s3-3nlk
aws, s3, security, encryption
## Introduction Amazon S3 offers various encryption options to secure your data at rest. Among these options, Server-Side Encryption (SSE) is a powerful feature where Amazon S3 automatically encrypts your objects. This blog post will guide you through configuring SSE-S3 to encrypt objects added to an S3 bucket using the `PutObject` API operation. We'll cover the necessary steps, including bucket creation, policy configuration, and practical implementation using the Python `boto3` library. ## What is SSE-S3? Server-Side Encryption with Amazon S3-Managed Keys (SSE-S3) is a method for encrypting data at rest. When you use SSE-S3, Amazon S3 encrypts your data using AES-256 encryption, and Amazon S3 manages both the encryption and the decryption process. ## Steps to Configure SSE-S3 ### 1. Create or Select an S3 Bucket First, you'll need an S3 bucket where you want to store your encrypted objects. You can either create a new bucket or use an existing one. - To create a new bucket: - Open the Amazon S3 console. - Choose **Create bucket**. - Enter a unique bucket name and select the region. - Configure any additional settings as needed and choose **Create bucket**. ### 2. Configuring Bucket Policies To enforce that all objects uploaded to your bucket are encrypted using SSE-S3, you need to configure a bucket policy. - Go to the Amazon S3 console. - Select your bucket. - Navigate to the **Permissions** tab. - Under **Bucket Policy**, add the following policy: ```json { "Version": "2012-10-17", "Statement": [ { "Sid": "EnableSSE-S3", "Effect": "Deny", "Principal": "*", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*", "Condition": { "StringNotEquals": { "s3:x-amz-server-side-encryption": "AES256" } } } ] } ``` Replace `YOUR_BUCKET_NAME` with the name of your bucket. This policy ensures that any `PutObject` request without the `x-amz-server-side-encryption` header set to `AES256` will be denied. ### 3. 
Confirming the Configuration After setting up your bucket and policy, it's crucial to verify that the configuration works as intended. #### Using boto3 in Python To test SSE-S3, we'll use the `boto3` library, which is the Amazon Web Services (AWS) SDK for Python. 1. **Install boto3** if you haven't already: ```sh pip install boto3 ``` 2. **Upload an Object with SSE-S3**: Here's a simple Python script that uploads an object to your S3 bucket with server-side encryption enabled: ```python import boto3 # Initialize a session using Amazon S3 s3_client = boto3.client('s3') # Upload a new file response = s3_client.put_object( Bucket='YOUR_BUCKET_NAME', Key='example.txt', Body=b'Hello world!', ServerSideEncryption='AES256' ) print(response) ``` Replace `YOUR_BUCKET_NAME` with your actual bucket name. 3. **Verify the Object**: After running the script, check the S3 console to ensure that the object `example.txt` is uploaded and encrypted. You can confirm this by checking the properties of the uploaded object in the S3 console, where it should indicate that server-side encryption is enabled with `AES-256`. ## Conclusion By following these steps, you can ensure that all objects stored in your Amazon S3 bucket are encrypted using SSE-S3. This adds an extra layer of security to your data at rest, helping you comply with various security and compliance requirements. Configuring SSE-S3 is a straightforward process that involves creating or selecting a bucket, setting up a bucket policy, and confirming the encryption configuration through practical implementation. With the example provided using the `boto3` library in Python, you can seamlessly integrate SSE-S3 into your applications, ensuring robust data protection for your stored objects. --- For more tips and insights on security and log analysis, follow me on Twitter [@Siddhant_K_code](https://twitter.com/Siddhant_K_code) and stay updated with the latest & detailed tech content like this. 
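As a final convenience, the enforcement policy from step 2 can also be generated programmatically when you provision buckets from scripts. Here is an illustrative sketch (the helper name is my own, and this does not call AWS):

```python
import json


def sse_s3_enforcement_policy(bucket_name: str) -> str:
    """Render the deny-unencrypted-uploads bucket policy from step 2."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "EnableSSE-S3",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket_name}/*",
                "Condition": {
                    "StringNotEquals": {
                        "s3:x-amz-server-side-encryption": "AES256"
                    }
                },
            }
        ],
    }
    return json.dumps(policy, indent=2)


print(sse_s3_enforcement_policy("my-example-bucket"))
```

The rendered JSON can then be passed to `put_bucket_policy` or pasted into the console, with `my-example-bucket` replaced by your real bucket name.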
--- ## Related Docs - https://docs.aws.amazon.com/AmazonS3/latest/userguide/UsingServerSideEncryption.html
siddhantkcode
1,857,408
Say "Oui" to Global Users: Localize Your Flutter App with Gemini
Why localize your app? Think about it, ignoring international users is like making a fire meme and...
0
2024-06-09T15:09:41
https://dev.to/koukibadr/fast-localization-with-gemini-on-flutter-4keh
flutter, gemini
Why localize your app? Think about it, ignoring international users is like making a fire meme and forgetting to post it online - what's the point? But don't worry, we hear you screaming. Fear not, for Gemini is here to be your Yoda in the localization swamp. We'll tackle all your app translation woes, so buckle up, get ready for a smooth ride and dive deep with us in this post! **What is localization and how is it done?** To conquer the globe, you need some serious localization skills. Think of locales as your app's Rosetta Stone – they unlock the ability to understand different languages, date formats (no more confusing July 4th with 4th of July!), currencies, and all those cultural nuances. So, French users see euros and "dd/mm/yyyy," while Spanish speakers get their pesos and the familiar "dd/mm/aaaa." Now, managing these locales can feel like herding cats. But fear not, there's a better way than wrestling with JSON files (because, let's be honest, that's a recipe for disaster). Enter the dynamic duo of flutter_localizations and intl – these packages are your knight in shining armor, recommended by the official Flutter people themselves. Here's the battle plan: - Recruit your allies: Integrate flutter_localizations and intl into your app. - Set up your command center: Create a l10n.yaml config file. ```yaml arb-dir: lib/l10n template-arb-file: app_en.arb output-localization-file: app_localizations.dart ``` - Deploy your troops: Create .arb files under the lib/l10n folder (or a location of your choosing, just update the config file accordingly). These .arb files will house all your app's localized strings. With this battle plan in place, you'll be a localization champion in no time, ready to turn your app into a global phenomenon! **With Gemini** Feeling overwhelmed by app localization? Manually translating endless lines for every language can be a daunting task. But fear not, developer! Google Gemini has your back with the arb_translate package.
Think of arb_translate as your personal localization assistant. You simply write the default translations, and then unleash the power of Gemini! With a single arb_translate command, Gemini translates all the other languages you need, taking context (e-commerce, cars, coding?) into account for natural-sounding translations. Ditch the manual translation grind and let Gemini be your localization hero. It's a much more efficient way to conquer the world with your app! Now that you've got a taste of Gemini's localization magic, why not jump in and try it for yourself? You can finally overcome the language barrier and reach new heights, just like Lara Croft raiding tombs and uncovering hidden treasures across the globe! - [https://docs.flutter.dev/ui/accessibility-and-internationalization/internationalization](https://docs.flutter.dev/ui/accessibility-and-internationalization/internationalization) - [https://pub.dev/packages/arb_translate](https://pub.dev/packages/arb_translate)
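For reference, the template `.arb` file from the battle plan above (`lib/l10n/app_en.arb`) could look something like this (the key and description are placeholder examples):

```json
{
  "@@locale": "en",
  "helloWorld": "Hello World!",
  "@helloWorld": {
    "description": "Greeting shown on the home screen"
  }
}
```

arb_translate then produces the matching `app_fr.arb`, `app_es.arb`, and so on, from this template.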
koukibadr
1,882,129
pottery-chicago
bachelorette parties
0
2024-06-09T15:07:18
https://dev.to/potterychicago1/pottery-chicago-4mbh
[bachelorette parties](https://pottery-chicago.com/bachelorette-party-chicago)
potterychicago1
1,882,128
Setting Up Emacs for Go Development on macOS
Introduction Emacs is a highly customizable text editor with powerful features for programming. In...
0
2024-06-09T15:05:41
https://dev.to/yas8say/setting-up-emacs-for-go-development-on-macos-g04
go
**Introduction** Emacs is a highly customizable text editor with powerful features for programming. In this guide, we’ll walk through installing Emacs on macOS, setting it up for Go development, and using it effectively. _Step 1: Install Emacs_ First, ensure you have Emacs installed on your system. You can download it from GNU Emacs or use a package manager. _Step 2: Install Go_ Make sure you have Go installed. You can download it from the official Go website. _Step 3: Install go-mode_ go-mode is an Emacs major mode for editing Go code. You can install it via MELPA (Milkypostman’s Emacs Lisp Package Archive). Enable MELPA in Emacs: Add the following to your Emacs configuration file (usually ~/.emacs or ~/.emacs.d/init.el): ``` (require 'package) (add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t) (package-initialize) ``` Install go-mode: Open Emacs and run the following commands: `M-x package-refresh-contents` `M-x package-install RET go-mode RET` _Step 4: Configure go-mode_ Add the following configurations to your Emacs configuration file to enable go-mode and some useful Go tools: ``` (require 'go-mode) ;; Set up Go-specific key bindings (add-hook 'go-mode-hook (lambda () (setq tab-width 4) (setq indent-tabs-mode 1))) ;; Enable auto-completion (add-hook 'go-mode-hook 'company-mode) ;; Enable Flycheck for real-time syntax checking (add-hook 'go-mode-hook 'flycheck-mode) ;; Enable automatic formatting on save (add-hook 'before-save-hook 'gofmt-before-save) ;; Optional: set $GOPATH and $GOROOT if not set globally (setenv "GOPATH" "/path/to/your/gopath") (setenv "GOROOT" "/path/to/your/goroot") ``` _Step 5: Install company-mode for Auto-completion company-mode is a text completion framework for Emacs._ Install company-mode: `M-x package-install RET company RET` _Step 6: Install flycheck for Syntax Checking_ flycheck provides real-time syntax checking. 
Install flycheck: `M-x package-install RET flycheck RET` _Step 7: Install and Configure gopls (Go Language Server)_ gopls is the official Go language server, providing IDE features. Install gopls: Open a terminal, then run: ``` go install golang.org/x/tools/gopls@latest ``` Configure Emacs to use gopls: Add the following to your Emacs configuration file: ``` (use-package lsp-mode :ensure t :commands (lsp lsp-deferred) :hook ((go-mode . lsp-deferred)) :config (setq lsp-prefer-flymake nil)) ;; Use flycheck instead of flymake (use-package lsp-ui :ensure t :commands lsp-ui-mode) (use-package company-lsp :ensure t :commands company-lsp) ``` _Step 8: Additional Tools_ To further enhance your Go development experience in Emacs, you might want to install additional tools: magit for Git integration: `M-x package-install RET magit RET` projectile for project management: `M-x package-install RET projectile RET` Example Configuration Here's an example of a complete Emacs configuration for Go development: ``` (require 'package) (add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t) (package-initialize) ;; Install and configure go-mode (use-package go-mode :ensure t :hook ((go-mode . lsp-deferred) (before-save . gofmt-before-save)) :config (setq tab-width 4) (setq indent-tabs-mode 1)) ;; Enable company-mode for auto-completion (use-package company :ensure t :hook (go-mode . company-mode)) ;; Enable flycheck for real-time syntax checking (use-package flycheck :ensure t :hook (go-mode . 
flycheck-mode)) ;; Configure lsp-mode and lsp-ui for Go (use-package lsp-mode :ensure t :commands (lsp lsp-deferred) :config (setq lsp-prefer-flymake nil)) (use-package lsp-ui :ensure t :commands lsp-ui-mode) (use-package company-lsp :ensure t :commands company-lsp) ;; Optional: projectile for project management (use-package projectile :ensure t :config (projectile-mode +1)) ;; Optional: magit for git integration (use-package magit :ensure t) ``` Combined code for all packages: ``` ;; Initialize package sources (require 'package) (add-to-list 'package-archives '("melpa" . "https://melpa.org/packages/") t) (package-initialize) ;; Ensure use-package is installed (unless (package-installed-p 'use-package) (package-refresh-contents) (package-install 'use-package)) ;; Install and configure go-mode (use-package go-mode :ensure t :hook ((go-mode . lsp-deferred) (before-save . gofmt-before-save)) :config (setq tab-width 4) (setq indent-tabs-mode 1)) ;; Enable company-mode for auto-completion (use-package company :ensure t :hook (go-mode . company-mode)) ;; Enable flycheck for real-time syntax checking (use-package flycheck :ensure t :hook (go-mode . flycheck-mode)) ;; Configure lsp-mode and lsp-ui for Go (use-package lsp-mode :ensure t :commands (lsp lsp-deferred) :config (setq lsp-prefer-flymake nil)) (use-package lsp-ui :ensure t :commands lsp-ui-mode) (use-package company-lsp :ensure t :commands company-lsp) ;; Optional: projectile for project management (use-package projectile :ensure t :config (projectile-mode +1)) ;; Optional: magit for git integration (use-package magit :ensure t) ;; Function to run the current Go file (defun my-go-run () "Run the current Go file." (interactive) (let ((compile-command (concat "go run " buffer-file-name))) (compile compile-command))) ;; Function to build the current Go project (defun my-go-build () "Build the current Go project." 
(interactive) (compile "go build")) ;; Function to test the current Go project (defun my-go-test () "Test the current Go project." (interactive) (compile "go test ./...")) ;; Add key bindings for Go commands (add-hook 'go-mode-hook (lambda () (local-set-key (kbd "C-c C-r") 'my-go-run) (local-set-key (kbd "C-c C-b") 'my-go-build) (local-set-key (kbd "C-c C-t") 'my-go-test))) ;; End of configuration ``` This setup should give you a powerful and efficient Go development environment in Emacs. _Step 9: Using Emacs for Go Development_ **Creating a Simple Go Program** Open Emacs Create a New Go File: Command: `C-x C-f ~/go/src/hello/hello.go RET` Add Go Code: ``` package main import "fmt" func main() { fmt.Println("Hello, World!") } ``` Save the File: Command: `C-x C-s` Running the Go Program You can run your Go program directly from Emacs using the key bindings set up in your configuration. Run the Current Go File: Command: `C-c C-r` Build the Current Go Project: Command: `C-c C-b` Test the Current Go Project: Command: `C-c C-t` _Extra: Clearing an Entire File in Emacs_ Clearing File Content Open File: Command: `C-x C-f /path/to/yourfile RET` Select All Text: Command: `C-x h` Delete Selected Text: Command: `C-w` Save the File: Command: `C-x C-s` Using erase-buffer Command Open File Run erase-buffer: Command: `M-x erase-buffer RET` Save the File: Command: `C-x C-s` Split window: `C-x 2 (horizontal split)` `C-x 3 (vertical split)` Conclusion By following this guide, you have set up Emacs on macOS for Go development, including installing necessary packages, configuring Emacs, and using it to write, run, build, and test Go programs. Happy coding!
yas8say
1,882,126
JavaScript Client-Side Development: Tips and Tools for Mastery 🚀
Welcome to our forum dedicated to mastering JavaScript for client-side development! Whether you're a...
0
2024-06-09T15:00:10
https://dev.to/timetinker/javascript-client-side-development-tips-and-tools-for-mastery-5b4
javascript, webdev, tutorial, python
Welcome to our forum dedicated to mastering JavaScript for client-side development! Whether you're a beginner looking to learn the basics or a seasoned developer aiming to sharpen your skills, this is the place for you. Join us to explore essential tools, frameworks, and libraries that enhance your JavaScript development experience. Learn techniques to optimize your code for faster, more efficient applications and share best practices to write clean, maintainable, and scalable JavaScript code. Discuss real-world scenarios and solutions, from simple scripts to complex single-page applications, and get help and offer advice on troubleshooting and debugging common JavaScript issues. Stay updated with the latest trends, updates, and innovations in the JavaScript ecosystem. Engage with a community of passionate developers, ask questions, share insights, and grow your expertise in JavaScript client-side development. Let's code together and take our JavaScript skills to the next level! 🚀
timetinker
1,882,075
Bash Scripting for Software Engineers - A Beginner's Guide
It's your first day at your new job. You've been handed a computer running Linux, and you were told...
27,654
2024-06-09T15:00:00
https://dev.to/alexindevs/bash-scripting-for-software-engineers-a-beginners-guide-1j65
programming, bash, shell, linux
It's your first day at your new job. You've been handed a computer running Linux, and you were told to locate all the files containing the word "key". It's a simple enough task, right? The catch is, there are thousands of files on that system, and you've never written a shell script before. Shell scripting, the language of the command line, is your ticket to automating repetitive tasks and mastering your Linux environment. With this guide, you'll be able to navigate your way around any Bash terminal with ease, and you might learn a couple of cool tricks along the way! ## What are all these terms, anyway? ### Shell A shell is a program that interprets commands and executes them on your operating system. Simply put, a shell is a command-line interpreter that acts as a bridge between you and your operating system. It is usually run in a terminal or console. The terminal provides an input interface to run textual commands, and the shell takes in those commands and executes them on your system. ### Bash It is a shell program and command language. Bash has its roots in earlier Unix shells. The **Bourne shell**, released in 1977, was a major step forward in allowing users to write scripts and automate tasks. Bash, short for **Bourne-Again SHell**, was created in 1989 by Brian Fox as part of the GNU Project. It was designed as a free and improved alternative to the Bourne shell, while still maintaining compatibility with existing scripts. ## Getting started with Bash Scripting ### Setting up the Development Environment Bash shells are commonly found on Linux operating systems. In this article, we will be working primarily with Ubuntu, a Linux distribution. You can download and set up Ubuntu here: [Canonical Ubuntu](http://ubuntu.com/download). 
Alternatively, if you're working from a Windows environment, you can download the Windows Subsystem for Linux (WSL) which gives you access to a Linux operating system and a bash terminal, without the need for dual booting, or clean wiping Windows. You can get WSL here: https://learn.microsoft.com/en-us/windows/wsl/install. Once you have your terminal open, you should see a prompt like the one below: ```bash $ ``` Now we are ready to begin. ### Basic Shell commands - `cd`: Change directory. This command is used to navigate to a different directory within the file system. ```bash $ cd Desktop # This will switch to the ./Desktop directory. ``` - `ls`: List directory contents. It displays the files and directories in the current directory. ```bash $ ls file1.txt file2.txt directory1/ ``` - `mkdir`: Make a directory. This command creates a new directory with the given name. ```bash $ mkdir newDir $ ls newDir ``` - `rm`: Remove. It is used to delete files or directories. Be cautious as it does not have a confirmation prompt by default. ```bash $ rm newFile $ ls ``` ```bash $ rm -rf newDir $ ls ``` - `pwd`: Print working directory. It shows the current directory path. ```bash $ pwd /home/alexin/newDir/ ``` - `cp`: Copy. This command is used to copy files or directories from one location to another. ```bash $ cp ../file_to_copy.txt . $ ls file_to_copy.txt ``` - `mv`: Move. It moves files or directories from one location to another. It can also be used to rename files. ```bash $ mv ../file_to_move.txt . $ ls file_to_move.txt ``` - `touch`: Create a new file. It creates an empty file with the specified name or updates the timestamp of an existing file. ```bash $ touch newFile.txt $ ls newFile.txt ``` - `cat`: Concatenate and display content. This command is used to display the content of files on the terminal. It can also be used to concatenate and display multiple files. ```bash $ cat newFile.txt This is the content of newFile.txt. ``` - `echo`: Print text. 
This is used to print text or variables to the terminal or standard output. ```bash $ echo 'I am Alexin' I am Alexin ``` Note that double quotes are needed for variable expansion; single quotes print the text literally. ```bash $ name="Alexin" $ echo "My name is $name" My name is Alexin ``` - `man`: Displays information about a command. 'man' stands for manual, and it is used to provide detailed information about the specified command, including its purpose, syntax, options, and examples of usage. ```bash $ man ls LS(1) User Commands LS(1) NAME ls - list directory contents SYNOPSIS ls [OPTION]... [FILE]... DESCRIPTION List information about the FILEs (the current directory by default). Sort entries alphabetically if none of -cftuvSUX nor --sort is speci‐ fied. Mandatory arguments to long options are mandatory for short options too. -a, --all do not ignore entries starting with . -A, --almost-all do not list implied . and .. --author Manual page ls(1) line 1 (press h for help or q to quit) ``` - `grep`: The `grep` command is used to search for specific patterns or regular expressions within files or streams of text. It stands for "Global Regular Expression Print". ```bash $ grep "error" file.txt This line contains error. This line also contains the word "error". This line has errors (error is in the word errors) ``` - `find`: Search files and directories. This command lets you search for matching expressions or patterns in a specified file or directory. It allows you to search based on various criteria such as name, type, size, and permissions. ```bash $ find . -name "*.txt" ./file1.txt ./file2.txt ``` ## Creating Bash Scripts A bash script is a file typically ending with the extension `.sh` that contains a logical series of related commands to be executed. You can create a bash script using the nano text editor by running this command in your terminal: ```bash $ nano new_script.sh ``` In the editor, start your script with a **shebang** line. The shebang line tells the system that this file is a script and specifies the interpreter to use. 
```bash #!/bin/bash ``` You can add a simple command to print the text "Hello, World!" to the terminal. ```bash #!/bin/bash text="Hello, World!" echo $text ``` Save the file then exit: `CTRL X` + `Y` + `Enter`. Before you can run the file, you have to make it executable. Change the file's permissions using the following commands: ```bash $ chmod u+x new_script.sh ``` Now run your bash script. ```bash $ ./new_script.sh Hello, World! ``` Congratulations! You have created your first bash script. Now, let's learn about handling control flow with conditionals and loops. ### Control flow This refers to the order in which commands are executed in a program. In Bash scripting, control flow constructs allow you to manage the execution sequence of your script, enabling you to make decisions, repeat actions, and manage complex logical conditions. #### Key Control Flow Constructs in Bash 1. **Conditional Statements**: These are used to execute a block of code only if a specified condition is true. - **if Statement**: ```bash if [ condition ]; then # Code to execute if condition is true fi ``` - **if-else Statement**: ```bash if [ condition ]; then # Code to execute if condition is true else # Code to execute if condition is false fi ``` - **if-elif-else Statement**: ```bash if [ condition1 ]; then # Code to execute if condition1 is true elif [ condition2 ]; then # Code to execute if condition2 is true else # Code to execute if neither condition1 nor condition2 is true fi ``` 2. **Loops**: These are used to repeat a block of code multiple times. - **for Loop**: ```bash for variable in list; do # Code to execute for each item in list done ``` - **while Loop**: ```bash while [ condition ]; do # Code to execute as long as condition is true done ``` - **until Loop**: ```bash until [ condition ]; do # Code to execute until condition is true done ``` 3. **Case Statement**: This is used to execute one of several blocks of code based on the value of a variable. 
```bash case $variable in pattern1) # Code to execute if variable matches pattern1 ;; pattern2) # Code to execute if variable matches pattern2 ;; *) # Code to execute if variable doesn't match any pattern ;; esac ``` To demonstrate these concepts, let us write a program that performs a calculation given two numbers and an operation as input: ```bash #!/bin/bash # Function to perform arithmetic operations perform_operation() { # First we define 3 variables, to store the arguments passed into the function. local num1=$1 local num2=$2 local operation=$3 # We then use a case statement to execute an arithmetic operation, based on the value of the $operation variable. We log the result of the operation to the terminal. case $operation in addition) result=$((num1 + num2)) echo "Result of addition: $result" ;; subtraction) result=$((num1 - num2)) echo "Result of subtraction: $result" ;; multiplication) result=$((num1 * num2)) echo "Result of multiplication: $result" ;; division) if [ $num2 -eq 0 ]; then echo "Error: Division by zero is not allowed." else result=$((num1 / num2)) echo "Result of division: $result" fi ;; *) echo "Invalid operation. Please use one of the following: addition, subtraction, multiplication, division." ;; esac } # Main script starts here echo "Enter the first number:" read num1 echo "Enter the second number:" read num2 echo "Enter the operation (addition, subtraction, multiplication, division):" read operation # Perform the operation perform_operation $num1 $num2 $operation # Loop to check if user wants to perform another operation while true; do echo "Do you want to perform another operation? (yes/no)" read choice case $choice in yes|y|Yes|YES) echo "Enter the first number:" read num1 echo "Enter the second number:" read num2 echo "Enter the operation (addition, subtraction, multiplication, division):" read operation # The read command enables a shell script to read user input from the command line. 
perform_operation $num1 $num2 $operation ;; no|n|No|NO) echo "Exiting the script. Goodbye!" break ;; *) echo "Invalid choice. Please enter yes or no." ;; esac done ``` #### Handling command line arguments In a Bash script, command-line arguments are accessed using positional parameters: - `$0` is the name of the script. - `$1`, `$2`, ..., `$N` are the arguments passed to the script. - `$#` is the number of arguments passed to the script. - `$@` is all the arguments passed to the script. - `$*` is all the arguments passed to the script (unquoted, it behaves like `$@`). - `"$@"` is all the arguments passed to the script, individually quoted. - `"$*"` is all the arguments passed to the script, quoted as a single word. Let's modify the previous arithmetic script to handle command-line arguments. This way, users can specify the numbers and the operation directly when they run the script. ```bash #!/bin/bash # Function to perform arithmetic operations perform_operation() { local num1=$1 local num2=$2 local operation=$3 case $operation in addition) result=$((num1 + num2)) echo "Result of addition: $result" ;; subtraction) result=$((num1 - num2)) echo "Result of subtraction: $result" ;; multiplication) result=$((num1 * num2)) echo "Result of multiplication: $result" ;; division) if [ $num2 -eq 0 ]; then echo "Error: Division by zero is not allowed." else result=$((num1 / num2)) echo "Result of division: $result" fi ;; *) echo "Invalid operation. Please use one of the following: addition, subtraction, multiplication, division."
;; esac } if [ $# -ne 3 ]; then echo "Usage: $0 <num1> <num2> <operation>" echo "Example: $0 5 3 addition" exit 1 fi num1=$1 num2=$2 operation=$3 perform_operation $num1 $num2 $operation ``` Now that you know the basics of scripting in shell, try out these practice assignments: ### Assignment #1: Locate Files Containing the Word "key" Your task is to write a Bash script that searches for all files containing the word "key" within a specified directory and its subdirectories. The script should: 1. Accept the directory path as a command-line argument. 2. Use the `grep` command to search for files containing the word "key". 3. Print the paths of the files that contain the word "key". ### Assignment #2: Automate Git Commits for Each Edited File In this assignment, you will write a Bash script to automate the process of creating a separate Git commit for every file that has been edited in a Git repository. This script will be particularly useful in scenarios where you want to commit each file separately, perhaps for clearer version history or to adhere to specific project guidelines. 1. Ensure the script is executed within a Git repository. 2. Use Git commands to list files that have been edited. 3. Loop through the list of modified files and create a commit for each one. ## Conclusion In summary, this guide has equipped you with the foundational knowledge to navigate the world of Bash scripting. You've explored core concepts like shell commands, creating scripts, control flow structures, and handling user input. With this strong base, you can now venture into more complex scripting tasks to automate various processes and streamline your workflow on Linux systems. Remember, practice is key to mastering any skill. Experiment with the provided assignments and explore other scripting challenges to solidify your understanding. Thank you for reading!
alexindevs
1,882,113
JavaScript Coding Interview Questions
Q1. Reverse this string let str = "Hello, World!"; let s =...
0
2024-06-09T14:27:46
https://dev.to/alamfatima1999/javascript-interview-questions-576j
Q1. Reverse this string ```JS let str = "Hello, World!"; let s = str.split("").reverse().join(""); console.log(s); // "!dlroW ,olleH" ``` Q2. Remove duplicates ```JS let arr = [1,2,3,4,5,6,6]; let s = new Set(arr); // a Set keeps only unique values console.log([...s]); // [1, 2, 3, 4, 5, 6] ``` Q3. Can you write a function in JavaScript to convert a string containing hyphens and underscores to camel case? transformation of the string “secret_key_one” into camel case results in “secretKeyOne.” ```JS let str = "secret_key_one"; let s = str.split("_"); console.log(s); // ["secret", "key", "one"] let ans = ""; for(let i = 0; i < s.length; i++){ if(i != 0){ ans += s[i].substring(0, 1).toUpperCase(); ans += s[i].substring(1); } else { ans += s[i]; } } console.log(ans); // "secretKeyOne" ``` Q4. Swap two numbers ```JS let a = 1, b = 2; // Ans 1: XOR swap a = a ^ b; b = a ^ b; a = a ^ b; console.log(a, b); // 2 1 // Ans 2: array destructuring [a, b] = [b, a]; ``` Q5. Flatten this array ```JS const arr = [1, 2, [3, 4]]; let ans = []; const flattenArray = (arr) => { for(let i = 0; i < arr.length; i++){ if(Array.isArray(arr[i])){ flattenArray(arr[i]); } else { ans.push(arr[i]); } } }; flattenArray(arr); console.log(ans); // [1, 2, 3, 4] ``` Q6. const nestedObject = { a: { b: { c: 42 } } }; const propertyPath = 'a.b.c'; const result = deepAccess(nestedObject, propertyPath); // result: 42 ```JS const nestedObject = { a: { b: { c: 42 }, z: { a: 19 } } }; const propertyPath = 'a.z.a'; // result: 19 let arr = propertyPath.split('.'); const deepAccess = (nestedObject, arr) => { if(arr.length == 1){ return nestedObject[arr[0]]; } nestedObject = nestedObject[arr[0]]; arr.shift(); return deepAccess(nestedObject, arr); }; const result = deepAccess(nestedObject, arr); console.log(result); // 19 ``` OR ```JS let ans = propertyPath.split('.').reduce((obj, ele) => { return obj[ele]; }, nestedObject); console.log(ans); // 19 ```
alamfatima1999
1,882,125
Understanding DNS: The Backbone of the Internet
Understanding DNS: The Backbone of the Internet Hey, developers! Today, we're diving into...
0
2024-06-09T14:59:09
https://dev.to/elizabethsobiya/understanding-dns-the-backbone-of-the-internet-49d7
# Understanding DNS: The Backbone of the Internet Hey, developers! Today, we're diving into a fundamental yet often overlooked part of the internet: the Domain Name System (DNS). Whether you're a seasoned developer or just starting out, understanding DNS is crucial for navigating the web and creating robust applications. So, let's break it down in simple terms and explore why DNS is so important. ## What is DNS? Imagine you're trying to visit a friend's house. You know their name, but you need their address to get there. DNS acts like the internet's phonebook, translating human-friendly domain names (like `www.example.com`) into IP addresses (like `192.0.2.1`) that computers use to identify each other on the network. ## How Does DNS Work? When you type a URL into your browser, several steps happen behind the scenes: 1. **Querying the Resolver**: Your computer contacts a DNS resolver, usually provided by your internet service provider (ISP), asking for the IP address of the domain. 2. **Checking the Cache**: The resolver first checks its cache to see if it already knows the IP address. If it does, it returns the address immediately. 3. **Recursive Queries**: If the address isn't in the cache, the resolver performs a series of queries. It starts by asking one of the root DNS servers, which knows where to find top-level domain (TLD) servers (like `.com`, `.org`, etc.). 4. **TLD Servers**: The root server directs the resolver to the appropriate TLD server, which in turn points to the authoritative DNS server for the specific domain. 5. **Authoritative DNS Server**: The authoritative server holds the DNS records for the domain and responds with the IP address. 6. **Returning the IP Address**: The resolver sends the IP address back to your computer, which then uses it to connect to the website's server. ## Types of DNS Records DNS records are stored in the authoritative servers and contain various pieces of information about the domain. 
Here are some common types: - **A Record**: Maps a domain to an IPv4 address. - **AAAA Record**: Maps a domain to an IPv6 address. - **CNAME Record**: Alias of one name to another (e.g., `www.example.com` to `example.com`). - **MX Record**: Specifies mail servers for the domain. - **TXT Record**: Holds text information, often used for verification and security purposes. ## Why DNS Matters ### 1. **User Experience** DNS makes it easy for users to access websites using memorable domain names instead of complex IP addresses. Imagine if you had to remember `172.217.5.110` instead of `www.google.com`! ### 2. **Scalability** DNS allows websites to use multiple servers across different locations. This helps distribute the load and improves performance, making the internet scalable and efficient. ### 3. **Security** DNS can enhance security through features like DNSSEC (Domain Name System Security Extensions), which helps protect against certain types of attacks by ensuring the responses from DNS servers are authentic. ### 4. **Email Delivery** DNS plays a crucial role in email delivery. MX records ensure that emails are routed to the correct mail servers, helping maintain reliable communication. ## Real-World Example: Setting Up a DNS Record Let's say you've just launched a new website and want to link your domain name to your web server's IP address. Here's a simplified step-by-step guide: 1. **Register Your Domain**: Choose and register your domain name with a domain registrar. 2. **Find Your DNS Settings**: Log into your domain registrar's control panel and find the DNS settings. 3. **Add an A Record**: Create a new A record and enter your web server's IP address. 4. **Save Changes**: Save your changes and wait for them to propagate (this can take anywhere from a few minutes to 48 hours). 5. **Test Your Setup**: Open your browser and type your domain name. If everything is set up correctly, you should see your website. 
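The resolution flow above can be pictured with a toy sketch. This is not a real resolver, just a small Bash lookup table (using the example addresses from this post) that illustrates the cache-hit vs. cache-miss step a resolver performs first:

```shell
#!/bin/bash
# Toy "phonebook": maps domain names to IP addresses, like a resolver's cache.
# Illustration only: on a cache miss, a real resolver recursively queries the
# root, TLD, and authoritative servers instead of just giving up.
declare -A dns_cache=(
  ["www.example.com"]="192.0.2.1"
  ["www.google.com"]="172.217.5.110"
)

resolve() {
  local name="$1"
  if [ -n "${dns_cache[$name]}" ]; then
    # Cache hit: return the address immediately
    echo "${dns_cache[$name]}"
  else
    # Cache miss: this is where the recursive queries would happen
    echo "NXDOMAIN"
  fi
}

resolve "www.example.com"   # prints 192.0.2.1
resolve "unknown.test"      # prints NXDOMAIN
```

You can watch the real thing from your terminal with `dig www.example.com A` or `nslookup www.example.com`.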
## Conclusion DNS is an essential part of the internet's infrastructure, enabling us to use easy-to-remember domain names to access websites. Understanding how DNS works can help you troubleshoot issues, improve website performance, and enhance security. So next time you type a URL into your browser, take a moment to appreciate the complex system working behind the scenes to connect you to the right website. Happy coding! Feel free to share your thoughts and questions in the comments below. Let's keep learning together!
elizabethsobiya
1,882,124
How to Deploy and Host Your Website Cost-Effectively with Vercel 💰💰
Creating and deploying a portfolio website can often seem like a intimidating task, especially when...
0
2024-06-09T14:57:37
https://dev.to/itsfarhankhan28/how-to-deploy-and-host-your-website-cost-effectively-with-vercel-5ejk
vercel, deployment, hosting, website
Creating and deploying a portfolio website can often seem like an intimidating task, especially when you're trying to minimize costs without compromising on quality. Recently, I had the opportunity to develop a simple yet effective single-page portfolio for a client, and I found an efficient way to deploy it using Vercel while keeping expenses to a minimum. ![Client's Portfolio](https://res.cloudinary.com/dn2ljrxzy/image/upload/v1717922315/Blog/fojj2doxjsssw5uitaad.png "Client's Portfolio") In this blog post, I'll share my experience and provide a step-by-step guide on how you can do the same for your projects, making use of cost-effective domain purchases and Vercel's hosting capabilities. ## Behind the Scenes: Building a Sleek Single Page Portfolio The project at hand was a straightforward single-page portfolio designed to showcase the client's work. This portfolio was purely a frontend project, meaning there was no backend or database integration involved. ### Key Technologies Used: - Next.js - TypeScript - Tailwind CSS ### Development Highlights: 1. Responsive Design: Ensuring the portfolio looks great on all devices. 2. Interactive Elements: Adding smooth transitions and hover effects. 3. Clean Code: Keeping the codebase simple and maintainable. ## Visual Guide: Deployment Process Simplified ![Deployment Process](https://res.cloudinary.com/dn2ljrxzy/image/upload/v1717930659/Blog/gt7p91r1p41pglzejqy7.png "Deployment Process") ## Deployment on Vercel Deploying the portfolio website was straightforward thanks to Vercel. Vercel is known for its simplicity and efficiency, making it an excellent choice for frontend projects. ### Why Vercel? - Ease of Use: Vercel provides a seamless deployment process, particularly for static sites. - Free Plan: Vercel's free tier is perfect for personal projects and small client projects. - Automatic SSL: It automatically provides SSL certificates for your custom domains. ### Steps to Deploy: 1.
Sign Up and Set Up: Create a Vercel account and link it to your GitHub repository. 2. Import Project: Import your project from GitHub, GitLab, or Bitbucket. 3. Configure Settings: Set up the build settings if needed (though Vercel often auto-detects them). 4. Deploy: Click the deploy button, and Vercel will handle the rest. ## Acquiring and Pointing a Custom Domain After deploying the site on Vercel, the next step was to point a custom domain to it. I purchased a domain at a very affordable price from _**HIOX India**_. ![HIOX India](https://res.cloudinary.com/dn2ljrxzy/image/upload/v1717936882/Blog/purxurt4ujoh0myzcckk.png "HIOX India") ### Steps to Acquire and Point Domain: 1. Purchase Domain 2. Access DNS Settings: Log into your domain registrar account and navigate to the DNS settings. 3. Add DNS Records: Add an A record pointing to Vercel's IP address or use CNAME records as directed by Vercel. ### Example of DNS Records: ``` Type: A Name: @ Value: 76.76.21.21 (Vercel's IP) Type: CNAME Name: www Value: cname.vercel-dns.com ``` ### Screenshot of DNS Configuration: ![DNS configuration](https://res.cloudinary.com/dn2ljrxzy/image/upload/v1717937889/Blog/w78ogi0fbeltepm6rmz7.png "DNS configuration") ![DNS configuration](https://res.cloudinary.com/dn2ljrxzy/image/upload/v1717937982/Blog/irj9kafvwfxf94ylbzcc.png "DNS configuration") ## Cost Efficiency One of the significant advantages of this deployment method is its cost efficiency. Here's how this approach helps in cutting costs: ### Cost Savings Breakdown: - Free Hosting: Vercel offers free hosting for static sites, reducing hosting expenses. - Affordable Domains: Purchasing domains from budget-friendly registrars can save a considerable amount. ### Overall Savings: By combining Vercel's free hosting with a cheap domain purchase, the total annual cost for maintaining the portfolio website was significantly reduced. 
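After the records above have propagated, it is worth confirming that they resolve as expected before sharing the site. A small sketch: the `check_record` helper and `yourdomain.com` are hypothetical, while the expected values are the ones from the DNS table above; in real use you would feed it the output of `dig +short`:

```shell
#!/bin/bash
# check_record: compare a resolved DNS value against the expected one.
# The resolved value is passed in as an argument, so the function itself
# needs no network access.
check_record() {
  local resolved="$1" expected="$2"
  if [ "$resolved" = "$expected" ]; then
    echo "OK: matches $expected"
  else
    echo "MISMATCH: got '$resolved', expected '$expected'"
  fi
}

# Real-world usage (requires dig and a propagated domain; note that
# `dig +short CNAME` returns the target with a trailing dot):
#   check_record "$(dig +short A yourdomain.com)" "76.76.21.21"
#   check_record "$(dig +short CNAME www.yourdomain.com)" "cname.vercel-dns.com."
check_record "76.76.21.21" "76.76.21.21"   # prints OK: matches 76.76.21.21
```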
## Limitations and Considerations While this approach is excellent for small projects, it's important to recognize its limitations: ### Not Suitable for Larger Projects: - No Backend Support: Vercel is ideal for frontend-only projects. Projects requiring server-side logic or database interactions need a different hosting solution. - Limited Customization: Advanced server configurations are not possible. ### Examples of Suitable Projects: - Portfolios - Blogs - Landing Pages ### Projects Requiring Alternative Solutions: - E-commerce Sites - Web Applications with Backend ## Conclusion In summary, deploying a single-page portfolio on Vercel while using an affordable custom domain is an excellent way to cut costs without sacrificing quality. This method is particularly suited for small to medium-sized projects where backend integration is not necessary. ### Key Takeaways: - Vercel offers an easy and free hosting solution for static sites. - Purchasing a domain from budget-friendly registrars can further reduce costs. - This approach is ideal for portfolios, blogs, and landing pages, but not for projects requiring backend support. ### Have you tried deploying a site using Vercel and a custom domain? Share your experiences and any tips you have in the comments below!
itsfarhankhan28
1,882,122
It’s free real (e)state
Let’s talk about States, in context of building user interfaces be it a web application, desktop or...
0
2024-06-09T14:55:44
https://dev.to/ishar19/its-free-real-estate-3jip
react, flutter, webdev, mobile
Let’s talk about States, in the context of building user interfaces, be it a web application, desktop or mobile application. What is a state? Why do we use it, need it? How should we use it? And why/where should we not use it? ### Act 1: What A state can be representative of your UI at any given point of time; whatever is on the screen can be considered a state of your application. It’s like taking a snapshot of your app and keeping the information about colours, fonts, data, interactions, everything. It tells us how and what an app is right now and what could happen next depending on the choices the user makes. It holds your data coming from the backend, user-generated actions and interactions. ![State as a function](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fx83pd2xmpknjs9op0h6.png) ### Act 2: Why Why do we need a state? Why do we use states in our app? Do we even need a state? How many types of state can there be? All these questions are very subjective and opinionated, and here are my two cents on them. Need: To hold information which might change depending on certain situations, like user interaction or loading a new set of data. To “do something, when something happens”. Use: To change only those parts of the UI which need to be changed when something changes or happens. That can be a click, a hover, a long press, refreshing the loading bar every 3 seconds. It could be anything. Imagine reloading the whole application when a button is pressed. Do we need it: Yes and no. If your application contains some part which will be updated in the future depending on the change of a variable, you might want to use a state for that. Do not put constants into a “state” if they might as well be hardcoded, will never be changed through any user interaction, or don’t affect the interface. Examples can be loading something from the local storage of your device, or keeping a set of constants to map your username’s initials to colours.
Anything that is just loaded once and never re-loaded can just be a variable. Technically it is a state, but not in the context of “state” as when we use “setState” or “getState”. ![UI as a function](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l855fjjj6g1e7p0ilvqv.png) Notice how it’s a two-way function: your UI depends on state and your state depends on UI. A change in state, loading data, can change your UI, and a change in UI, selection of a filter, can change your state. I only keep a piece of info in “state” if a change in it should immediately reflect in the UI, and I only map a piece of UI to “state” if I need to carry that piece of information to the next interaction. ### Act 3: How Depending on the tech you use, you will have different options to use states in your application. But there are two major types which need to be differentiated: 1. App-wide state - This piece of info is going to be used at multiple places and, emphasis on and, can be changed from multiple places. If it’s only going to be read, you might as well be okay with reading it directly from the source rather than going through the pain of setting up an app-level state. App-level info like the theme of your UI, user authentication info, cart info if it’s an e-commerce thing, omnibox etc. ![App state](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/snl6m9klu8qz1jhyci78.png) 2. Component/widget level state - This piece of info is going to be used in this component/widget and at most two levels deep from it. You will be fine with just using a component/widget level solution. ![Component State](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yrxef1r6r9d48r46vh3q.png) Component-level info, like selected filters in a product section, preferred font for typing in an input box etc. ### Tip: Never use your state for holding and calculating business logic or app logic; it should only contain UI logic.
![Venn Diagram of states](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/975741sqe5s4uyb60qqu.png) And in the end, I would like to quote Dan Abramov: "Do what is less awkward" [Chatting in a GitHub issue](https://github.com/reduxjs/redux/issues/1287#issuecomment-175351978)
ishar19
1,882,121
BEST RECOVERY EXPERTS FOR LOST/STOLEN CRYPTOCURRENCY LEE ULTIMATE HACKER
LEEULTIMATEHACKER@ AOL. COM Support @ leeultimatehacker . com. telegram:LEEULTIMATE wh@tsapp +1 ...
0
2024-06-09T14:53:22
https://dev.to/jackie_hjelm_8cea8a4e5c17/best-recovery-experts-for-loststolen-cryptocurrency-lee-ultimate-hacker-1e9j
LEEULTIMATEHACKER@ AOL. COM Support @ leeultimatehacker . com. telegram:LEEULTIMATE wh@tsapp +1 (715) 314 - 9248 https://leeultimatehacker.com In the cacophony of voices heralding cryptocurrencies as the next big thing, I found myself swept up in the frenzy, eager to partake in the promise of untold riches. Fresh off a major success in Hollywood's real estate scene, I ventured into the world of crypto trading with boundless enthusiasm. Little did I know that amidst the excitement lay the lurking shadows of deception and loss. My initiation into the world of crypto investments took an unexpected turn when I fell victim to a scam forest investment, resulting in the disappearance of $97,500 worth of Bitcoin from my digital wallet. The shock and disbelief were palpable as I grappled with the realization that my once-secure digital assets had vanished into thin air. Panic set in, accompanied by visions of lost opportunities and shattered dreams. In the midst of despair, a glimmer of hope emerged in the form of Lee Ultimate Hacker Despite its whimsical name evoking images of sorcery and magic, Lee Ultimate Hacker operates at the intersection of technology and expertise, offering a lifeline to those ensnared in the labyrinth of lost Bitcoins. With trepidation tinged with cautious optimism, I reached out to Lee Ultimate Hacker placing my trust in their ability to navigate the complex terrain of Bitcoin recovery. From the outset, their professionalism and commitment to excellence shone through, instilling in me a sense of confidence amidst the uncertainty. But could their promises of seamless recovery be more than mere illusions? My skepticism gave way to curiosity as I delved deeper into the real-world impact of Lee Ultimate Hacker's services. Through case studies and testimonials, I discovered firsthand the transformative power of their intervention, as users like myself recounted tales of redemption and restored faith in the crypto ecosystem. 
One such case study showcased the journey of a fellow investor who, like myself, had fallen victim to a scam, losing a substantial sum of Bitcoin in the process. With nowhere else to turn, they sought refuge in the expertise of Lee Ultimate Hacker. Through meticulous analysis and strategic intervention, Lee Ultimate Hacker was able to trace the path of the lost Bitcoin and facilitate its safe return to its rightful owner. Another compelling narrative highlighted the plight of an individual whose digital assets had been compromised due to a security breach. Faced with the daunting task of reclaiming what was rightfully theirs, they turned to Lee Ultimate Hacker for assistance. Through a combination of cutting-edge technology and unwavering dedication, Lee Ultimate Hacker successfully recovered the stolen Bitcoin, restoring peace of mind and financial security. As I reflected on these stories of resilience and redemption, it became clear that Lee Ultimate Hacker's impact extends far beyond mere technical proficiency. Their commitment to their clients' well-being and their relentless pursuit of justice set them apart as true guardians of the crypto community. Looking to the future, the landscape of Bitcoin security and recovery appears brighter than ever, thanks to the innovative solutions and unwavering dedication of pioneers like Lee Ultimate Hacker. With emerging technologies and evolving strategies at their disposal, they stand poised to lead the charge in safeguarding digital assets and ensuring a more secure and resilient crypto ecosystem for all. Lee Ultimate Hacker's reputation as a beacon of hope and reliability in the murky waters of Bitcoin recovery is well-deserved. Trust in their expertise, and let them guide you safely through the storms of uncertainty, towards a future where lost Bitcoins are no longer a source of despair, but a testament to the resilience of the human spirit. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2tmltbbogm8ij6qxs95l.jpg)
jackie_hjelm_8cea8a4e5c17
1,882,119
Beginners Guide on How to Contribute to Open Source Projects
What Exactly is Open Source? Open source refers to software that is freely available to...
0
2024-06-09T14:47:03
https://dev.to/idungstanley/beginners-guide-on-how-to-contribute-to-open-source-projects-3344
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rtl51lqfctiyvmrod0v2.png) ## What Exactly is Open Source? Open source refers to software that is freely available to the public to use, modify, and distribute. This software comes with source code that anyone can inspect, enhance, and adapt according to their needs. The philosophy behind open source promotes collaborative development and community engagement, where developers from around the world can contribute to improving the software. Popular examples of open-source projects include Linux, Mozilla Firefox, and the Apache HTTP Server. ## Why Contribute to Open Source? Contributing to open source projects can be a rewarding way to learn, teach, share, and build experience. It has numerous benefits, including: - Skill Development: Improve your coding and problem-solving skills. - Networking: Connect with other developers and professionals. - Portfolio Building: Showcase your contributions to potential employers. - Learning: Gain knowledge from real-world projects. - Community Involvement: Be part of a global community and contribute to projects you care about. - Mentorship: Find a mentor if you need one. ## How Can You Get Started Contributing? **Understanding the Basics** Before diving into open source contributions, it's essential to have a solid understanding of the basics: - Version Control: Learn Git and GitHub as they are the most widely used tools in open source. - Programming Skills: Ensure you have a good grasp of the programming languages used in the projects you are interested in. - Reading Documentation: Get comfortable with reading and understanding project documentation. **Setting Up Your Environment** Ensure your development environment is set up with the necessary tools: - Git: Install Git on your machine. - Code Editor: Use a code editor like Visual Studio Code, Sublime Text, or Atom.
- GitHub Account: Create a GitHub account if you don't have one. **Finding an Open Source Project** Start by looking for projects that interest you or align with your skillset. You can find open source projects on: - GitHub: Use GitHub's Explore feature or search for topics of interest. - GitLab: Similar to GitHub, GitLab offers a range of open source projects. - Bitbucket: Another platform for finding open source projects. - Open Source Directories: Websites like Open Source Guide, First Timers Only, and Up For Grabs list projects looking for contributors. **Choosing the Right Project** Consider the following criteria when selecting a project: - Activity Level: Look for active projects with recent commits and regular updates. - Community: Check if the project has an active community and good support for newcomers. - Documentation: Ensure the project has comprehensive documentation to help you get started. - Issues: Look for beginner-friendly issues labeled as "good first issue" or "beginner-friendly." **Understanding the Project** Once you've chosen a project, take time to understand it: - Read the README: The README file provides an overview of the project. - Explore the Codebase: Browse through the code to understand its structure and components. - Review Documentation: Check the project's documentation for setup instructions, contribution guidelines, and coding standards. - Join Community Channels: Engage with the project's community through forums, chat channels, or mailing lists. ## What Should You Expect? Contributing to open source can be a rewarding but challenging experience. Here's what you should expect: - Learning Curve: There might be a steep learning curve as you get familiar with the project. - Feedback: Expect constructive feedback on your contributions. - Collaboration: Be prepared to work collaboratively with other contributors. - Patience: Contributions may take time to be reviewed and merged. 
## What is Needed to Participate in Open Source Contribution? To effectively participate in open source contributions, you'll need: - Basic Coding Skills: Knowledge of the programming languages used in the project. - Git and GitHub: Proficiency in version control and using GitHub. - Communication Skills: Ability to communicate clearly with other contributors. - Problem-Solving Skills: Capability to troubleshoot and solve issues. ## How Does One Start from Scratch to Raising a PR? 1. Fork the Repository Fork the project repository to create a copy under your GitHub account. This allows you to make changes without affecting the original repository. 2. Clone the Forked Repository Clone the forked repository to your local machine using the command: `git clone https://github.com/your-username/project-name.git` 3. Create a New Branch Create a new branch for your changes to keep your work organized: `git checkout -b my-feature-branch` 4. Make Your Changes Make the necessary changes to the codebase. Ensure your changes adhere to the project's coding standards and guidelines. 5. Commit Your Changes Stage your changes with `git add .` and commit them with a descriptive commit message: `git commit -m "Add feature XYZ"` 6. Push to Your Fork Push your changes to your forked repository: `git push origin my-feature-branch` 7. Open a Pull Request Go to the original project repository on GitHub and open a pull request from your forked repository. Provide a clear description of the changes you've made and why they are necessary. 8. Collaborate and Iterate Be responsive to any feedback or requested changes from the project maintainers. Make necessary adjustments and update your pull request accordingly. ## How Does One Look for an Open Source Project to Participate In? Finding the right open source project involves: 1. Identifying Your Interests Consider what technologies, languages, or topics you are passionate about. This will make the contribution process more enjoyable. 2.
Using GitHub's Explore Feature GitHub's Explore feature can help you discover projects based on your interests. You can browse through trending repositories or search for specific topics. 3. Checking Contribution Guides Many open source projects have contribution guides that provide an overview of how to get involved. These guides often highlight areas where help is needed. 4. Exploring Open Source Directories Websites like Open Source Guide, First Timers Only, and Up For Grabs list projects that welcome new contributors. These directories often label beginner-friendly issues to help you get started. ## What are the Criteria and How Does a PR Get Approved? ### Criteria for Contributing Each project may have its own set of criteria for contributions. Generally, you should: - Follow Coding Standards: Adhere to the project's coding conventions and guidelines. - Write Clear Commit Messages: Provide concise and descriptive commit messages. - Test Your Changes: Ensure your changes do not break existing functionality. - Document Your Work: Update any relevant documentation to reflect your changes. ### PR Approval Process The process for approving a pull request typically involves the following steps, though the specifics vary from project to project: - Review: Project maintainers or other contributors review your pull request. They may provide feedback or request changes. - Discussion: Engage in constructive discussions to address any concerns or questions about your pull request. - Revisions: Make any necessary revisions based on the feedback received. - Approval: Once all feedback has been addressed and the changes are satisfactory, the pull request will be approved and merged into the main branch. ### How to Join the Community? 1. Participate in Discussions Engage in discussions on forums, chat channels, or mailing lists related to the project. Introduce yourself and express your interest in contributing. 2.
Attend Community Events Join community events like hackathons, meetups, or conferences to network with other contributors and learn more about the project. 3. Contribute to Documentation Contributing to documentation is a great way to start. It helps you understand the project better and provides an entry point for more significant contributions. 4. Be Respectful and Inclusive Always be respectful and inclusive in your interactions. Open source communities thrive on collaboration and mutual respect. ## Roles in a Typical Open Source Project **Project Maintainer** Responsibilities: - Overseeing the project: Maintainers are responsible for the overall health and direction of the project. - Merging Pull Requests: They review and merge contributions from other developers. - Managing Releases: They handle the release process, ensuring that new versions are stable and well-documented. - Setting the Vision: Maintainers set the vision and goals for the project and make decisions on major changes or new features. Skills: - In-depth knowledge of the project's codebase. - Strong leadership and decision-making abilities. - Excellent communication skills to interact with contributors and users. **Core Contributor** Responsibilities: - Regular Contributions: Core contributors regularly contribute significant code, documentation, or other resources to the project. - Code Review: They assist maintainers by reviewing pull requests and providing feedback. - Mentorship: They often help onboard new contributors by providing guidance and support. Skills: - Deep understanding of the project's codebase. - Ability to write high-quality, maintainable code. - Good mentoring and communication skills. **Contributor** Responsibilities: - Submitting Pull Requests: Contributors make improvements or add features to the project by submitting pull requests. - Reporting Issues: They help by identifying and reporting bugs or suggesting enhancements.
- Improving Documentation: Contributors often update or improve project documentation to help others understand the project better. Skills: - Basic to advanced coding skills, depending on the contribution. - Familiarity with the project's guidelines and processes. - Willingness to collaborate and receive feedback. **Issue Triage** Responsibilities: - Managing Issues: They help manage the project's issue tracker by categorizing, tagging, and prioritizing issues. - Reproducing Bugs: They verify bug reports by trying to reproduce the reported issues. - Closing Issues: They close issues that are resolved or no longer relevant. Skills: - Good organizational skills. - Attention to detail to accurately categorize and prioritize issues. - Ability to reproduce and verify bugs. **Documentation Specialist** Responsibilities: - Writing and Maintaining Documentation: They create and maintain comprehensive documentation for the project, including installation guides, tutorials, and API references. - User Guides: They write guides to help new users understand how to use the project. - Developer Guides: They provide detailed guides for developers looking to contribute to the project. Skills: - Strong writing and communication skills. - Technical understanding of the project. - Ability to translate complex technical concepts into easily understandable language. **Community Manager** Responsibilities: - Engaging the Community: They engage with the community by responding to questions, facilitating discussions, and organizing events. - Moderating Discussions: They moderate forums, chat channels, and mailing lists to ensure respectful and productive communication. - Growing the Community: They work to attract new contributors and users to the project. Skills: - Excellent interpersonal and communication skills. - Experience in community building and moderation. - Ability to manage conflicts and foster a positive community environment. 
**Designer** Responsibilities: - User Experience (UX) Design: They design the user experience and user interface of the project. - Creating Visual Assets: They create visual assets such as logos, icons, and banners. - Improving Usability: They suggest and implement improvements to enhance the usability of the project. Skills: - Strong design skills, including proficiency with design tools like Sketch, Figma, or Adobe XD. - Understanding of UX principles and best practices. - Ability to collaborate with developers to implement design changes. **Tester** Responsibilities: - Testing New Features: They test new features and changes to ensure they work as expected. - Writing Test Cases: They write and maintain test cases for automated and manual testing. - Reporting Bugs: They report any bugs or issues they find during testing. Skills: - Attention to detail and a methodical approach to testing. - Knowledge of testing tools and methodologies. - Ability to write clear and concise bug reports. **Mentor** Responsibilities: - Guiding New Contributors: They provide guidance and support to new contributors, helping them understand the project and how to contribute. - Running Onboarding Sessions: They run onboarding sessions or create resources to help new contributors get started. - Providing Feedback: They review contributions from new contributors and provide constructive feedback. Skills: - Strong knowledge of the project. - Excellent teaching and mentoring skills. - Patience and the ability to provide constructive feedback. **Financial Supporter** Responsibilities: - Funding the Project: They provide financial support to the project, either through direct donations, sponsorships, or grants. - Promoting the Project: They help promote the project to attract more financial supporters. - Managing Funds: In some cases, they may help manage the allocation and use of funds. Skills: - Understanding of fundraising and financial management. 
- Ability to communicate the value of the project to potential supporters. - Experience in sponsorship or grant writing. **Advocate/Evangelist** Responsibilities: - Promoting the Project: They promote the project through talks, blog posts, social media, and other channels. - Building Partnerships: They help build partnerships with other projects, organizations, and communities. - User Education: They educate potential users and contributors about the project and how to get involved. Skills: - Strong communication and presentation skills. - Passion for the project and its goals. - Ability to engage with a wide audience. ## Must-Have Elements in All Open Source Projects For an open source project to be successful, welcoming, and easy to contribute to, there are certain elements that should be in place. These elements help in maintaining clarity, ensuring effective collaboration, and fostering a healthy community. Below are the must-have elements that every open source project should incorporate: **Clear Documentation** **A. README.md** The README.md file is often the first document that a new visitor to your project will see. It should provide a comprehensive overview of the project, including: - Project Description: What does the project do? - Installation Instructions: How can someone set up the project on their local machine? - Usage Examples: Provide examples of how to use the project. - Contributing Guidelines: How can someone contribute to the project? **B. CONTRIBUTING.md** A CONTRIBUTING.md file provides detailed instructions on how to contribute to the project. This should include: - Code of Conduct: Expected behavior and consequences for violations. - How to Report Issues: Guidelines for reporting bugs or suggesting features. - Development Workflow: How to set up the development environment, run tests, and submit changes. - Style Guide: Coding conventions and best practices. **C.
CODE_OF_CONDUCT.md** A CODE_OF_CONDUCT.md file outlines the expected behavior of contributors and the standards for community interactions. This helps in fostering a welcoming and inclusive community. **D. LICENSE** Every open source project should have a license that specifies how others can use, modify, and distribute the code. Popular licenses include the MIT License, Apache License 2.0, and GNU General Public License (GPL). **Issue Tracker** An issue tracker is crucial for managing bugs, feature requests, and discussions. Platforms like GitHub, GitLab, and Bitbucket provide built-in issue tracking features. Key components of an issue tracker include: - Labels: Categorize issues by type, such as bug, enhancement, or documentation. - Templates: Provide issue and pull request templates to ensure that contributors provide necessary information. - Milestones: Group related issues and pull requests into milestones to track progress toward specific goals. **Contribution Guidelines** Clear contribution guidelines help new contributors understand how to get started. This includes: - How to Fork and Clone the Repository: Basic steps to set up the project locally. - Branching Model: Guidelines on creating branches for new features or bug fixes. - Commit Messages: Standard format for commit messages. - Pull Request Process: Steps for submitting a pull request, including how to run tests and get the code reviewed. **Continuous Integration/Continuous Deployment (CI/CD)** CI/CD pipelines automate the testing and deployment of code changes. This ensures that new contributions do not break the project and that the latest version is always deployable. Popular CI/CD tools include: - GitHub Actions: Integrates seamlessly with GitHub repositories. - Travis CI: Supports various programming languages and integrates with GitHub and Bitbucket. - CircleCI: Known for its speed and flexibility. 
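As an illustration of the CI/CD point above, a minimal GitHub Actions workflow for a Node-based project might look like the sketch below; the file path and the install/test commands are placeholders to adapt to your project's own toolchain:

```yaml
# .github/workflows/ci.yml -- minimal example; swap the commands below
# for your project's own install and test toolchain.
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Install dependencies
        run: npm ci
      - name: Run tests
        run: npm test
```

With a workflow like this in place, every push and pull request is tested automatically, so reviewers can see at a glance whether a contribution breaks the build.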
**Testing Framework** A robust testing framework is essential for maintaining code quality and ensuring that new contributions do not introduce bugs. Include unit tests, integration tests, and end-to-end tests as appropriate. Popular testing frameworks include: - JUnit: For Java projects. - PyTest: For Python projects. - Jest: For JavaScript projects. **Community Channels** Engage with your community through various channels to encourage collaboration and support. This can include: - Discussion Forums: Platforms like GitHub Discussions or Discourse. - Chat Channels: Slack, Discord, or Gitter for real-time communication. - Mailing Lists: Google Groups or Mailchimp for announcements and discussions. **Version Control** Use a version control system like Git to manage changes to the project's codebase. GitHub, GitLab, and Bitbucket are popular platforms that provide hosting for Git repositories. **Project Governance** Define how the project is governed, including decision-making processes, roles, and responsibilities. This can be outlined in a GOVERNANCE.md file and should include: - Maintainers: List of people responsible for reviewing and merging contributions. - Decision-Making Process: How decisions are made (e.g., consensus, majority vote). - Conflict Resolution: Procedures for resolving disputes. **Security Policy** Include a security policy that outlines how to report security vulnerabilities and the process for handling them. This can be included in a SECURITY.md file and should cover: - Reporting Process: How and where to report security issues. - Response Time: Expected time frame for acknowledging and addressing reports. - Disclosure Policy: Guidelines on how and when security issues will be disclosed to the public. **Code Quality Tools** Incorporate tools to maintain high code quality, such as: - Linters: Automatically check code for stylistic and programming errors. - Code Formatters: Enforce consistent code formatting. 
- Static Analysis Tools: Detect potential bugs and security vulnerabilities. ### Conclusion Incorporating these essential elements in your open source project will help create a welcoming, organized, and efficient environment for contributors. Clear documentation, robust testing, and effective community engagement are crucial for the success and sustainability of any open source project. By following these best practices, you can attract more contributors, foster collaboration, and build a thriving community around your project.
idungstanley
1,882,118
Using Phoenix.PubSub as a message-bus for Elixir cluster
In my sharing session about using Phoenix.PubSub as a message bus for an Elixir cluster, I showed that it's quite simple...
0
2024-06-09T14:44:45
https://dev.to/manhvanvu/simple-using-phoenixpubsub-as-a-message-bus-for-elixir-cluster-l3c
elixir, cluster, messagebus, pubsub
In a sharing session I presented Phoenix.PubSub as a message bus for an Elixir cluster; it's a simple and effective approach for small and medium systems. This post is a recap. ![bus-message between two processes of two nodes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cq14bm1rimlgxlmr2je1.png) (A simple case of using Phoenix PubSub to create a message bus between two or more processes on different nodes) For a standalone Elixir app (without the Phoenix framework), we need to add the dependencies to the mix file: ```Elixir defp deps do [ {:phoenix_pubsub, "~> 2.1"}, {:libcluster, "~> 3.3"} ] end ``` `:phoenix_pubsub` is a library from the Phoenix framework, but it runs fine without the framework. `:libcluster` is a convenient way to run an Elixir app in a cluster by declaring the topology in `config.exs`: ```Elixir config :libcluster, topologies: [ local_epmd: [ # The selected clustering strategy. Required. strategy: Cluster.Strategy.LocalEpmd, # Configuration for the provided strategy. Optional. config: [hosts: [:"frontend_1@127.0.0.1", :"front_end_2@127.0.0.1", :"trading@127.0.0.1"]], # The function to use for connecting nodes. The node # name will be appended to the argument list. Optional connect: {:net_kernel, :connect_node, []}, # The function to use for disconnecting nodes. The node # name will be appended to the argument list. Optional disconnect: {:erlang, :disconnect_node, []}, # The function to use for listing nodes. # This function must return a list of node names. Optional list_nodes: {:erlang, :nodes, [:connected]}, ]] ``` This is a simple config for running a cluster with `:libcluster`. In a Phoenix app, adding `:libcluster` and its config is enough. In the app's `Application` module, add `PubSub` as a child process of the application supervisor (or your own supervisor): ```Elixir children = [ {Phoenix.PubSub, name: Trading.PubSub}, ... ] ``` In a process that needs to receive messages from the bus, we call `subscribe` to subscribe to a topic on a PubSub: ```Elixir PubSub.subscribe(pubsub_name, topic_name) ``` `pubsub_name` is the name of the PubSub we created in the supervisor. `topic_name` is the name of the topic on the PubSub whose messages we want to receive. When a message arrives, we receive it as a regular message in our process, or in `handle_info` if we use a GenServer. When we are done with the PubSub, we should `unsubscribe` to remove the process from it (if we don't, messages published to the topic are simply dropped once our process has exited). To send a message to the bus, we use the `broadcast` function: ```Elixir PubSub.broadcast(pubsub_name, topic, {:join, my_id, "Hello"}) ``` `{:join, my_id, "Hello"}` is our message. Now we can send messages around the cluster with little effort. **Conclusion** `:libcluster` helps us join apps into a cluster just by adding config. `:phoenix_pubsub` helps us send messages across the cluster. The source is available in our [Github repo](https://github.com/ohhi-vn/live_pub_demo)
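To tie the pieces together, a minimal GenServer that subscribes on startup and reacts to bus messages could look like the sketch below. The module name `Trading.BusListener` and the `"lobby"` topic are illustrative (only `Trading.PubSub` comes from the article):

```elixir
defmodule Trading.BusListener do
  use GenServer
  alias Phoenix.PubSub

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)

  @impl true
  def init(_opts) do
    # Subscribe this process to the "lobby" topic on the Trading.PubSub bus
    :ok = PubSub.subscribe(Trading.PubSub, "lobby")
    {:ok, %{}}
  end

  @impl true
  def handle_info({:join, from_id, text}, state) do
    # Messages broadcast to "lobby" arrive here, from any node in the cluster
    IO.puts("#{inspect(from_id)} says: #{text}")
    {:noreply, state}
  end

  @impl true
  def terminate(_reason, _state) do
    # Clean up the subscription when the process stops
    PubSub.unsubscribe(Trading.PubSub, "lobby")
  end
end
```

Add it to your supervision tree after the `Phoenix.PubSub` child, and any `PubSub.broadcast(Trading.PubSub, "lobby", {:join, my_id, "Hello"})` on any connected node will reach it.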
manhvanvu
1,882,116
The Frontend Challenge: 1980s/Miami style Glam Up Beaches Around the World
This is a submission for [Frontend Challenge...
0
2024-06-09T14:42:53
https://dev.to/darrellroberts/the-frontend-challenge-1980smiami-style-glam-up-beaches-around-the-world-59kf
devchallenge, frontendchallenge, css, javascript
_This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_ ## What I Built <!-- Tell us what you built and what you were looking to achieve. --> I created a place where the user can easily locate the best beaches in the world on a world map. The user can look on the world map and explore details of their selected beach. I wanted to achieve this because I felt it gave a better sense of geography knowing where the best beaches in the world are, and it can serve as a travelling aid to check if you are near any of these beaches. ## Demo <!-- Show us your project! You can directly embed an editor into this post (see the FAQ section from the challenge page) or you can share an image of your project and share a public link to the code. --> ![Screenshot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k29p2r7svmwnrecu3qqr.jpg) [Link to project/GitHub page](https://darrellroberts.github.io/beaches_frontendchallenge/) [Link to GitHub repository](https://github.com/DarrellRoberts/beaches_frontendchallenge) ## Journey <!-- Tell us about your process, what you learned, anything you are particularly proud of, what you hope to do next, etc. --> ### Style I started thinking about the design, how I would present it, and what colour scheme I would use. I decided to use a 1980s/Miami style, reflecting pink sand and turquoise blue ocean. In addition, I wanted the background to move like the tide, which is why I added a tide animation which mimics the sea. This was hard to achieve, as you can't animate the actual colour gradient; instead you have to animate the background sizes. ### Positioning I wanted to display the beaches on a world map. This meant researching where the beaches were actually located, and it presented the challenge of how I could position them. I used the map as a background image and made the beaches container a CSS grid.
I experimented with different columns and rows but found that on desktop a 4x4 grid works well. After this, I cycled through each beach using an id and chose in which row and column it should be located. Using properties such as justify-self and align-self allowed me to be more accurate. In some cases I had to use margins in order to perfect its positioning further. Although the points aren't 100% accurate, I'm proud that the grid system worked well. ### JavaScript I wanted to split the HTML page into two parts: introduction and map. This is why I added a click event listener on the homepage which transitions from the introduction to the map page. I used a for loop to designate an id to each beach as well as a click event listener. For me it was easier and meant I wrote less code, and that is also why I made use of different CSS ids for visible and non-visible elements. I then used JavaScript to change their ids, depending on whether I wanted the element to display or not. Each beach is shown on the world map as a red square. It's only when the user hovers over this red square that the location is revealed and the user can click on it to see more details. ### Responsiveness As I am using a world map as the background image, I employed a desktop-first approach to CSS styling. This also meant it was very challenging to make responsive on mobile devices. For this, I changed the background image proportions in a media query, whilst making sure that the beach locations are still accurate. ## What next I had ideas of turning this into a game with an airplane. The user could control the airplane with a keyboard, and maybe the map at first would be completely obscured. It would only be when the user moves that parts of the map would reveal themselves. So I thought I could turn this into a game, whereby the user is asked questions about particular beaches, and they have to find where each is located. However, I knew this would be problematic on mobile devices.
These were my initial thoughts, but I had to admit to myself it would be too ambitious for this entry. ## Conclusion Nevertheless, I'm proud that I achieved what I wanted, and it certainly sharpened my CSS and JavaScript skills. Let me know what you think below, or if you have any criticisms. I apologise for the lack of gifs in the description; they usually give me a headache, so I don't want to give anyone else one either. Also English is my first language, so be as brutal as you want. Thank you! Darrell <!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. --> <!-- We encourage you to consider adding a license for your code. --> <!-- Don't forget to add a cover image to your post (if you want). --> <!-- Thanks for participating! -->
darrellroberts
1,882,115
Easiest way to print SQL in Entity Framework
Let's say you have a DbContext like below public class Database(DbContextOptions&lt;Database&gt;...
0
2024-06-09T14:42:52
https://dev.to/ozkanpakdil/easiest-way-to-print-sqls-in-entity-framework-4nag
Let's say you have a DbContext like the one below ```C# public class Database(DbContextOptions<Database> options) : DbContext(options) { public DbSet<Todo> Todos => Set<Todo>(); } ``` If you just want to print the generated SQL, use the class below ```C# public class Database(DbContextOptions<Database> options) : DbContext(options) { public DbSet<Todo> Todos => Set<Todo>(); protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder) => optionsBuilder.LogTo(Console.WriteLine); } ``` Example of the SQL generated for my test app ```shell 09/06/2024 15:37:42.815 RelationalEventId.CommandExecuting[20100] (Microsoft.EntityFrameworkCore.Database.Command) Executing DbCommand [Parameters=[], CommandType='Text', CommandTimeout='30'] SELECT "t"."Id", "t"."Content", "t"."Done" FROM "Todos" AS "t" info: Microsoft.EntityFrameworkCore.Database.Command[20101] Executed DbCommand (0ms) [Parameters=[], CommandType='Text', CommandTimeout='30'] SELECT "t"."Id", "t"."Content", "t"."Done" FROM "Todos" AS "t" info: 09/06/2024 15:37:42.815 RelationalEventId.CommandExecuted[20101] (Microsoft.EntityFrameworkCore.Database.Command) Executed DbCommand (0ms) [Parameters=[], CommandType='Text', CommandTimeout='30'] SELECT "t"."Id", "t"."Content", "t"."Done" FROM "Todos" AS "t" ```
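A plain `LogTo(Console.WriteLine)` prints every database event, which can get noisy. `LogTo` also has an overload that filters by event id, so you can keep only the executed SQL; a sketch (the `EnableSensitiveDataLogging` call, which adds parameter values to the output, is optional and for development only):

```csharp
using System;
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.Diagnostics;

public class Database(DbContextOptions<Database> options) : DbContext(options)
{
    public DbSet<Todo> Todos => Set<Todo>();

    protected override void OnConfiguring(DbContextOptionsBuilder optionsBuilder)
        => optionsBuilder
            // Only log the final executed SQL (with timing), not every event
            .LogTo(Console.WriteLine, new[] { RelationalEventId.CommandExecuted })
            // Include parameter values in the log output - development only!
            .EnableSensitiveDataLogging();
}
```

With this filter in place, the output reduces to the `Executed DbCommand` lines like the ones shown above.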
ozkanpakdil
1,881,591
Integrating your API with a React front end and Axios
When developing an API, ideally the data produced on its server side should be consumed...
0
2024-06-09T14:42:51
https://dev.to/tuliopss/como-integrar-sua-api-com-seu-frontend-cg8
api, programming, fullstack, react
When developing an API, it's ideal for the data produced on its server side to be consumed somewhere in a dynamic, intuitive way, unlike in a manual testing tool such as Postman. That's where frontend development becomes necessary, allowing the user to interact with and view the system directly. In this article, we'll use the React.js library as our example frontend technology. To handle the frontend-to-API communication, we need tools that make the HTTP requests, such as Axios, the Fetch API, Ajax, etc. Here we'll go with Axios; install it with the command "**npm install axios**" in your terminal. Note: it's important that CORS is configured correctly in your API so requests between the applications aren't blocked. With your API running, let's recall the implemented routes and the methods that will be called through Axios. In the frontend directory, we'll create a utils folder and an api.js file to store a variable with our API's URL. Following the Axios convention, we instantiate it by passing a base URL. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q3ws2fhmhh8gio4xlndd.png) Now we'll create the service folder, with our user-service, to separate the service logic from the components. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5iwo86tntvxvyog33n43.png) Here, in our method that returns all the users, we pass the URL (already stored in the variable) along with the GET method, which returns the following response: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9sjtur9elrabcvh9jez.png) Notice that when we make our request, the API returns everything configured there: the status code, headers, and most importantly data, which is the content we're after. Now, to present this on our frontend, let's go to our component and fetch this data.
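The code in the screenshots above can be approximated with the sketch below; the file layout, the `/users` route, and the base URL are assumptions for illustration, not the article's exact code:

```javascript
// utils/api.js -- axios instance holding the API's base URL (placeholder URL)
import axios from "axios";

export const api = axios.create({
  baseURL: "http://localhost:3000/api",
});

// services/user-service.js -- keeps the HTTP logic out of the React components
import { api } from "../utils/api";

export const getAllUsers = async () => {
  const response = await api.get("/users");
  // axios wraps the payload: status, headers, etc. live alongside `data`
  return response.data;
};
```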
We need to define the useState users and setUsers with an empty array so we can manage its state, and we create a function to fetch the values from our service and populate the users array, like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/438f0rfwok7o9glzzwf6.png) And finally, React's native useEffect hook to run this function only when the page is rendered. Now we have everything we need: the user values are already stored in the useState and we just need to render them. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dcttx5yuhe6gftc4ltdj.png) We define a conditional so the table is only rendered if users exist, and for that we call the map function on the users state to iterate over all the users and display each of their attributes. The result looks like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1nycwperi161zpflavcb.png) Now that we've seen how to read data on the frontend, it's only fair that we see how to create new data. We'll need a component with a form so we can handle the fields there. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mtsvtxaaxk3e2hp1cgym.png) In the inputs, we need to map the values; for that, we set the name attribute to match the fields defined in our API, and the onChange handler to update the fields' state. With the structure in place, let's go to the service: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d71xlmt63mb25xfk1ep1.png) We follow the same logic as the previous method, except that in a POST (and PATCH as well) we need to pass the data to be sent as a parameter. In other words, along with the HTTP verb we now pass the URL, the data to be sent, and the headers configuration; here I simulated an authentication token to show what it would look like if your API route were protected by that kind of service.
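A rough reconstruction of the create-user service just described (the route name and token handling are illustrative assumptions):

```javascript
// services/user-service.js (continued)
import { api } from "../utils/api";

export const createUser = async (user, token) => {
  // POST (and PATCH) take the payload as the second argument; the third
  // argument carries config such as headers, simulating a protected route
  const response = await api.post("/users", user, {
    headers: { Authorization: `Bearer ${token}` },
  });
  return response.data;
};
```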
Let's call this function from our form! We define the states with the user's fields and a function to access our service. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i78l1bic9ttm9437lue6.png) The handleSubmit function defines what runs when the form is submitted; there we create a new user object with the form's updated state and pass this object as an argument to the function. Let's see how it turned out: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zydydypptksh73giecbn.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/drsjuu2iwo7ttyxncc6k.png) With that, we've seen examples of common API methods integrated with your frontend. For CRUD in general, the logic is the same: just adapt the HTTP verb passed in the method.
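For reference, the form component with its handleSubmit could be sketched like this (the field names and the hard-coded token are assumptions based on the screenshots):

```javascript
// components/UserForm.jsx -- sketch of the form described in the article
import { useState } from "react";
import { createUser } from "../services/user-service";

export default function UserForm() {
  const [name, setName] = useState("");
  const [email, setEmail] = useState("");

  const handleSubmit = async (event) => {
    event.preventDefault(); // keep the browser from reloading the page
    // build a user object from the up-to-date form state and send it
    await createUser({ name, email }, "fake-auth-token");
  };

  return (
    <form onSubmit={handleSubmit}>
      <input name="name" value={name} onChange={(e) => setName(e.target.value)} />
      <input name="email" value={email} onChange={(e) => setEmail(e.target.value)} />
      <button type="submit">Create user</button>
    </form>
  );
}
```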
tuliopss
1,882,114
PACX ⁓ Working with solutions
PAC CLI provides a rich namespace dedicated to working with Dataverse solutions. The commands...
0
2024-06-09T14:39:57
https://dev.to/_neronotte/pacx-working-with-solutions-5fil
powerplatform, dataverse, github, opensource
[PAC CLI provides a rich namespace](https://learn.microsoft.com/en-us/power-platform/developer/cli/reference/solution) dedicated to working with Dataverse solutions. The commands provided there are really powerful, and designed to **drive a code-first approach to solution management**, which personally I ❤️ a lot. Before PACX, my usual Dataverse development cycle was: 1. use `pac solution init` to initialize my Dataverse solution project (*.cdsproj) locally 2. manually update the generated `Solution.xml` file to add details about my company's publisher 3. build the solution via `dotnet build` 4. if not previously done, connect to my Dataverse environment via `pac auth` 5. publish the solution to Dataverse via `pac solution import` 6. start manually creating/updating solution components via make.powerapps.com 7. sync the changes locally via `pac solution sync` 8. when ready, commit everything on my Azure DevOps repo When I started scripting data model manipulations with PACX, I felt the need for a couple more commands to streamline my development activities. ## pacx solution setDefault At first it was `pacx solution setDefault`. This came out [before the Dataverse preferred solution](https://learn.microsoft.com/en-us/power-apps/maker/data-platform/preferred-solution) feature, and it's pretty much a "local" version of it. When I created tables or columns via PACX (_we'll see how in the upcoming articles_) I wanted them to be placed automatically in the context of a given solution. `pacx solution setDefault` allows the user to define which solution should be considered the "default" from now on, on the currently selected environment. ![pacx solution setDefault](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z7kq4cg6pg7wdp8vqf4b.png) PACX saves the default solution specified for each environment in its local settings storage. Then, when I ran any command such as `pacx table create`, I no longer needed to pass the --solution argument. Time saved 😎 ## pacx solution getDefault In complex scenarios, when I have multiple solutions in place, I sometimes forget which solution is set as default for the current environment. That's why I created the second command, `pacx solution getDefault`. It simply returns the default solution set for the current environment, if any. ![pacx solution getDefault](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0tppvsye7ul63q6qlmuk.png) ## pacx solution create The code-first approach to solution management is great but... what if you just need to create a temporary solution (e.g. to work with Ribbon Workbench) that you don't want to save locally or on the repo? I often found myself in situations where the solutions were already there, and I needed to segregate specific components into a separate one. `pacx solution create` creates a new solution directly on the current environment, without saving anything locally. > If you want, you can later clone that solution using `pac solution clone` and keep it updated via `pac solution sync`. `pacx solution create` has the same behavior as `pac solution init`: it creates the solution and also the publisher if needed. The minimum required info for the command is the (display) `name` of the solution to create, and one of `publisherUniqueName` or `publisherPrefix` (you can specify both; at least one is required). ```Powershell pacx solution create --name master --publisherUniqueName greg pacx solution create -n master -pun greg pacx solution create --name master --publisherPrefix greg pacx solution create -n master -pp greg ``` The following conventions apply: - if the `uniqueName` argument is not specified, it's reduced from the `name` argument considering only letters, numbers or underscores, all in lowercase.
- solution version is set to 1.0.0.0 by default
- the solution is (of course) unmanaged
- about the publisher:
  - if `publisherUniqueName` is specified, the tool tries to find a publisher with that unique name in the environment. If found, it is used as the publisher for the solution.
  - if `publisherUniqueName` is not specified, but `publisherPrefix` is provided, the tool tries to find a publisher with that prefix in the environment. If found, it is used as the publisher for the solution.

If no publisher has been found, a new publisher is created, with the following defaults:

- **uniquename**:
  - uses `publisherUniqueName` if provided, otherwise
  - uses `publisherFriendlyName` if provided (considering only letters, numbers and underscores), otherwise
  - uses `publisherPrefix`
- **friendlyname**:
  - uses `publisherFriendlyName` if provided, otherwise
  - uses `publisherUniqueName` if provided, otherwise
  - uses `publisherPrefix`
- **customizationprefix**:
  - uses `publisherPrefix` if provided, otherwise
  - uses `publisherUniqueName` if provided, considering only letters or numbers. If the length of the generated string is <= 5 chars, takes the whole string, otherwise extracts the first 3 chars. Otherwise...
  - uses `publisherFriendlyName`, considering only letters or numbers. If the length of the generated string is <= 5 chars, takes the whole string, otherwise extracts the first 3 chars.
- **customizationoptionvalueprefix**:
  - uses the `publisherOptionSetPrefix` argument if provided, otherwise defaults to 10000

Personally, I now use it a lot. I changed the approach described at the beginning of this article as follows:

1. use `pacx solution create` to create my solution on the Dataverse environment, also creating the publisher if needed
2. set that solution as default via `pacx solution setDefault`
3. start the data model manipulations via `pacx table`, `pacx column` and `pacx rel` commands
4. when ready, clone the solution locally via `pac solution clone`
5. do all the other stuff via PACX or make.powerapps.com
6. sync the changes locally via `pac solution sync`
7. when ready, commit everything to my Azure DevOps repo

I find this approach leaner and more direct; it lets me focus on content right at step #2 instead of step #6 as before.

## pacx solution delete

I added this command because I'm lazy. [PAC CLI already contains a command to delete a solution](https://learn.microsoft.com/en-us/power-platform/developer/cli/reference/solution#pac-solution-delete) from a given environment, but I kept forgetting that and writing

```Powershell
pacx solution delete ...
```

instead of

```Powershell
pac solution delete ...
```

Thus... I created an alias command that does basically the same thing.

## pacx solution getPublisherList

This one is simple: it just prints the list of publishers already available in the current environment.

![pacx solution getPublisherList](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gfomgh32125g4fl1x2ot.png)

---

In the next articles we'll deep-dive into table management commands.
_neronotte
1,882,096
Why You Need a Portfolio Website
Having a portfolio website is crucial for professional growth in today's competitive job market....
0
2024-06-09T14:20:11
https://dev.to/abdullah_ali_eb8b6b0c2208/why-you-need-a-portfolio-website-445l
webdev, programming, jobs, tutorial
Having a portfolio website is crucial for professional growth in today's competitive job market. Here's why.

**Here's my portfolio website link; drop a review and share yours: https://abdullahs-portfolio.vercel.app/**

**Showcase Your Profile**

- First Impressions Matter: A well-designed portfolio site makes a strong first impression, setting you apart from the competition.
- Personal Branding: It allows you to build and showcase your personal brand effectively.
- Professional Storytelling: You can tell your professional story in a visually appealing and engaging way.

**Highlight Your Project Expertise**

- Detailed Case Studies: Present detailed case studies to illustrate your problem-solving abilities and project management skills.
- Visual Proof of Work: Include images, videos, and interactive elements to bring your projects to life.
- Client Testimonials: Feature testimonials from satisfied clients to build trust and credibility.

**Demonstrate Your Skills**

- Show Skills in Action: Display concrete examples of your skills through portfolio pieces like design mockups, code snippets, or writing samples.
- Thought Leadership: Share tutorials, blog posts, or articles to demonstrate your expertise and thought leadership in your field.
- Versatility: Highlight your versatility by showcasing a range of projects and skills.

Here's my portfolio website link where this is demonstrated: _https://abdullahs-portfolio.vercel.app_
abdullah_ali_eb8b6b0c2208
1,882,091
Frontend Challenge CSS Beach
This is a submission for Frontend Challenge v24.04.17, CSS Art: June. ...
27,653
2024-06-09T14:17:43
https://dev.to/syedmuhammadaliraza/frontend-challenge-css-beach-3lih
frontendchallenge, css, dev, devchallenge
_This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._ ## Demo <!-- Show us your CSS Art! You can directly embed an editor into this post (see the FAQ section of the challenge page) or you can share an image of your project and share a public link to the code. --> ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Best Beaches in the World</title> <style> body, html { margin: 0; padding: 0; font-family: Arial, sans-serif; background: #f0f8ff; color: #333; } header { background: url('header-beach.jpg') no-repeat center center/cover; color: white; text-align: center; padding: 2rem 1rem; } header h1 { transition: transform 0.3s ease, background-image 0.3s ease; display: inline-block; background-clip: text; -webkit-background-clip: text; color: transparent; background-image: linear-gradient(45deg, #ff6347, #ffcc33); } header h1:hover { transform: scale(1.1); background-image: linear-gradient(45deg, #00c6ff, #0072ff); } main { padding: 2rem; max-width: 1200px; margin: auto; } section { margin-bottom: 2rem; } h2 { border-bottom: 2px solid #333; padding-bottom: 0.5rem; } ul { list-style: none; padding: 0; } li { background: white; margin: 1rem 0; padding: 1rem; border-radius: 8px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); transition: transform 0.3s; } li:hover { transform: scale(1.05); } li h3 { margin: 0 0 0.5rem 0; background-clip: text; -webkit-background-clip: text; color: transparent; background-image: linear-gradient(45deg, #ff6347, #ffcc33); transition: background-image 0.3s ease; } li h3:hover { background-image: linear-gradient(45deg, #00c6ff, #0072ff); } li p { margin: 0; } @media (min-width: 768px) { ul { display: flex; flex-wrap: wrap; gap: 1rem; } li { flex: 1 1 calc(33.333% - 2rem); } } </style> </head> <body> <header role="banner"> <h1>Best Beaches in the World</h1> </header> <main> <section> 
<h2>Take me to the beach!</h2> <p>Welcome to our curated list of the best beaches in the world. Whether you're looking for serene white sands, crystal-clear waters, or breathtaking scenery, these beaches offer a little something for everyone. Explore our top picks and discover the beauty that awaits you.</p> </section> <section> <h2>Top Beaches</h2> <ul> <li> <h3>Whitehaven Beach, Australia</h3> <p>Located on Whitsunday Island, Whitehaven Beach is famous for its stunning white silica sand and turquoise waters. It's a perfect spot for swimming, sunbathing, and enjoying the natural beauty of the Great Barrier Reef.</p> </li> <li> <h3>Grace Bay, Turks and Caicos</h3> <p>Grace Bay is known for its calm, clear waters and powdery white sand. This beach is ideal for snorkeling, diving, and enjoying luxury resorts that line its shore.</p> </li> <li> <h3>Baia do Sancho, Brazil</h3> <p>Baia do Sancho, located on Fernando de Noronha island, offers stunning cliffs, vibrant marine life, and crystal-clear waters, making it a paradise for divers and nature lovers.</p> </li> <li> <h3>Navagio Beach, Greece</h3> <p>Also known as Shipwreck Beach, Navagio Beach is famous for the rusting shipwreck that rests on its sands. Accessible only by boat, this secluded cove is surrounded by towering cliffs and azure waters.</p> </li> <li> <h3>Playa Paraiso, Mexico</h3> <p>Playa Paraiso, located in Tulum, offers pristine white sands and turquoise waters against the backdrop of ancient Mayan ruins. It's a perfect blend of history and natural beauty.</p> </li> <li> <h3>Anse Source d'Argent, Seychelles</h3> <p>Anse Source d'Argent is renowned for its unique granite boulders, shallow clear waters, and soft white sand. 
This beach is perfect for photography, snorkeling, and relaxation.</p> </li> <li> <h3>Seven Mile Beach, Cayman Islands</h3> <p>Stretching for seven miles, this beach offers soft coral sand, clear waters, and numerous activities such as snorkeling, paddleboarding, and enjoying beachside restaurants and bars.</p> </li> <li> <h3>Bora Bora, French Polynesia</h3> <p>Bora Bora is known for its stunning lagoon, overwater bungalows, and vibrant coral reefs. It's a perfect destination for honeymooners and those seeking luxury and tranquility.</p> </li> <li> <h3>Lanikai Beach, Hawaii</h3> <p>Lanikai Beach features powdery white sand and calm, clear waters, making it a favorite for swimming, kayaking, and enjoying the scenic views of the Mokulua Islands.</p> </li> <li> <h3>Pink Sands Beach, Bahamas</h3> <p>Pink Sands Beach is famous for its unique pink-hued sand, clear waters, and serene atmosphere. It's an idyllic spot for beachcombing, swimming, and relaxing in paradise.</p> </li> </ul> </section> </main> </body> </html> ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g3s0r862eb1ims07wxpc.png) ### Process The journey to create this project began with a clear vision to transform a simple HTML template into a visually appealing, interactive, and accessible website. Here’s a breakdown of the steps taken: 1. **Initial Setup**: I started with the provided HTML template, which included basic information about the best beaches in the world. 2. **Design and Layout**: Using CSS, I designed a layout that is responsive and visually engaging. The goal was to create a clean and inviting look that complements the beach theme. 3. **Header Design**: A beach-themed header image was used to set the tone. I applied CSS styles to center the text and make it visually striking. 4. **Text Styling**: To add a dynamic touch, I used linear gradients for the text in the header and beach names. This included creating hover effects to make the text interactive. 5. 
**Responsiveness**: Ensured that the layout adapts well to different screen sizes using media queries. This involved creating a flexible grid for the list of beaches. 6. **Hover Effects**: Implemented smooth hover effects on the beach names and the header title to enhance user interaction. 7. **Accessibility**: Considered accessibility by ensuring text readability, appropriate color contrasts, and a clean, navigable structure. ### What I Learned - **CSS Gradients**: How to apply linear gradients to text and make them look good. - **Responsive Design**: Improved skills in making layouts responsive using media queries and flexible units. - **CSS Transitions**: Gained a better understanding of how to create smooth transitions for hover effects. - **Accessibility**: Learned the importance of making websites accessible and the techniques to achieve it. ### Proud Moments - **Visual Appeal**: I am particularly proud of the visual appeal created by the gradients and hover effects. They add a layer of sophistication and interactivity to the site. - **Responsiveness**: Ensuring the site looks good on various devices was challenging but rewarding. The flexible layout enhances the user experience across different screen sizes. - **Smooth Interactions**: The transitions and hover effects work seamlessly, making the site feel more interactive and engaging. ### Next Steps - **JavaScript Interactivity**: I hope to add more JavaScript features to enhance interactivity, such as animations and dynamic content loading. - **Additional Content**: Including more detailed information, photos, and user reviews for each beach could provide a richer experience. - **Performance Optimization**: Optimizing images and CSS to ensure faster load times and smoother performance. - **Further Accessibility Improvements**: Continuously improving accessibility features to ensure the site is usable by everyone. 
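For reference, the gradient-text-with-hover technique used in the demo above can be distilled to a few declarations (the `.gradient-text` class name here is illustrative, not part of the original stylesheet):

```css
.gradient-text {
  display: inline-block;
  background-image: linear-gradient(45deg, #ff6347, #ffcc33);
  background-clip: text;
  -webkit-background-clip: text; /* prefixed form kept for broader WebKit support */
  color: transparent;            /* lets the gradient show through the glyphs */
}

.gradient-text:hover {
  background-image: linear-gradient(45deg, #00c6ff, #0072ff);
}
```

One caveat worth knowing: `background-image` is not an animatable property, so a `transition: background-image 0.3s ease` declaration swaps the gradient instantly on hover rather than fading between the two.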
### Team Credits

While this project was a solo endeavor, I would like to acknowledge the supportive community at DEV for their resources and inspiration. If this were a team submission, teammates would be credited here.

### License

This code is open-source and can be used freely with proper attribution. Feel free to modify and improve it for your projects.

Thank you for reading about my journey in the Frontend Challenge. It was a fun and educational experience, and I hope you enjoy the final product! If you need any advice or want to give advice, DM me on LinkedIn: [Syed Muhammad Ali Raza](https://www.linkedin.com/in/syed-muhammad-ali-raza/)
syedmuhammadaliraza
1,882,090
DIGITAL WEB RECOVERY AGAENCY FOR CRYPTOCURRENCY FRAUD RECOVERY
Fraudulent activities are on the rise, and individuals like me often find themselves in devastating...
0
2024-06-09T14:13:58
https://dev.to/nicole_treacy_be8c5c9d0b3/digital-web-recovery-agaency-for-cryptocurrency-fraud-recovery-1j4e
Fraudulent activities are on the rise, and individuals like me often find themselves in devastating situations, feeling helpless and alone. However, my experience with Digital Web Recovery has been nothing short of a miraculous turnaround. In March, I fell victim to a fraudulent binary options website, which left me in a terrible financial and emotional state after luring me in with false promises of guaranteed profits. I invested my entire savings of about $340,000, only to realize that I had been deceived when the scammers denied all my withdrawal requests and disappeared without a trace. I struggled with the loss, feeling hopeless and alone. It wasn't until last month that I stumbled upon Digital Web Recovery, a company specializing in recovering funds lost to scams. Skeptical but desperate, I reached out to them, and from the very beginning, their team displayed professionalism, empathy, and an unwavering determination to help me. What struck me the most was their transparent and results-based approach. They conducted a thorough investigation into my case and assured me that they would only charge a fee of 20% upon successfully recovering my lost funds. This level of transparency gave me confidence in their service, as it demonstrated their commitment to delivering results rather than merely making promises. After several weeks of dedicated effort, Digital Web Recovery recovered most of my lost funds. The relief, sense of justice, and closure that I felt cannot be overstated. They truly turned a hopeless situation around, providing me with financial restitution and a renewed sense of trust and security. Digital Web Recovery was evident throughout the entire process. They kept me informed at every step, patiently answering all my questions and addressing any concerns I had. Their dedication to helping individuals like me who have fallen victim to scams is truly commendable. 
I am incredibly grateful for the invaluable assistance I received from Digital Web Recovery. They not only helped me recover a significant portion of my lost funds but also restored my faith in the possibility of seeking justice against fraudulent activities. Their commitment to their client's well-being goes beyond mere financial restitution; it extends to providing emotional support and guidance during what can be an incredibly distressing time. If you have been scammed and find yourself in a similar situation, I highly recommend reaching out to Digital Web Recovery. Their proven track record, transparent approach, and unwavering dedication make them a reliable ally in the fight against scams. My experience with them has been nothing short of exceptional, and I am eternally grateful for their assistance in helping me reclaim what was rightfully mine. Contact below; Website https://digitalwebrecovery.com Email; digitalwebexperts@zohomail.com Telegram user; @digitalwebrecovery Digital Web Recovery truly lives up to its name as a lifeline for those who have been victimized by fraudulent activities. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kgggokimvo9pkxwuzqey.jpeg)
nicole_treacy_be8c5c9d0b3
1,882,089
Key Features of the Best Infant Carriers
Comfort and Support: The best infant carriers provide optimal comfort and support for both you and...
0
2024-06-09T14:10:50
https://dev.to/cris_eagles_2b77808a25111/key-features-of-the-best-infant-carriers-1gn9
**Comfort and Support:** The best infant carriers provide optimal comfort and support for both you and your baby. Look for carriers with wide, padded shoulder straps and a supportive waist belt to distribute your baby’s weight evenly. Ensure that the carrier has a sturdy, adjustable seat that supports your baby’s hips and thighs in an ergonomic “M” position.

**Adjustability:** As your baby grows, you’ll want a carrier that can adapt to their changing size. Choose a carrier with adjustable straps and buckles that allow you to customize the fit for both you and your baby. Some carriers even offer multiple carrying positions, such as front-facing, hip-carrying, and back-carrying, to accommodate your baby’s development.

**Breathability:** Babies can get hot and sweaty when pressed against your body, so it’s essential to choose a carrier made from breathable, lightweight materials. Look for carriers with mesh panels or moisture-wicking fabrics that help regulate your baby’s temperature and prevent overheating.

**Safety:** Your baby’s safety is paramount, so make sure to choose a carrier that meets safety standards and has been tested for quality. Look for carriers with secure buckles, sturdy stitching, and a wide, supportive base that keeps your baby’s airway clear.

**Ease of Use:** As a busy parent, you’ll appreciate a carrier that is easy to put on and take off. Look for carriers with simple, intuitive designs that allow you to quickly adjust the fit and securely place your baby in the carrier. Some carriers even come with handy features like built-in storage pockets or a removable sun hood.

source: [bestinfantcarrier.com](https://bestinfantcarrier.com/)
cris_eagles_2b77808a25111
1,882,088
Frontend Challenge Submission
This is a submission for Frontend Challenge v24.04.17, CSS Art: June. ...
0
2024-06-09T14:09:57
https://dev.to/syedmuhammadaliraza/frontend-challenge-submission-i01
frontendchallenge, devchallenge, css
_This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._ ## Demo <!-- Show us your CSS Art! You can directly embed an editor into this post (see the FAQ section of the challenge page) or you can share an image of your project and share a public link to the code. --> ``` <!DOCTYPE html> <html lang="en"> <head> <meta charset="UTF-8"> <meta name="viewport" content="width=device-width, initial-scale=1.0"> <title>Best Beaches in the World</title> <style> body, html { margin: 0; padding: 0; font-family: Arial, sans-serif; background: #f0f8ff; color: #333; } header { background: url('header-beach.jpg') no-repeat center center/cover; color: white; text-align: center; padding: 2rem 1rem; } header h1 { transition: transform 0.3s ease, background-image 0.3s ease; display: inline-block; background-clip: text; -webkit-background-clip: text; color: transparent; background-image: linear-gradient(45deg, #ff6347, #ffcc33); } header h1:hover { transform: scale(1.1); background-image: linear-gradient(45deg, #00c6ff, #0072ff); } main { padding: 2rem; max-width: 1200px; margin: auto; } section { margin-bottom: 2rem; } h2 { border-bottom: 2px solid #333; padding-bottom: 0.5rem; } ul { list-style: none; padding: 0; } li { background: white; margin: 1rem 0; padding: 1rem; border-radius: 8px; box-shadow: 0 4px 8px rgba(0, 0, 0, 0.1); transition: transform 0.3s; } li:hover { transform: scale(1.05); } li h3 { margin: 0 0 0.5rem 0; background-clip: text; -webkit-background-clip: text; color: transparent; background-image: linear-gradient(45deg, #ff6347, #ffcc33); transition: background-image 0.3s ease; } li h3:hover { background-image: linear-gradient(45deg, #00c6ff, #0072ff); } li p { margin: 0; } @media (min-width: 768px) { ul { display: flex; flex-wrap: wrap; gap: 1rem; } li { flex: 1 1 calc(33.333% - 2rem); } } </style> </head> <body> <header role="banner"> <h1>Best Beaches in the World</h1> </header> <main> <section> 
<h2>Take me to the beach!</h2> <p>Welcome to our curated list of the best beaches in the world. Whether you're looking for serene white sands, crystal-clear waters, or breathtaking scenery, these beaches offer a little something for everyone. Explore our top picks and discover the beauty that awaits you.</p> </section> <section> <h2>Top Beaches</h2> <ul> <li> <h3>Whitehaven Beach, Australia</h3> <p>Located on Whitsunday Island, Whitehaven Beach is famous for its stunning white silica sand and turquoise waters. It's a perfect spot for swimming, sunbathing, and enjoying the natural beauty of the Great Barrier Reef.</p> </li> <li> <h3>Grace Bay, Turks and Caicos</h3> <p>Grace Bay is known for its calm, clear waters and powdery white sand. This beach is ideal for snorkeling, diving, and enjoying luxury resorts that line its shore.</p> </li> <li> <h3>Baia do Sancho, Brazil</h3> <p>Baia do Sancho, located on Fernando de Noronha island, offers stunning cliffs, vibrant marine life, and crystal-clear waters, making it a paradise for divers and nature lovers.</p> </li> <li> <h3>Navagio Beach, Greece</h3> <p>Also known as Shipwreck Beach, Navagio Beach is famous for the rusting shipwreck that rests on its sands. Accessible only by boat, this secluded cove is surrounded by towering cliffs and azure waters.</p> </li> <li> <h3>Playa Paraiso, Mexico</h3> <p>Playa Paraiso, located in Tulum, offers pristine white sands and turquoise waters against the backdrop of ancient Mayan ruins. It's a perfect blend of history and natural beauty.</p> </li> <li> <h3>Anse Source d'Argent, Seychelles</h3> <p>Anse Source d'Argent is renowned for its unique granite boulders, shallow clear waters, and soft white sand. 
This beach is perfect for photography, snorkeling, and relaxation.</p> </li> <li> <h3>Seven Mile Beach, Cayman Islands</h3> <p>Stretching for seven miles, this beach offers soft coral sand, clear waters, and numerous activities such as snorkeling, paddleboarding, and enjoying beachside restaurants and bars.</p> </li> <li> <h3>Bora Bora, French Polynesia</h3> <p>Bora Bora is known for its stunning lagoon, overwater bungalows, and vibrant coral reefs. It's a perfect destination for honeymooners and those seeking luxury and tranquility.</p> </li> <li> <h3>Lanikai Beach, Hawaii</h3> <p>Lanikai Beach features powdery white sand and calm, clear waters, making it a favorite for swimming, kayaking, and enjoying the scenic views of the Mokulua Islands.</p> </li> <li> <h3>Pink Sands Beach, Bahamas</h3> <p>Pink Sands Beach is famous for its unique pink-hued sand, clear waters, and serene atmosphere. It's an idyllic spot for beachcombing, swimming, and relaxing in paradise.</p> </li> </ul> </section> </main> </body> </html> ``` ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g3s0r862eb1ims07wxpc.png) ### Process The journey to create this project began with a clear vision to transform a simple HTML template into a visually appealing, interactive, and accessible website. Here’s a breakdown of the steps taken: 1. **Initial Setup**: I started with the provided HTML template, which included basic information about the best beaches in the world. 2. **Design and Layout**: Using CSS, I designed a layout that is responsive and visually engaging. The goal was to create a clean and inviting look that complements the beach theme. 3. **Header Design**: A beach-themed header image was used to set the tone. I applied CSS styles to center the text and make it visually striking. 4. **Text Styling**: To add a dynamic touch, I used linear gradients for the text in the header and beach names. This included creating hover effects to make the text interactive. 5. 
**Responsiveness**: Ensured that the layout adapts well to different screen sizes using media queries. This involved creating a flexible grid for the list of beaches. 6. **Hover Effects**: Implemented smooth hover effects on the beach names and the header title to enhance user interaction. 7. **Accessibility**: Considered accessibility by ensuring text readability, appropriate color contrasts, and a clean, navigable structure. ### What I Learned - **CSS Gradients**: How to apply linear gradients to text and make them look good. - **Responsive Design**: Improved skills in making layouts responsive using media queries and flexible units. - **CSS Transitions**: Gained a better understanding of how to create smooth transitions for hover effects. - **Accessibility**: Learned the importance of making websites accessible and the techniques to achieve it. ### Proud Moments - **Visual Appeal**: I am particularly proud of the visual appeal created by the gradients and hover effects. They add a layer of sophistication and interactivity to the site. - **Responsiveness**: Ensuring the site looks good on various devices was challenging but rewarding. The flexible layout enhances the user experience across different screen sizes. - **Smooth Interactions**: The transitions and hover effects work seamlessly, making the site feel more interactive and engaging. ### Next Steps - **JavaScript Interactivity**: I hope to add more JavaScript features to enhance interactivity, such as animations and dynamic content loading. - **Additional Content**: Including more detailed information, photos, and user reviews for each beach could provide a richer experience. - **Performance Optimization**: Optimizing images and CSS to ensure faster load times and smoother performance. - **Further Accessibility Improvements**: Continuously improving accessibility features to ensure the site is usable by everyone. 
### Team Credits

While this project was a solo endeavor, I would like to acknowledge the supportive community at DEV for their resources and inspiration. If this were a team submission, teammates would be credited here.

### License

This code is open-source and can be used freely with proper attribution. Feel free to modify and improve it for your projects.

Thank you for reading about my journey in the Frontend Challenge. It was a fun and educational experience, and I hope you enjoy the final product! If you need any advice or want to give advice, DM me on LinkedIn: [Syed Muhammad Ali Raza](https://www.linkedin.com/in/syed-muhammad-ali-raza/)
syedmuhammadaliraza
1,881,618
Environment Variables in Docker and Docker Compose - Part 2. Options and Properties
Introduction This document summarizes command options and properties of compose.yaml. ...
0
2024-06-09T00:41:56
https://dev.to/zundamon/enviroment-variables-in-docker-2od3
## Introduction

This document summarizes command options and properties of `compose.yaml`.

### ◯ Overview

#### 1. Flow of Passing Environment Variables

![Docker Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rmur1234ciffngnozppy.jpg)
![Docker Container](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bh648y6u1iomg7jiwjq0.jpg)

#### 2. Options and Properties

It's easy to get confused, because the command option names and `compose.yaml` property names differ from case to case, and some cases have no corresponding option or property at all...

1. `docker` command
    1. When building a Docker image: `docker image build` command
        1. Key-value format: `--build-arg` option
        2. File format: none
    2. When running a Docker container: `docker container run` command
        1. Key-value format: `--env`, `-e` option
        2. File format: `--env-file` option
2. `compose.yaml` file
    1. When building a Docker image: `services.<service>.build` property
        1. Key-value format: `args` property
        2. File format: none
    2. When running a Docker container: `services.<service>` property
        1. Key-value format: `environment` property
        2. File format: `env_file` property
3. `docker compose` command
    1. When building a Docker image: `docker compose build` command
        1. Key-value format: `--build-arg` option
        2. File format: `--env-file` option
    2. When running a Docker container: `docker compose up` command
        1. Key-value format: none
        2. File format: `--env-file` option

### ◯ Sample Code

Directory names correspond to chapters.

```
gh repo clone domodomodomo/docker-env-sample
cd docker-env-sample/part2/111

# for Bash - macOS, Ubuntu
bash cmd.sh

# for PowerShell - Windows
powershell ./cmd.ps1
```

* https://github.com/domodomodomo/docker-env-sample

---

## 1. docker command

### 1.1. When building a Docker image

[`docker image build`](https://docs.docker.com/reference/cli/docker/image/build/) command

#### 1.1.1. Key-value format

```bash
docker image build \
  --build-arg BLUE=Squirtle \
  --build-arg RED=Charmander \
  --build-arg GREEN=Bulbasaur \
  .
```

#### 1.1.2. File format

Not found.

* Pages checked
  * [Options - docker image build](https://docs.docker.com/reference/cli/docker/image/build/#options)

### 1.2. When running a Docker container

[`docker container run`](https://docs.docker.com/reference/cli/docker/container/run/) command

#### 1.2.1. Key-value format

[`--env`](https://bit.ly/45crk4r) option or [`-e`](https://bit.ly/45crk4r) option

```bash
docker container run \
  --env BLUE=Squirtle \
  --env RED=Charmander \
  --env GREEN=Bulbasaur \
  app
```

#### 1.2.2. File format

[`--env-file`](https://bit.ly/45crk4r) option

```bash
docker container run \
  --env-file .env.1 \
  --env-file .env.2 \
  --env-file .env.3 \
  app
```

---

## 2. compose.yaml file

### 2.1. When building a Docker image

[`services.<service>.build`](https://docs.docker.com/compose/compose-file/build/) property

#### 2.1.1. Key-value format

[`args`](https://bit.ly/3Kw92Se) property

```yaml
# Map syntax
services:
  app:
    build:
      context: .
      args:
        BLUE: Squirtle
        RED: Charmander
        GREEN: Bulbasaur
```

```yaml
# Array syntax
services:
  app:
    build:
      context: .
      args:
        - BLUE=Squirtle
        - RED=Charmander
        - GREEN=Bulbasaur
```

#### 2.1.2. File format

Not found.

* Pages checked
  * [`services.<service>.build`](https://docs.docker.com/compose/compose-file/build/)

### 2.2. When running a Docker container

[`services.<service>`](https://docs.docker.com/compose/compose-file/05-services/) property

#### 2.2.1. Key-value format

[`environment`](https://bit.ly/4aQqifF) property

```yaml
# Map syntax
services:
  app:
    build:
      context: .
    environment:
      BLUE: Squirtle
      RED: Charmander
      GREEN: Bulbasaur
```

```yaml
# Array syntax
services:
  app:
    build:
      context: .
    environment:
      - BLUE=Squirtle
      - RED=Charmander
      - GREEN=Bulbasaur
```

#### 2.2.2. File format

```yaml
services:
  app:
    build:
      context: .
    env_file:
      - .env.1
      - .env.2
      - .env.3
```

---

## 3. docker compose command

### 3.1. When building a Docker image

[`docker compose build`](https://docs.docker.com/reference/cli/docker/compose/build/) command

#### 3.1.1. Key-value format

[`--build-arg`](https://bit.ly/3xaWZX9) option

```bash
docker compose build \
  --build-arg BLUE="Squirtle" \
  --build-arg RED="Charmander" \
  --build-arg GREEN="Bulbasaur"
```

#### 3.1.2. File format

[`--env-file`](https://bit.ly/3RcrGlK) option

```bash
docker compose \
  --env-file .env.1 \
  --env-file .env.2 \
  --env-file .env.3 \
  build
```

#### ◯ Good to know

Separating `--env-file` and `--build-arg` by the command they belong to makes the Docker command format easier to understand.

```bash
docker --log-level debug \
  compose --env-file .env \
  build --build-arg BLUE="Squirtle"
```

```
command       command-option \
subcommand    subcommand-option \
subsubcommand subsubcommand-option
```

1. `--log-level` is an option for the `docker` command[*](https://docs.docker.com/reference/cli/docker/#options)
2. `--env-file` is an option for the `docker compose` subcommand[*](https://docs.docker.com/reference/cli/docker/compose/#options)
3. `--build-arg` is an option for the `docker compose build` sub-subcommand[*](https://docs.docker.com/reference/cli/docker/compose/build/#options)

### 3.2. When running a Docker container

[`docker compose up`](https://docs.docker.com/reference/cli/docker/compose/up/) command

#### 3.2.1. Key-value format

Not found.

* Pages checked
  * [Options - docker compose](https://docs.docker.com/reference/cli/docker/compose/#options)
  * [Options - docker compose up](https://bit.ly/3X9sUlr)

#### 3.2.2. File format

```bash
docker compose \
  --env-file .env.1 \
  --env-file .env.2 \
  --env-file .env.3 \
  up
```

---

## Conclusion

Thank you.

* [Part 1. ARG and ENV - Hashnode](https://bit.ly/4ebZqtj)
* [Part 2. Options and Properties - DEV Community](https://bit.ly/3yO7LmG)
* [Part 3. Overall Flow - Hashnode](https://bit.ly/45i3S5Q)
zundamon
1,882,087
Understanding Background Services in .NET 8: IHostedService and BackgroundService
.NET 8 introduces powerful features for managing background tasks with IHostedService and...
27,293
2024-06-09T14:08:41
https://dev.to/moh_moh701/understanding-background-services-in-net-8-ihostedservice-and-backgroundservice-2eoh
dotnetcore, aspdotnet, dotnet
.NET 8 provides powerful features for managing background tasks with `IHostedService` and `BackgroundService`, both carried forward from earlier .NET releases. These services enable long-running operations, such as scheduled tasks, background processing, and periodic maintenance tasks, to be seamlessly integrated into your applications. This article explores these features and provides practical examples to help you get started. You can find the source code for these examples on my [GitHub repository](https://github.com/mohamedtayel1980/DotNet8NewFeature/tree/main/DotNet8NewFeature/BackgroundingService). #### What are Background Services? Background services in .NET allow you to run tasks in the background independently of the main application thread. This is essential for tasks that need to run continuously or at regular intervals without blocking the main application flow. #### `IHostedService` Interface The `IHostedService` interface defines two methods: - **`StartAsync(CancellationToken cancellationToken)`**: Called when the application host starts. - **`StopAsync(CancellationToken cancellationToken)`**: Called when the application host is performing a graceful shutdown. 
**Example of `IHostedService` Implementation**: ```csharp using System; using System.Threading; using System.Threading.Tasks; using Microsoft.Extensions.Hosting; using Microsoft.Extensions.Logging; public class TimedHostedService : IHostedService, IDisposable { private readonly ILogger<TimedHostedService> _logger; private Timer _timer; public TimedHostedService(ILogger<TimedHostedService> logger) { _logger = logger; } public Task StartAsync(CancellationToken cancellationToken) { _logger.LogInformation("Timed Hosted Service running."); _timer = new Timer(DoWork, null, TimeSpan.Zero, TimeSpan.FromSeconds(5)); return Task.CompletedTask; } private void DoWork(object state) { _logger.LogInformation("Timed Hosted Service is working."); } public Task StopAsync(CancellationToken cancellationToken) { _logger.LogInformation("Timed Hosted Service is stopping."); _timer?.Change(Timeout.Infinite, 0); return Task.CompletedTask; } public void Dispose() { _timer?.Dispose(); } } ``` #### `BackgroundService` Class The `BackgroundService` class is an abstract base class that simplifies the implementation of background tasks. It provides a single method to override: - **`ExecuteAsync(CancellationToken stoppingToken)`**: Contains the logic for the background task and runs until the application shuts down. 
**Example of `BackgroundService` Implementation**: ```csharp using System; using System.Threading; using System.Threading.Tasks; using Microsoft.Extensions.Hosting; using Microsoft.Extensions.Logging; public class TimedBackgroundService : BackgroundService { private readonly ILogger<TimedBackgroundService> _logger; public TimedBackgroundService(ILogger<TimedBackgroundService> logger) { _logger = logger; } protected override async Task ExecuteAsync(CancellationToken stoppingToken) { _logger.LogInformation("Timed Background Service running."); while (!stoppingToken.IsCancellationRequested) { _logger.LogInformation("Timed Background Service is working."); await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken); } _logger.LogInformation("Timed Background Service is stopping."); } } ``` #### Practical Usage To utilize these background services in your .NET application, you need to register them in your dependency injection container. This can be done in the `Program.cs` file. **Registering Hosted Services**: ```csharp using Microsoft.Extensions.DependencyInjection; using Microsoft.Extensions.Hosting; using System.Threading.Tasks; public class Program { public static async Task Main(string[] args) { var host = Host.CreateDefaultBuilder(args) .ConfigureServices(services => { services.AddHostedService<TimedHostedService>(); services.AddHostedService<TimedBackgroundService>(); }) .Build(); await host.RunAsync(); } } ``` ### Key Differences - **Level of Abstraction**: - **`IHostedService`**: Requires manual implementation of starting and stopping logic. - **`BackgroundService`**: Simplifies the implementation by providing a base class with a single method to override. - **Use Cases**: - **`IHostedService`**: Suitable for more complex scenarios where you need fine-grained control over the service lifecycle. - **`BackgroundService`**: Ideal for simpler, long-running tasks that benefit from reduced boilerplate code. 
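As a hedged side note (not covered in the original article): on .NET 6 and later, `PeriodicTimer` is a common alternative to the `Task.Delay` loop shown above, since `WaitForNextTickAsync` both waits for the next tick and observes cancellation. A minimal sketch, with illustrative class names:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

// Hypothetical variant of the timed service above, driven by PeriodicTimer
// (available since .NET 6) instead of a Task.Delay loop.
public class PeriodicTimerBackgroundService : BackgroundService
{
    private readonly ILogger<PeriodicTimerBackgroundService> _logger;

    public PeriodicTimerBackgroundService(ILogger<PeriodicTimerBackgroundService> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        using var timer = new PeriodicTimer(TimeSpan.FromSeconds(5));
        try
        {
            // WaitForNextTickAsync returns false when the timer is disposed
            // and throws OperationCanceledException when the token is cancelled.
            while (await timer.WaitForNextTickAsync(stoppingToken))
            {
                _logger.LogInformation("PeriodicTimer Background Service is working.");
            }
        }
        catch (OperationCanceledException)
        {
            _logger.LogInformation("PeriodicTimer Background Service is stopping.");
        }
    }
}
```

It is registered exactly like the other services, via `services.AddHostedService<PeriodicTimerBackgroundService>()`.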
### Conclusion .NET 8's background services, through `IHostedService` and `BackgroundService`, offer a robust and flexible way to manage background tasks. By choosing the appropriate abstraction based on your needs, you can efficiently implement and manage long-running operations in your applications. These features enhance the ability to create responsive, scalable, and maintainable .NET applications. This guide provides the foundation you need to start integrating background services into your .NET applications. For more complex scenarios, consider exploring additional capabilities and configurations offered by the .NET hosting framework.
moh_moh701
1,882,086
Mastering Client-Side Web Development Tools with JavaScript🚀
Client-side web development tools have revolutionized how developers build, test, and deploy web...
0
2024-06-09T14:08:13
https://dev.to/dharamgfx/mastering-client-side-web-development-tools-with-javascript-4lio
webdev, javascript, beginners, website
Client-side web development tools have revolutionized how developers build, test, and deploy web applications. This post will guide you through essential tools and techniques, helping you understand and implement them effectively in your projects. ## Understanding Client-Side Web Development Tools ### What Are Client-Side Web Development Tools? - **Definition**: Tools used to enhance, streamline, and automate various aspects of front-end web development. - **Purpose**: To improve productivity, manage dependencies, automate repetitive tasks, and optimize web applications. ### Importance of These Tools - **Efficiency**: Save time and effort with automation. - **Consistency**: Ensure consistent development and deployment processes. - **Optimization**: Improve performance and maintainability of web applications. ## Client-Side Tooling Overview ### Key Categories - **Build Tools**: Automate tasks like minification, compilation, and bundling. - **Package Managers**: Manage project dependencies. - **Task Runners**: Automate repetitive tasks. - **Linters/Formatters**: Ensure code quality and style consistency. - **Development Servers**: Serve the application locally during development. ### Examples - **Build Tools**: Webpack, Parcel - **Package Managers**: npm, Yarn - **Task Runners**: Gulp, Grunt - **Linters/Formatters**: ESLint, Prettier - **Development Servers**: Live Server, BrowserSync ## Command Line Crash Course ### Basic Commands - **Navigation**: `cd`, `ls` (or `dir` on Windows) - **File Operations**: `touch` (create a file), `mkdir` (create a directory), `rm` (delete a file) - **Example**: ```bash cd my-project ls touch index.html mkdir css rm old-file.js ``` ### Using CLI for Tooling - **Install Tools**: Use CLI to install and manage development tools. - **Example**: ```bash npm install -g webpack yarn global add parcel-bundler ``` ## Package Management Basics ### What Is a Package Manager? 
- **Definition**: A tool that automates the process of installing, updating, and managing software dependencies. - **Popular Options**: npm (Node Package Manager), Yarn ### Using npm - **Initialization**: ```bash npm init ``` - **Installing Packages**: ```bash npm install lodash ``` - **Managing Dependencies**: ```json { "dependencies": { "lodash": "^4.17.21" } } ``` ### Using Yarn - **Initialization**: ```bash yarn init ``` - **Installing Packages**: ```bash yarn add lodash ``` - **Managing Dependencies**: ```json { "dependencies": { "lodash": "^4.17.21" } } ``` ## Introducing a Complete Toolchain ### Setting Up a Development Environment - **Tools to Install**: - Node.js and npm - Webpack - Babel (for JavaScript transpiling) - ESLint (for linting) - Prettier (for code formatting) - **Example Configuration**: ```bash npm init -y npm install --save-dev webpack webpack-cli babel-loader @babel/core @babel/preset-env eslint prettier ``` ### Webpack Configuration - **Basic Setup**: ```javascript // webpack.config.js const path = require('path'); module.exports = { entry: './src/index.js', output: { filename: 'bundle.js', path: path.resolve(__dirname, 'dist') }, module: { rules: [ { test: /\.js$/, exclude: /node_modules/, use: { loader: 'babel-loader', options: { presets: ['@babel/preset-env'] } } } ] } }; ``` ### ESLint and Prettier Configuration - **ESLint Setup**: ```javascript // .eslintrc.js module.exports = { env: { browser: true, es6: true }, extends: 'eslint:recommended', parserOptions: { ecmaVersion: 12, sourceType: 'module' }, rules: { 'no-console': 'off' } }; ``` - **Prettier Setup**: ```json // .prettierrc { "singleQuote": true, "trailingComma": "es5" } ``` ## Deploying Our App ### Preparing for Deployment - **Build the Application**: ```bash npm run build ``` ### Deployment Options - **Static Hosting**: GitHub Pages, Netlify - **Example**: Deploying to GitHub Pages ```bash npm install --save-dev gh-pages ``` - **Add Scripts**: ```json "scripts": { 
"predeploy": "npm run build", "deploy": "gh-pages -d dist" } ``` - **Deploy**: ```bash npm run deploy ``` ## Additional Topics ### Version Control with Git - **Importance**: Track changes, collaborate with others. - **Basic Commands**: ```bash git init git add . git commit -m "Initial commit" git remote add origin <repository-url> git push -u origin master ``` ### Using a CSS Preprocessor - **Options**: SASS, LESS - **Example**: Installing and using SASS ```bash npm install sass ``` ```scss // styles.scss $primary-color: #333; body { color: $primary-color; } ``` ### Setting Up a Development Server - **Tools**: Live Server, BrowserSync - **Example**: Using Live Server ```bash npm install -g live-server live-server ``` By mastering these client-side web development tools, you can significantly enhance your workflow, build more efficient and maintainable applications, and streamline the deployment process. Happy coding!
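For reference, the scripts shown piecemeal above usually live side by side in one `package.json`, so the whole team runs the same commands. A hedged sketch (script names and the `src`/`dist` paths are illustrative, and `webpack`, `eslint`, `prettier`, and `gh-pages` are assumed to be installed as dev dependencies):

```json
{
  "name": "my-project",
  "version": "1.0.0",
  "scripts": {
    "lint": "eslint src",
    "format": "prettier --write src",
    "build": "webpack --mode production",
    "predeploy": "npm run build",
    "deploy": "gh-pages -d dist"
  }
}
```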
dharamgfx
1,882,085
Best beaches in the world
This is a submission for Frontend Challenge v24.04.17, CSS Art: 09-June-2024 ...
0
2024-06-09T14:04:52
https://dev.to/nishanthi_s/best-beaches-in-the-world-40p8
frontendchallenge, devchallenge, css
This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: 09-June-2024 ## Inspiration It's my first time doing something out of my comfort zone. I'm excited to participate in this challenge and eager to take on even more challenges to improve my skills. <!-- What are you highlighting today? --> ## Demo [Demo link](https://6665af80971505d6121bda73--best-beaches-in-the-world.netlify.app/) [Github](https://github.com/Nisha091999/Best-Beaches-in-the-World) CSS Font from [Google Font](https://fonts.google.com/) Hosted the website using [netlify](https://www.netlify.com/) <!-- Show us your CSS Art! You can directly embed an editor into this post (see the FAQ section of the challenge page) or you can share an image of your project and share a public link to the code. --> ## Journey It's been a few months since I last worked on a web application, and I learned a lot during this challenge. Even though I didn't have much time to make it more creative, I'm glad that I participated in it. <!-- Tell us about your process, what you learned, anything you are particularly proud of, what you hope to do next, etc. --> <!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. --> <!-- We encourage you to consider adding a license for your code. 
--> ## MIT License Copyright (c) 2024 Nishanthi Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. <!-- Don't forget to add a cover image to your post (if you want). --> <!-- Thanks for participating! -->
nishanthi_s
1,882,084
Level Up: Developing Engaging JavaScript Games for the Web🚀
Creating web games with JavaScript has become increasingly popular and accessible, offering...
0
2024-06-09T14:02:04
https://dev.to/dharamgfx/level-up-developing-engaging-javascript-games-for-the-web-18e3
webdev, javascript, gamedev, programming
Creating web games with JavaScript has become increasingly popular and accessible, offering developers a variety of tools and techniques to craft engaging experiences. This post will guide you through the essential components of web game development, covering everything from APIs and technologies to techniques and tutorials. ## Introduction ### Introduction to JavaScript Game Development - **Definition and Scope**: JavaScript game development involves creating interactive games that run in web browsers using HTML5, CSS, and JavaScript. - **Evolution**: The field has evolved significantly with advancements in web technologies, enabling complex and visually rich games. ## Anatomy of a Web Game ### Key Components - **HTML5**: Provides the structure for the game. - **CSS**: Styles the game's visual elements. - **JavaScript**: Implements the game logic and interactivity. ### Example ```html <!DOCTYPE html> <html> <head> <title>Simple Web Game</title> <style> canvas { border: 1px solid black; } </style> </head> <body> <canvas id="gameCanvas" width="800" height="600"></canvas> <script src="game.js"></script> </body> </html> ``` ## APIs for Game Development ### Canvas API - **Functionality**: Used for drawing graphics and animations. - **Example**: ```javascript const canvas = document.getElementById('gameCanvas'); const ctx = canvas.getContext('2d'); ctx.fillStyle = 'red'; ctx.fillRect(20, 20, 150, 100); ``` ### CSS for Games - **Styling**: Enhance visual aspects of the game elements. - **Example**: ```css #gameCanvas { background-color: lightblue; } ``` ### Fullscreen API - **Immersion**: Allows games to be displayed in fullscreen mode. - **Example**: ```javascript document.documentElement.requestFullscreen(); ``` ### Gamepad API - **Controller Support**: Integrate gamepad input for enhanced gameplay. - **Example**: ```javascript window.addEventListener("gamepadconnected", function(event) { console.log("Gamepad connected at index %d: %s. 
%d buttons, %d axes.", event.gamepad.index, event.gamepad.id, event.gamepad.buttons.length, event.gamepad.axes.length); }); ``` ### IndexedDB - **Storage**: Store game data locally for persistence. - **Example**: ```javascript let request = indexedDB.open("gameDatabase", 1); request.onsuccess = function(event) { let db = event.target.result; console.log("Database opened successfully"); }; ``` ### Pointer Lock API - **Control**: Captures the mouse pointer for more immersive controls. - **Example**: ```javascript document.getElementById('gameCanvas').requestPointerLock(); ``` ### SVG for Games - **Vector Graphics**: Use scalable vector graphics for clear visuals. - **Example**: ```html <svg width="100" height="100"> <circle cx="50" cy="50" r="40" stroke="black" stroke-width="3" fill="red" /> </svg> ``` ### Typed Arrays - **Performance**: Efficiently handle binary data. - **Example**: ```javascript let buffer = new ArrayBuffer(16); let view = new Uint32Array(buffer); view[0] = 123456; ``` ### Web Audio API - **Sound**: Manage and play audio effects and music. - **Example**: ```javascript let audioCtx = new (window.AudioContext || window.webkitAudioContext)(); let oscillator = audioCtx.createOscillator(); oscillator.connect(audioCtx.destination); oscillator.start(); ``` ### WebGL - **3D Graphics**: Render 3D graphics in the browser. - **Example**: ```javascript const canvas = document.getElementById('gameCanvas'); const gl = canvas.getContext('webgl'); ``` ### WebRTC - **Real-time Communication**: Enable peer-to-peer communication for multiplayer games. - **Example**: ```javascript let pc = new RTCPeerConnection(); pc.createOffer().then(offer => pc.setLocalDescription(offer)); ``` ### WebSockets - **Networking**: Establish persistent connections for real-time data exchange. 
- **Example**: ```javascript let socket = new WebSocket("ws://game-server.example.com"); socket.onmessage = function(event) { console.log(event.data); }; ``` ### WebVR/WebXR - **Virtual Reality**: Create immersive VR experiences. - **Example**: ```javascript navigator.xr.requestSession('immersive-vr').then((session) => { console.log('VR session started'); }); ``` ### Web Workers - **Concurrency**: Run scripts in background threads. - **Example**: ```javascript let worker = new Worker('worker.js'); worker.postMessage('Hello, worker!'); ``` ### XMLHttpRequest - **Data Fetching**: Fetch data from a server. - **Example**: ```javascript let xhr = new XMLHttpRequest(); xhr.open('GET', 'https://api.example.com/data', true); xhr.send(); ``` ## Techniques ### Using Async Scripts for asm.js - **Optimization**: Improve loading times with async scripts. - **Example**: ```html <script async src="module.js"></script> ``` ### Optimizing Startup Performance - **Efficiency**: Techniques to reduce load times. - **Example**: Lazy loading assets and scripts. ### Using WebRTC Peer-to-Peer Data Channels - **Real-time Multiplayer**: Enable direct data transfer between players. - **Example**: ```javascript let dataChannel = pc.createDataChannel("gameData"); ``` ### Audio for Web Games - **Sound Design**: Enhance the game experience with audio. - **Example**: Using Web Audio API for sound effects. ### 2D Collision Detection - **Game Physics**: Detect and respond to collisions. - **Example**: ```javascript function isColliding(rect1, rect2) { return !(rect1.right < rect2.left || rect1.left > rect2.right || rect1.bottom < rect2.top || rect1.top > rect2.bottom); } ``` ### Tiles and Tilemaps Overview - **Level Design**: Use tiles to create game levels. - **Example**: ```javascript const tileSize = 32; const map = [ [0, 1, 0], [0, 1, 0], [0, 0, 0] ]; ``` ### 3D Games on the Web - **Overview**: Key concepts and tools for 3D game development. 
#### Explaining Basic 3D Theory - **Principles**: Understanding coordinates, meshes, lighting, and cameras. ### Building up a Basic Demo with A-Frame - **A-Frame**: An easy-to-use framework for 3D and VR. - **Example**: ```html <a-scene> <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box> </a-scene> ``` ### Building up a Basic Demo with Babylon.js - **Babylon.js**: A powerful 3D engine for games. - **Example**: ```javascript const canvas = document.getElementById('gameCanvas'); const engine = new BABYLON.Engine(canvas, true); const scene = new BABYLON.Scene(engine); ``` ### Building up a Basic Demo with PlayCanvas - **PlayCanvas**: A web-first game engine for 3D games. - **Example**: ```html <script src="https://code.playcanvas.com/playcanvas-stable.min.js"></script> ``` ### Building up a Basic Demo with Three.js - **Three.js**: A popular library for 3D graphics. - **Example**: ```javascript const scene = new THREE.Scene(); const camera = new THREE.PerspectiveCamera(75, window.innerWidth/window.innerHeight, 0.1, 1000); ``` ### WebXR - **Extended Reality**: Build immersive AR and VR experiences on the web. ### 3D Collision Detection - **Physics in 3D**: Implement collision detection in 3D environments. - **Example**: ```javascript let box1 = new THREE.Box3().setFromObject(mesh1); let box2 = new THREE.Box3().setFromObject(mesh2); if (box1.intersectsBox(box2)) { console.log("Collision detected!"); } ``` ### Bounding Volume Collision Detection with THREE.js - **Optimization**: Efficiently detect collisions using bounding volumes. - **Example**: ```javascript let sphere1 = new THREE.Sphere(new THREE.Vector3(0, 0, 0), 5); let sphere2 = new THREE.Sphere(new THREE.Vector3(1, 1, 1), 5); if (sphere1.intersectsSphere(sphere2)) { console.log ("Sphere collision detected!"); } ``` ## Implementing Game Control Mechanisms ### Control Mechanisms - **Types**: Mobile touch, desktop mouse and keyboard, gamepad. 
### Mobile Touch - **Touch Controls**: Use touch events for mobile gameplay. - **Example**: ```javascript canvas.addEventListener('touchstart', function(event) { console.log('Touch start'); }); ``` ### Desktop with Mouse and Keyboard - **Traditional Controls**: Capture mouse and keyboard input. - **Example**: ```javascript document.addEventListener('keydown', function(event) { console.log(`Key pressed: ${event.key}`); }); ``` ### Desktop with Gamepad - **Gamepad Integration**: Use game controllers for input. - **Example**: ```javascript window.addEventListener("gamepadconnected", function(event) { console.log("Gamepad connected"); }); ``` ## Other Tutorials ### 2D Breakout Game Using Pure JavaScript - **Project**: Create a classic breakout game. - **Example**: ```javascript // Game initialization code here ``` ### 2D Breakout Game Using Phaser - **Framework**: Develop the game using Phaser. - **Example**: ```javascript const config = { type: Phaser.AUTO, width: 800, height: 600, scene: { preload: preload, create: create, update: update } }; ``` ### 2D Maze Game with Device Orientation - **Mobile Interaction**: Control a maze game using device orientation. - **Example**: ```javascript window.addEventListener('deviceorientation', function(event) { console.log(`Alpha: ${event.alpha}, Beta: ${event.beta}, Gamma: ${event.gamma}`); }); ``` ### 2D Platform Game Using Phaser - **Side-scrolling**: Develop a platformer with jumping mechanics. - **Example**: ```javascript // Platformer game code here ``` ## Publishing Games ### Publishing Games Overview - **Steps**: From development to launch. ### Game Distribution - **Platforms**: Where to publish your games (web portals, app stores). - **Example**: Uploading to itch.io or the Chrome Web Store. ### Game Promotion - **Marketing**: Strategies to promote your game. - **Example**: Social media marketing and game trailers. ### Game Monetization - **Revenue**: How to make money from your game. 
- **Example**: Ads, in-app purchases, and premium versions. By understanding these components and techniques, you'll be well-equipped to develop and publish engaging web games using JavaScript. Whether you're creating a simple 2D game or a complex 3D experience, the web offers a rich ecosystem for game development. Happy coding!
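Pulling two of the pieces above together — the update step and 2D AABB collision detection — here is a minimal, framework-free sketch. All names are illustrative, and the timestep of 1 second is chosen purely to keep the arithmetic exact; in a real game you would drive `step` from `requestAnimationFrame`.

```javascript
// Move an entity by its velocity over dt seconds (pure function).
function step(entity, dt) {
  return {
    ...entity,
    x: entity.x + entity.vx * dt,
    y: entity.y + entity.vy * dt,
  };
}

// Axis-aligned bounding-box overlap test on { x, y, w, h } rectangles.
function overlaps(a, b) {
  return (
    a.x < b.x + b.w &&
    a.x + a.w > b.x &&
    a.y < b.y + b.h &&
    a.y + a.h > b.y
  );
}

// Advance the world until the player reaches the goal or time runs out.
function runUntilCollision(player, goal, dt, maxSteps) {
  let p = player;
  for (let i = 0; i < maxSteps; i++) {
    p = step(p, dt);
    if (overlaps(p, goal)) return { hit: true, steps: i + 1, player: p };
  }
  return { hit: false, steps: maxSteps, player: p };
}

const player = { x: 0, y: 0, w: 10, h: 10, vx: 5, vy: 0 };
const goal = { x: 100, y: 0, w: 10, h: 10 };
const result = runUntilCollision(player, goal, 1, 100);
console.log(result.hit, result.steps); // → true 19
```

Because the functions are pure, the logic can be unit-tested in Node without a canvas or DOM.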
dharamgfx
1,882,083
Java 8 Lambda Expressions and Functional Interfaces
Overview : Java 8 introduced Lambda Expressions and Functional Interfaces, which bring functional...
0
2024-06-09T14:01:39
https://dev.to/abhishek999/java-8-lambda-expressions-and-functional-interfaces-3403
java, java8, lambda, functionalinterfaces
**Overview:** Java 8 introduced Lambda Expressions and Functional Interfaces, which bring functional programming capabilities to Java. These features allow for more concise and readable code, especially when working with collections and performing common tasks like filtering, mapping, and reducing. By the end of this blog, we will understand: **what Lambda Expressions are, what Functional Interfaces are, how to create and use Lambda Expressions, and common use cases for Lambda Expressions and Functional Interfaces.** **What are Lambda Expressions?** Lambda Expressions are a way to provide clear and concise syntax for writing anonymous methods (functions). They enable you to treat functionality as a method argument, or pass a block of code around as data. **Syntax of Lambda Expressions:** The basic syntax of a lambda expression is: ``` (parameters) -> expression or (parameters) -> { statements; } ``` **Example: Simple Lambda Expression** ``` // Traditional way using an anonymous class Runnable runnable = new Runnable() { @Override public void run() { System.out.println("Hello, world!"); } }; // Using a lambda expression Runnable lambdaRunnable = () -> System.out.println("Hello, world!"); ``` **What are Functional Interfaces?** A Functional Interface is an interface with a single abstract method. It can have multiple default or static methods but only one abstract method. Lambda expressions can be used to instantiate functional interfaces. **Example: Defining a Functional Interface** ``` @FunctionalInterface public interface MyFunctionalInterface { void myMethod(); } ``` Java 8 includes several built-in functional interfaces in the java.util.function package, such as Predicate, Function, Consumer, and Supplier. **Creating and Using Lambda Expressions:** **Example 1: Using Predicate Interface** The Predicate interface represents a boolean-valued function of one argument. 
``` import java.util.function.Predicate; public class Main { public static void main(String[] args) { Predicate<String> isEmpty = (str) -> str.isEmpty(); System.out.println(isEmpty.test("")); // Output: true System.out.println(isEmpty.test("Hello")); // Output: false } } ``` **Example 2: Using Function Interface** The Function interface represents a function that takes one argument and produces a result. ``` import java.util.function.Function; public class Main { public static void main(String[] args) { Function<Integer, String> intToString = (num) -> "Number: " + num; System.out.println(intToString.apply(5)); // Output: Number: 5 } } ``` **Example 3: Using Consumer Interface** The Consumer interface represents an operation that takes a single argument and returns no result. ``` import java.util.function.Consumer; public class Main { public static void main(String[] args) { Consumer<String> printUpperCase = (str) -> System.out.println(str.toUpperCase()); printUpperCase.accept("hello"); // Output: HELLO } } ``` **Example 4: Using Supplier Interface** The Supplier interface represents a supplier of results, which doesn't take any arguments and returns a result. ``` import java.util.function.Supplier; public class Main { public static void main(String[] args) { Supplier<String> helloSupplier = () -> "Hello, world!"; System.out.println(helloSupplier.get()); // Output: Hello, world! 
} } ``` **Common Use Cases for Lambda Expressions and Functional Interfaces :** **Use Case 1: Filtering Collections** ``` import java.util.Arrays; import java.util.List; import java.util.stream.Collectors; public class Main { public static void main(String[] args) { List<String> names = Arrays.asList("Alice", "Bob", "Charlie", "David"); // Filter names that start with 'A' List<String> filteredNames = names.stream() .filter(name -> name.startsWith("A")) .collect(Collectors.toList()); filteredNames.forEach(System.out::println); // Output: Alice } } ``` **Use Case 2: Mapping Collections** ``` import java.util.Arrays; import java.util.List; import java.util.stream.Collectors; public class Main { public static void main(String[] args) { List<String> names = Arrays.asList("Alice", "Bob", "Charlie", "David"); // Convert names to uppercase List<String> upperCaseNames = names.stream() .map(String::toUpperCase) .collect(Collectors.toList()); upperCaseNames.forEach(System.out::println); // Output: ALICE, BOB, CHARLIE, DAVID } } ``` **Use Case 3: Reducing Collections** ``` import java.util.Arrays; import java.util.List; public class Main { public static void main(String[] args) { List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5); // Sum of all numbers int sum = numbers.stream() .reduce(0, Integer::sum); System.out.println("Sum: " + sum); // Output: Sum: 15 } } ``` **Use Case 4: Creating Custom Functional Interfaces** ``` @FunctionalInterface interface MathOperation { int operate(int a, int b); } public class Main { public static void main(String[] args) { // Define lambda expressions for addition and multiplication MathOperation addition = (a, b) -> a + b; MathOperation multiplication = (a, b) -> a * b; System.out.println("Addition: " + addition.operate(5, 3)); // Output: Addition: 8 System.out.println("Multiplication: " + multiplication.operate(5, 3)); // Output: Multiplication: 15 } } ``` **Summary** Lambda Expressions and Functional Interfaces are powerful features in 
Java 8 that enable you to write more concise, readable, and functional-style code. They are particularly useful for operations on collections and data, allowing you to: **Filter:** Select elements based on a condition. **Map:** Transform elements. **Reduce:** Combine elements into a single result. **Custom Functional Interfaces:** Define your own interfaces for specific tasks. **Happy Coding...**
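Beyond the four built-in interfaces shown above, lambdas and method references also compose well with `Comparator`'s default methods such as `thenComparing`. A small self-contained sketch (class and method names are my own, not from the original post):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

public class ComparatorComposition {

    // Sort by length first, then alphabetically for strings of equal length,
    // built entirely from method references.
    static List<String> sortByLengthThenAlpha(List<String> words) {
        List<String> copy = new ArrayList<>(words);
        copy.sort(Comparator.comparingInt(String::length)
                            .thenComparing(Comparator.naturalOrder()));
        return copy;
    }

    public static void main(String[] args) {
        List<String> words = Arrays.asList("banana", "fig", "apple", "kiwi", "date");
        System.out.println(sortByLengthThenAlpha(words));
        // Output: [fig, date, kiwi, apple, banana]
    }
}
```

Note how `date` sorts before `kiwi`: both have length 4, so the tie is broken by the second comparator.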
abhishek999
1,882,082
Why Creating an ERD is Essential Before Starting Your Backend Project.
In the intricate realm of software development, meticulous planning is as critical as the execution...
0
2024-06-09T14:00:09
https://dev.to/yelethe1st/why-creating-an-erd-is-essential-before-starting-your-backend-project-40c1
backenddevelopment, databasedesign, erd, projectplanning
In the intricate realm of software development, meticulous planning is as critical as the execution itself. One foundational step that often gets overlooked in the rush to code is the creation of an Entity-Relationship Diagram (ERD). This diagram serves as a blueprint for the database architecture, ensuring clarity, efficiency, and alignment throughout the development process. In this article, we will explore the importance of ERDs, provide detailed steps for creating one, and illustrate how leveraging professional tools can enhance this essential practice. ## Understanding ERD: A Detailed Overview An Entity-Relationship Diagram (ERD) is a visual representation of the database's logical structure. It illustrates the entities within the system, their attributes, and the relationships between them. By laying out these components, developers can see how data interrelates, identifying potential issues and areas for optimization early in the development process. **Components of an ERD** 1. Entities: Represent major objects or concepts within the system (e.g., "User," "Order," "Product"). 2. Attributes: Describe properties or details of entities (e.g., "username," "email" for a "User"). 3. Relationships: Define how entities interact with each other (e.g., a "User" can place multiple "Orders"). 4. Constraints: Include primary keys, foreign keys, and unique constraints to enforce data integrity. ## The Strategic Advantage of an ERD **1. Enhanced Clarity and Communication** Creating an ERD ensures that all team members have a unified understanding of the database structure. It acts as a common language, bridging the gap between developers, database administrators, and non-technical stakeholders. This shared understanding reduces miscommunication and aligns everyone with the project’s objectives. **2. Identifying and Resolving Potential Issues Early** An ERD allows for the early detection of design flaws and inconsistencies. 
By visualizing the data model, developers can spot redundant data, missing relationships, and other potential issues before they become costly problems. This proactive approach saves time and resources, as it is significantly easier to modify a diagram than to refactor an entire database.

**3. Facilitating Scalable and Maintainable Designs**

A well-constructed ERD aids in designing a scalable and maintainable database. It helps developers think through the implications of their design choices, ensuring that the system can grow and adapt to future requirements. This foresight is crucial in high-level backend projects, where scalability and maintainability are often top priorities.

**4. Streamlining Development Processes**

With an ERD in place, the actual development process becomes more streamlined. Developers have a clear roadmap to follow, reducing the likelihood of deviating from the planned architecture. This clarity accelerates development, as less time is spent on understanding how different parts of the database interact.

**5. Improving Data Integrity and Consistency**

Data integrity and consistency are paramount in backend systems. An ERD enforces these principles by defining the relationships and constraints within the database. It ensures that data is stored correctly and that the relationships between entities are maintained, leading to a more robust and reliable system.

## Steps to Create an Effective ERD

**1. Identify the Entities**

Begin by identifying all the entities that will be part of your system. Entities are typically nouns like "User," "Order," or "Product." Each entity represents a table in the database. For example:

- User: Represents individuals using the application.
- Product: Represents items available for purchase.
- Order: Represents transactions made by users.

**2. Define the Relationships**

Next, determine how these entities relate to one another. Relationships can be one-to-one, one-to-many, or many-to-many.
Clearly defining these relationships helps in understanding the data flow and interactions. For example:

- A User can place many Orders (one-to-many).
- An Order can include multiple Products (many-to-many).

**3. Detail the Attributes**

For each entity, list its attributes. Attributes are the data points that need to be stored for each entity, such as "username," "email," and "password" for a "User" entity. Ensure that each attribute is relevant and necessary for the system's functionality. For example:

- User: user_id (PK), username, email, password.
- Product: product_id (PK), name, description, price.
- Order: order_id (PK), user_id (FK), order_date, total_amount.

**4. Apply Constraints**

Define any constraints that should be applied to the attributes and relationships. Constraints include primary keys, foreign keys, and unique constraints, which help maintain data integrity and enforce business rules. For example:

- User: user_id as primary key.
- Order: user_id as foreign key referencing User.
- Product: product_id as primary key.

**5. Review and Refine**

Finally, review the ERD with all stakeholders. Ensure that it accurately reflects the requirements and that everyone understands the design. Refine the diagram as needed based on feedback and insights.

## Example ERD

Below is an example ERD for a simple e-commerce system:

![An example of an ERD for a simple e-commerce system](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwpw90xo5gs6unogd9nm.png)

This diagram illustrates the relationships between Users, Products, and Orders, highlighting the key attributes and constraints.

## Tools for Creating ERDs

Several platforms can facilitate the creation of ERDs, offering various features to enhance the modeling process. Here are a few widely used ones:

**1. Lucidchart**

Lucidchart is a versatile diagramming tool that supports collaborative ERD creation. Its intuitive interface and integration with various data sources make it a popular choice for teams.
**2. Microsoft Visio**

Microsoft Visio is a robust diagramming application that offers extensive templates and tools for creating detailed ERDs. It is especially useful for enterprise-level projects due to its integration with other Microsoft products.

**3. Draw.io**

Draw.io is a free, web-based diagramming tool that provides a straightforward interface for creating ERDs. It offers flexibility and ease of use, making it suitable for both beginners and experienced developers.

**4. MySQL Workbench**

MySQL Workbench is a unified visual tool for database architects, developers, and DBAs. It provides data modeling, SQL development, and comprehensive administration tools for server configuration, user administration, and more.

**5. ER/Studio Data Architect**

ER/Studio Data Architect by IDERA is a powerful data modeling tool that allows for complex ERD creation and management. It offers features such as model-driven collaboration and cross-platform database support.

**6. Eraser.io**

Eraser.io is a modern, cloud-based diagramming tool that simplifies the creation of ERDs and other diagrams. Its user-friendly interface and collaboration features make it ideal for distributed teams working on complex projects.

## Conclusion

Creating an ERD before starting a high-level backend project is a strategic move that pays dividends throughout the development lifecycle. It provides clarity, identifies potential issues early, ensures scalability, streamlines development processes, and enhances data integrity. By investing time in this crucial planning step and leveraging professional tools, developers can build more robust, efficient, and maintainable systems, setting the stage for successful project execution.

Incorporating ERDs into your development workflow is not just a best practice; it is a necessity for delivering high-quality software in a structured and predictable manner.
As senior developers at leading organizations know, the discipline of thorough planning and clear communication is the bedrock of successful software engineering.
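The example schema described above (steps 3–4) can be sanity-checked directly in code before any real database work begins. Below is a minimal sketch using Python's built-in `sqlite3` module; the table and column names mirror the article's example, while the column types and the `order_item` junction table (needed for the many-to-many Order/Product relationship) are illustrative assumptions:

```python
import sqlite3

# In-memory database; the DDL mirrors the example ERD:
# User (PK user_id), Product (PK product_id),
# Order (PK order_id, FK user_id -> User).
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite disables FK enforcement by default

conn.executescript("""
CREATE TABLE user (
    user_id  INTEGER PRIMARY KEY,
    username TEXT NOT NULL UNIQUE,
    email    TEXT NOT NULL UNIQUE,
    password TEXT NOT NULL
);
CREATE TABLE product (
    product_id  INTEGER PRIMARY KEY,
    name        TEXT NOT NULL,
    description TEXT,
    price       REAL NOT NULL
);
CREATE TABLE "order" (
    order_id     INTEGER PRIMARY KEY,
    user_id      INTEGER NOT NULL REFERENCES user(user_id),
    order_date   TEXT NOT NULL,
    total_amount REAL NOT NULL
);
-- Junction table resolving the many-to-many Order/Product relationship.
CREATE TABLE order_item (
    order_id   INTEGER NOT NULL REFERENCES "order"(order_id),
    product_id INTEGER NOT NULL REFERENCES product(product_id),
    PRIMARY KEY (order_id, product_id)
);
""")

conn.execute("INSERT INTO user VALUES (1, 'alice', 'alice@example.com', 'secret')")
conn.execute("INSERT INTO \"order\" VALUES (1, 1, '2024-06-09', 19.99)")

# The foreign key constraint rejects an order for a non-existent user.
try:
    conn.execute("INSERT INTO \"order\" VALUES (2, 99, '2024-06-09', 5.00)")
    print("constraint missed")
except sqlite3.IntegrityError:
    print("foreign key enforced")
```

Turning the diagram into executable DDL like this is a cheap way to verify that every relationship in the ERD actually has a key to hang on.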
yelethe1st
1,882,077
Compile your NodeJS application to single file executable
Hi great developers :) In this article, I am trying to share with you my small experience about...
0
2024-06-09T13:59:07
https://dev.to/sudospace/compile-your-nodejs-application-to-single-file-executable-5aoe
typescript, javascript, node
Hi great developers :) In this article, I'd like to share my small experience with converting Node.js projects into a single-file executable. In short, there are several methods you can use.

1. **Node.js built-in single executable applications feature**

Here I link the [NodeJS documentation](https://nodejs.org/api/single-executable-applications.html), because it is straightforward. But it is still an experimental feature and may have some issues. My problem with it was that when I compiled my program on Linux, it showed me the message `Segmentation fault (core dumped)`. I tested the same steps on Windows, and there it worked without any problems.

2. **Bun**

You can use [Bun](https://bun.sh/docs/bundler/executables) because it supports compilation to a single-file executable and is very easy to work with. But not all npm packages developed for Node.js work well on it. If your program runs well on Bun, you can use Bun for this. The problem I had with Bun was that it couldn't work properly with `node:fs.WriteStream`.

3. **Deno**

I have not tried Deno, but you can read its [documentation](https://docs.deno.com/runtime/manual/tools/compiler). Of course, I don't think Deno is fully compatible with Node.js packages (as I understood from its documentation).

4. **The method I used, which worked**

I used pkg. I don't mean [vercel pkg](https://www.npmjs.com/package/pkg), since its development has stopped; instead we can use [yao-pkg](https://www.npmjs.com/package/@yao-pkg/pkg), an active fork of vercel pkg that also supports Node 20.
Let's implement an example together.

Make a folder and create a file named `package.json` in it:

```json
{
  "name": "test-bin",
  "version": "1.0.0",
  "main": "app.js",
  "scripts": {
    "build-ts": "tsc",
    "build-linux-x64": "pkg --targets node20-linux-x64 dist/app.js -o app-linux-x64"
  },
  "keywords": [],
  "author": "",
  "license": "ISC",
  "description": "",
  "devDependencies": {
    "@types/express": "^4.17.21",
    "@types/node": "^20.14.2",
    "@yao-pkg/pkg": "^5.11.5",
    "typescript": "^5.4.5"
  },
  "dependencies": {
    "express": "^4.19.2"
  }
}
```

Create a file named `tsconfig.json` with this content:

```json
{
  "compilerOptions": {
    "target": "es2022",
    "lib": ["es2022"],
    "module": "commonjs",
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "skipLibCheck": true,
    "rootDir": "./src",
    "outDir": "./dist"
  }
}
```

Make a `src` folder and create a file named `app.ts` in it:

```typescript
import express from "express";
import router from "./routes/router";

const app = express();

app.use(express.urlencoded({ extended: true }));
app.use(router);

app.listen(3000, () => {
  console.log("SERVER IS RUNNING...");
});
```

Make a folder named `routes` in `src` and create a file named `router.ts` in it:

```typescript
import { Router } from "express";

const router = Router();

router.get("/", (req, res) => {
  res.status(200).json({ message: "I work fine :D" });
});

export default router;
```

Install the npm packages:

```shell
npm i
```

Run these commands to compile your project into a single-file executable:

```shell
npm run build-ts
npm run build-linux-x64
```

Run the executable file:

```shell
./app-linux-x64
```
sudospace
1,882,081
Kevlar Motorcycle Shirts: The Ultimate Blend of Style and Protection
Riding a motorcycle is an exhilarating experience, but safety should always be a priority....
0
2024-06-09T13:57:57
https://dev.to/terryjohn/kevlar-motorcycle-shirts-the-ultimate-blend-of-style-and-protection-f6d
beginners, productivity
Riding a motorcycle is an exhilarating experience, but safety should always be a priority. Traditional leather jackets have been the go-to for many riders, but a new trend is emerging: Kevlar motorcycle shirts. These shirts, also known as Kevlar riding shirts or Kevlar armored shirts, combine style with cutting-edge protective technology, offering a comfortable and safe alternative for motorcyclists. In this article, we'll explore what Kevlar motorcycle shirts are, their benefits, and why they might be the perfect addition to your riding gear.

What is a Kevlar Motorcycle Shirt?

A [Kevlar motorcycle shirt](https://www.greatbikersgear.com/kevlar-shirts/) is a type of protective clothing designed for motorcyclists.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h7pft4ezj42u0ge1f3no.jpg)

It's made using Kevlar, a high-strength synthetic fiber known for its exceptional durability and resistance to abrasion. Kevlar is commonly used in bulletproof vests, but its applications in motorcycle gear, such as Kevlar protective shirts and Kevlar biker shirts, have been gaining popularity due to its lightweight nature and protective properties.

Benefits of Kevlar Motorcycle Shirts

Superior Protection

Kevlar motorcycle shirts, or motorcycle shirts with Kevlar, offer excellent protection against abrasion and impacts. In the event of a fall, the Kevlar fibers help prevent road rash and reduce the risk of serious injury. Unlike regular shirts, Kevlar shirts are reinforced in critical areas, such as the shoulders, elbows, and back, providing enhanced safety for riders.

Lightweight and Comfortable

One of the main advantages of Kevlar motorcycle shirts over traditional leather jackets is their lightweight design. They offer a more comfortable riding experience, especially during hot weather. The breathable fabric ensures that riders stay cool and dry, making long rides more enjoyable.
Whether referred to as Kevlar motorcycle clothing or a Kevlar armored shirt, these garments provide the comfort and flexibility riders need.

Stylish Design

Kevlar motorcycle shirts are designed to look like regular casual wear. They come in various styles and colors, allowing riders to maintain a fashionable appearance both on and off the bike. This versatility makes them a popular choice for those who want protection without compromising on style. A Kevlar motorcycle shirt, whether a classic flannel or a modern design, can seamlessly blend into your wardrobe.

Versatility

These shirts are perfect for both city commuting and long-distance touring. They can be worn on their own or layered under a jacket for added warmth and protection during colder months. Their casual look also means you can wear them while running errands or meeting friends without feeling out of place. The term motorcycle safety shirt aptly describes their dual functionality of safety and everyday usability.

How to Choose the Right Kevlar Motorcycle Shirt

Check the Level of Protection

Ensure that the shirt offers adequate protection in critical areas. Look for reinforced panels on the shoulders, elbows, and back. Some shirts also come with pockets for inserting additional armor, making them an ideal Kevlar motorcycle gear choice.

Fit and Comfort

A good Kevlar motorcycle shirt should fit snugly but comfortably; it shouldn't be too tight or too loose. Consider trying on different sizes and brands to find the best fit for your body type. Whether it's a Kevlar riding shirt or a Kevlar protective shirt for motorcyclists, comfort is key.

Style and Design

Choose a design that matches your personal style. Whether you prefer a classic flannel look or a more modern design, there are plenty of options available. Remember, a shirt you love to wear is one you'll be more likely to use regularly. The variety of Kevlar motorcycle shirts available ensures there's something for everyone.

Breathability and Ventilation

Look for shirts with breathable materials and ventilation features. These will keep you cool during hot rides and prevent excessive sweating. A well-ventilated Kevlar armored shirt can make a significant difference in your riding comfort.

Reviews and Ratings

Before making a purchase, check online reviews and ratings. Other riders' experiences can provide valuable insights into a shirt's performance, durability, and comfort, and can help you make a more informed decision about Kevlar motorcycle clothing.

Conclusion

A Kevlar motorcycle shirt is a must-have for any serious rider. Combining style, comfort, and protection, these shirts offer a practical alternative to traditional leather jackets. Whether you're a daily commuter or a weekend warrior, investing in a quality Kevlar motorcycle shirt will enhance your riding experience and keep you safe on the road. For a wide selection of high-quality Kevlar motorcycle shirts, consider shopping at Great Bikers Gear. They offer a range of stylish and protective options to suit every rider's needs.
terryjohn
1,882,080
Mastering JavaScript MathML: Writing Mathematics with MathML
Mathematical Markup Language (MathML) allows you to write complex mathematical expressions in web...
0
2024-06-09T13:55:53
https://dev.to/dharamgfx/mastering-javascript-mathml-writing-mathematics-with-mathml-1fll
webdev, beginners, javascript, programming
Mathematical Markup Language (MathML) allows you to write complex mathematical expressions in web pages seamlessly. This guide will take you through the essential topics to get started with MathML and explore its various elements.

## MathML First Steps

### What is MathML?

- **Definition**: MathML (Mathematical Markup Language) is an XML-based markup language designed to display mathematical notations.
- **Purpose**: It allows browsers to render mathematical expressions and provides a way to include math in web pages.

### Basic Structure

- **MathML Tags**: `<math>`, `<mrow>`, `<mi>`, `<mo>`, `<mn>`.
- **Example**:

```html
<math>
  <mrow>
    <mi>x</mi>
    <mo>=</mo>
    <mn>5</mn>
  </mrow>
</math>
```

## Getting Started with MathML

### Setting Up MathML

- **Integration**: MathML can be embedded directly in HTML using the `<math>` tag.
- **Browser Support**: Most modern browsers support MathML, but for complete compatibility, consider using MathJax.

### Simple Expression

- **Inline Math**: Adding inline mathematical notation.

```html
<p>The equation is <math><mrow><mi>x</mi><mo>=</mo><mn>5</mn></mrow></math>.</p>
```

## MathML Text Containers

### Using `<mrow>` and `<mfrac>`

- **Grouping**: `<mrow>` groups multiple elements together.
- **Example**:

```html
<math>
  <mrow>
    <mi>a</mi>
    <mo>+</mo>
    <mi>b</mi>
  </mrow>
</math>
```

### Fractions

- **Using `<mfrac>`**: Represents fractions.
- **Example**:

```html
<math>
  <mfrac>
    <mi>a</mi>
    <mi>b</mi>
  </mfrac>
</math>
```

## MathML Fractions and Roots

### Fractions

- **Creating Fractions**: Utilizing the `<mfrac>` tag (note that numeric literals belong in `<mn>`, not `<mi>`).
- **Example**:

```html
<math>
  <mfrac>
    <mn>1</mn>
    <mn>2</mn>
  </mfrac>
</math>
```

### Roots

- **Square Roots**: Using `<msqrt>`.
- **Example**:

```html
<math>
  <msqrt>
    <mi>x</mi>
  </msqrt>
</math>
```

## MathML Scripted Elements

### Subscripts and Superscripts

- **Subscript (`<msub>`)**: Used for subscript notation.
```html
<math>
  <msub>
    <mi>x</mi>
    <mi>i</mi>
  </msub>
</math>
```

- **Superscript (`<msup>`)**: Used for superscript notation.

```html
<math>
  <msup>
    <mi>x</mi>
    <mn>2</mn>
  </msup>
</math>
```

### Combined Scripts

- **Example**:

```html
<math>
  <msubsup>
    <mi>x</mi>
    <mi>i</mi>
    <mn>2</mn>
  </msubsup>
</math>
```

## MathML Tables

### Creating Matrices

- **Using `<mtable>`**: Represents matrices and tables.
- **Example**:

```html
<math>
  <mtable>
    <mtr>
      <mtd><mi>a</mi></mtd>
      <mtd><mi>b</mi></mtd>
    </mtr>
    <mtr>
      <mtd><mi>c</mi></mtd>
      <mtd><mi>d</mi></mtd>
    </mtr>
  </mtable>
</math>
```

## Three Famous Mathematical Formulas

### Quadratic Formula

- **Expression**:

```html
<math>
  <mi>x</mi>
  <mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo>
      <mi>b</mi>
      <mo>&#xB1;</mo>
      <msqrt>
        <msup>
          <mi>b</mi>
          <mn>2</mn>
        </msup>
        <mo>-</mo>
        <mn>4</mn>
        <mi>a</mi>
        <mi>c</mi>
      </msqrt>
    </mrow>
    <mrow>
      <mn>2</mn>
      <mi>a</mi>
    </mrow>
  </mfrac>
</math>
```

### Pythagorean Theorem

- **Expression**:

```html
<math>
  <msup>
    <mi>a</mi>
    <mn>2</mn>
  </msup>
  <mo>+</mo>
  <msup>
    <mi>b</mi>
    <mn>2</mn>
  </msup>
  <mo>=</mo>
  <msup>
    <mi>c</mi>
    <mn>2</mn>
  </msup>
</math>
```

### Euler's Formula

- **Expression** (the superscript must attach to `e`, with `iπ` grouped as the exponent):

```html
<math>
  <msup>
    <mi>e</mi>
    <mrow>
      <mi>i</mi>
      <mi>&pi;</mi>
    </mrow>
  </msup>
  <mo>+</mo>
  <mn>1</mn>
  <mo>=</mo>
  <mn>0</mn>
</math>
```

## Additional Topics

### MathML Operators

- **Arithmetic Operators**: `+`, `-`, `*`, `/`.
- **Relational Operators**: `=`, `>`, `<`.
- **Example**:

```html
<math>
  <mi>a</mi>
  <mo>+</mo>
  <mi>b</mi>
  <mo>=</mo>
  <mi>c</mi>
</math>
```

### MathML Accents

- **Overlines and Underlines**: Using `<mover>` and `<munder>`.
- **Example**:

```html
<math>
  <mover>
    <mi>x</mi>
    <mo>¯</mo>
  </mover>
</math>
```

### MathML Integrals and Limits

- **Integrals**: Using `<msubsup>` and `<mo>`.

```html
<math>
  <msubsup>
    <mo>&#x222B;</mo>
    <mn>0</mn>
    <mi>∞</mi>
  </msubsup>
  <mi>f</mi>
  <mo>(</mo>
  <mi>x</mi>
  <mo>)</mo>
  <mi>d</mi>
  <mi>x</mi>
</math>
```

- **Limits**: Using `<mo>` and `<munder>`.
```html
<math>
  <munder>
    <mo>lim</mo>
    <mrow>
      <mi>x</mi>
      <mo>→</mo>
      <mn>0</mn>
    </mrow>
  </munder>
  <mrow>
    <mi>f</mi>
    <mo>(</mo>
    <mi>x</mi>
    <mo>)</mo>
  </mrow>
</math>
```

## Conclusion

MathML is a powerful tool for embedding mathematical notation in web pages, providing a wide range of elements and functionalities. From basic expressions to complex equations and matrices, MathML ensures your mathematical content is displayed accurately and beautifully. Start experimenting with MathML in your projects today, and bring the world of mathematics to your web pages with ease!
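For browsers without native MathML support (mentioned under "Browser Support" above), the page can load MathJax to render the same markup. A minimal sketch follows; the CDN URL points at the MathJax v3 MathML-to-CommonHTML bundle, and in production you would pin an exact version rather than the floating `@3` tag:

```html
<!DOCTYPE html>
<html>
  <head>
    <!-- MathJax v3 component that reads MathML input and renders CommonHTML output -->
    <script src="https://cdn.jsdelivr.net/npm/mathjax@3/es5/mml-chtml.js" async></script>
  </head>
  <body>
    <p>The equation is
      <math><mrow><mi>x</mi><mo>=</mo><mn>5</mn></mrow></math>.
    </p>
  </body>
</html>
```

With this fallback in place, the same `<math>` markup shown throughout the article renders consistently whether or not the browser implements MathML natively.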
dharamgfx
1,882,079
Taruhan Olahraga dan Kasino Sunmory33
Dalam dunia permainan online yang menarik, Sunmory33 telah muncul sebagai tujuan utama bagi...
0
2024-06-09T13:53:26
https://dev.to/withorwithout02/taruhan-olahraga-dan-kasino-sunmory33-2j0p
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jmudzupxrnmq341f4f55.jpeg)

In the exciting world of online gaming, Sunmory33 has emerged as a leading destination for sports betting and casino enthusiasts. Whether you are a die-hard sports fan looking to bet on your favorite team or a casino lover seeking thrilling games, [sunmory33](https://wow-withorwithout.com) offers a comprehensive platform that covers all your gaming needs.

History and Background

Sunmory33 started as a small venture but quickly grew into a significant player in the online gaming industry. With a commitment to delivering a superior betting experience, the company has continued to evolve, incorporating the latest technology and expanding its offerings to stay ahead of the competition. The platform's growth reflects its dedication to quality and customer satisfaction.

Sports Betting at Sunmory33

Sunmory33 offers an impressive range of sports for betting enthusiasts. From mainstream sports such as football and basketball to niche sports like cricket and esports, there is something for everyone. Placing a bet is simple: users select their preferred sport, choose the event, and place a wager based on the odds provided. Understanding the different bet types, such as moneyline, point spread, and totals, can sharpen your betting strategy and improve your chances of winning.

Popular Sports for Betting

Football: Football remains the most popular sport to bet on at Sunmory33. With leagues and tournaments from around the world, bettors can wager on matches in the English Premier League, La Liga, Serie A, and more. The platform provides detailed statistics and live updates to help users make informed decisions.

Basketball: Basketball betting is another favorite among Sunmory33 users. The NBA, EuroLeague, and other international leagues offer plenty of opportunities to bet on games. Sunmory33 offers a variety of betting options, including point spreads, over/unders, and player props, making the betting experience engaging and varied.

Horse racing: Horse racing fans will find a full range of events to bet on at Sunmory33. From local races to prestigious international events such as the Kentucky Derby, the platform covers them all. Bettors can explore different bet types, including win, place, and show bets, as well as exotic bets such as exactas and trifectas.

Live Betting Features

Live betting, or in-play betting, is one of the standout features at Sunmory33. This dynamic form of betting allows users to place bets on events as they happen in real time. The advantages of live betting include the ability to react to the action, make decisions based on the current state of the game, and potentially find better odds.

Casino Games at Sunmory33

The casino section of Sunmory33 is equally impressive, offering a wide variety of games to suit every taste. From classic slot games with engaging themes and big jackpots to traditional table games such as blackjack, roulette, and baccarat, there is always entertainment on hand. The platform partners with leading game developers to ensure high-quality graphics and smooth gameplay.

Live Casino Experience

For those seeking a more immersive experience, Sunmory33's live casino is a must-try. Live casino games are streamed in real time with professional dealers, providing an authentic casino atmosphere from the comfort of your home. Popular live games include live blackjack, live roulette, and live baccarat. The interactive nature of live casino games adds an extra layer of excitement and realism.

Bonuses and Promotions

Sunmory33 is known for its generous bonuses and promotions. New users can take advantage of welcome bonuses that boost their initial deposit, giving them more funds to explore the platform. In addition, Sunmory33 offers ongoing promotions such as reload bonuses, cashback offers, and free spins, ensuring that loyal players are continually rewarded. To get the most out of these bonuses, it is important to read the terms and conditions and understand the wagering requirements.

User Experience and Interface

Navigating Sunmory33 is easy thanks to its user-friendly interface. The website layout is intuitive, allowing users to quickly find their preferred sports or casino games. Sunmory33 is also fully optimized for mobile devices, ensuring a smooth experience whether you play on a smartphone or a tablet. User reviews often highlight the platform's ease of use and the smooth performance of its games and betting features.

Security and Fair Play

Security and fairness are top priorities at Sunmory33. The platform operates under a reputable gaming license, ensuring that it complies with strict regulatory standards. Advanced encryption technology protects user data, guaranteeing that personal and financial information is safe from unauthorized access. In addition, Sunmory33 uses random number generators (RNGs) and regularly audits its games to ensure fair outcomes.

Payment Methods

Sunmory33 offers a variety of payment methods to accommodate users from different regions. Whether you prefer credit/debit cards, e-wallets, or bank transfers, the platform supports them all. Depositing funds is quick and easy, with most methods providing instant transactions. Withdrawals are also processed efficiently, although timing can vary depending on the chosen method. Transaction fees are minimal, making the financial side of using Sunmory33 hassle-free.

Customer Support

Reliable customer support is essential for any online gaming platform, and Sunmory33 excels in this area. Users can reach the support team through several channels, including live chat, email, and phone. The support team is known for its responsiveness and professionalism, and usually resolves issues quickly. Whether you have a question about placing a bet, game rules, or account management, Sunmory33's support team is ready to help.

Responsible Gambling

Sunmory33 is committed to promoting responsible gambling. The platform provides a range of tools and resources to help users manage their gambling activity, including deposit limits, self-exclusion options, and access to professional help organizations. By encouraging responsible gambling practices, Sunmory33 ensures the gaming experience remains enjoyable and safe for all users.

Pros and Cons of Sunmory33

Advantages of using Sunmory33:

- Wide range of betting options: from sports betting to a diverse selection of casino games, Sunmory33 offers something for everyone.
- User-friendly interface: easy navigation and mobile compatibility make the platform accessible to all users.
- Generous bonuses: new and existing users benefit from a variety of promotions and bonuses.
- Safe and fair: advanced security measures and fair-play practices ensure a secure gaming environment.

Potential drawbacks:

- Regional restrictions: some users may face restrictions based on their location.
- Wagering requirements: bonuses come with wagering requirements that must be met before withdrawing winnings.

Sunmory33 stands out as a premier platform for sports betting and casino gaming. Its extensive selection, user-friendly interface, and commitment to security and fairness make it a top choice for gaming enthusiasts. Whether you are placing a bet on a big match or spinning the reels on your favorite slot, Sunmory33 offers an engaging and rewarding experience. Visit here: https://wow-withorwithout.com
withorwithout02
1,882,078
We Also Provide You Reviews At A Very Reasonable Price
Buy TrustPilot Reviews Trustpilot Reviews From US And Benefit Your Business Online sales of a...
0
2024-06-09T13:46:50
https://dev.to/alicia1fka/we-also-provide-you-reviews-at-a-very-reasonable-price-42kk
javascript, webdev, beginners, tutorial
Buy Trustpilot Reviews From Us And Benefit Your Business

The online sales of a company depend heavily on the reviews posted by its customers. In fact, it has been observed that as many as 92% of people rely on these reviews when making purchases. It is for this reason that online review sites have popped up for nearly every industry. Customers carry the internet in their pockets today, so these online reviews can make or break the reputation of a brand.

Why Choose Us

We at [Mangocityit understand](https://mangocityit.com/service/buy-trustpilot-reviews/) the importance of these online reviews. You can rest assured that if you buy Trustpilot reviews from us, your business will benefit from it, because whenever customers search for a company on the internet, they check out its Google rating and customer reviews.

We Help in Collecting the Maximum Number of Reviews

Mangocityit realizes that collecting Trustpilot customer reviews is beneficial for both consumers and businesses. Before buying any product or service, customers look for social proof, and for businesses it is important to gather feedback in order to improve in the areas customers care about. It is for these reasons that the importance of these reviews grows every single day. We provide you with genuine reviews because we are in touch with customers throughout the world. Since we follow the Trustpilot Review Guidelines and all our reviews are genuine, there is no chance of being penalized for posting fake reviews. We also collect a number of real Trustpilot reviews to ensure that the company is ranked at the top.
Mangocityit Provides You With The Best And Most Genuine Feedbacks If you buy positive Trustpilot reviews from us, there are no chances of those having any kind of offensive languages. The real trustpilot reviews that we provide you will never have personal details like email id, phone number etc. These reviews will also not violate the privacy or confidentiality of any other person It also will not have any kind of marketing spam The customers will only be providing feedback about a particular product and it will not at all talk about either any kind of service or buying experience. The [trustpilot customer reviews](https://mangocityit.com/service/buy-trustpilot-reviews/ ) posted by us will never be from fake accounts and they will be written only for ethical and also political reasons. The reviewer will always be a genuine product or a service user. We also ensure that the reviews posted by our customers are compatible with the major search engines Customer reviews, as most of you are aware today have a major role to play as far as the Google and other search engine rankings are concerned. It is for this reason that you will need a customer review management tool. This tool will be compatible with the major search engines. Our reviews are verified and therefore they will definitely be counted as “trustworthy”. There are a number of factors that determine the authenticity of a particular website. So the trustpilot customer reviews that we post have a lot of weight. TrustPilot is basically a partner of Google. This is an open platform and therefore anyone can post reviews in them. So once these reviews get posted, there is no way to remove them. If a particular company buy trustpilot reviews from Easy Review, then they can be rest assured that the reviews will definitely help them. We Also Provide You Reviews At A Very Reasonable Price We, at mangocityit help you to buy trustpilot reviews cheap. 
So if you are interested in buying reviews at a reasonable price, you can certainly get in touch with us. We not only provide you with genuine reviews but also ensure that you get these reviews at a reasonable price. We understand that companies have budgets, so we make it possible to buy Trustpilot reviews cheap from our company. There are a number of companies providing Trustpilot reviews, but in our company we ensure that the reviews we post are genuine and positive. We understand that these reviews can actually make or break a brand. We are therefore extremely careful and provide you with reviews that will help you in the best way possible. We also provide constructive feedback to our clients through these reviews. The client is able to understand what customers like about their product and what they do not like, and this way they are able to improve their services or products. How are Trustpilot Reviews necessary for the business? Most potential customers prefer to read reviews and feedback before purchasing a product or service. Owing to bad TrustPilot reviews, customers might leave without making a purchase. Customers are fond of spending more on a business that has several 5-star [Trustpilot reviews.](https://mangocityit.com/service/buy-trustpilot-reviews/ ) Why should I buy positive TrustPilot reviews for my business? By purchasing positive reviews, you will be capable of earning the loyalty of your targeted customers. Irrespective of the business’s industry or niche, you cannot underestimate the importance of these reviews. These reviews play an integral role in the online reputation management (ORM) of the business. In addition to this, these reviews are useful in placing your website on the search engine’s main pages. 
Protect the reputation of the company The online reviews of a company are a reflection of its reputation. Your customers will be encouraged to invest more in your business’s products as existing clients leave positive reviews. You will be capable of beating the competitors and standing ahead in town by purchasing Trustpilot and Google Business reviews. Reach a higher audience for the business It is possible to improve lead generation for the business by purchasing positive reviews. Moreover, TrustPilot reviews are regarded as an attractive option to reach a higher audience. You can leave an everlasting impression on clients as you place an order for positive reviews with 5-star ratings. Strengthen the relationship with customers by investing in TrustPilot Reviews Buying positive TrustPilot reviews offers a helping hand in developing and strengthening the relationship with targeted customers. Do not get overwhelmed, even if a customer leaves negative feedback. Respond to the review professionally, and it will help you strengthen your relationship with potential customers. It will help if you keep in mind that customer relationships form the foundation of a successful business. By purchasing TrustPilot reviews, you will save an ample amount of money and time. A primary benefit of TrustPilot is its audience size. A more robust audience offers assistance in creating more substantial and improved marketing efforts. With a stable and enhanced reputation through positive TrustPilot reviews, you can get no-cost advertising. It is possible to positively affect the buying decisions of potential customers through positive reviews. It will also help increase the potential customer base. If you are looking for a positive way to stay connected to your business’s customers without burning a hole in your pocket, you should purchase the positive TrustPilot reviews we offer. 
The reviews we offer are real and legit, owing to which several business owners have reaped a lot of benefits from them. Business owners looking for an ideal option to enhance the business’s revenue can choose the TrustPilot reviews we offer. How to Get Positive Trustpilot Reviews For Your Business If you are looking to buy positive reviews about your business, then you need to understand how Trustpilot works. This Danish company, which was founded in 2007, specializes in European and North American markets. With over 500 employees, it is one of the world’s leading review sites. It is also easy to submit a review to Trustpilot; just remember to follow the simple steps in the instructions below. You can submit a free profile with Trustpilot. You can respond to all reviews, even those with negative feedback. However, be aware that Trustpilot is a site that takes a strong stance on review authenticity. The platform even provides a process for reporting fake reviews. Once your review has been reported, the company makes the final decision. It is not your responsibility to explain this process. You should also take note that Trustpilot does not ask you for your account credentials. To make sure you get the best reviews, choose a package that suits your budget. Trustpilot packages start from $45 for 5 reviews and range up to $275 for 50. Delivery times range from 1 day to 60 days. The reviews are authentic and written by real people. Some Trustpilot packages even offer a money-back guarantee if you’re not satisfied with the reviews. For your peace of mind, you should opt for a package that includes custom reviews and a money-back guarantee. How to Buy Positive Reviews on Trustpilot One of the best ways to increase your online visibility is to buy positive Trustpilot reviews. The site lets customers leave unbiased reviews about your company. As a result, more potential customers will be convinced to buy from you. 
Ultimately, your goal should be to provide a better service than your competitors. This way, you will earn repeat customers and build a credible online presence. To buy Trustpilot reviews for your business, you simply need to place an order with us and provide the essential details, including your business’s Trustpilot link and review texts (if you have already written them). Our team will then start working on your order and will submit the reviews gradually. Information for all [Disclaimer: https://mangocityit.com/ is not a participant or affiliate of Trustpilot. Their logo, Trustpilot Star, images, name, etc. are their trademarks/copyrights.] If You Want More Information, Just Contact Us Now Email Or Skype – 24 Hours Reply/Contact Email: admin@mangocityit.com Skype: live:mangocityit
alicia1fka
1,882,060
Negative Indexing in Python, with Examples 🐍
Python is known for its simplicity and readability, making it a popular choice for beginners and...
0
2024-06-09T13:41:36
https://dev.to/hichem-mg/negative-indexing-in-python-with-examples-1ind
python, beginners, coding
Python is known for its simplicity and readability, making it a popular choice for beginners and seasoned developers alike. One of the features that contributes to its flexibility is negative indexing. In this tutorial, I will go through what negative indexing is, how it works, and its practical applications in Python programming. ## Table of Contents {%- # TOC start (generated with https://github.com/derlin/bitdowntoc) -%} - [1. Introduction to Indexing](#1-introduction-to-indexing) - [2. What about Negative Indexing?](#2-what-about-negative-indexing) - [3. Using Negative Indexing in Lists](#3-using-negative-indexing-in-lists) - [4. Negative Indexing with Strings](#4-negative-indexing-with-strings) - [5. Negative Indexing in Tuples](#5-negative-indexing-in-tuples) - [6. Negative Indexing in Slicing](#6-negative-indexing-in-slicing) - [7. Practical Examples](#7-practical-examples) - [8. Advanced Use Cases](#8-advanced-use-cases) - [9. Common Pitfalls and How to Avoid Them](#9-common-pitfalls-and-how-to-avoid-them) - [10. Conclusion](#10-conclusion) {%- # TOC end -%} --- ## 1. Introduction to Indexing Indexing is a way to access individual elements from sequences like lists, strings, and tuples in Python. Each element in a sequence is assigned a unique index starting from 0. For instance, in the list `numbers = [10, 20, 30, 40, 50]`, the index of the first element (10) is 0, the second element (20) is 1, and so on. ### Example: ```python numbers = [10, 20, 30, 40, 50] print(numbers[0]) # Output: 10 print(numbers[1]) # Output: 20 ``` ## 2. What about Negative Indexing? Negative indexing is a powerful feature in Python that allows you to access elements from the end of a sequence. Instead of starting from 0, negative indices start from -1, which corresponds to the last element of the sequence. This can be especially useful when you need to work with elements at the end of a sequence without explicitly knowing its length. 
### Example: ```python numbers = [10, 20, 30, 40, 50] print(numbers[-1]) # Output: 50 print(numbers[-2]) # Output: 40 ``` ## 3. Using Negative Indexing in Lists Negative indexing can be particularly useful when you need to access elements at the end of a list without knowing its length. This allows you to easily manipulate lists by referring to elements from the end. ### Example: ```python numbers = [10, 20, 30, 40, 50] # Accessing the last element print(numbers[-1]) # Output: 50 # Accessing the second last element print(numbers[-2]) # Output: 40 ``` You can also use negative indexing to slice lists. ### Example: ```python numbers = [10, 20, 30, 40, 50] # Slicing the last three elements print(numbers[-3:]) # Output: [30, 40, 50] # Slicing from the second last to the last element print(numbers[-2:]) # Output: [40, 50] ``` ## 4. Negative Indexing with Strings Just like lists, strings also support negative indexing. This feature allows you to work with substrings from the end of the string. It's a powerful way to manipulate text without the need to calculate lengths or create complex slicing conditions. ### Example: ```python text = "Hello, World!" # Accessing the last character print(text[-1]) # Output: '!' # Accessing the second last character print(text[-2]) # Output: 'd' ``` You can also slice strings using negative indices. ### Example: ```python text = "Hello, World!" # Slicing the last 5 characters print(text[-5:]) # Output: 'orld!' # Slicing from the second last to the end print(text[-2:]) # Output: 'd!' ``` ## 5. Negative Indexing in Tuples Tuples, being immutable sequences, also support negative indexing. This can be useful for accessing elements without modifying the tuple. ### Example: ```python coordinates = (1, 2, 3, 4, 5) # Accessing the last element print(coordinates[-1]) # Output: 5 # Accessing the second last element print(coordinates[-2]) # Output: 4 ``` ## 6. Negative Indexing in Slicing Slicing with negative indices can be particularly powerful. 
It allows for more flexible and intuitive extraction of subsequences from lists, strings, and tuples. ### Example: ```python # Reversing a list numbers = [10, 20, 30, 40, 50] reversed_numbers = numbers[::-1] print(reversed_numbers) # Output: [50, 40, 30, 20, 10] # Skipping elements skip_elements = numbers[::2] print(skip_elements) # Output: [10, 30, 50] # Reversing a string text = "Hello, World!" reversed_text = text[::-1] print(reversed_text) # Output: '!dlroW ,olleH' ``` ## 7. Practical Examples ### Reversing a List Reversing a list using negative indexing is a simple and elegant solution. ```python numbers = [10, 20, 30, 40, 50] reversed_numbers = numbers[::-1] print(reversed_numbers) # Output: [50, 40, 30, 20, 10] ``` ### Checking Palindromes You can use negative indexing to check if a string is a palindrome (a string that reads the same forward and backward). ```python def is_palindrome(s): return s == s[::-1] print(is_palindrome("radar")) # Output: True print(is_palindrome("python")) # Output: False ``` ### Rotating a List You can rotate a list to the right using negative indexing. This can be particularly useful in algorithms where list rotation is required. ```python def rotate_right(lst, k): k = k % len(lst) # Handle rotation greater than list length return lst[-k:] + lst[:-k] numbers = [10, 20, 30, 40, 50] rotated_numbers = rotate_right(numbers, 2) print(rotated_numbers) # Output: [40, 50, 10, 20, 30] ``` ## 8. Advanced Use Cases ### Dynamic List Partitioning Negative indexing can be used to dynamically partition lists based on conditions or calculations. This is particularly useful in scenarios like data processing or when working with dynamic datasets. 
```python data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10] # Split data into two parts: last 3 elements and the rest split_point = -3 part1, part2 = data[:split_point], data[split_point:] print(part1) # Output: [1, 2, 3, 4, 5, 6, 7] print(part2) # Output: [8, 9, 10] ``` ### Efficient Tail Processing When working with logs or streaming data, you might often need to process only the most recent entries. Negative indexing simplifies this by allowing quick access to the end of the list. ```python log_entries = ["entry1", "entry2", "entry3", "entry4", "entry5"] # Get the last 2 log entries recent_entries = log_entries[-2:] print(recent_entries) # Output: ['entry4', 'entry5'] ``` ### Sliding Window for Time Series Data For time series analysis, you might need to work with sliding windows. Negative indexing can help you easily manage these windows. ```python time_series = [100, 200, 300, 400, 500, 600, 700] # Get the last 3 elements as a sliding window window_size = 3 sliding_window = time_series[-window_size:] print(sliding_window) # Output: [500, 600, 700] ``` ## 9. Common Pitfalls and How to Avoid Them ### Index Out of Range Negative indexing can sometimes lead to [IndexError](https://github.com/python-online/python-errors/tree/main/IndexError) if not handled properly. Ensure that your negative indices are within the range of the sequence length. #### Example: ```python numbers = [10, 20, 30, 40, 50] # This will raise an IndexError try: print(numbers[-6]) except IndexError as e: print(e) # Output: list index out of range ``` ### Using Negative Index in Slicing with Step When slicing with a step, be careful with negative indices to avoid confusion and ensure the slice direction is correct. #### Example: ```python numbers = [10, 20, 30, 40, 50] # With a negative step, the slice runs backwards from start to stop print(numbers[-1:-3:-1]) # Output: [50, 40] # Reversing the bounds with a negative step yields an empty list print(numbers[-3:-1:-1]) # Output: [] # With the default positive step, -3 to -1 works as expected print(numbers[-3:-1]) # Output: [30, 40] ``` ## 10. 
Conclusion Negative indexing is a simple yet powerful feature in Python that can make your code more concise and readable. By using negative indices, you can efficiently access and manipulate elements at the end of sequences without having to calculate their positions. Experiment with different sequences and see how negative indexing can be applied to solve real-world problems in your projects.
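As a closing sketch tying the pitfalls above together (the `tail` helper below is illustrative, not part of Python's standard library): slicing with a negative start never raises `IndexError`, but `n = 0` needs an explicit guard, because `-0 == 0` and so `seq[-0:]` returns the whole sequence rather than an empty one.

```python
def tail(seq, n):
    """Return the last n elements of seq, safely."""
    # Direct indexing (seq[-n]) can raise IndexError; slicing cannot.
    # Guard n <= 0 explicitly: seq[-0:] would return the WHOLE sequence,
    # since -0 == 0. seq[:0] gives an empty sequence of the same type.
    return seq[-n:] if n > 0 else seq[:0]

numbers = [10, 20, 30, 40, 50]
print(tail(numbers, 2))    # [40, 50]
print(tail(numbers, 99))   # [10, 20, 30, 40, 50] -- whole list, no IndexError
print(tail("Hello", 0))    # '' -- empty string, thanks to the guard
```

The same helper works unchanged for lists, strings, and tuples, since slicing behaves identically across all built-in sequence types.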
hichem-mg
1,882,074
Identifying Container Image Vulnerabilities with Docker Scout
A guide on how to maintain a more secure containerized software.
0
2024-06-09T13:39:54
https://dev.to/plutov/identifying-container-image-vulnerabilities-with-docker-scout-503o
docker, security, cicd, kubernetes
--- title: Identifying Container Image Vulnerabilities with Docker Scout published: true description: A guide on how to maintain a more secure containerized software. tags: Docker, Security, CICD, Kubernetes cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2kcgwo2gueldysu9l05q.jpeg --- ![diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2kcgwo2gueldysu9l05q.jpeg) [Read the full article on packagemain.tech](https://packagemain.tech/p/identifying-container-image-vulnerabilities)
plutov
1,882,073
Java Stream API
Overview : The Java Stream API facilitates processing sequences of elements, offering operations...
0
2024-06-09T13:37:34
https://dev.to/abhishek999/java-stream-api-lh2
java, stream, java8
**Overview :** The Java Stream API facilitates processing sequences of elements, offering operations like filtering, mapping, and reducing. Streams can be used to perform operations in a declarative way, resembling SQL-like operations on data **Key Concepts :** **Stream:** A sequence of elements supporting sequential and parallel aggregate operations **Intermediate Operations:** Operations that return another stream and are lazy (e.g., filter, map) **Terminal Operations:** Operations that produce a result or a side-effect and are not lazy (e.g., collect, forEach) **Example Scenario :** Suppose we have a list of Person objects and we want to perform various operations on this list using the Stream API ``` public class Person { private String name; private int age; private String city; public Person(String name, int age, String city) { this.name = name; this.age = age; this.city = city; } public String getName() { return name; } public int getAge() { return age; } public String getCity() { return city; } @Override public String toString() { return "Person{name='" + name + "', age=" + age + ", city='" + city + "'}"; } } ``` Use Cases : 1. Filtering 2. Mapping 3. Collecting 4. Reducing 5. FlatMapping 6. Sorting 7. Finding and Matching 8. 
Statistics --- **Filtering :** Filtering allows you to select elements that match a given condition ``` import java.util.Arrays; import java.util.List; import java.util.stream.Collectors; public class Main { public static void main(String[] args) { List<Person> people = Arrays.asList( new Person("Alice", 30, "New York"), new Person("Bob", 20, "Los Angeles"), new Person("Charlie", 25, "New York"), new Person("David", 40, "Chicago") ); // Filter people older than 25 List<Person> filteredPeople = people.stream().filter(person -> person.getAge() > 25) .collect(Collectors.toList()); filteredPeople.forEach(System.out::println); } } ``` --- **Mapping :** Mapping transforms each element to another form using a function ``` public class Main { public static void main(String[] args) { List<Person> people = Arrays.asList( new Person("Alice", 30, "New York"), new Person("Bob", 20, "Los Angeles"), new Person("Charlie", 25, "New York"), new Person("David", 40, "Chicago") ); // Get list of names List<String> names = people.stream() .map(Person::getName) .collect(Collectors.toList()); names.forEach(System.out::println); } } ``` --- **Collecting :** Collecting gathers the elements of a stream into a collection or other data structures ``` public class Main { public static void main(String[] args) { List<Person> people = Arrays.asList( new Person("Alice", 30, "New York"), new Person("Bob", 20, "Los Angeles"), new Person("Charlie", 25, "New York"), new Person("David", 40, "Chicago") ); // Collect names into a set Set<String> uniqueCities = people.stream() .map(Person::getCity).collect(Collectors.toSet()); uniqueCities.forEach(System.out::println); } } ``` --- **Reducing :** Reducing performs a reduction on the elements of the stream using an associative accumulation function and returns an Optional ``` public class Main { public static void main(String[] args) { List<Person> people = Arrays.asList( new Person("Alice", 30, "New York"), new Person("Bob", 20, "Los Angeles"), new 
Person("Charlie", 25, "New York"), new Person("David", 40, "Chicago") ); // Sum of ages int totalAge = people.stream() .map(Person::getAge).reduce(0, Integer::sum); System.out.println("Total Age: " + totalAge); } } ``` --- **FlatMapping :** FlatMapping flattens nested structures into a single stream. ``` public class Main { public static void main(String[] args) { List<List<String>> namesNested = Arrays.asList( Arrays.asList("John", "Doe"), Arrays.asList("Jane", "Smith"), Arrays.asList("Peter", "Parker") ); List<String> namesFlat = namesNested.stream() .flatMap(List::stream).collect(Collectors.toList()); namesFlat.forEach(System.out::println); } } ``` --- **Sorting :** Sorting allows you to sort the elements of a stream ``` public class Main { public static void main(String[] args) { List<Person> people = Arrays.asList( new Person("Alice", 30, "New York"), new Person("Bob", 20, "Los Angeles"), new Person("Charlie", 25, "New York"), new Person("David", 40, "Chicago") ); // Sort by age List<Person> sortedPeople = people.stream() .sorted(Comparator.comparing(Person::getAge)) .collect(Collectors.toList()); sortedPeople.forEach(System.out::println); } } ``` --- **Finding and Matching :** Finding and matching operations check the elements of a stream to see if they match a given predicate ``` public class Main { public static void main(String[] args) { List<Person> people = Arrays.asList( new Person("Alice", 30, "New York"), new Person("Bob", 20, "Los Angeles"), new Person("Charlie", 25, "New York"), new Person("David", 40, "Chicago") ); // Find any person living in New York Optional<Person> personInNY = people.stream() .filter(person -> "New York".equals(person.getCity())).findAny(); personInNY.ifPresent(System.out::println); // Check if all people are older than 18 boolean allAdults = people.stream() .allMatch(person -> person.getAge() > 18); System.out.println("All adults: " + allAdults); } } ``` --- **Statistics :** The Stream API can also be used to perform various 
statistical operations like counting, averaging, etc. ``` public class Main { public static void main(String[] args) { List<Person> people = Arrays.asList( new Person("Alice", 30, "New York"), new Person("Bob", 20, "Los Angeles"), new Person("Charlie", 25, "New York"), new Person("David", 40, "Chicago") ); // Count number of people long count = people.stream().count(); System.out.println("Number of people: " + count); // Calculate average age Double averageAge = people.stream() .collect(Collectors.averagingInt(Person::getAge)); System.out.println("Average Age: " + averageAge); } } ``` **Practical Example :** Here's a comprehensive example that uses several of the features mentioned above: ``` import java.util.*; import java.util.stream.*; public class Main { public static void main(String[] args) { List<Person> people = Arrays.asList( new Person("Alice", 30, "New York"), new Person("Bob", 20, "Los Angeles"), new Person("Charlie", 25, "New York"), new Person("David", 40, "Chicago") ); // Filter, map, sort, and collect List<String> names = people.stream() .filter(person -> person.getAge() > 20) .map(Person::getName) .sorted() .collect(Collectors.toList()); names.forEach(System.out::println); // Find the oldest person Optional<Person> oldestPerson = people.stream() .max(Comparator.comparing(Person::getAge)); oldestPerson.ifPresent(person -> System.out.println("Oldest Person: " + person)); // Group by city Map<String, List<Person>> peopleByCity = people.stream() .collect(Collectors.groupingBy(Person::getCity)); peopleByCity.forEach((city, peopleInCity) -> { System.out.println("People in " + city + ": " + peopleInCity); }); // Calculate total and average age IntSummaryStatistics ageStatistics = people.stream() .collect(Collectors.summarizingInt(Person::getAge)); System.out.println("Total Age: " + ageStatistics.getSum()); System.out.println("Average Age: " + ageStatistics.getAverage()); } } ``` **Summary :** The Java Stream API is a powerful tool for working with 
collections and data. It allows for: **Filtering:** Select elements based on a condition **Mapping:** Transform elements **Collecting:** Gather elements into collections or other data structures **Reducing:** Combine elements into a single result. **FlatMapping:** Flatten nested structures. **Sorting:** Order elements. **Finding and Matching:** Check elements against a condition. **Statistics:** Perform statistical operations. Understanding these features will help you write cleaner, more concise, and more readable code. **Happy Coding...**
abhishek999
1,882,072
Unified Cache Keys: How Namespaced Keys Improve Service Interoperability
More than just random keys in a Redis.
0
2024-06-09T13:36:57
https://dev.to/plutov/unified-cache-keys-how-namespaced-keys-improve-service-interoperability-2p2c
redis, systemdesign, microservices, distributedsystems
--- title: Unified Cache Keys: How Namespaced Keys Improve Service Interoperability published: true description: More than just random keys in a Redis. tags: Redis, SystemDesign, Microservices, DistributedSystems cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/653484m9d6f6x0m78sav.jpg --- ![diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/653484m9d6f6x0m78sav.jpg) [Read the full article on packagemain.tech](https://packagemain.tech/p/unified-namespaced-cache-keys)
plutov
1,882,071
What happens each time that we require a module by calling the require function with the “module” name as the argument.
First the path to the require module is resolved and the file is loaded. After the module is...
0
2024-06-09T13:32:41
https://dev.to/surjoyday_kt/what-happens-each-time-that-we-require-a-module-by-calling-the-require-function-with-the-module-name-as-the-argument-14cf
javascript, node, udemy, webdev
![what-happens-when-we-require-a-module-steps](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xie8dbg9tbbo92sa3dac.jpeg) 1. First, the path to the required module is resolved and the file is loaded. ![requiring-and-loading-module](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yykx1lletfa5pgxq1e5h.png) 2. After the module is loaded, the module's code is wrapped into a special function which gives us access to a couple of special objects. ![wrapping-module](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q7pwhq4pyt2zbyxbpqee.png) The Node.js runtime takes the code of our module and puts it inside an `IIFE` (the wrapper function is an IIFE). This means Node does not directly execute the code that we write in the module and that we required using the `require` function; instead, it's the “`wrapper (IIFE)` **function**” that contains our code in its body and then executes it. The “wrapper IIFE” is also the one that passes “`exports`”, “`require`”, “`module`”, “`__dirname`” and “`__filename`” into the module (file), and that is why in every module we automatically have access to things like the `require` function. **→ By doing this, Node achieves 2 important things:** i. Firstly, it gives developers access to all the variables like “require”, “__filename”, etc. ii. Secondly, it keeps the “**top-level variables** (variables declared outside of any function)” that we define in our modules private, so they are scoped only to the current module, instead of leaking everything to the global object. **Example**: If we have 2 modules and we require module 1 in module 2, then all the top-level variables of module 1 stay scoped inside module 1's wrapper function instead of leaking from module 1 into the global object. 
This promotes modularity, prevents naming conflicts, and offers controlled access to functionalities within modules. _math.js (Module 1):_ ``` // Top-level variable (private) const PI = 3.14159; function add(a, b) { return a + b; } exports.add = add; // Expose the add function ``` _app.js (Module 2):_ ``` const math = require('./math'); console.log(math.add(5, 3)); // Output: 8 (using the exported function) // Cannot access PI directly from the math module // console.log(PI); // This would result in an error ``` 3. After that, the code in the module's “wrapper function” is EXECUTED by the Node.js runtime. 4. After the code execution, it is time for the “require” function to return something, and what it returns are the exports of the required module. These exports are stored in the “module.exports” object. ![returning-exports](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7lbxthncvmhcfxe8yxis.png) → When to use “`module.exports`” or simply “`exports`”: i. We use `module.exports` to export a single variable, e.g. one class or one function, and set it equal to that variable (`module.exports = Calculator`). ii. We use `exports` when we want to export multiple named exports, like multiple functions, for example `exports.add = (a, b) => a + b`. 5. And finally, modules get “cached” after the first time they are loaded, meaning if we require the same module multiple times we always get the same result; the module's code is only executed on the first call, and in subsequent calls the result is retrieved from the cache.
surjoyday_kt
1,882,070
😂😂😂
https://t.me/Hamster_kombat_bot/start?startapp=kentId182256285 Play with me, become cryptoexchange...
0
2024-06-09T13:30:50
https://dev.to/meli_taj_0408251d97f014b6/-je2
https://t.me/Hamster_kombat_bot/start?startapp=kentId182256285 Play with me, become cryptoexchange CEO and get a token airdrop! 💸 +2k Coins as a first-time gift 🔥 +25k Coins if you have Telegram Premium
meli_taj_0408251d97f014b6
1,882,068
TronFc cloud mining platform, register and get 38000TRX
#TronFcRegister now: http://tronfc.fun TronFcOfficial Telegram Channel:...
0
2024-06-09T13:28:47
https://dev.to/tronfc/tronfc-cloud-mining-platform-register-and-get-38000trx-20lb
mining, minimicro
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4qs3392wb1dxjwdr5ie6.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0wkfk05ybeotixeppw1p.jpg)#TronFcRegister now: http://tronfc.fun #TronFcOfficial Telegram Channel: https://t.me/TronFc_Mining8 #TronFcOfficial Telegram Online Customer Service: https://t.me/Tron_Minning_7 Three ways to make money on the platform↓↓↓ The first way: current deposits-the funds deposited in the [basic account] receive mining income once a day. vip1 current deposits of more than 20, daily interest 8% vip10 current deposits of more than 500,000, daily interest 29% The second way is regular deposits-deposit funds into the [wallet account], click on the investment at the bottom of the homepage, select regular investment, and repay the principal and interest in one lump sum when it expires. #TRONFC The third way: You can share and invite friends to join through any social platform Tiktok, Facebook, Instagram, YouTube, Twitter and other social platforms to get 13%--2%--1% invitation rewards. #TronFc #trondao #TRONICS #epicsymphony #TRONAnthem #blockchain #crypto #cryptocurrencies #cryptocurrency #bitcoin #btc #eth #ethereum #web3 #tron #trx #JustinSunTRON #USDT #Mining #ETH #BNB #Binance #imToken #OKX #TronLink #BitgetWallet #TRONAnthem #blockchain #musicinnovation #Bitget #KuCoin #tronfcmining #tronmining #BTCmining #USDTmining #Quantify#Internet makes money#Internet money-making projects #Trailer,#OfficialTrailer,#MovieTrailer,#Venom,#VenomTheLastDance,#Venom3,#TomHardy,#ChiwetelEjiofor, #JunoTemple ,#RhysIfans ,#PeggyLu ,#AlannaUbach ,#StephenGraham, #MarvelTrailer, #MarvelOfficialTrailer , #Sony, #SonyPictures, #Marvel
tronfc
1,881,914
Symfony 7 vs. .NET Core 8 - Routing; part 3
Disclaimer This is a tutorial or a training course. Please don't expect a walk-through...
0
2024-06-09T13:25:30
https://dev.to/awons/symfony-7-vs-net-core-8-routing-part-3-n6
symfony, dotnetcore, routing
## Disclaimer This is a tutorial or a training course. Please don't expect a walk-through tutorial showing how to use ASP.NET Core. It only compares similarities and differences between Symfony and ASP.NET Core. Symfony is taken as a reference point, so if features are only available in .NET Core, they may never get to this post (unless relevant to the comparison). This is the continuation of the second post: [Symfony 7 vs. .NET Core 8 - Routing; part 2](https://dev.to/awons/symfony-7-vs-net-core-8-routing-part-2-1pdn) ## Generating URLs ### Symfony The generation of URLs is straightforward. We either have access to the request context or not, depending on the context. If we use URL generation from within a service or a controller that extends the `AbstractController`, we will have access to the request context. Therefore, the absolute URLs will have the correct domain and protocol. If we use it from within a command, we must provide the protocol or define defaults in a configuration. All generated URLs are absolute unless we want only the path. ```php // Controller $userProfilePage = $this->generateUrl('user_profile', [ 'username' => $user->getUserIdentifier(), ]); ``` ```php // Service $userProfilePage = $this->router->generate('user_profile', [ 'username' => $user->getUserIdentifier(), ]); ``` ```php // Command $userProfilePage = $this->urlGenerator->generate('user_profile', [ 'username' => $user->getUserIdentifier(), ]); ``` In all cases, the generator implements the `UrlGeneratorInterface` interface. One important aspect of URL generation is that route conditions are not considered. In the following example, the condition that checks if the post ID is lower than 1000 will not be checked when generating the URL. This contrasts with .NET Core, where even custom conditions will be checked to see if the route matches. 
```php
#[Route(
    '/posts/{id}',
    name: 'post_show',
    condition: "params['id'] < 1000"
)]
```

### .NET Core

Even though the rules governing the URL-generating process are more complicated than in Symfony, on the surface, everything looks very similar. We can use the `LinkGenerator` service directly (note that we need to pass the `HttpContext` object manually):

```c#
public IActionResult Index()
{
    var indexPath = _linkGenerator.GetPathByAction(HttpContext, values: new { id = 17 })!;
    return Content(indexPath);
}
```

We can use a built-in method of the controller:

```c#
public class GadgetController : ControllerBase
{
    public IActionResult Index() => Content(Url.Action("Edit", new { id = 17 })!);
}
```

In the preceding example, we can generate a URL relative to the current controller by specifying the action. This does not work in Symfony, where we must provide a route name. We could even provide both the controller and the action:

```c#
var subscribePath = _linkGenerator.GetPathByAction("Subscribe", "Home", new { id = 17 })!;
```

As we can see, .NET's URL generation seems more sophisticated.

## Signing generated URLs

### Symfony

Symfony has an interesting feature that allows us to sign a URL.

```php
$url = 'https://example.com/foo/bar?sort=desc';
$signedUrl = $this->uriSigner->sign($url);
```

We can even define an expiration time:

```php
$signedUrl = $this->uriSigner->sign($url, new \DateTimeImmutable('2050-01-01'));
```

Such URLs can be later checked like this:

```php
$uriSignatureIsValid = $this->uriSigner->check($signedUrl);
```

or like this:

```php
$uriSignatureIsValid = $this->uriSigner->checkRequest($request);
```

### .NET Core

There is no such thing in .NET Core. The only way to achieve this is by writing some custom code.

## What's next?

We will continue with controllers. We have already seen a glimpse of how controllers work with regard to routing, but we will dive deeper into that topic.

Thanks for your time!
I'm looking forward to your comments. You can also find me on [LinkedIn](https://www.linkedin.com/in/aleksanderwons/), [X](https://x.com/AleksanderWons), or [Discord](https://discordapp.com/users/601775386233405470).
awons
1,882,067
A way to speed up Next.js dynamic SSR
Let's say you have a React server component that fetches data on a server and renders a list of...
27,652
2024-06-09T13:23:12
https://dev.to/pavelkrasnov/a-way-to-speed-up-nextjs-dynamic-ssr-27ga
Let's say you have a React server component that fetches data on a server and renders a list of items:

```
import PokemonList from "./PokemonList";

async function fetchPokemon(id: number) {
  const response = await fetch(`https://pokeapi.co/api/v2/pokemon/${id}`);
  return response.json();
}

const pokemonIds = Array
  .from({ length: 20 })
  .map((_item, index) => index + 1);

export default async function Home() {
  const pokemons = await Promise.all<any>(pokemonIds.map(item => fetchPokemon(item)));

  return <PokemonList pokemons={pokemons} />;
}
```

You need the item list to be a client component for some reason:

```
"use client";

import PokemonItem from "./PokemonItem";

type Props = {
  pokemons: any[];
}

export default function PokemonList({ pokemons }: Props) {
  return (
    <ul>
      {
        pokemons.map((item, index) => <PokemonItem key={index} pokemon={item} />)
      }
    </ul>
  )
}
```

As client components are also executed on a server, you will get the same code running twice on both the server and the client. But have you ever thought about what really happens when you execute the code above?

## Next.js way to share server state with client

When you pass props from a server component to a client in Next.js, it implicitly serializes the props and appends them to the HTML document. Then, on a client, it deserializes your props and uses them in the component they are passed to.

![Server state passed as props to a client component in the resulting HTML document](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ktksyq05uhicgj5wjpu4.png)
*Server state passed as props to a client component in the resulting HTML document*

## The implicit overhead

But aren't the props serialized one time more than needed? When you query your API for JSON data using `fetch`, you usually call the `json()` method on the response body. When you pass the data to a client component from the server, Next.js implicitly calls `JSON.stringify()` on them.
Then isomorphic JavaScript in a client component runs twice - on a server and on a client, but though the data is already parsed on a server, Next.js has to implicitly call `JSON.parse()` on them on a client.

Let's count it: parse + stringify on a server and parse on a client. Once again, isn't the data stringified one more time than needed?

## Fixing the issue

The only thing we actually need to do to fix things is to remove the redundant serialization on a server. The response body also has other methods to read the stream. If we read it to a string by calling `text()` instead of `json()` and therefore not deserialize the JSON by calling `JSON.parse()`, we will still be able to pass the response string to a client component, deserialize it there, and use it without losing anything. We would still parse the data on a server and parse it on a client, but we wouldn't stringify it on a server!

This is how the "fixed" components might look:

```
import PokemonList from "./PokemonList";

async function fetchPokemon(id: number) {
  const response = await fetch(`https://pokeapi.co/api/v2/pokemon/${id}`);
  return response.text();
}

const pokemonIds = Array
  .from({ length: 20 })
  .map((_item, index) => index + 1);

export default async function Home() {
  const pokemonsStringArray = await Promise.all<string>(pokemonIds.map(item => fetchPokemon(item)));
  const pokemonsString = `[${pokemonsStringArray.join(",")}]`;

  return <PokemonList pokemons={pokemonsString} />;
}
```

The code above still queries the API for the list of items, but this time we read the response body to a string and then concatenate all the strings, making a JSON array from it. Then we pass the string to a client component.
```
"use client";

import PokemonItem from "./PokemonItem";

type Props = {
  pokemons: string;
}

export default function PokemonList({ pokemons }: Props) {
  const pokemonObjects = JSON.parse(pokemons) as any[];

  return (
    <ul>
      {
        pokemonObjects.map((item, index) => <PokemonItem key={index} pokemon={item} />)
      }
    </ul>
  )
}
```

The client component needs to explicitly do the job it would do implicitly if we passed the data as an object. This code runs on both the server and the client.

## Note about static generation

If you just create an app like this, Next.js will make all required requests at build time and [generate](https://nextjs.org/docs/app/building-your-application/rendering/server-components#static-rendering-default) static pages. If this is enough for you, you might not need to do the thing described in this article in your app. It happens though that you need to [dynamically render](https://nextjs.org/docs/app/building-your-application/rendering/server-components#dynamic-rendering) pages on every request. Dynamically rendered pages, which do their data requests on a server, are the biggest beneficiaries of the described technique. The most common scenario that will result in dynamic rendering is using Next.js [dynamic functions](https://nextjs.org/docs/app/building-your-application/rendering/server-components#dynamic-functions). For the sake of simplicity, I use the [`force-dynamic`](https://nextjs.org/docs/app/building-your-application/rendering/server-components#segment-config-options) segment config option in the example below.

## Note about caching

As of now (June 2024), Next.js by default [caches](https://nextjs.org/docs/app/building-your-application/caching#data-cache) `fetch` requests made on the server. During local development, the cache is stored in files in the `.next/cache/fetch-cache` folder. But even if no actual network request is made, the response body is still deserialized using `JSON.parse()` if you call the `json()` method on a response body.
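To see that the string round-trip loses nothing, here is a tiny standalone Node sketch (plain JavaScript, no Next.js; the sample response bodies are made up for illustration) mimicking the server-side concatenation and the client-side parse:

```javascript
// Raw JSON strings, as response.text() would return them for two API calls.
const bodies = ['{"id":1,"name":"bulbasaur"}', '{"id":2,"name":"ivysaur"}'];

// "Server": build a JSON array by string concatenation — no JSON.parse,
// and no implicit JSON.stringify when passing it down as a prop.
const pokemonsString = `[${bodies.join(",")}]`;

// "Client": one explicit parse recovers the exact same objects.
const pokemons = JSON.parse(pokemonsString);

console.log(pokemons.length);  // 2
console.log(pokemons[1].name); // "ivysaur"
```

The only parsing left is the one the client has to do anyway.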
## Note about data transformations

Sometimes you don't need to just pass the data fetched on a server to a client component. You would want to apply some computations to them instead and only then send them to a client. Consider the following example:

```
const pokemons = await Promise.all<any>(pokemonIds.map(item => fetchPokemon(item)));
const filteredPokemons = pokemons.filter(item => item.height > 5);
```

In this case, it is necessary to parse the data on a server to process it according to your needs. You might do it in a client component instead:

```
"use client";

import PokemonItem from "./PokemonItem";

type Props = {
  pokemons: string;
}

export default function PokemonList({ pokemons }: Props) {
  const pokemonObjects = JSON.parse(pokemons) as any[];
  const filteredPokemons = pokemonObjects.filter(item => item.height > 5);

  return (
    <ul>
      {
        filteredPokemons.map((item, index) => <PokemonItem key={index} pokemon={item} />)
      }
    </ul>
  )
}
```

This is totally ok, but be aware that in this case the work gets done on both server and client, which may slow down your app's client-side performance. It is a trade-off, and you should decide which part of the app has to be optimized.

## How much faster would my application be?

Feel free to clone the [repo](https://github.com/pavel-krasnov/next-json-ssr) I created as an illustration for the issue. Let's run some tests together:

1. Run `npm run build` to build a production version of the app;
2. Run `npm start` to run a local Node.js server that will serve your app;
3. Open `http://localhost:3000/slow` in a browser to make sure Next.js creates a filesystem data cache and the first testing request doesn't send any actual network requests;
4. Install [oha](https://github.com/hatoo/oha) - a tool we will use to send requests to the server;
5. Run `oha http://localhost:3000/slow`. It will send 200 requests through 50 parallel connections to the server.
6. Stop the server and remove the `.next` folder to make sure there is no cached data;
7.
Repeat steps 1–5, but this time use `http://localhost:3000/fast`.

My MacBook Pro M1 powered by [Asahi Linux](https://asahilinux.org/) gives the following results:

- slow version:

![Slow page results: around 7 requests per second](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b7pj5xuk7oox3kmntcmc.png)

- fast version:

![Fast page results: around 18 requests per second](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xb168niotj0n5et0di7s.png)

By making one simple change, we made the app ~2.5 times faster.

## Afterword

This technique is used in the production version of the app I am currently working on: [Czech TV schedule](https://tv.seznam.cz/). Of course, in a complex app that does a lot of other work on the server, the effect will be more modest; in our case, it made the app around 30% faster. The need to speed up the SSR of a standalone build of this app led me to the development of a number of techniques, which I am going to share with you in this blog.
pavelkrasnov
1,882,064
Level Up Your Coding Skills for Free!
Explore these fantastic free resources for web development, backend development, data, APIs, DevOps, and programming languages.
0
2024-06-09T13:19:24
https://dev.to/iamatifriaz/level-up-your-coding-skills-for-free-3p7j
webdev, codenewbie
---
title: Level Up Your Coding Skills for Free!
published: true
description: Explore these fantastic free resources for web development, backend development, data, APIs, DevOps, and programming languages.
tags: webdev, codenewbies
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/36qspkqsqmp7lfueuyhm.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-09 12:46 +0000
---

Are you ready to enhance your programming skills? Explore these fantastic free resources for web development, backend development, data, APIs, DevOps, and programming languages.

### Web Development

* 🌐 **HTML:** w3schools.com/html
* 🎨 **CSS:** w3schools.com/css/
* 💎 **Tailwind CSS:** [tailwindcss.com](https://tailwindcss.com/)
* ⚡ **JavaScript:** [javascript.info](https://javascript.info/)
* 🔷 **TypeScript:** typescriptlang.org/docs/
* 🔺 **Angular:** angular.io/tutorial
* ⚛️ **React:** react.dev
* 🌟 **VueJS:** vuejs.org
* 🔮 **Svelte:** svelte.dev/docs

### Backend Development

* 🚀 **Node.js:** nodejs.org/en/learn/
* 🛤️ **Ruby on Rails:** [rubyonrails.org](https://rubyonrails.org/)
* 🎡 **Laravel:** [laracasts.com](https://laracasts.com/)
* 🎸 **Django:** [djangoproject.com](https://www.djangoproject.com/start/)
* 🐹 **Go:** [gobyexample.com](https://gobyexample.com/)
* 🦀 **Rust:** rust-lang.org/learn/
* 📱 **Kotlin:** kotlinlang.org/docs/tutorials/
* 🍏 **Swift:** docs.swift.org/swift-book/
* 🔧 **ASP.NET:** [docs.microsoft.com/en-us/learn/aspnet/](https://docs.microsoft.com/en-us/aspnet/core/)

### Data & APIs

* 📊 **SQL:** [dev.mysql.com/doc/](http://dev.mysql.com/doc/)
* 🔌 **API:** rapidapi.com/learn
* 🕸️ **GraphQL:** graphql.org/learn/
* 🍃 **MongoDB:** [mongodb.com/docs/guides/](https://www.mongodb.com/docs/guides/)
* 🐘 **PostgreSQL:** [postgresql.org/docs/](https://www.postgresql.org/docs/)

### DevOps & Version Control

* 🐙 **Git and GitHub:** [git-scm.com](https://git-scm.com/)
* 🐋 **Docker:** [docker-curriculum.com](https://docker-curriculum.com/)
* ☁️
**AWS:** [aws.amazon.com/training/](https://aws.amazon.com/training/)
* 🛠️ **Kubernetes:** kubernetes.io/docs/tutorials/

### Programming Languages

* 🐍 **Python:** [learnpython.org](https://www.learnpython.org/)
* 🖥️ **C++:** [learncpp.com](https://www.learncpp.com/)
* 🔷 **C#:** [learn.microsoft.com/en-us/dotnet/csharp/tour-of-csharp/](http://learn.microsoft.com/en-us/dotnet/csharp/tour-of-csharp/)
* 💻 **Java:** [docs.oracle.com/javase/tutorial/](https://docs.oracle.com/javase/tutorial/)
* 🦉 **Haskell:** haskell.org/learning/
* 📜 **Perl:** learn.perl.org

These resources are shared on my Twitter profile as well.

{% twitter 1799774568212468065 %}

### Conclusion

These resources cover a wide range of programming skills and technologies. Dive in and start learning today to enhance your coding expertise and advance your career in technology.

Happy coding! 🚀💻
iamatifriaz
1,882,062
Setting up a Local Kafka Environment on Windows
Setting up a Local Kafka Environment on...
0
2024-06-09T13:14:33
https://dev.to/codegreen/setting-up-a-local-kafka-environment-on-windows-1h8c
java, kafka, zookeeper
## Setting up a Local Kafka Environment on Windows

**Note:** It's recommended to place Kafka and ZooKeeper folders in the root directory (C:/Windows) without any spaces in the folder names to avoid potential issues.

Prerequisites
-------------

* [Java Development Kit (JDK)](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)

Step 1: Download Kafka and ZooKeeper
------------------------------------

Download the following files:

* [Apache Kafka 2.12](https://kafka.apache.org/downloads#2.12)
* [Apache ZooKeeper](https://zookeeper.apache.org/releases.html)

Step 2: Extract Kafka and ZooKeeper
-----------------------------------

Extract the downloaded Kafka and ZooKeeper files to `C:/Windows`.

Step 3: Configure ZooKeeper
---------------------------

1. Navigate to `C:/Windows/zookeeper/conf`.
2. Rename `zoo_sample.cfg` to `zoo.cfg`.
3. Edit `zoo.cfg` and set `dataDir=C:/Windows/zookeeper/data`.

Step 4: Start ZooKeeper
-----------------------

Open a terminal and navigate to `C:/Windows/zookeeper`. Run:

`bin\zkServer.cmd`

Step 5: Configure Kafka
-----------------------

1. Navigate to `C:/Windows/kafka/config`.
2. Edit `server.properties`.
3. Set `zookeeper.connect=localhost:2181`.

Step 6: Start Kafka Server
--------------------------

Open a terminal and navigate to `C:/Windows/kafka`. Run:

`bin\windows\kafka-server-start.bat config\server.properties`

Step 7: Create a Topic
----------------------

In a new terminal, navigate to `C:/Windows/kafka`. Run:

`bin\windows\kafka-topics.bat --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test`

Step 8: Start Producer
----------------------

In a new terminal, navigate to `C:/Windows/kafka`. Run:

`bin\windows\kafka-console-producer.bat --broker-list localhost:9092 --topic test`

Step 9: Start Consumer
----------------------

In a new terminal, navigate to `C:/Windows/kafka`.
Run:

`bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning`

Now you have a local Kafka environment set up on your Windows machine!
manishthakurani
1,881,937
Learn Bun.sh in Simple ways
Bun.sh is a modern JavaScript runtime similar to Node.js, but it promises better performance and...
0
2024-06-09T13:13:11
https://dev.to/aakash10802/learn-bunsh-in-simple-ways-41l9
webdev, bunjs, beginners, tutorial
Bun.sh is a modern JavaScript runtime similar to Node.js, but it promises better performance and built-in tools for tasks such as bundling, transpiling, and running scripts.

**10 Facts about Bun.sh** ([video](https://youtu.be/eTB0UCDnMQo))

1. Bun.sh is a modern JavaScript runtime designed for better performance than Node.js.
2. It includes built-in tools for bundling, transpiling, and running scripts.
3. Bun.sh supports hot reloading, making development faster and easier.
4. It can directly run both JavaScript and TypeScript files without separate compilation steps.
5. Bun.sh uses the npm ecosystem for managing dependencies.
6. The installation of Bun.sh can be done via a simple curl command or through npm.
7. Bun.sh aims to simplify the development workflow with integrated tools and faster execution.
8. It is designed to work seamlessly with existing JavaScript and TypeScript projects.
9. Bun.sh provides a command-line interface for initializing new projects and managing builds.
10. Its built-in bundler can create production-ready builds from your source code.

![bun.sh](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pa5s7d4a5sazxfxl3c6i.gif)

## 1. [Install Bun.sh](https://bun.sh/docs/installation)

To install Bun.sh, run the following command from your terminal (bash/zsh):

```
curl -fsSL https://bun.sh/install | bash
```

**Basic commands in Bun.sh**

## 1. Initialize a new project

```
bun init
```

This sets up a new project directory with the necessary files.

## 2. Run a JavaScript or TypeScript file

```
bun run <file>
```

For example, to run `index.js`:

```
bun run index.js
```

## 3. Install dependencies

```
bun install <package-name>
```

For example, to install express:

```
bun install express
```

## 4. Add a dependency to your project

```
bun add <package-name>
```

_This will add the package to your package.json and install it._

## 5.
Remove a dependency from your project:

```
bun remove <package-name>
```

## 6. Bundle your code

```
bun build <file> --outdir ./out
```

For example, to bundle `index.js`:

```
bun build index.js --outdir ./out
```

## 7. Run the project with hot reloading

```
bun run --hot <file>
```

For example, to run `server.js` with hot reloading:

```
bun run --hot server.js
```

## 8. Check the Bun.sh version

```
bun --version
```

## 9. Build your project for production

```
bun build
```

_This will create a production build of your project._

## 10. Update Bun.sh to the latest version

```
bun upgrade
```

These commands cover the basic functionalities you will use when working with Bun.sh, from initializing projects and running files to managing dependencies and building for production.
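To have something concrete to try the run commands on, here is a minimal sample file (my own example, not from the post — any plain JS or TS file works) that you could save as `index.js` and execute with `bun run index.js`:

```javascript
// index.js — a tiny script with no Bun-specific APIs,
// so it behaves identically under Bun and Node.
const greet = (name) => `Hello from Bun, ${name}!`;

console.log(greet("world"));
```

Running it with `bun run index.js` should print the greeting to your terminal.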
aakash10802
1,882,061
Java Records
All About Java Records : Overview : Java Records, introduced in Java 14 as a preview feature and...
0
2024-06-09T13:12:41
https://dev.to/abhishek999/java-records-1bil
java, record
**All About Java Records**

**Overview:**

Java Records, introduced in Java 14 as a preview feature and finalized in Java 16, provide a concise way to model immutable data. They simplify the boilerplate code needed for data-carrying classes by automatically generating methods like `equals()`, `hashCode()`, and `toString()`.

By the end of this post we will understand:

**What records are in Java**
**How to create and use records in Java**
**The benefits of using records in Java**
**And some limitations of records in Java**

**What are Records?**

A record in Java is a special kind of class that is designed to hold immutable data. It automatically provides implementations for methods like:

- `equals()`
- `hashCode()`
- `toString()`
- Getters for all fields

**Why Use Records?**

Records help reduce boilerplate code in data classes. Instead of writing constructors, getters, `equals()`, `hashCode()`, and `toString()` methods manually, we can define all of this with a single line of code.

**How to Define a Record?**

Here's how we can define a simple record:

```
public record Point(int x, int y) { }
```

This single line of code automatically provides:

1. A constructor
2. Getters (`x()` and `y()`)
3.
`equals()`, `hashCode()`, and `toString()` methods

**Example: How to use Records?**

Let's see how we can use the `Point` record:

```
public class Main {
    public static void main(String[] args) {
        Point point = new Point(3, 4);

        // Using the auto-generated toString() method
        System.out.println(point); // Output: Point[x=3, y=4]

        // Accessing the fields using auto-generated methods
        System.out.println("X: " + point.x()); // Output: X: 3
        System.out.println("Y: " + point.y()); // Output: Y: 4
    }
}
```

**Example: Custom Methods in Records:**

You can also add custom methods to records:

```
public record Point(int x, int y) {
    public double distanceFromOrigin() {
        return Math.sqrt(x * x + y * y);
    }
}

public class Main {
    public static void main(String[] args) {
        Point point = new Point(3, 4);
        System.out.println("Distance from origin: " + point.distanceFromOrigin()); // Output: Distance from origin: 5.0
    }
}
```

**Custom Constructors:**

We can also define custom constructors in records, but they must delegate to the canonical constructor:

```
public record Point(int x, int y) {
    public Point(int x) {
        this(x, 0); // Delegating to the canonical constructor
    }
}
```

**Limitations of Records:**

**1. Immutability:** Records are immutable by design. You cannot change the values of their fields after creation.

**2. No Inheritance:** Records cannot extend other classes. They implicitly extend `java.lang.Record`.

**3. All-Args Constructor:** We cannot create a no-argument constructor directly in records. We must always provide all the components.

**When to Use Records:**

Use records when:

1. We need a simple, immutable data carrier class
2. We want to reduce boilerplate code for `equals()`, `hashCode()`, and `toString()` methods
3.
We don't need to extend another class or implement complex behavior

**Practical Example:**

Let's create a more complex example with a `Person` record:

```
public record Person(String name, int age) {
    public Person {
        // Compact constructor with validation
        if (age < 0) {
            throw new IllegalArgumentException("Age cannot be negative");
        }
    }

    // Custom method
    public String greeting() {
        return "Hello, my name is " + name + " and I am " + age + " years old.";
    }
}

public class Main {
    public static void main(String[] args) {
        Person person = new Person("Alice", 30);
        System.out.println(person.greeting()); // Output: Hello, my name is Alice and I am 30 years old.
        System.out.println(person); // Output: Person[name=Alice, age=30]
    }
}
```

**Key Points:**

1. **Definition:** `public record Person(String name, int age) {}`
2. **Compact Constructor:** Validates that age is non-negative
3. **Custom Method:** Adds a `greeting()` method

**Conclusion:**

Java Records provide a simple and powerful way to create immutable data classes with minimal boilerplate code. They are especially useful for modeling data transfer objects and other simple data carriers. By understanding and using records, you can write cleaner and more concise code.

**Happy Coding...**
abhishek999
1,882,054
The Importance of Good Health for Indie Developers
Indie development is an exciting and sometimes, rewarding field as it offers the freedom to create...
0
2024-06-09T13:03:23
https://dev.to/leonardsangoroh/the-importance-of-good-health-for-indie-developers-55o2
programming, programmers, mentalhealth, developers
Indie development is an exciting and sometimes rewarding field, as it offers the freedom to create and innovate. However, it also comes with unique challenges, especially when it comes to maintaining good health. In this post, we will explore why good health is crucial for us, Indie Developers, and look at how we can achieve a healthy work-life balance.

### Who's an Indie Dev?

An indie developer is simply someone who creates or intends to create software, games, or applications independently or as part of a small team, without the financial backing of a large company. Indie devs often:

- Work on passion projects
- Manage all aspects of development, from coding and design to marketing and distribution
- Face common challenges: limited resources, balancing multiple roles, and financial project sustainability

### Why Focus on **Our Health** as Indie Devs?

The indie dev description above has clearly shown that we wear many hats. This multitasking can lead to long working hours and intense pressure, making it easy for us to neglect our health. Yet, maintaining good health is essential for sustaining productivity, creativity, and overall well-being. Focusing on our health as indie developers is crucial for several reasons:

**1. Sustained Productivity**

Good health helps maintain high levels of energy and focus, which are essential for long coding sessions and creative problem-solving.

**2. Enhanced Creativity**

A healthy mind and body can improve cognitive functions and foster creativity, enabling you to come up with innovative solutions and ideas that can change the world for the better.

**3. Prevention of Burnout**

Long hours and intense pressure can lead to burnout. Prioritizing our health helps prevent burnout, ensuring we can continue to work on our projects with enthusiasm.

**4. Longevity in Our Career**

Maintaining good health is essential for a long and sustainable career.
Imagine building a successful indie project that later reaches the global limelight but not being healthy enough to live and see its success. Neglecting our health can lead to this. Strive to live long so that you can see your once little project change this world for the better!

**5. Better Decision-Making**

Good health, especially **mental health**, enhances decision-making, crucial for managing projects, deadlines, and business aspects of indie development.

## Ways of Maintaining Good Health

### Physical Health

**1. Ergonomics and Workspace Setup**

- A proper ergonomic setup is crucial to prevent musculoskeletal issues. Invest in a good chair, desk, and monitor setup to promote good posture.
- If you're blessed to have some extra coins, add an ergonomic keyboard and mouse to reduce strain on your hands and wrists.

**2. Regular Exercise**

- Regular physical activity boosts energy levels, improves mood, and has a positive impact on one's cognitive function.
- Simple exercises like walking can break up long periods of sitting at your desk and keep you active.

**3. Healthy Eating**

- A balanced diet is key to maintaining energy and concentration levels.
- Remember to always hydrate by drinking plenty of fluids throughout the day.

### Mental Health

**1. Stress Management**

- If indie development stresses you once in a while, then you are not alone; we are in this together. One coping mechanism I use is immediately noticing when I get stressed and taking steps to manage it.
- Start healthy practices like mindfulness, meditation, and mental exercises. For me, they seem to work, and I bet for you they might too.

**2. Work-Life Balance**

- Set clear boundaries between work and personal life. This can be challenging when working from home, but it's essential to prevent burnout.
- As busy as our schedules are, make sure to allocate time for hobbies, socializing, and relaxation to recharge your mental batteries.

**3.
Healthy Sleep Patterns**

- Quality sleep is vital for cognitive function and overall health. Establish a regular sleep schedule and create a restful sleep environment.
- Avoid screens before bedtime and consider relaxation techniques to improve sleep quality.

### Parting Shot

Good health is the cornerstone of a successful and sustainable indie development career. By prioritizing physical and mental health, we as indie devs can enhance our creativity, productivity, and overall quality of life.

**Do you have any other health dimensions that are relevant to Indie Devs? Feel free to share with us in the comment section!**
leonardsangoroh
1,881,664
Hello ?
A post by Prgull Kamal
0
2024-06-09T00:41:35
https://dev.to/prgull/hello--44ln
prgull
1,882,059
Test
Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test...
0
2024-06-09T12:44:51
https://dev.to/petrache/test-3144
Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test Test
petrache
1,882,057
Types vs Interfaces in TypeScript
To describe complex types, we can use two TypeScript entities: types or interfaces. In the post we...
0
2024-06-09T12:43:11
https://dev.to/betelgeuseas/types-vs-interfaces-in-typescript-2ebo
javascript, typescript
To describe complex types, we can use two TypeScript entities: types or interfaces. In this post we will look into their differences and similarities. Both allow us to create reusable structures for variables, function parameters, return types, and more.

In many cases, types and interfaces are interchangeable and can be used similarly to function expressions and function declarations:

```ts
type PersonType = {
  number: string;
  age: number;
  isFemale: boolean
}

interface PersonInterface {
  number: string;
  age: number;
  isFemale: boolean
}
```

However, there are slight differences that are important to consider:

- Types allow you to define types that can hold primitive types or union types within a single definition. This isn't possible with interfaces; with them you can only define objects.

```ts
type A = number;
type B = A | string;
```

- Multiple interfaces with the same name and in the same scope are automatically merged. This is a feature called declaration merging. Types, on the other hand, are immutable. Multiple type aliases with the same name in the same scope will throw an error:

```ts
interface User {
  email: string;
}

interface User {
  password: string;
}

// No error
const user: User = {
  email: 'email',
  password: 'password',
};

// -------------------------------------

type User = {
  email: string;
}

type User = {
  password: string;
}
// Error
```

- When you extend an interface, TypeScript will make sure that the interface you're extending is assignable to your extension. If not, it will throw an error, which is quite helpful. This is not the case when you use intersection types: here, TypeScript will do its best to combine your extension.
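To make the first difference concrete, here is a small runnable sketch (the names `Id` and `formatId` are my own, not from the post) showing a union type alias — something an interface cannot express directly — narrowed at runtime:

```typescript
// A union type alias holding a primitive-or-string union.
type Id = number | string;

// typeof narrowing lets us handle each member of the union safely.
function formatId(id: Id): string {
  return typeof id === "number" ? `#${id}` : `#${id.toUpperCase()}`;
}

console.log(formatId(42));    // "#42"
console.log(formatId("abc")); // "#ABC"
```

Trying to model `Id` with an interface would fail, since interfaces can only describe object shapes.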
betelgeuseas
1,882,053
You're doing state wrong
Implementing component state as a combination of booleans may seem like the easiest way to do it, but...
0
2024-06-09T12:40:51
https://nabiltharwat.com/blog/2024-06-08-youre-doing-state-wrong
webdev, javascript, programming, typescript
Implementing component state as a combination of booleans may seem like the easiest way to do it, but let's do something different. _Cover by Namroud Gorguis on Unsplash_ > This article is framework- and language-agnostic. Code examples presented are written in a generic form. ## Consider a music player That can play, pause, and stop. Developers are often tempted to represent each state in a separate boolean: ```typescript const isStopped = createState(true) const isPlaying = createState(false) const isPaused = createState(false) ``` If you think about this for a moment, each of those boolean states can be either true or false. Counting all possibilities yields 8 possible state variations, when our component only has 3 actual states. Which means we have 5 **impossible states** in our tiny component. **Impossible states** are states that the component is **never** meant to be in, usually indicating a logic error. The music player can't be playing and stopped at the same time. It also can't be paused and playing at the same time. And so on. Guard statements usually accompany boolean states for this reason: ```typescript if (isStopped && !isPlaying && !isPaused) { // display stopped UI } else if (!isStopped && isPlaying && !isPaused) { // display playing UI } else if (!isStopped && !isPlaying && isPaused) { // display paused UI } ``` And state updates turn into a repetitive set of instructions: ```typescript // To play setIsPlaying(true) setIsPaused(false) setIsStopped(false) // To stop setIsPlaying(false) setIsPaused(false) setIsStopped(true) ``` Each later addition and modification to the component needs to respect these 3 valid states, and to guard against those 5 **impossible states**. ## Hello, state machines! Every program can be simplified into a state machine. A state machine is a mathematical model of computation, an abstract machine that can be in exactly one of a finite number of states at any given time. 
It has a list of transitions between its defined states, and may execute **effects** as a result of a transition. If we convert our media player states into a state machine we end up with a machine containing exactly 3 states (stopped, playing, and paused), and 5 transitions. ![Media player state machine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rza4bk2x2vcpx0fwqtep.jpg) Now we can represent our simple machine in a single state that can be anything, from a Union Type to an Enum: ```typescript type State = 'stopped' | 'playing' | 'paused' enum State { STOPPED, PLAYING, PAUSED } ``` Now state updates can be a single, consistent instruction: ```typescript setState('stopped') // or setState(State.STOPPED) ``` With this approach we completely eliminate **impossible states**, make our state easier to control, and improve the component's readability. ## What about effects? An **effect** is anything secondary to the component's functionality, like loading the track, submitting a form's data, etc. An **action**. Let's consider forms. A form is usually found in one of four states: idle, submitting, success, and error. If we use boolean states we end up with 4 booleans, 16 possible combinations, and 12 **impossible states**. Instead, let's make it a state machine too! ![Form state machine](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dz41tizodnfhclffs1z3.jpg) The code behind this machine can be as simple as another method on the component: ```typescript enum State { IDLE /* default state */, SUBMITTING, ERROR, SUCCESS } const submit = (formData: FormData) => { setState(State.SUBMITTING) postFormUtility(formData) .then(() => { setState(State.SUCCESS) }) .catch(() => { setState(State.ERROR) }) } ``` ## The exception Obviously there are cases where a component may truly have only 2 states, therefore using a boolean for it works perfectly. Examples of this are modals to control their visibility, buttons to indicate a11y activation, etc. 
```typescript const isVisible = createState<boolean>(false) const toggle = () => { setState(!isVisible) } ``` The problem starts to form when you introduce multiple booleans to represent variations of the state. ## I still need booleans! You can derive booleans from your state. Control your component through a single state machine variable, but derive a hundred booleans from it if you want. Using the form example: ```typescript enum State { IDLE /* default state */, SUBMITTING, ERROR, SUCCESS } const state = createState(State.IDLE) const isSubmitting = state === State.SUBMITTING const hasError = state === State.ERROR const isSuccessful = state === State.SUCCESS ``` ## Wrap up Thinking of components as state machines has helped me simplify a lot of codebases. Its effect on the overall accessibility of a codebase is truly immense. Try it and tell me what you think! 👀 --- Thanks for reading! You can follow me on [Twitter](https://twitter.com/kl13nt), or read more of my content on my [blog](https://nabiltharwat.com/blog)!
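As a complement to the enum approach above, the machine's transitions can themselves be encoded in data, so invalid moves are rejected in one place (a minimal sketch; the `transition` function and the table are illustrative, not from the article):

```typescript
type PlayerState = "stopped" | "playing" | "paused";

// Allowed transitions for the media-player machine described above.
const transitions: Record<PlayerState, PlayerState[]> = {
  stopped: ["playing"],
  playing: ["paused", "stopped"],
  paused: ["playing", "stopped"],
};

let state: PlayerState = "stopped";

// Returns true if the move was legal and applied, false otherwise.
function transition(to: PlayerState): boolean {
  if (!transitions[state].includes(to)) return false;
  state = to;
  return true;
}

console.log(transition("playing"), state); // legal: stopped -> playing
```

With this shape, adding a new state means adding one entry to the table; impossible states and impossible transitions are both unrepresentable.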
kl13nt
1,882,055
Accessibility on TV
There are all these rules for accessibility on websites, which admittedly most sites don't follow,...
0
2024-06-09T12:45:29
https://blog.nicm42.co.uk/accessibility-on-tv
a11y
There are all these rules for accessibility on websites, which admittedly most sites don't follow, but theoretically they should. However, that's not the case on TV. Last week's Doctor Who episode, Dot and Bubble, was a case in point. The main character spends her time in a literal social media bubble. It goes around her head so she can't see the real world. In the background are rectangles with various people in. In the foreground are rectangles of whoever she's talking to. ![The bubble in question](https://www.doctorwhotv.co.uk/wp-content/uploads/DW-14.5-D-dot-and-bubble.jpg) That all sounds fine, right? Now what if I tell you the background images constantly move around the bubble. From the perspective of the person looking at them, the background moves from one side of your vision to the other. Constantly. At a constant speed. If you have a vestibular problem, that is probably the worst Doctor Who episode to attempt to watch. Every time we saw that background moving I was distracted, thinking about people who had to give up on the episode because it made them too ill. And if that existed in real life, surely the constantly moving background would make you feel travel sick? The minute you turn it off you wouldn't be able to walk in a straight line. And if you tried standing up you'd end up leaning to one side. There are warnings on TV before episodes, like for flashing images for example. But no one warns for a lot of motion. They should, though. Accessibility should apply to TV. I'd like to be able to turn off motion on my TV and watch the version where the background rectangles stay where they are. Then I could concentrate on the episode itself. And maybe I'd have enjoyed it more.
nicm42
1,882,043
Intro to MultiversX blockchain interactions with JavaScript SDK
Writing low-level code for blockchain applications is one thing, but you must also write tools to...
27,816
2024-06-09T12:39:19
https://www.julian.io/articles/multiversx-js-sdk-intro.html
multiversx, blockchain, web3, javascript
Writing low-level code for blockchain applications is one thing, but you must also write tools to interact with the protocol and smart contracts. What is the best programming language for that? A couple exist, but what's quickest to pick up, especially with a great SDK? Of course, JavaScript. I want to cover interactions in a Node.js environment with MultiversX SDK. For web application integration, I will prepare separate articles. **What we will cover in this article:** - How to prepare the environment and tools - How to prepare and broadcast a simple transaction _You will find a video walkthrough at the end of this article._ Let's focus on the Node.js environment first. It will be more helpful to show how the MultiversX JS SDK works in Node.js before introducing you to abstractions like web apps-related tooling. **Preparing the tools** We will need Node.js to interact with the MultiversX blockchain using the JavaScript SDK. If you are a JavaScript developer, there is no need to explain how to install Node.js. If not, it would be better to understand how it all works before reading further, so please learn more about it and then go back to this article. Ok, let's jump into it. First of all, we need to initialize our dummy project. So create a directory, and inside of it, use npm to initialize the Node.js project: ``` npm init ``` It will guide you in creating a package.json file with the setup. Then we must install `@multiversx/sdk-core`, `@multiversx/sdk-network-providers` and `@multiversx/sdk-wallet`. I will use the second library to prepare a convenient way of making network requests and broadcasting transactions. The third library will be required to prepare a 'signer' wallet for tests, but we will primarily work with the sdk-core. ``` npm install @multiversx/sdk-core @multiversx/sdk-network-providers @multiversx/sdk-wallet ``` We will work without the Typescript configuration just for simplicity, but of course, you can also configure this step if you want to. 
The next step is to create a wallet. The simplest way is to use the Devnet Web Wallet. Check [devnet-wallet.multiversx.com](https://devnet-wallet.multiversx.com) and create a new wallet there. Save the password you provided, the seed phrase, and the generated Keystore file. You will need them later. Remember, all that data should stay private, so don't share it with anyone. I will share everything for demo purposes, but it is still on the devnet, so nothing harms me if the data leaks. You also need some funds in your wallet. After connecting to the Web wallet using the Keystore, you will find a faucet. You can request some 'fake' EGLD (the native MultiversX token) for testing. The Web Wallet UI should be intuitive enough, but here are the docs on using it: [Web Wallet docs](https://docs.multiversx.com/wallet/web-wallet/). Ok, let's return to our JavaScript code. Now that we have all the necessary libraries let's create a setup.js file and move the Keystore JSON file to the same directory for convenience. Till now, you should have: ``` node_modules package.json erd....json setup.js ``` We must initialize and prepare some things: our signer to sign transactions with our newly created wallet, our user account to synchronize it with the network, and the network provider to broadcast transactions. You will do that using helper functions from our previously installed npm packages. 
It should look like: ```javascript import { promises } from "node:fs"; import { Address, Account } from "@multiversx/sdk-core"; import { ApiNetworkProvider } from "@multiversx/sdk-network-providers"; import { UserSigner } from "@multiversx/sdk-wallet"; export const senderAddress = "erd10dgr4hshjgkv6wxgmzs9gaxk5q27cq2sntgwugu87ah5pelknegqc6suj6"; export const receiverAddress = "erd176ddeqqde20rhgej35taa5jl828n9z4pur52x3lnfnj75w4v2qyqa230vx"; const keyFilePath = `./${senderAddress}.json`; // Should be always kept privately, here hardcoded for the demo const password = "Ab!12345678"; // The convenient way of doing network requests using the devnet API export const apiNetworkProvider = new ApiNetworkProvider( "https://devnet-api.multiversx.com" ); export const syncAndGetAccount = async () => { const address = new Address(senderAddress); const userAccount = new Account(address); const userAccountOnNetwork = await apiNetworkProvider.getAccount(address); userAccount.update(userAccountOnNetwork); return userAccount; }; // We read the Keystore file contents here const getKeyFileObject = async () => { const fileContents = await promises.readFile(keyFilePath, { encoding: "utf-8", }); return fileContents; }; export const getSigner = async () => { const wallet = await getKeyFileObject(); return UserSigner.fromWallet(JSON.parse(wallet), password); }; ``` The functions are self-explanatory, but basically, the tools are: - apiNetworkProvider - required for broadcasting transactions - syncAndGetAccount - preparing and synchronization of the user account - getSigner - prepare the wallet signer for transaction signing **Sign and broadcast the transaction** With our tools ready, we can review our transactions. We want to use the simplest method to prepare and send some EGLD with an attached custom message. 
The flow is as follows: - We synchronize and prepare the user instance - We prepare the transaction configuration and payload - We increment the nonce using the user's on-chain data - We serialize the transaction data for signing - We sign the transaction data - We get and set the signature on the transaction object - We broadcast the transaction It looks like that in the code: ```javascript import { Transaction, TransactionComputer } from "@multiversx/sdk-core"; import { receiverAddress, syncAndGetAccount, senderAddress, getSigner, apiNetworkProvider, } from "./setup.js"; const sendEgld = async () => { const user = await syncAndGetAccount(); const transaction = new Transaction({ data: Buffer.from("This is the demo transaction!"), gasLimit: 100000n, sender: senderAddress, receiver: receiverAddress, value: 1000000000000000n, // 0.001 EGLD chainID: "D", }); transaction.nonce = user.getNonceThenIncrement(); const computer = new TransactionComputer(); const serializedTransaction = computer.computeBytesForSigning(transaction); const signer = await getSigner(); transaction.signature = await signer.sign(serializedTransaction); const txHash = await apiNetworkProvider.sendTransaction(transaction); console.log( "Check in the explorer: ", `https://devnet-explorer.multiversx.com/transactions/${txHash}` ); }; sendEgld(); ``` For more information about all the helper functions from MultiversX JavaScript SDK, you can check the [autogenerated API documentation](https://multiversx.github.io/mx-sdk-js-core/v13/). There is also a cookbook with the basics like here, but also with much more. You can find it here: [sdk-js-cookbook](https://docs.multiversx.com/sdk-and-tools/sdk-js/sdk-js-cookbook-v13/). I'll also try to prepare more articles on topics like interaction with custom smart contracts and token management, but these will be included in the following articles. **Summary** The MultiversX JavaScript SDK is a powerful tool for interacting with blockchain and smart contracts. 
Here, you have complete logic for managing simple transactions, but you can reuse that for any other transaction. The only difference will be how the transaction payload is built. Follow me on X ([@theJulianIo](https://x.com/theJulianIo)) and YouTube ([@julian_io](https://www.youtube.com/channel/UCaj-mgcY9CWbLdZsC5Gt00g)) or [GitHub](https://github.com/juliancwirko) for more MultiversX magic. Please check the tools I maintain: the [Elven Family](https://www.elven.family) and [Buildo.dev](https://www.buildo.dev). With Buildo, you can do a lot of management operations using a nice web UI. You can [issue fungible tokens](https://www.buildo.dev/fungible-tokens/issue), [non-fungible tokens](https://www.buildo.dev/non-fungible-tokens/issue). You can also do other operations, like [multi-transfers](https://www.buildo.dev/general-operations/multi-transfer) or [claiming developer rewards](https://www.buildo.dev/general-operations/claim-developer-rewards). There is much more. **Walkthrough video** {% embed https://www.youtube.com/watch?v=Fxxdly9QYHw %} **The demo code** - [learn-multiversx-js-sdk-with-examples](https://github.com/xdevguild/learn-multiversx-js-sdk-with-examples/tree/setup-and-transaction)
julian-io
1,881,622
HashiCorp Vault Quickstart
https://github.com/darkedges/quickstart-hashicorp-vault This is a sample project to initialise a...
0
2024-06-08T23:56:17
https://dev.to/darkedges/hashicorp-vault-quickstart-26g6
https://github.com/darkedges/quickstart-hashicorp-vault This is a sample project to initialise a [HashiCorp Vault](https://www.vaultproject.io/) instance with a PKI Instance and generate some secrets that can be used by the ForgeRock Identity Platform. It uses [HashiCorp Terraform](https://www.terraform.io/) to provision the PKI and secrets so that they can be quickly and easily rotated. Secrets are generated in the `volumes/secrets` folder, but this can be easily changed to use Docker Volumes if required. Config for both Vault and Terraform is initially baked into the containers, but can be modified and attached without rebuilding as the folders are mounted to the running containers. Terraform state is also local, meaning you could rerun the Terraform plan from within a running container, thus allowing quick and easy updates and testing without having to rebuild containers. ## Execution The following describes how to run the sample. ### Vault Init The following command will start a HashiCorp Vault instance and initialise it so that you can enter the token in the [HashiCorp Vault UI](http://localhost:8200) ```console docker-compose up qhcv-vault-init ``` returns ```console qhcv-vault-init | VAULT_TOKEN=xxxx.xxxxxxxxx ``` It is also available via ```console cat volumes/vault/keys.json | jq .root_token -r ``` returns ```console xxxx.xxxxxxxxx ``` ### Terraform Apply The following command will perform a Terraform apply to the running HashiCorp Vault instance. It will grab and configure the `VAULT_TOKEN` from the value saved in the previous run. **Note:** If HashiCorp Vault is not running, it will start and initialise it, and that service will remain running in the background. ```console docker-compose run qhcv-terraform ``` The state file will be stored in the `volumes/terraform` folder and the secrets in the `volumes/secrets` folder. 
### Shutdown and cleanup To shut down and clean up, issue the following (depending on OS) ```console docker-compose down rm -rf volumes ``` ```powershell docker-compose down rm -r -force volumes ``` ## Explanation ### Vault Config The Vault container extends an existing HashiCorp Vault container to add - [docker/vault/init/vault-init.sh](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/vault/init/vault-init.sh) - [docker/vault/config/vault-server.json](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/vault/config/vault-server.json) - [docker/vault/config/vault-agent.json](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/vault/config/vault-agent.json) The configs are basic, to show how to get the solution running, but can be extended with your specific needs. ### Vault Init The init script depends on HashiCorp Vault running and checks whether the Vault has been previously unsealed by looking for the file `volumes/vault/keys.json`. If it has not been unsealed, it will issue a request to - initialise the vault with a single `secret` and store the details in `keys.json` - unseal the Vault, using that single `secret`. **Note:** This is not a production solution, as the secrets are not safely stored, and should only be used for local development purposes. ### Terraform Config The Terraform container extends an existing HashiCorp Terraform container to add - Plugins needed to perform the management of the Vault and secrets. - [docker/terraform/scripts/init-vault.sh](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/terraform/scripts/init-vault.sh) Performs the core operations of the script. - [docker/terraform/init/_terraform.tf](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/terraform/init/_terraform.tf) Details about the required providers and their configuration. 
- [docker/terraform/init/certificate_clients.tf](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/terraform/init/certificate_clients.tf) Configuration of any Client Certificates needed. - [docker/terraform/init/certificates_tls.tf](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/terraform/init/certificates_tls.tf) Configuration of any TLS Certificates - [docker/terraform/init/variables.tf](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/terraform/init/variables.tf) Variables used in the plan. - [docker/terraform/init/vault.tf](https://github.com/darkedges/quickstart-hashicorp-vault/blob/main/docker/terraform/init/vault.tf) The core Vault configuration of the PKI It creates - Root Certificate Authority - Intermediate Certificate Authority - Roles - Policies When it runs, it performs the 3 core tasks using the Vault Token derived from `keys.json` - `init` - `plan` - `apply --auto-approve` The state files are stored in `volumes/terraform` It will also export the Root and Intermediate certificates into - `volumes/secrets/qhcv_idam_root.pem` - `volumes/secrets/qhcv_idam_intermediate.pem` ### Secrets The Terraform plan will export secrets into `volumes/secrets` TLS Certificates are exported as `tls.crt` and `tls.key`. Client certificates are exported as `.p12`
darkedges
1,882,038
Swift Custom Array Implementation Using UnsafeMutablePointer
In this article, we explore a custom array implementation in Swift using UnsafeMutablePointer. This...
0
2024-06-09T12:33:58
https://dev.to/binoy123/swift-custom-array-implementation-using-unsafemutablepointer-fl5
swift, customearray, architecture, beginners
In this article, we explore a custom array implementation in Swift using UnsafeMutablePointer. This implementation offers insights into manual memory management, dynamic resizing, and conformance to protocols such as CustomDebugStringConvertible and Sequence. The goal is to provide a detailed overview of the internal workings of Swift arrays. ## Overview The custom array MyArray<T> allows storage and manipulation of elements of any type T. It supports dynamic resizing, appending, insertion, and removal of elements, while ensuring memory safety through proper allocation and deallocation. ## Key Features * **Dynamic Resizing:** Automatically adjusts the capacity of the array when it reaches its limit. * **Memory Management:** Uses UnsafeMutablePointer for low-level memory operations. * **Sequence Conformance:** Implements the Sequence protocol for iteration. * **Debug Description:** Provides a custom debug description for easier debugging. ## Implementation Details #### Properties ``` struct MyArray<T> : CustomDebugStringConvertible, Sequence { private var capacity: Int private var storage: UnsafeMutablePointer<T> private var size: Int var count: Int { return size } var isEmpty: Bool { return size == 0 } ``` **capacity:** The maximum number of elements the array can hold without resizing. **storage:** A pointer to the array's memory storage. **size:** The current number of elements in the array. **count:** Returns the number of elements in the array. **isEmpty:** Checks if the array is empty. 
#### Initialiser ``` init(initialCapacity: Int = 2) { self.capacity = initialCapacity self.size = 0 self.storage = UnsafeMutablePointer<T>.allocate(capacity: initialCapacity) } ``` #### Resizing ``` private mutating func resize() { if size >= capacity { /* Double the capacity */ let newCapacity = capacity * 2 /* Allocating new storage with new capacity */ let newStorage = UnsafeMutablePointer<T>.allocate(capacity: newCapacity) /* Copying the existing elements to the new storage */ for i in 0..<count { newStorage[i] = storage[i] } /* Deallocating old storage */ storage.deallocate() storage = newStorage capacity = newCapacity } } ``` The resize method doubles the capacity when needed and reallocates memory, copying existing elements to the new storage. #### Adding Elements ``` public mutating func append(_ item: T) { resize() storage[size] = item size += 1 } ``` The append method adds a new element to the end of the array, resizing if necessary. #### Inserting Elements ``` public mutating func insert(_ item: T, at index: Int) { guard index >= 0 && index <= size else { fatalError("Index out of bounds") } resize() for i in stride(from: count, to: index, by: -1) { storage[i] = storage[i - 1] } storage[index] = item size += 1 } ``` The insert method inserts an element at a specified index, shifting elements as needed. #### Removing Elements ``` @discardableResult public mutating func remove(at index: Int) -> T { guard index >= 0 && index < size else { fatalError("Index out of bounds") } let removedElement = storage[index] for i in index..<size - 1 { storage[i] = storage[i + 1] } size -= 1 return removedElement } ``` The remove method removes an element at a specified index and returns it. 
#### Clearing the Array ``` public mutating func removeAll() { // Deallocate the existing elements storage.deallocate() capacity = 2 // Reinitialise the storage storage = UnsafeMutablePointer<T>.allocate(capacity: capacity) size = 0 } ``` The removeAll method deallocates all elements and resets the array. #### Subscript ``` subscript(index: Int) -> T { get { guard index >= 0 && index < size else { fatalError("Index out of bounds") } return storage[index] } set { guard index >= 0 && index < size else { fatalError("Index out of bounds") } storage[index] = newValue } } ``` The subscript allows getting and setting elements at a specified index. #### Sequence Conformance ``` func makeIterator() -> AnyIterator<T> { var index = 0 return AnyIterator { guard index < self.size else { return nil } let element = self.storage[index] index += 1 return element } } ``` The makeIterator method provides an iterator for the array. #### Debug Description ``` var debugDescription: String { var result = "[" for i in 0..<size { result += "\(storage[i])" if i < size - 1 { result += ", " } } result += "]" return result } ``` The debugDescription property returns a string representation of the array. ## Conclusion This custom array implementation demonstrates how to manage memory manually and provides basic array functionalities. While UnsafeMutablePointer offers powerful capabilities, it requires careful handling to avoid memory leaks and ensure safety. This implementation serves as an educational example and can be extended for more advanced use cases. Please find the complete source code [here](https://github.com/benoy/MyArray)
binoy123
1,882,044
YOLOv10 on Custom Dataset
What is YOLO? You Only Look Once (YOLO) is a state-of-the-art, real-time object detection...
0
2024-06-09T12:33:41
https://dev.to/wydoinn/yolov10-on-custom-dataset-4dld
ai, machinelearning, deeplearning, python
## What is YOLO? **You Only Look Once (YOLO)** is a state-of-the-art, real-time object detection algorithm . ![YOLOv10 Architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/22c4x05mrv5pu3ijjo26.jpeg) ## What makes YOLO popular? - Speed - Detection accuracy - Good generalization - Open-source Google Colab is an excellent platform for running deep learning models due to its free access to GPUs and ease of use. This guide will walk you through the process of running the latest version, YOLOv10, on Google Colab. ### Before You Start To make sure that you have access to GPU. You can use `nvidia-smi` command to do that. In case of any problems navigate to `Edit` -> `Notebook settings` -> `Hardware accelerator`, set it to `GPU`, and then click `Save`. ``` !nvidia-smi ``` ### Install Required Packages Clone the GitHub repository. ``` !git clone https://github.com/THU-MIG/yolov10.git ``` ``` cd yolov10 ``` ``` !pip install . ``` ## Upload Data To Colab ### Step 1: Mount Google Drive ``` from google.colab import drive drive.mount('/content/drive') ``` ### Step 2: Upload Files Directly ``` from google.colab import files uploaded = files.upload() ``` ### Step 3: Organize Data for YOLOv10 **Images:** - The images directory contains subdirectories for train and val (validation) sets. - Each subdirectory contains the corresponding images for training and validation. **Labels:** - The labels directory mirrors the images directory structure. - Each text file in the labels/train and labels/val subdirectories contains the annotations for the corresponding images. **Annotations Format:** ``` /my_dataset /images /train image1.jpg image2.jpg ... /val image1.jpg image2.jpg ... /labels /train image1.txt image2.txt ... /val image1.txt image2.txt ... 
data.yaml ``` **Data Configuration File (data.yaml):** ``` train: /content/my_dataset/images/train val: /content/my_dataset/images/val nc: N # N for number of classes names: ['class1', 'class2', ..., 'classN'] ``` ## Download Pre-trained Weights ``` import os import urllib.request # Create a directory for the weights in the current working directory weights_dir = os.path.join(os.getcwd(), "weights") os.makedirs(weights_dir, exist_ok=True) # URLs of the weight files urls = [ "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10n.pt", "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10s.pt", "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10m.pt", "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10b.pt", "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10x.pt", "https://github.com/jameslahm/yolov10/releases/download/v1.0/yolov10l.pt" ] # Download each file for url in urls: file_name = os.path.join(weights_dir, os.path.basename(url)) urllib.request.urlretrieve(url, file_name) print(f"Downloaded {file_name}") ``` ## Train Custom Model ``` !yolo task=detect mode=train epochs=100 batch=4 plots=True model=weights/yolov10n.pt data=data.yaml ``` ## Inference on Image ``` !yolo task=detect mode=predict conf=0.25 save=True model=runs/detect/train/weights/best.pt source=img.jpg ``` ## Inference on Video ``` !yolo task=detect mode=predict conf=0.25 save=True model=runs/detect/train/weights/best.pt source=video.mp4 ``` ## Summary This guide covers running YOLOv10 on Google Colab by setting up the environment, installing necessary libraries, and running inference with pre-trained weights. It also explains how to upload and organize data in Colab for YOLOv10, including the required directory structure and configuration files. These steps enable efficient training and inference for object detection models using Colab's resources.
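Before launching training, it can be worth sanity-checking that every image in the layout above has a matching label file. A small helper (illustrative; the function name and `.jpg`-only assumption are not part of the YOLOv10 tooling) could look like:

```python
from pathlib import Path


def find_unlabelled(root: str, split: str) -> list[str]:
    """Return stems of images in `split` that have no matching label file."""
    images = Path(root) / "images" / split
    labels = Path(root) / "labels" / split
    missing = []
    for img in sorted(images.glob("*.jpg")):
        # YOLO expects labels/<split>/<stem>.txt for images/<split>/<stem>.jpg
        if not (labels / f"{img.stem}.txt").exists():
            missing.append(img.stem)
    return missing
```

Running something like `find_unlabelled("/content/my_dataset", "train")` before the `yolo task=detect mode=train ...` command catches annotation gaps early instead of partway through an epoch.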
wydoinn
1,882,052
How to setup Deep Links for Android applications
Deep links are special URIs that take users directly to specific content within a mobile app, rather...
0
2024-06-09T12:24:12
https://wannabedev.io/guides/how-to-setup-deep-links-for-android-applications
android, mobile, learning, deeplinks
Deep links are special URIs that take users directly to specific content within a mobile app, rather than just launching the app or opening a webpage in a browser. These links enhance user experience by letting users navigate to precise content or a particular section of the app with just one click. ## Types of links On Android, there are clear distinctions between different types of links: Deep Links, Web Links, and Android App Links. To implement deep link handling on Android, it's essential to understand how each type behaves. The following illustration shows the relationships between these types of links. ![Relation between different links](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kv4mcazuj1tbohejmkdv.png) In the image, you can see that Web Links and Android App Links are special cases of deep links. Let's explore them a bit further. ### Deep Links Deep links can take various forms depending on the URI scheme and the type of content they point to within the app. Here is an overview of possible schemes available with deep links: 1️⃣ **HTTP/HTTPS Scheme** These are URLs that can open web pages in a browser or specific content within the app if the app is set up to handle them. Here is an example: ```text https://shopapp.com/shop https://shopapp.com/profile ``` 2️⃣ **Custom Scheme** These are **custom URI schemes** defined by the app. They don't open in a web browser and are designed to be **handled only** by the app. Here is an example: ```text shopapp://shop shopapp://profile ``` Notice that the URI starts with `shopapp`; this is called a **custom scheme**. If there is an app on your Android device configured to handle this custom scheme, Android will delegate handling of the link to the capable app. In simple terms, deep links are URIs that navigate users directly to specific content within a mobile app and can use both HTTP/HTTPS and custom URI schemes. ### Web Links Web links are deep links that use the **HTTP/HTTPS schemes**. 
Starting with Android 12, clicking a Web Link that is not an *Android App Link* **always shows** content in a web browser. On devices running previous versions of Android, if the app or other apps installed on a user's device can also handle the Web Link, users might not go directly to the browser. Instead, they'll see a system dialog letting them choose which app to use to open the link. ### Android App Links Android App Links are a special type of URL designed for Android that looks similar to a regular web link (using HTTP/HTTPS). They allow users to be directed to specific content within an app, instead of a web page. Here are some examples: ```text https://www.shopapp.com/product/67890 https://www.shopapp.com/profile/12345 ``` When the user clicks on an Android App Link, the app opens immediately if it's installed. The Android system will not show any modals to the user; it just works. If the user doesn't want your app to be the default handler, they can override this behavior in the app's settings. App Links are required to support deep linking behavior using HTTP/HTTPS scheme on Android 12 and onwards, ensuring a seamless user experience. ## Implicit Intent Before configuring deep links, it's important to understand how the Android system handles such links. This is done using something called an *Intent*. In the world of Android apps, intents act as messengers. These messages coordinate actions between different parts of an app, or even between different apps entirely. There are two main types of intents: explicit intents and implicit intents. Today, we'll focus on implicit intents because they play a key role in deep linking and how the Android system directs users to specific app content. Implicit intents specify a **general action to be performed**. They **do not specify** which application to use for performing the action. The system decides which installed application is best suited to handle the implicit intent. 
To achieve this, the system checks something called **intent filters**. ### How Deep Links use Implicit Intents Deep links rely on **intent filters** specified in the app's `AndroidManifest.xml` to handle particular URL schemes or host/path combinations. When a deep link is triggered (e.g., a user clicks a link), an **implicit intent is created** with the action `Intent.ACTION_VIEW` and a data URI specifying the URL to be handled. Next, the Android system matches this intent against the intent filters declared by installed applications to find the appropriate activity to handle the link. When setting up deep links, our goal is to ensure that the system has all the necessary data required to start our application when the designated deep link gets triggered. ### Understanding intent filters When you define an intent filter in your `AndroidManifest.xml` to handle deep links, the system uses the specified `<data>` attributes to match incoming intents to the correct activity. The `<data>` element can specify the *scheme*, *host*, *port*, *path*, *pathPrefix*, and *pathPattern*. If these attributes do not match the incoming URL, the intent filter will not match, and the activity will not be triggered (e.g., the application will not start). Intent filters are crucial in handling deep links, allowing you to fine-tune how your application responds to specific links. We will discuss this in more detail in the next sections. ## Deep Link flow Deep links can seem complex at first, but the following flowchart will break it down into simpler steps. Let's see how deep links navigate users to specific app content. ![Flowchart that shows how the Android handles deep links](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzd4urgmqkel20wbnkxj.png) The flowchart illustrates how the Android system resolves deep links, which can be in the form of a custom domain or an App Link. The process begins with the user triggering the deep link. 
This can happen by pressing a button or clicking a link, for example in an email or messaging app. Upon this action, the Android system creates an implicit intent using the URL associated with the button or link. The system checks the intent filters declared in `AndroidManifest.xml` for every installed app to find a matching filter that can handle the URL, and analyzes the deep link further to determine its specific type. If no matching app is found, a link with a custom URI scheme simply isn't handled by the Android system, while an HTTP/HTTPS link opens in a web browser. If the system finds a matching intent filter, it takes further action based on the deep link type. For custom URI schemes, the link is directly opened within the corresponding app. However, for App Links, an additional step verifies domain ownership to ensure security. We'll explore this verification process in detail later. For now, let's focus on what happens after successful verification of an App Link. Upon successful domain ownership verification, the link opens directly within the app. However, if verification fails, the link will be opened in a web browser for user safety. I hope this gives you a clearer picture of how deep links are processed and handled by the system. If it's still not clear, consider reading this section once more. Now, let's dive deeper and explore how to configure App Links to leverage this functionality within your own app! ## Using App Links As mentioned earlier, App Links are a special type of URL designed for Android that looks similar to a regular web link. Since App Links use the HTTP/HTTPS scheme, we need to configure them for designated web URLs. Setting up App Links involves a few key steps to ensure that your application can handle URLs and open the app directly when those URLs are clicked.
Before we start, let's say we want to configure the app so that when the following link is clicked: `https://shopapp.com/shop`, it opens the *ShopActivity* inside the app itself. There are a couple of important pieces of information that we can extract from the link: - Host: `shopapp.com` - Scheme: `https` - Pathname: `/shop` We will later use this information to construct the required intent filter needed to open the app when the link is triggered. So, let's get started. ### Update Manifest file First, we need to declare intent filters in the `AndroidManifest.xml` for the activity that should handle our link. ```xml <activity android:name=".ShopActivity"> <intent-filter android:autoVerify="true"> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <!-- Define the scheme, host and path --> <data android:scheme="https" android:host="shopapp.com" android:pathPrefix="/shop" /> </intent-filter> </activity> ``` So, what just happened? Let's break it down. One very important attribute is `android:autoVerify="true"`, and links **will not work** without it! According to Google, this attribute allows the app to designate itself as the default handler of a given type of link. So, when the user clicks on an Android App Link, your app opens immediately if it's installed. The intent filter that we just created will only match `https://shopapp.com/shop`, but what if we want to open the app when `https://shopapp.com` link is clicked? 
Well, we can modify the intent filter and remove the `android:pathPrefix="/shop"` attribute, like this: ```xml <intent-filter android:autoVerify="true"> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <!-- Define the scheme and host --> <data android:scheme="https" android:host="shopapp.com" /> </intent-filter> ``` This will match any URL with the scheme **https** and host `shopapp.com`, regardless of the path. So, URLs like `https://shopapp.com`, `https://shopapp.com/shop`, and `https://shopapp.com/profile/1117` would all open the application. You might wonder, when this is the case, why even use the `android:pathPrefix` attribute? There are a couple of reasons to consider, but the most important is *Selective Handling*. This basically means that you can ensure that **only specific paths** within your domain are handled by your application. If your website has different parts like `/shop`, `/categories` and `/profile`, and you only want the app to handle links under `/shop`, you would use `android:pathPrefix="/shop"` to ensure that the app only opens links that have `/shop` as a path. ### Create the Digital Asset Links File Great, now we have an intent filter that says to the system: “Hey, when you see this link `https://shopapp.com/shop` being clicked somewhere, please open our app”. But this is still not enough. The system will not trust our app without **proof that we own the link domain**. Why, you might wonder? Well, we could have added an intent filter for `https://amazon.com/`, but we obviously do not own this domain, and it would be really weird, and a major safety hazard, if the system used our app to handle a link that points to the `amazon.com` domain. So how can we prove to the system that we actually own the domain of the link? This is where the **Digital Asset Links File** comes into play.
This file is used to prove the ownership of the domain (i.e., the association between your website and your app). This file needs to be named `assetlinks.json`, and as the file extension implies, it needs to be in JSON format. This is how the file might look: ```json [ { "relation": ["delegate_permission/common.handle_all_urls"], "target": { "namespace": "android_app", "package_name": "com.shopapp.app", "sha256_cert_fingerprints": [ "XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX:XX" ] } } ] ``` You can see that the file is an array, and the configuration for our application is contained in an object. This means if you own multiple apps that can open the links from your website, you need to list the configuration for both (or more) apps in this file. For Android apps, you will have to modify only the `package_name` and `sha256_cert_fingerprints` attributes. The `package_name` is pretty self-explanatory; it is the package name of your app. In the `sha256_cert_fingerprints` attribute, you need to put the SHA-256 fingerprint of your app's signing certificate. Multiple fingerprints can be added (e.g., debug and production). > ⚠️ Warning > When adding a certificate, it is important to use capital letters for HEX values! Not using capital letters might prevent the system from opening your app (errorCode: ERROR_CODE_MALFORMED_CONTENT). If you do not know your SHA-256 fingerprint, you can use the following `keytool` command to get it: ```bash keytool -list -v -keystore <path-to-keystore> -alias <key-alias> -storepass <keystore-password> -keypass <key-password> ``` Make sure to replace the following: - `<path-to-keystore>` - Path to the keystore file used to sign the app. - `<key-alias>` - Alias name given to the keystore when it was created. - `<keystore-password>` - Keystore password given to the keystore when it was created. - `<key-password>` - Key password given when the keystore was created.
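Since a lowercase or malformed fingerprint silently breaks verification (see the warning above), it can be worth linting the file before uploading it. The following is a minimal, hypothetical Python sketch (not an official tool) that checks the basic structure and fingerprint format of an `assetlinks.json` document:

```python
import json
import re

# A valid SHA-256 fingerprint: 32 colon-separated pairs of UPPERCASE hex digits.
FINGERPRINT_RE = re.compile(r"^([0-9A-F]{2}:){31}[0-9A-F]{2}$")

def validate_assetlinks(raw: str) -> list:
    """Return a list of problems found in an assetlinks.json document."""
    problems = []
    for i, entry in enumerate(json.loads(raw)):
        target = entry.get("target", {})
        if not target.get("package_name"):
            problems.append(f"entry {i}: missing package_name")
        for fp in target.get("sha256_cert_fingerprints", []):
            if not FINGERPRINT_RE.match(fp):
                problems.append(f"entry {i}: malformed or lowercase fingerprint: {fp}")
    return problems

# A lowercase or truncated fingerprint is reported:
print(validate_assetlinks(
    '[{"target": {"package_name": "com.shopapp.app", '
    '"sha256_cert_fingerprints": ["ab:cd"]}}]'
))
```

An empty returned list means the file at least has the right shape; it does not, of course, guarantee the fingerprint matches your actual signing certificate.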
It is important to add the correct SHA-256 fingerprint to the `sha256_cert_fingerprints` attribute or the link will not work! If you are debugging the app using the default Android keystore, you can get the fingerprint for it using the following command and add it to the `sha256_cert_fingerprints` as well. ```bash keytool -list -v -keystore ~/.android/debug.keystore -alias androiddebugkey -storepass android -keypass android ``` With this done, your `assetlinks.json` should be ready, and we can move to the next step. ### Upload Digital Asset Links File to website host We've created the `assetlinks.json` file, but it won't work its magic just sitting on your computer. Let's explore what the Android system needs from us to leverage this file for App Link verification. When the Android system tries to verify the `https://shopapp.com/shop` link, it will make a GET request to a specific location on our website. In our example, the request URL will look like this: ```bash https://shopapp.com/.well-known/assetlinks.json ``` Notice the `.well-known/assetlinks.json` path in the link. The Android system will try to get our Digital Asset Links File from that location. This means we need to upload/host our previously created `assetlinks.json` in that specific location. > ⚠️ Warning > The `assetlinks.json` file must be hosted in the `.well-known` directory at the **root of the domain** to work correctly. > That means, URL like `https://shopapp.com/shop/.well-known/assetlinks.json` will not work because `/shop` path is added. Here's what the Android system expects for your `assetlinks.json` file to function correctly: - Directly Accessible: The file must be reachable without any redirects. - Open to Bots: The file needs to be accessible by automated programs (bots). - Correct Content Type: The file's content type should be identified as `application/json`. - Secure Connection: The file must be served over a secure HTTPS connection for added security. 
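Because the file is only ever looked up at the root of the domain, a tiny helper can make the expected URL explicit. This is an illustrative Python sketch (the `assetlinks_url` function is hypothetical), assuming you pass a bare domain name:

```python
# Hypothetical helper: builds the URL where Android expects the
# Digital Asset Links file: always at the ROOT of the domain.
def assetlinks_url(domain: str) -> str:
    domain = domain.strip().rstrip("/")
    if "/" in domain:
        # A path such as "shopapp.com/shop" would produce a location
        # the verifier never checks, so reject it outright.
        raise ValueError("pass a bare domain, not a URL with a path")
    return f"https://{domain}/.well-known/assetlinks.json"

print(assetlinks_url("shopapp.com"))
# https://shopapp.com/.well-known/assetlinks.json
```

Rejecting any input containing a path mirrors the warning above: a file under `/shop/.well-known/` is simply never fetched by the system.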
In most cases, simply uploading your `assetlinks.json` file as a static file to `https://<your_domain_name>/.well-known/assetlinks.json` will be good enough to satisfy all the requirements. ### Verify that Digital Asset Links File is correct Let's assume we have our Digital Asset Links File hosted at `https://shopapp.com/.well-known/assetlinks.json` and ready to be used. How can we know that the configuration is correct and that the Android system will be able to use it? Luckily, Google provides a Digital Asset Links API to verify this. ```text https://digitalassetlinks.googleapis.com/v1/statements:list?source.web.site=https://<domain_name>&relation=delegate_permission/common.handle_all_urls ``` You can execute this request straight in your browser; make sure to replace `<domain_name>` with the correct domain name of your website. If the API encounters any problems processing your assets file, it will return an error code along with additional details about the issue. This information can help you diagnose and fix problems within your asset file. ### Test App Links on Device The best way to know if deep links are working is, obviously, to test them on a device. To test deep links, you can either use a real device or an Android emulator. Before any testing can happen, let's go through some setup steps: - Make sure that `AndroidManifest.xml` is properly configured. - Use the correct signing certificate (keystore), the one you added to the `assetlinks.json` file. - Create a new app build. - Install the app on a device or emulator. With the setup out of the way, we can start testing deep links. The first thing we can do is artificially create test deep links using the `adb` command like this: ```bash adb shell am start -W -a android.intent.action.VIEW \ -d "https://shopapp.com/shop" ``` This command simulates a deep link click on your connected device or emulator.
It essentially tells the system to launch an activity capable of handling the provided URL, mimicking how the system would normally create an implicit intent for a deep link. Think of it as manually crafting an intent to test your app's deep link response. Remember to replace `https://shopapp.com/shop` with your actual deep link. If our setup was correct, the system will open the link in our app rather than in the browser. For a more hands-on test, you can leverage the deep link itself! Send yourself an email or message containing the link, or generate a QR code from the link and scan it using your device's camera app. If the app is installed on your device, clicking the link or scanning the QR code should automatically launch the app and navigate to the intended content. If you have issues, make sure you followed the guide correctly and that nothing was skipped. ## Using Deep Links with Custom Scheme Deep links with a custom scheme are another way to implement deep linking functionality. While deep links via the HTTP/HTTPS scheme are more commonly used, custom schemes can also come in handy in some cases. So how do we set it up? Luckily, it can be done in just one step, compared to App Links, which involve quite a few more steps. ### Update manifest file As with App Links, we need to define an intent filter in the `AndroidManifest.xml` file to specify the custom URI scheme that the app will handle. ```xml <activity android:name=".ShopActivity"> <intent-filter> <action android:name="android.intent.action.VIEW" /> <category android:name="android.intent.category.DEFAULT" /> <category android:name="android.intent.category.BROWSABLE" /> <!-- Define custom scheme --> <data android:scheme="shopapp" /> </intent-filter> </activity> ``` The intent filter for a custom scheme deep link resembles the one for App Links, but with a key difference. The crucial attribute here is `android:scheme`, which specifies the custom scheme your app will respond to.
In this example, the `shopapp` scheme is defined. It goes without saying, but we also need to add code to the Activity to handle the deep link intent. However, as this is not part of our topic, we will skip it. Believe it or not, with this setup, the configuration is actually complete. Our app should open when someone triggers a link like this one: ```text shopapp://shop ``` The Android system will create an intent, check intent filters, and open the app immediately without any validation or modals in between—it just works. ### Test link on Device To test your custom scheme, similar to App Links, we can use the `adb` command. ```bash adb shell am start -a android.intent.action.VIEW \ -d "shopapp://shop" ``` The command triggers an activity on the connected Android device or emulator to handle the specified URL, testing how the app responds to the deep link. If the app is installed and set up correctly, the link should open in the app. Another way to test it is by creating a simple HTML file with a link that uses our custom scheme. ```html <!DOCTYPE html> <html> <head> <title>Test Custom Scheme</title> </head> <body> <a href="shopapp://shop">Open App</a> </body> </html> ``` You can use a simple static HTTP server to serve the file. Open the file on the device and click on the "Open App" button. When the link is triggered, the system should open the link in the app. ## Wrapping up Deep links are a powerful tool in today's mobile development landscape. Integrating deep links into your app can significantly improve user interaction by allowing users to access specific features or content directly from external sources. We explored key deep link concepts and provided instructions for setting up deep links in Android applications. By understanding these concepts and implementing the discussed techniques, developers can leverage deep links to enhance user navigation, streamline user journeys, and ultimately create a more engaging app experience.
Whether through custom URI schemes or verified web links (App Links), deep linking offers a robust solution for guiding users to specific content and improving the overall app experience. --- Check out other articles on [wannabedev.io](https://wannabedev.io/).
rmmgc
1,882,051
Postmortem: A Guide to Learning from Failure
Introduction In the world of software development, postmortems are a crucial step in...
0
2024-06-09T12:23:33
https://dev.to/ferdi_code/postmortem-a-guide-to-learning-from-failure-1bbm
## Introduction

In the world of software development, postmortems are a crucial step in ensuring that we learn from our mistakes and improve our processes. A postmortem is a detailed analysis of a project or incident that identifies what went wrong, what went right, and what we can do better next time. In this blog, we will explore the importance of postmortems, how to conduct a successful postmortem, and provide a template to help you get started.

### Why Postmortems Matter

Postmortems are essential for several reasons:

- **Learning from Failure**: Postmortems help us identify the root causes of failures and provide actionable steps to prevent them from happening again.
- **Improving Processes**: By analyzing what went well and what didn't, we can refine our processes and make them more efficient.
- **Enhancing Communication**: Postmortems promote open communication among team members, ensuring that everyone is on the same page and working towards the same goals.

### Conducting a Successful Postmortem

To conduct a successful postmortem, follow these steps:

1. **Schedule the Meeting**: Schedule the postmortem meeting as close to the project's completion as possible.
2. **Prepare the Agenda**: Create an agenda that includes the following sections:
   - **What Went Well**: Identify the strengths and successes of the project.
   - **What Went Wrong**: Identify the challenges and failures of the project.
   - **Lessons Learned**: Document the lessons learned from the project.
   - **Action Items**: Create a list of actionable steps to improve future projects.
3. **Prepare the Team**: Ensure that all team members are prepared for the meeting by providing them with a survey or questionnaire to fill out beforehand. This helps to gather their thoughts and opinions on the project.
4. **Conduct the Meeting**: Lead the meeting with a positive and objective mindset. Encourage open communication and ensure that everyone has a chance to share their thoughts and opinions.
5. **Document the Meeting**: Take detailed notes during the meeting and ensure that all action items are documented.

#### Postmortem Template

Here is a template you can use to conduct a successful postmortem:

#### What Went Well

- What were the core strengths of this project team?
- What were the biggest weaknesses of this team?
- Did we get the why? If no, why?

#### What Went Wrong

- What were the biggest challenges faced during the project?
- What were the most significant failures or setbacks?
- What could we have done differently?

#### Lessons Learned

- What did we learn from this project?
- What would we do differently next time?
- What are the key takeaways from this project?

#### Action Items

- What are the actionable steps we can take to improve future projects?
- What are the key changes we need to make to our processes?
- What are the key skills or knowledge we need to acquire?

Here is an example of a postmortem I did as part of my project:

## Postmortem: Outage of the E-commerce Website

### Issue Summary

On June 7, 2024, at 10:45 AM UTC, our e-commerce website experienced an outage that lasted for approximately 2 hours and 15 minutes until it was fully restored at 1:00 PM UTC. The outage affected 30% of our users, causing them to experience slow loading times and occasional errors when attempting to place orders. The root cause of the outage was a misconfigured database connection.

### Timeline

- *10:45 AM UTC*: The issue was detected by our monitoring system, which alerted our DevOps team to a sudden spike in database query times.
- *10:50 AM UTC*: The DevOps team investigated the issue, initially suspecting a high traffic volume due to a recent marketing campaign. They checked the server logs and monitored the database performance.
- *11:15 AM UTC*: The team escalated the issue to the database administration team, assuming it was a database performance issue.
- *11:30 AM UTC*: The database administration team investigated the issue, but their initial findings did not indicate any performance issues.
- *12:15 PM UTC*: The DevOps team re-investigated the issue, this time focusing on the database connection configuration. They discovered a misconfigured database connection that was causing the slow query times.
- *1:00 PM UTC*: The issue was resolved by updating the database connection configuration and restarting the database service.

### Root Cause and Resolution

The root cause of the outage was a misconfigured database connection. This misconfiguration caused the database to take longer to respond to queries, resulting in slow loading times and occasional errors for users. The issue was resolved by updating the database connection configuration and restarting the database service. This ensured that the database was properly connected and queries were processed efficiently.

### Corrective and Preventative Measures

To prevent similar outages in the future, we will:

- **Improve Database Connection Configuration**: Regularly review and update database connection configurations to ensure they are properly set up.
- **Enhance Monitoring**: Implement additional monitoring to detect potential issues earlier, such as monitoring database query times and connection configurations.
- **Database Performance Optimization**: Regularly optimize database performance to prevent slow query times.
- **Database Connection Testing**: Implement automated testing for database connections to detect misconfiguration.
- **Documentation**: Update documentation to include detailed instructions for configuring database connections.

By implementing these measures, we can reduce the likelihood of similar outages and ensure a smoother user experience for our customers.

## Conclusion

The outage of our e-commerce website on June 7, 2024, was caused by a misconfiguration in the database connection. The issue was detected by our monitoring system and resolved by updating the database connection configuration and restarting the database service.
To prevent similar outages in the future, we will improve database connection configuration, enhance monitoring, optimize database performance, implement automated testing for database connections, and update documentation.
ferdi_code
1,882,027
Integrating AI into Your Website: A Step-by-Step Guide with ReactJS and OpenAI
Integrating AI into your website can significantly enhance user experience, especially when dealing...
0
2024-06-09T12:22:58
https://dev.to/limacodes/integrating-ai-into-your-website-a-step-by-step-guide-with-reactjs-and-openai-46b8
ai, openai, react, nextjs
**Integrating AI into your website can significantly enhance user experience, especially when dealing with customer support.** In this article, we will walk you through the process of integrating OpenAI’s latest model into a Next.js React application to create an intelligent FAQ hub for a support company.

> This AI will be trained with a prompt and use external sources as knowledge bases to provide accurate and relevant answers.

## Step 1: Setting Up the Project

First, we need to set up our Next.js application. If you don’t have Next.js installed, you can create a new project by running:

```bash
npx create-next-app@latest faq-hub
cd faq-hub
```

Next, install the necessary dependencies:

```bash
npm install openai react-markdown
```

## Step 2: Configuring the OpenAI API

To interact with OpenAI’s API, you need an API key. Sign up on the OpenAI website if you haven’t already and obtain your API key from the dashboard. Create a file named `.env.local` in the root of your project to store your API key:

```text
NEXT_PUBLIC_OPENAI_API_KEY=your_openai_api_key_here
```

## Step 3: Creating the AI Service

Create a new directory called `services` and inside it, create a file named `openai.js`. This file will contain the function to interact with the OpenAI API:

```javascript
// services/openai.js
export async function fetchOpenAIResponse(prompt) {
  const response = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.NEXT_PUBLIC_OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-davinci-003', // Replace with the latest model
      prompt: prompt,
      max_tokens: 150,
    }),
  });
  const data = await response.json();
  return data.choices[0].text.trim();
}
```

## Step 4: Building the FAQ Component

Now, let’s create a React component to display the FAQ section.
Create a new directory called `components` and inside it, create a file named `FAQHub.js`:

```javascript
// components/FAQHub.js
import React, { useState } from 'react';
import { fetchOpenAIResponse } from '../services/openai';
import ReactMarkdown from 'react-markdown';

const FAQHub = () => {
  const [query, setQuery] = useState('');
  const [response, setResponse] = useState('');

  const handleInputChange = (e) => {
    setQuery(e.target.value);
  };

  const handleSubmit = async (e) => {
    e.preventDefault();
    const aiResponse = await fetchOpenAIResponse(query);
    setResponse(aiResponse);
  };

  return (
    <div>
      <h1>FAQ Hub</h1>
      <form onSubmit={handleSubmit}>
        <input
          type="text"
          value={query}
          onChange={handleInputChange}
          placeholder="Ask a question..."
        />
        <button type="submit">Get Answer</button>
      </form>
      <div>
        <ReactMarkdown>{response}</ReactMarkdown>
      </div>
    </div>
  );
};

export default FAQHub;
```

## Step 5: Integrating the FAQ Component into the Next.js Application

Open the `pages/index.js` file and import the FAQHub component:

```javascript
// pages/index.js
import FAQHub from '../components/FAQHub';

export default function Home() {
  return (
    <div>
      <FAQHub />
    </div>
  );
}
```

## Step 6: Training the Model with a Prompt and External Sources

To enhance the model’s responses, you can prime it with a specific prompt and leverage external knowledge bases. Here’s an example of how you can modify the fetchOpenAIResponse function to include a custom prompt:

```javascript
// services/openai.js
export async function fetchOpenAIResponse(query) {
  const prompt = `You are a highly intelligent FAQ bot. Answer the following question based on the knowledge from our support database and external sources: Question: ${query}`;
  const response = await fetch('https://api.openai.com/v1/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${process.env.NEXT_PUBLIC_OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: 'text-davinci-003', // Replace with the latest model
      prompt: prompt,
      max_tokens: 150,
    }),
  });
  const data = await response.json();
  return data.choices[0].text.trim();
}
```

## Step 7: Deploying the Application

Once you have tested the application locally and ensured everything works as expected, you can deploy your Next.js application to platforms like Vercel or Netlify. If you choose Vercel, you can deploy your application with the following commands:

```bash
npm install -g vercel
vercel
```

Follow the prompts to link your project and deploy it.

## Conclusion

> Congratulations! You have successfully integrated OpenAI into your Next.js React application to create an intelligent FAQ hub.

By leveraging AI, you can provide users with accurate and dynamic responses to their queries, enhancing their support experience. Remember to keep your API keys secure and monitor usage to avoid unauthorized access and potential abuse. **Happy coding!**
limacodes
1,882,025
I’ve worked in IT for over 10 years. Here are 5 things I wish I knew when I started
Hello, dear Dev.to community. I need to get some things off my chest, so here I am, hoping to share...
0
2024-06-09T12:22:22
https://dev.to/vorniches/ive-worked-in-it-for-over-10-years-here-are-5-things-i-wish-i-knew-when-i-started-43pe
burnout, productivity, beginners, career
Hello, dear Dev.to community. I need to get some things off my chest, so here I am, hoping to share something useful with young IT professionals. Over my career, I’ve gone through freelancing, internships, corporate jobs, career changes, and even launching my own SaaS (a story for another time…). I’ve made countless mistakes and learned some painful lessons. Here are 5 important things I wish I had known 10 years ago. ## 1. Consistency is Key There was a time when I doubted everything I did – quality, choices, from direction to tech stack. I switched between technologies, considered quitting what I was doing, and changing careers again. This led to a lack of confidence in my skills, and I often felt deeply demotivated. Add freelancing income and general introversion to the mix – I didn’t even have anyone more experienced to consult to gauge my progress. It was tough – at that time, I mainly built WordPress sites. If I had spent the time wasted on doubts and indecision focusing on one career path, I would have achieved much more, much faster. Choose a path and stick to it – it will yield more results than a broad spectrum of mediocrely developed skills, especially at the start. This also applies to finding your first job. If you can’t land your dream job or any IT job at first, it’s not the end. Yes, it might take months – even years! But if you feel that IT is your place – keep digging in that spot. Find temporary work to stay afloat. Find cheaper housing, live with your parents if you have to. Buy inexpensive and healthy food (hint: the more protein you eat, the less hungry you feel throughout the day). If you systematically dedicate time to development and job hunting – you will succeed. ## 2. You will struggle and not understand things – and that’s normal (and it will get better over time, but not completely) Over time, it will get easier, but the struggle never fully disappears. 
I skipped classes in university, leaving gaps in my fundamental knowledge of computer science that experience didn’t fill. But that’s not the most important thing. The most important thing is that in your work, you will have gaps in knowledge. Maybe not in a specific job, role, or project – you can learn a project thoroughly, especially if you work on it long enough. But it’s normal not to know certain things about your profession in general. You don’t need to know every processor architecture ever created; a system architect doesn’t need to know specific testing tools. You don’t need to know every Amazon service inside out to create a robust testing system. It’s normal. ## 3. Don’t cling to a Bad Job Sometimes you end up in a bad job. Recognizing a bad job is simple – at the end of the day, you want to wrap yourself in a blanket and hide in a corner, and most importantly, there’s no one at work you can talk to about improving the situation. Bad jobs can have various causes – sometimes it’s the team, sometimes the management, sometimes it’s you – not a fit for the role, a hiring mistake, and that’s okay. What’s not okay is clinging to that job. There can be many reasons – no safety net, no suitable alternative, no confidence that a new job will come… and you decide to wait. Wait, endure, drag it out until you burn out completely or are explicitly shown the door, despite your efforts. This can happen at any stage of your career, and you must never let it reach the extreme. If you feel something is wrong, you’re probably right. If you feel a burning desire not to go to work – something is wrong. Cut those ties, or you’ll burn out or grow roots in a bad place for weeks, months, even years, without the strength to change anything. And when the breaking point comes, you’ll face it even more depleted. ## 4. Frequently changing jobs can be beneficial, but not for everyone I still see recommendations for beginner programmers: change jobs more often. 
This way, they say, you'll gain more experience. A year here, six months there, and in three or four years, you're as experienced as a senior. This can work. But it's not for everyone. People differ in how they can concentrate and maintain attention. If you don't have focus issues, you can easily work for several years in one place and learn all the processes thoroughly – this will increase your value in the current company and give you stories to tell in future interviews. People underestimate deep understanding, but many positions and companies value it. Job hopping can also be useful, particularly for people who struggle to maintain attention once a task is understood. For these people, when surprises at work run out or nearly run out, the job becomes routine, and they might start sabotaging it. If you feel something like this – it might be your case, and you need to jump from the familiar to the unknown. Again and again. Over time, such people become super adaptive specialists, for whom neither a new language nor a new field is a hindrance. It's important to recognize in time what suits you personally. ## 5. Don't miss opportunities, even if they seem small or insignificant A career in test automation changed my life for the better. This opportunity was always in front of me. I thought about trying it more than once, even started learning something but dropped it – I thought testing wasn't serious, and it was a bad idea to switch to testing after several years of web development (haha). It turned out I could build a serious career in this field without significant effort. Switching from bar work to web development was a much bigger effort for me. The same goes for jobs to support yourself. My first web development job earned me $50. I made two WordPress sites – one for $30 and one for $20. It was not bad since I was learning from scratch. All my previous work experience was mostly behind a bar.
Though I positioned myself (mostly in my head) as a web developer, I took any job – from writing texts to editing images. My largest single payday in the first 2-3 years of freelancing came from Photoshop-editing several thousand movie posters. Three days and three sleepless nights of almost nonstop work earned me $500 – a fantastic result for those times. ## And one more thing: Jargon and Abstractions Much of what you read, listen to, and do can be so confusing and complicated that it becomes white noise. Sometimes one incomprehensible thing flows into another, leaving an unpleasant mark and a sense of limitation. But that's normal! Once you start untangling the knots of abstractions and realizing what lies behind the terms and jargon, everything quickly falls into place. It may seem like this tangle has no end, but it does end – sooner or later, you'll understand everything (or almost everything). Practically, programming forums and technical podcasts helped me a lot. I just read and listened to everything, googling every unknown word and term. At some point, this leads to dozens and hundreds of tabs in browsers on your phone and computer, but eventually, this flow starts to shrink. With each tab you read, you become smarter and more confident in your knowledge, even if it doesn't seem so for a long time. --- I hope this note will be helpful and inspire someone not to fear changes, to seek their place, and not to give up at the first difficulties. Remember, every path is unique, and it's important to find your own, following your interests, aspirations, and paying attention to your feelings. Everything will work out, but still, good luck.
vorniches
1,882,045
How to Manage Services in Linux: systemd and SysVinit Essentials - DevOps Prerequisite 8
Service and Daemon Management in Linux: Mastering systemd and SysVinit Effective...
0
2024-06-09T12:19:00
https://dev.to/iaadidev/how-to-manage-services-in-linux-systemd-and-sysvinit-essentials-devops-prerequisite-8-1jop
linux, commands, services, daemon
## Service and Daemon Management in Linux: Mastering systemd and SysVinit Effective management of services and daemons is a critical aspect of Linux system administration. Services and daemons are background processes that perform essential functions, such as handling network requests, managing hardware, and running scheduled tasks. In this comprehensive guide, we will explore how to manage services and daemons using `systemctl` (systemd) and `service` (SysVinit). We will cover how to start, stop, enable, and disable services, and include relevant code snippets to illustrate these concepts. ### Table of Contents 1. **Introduction to Services and Daemons** 2. **Understanding systemd and SysVinit** 3. **Managing Services with systemd and systemctl** - Checking the Status of a Service - Starting and Stopping Services - Enabling and Disabling Services - Restarting and Reloading Services - Viewing Logs 4. **Managing Services with SysVinit and service** - Checking the Status of a Service - Starting and Stopping Services - Enabling and Disabling Services - Restarting Services 5. **Creating and Managing Custom Services** - Creating a Custom systemd Service - Managing Custom Services with systemd - Creating a Custom SysVinit Service - Managing Custom Services with SysVinit 6. **Advanced Service Management** - Masking and Unmasking Services - Editing Service Configuration Files - Dependency Management 7. **Troubleshooting Common Issues** - Analyzing Logs - Debugging Service Failures - Recovering from Service Misconfigurations 8. **Best Practices for Service Management** - Regular Monitoring - Security Considerations - Backup and Recovery ### 1. Introduction to Services and Daemons Services and daemons are fundamental components of a Linux system. A service is a program that runs in the background and provides essential functions, while a daemon is a type of service that is specifically designed to run unattended. 
Examples of services and daemons include web servers (e.g., Apache), database servers (e.g., MySQL), and system services (e.g., cron). ### 2. Understanding systemd and SysVinit Linux systems use different init systems to manage services and daemons. The two most common init systems are systemd and SysVinit. - **systemd:** The most widely used init system in modern Linux distributions. It provides a comprehensive suite of tools and features for managing services, including parallel startup, on-demand activation, and dependency management. - **SysVinit:** An older init system that uses simple scripts to start and stop services. It is still used in some distributions but has largely been replaced by systemd. ### 3. Managing Services with systemd and systemctl `systemctl` is the primary command-line tool for managing services in systemd. It provides a wide range of options for starting, stopping, enabling, and disabling services. #### Checking the Status of a Service To check the status of a service, use the following command: ```bash sudo systemctl status <service_name> ``` Example: ```bash sudo systemctl status apache2 ``` #### Starting and Stopping Services To start a service, use the `start` command: ```bash sudo systemctl start <service_name> ``` Example: ```bash sudo systemctl start apache2 ``` To stop a service, use the `stop` command: ```bash sudo systemctl stop <service_name> ``` Example: ```bash sudo systemctl stop apache2 ``` #### Enabling and Disabling Services To enable a service to start automatically at boot, use the `enable` command: ```bash sudo systemctl enable <service_name> ``` Example: ```bash sudo systemctl enable apache2 ``` To disable a service, preventing it from starting at boot, use the `disable` command: ```bash sudo systemctl disable <service_name> ``` Example: ```bash sudo systemctl disable apache2 ``` #### Restarting and Reloading Services To restart a service, use the `restart` command: ```bash sudo systemctl restart <service_name> ``` 
Example: ```bash sudo systemctl restart apache2 ``` To reload a service's configuration without restarting it, use the `reload` command: ```bash sudo systemctl reload <service_name> ``` Example: ```bash sudo systemctl reload apache2 ``` #### Viewing Logs To view logs for a specific service, use the `journalctl` command: ```bash sudo journalctl -u <service_name> ``` Example: ```bash sudo journalctl -u apache2 ``` ### 4. Managing Services with SysVinit and service `service` is the primary command-line tool for managing services in SysVinit. It provides basic functionality for starting, stopping, and checking the status of services. #### Checking the Status of a Service To check the status of a service, use the following command: ```bash sudo service <service_name> status ``` Example: ```bash sudo service apache2 status ``` #### Starting and Stopping Services To start a service, use the `start` command: ```bash sudo service <service_name> start ``` Example: ```bash sudo service apache2 start ``` To stop a service, use the `stop` command: ```bash sudo service <service_name> stop ``` Example: ```bash sudo service apache2 stop ``` #### Enabling and Disabling Services Enabling and disabling services in SysVinit involves updating runlevel directories. The `update-rc.d` command is used for this purpose. To enable a service, use: ```bash sudo update-rc.d <service_name> defaults ``` Example: ```bash sudo update-rc.d apache2 defaults ``` To disable a service, use: ```bash sudo update-rc.d -f <service_name> remove ``` Example: ```bash sudo update-rc.d -f apache2 remove ``` #### Restarting Services To restart a service, use the `restart` command: ```bash sudo service <service_name> restart ``` Example: ```bash sudo service apache2 restart ``` ### 5. Creating and Managing Custom Services Creating custom services allows you to run your own scripts or applications as services. This can be done with both systemd and SysVinit. #### Creating a Custom systemd Service 1. 
**Create the Service File:** ```bash sudo nano /etc/systemd/system/myservice.service ``` 2. **Add Service Configuration:** ```plaintext [Unit] Description=My Custom Service After=network.target [Service] ExecStart=/usr/local/bin/myscript.sh Restart=on-failure [Install] WantedBy=multi-user.target ``` 3. **Reload systemd and Enable the Service:** ```bash sudo systemctl daemon-reload sudo systemctl enable myservice ``` 4. **Start the Service:** ```bash sudo systemctl start myservice ``` #### Managing Custom Services with systemd - **Check the status of the custom service:** ```bash sudo systemctl status myservice ``` - **Stop the custom service:** ```bash sudo systemctl stop myservice ``` - **Restart the custom service:** ```bash sudo systemctl restart myservice ``` #### Creating a Custom SysVinit Service 1. **Create the Init Script:** ```bash sudo nano /etc/init.d/myservice ``` 2. **Add Script Content:** ```bash #!/bin/sh ### BEGIN INIT INFO # Provides: myservice # Required-Start: $network # Required-Stop: $network # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: My Custom Service ### END INIT INFO case "$1" in start) echo "Starting myservice" /usr/local/bin/myscript.sh & ;; stop) echo "Stopping myservice" pkill -f /usr/local/bin/myscript.sh ;; *) echo "Usage: /etc/init.d/myservice {start|stop}" exit 1 ;; esac exit 0 ``` 3. **Make the Script Executable:** ```bash sudo chmod +x /etc/init.d/myservice ``` 4. **Enable the Service:** ```bash sudo update-rc.d myservice defaults ``` 5. **Start the Service:** ```bash sudo service myservice start ``` #### Managing Custom Services with SysVinit - **Check the status of the custom service:** ```bash sudo service myservice status ``` - **Stop the custom service:** ```bash sudo service myservice stop ``` - **Restart the custom service:** ```bash sudo service myservice restart ``` ### 6. 
Advanced Service Management #### Masking and Unmasking Services Masking a service prevents it from being started, either manually or automatically. - **Mask a Service:** ```bash sudo systemctl mask <service_name> ``` Example: ```bash sudo systemctl mask apache2 ``` - **Unmask a Service:** ```bash sudo systemctl unmask <service_name> ``` Example: ```bash sudo systemctl unmask apache2 ``` #### Editing Service Configuration Files You can edit the configuration of systemd services directly. - **Edit a Service File:** ```bash sudo systemctl edit --full <service_name> ``` Example: ```bash sudo systemctl edit --full apache2 ``` After making changes, reload the systemd configuration: ```bash sudo systemctl daemon-reload ``` #### Dependency Management Systemd allows you to manage dependencies between services. - **Adding Dependencies:** In the service file, use directives like `After=`, `Requires=`, and `Wants=` to specify dependencies. Example: ```plaintext [Unit] Description=My Custom Service After=network.target Requires=mysqld.service Wants=apache2.service ``` ### 7. Troubleshooting Common Issues #### Analyzing Logs Logs are crucial for diagnosing issues with services. - **View Logs:** ```bash sudo journalctl -u <service_name> ``` Example: ```bash sudo journalctl -u apache2 ``` #### Debugging Service Failures - **Check Service Status:** ```bash sudo systemctl status <service_name> ``` Example: ```bash sudo systemctl status apache2 ``` - **View Detailed Logs:** ```bash sudo journalctl -xe ``` #### Recovering from Service Misconfigurations If a service fails to start due to misconfiguration: - **Edit the Service File:** ```bash sudo systemctl edit --full <service_name> ``` - **Reload systemd and Restart the Service:** ```bash sudo systemctl daemon-reload sudo systemctl restart <service_name> ``` ### 8. Best Practices for Service Management #### Regular Monitoring Regularly monitor your services to ensure they are running smoothly. 
- **Check Service Status:** ```bash sudo systemctl status <service_name> ``` - **Monitor Logs:** ```bash sudo journalctl -u <service_name> ``` #### Security Considerations - **Limit Access:** Restrict access to service configuration files and management commands. - **Use Secure Configuration:** Ensure services are configured securely, with appropriate permissions and firewall rules. #### Backup and Recovery - **Backup Configuration Files:** Regularly back up service configuration files. ```bash sudo cp /etc/systemd/system/myservice.service /backup/ ``` - **Automate Backups:** Use cron jobs to automate the backup process. ```bash sudo crontab -e ``` Add a cron job: ```bash 0 2 * * * cp /etc/systemd/system/myservice.service /backup/ ``` ### Conclusion Mastering service and daemon management in Linux is essential for maintaining a stable and secure system. Whether you are using systemd or SysVinit, understanding how to start, stop, enable, and disable services is crucial for effective system administration. This guide has provided a comprehensive overview of service management, including advanced techniques and best practices. By applying these concepts, you can ensure your Linux system runs smoothly and efficiently, tailored to your specific needs. Happy managing!
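As a closing aside, the systemd-versus-SysVinit distinction described in section 2 can be probed from a script. The sketch below is a heuristic, not an official interface: the presence of the `/run/systemd/system` directory is the conventional marker that systemd is the running init (the same check the `sd_booted(3)` library call performs), with a fallback that inspects what `/sbin/init` points at.

```shell
#!/bin/sh
# Sketch: guess which init system this host is running.
# detect_init takes an optional root prefix so the logic can be
# exercised against a fake filesystem tree.
detect_init() {
  root="${1:-}"
  if [ -d "$root/run/systemd/system" ]; then
    # systemd creates this directory early at boot (see sd_booted(3))
    echo systemd
  elif readlink "$root/sbin/init" 2>/dev/null | grep -q systemd; then
    # /sbin/init is often a symlink into the systemd tree
    echo systemd
  else
    echo sysvinit
  fi
}

detect_init   # most modern distributions report systemd here
```

Knowing the answer tells you whether `systemctl` (section 3) or `service` plus `update-rc.d` (section 4) is the right toolset for the host.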
iaadidev
1,882,042
Kubernetes: Hello World
Introduction Deploying software can be a daunting and unpredictable task. Kubernetes,...
0
2024-06-09T12:14:45
https://dev.to/pratikjagrut/kubernetes-hello-world-268d
kubernetes, containers
### Introduction Deploying software can be a daunting and unpredictable task. Kubernetes, often referred to as ***K8s***, serves as a proficient navigator in this complex landscape. It is an open-source container orchestration platform that automates the deployment, scaling, and management of applications within containers. These containers are compact, self-sufficient units that house everything an application needs, ensuring consistency across diverse environments. ### Prepare the application **Clone the Repository** In this guide, we're using [***hello-kubernetes***](https://github.com/pratikjagrut/hello-kubernetes), a simple web-based application written in Go. You can find the source code [here](https://github.com/pratikjagrut/hello-kubernetes). ```bash git clone https://github.com/pratikjagrut/hello-kubernetes.git cd hello-kubernetes ``` **Understanding the Code** ```go package main import ( "fmt" "log" "net/http" "os" ) func handler(w http.ResponseWriter, r *http.Request) { log.Printf("Received request from %s", r.RemoteAddr) fmt.Fprintf(w, "Hello, Kubernetes!") } func main() { port := os.Getenv("PORT") if port == "" { port = "8080" } http.HandleFunc("/", handler) go func() { log.Printf("Server listening on port %s...", port) err := http.ListenAndServe(":"+port, nil) if err != nil { log.Fatal("Failed to start the server") } }() log.Printf("Click on http://localhost:%s", port) done := make(chan bool) <-done } ``` This Go application sets up an HTTP server that responds with "Hello, Kubernetes!" and logs request details. It runs the server concurrently, keeping the main function active. **The Dockerfile** The Dockerfile uses a multi-stage build to create a minimal container: * Builds the Go application. * Creates a minimal final image using ***scratch***. * Exposes port ***8080*** and sets the command to run the Go application.
```dockerfile # Builder Stage FROM cgr.dev/chainguard/go:latest as builder # Set the working directory inside the container WORKDIR /app # Copy the application source code into the container COPY . . # Download dependencies using Go modules RUN go mod download # Build the Go application RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o main . # Final Stage FROM scratch # Copy the compiled application binary from the builder stage to the final image COPY --from=builder /app/main /app/main # Expose port 8080 to the outside world EXPOSE 8080 # Command to run the executable CMD ["/app/main"] ``` **Building the Container Image** 1. **Docker**: Install Docker to create container images for your application. Refer to the official [Docker documentation](https://docs.docker.com/get-docker/) for installation instructions. 2. **Image Registry Account**: Sign up for an account on [GitHub](https://github.com), [DockerHub](https://hub.docker.com), or any other container image registry. You'll use this account to store and manage your container images. 3. Open the terminal and navigate to the repository directory. 4. Build the container image using the following command: ```bash docker build -t ghcr.io/pratikjagrut/hello-kubernetes . ``` This command builds the container image using the `Dockerfile` in the current directory. The `-t` flag specifies the image name. **Testing the application image** 1. Once the image is built, run a Docker container from the image: ```bash ➜ docker run -p 8080:8080 ghcr.io/pratikjagrut/hello-kubernetes 2023/08/08 13:25:24 Click on the link http://localhost:8080 2023/08/08 13:25:24 Server listening on port 8080... ``` This command maps port 8080 of your host machine to port 8080 in the container. 2. Open a web browser and navigate to [`http://localhost:8080`](http://localhost:8080). You should see the `Hello, Kubernetes!` message. **Pushing the image to the container registry** Here we've opted for the GitHub container registry.
However, feel free to select a registry that aligns with your preferences. 1. Log in to Docker using the GitHub Container Registry: ```bash docker login ghcr.io ``` When you run the command, it will ask for a username and password. For ghcr.io, enter your GitHub username and a personal access token with package permissions (your account password will not work). 2. Push the tagged image to the GitHub Container Registry: ```bash docker push ghcr.io/pratikjagrut/hello-kubernetes ``` 3. Verify that the image is in your GitHub Container Registry by visiting the `Packages` section of your GitHub repository. Next, we'll set up a Kubernetes cluster to deploy our containerized application. ### Setup Kubernetes cluster Here we'll use KIND (Kubernetes in Docker) as our local k8s cluster. **Installing KIND and kubectl** Before we dive into setting up the Kubernetes cluster, you'll need to install both KIND and kubectl on your machine. * **KIND (Kubernetes in Docker)**: KIND allows you to run Kubernetes clusters as Docker containers, making it perfect for local development. Follow the [official KIND installation guide](https://kind.sigs.k8s.io/docs/user/quick-start/) to install it on your system. * **kubectl**: This command-line tool is essential for interacting with your Kubernetes cluster. Follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/tools/) to install kubectl on your machine. **Creating Your KIND Cluster** Once KIND and kubectl are set up, let's create your local Kubernetes cluster: 1. Open your terminal. 2. Run the following command to create a basic KIND cluster: ```bash kind create cluster ``` 3. Check if the cluster is properly up and running using `kubectl get ns`. It should list all the namespaces present in the cluster.
```bash ➜ kubectl get ns NAME STATUS AGE default Active 3m13s kube-node-lease Active 3m14s kube-public Active 3m14s kube-system Active 3m14s local-path-storage Active 3m9s ``` **Alternative Setup Options:** * **Minikube**: If you prefer another local option, [Minikube](https://minikube.sigs.k8s.io/docs/start/) provides a hassle-free way to run a single-node Kubernetes cluster on your local machine. * **Docker Desktop**: For macOS and Windows users, [Docker Desktop](https://www.docker.com/products/docker-desktop) offers a simple way to set up a Kubernetes cluster. * **Rancher Desktop**: [Rancher Desktop](https://rancherdesktop.io/) is another choice for a local development cluster that integrates with Kubernetes, Docker, and other tools. * **Cloud Clusters**: If you'd rather work in a cloud environment, consider platforms like [Google Kubernetes Engine (GKE)](https://cloud.google.com/kubernetes-engine) or [Amazon EKS](https://aws.amazon.com/eks/) for managed Kubernetes clusters. With your Kubernetes cluster up and running, you're ready to sail ahead with deploying your first application. ### Deploy application on Kubernetes Now, we'll deploy our application onto the Kubernetes cluster. **Create a Kubernetes Deployment** A **Deployment** in Kubernetes serves as a manager for your application's components, known as *Pods*. Think of it like a supervisor ensuring that the right number of Pods are running and matching your desired configuration. In more technical terms, a Deployment lets you define how many Pods you want and how they should be set up. If a Pod fails or needs an update, the Deployment Controller steps in to replace it. This ensures that your application remains available and runs smoothly. To put it simply, a Deployment takes care of keeping our application consistent and reliable, even when Pods face issues. It's a fundamental tool for maintaining the health of your application in a Kubernetes cluster. 
Here's how we can create a Deployment for our application: 1. Create a YAML file named `app-deployment.yaml`: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: hello-k8s-deployment spec: replicas: 2 selector: matchLabels: app: hello-k8s template: metadata: labels: app: hello-k8s spec: containers: - name: hello-k8s-container image: ghcr.io/pratikjagrut/hello-kubernetes ports: - containerPort: 8080 ``` This YAML defines a Deployment named `hello-k8s-deployment` that runs two replicas of our application. 2. Apply the Deployment to your Kubernetes cluster: ```bash kubectl apply -f app-deployment.yaml ``` Now, if you're using the GitHub registry just like me, you'll see an error (`ImagePullBackOff` or `ErrImagePull`, caused by `failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized`) when deploying your application. By default, images on the GitHub Container Registry are private. When you describe the pods, you'll see warning messages in the events section such as `Failed to pull image "`[`ghcr.io/pratikjagrut/hello-kubernetes`](http://ghcr.io/pratikjagrut/hello-kubernetes)`"`. ```bash ➜ kubectl describe pods hello-k8s-deployment-54889c9777-549rn ...
Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 2m40s default-scheduler Successfully assigned default/hello-k8s-deployment-54889c9777-549rn to kind-control-plane Normal Pulling 75s (x4 over 2m39s) kubelet Pulling image "ghcr.io/pratikjagrut/hello-kubernetes" Warning Failed 74s (x4 over 2m39s) kubelet Failed to pull image "ghcr.io/pratikjagrut/hello-kubernetes": rpc error: code = Unknown desc = failed to pull and unpack image "ghcr.io/pratikjagrut/hello-kubernetes:latest": failed to resolve reference "ghcr.io/pratikjagrut/hello-kubernetes:latest": failed to authorize: failed to fetch anonymous token: unexpected status: 401 Unauthorized Warning Failed 74s (x4 over 2m39s) kubelet Error: ErrImagePull Warning Failed 50s (x6 over 2m39s) kubelet Error: ImagePullBackOff Normal BackOff 36s (x7 over 2m39s) kubelet Back-off pulling image "ghcr.io/pratikjagrut/hello-kubernetes" ``` This happened because Kubernetes is trying to pull the private image and it does not have permission to do so. When a container image is hosted in a private registry, we need to provide Kubernetes with credentials to pull the image via **Image Pull Secrets**. 1. Create a Docker registry secret: ```bash kubectl create secret docker-registry my-registry-secret \ --docker-username=<your-username> \ --docker-password=<your-password> \ --docker-server=<your-registry-server> ``` 2. Attach the secret to your Deployment: ```yaml spec: template: spec: imagePullSecrets: - name: my-registry-secret ``` Apply the changes: ```bash kubectl apply -f app-deployment.yaml ``` After applying the updated Deployment, you can see that all the pods are running. ```bash ➜ kubectl get pods NAME READY STATUS RESTARTS AGE hello-k8s-deployment-669788ccd6-4dbb6 1/1 Running 0 22s hello-k8s-deployment-669788ccd6-k5gfg 1/1 Running 0 37s ``` **Access Your Application** With the Deployment in place, we can access our application externally.
Since we're using KIND, we can use port-forwarding to access the application: 1. Find the name of one of the deployed Pods: ```bash kubectl get pods -l app=hello-k8s ``` 2. Forward local port 8080 to the Pod: ```bash kubectl port-forward <pod-name> 8080:8080 ``` Now, if you open a web browser and navigate to [`http://localhost:8080`](http://localhost:8080) or use `curl http://localhost:8080`, you should see "Hello, Kubernetes!" displayed, indicating your application is running successfully. ```bash ➜ curl http://localhost:8080 Hello, Kubernetes!% ``` > NOTE: For production, use a Kubernetes Service and Ingress for optimal traffic handling. ### **Conclusion** In conclusion, this beginner's guide has walked you through deploying your first application on Kubernetes. But remember, this is just the start. Kubernetes offers vast opportunities for optimizing your application's performance, scalability, and resilience. With features like advanced networking, load balancing, automated scaling, and self-healing, Kubernetes ensures seamless application operation in any environment. So, while this guide ends here, your journey with Kubernetes is only beginning. Thank you for reading! Hope you find this helpful!
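The production note above can be made concrete. Instead of port-forwarding to a single Pod, a Service gives the Deployment a stable virtual IP that load-balances across its replicas. Below is a minimal sketch reusing the `app: hello-k8s` label from the Deployment; the Service name is my own choice, not from the original walkthrough:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-k8s-service   # hypothetical name chosen for this sketch
spec:
  selector:
    app: hello-k8s          # matches the Pod labels set by the Deployment
  ports:
    - port: 80              # port the Service exposes inside the cluster
      targetPort: 8080      # containerPort of the application
```

After applying this manifest with `kubectl apply -f`, the application would be reachable inside the cluster at `hello-k8s-service:80`; exposing it externally would additionally require an Ingress or a `NodePort`/`LoadBalancer` Service type.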
pratikjagrut
1,882,041
How to configure a Disk on a virtual machine from Azure Cloud and Install windows features using PowerShell.
**Step 1:** Log in to the Azure portal. Click on Virtual machines and create. **Step 2:** a. Select...
0
2024-06-09T12:13:37
https://dev.to/busybrain/how-to-configure-a-disk-on-a-virtual-machine-from-azure-cloud-and-install-windows-features-using-powershell-3m54
**Step 1: Log in to the Azure portal**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yf1vl2l2nan9vsyp80xf.png)

**_Click on Virtual machines, then Create._**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmg6287ettjfqs2qd99u.png)

**_Step 2:_**
a. Select your Azure subscription.
b. Create or select a unique resource group.
c. Enter a virtual machine name.
d. Select a region. If you require redundancy, pick any of the available options; for this exercise we need none.
e. Select Windows 10 Pro, or any image you wish.
f. Create the admin details (username and password).
g. Select the inbound ports RDP and HTTP, because we will need to communicate with the VM over both.
h. Confirm the licensing checkbox.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6r4ddymahdnbc6bioon7.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rwlzjowwcvj8sd7r8d5o.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sn7imi4vjeoff7gtvko2.png)

**_Step 3: Check the disk information. The default image is fine, or choose another if you prefer, then click Review + create._**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2n5bz67y5gzvyikgohmz.png)

_Once validation has passed as shown below, click Create._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zwyqpqcnw83qixcijy6i.png)

_Once deployment is complete, click "Go to resource"._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k01cadymerpzagxhsna4.png)

**_Step 4: On the virtual machine page, search for "Disks" under Settings._**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzdmdvg06otxiophc5yx.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wr14tk6ubas19l81rgll.png)

_Give the disk a name, choose the size you desire, and leave the rest as default._

_Back on the VM overview page, connect to the VM and select "Native RDP"._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wwbbcatf38qm0qfw0hmj.png)

_Wait for the public IP address to be configured, then download the RDP file._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30q48twq9g4xrb2c9jtb.png)

**_Step 5: Connect to the VM from your local computer. Once you open the RDP file, it will look like this:_**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pe88hbhr43armz3sqm29.png)

_Enter your admin details._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nsgqd02zs9tmwn12bnoj.png)

_Leave the remaining settings as default and click Yes._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g163pesfrpg8iedphetk.png)

**_Step 6: Search for "Disk Management" on your VM. An Initialize Disk window will pop up; leave everything as default and select OK._**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mvlnrsz71co1l4j8fj63.png)

_On the Disk 2 tab, right-click anywhere on it and select "New Simple Volume"._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hco5s01okpv36sctkn2d.png)

_Continue selecting the default options and click Finish until Disk 2 is partitioned._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gk5nbecnsuohry610npm.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jump52rkppl82vonwl8t.png)

**_Congratulations, you have partitioned the disk on your VM._**

**_Step 7: Let's install Windows features on the VM through PowerShell. Open PowerShell on your VM and make sure the prompt shows the name of your VM._**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r1k73hc798tav4lcapt8.png)

**_Now run the command `Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServerRole` and press Enter._**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pd7mtmgdf34plrt7xk6q.png)
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kd9jqn926okoi7jxo903.png)

**_Once it is done: congratulations, you have installed the Windows features and management tools._**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/giz1d7hqgj5nxvp78jej.png)

_Now let's confirm this by copying the VM's IP address and pasting it into a browser._

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i3g6sm2rjnr60iy859np.png)

**_Congratulations! The page below confirms you have attached a disk to your VM and successfully installed the Windows features._**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/paq6mh94ku8j6l4ykyu3.png)
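For easy copy-paste, here is the installation step above together with a quick verification, run in an elevated PowerShell session on the VM. `Get-WindowsOptionalFeature` is the matching cmdlet for checking the result (this snippet assumes administrator rights; it is a sketch, not output from the tutorial itself):

```powershell
# Install the IIS web server role, as shown in Step 7 above.
Enable-WindowsOptionalFeature -Online -FeatureName IIS-WebServerRole

# Optional check: the feature should now report "State : Enabled".
Get-WindowsOptionalFeature -Online -FeatureName IIS-WebServerRole
```

If the check reports the feature as enabled, browsing to the VM's public IP should show the default IIS welcome page, as in the screenshots above.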
busybrain
1,882,028
react js useEffect hook
The useEffect hook is used to handle function side effects in React. A function side effect in...
0
2024-06-09T11:50:06
https://dev.to/kemiowoyele1/react-js-useeffect-hook-1a5l
react, useeffect, javascript, webdev
The useEffect hook is used to handle side effects in React. A side effect in JavaScript is any effect a function has outside of its local scope. Examples of side effects include:

I. Making network requests
II. Manipulating the DOM
III. Fetching data from an API
IV. Changing the value of a variable or an object that is outside the scope of the function
V. setInterval or setTimeout operations
VI. Accessing or modifying Web APIs

By default, the useEffect hook runs every time the component renders. Remember that apart from the initial render, when a component first loads into the DOM, a component is re-rendered whenever there is a state change.

## How to use the useEffect hook

First of all, we need to:

• Import useEffect from react:
`import { useState, useEffect } from "react";`
• Somewhere in your component, before the return statement, call the useEffect hook with two arguments.
• The first argument is a callback function containing the instructions we want to execute whenever the effect is triggered.
• The second, optional argument is an array of dependencies that determines when the effect will run. If the array is empty, the effect runs only once, after the initial render. If the dependency argument is omitted, the effect runs after every render. If values are provided inside the dependency array, the effect runs only when one of those values changes.
• Syntax:

```
useEffect(() => {
  // execute side effect
}, [an array of values that the effect depends on]);
```

**Example**

```
import { useState, useEffect } from "react";

const Home = () => {
  const [name, setName] = useState("");

  const changeName = () => {
    const userName = prompt("what is your name");
    setName(userName);
  };

  useEffect(() => {
    alert(`welcome ${name}`);
  }, [name]);

  return (
    <>
      <h1>welcome {name}</h1>
      <button onClick={changeName}>click me</button>
    </>
  );
};

export default Home;
```
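Under the hood, React decides whether to re-run an effect by comparing each entry in the dependency array against its value from the previous render using `Object.is`. A minimal sketch of that comparison logic (the `depsChanged` helper here is hypothetical, for illustration only; it is not React's actual source):

```javascript
// Sketch: how a dependency array gates an effect.
// Each dependency is compared with its previous value via Object.is.
function depsChanged(prevDeps, nextDeps) {
  if (prevDeps === null) return true; // first render: effect always runs
  if (prevDeps.length !== nextDeps.length) return true;
  return nextDeps.some((dep, i) => !Object.is(dep, prevDeps[i]));
}

console.log(depsChanged(null, ["Ada"]));    // first render -> true
console.log(depsChanged(["Ada"], ["Ada"])); // same value -> false, effect skipped
console.log(depsChanged(["Ada"], ["Lin"])); // value changed -> true, effect re-runs
console.log(depsChanged([], []));           // empty array -> false after first render
```

This is why an empty array means "run once": after the initial render there is nothing that can ever compare as changed.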
kemiowoyele1
1,882,040
EKS Secret Management — with Golang, AWS ParameterStore and Terraform
Table of Contents Introduction InitContainer with GO binary OIDC Federated Access for...
0
2024-06-09T12:08:38
https://dev.to/wardove/eks-secret-management-with-golang-aws-parameterstore-and-terraform-4h24
aws, terraform, go, security
## Table of Contents <a name="Toc"></a>

1. [Introduction](#introduction)
2. [InitContainer with GO binary](#part-1)
3. [OIDC Federated Access for EKS Pods](#part-2)
4. [Application Deployment](#part-3)
5. [Farewell](#farewell)

## Introduction <a name="introduction"></a>

Hey Folks! In this article, we are going to delve into a robust approach to Kubernetes secret management by utilizing the efficiency of Golang, the security and flexibility of AWS ParameterStore, the authentication power of OIDC, and the infrastructure-as-code advantages of Terraform. We will explore ways to enhance your cloud-based applications and significantly bolster your security posture, providing you with a comprehensive understanding of this strategy for revolutionizing your secret management processes.

Keep in mind that we have a few prerequisites. To fully engage with the material and examples we provide, you'll need an AWS account, an EKS cluster, and a configured Terraform project with the AWS provider.

## SSM Parameters

To kick things off, let's explore how we can manage secrets using AWS Systems Manager (SSM) Parameter Store. This AWS service provides secure, hierarchical storage for configuration data management and secrets. Leveraging the Parameter Store can significantly enhance the security posture of your applications by segregating and securing sensitive information like database passwords, license codes, and API keys.

Let's consider a Terraform script to create these SSM parameters, starting with a locals block. This block includes the projects in which we want to manage the secrets and the keys that need to be stored securely.
```hcl
locals {
  projects = {
    demo-project = {
      team_id                 = "demo_team"
      namespace               = "demo"
      platform                = "fargate"
      fqdn                    = "www.huseynov.net"
      ssm_secret_keys         = ["AMQ_USER", "AMQ_PASS"]
      deployment_grace_period = 60
      vpa_update_mode         = "Initial"
      svc_config = {
        service_port   = 8080
        target_port    = 80
        type           = "NodePort"
        lb_access_mode = "internet-facing"
        alb_group      = "demo"
      }
      hpa_config = {
        min           = 1
        max           = 3
        mem_threshold = 80
        cpu_threshold = 60
      }
    }
  }

  ssm_secret_keys = {
    for v in flatten([
      for project_name, parameters in local.projects : [
        for key in try(parameters.ssm_secret_keys, []) : {
          key          = key,
          namespace    = parameters.namespace,
          project_name = project_name,
          team_id      = try(parameters.team_id, var.cluster_namespace)
        }
      ]
    ]) : "${v.namespace}.${v.project_name}.${v.key}" => v
  }
}

resource "aws_ssm_parameter" "project_ssm_secrets" {
  for_each = local.ssm_secret_keys

  name  = "/eks/${var.cluster_name}/${each.value.namespace}/${each.value.project_name}/${each.value.key}"
  type  = "SecureString"
  value = "placeholder"

  tags = merge(local.default_tags, {
    "eks.namespace" = each.value.namespace
    "eks.project"   = each.value.project_name
    "eks.team_id"   = each.value.team_id
  })

  lifecycle {
    ignore_changes  = [value]
    prevent_destroy = true
  }
}
```

Please note that in our locals block, several parameters are of particular importance for this tutorial:

1. `ssm_secret_keys`: This specifies the secrets that we need to securely manage. These are essentially the names of the secrets we are storing in the AWS ParameterStore.
2. `namespace`: This identifies the Kubernetes namespace where our resources will be located.
3. `team_id`: It's used to denote the team that owns the particular resource. It is optional, but it can later be used for [ABAC for AWS](https://docs.aws.amazon.com/IAM/latest/UserGuide/introduction_attribute-based-access-control.html).
4. `project_name`: This is the name of the project under which the resources will fall.
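Once Terraform has created these parameters with their placeholder values, a team member with sufficient permissions can set the real values out-of-band so they never enter the Terraform state. As a sketch (the cluster name and value below are hypothetical placeholders; the path mirrors the `name` attribute of the `aws_ssm_parameter` resource above), this could be done with the AWS CLI:

```bash
# Overwrite the Terraform-created placeholder with the real secret value.
# "demo-cluster" and the value are hypothetical placeholders.
aws ssm put-parameter \
  --name "/eks/demo-cluster/demo/demo-project/AMQ_PASS" \
  --type SecureString \
  --value "real-secret-value" \
  --overwrite
```

Because the resource's `lifecycle` block sets `ignore_changes = [value]`, a later `terraform apply` will not revert this out-of-band value back to the placeholder.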
These are used in our `aws_ssm_parameter` block to create secure parameters, which are initially set with placeholder values. The remaining parameters, such as `platform`, `fqdn`, `deployment_grace_period`, `svc_config`, and `hpa_config`, although not explicitly used in our SSM parameter creation script, can be utilized within the same Terraform project to create various resources with the AWS and Kubernetes providers. These could include load balancers, Horizontal Pod Autoscalers, and other vital components of our cloud infrastructure, and they contribute to the flexibility and comprehensiveness of the system we are setting up.

The `aws_ssm_parameter` block creates the SSM parameters. Each SSM parameter is given a placeholder value to initialize it. The actual values will be input later by a dedicated team member, such as a developer or a DevOps engineer, with sufficient permissions. This can be done via the AWS console or command-line interface (CLI). It's important to note that storing these values directly from the Terraform project is not advisable, because they would end up in the Terraform state. This is a situation we want to avoid for security reasons, as we don't want sensitive information like secrets stored in the Terraform state.

---

## InitContainer with GO binary <a name="part-1"></a>

Moving on to the next crucial step, we need to prepare the init container's Golang binary. This binary will fetch our secrets from the AWS Parameter Store and write them into a file. Kubernetes will then load these secrets from the file into environment variables for use by our applications. For this, we'll write a small program in Go.

The program's operation can be summarized as follows: it creates an AWS session, uses the SSM client to fetch the secrets we stored earlier, writes the secrets into an environment file (.env), and then stores this file in a specific path (/etc/ssm/).
The environment variables stored in this file will be available to other containers in the pod once loaded.

> main.go

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"path"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/ssm"
)

func main() {
	sess, err := session.NewSession(&aws.Config{
		Region: aws.String(os.Getenv("AWS_REGION")),
	})
	if err != nil {
		fmt.Println("Failed to create session,", err)
		return
	}

	ssmSvc := ssm.New(sess)

	eksCluster := os.Getenv("EKS_CLUSTER")
	namespace := os.Getenv("NAMESPACE")
	ciProjectName := os.Getenv("CI_PROJECT_NAME")
	ssmPath := fmt.Sprintf("/eks/%s/%s/%s", eksCluster, namespace, ciProjectName)

	paramDetails := &ssm.GetParametersByPathInput{
		Path:           aws.String(ssmPath),
		WithDecryption: aws.Bool(true),
	}

	resp, err := ssmSvc.GetParametersByPath(paramDetails)
	if err != nil {
		fmt.Println("Failed to get parameters,", err)
		return
	}

	file, err := os.Create("/etc/ssm/.env")
	if err != nil {
		fmt.Println("Failed to create file,", err)
		return
	}
	defer file.Close()

	writer := bufio.NewWriter(file)
	for _, param := range resp.Parameters {
		name := path.Base(*param.Name)
		value := *param.Value
		writer.WriteString(fmt.Sprintf("export %s=%s\n", name, value))
	}

	if err := writer.Flush(); err != nil {
		fmt.Println("Failed to write to file,", err)
		return
	}

	fmt.Println("env file created successfully")
}
```

> Dockerfile

```dockerfile
FROM golang:1.20-alpine as BUILD
WORKDIR /app
COPY . .
RUN go build -o main

FROM alpine:3.16 AS RUNTIME
WORKDIR /app
COPY --from=BUILD /app/main .
CMD ["./main"]
```

In this Go program:

- We start by creating a new AWS session and initializing an SSM client using the aws-sdk-go package.
- We retrieve the environment variables for the EKS cluster, the namespace, and the project name. These will be used to construct the path to our secrets stored in the SSM Parameter Store.
- With the SSM client, we fetch the secrets from the Parameter Store using the GetParametersByPath method.
This method retrieves all parameters within the provided path.
- We then create a .env file and write the fetched parameters into this file. Each line in the file contains one secret, with the syntax `export SECRET_NAME=secret_value`.

Great, with the init container's Go program ready, the next step is building it into a Docker image and pushing it to the Amazon Elastic Container Registry (ECR). Ideally, this should be part of your CI/CD process. Here's a condensed guide to help you achieve this manually:

```bash
# Build the image, authenticate Docker to ECR, and push the image
docker build -t ssm-init-container .
aws ecr get-login-password | docker login --username AWS --password-stdin your-account-id.dkr.ecr.region.amazonaws.com
aws ecr create-repository --repository-name ssm-init-container
docker tag ssm-init-container:latest your-account-id.dkr.ecr.region.amazonaws.com/ssm-init-container:latest
docker push your-account-id.dkr.ecr.region.amazonaws.com/ssm-init-container:latest
```

Please replace 'your-account-id' and 'region' with your AWS account ID and your region, respectively.

---

## OIDC Federated Access for EKS Pods <a name="part-2"></a>

Before we proceed to the actual deployments, we need to ensure the IAM roles are correctly associated with the Kubernetes service accounts. This is vital for the secure operation of our applications, allowing them to access necessary AWS resources. Our Terraform scripts will take care of this association using the concept of IAM Roles for Service Accounts (IRSA) in AWS EKS, facilitated by OpenID Connect (OIDC) federation. It's a recommended best practice that ensures fine-grained access control to AWS services directly from within the EKS environment. This eliminates the need to provide broad access permissions at the node level and greatly enhances our application's security.
For enhanced security, AWS EKS allows IAM roles to be assigned to Kubernetes service accounts through a feature called IAM Roles for Service Accounts (IRSA). This mechanism uses OpenID Connect (OIDC) federation to drive the mapping between Kubernetes service accounts and AWS IAM roles. This section shows how to implement it using Terraform.

The following guides from AWS EKS are recommended reading to gain a deeper understanding of the topic:

1. [Enabling IAM roles for service accounts on your cluster](https://docs.aws.amazon.com/eks/latest/userguide/enable-iam-roles-for-service-accounts.html)
2. [Creating an IAM role and policy for your service account](https://docs.aws.amazon.com/eks/latest/userguide/iam-roles-for-service-accounts.html)

Here's the Terraform code that sets up the IAM roles and policies and associates them with Kubernetes service accounts:

```hcl
# Assume Role Policy for service accounts
data "aws_iam_policy_document" "service_account_assume_role" {
  for_each = local.projects

  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]
    effect  = "Allow"

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.oidc_provider_sts.url, "https://", "")}:aud"
      values   = ["sts.amazonaws.com"]
    }

    condition {
      test     = "StringEquals"
      variable = "${replace(aws_iam_openid_connect_provider.oidc_provider_sts.url, "https://", "")}:sub"
      values   = ["system:serviceaccount:${each.value.namespace}:${each.key}"]
    }

    principals {
      identifiers = [aws_iam_openid_connect_provider.oidc_provider_sts.arn]
      type        = "Federated"
    }
  }
}

# IAM role assumed by the service accounts
resource "aws_iam_role" "service_account_role" {
  for_each = local.projects

  assume_role_policy = data.aws_iam_policy_document.service_account_assume_role[each.key].json
  name               = "project-${each.key}-service-account-role"
  tags               = local.default_tags
}

# IAM Policy for SSM secrets
data "aws_iam_policy_document" "ssm_secret_policy" {
  for_each = local.projects

  statement {
    effect  = "Allow"
    actions = ["ssm:GetParametersByPath", "ssm:GetParameters"]
    resources = [
      "arn:aws:ssm:${local.region}:${local.account_id}:parameter/eks/${var.cluster_name}/${each.value.namespace}/${each.key}*"
    ]
  }
}

resource "aws_iam_policy" "ssm_secret_policy" {
  for_each = local.projects

  name        = "project-${each.key}-ssm-access"
  description = "Policy to allow EKS pods/projects to access respective SSM parameters"
  policy      = data.aws_iam_policy_document.ssm_secret_policy[each.key].json
  tags        = local.default_tags
}

resource "aws_iam_role_policy_attachment" "service_account_role_ssm" {
  for_each = local.projects

  role       = aws_iam_role.service_account_role[each.key].name
  policy_arn = aws_iam_policy.ssm_secret_policy[each.key].arn
}
```

This code sets up an IAM role for each service account, granting it permissions to access specific SSM parameters. The service account is then mapped to the IAM role using OIDC. It's a great approach to securely handle permissions and access secrets in EKS environments.

Remember to replace the placeholders with actual values before running the script, and make sure you have the appropriate AWS access and permissions to execute these commands. With this setup, your applications running in Kubernetes can securely and efficiently access the resources they need to function, all while adhering to the principle of least privilege.

---

## Application Deployment <a name="part-3"></a>

With our Go-based init container built and securely stored in ECR, let's move on to deploying a demo application. For this, we'll need an `entrypoint.sh` shell script (which sources the generated `/etc/ssm/.env` file and then removes it before starting the application, as described below), a Dockerfile for our application, and a Kubernetes Deployment manifest.

> Dockerfile

```dockerfile
FROM golang:1.20-alpine as BUILD
WORKDIR /build
COPY . .
RUN go mod tidy
RUN go build -o main

FROM alpine:3.16 AS RUNTIME
WORKDIR /app
COPY --from=BUILD /build .
RUN chmod +x entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
```

To give you a head start, here is a simple Go application you can deploy for testing purposes: [Go Resume Demo](https://github.com/WarDove/go-resume)

Lastly, we have our Kubernetes Deployment manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ${NAMESPACE}
  name: ${PROJECT_NAME}
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ${PROJECT_NAME}
  replicas: 1
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ${PROJECT_NAME}
        app.kubernetes.io/environment: ${EKS_CLUSTER}
        app.kubernetes.io/owner: Devops
    spec:
      serviceAccountName: ${PROJECT_NAME}
      initContainers:
        - name: secret-gopher
          image: ${ECR_REGISTRY}/<INIT_CONTAINER_NAME>:latest
          env:
            - name: EKS_CLUSTER
              value: ${EKS_CLUSTER}
            - name: NAMESPACE
              value: ${NAMESPACE}
            - name: CI_PROJECT_NAME
              value: ${PROJECT_NAME}
          volumeMounts:
            - name: envfile
              mountPath: /etc/ssm/
              subPath: .env
      containers:
        - image: ${BUILD_IMAGE}:${BUILD_TAG}
          volumeMounts:
            - name: envfile
              mountPath: /etc/ssm/
              subPath: .env
          imagePullPolicy: Always
          name: ${PROJECT_NAME}
          ports:
            - containerPort: 80
      volumes:
        - name: envfile
          emptyDir: {}
```

This sets us up for a Kubernetes deployment, which uses the init container we built previously to populate our environment variables from AWS SSM, securely managing our application's secrets. Here's how it works:

1. When the Kubernetes Pod starts, the init container is the first to run. It fetches the secret data from AWS SSM and writes it to a file named .env located in the /etc/ssm/ directory.
2. This directory is a shared volume (emptyDir) that's accessible to all containers in the Pod. The emptyDir volume is created when a Pod is assigned to a Node, and it exists as long as that Pod is running on that Node. The data in emptyDir survives container crashes and restarts, though it is removed when the Pod itself is deleted from the Node.
3. Once the init container successfully writes the .env file, it exits, and the main application container starts.
4. The main application container reads the .env file from the shared volume, thus having access to the secret data. The entrypoint.sh script in the main container sources the .env file to set the environment variables. Then, it removes the .env file for added security, ensuring the secrets do not persist in the file system once they've been loaded into the application's environment.
5. The main application then continues to run with the environment variables securely set, all without the secrets having to be explicitly stored or exposed outside the application's memory.

The following block of code can be added to your application to debug and verify that you are retrieving the secret parameters correctly from the AWS Systems Manager (SSM) Parameter Store:

```go
// Debug ssm fetching
key1 := os.Getenv("AMQ_USER")
key2 := os.Getenv("AMQ_PASS")

filePath := "/app/ssm-vars"
content := fmt.Sprintf("KEY1=%s\nKEY2=%s\n", key1, key2)

if err := os.WriteFile(filePath, []byte(content), 0644); err != nil {
	log.Fatalf("Failed to write file: %v", err)
}
```

![confirm ssm param fetching](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/iuo68qoq1wlu1ptm07q2.png)

This piece of code reads the values of two environment variables, `AMQ_USER` and `AMQ_PASS`, which have been populated from the SSM Parameter Store. It then writes these values to a file for debugging purposes. It's important to understand that in a production environment, writing secrets to a file may expose them to additional security risks. This should only be used for debugging and removed from production code.

We achieved this secure management of secrets by following these steps:

1. Storing secrets in AWS SSM Parameter Store: We stored our secrets securely in the AWS Systems Manager Parameter Store, a managed service that provides a secure and scalable way to store and manage configuration data.
2. Fetching secrets with an init container: We used an init container in our Kubernetes pod, which runs before our main application starts. This init container runs a Go program that fetches the secrets from the AWS SSM Parameter Store and writes them to an environment file.
3. Populating environment variables: In the main container where our application runs, we used an entrypoint script that sources the environment file, thereby populating the environment variables with the secrets.
4. Removing the .env file: To ensure that our secrets are not written to disk in the main container, the entrypoint script removes the .env file after sourcing it.
5. Secrets in memory: As a result of these steps, the secrets are only present in the memory of the running application process and nowhere else. They are not written to disk in the main container, and they are not included in any container layers. This is a robust way to keep secrets secure.
6. Security best practices: We also followed security best practices, such as setting appropriate IAM roles for access to the AWS SSM Parameter Store, and used Kubernetes RBAC to restrict access to Kubernetes resources.

---

## Farewell 😊 <a name="farewell"></a>

![farewell](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/irgyi0j1ov5riexm4j3c.gif)

We've explored an effective method of Kubernetes secret management using Golang, AWS ParameterStore, OIDC, and Terraform. Thank you for reading this article. I hope it has provided you with valuable insight and can serve as a handy reference for your future projects. Keep exploring, keep learning, and continue refining your security practices!
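As a closing aside: the article references an `entrypoint.sh` script without showing its contents. Based solely on the behavior described above (source the generated env file, remove it, then start the application), a minimal hypothetical sketch could look like this; it is a reconstruction under stated assumptions, not code from the original repository:

```sh
#!/bin/sh
# Hypothetical sketch of entrypoint.sh, reconstructed from the behavior
# described in the article -- not taken from the original project.

# The init container wrote lines of the form "export NAME=value",
# so sourcing the file exports the secrets into this shell.
. /etc/ssm/.env

# Remove the file so the secrets do not persist on disk.
rm -f /etc/ssm/.env

# Replace the shell with the application binary; it inherits the
# exported environment variables.
exec ./main
```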
wardove