id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,748,598 | Unleashing Creativity: A Beginner's Guide to Adobe Illustrator | Embarking on a journey into the world of graphic design can be both exciting and a little... | 0 | 2024-02-01T14:22:10 | https://dev.to/khushi2025/unleashing-creativity-a-beginners-guide-to-adobe-illustrator-2fj2 | Embarking on a journey into the world of graphic design can be both exciting and a little overwhelming, especially when faced with powerful tools like Adobe Illustrator. Fear not, aspiring artists and designers! This beginner's guide is your compass through the vast landscapes of vectors and illustrations, helping you navigate Adobe Illustrator with confidence.
**1. Introduction to Adobe Illustrator: The Creative Canvas**
Adobe Illustrator is a vector-based design software that allows you to create stunning graphics, illustrations, and logos. Unlike raster images, vectors maintain their quality regardless of size, making Illustrator the go-to tool for scalable and precise designs.
**2. Getting Started: The Illustrator Workspace**
Familiarize yourself with the Illustrator workspace. Understand the toolbar, panels, and artboard—the canvas where your creative ideas will come to life. Take a moment to explore menus and palettes, setting the stage for your design journey.
**3. Mastering Basic Tools: A Palette of Possibilities**
Learn the essentials:
- **Selection Tools**: Navigate, group, and manipulate objects.
- **Shape Tools**: Create rectangles, ellipses, and polygons.
- **Pen Tool**: Master the art of drawing paths for custom shapes.
- **Type Tool**: Add text to your designs with various formatting options.
**4. Colors and Gradients: Infusing Life into Your Art**
Discover the color palette and swatches panel. Experiment with solid colors, gradients, and patterns. Learn how to apply and adjust fills and strokes to add depth and dimension to your designs.
**5. Layers: Organizing Your Artistic Universe**
Unlock the power of layers to organize and control elements within your artwork. Understand how layers facilitate easy editing, hiding, and reordering of design components.
**6. Transformations and Effects: Shaping Your Vision**
Explore transformations like scaling, rotating, and reflecting to manipulate objects effortlessly. Delve into Illustrator's vast array of effects, from drop shadows to blurs, to elevate your designs.
**7. Working with Text: Typography Magic**
Master the art of text manipulation. Learn to create and edit text, experiment with fonts, and use character and paragraph styles for consistent and aesthetically pleasing typography.
**8. Saving and Exporting: From Artboard to the World**
Understand the various file formats, save options, and export settings. Learn how to save your work for future edits and export it for print, web, or social media platforms.
**9. Practice, Practice, Practice: Hone Your Skills**
The best way to become proficient in Adobe Illustrator is through hands-on practice. Experiment with different tools, recreate existing designs, and gradually challenge yourself with more complex projects.
**10. Community and Resources: Learn and Grow Together**
Join the vibrant community of designers and illustrators. Explore online tutorials, forums, and Adobe's official resources to stay updated on new features and techniques. Embrace a mindset of continuous learning and improvement.
Embarking on your Adobe Illustrator journey is like stepping into a boundless realm of creativity. With dedication, practice, and the guidance of this beginner's guide, you'll soon find yourself confidently crafting captivating designs and illustrations. So, unleash your imagination, let the vectors flow, and enjoy the exhilarating ride through the artistic wonders of Adobe Illustrator!
| khushi2025 | |
1,748,860 | A multi-dimensional array in PostgreSQL | *Memos: An array has elements from [1] but not from [0] so [0] returns NULL. Basically, you should... | 0 | 2024-02-02T20:03:57 | https://dev.to/hyperkai/a-multi-dimensional-array-in-postgresql-eak | postgres, array, multidimensional, arrays | *Memos:
- An array has elements from `[1]` but not from `[0]`, so `[0]` returns `NULL`.
- Basically, you should use type conversion (a cast) when creating an array, except when you declare a non-empty array in a `DECLARE` clause in a function, procedure, or `DO` statement, because otherwise the inferred type may differ from your expectation, and in some cases you cannot create an array at all without type conversion. *[My answer](https://stackoverflow.com/questions/13809547/convert-integer-to-string-in-postgresql#77723276) explains type conversion in detail.
- [The doc](https://www.postgresql.org/docs/current/arrays.html) explains a multi-dimensional array in detail.
- [My answer](https://stackoverflow.com/questions/71406989/declare-an-array-in-postgresql#77723752) explains how to create and use a 1D (one-dimensional) array with `VARCHAR[]` in detail.
- [My answer](https://stackoverflow.com/questions/11831555/get-nth-element-of-an-array-that-returns-from-string-to-array-function/#77723802) explains how to create and use a 1D array with `INT[]` in detail.
- [My answer](https://stackoverflow.com/questions/23924815/create-an-empty-array-in-an-sql-query-using-postgresql-instead-of-an-array-with#77932610) explains how to create and use an empty array in detail.
- [My post](https://dev.to/hyperkai/foreach-and-for-statements-with-arrays-in-postgresql-2bnc) explains how to iterate over 1D and 2D arrays with [FOREACH](https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-FOREACH-ARRAY) and [FOR](https://www.postgresql.org/docs/current/plpgsql-control-structures.html#PLPGSQL-INTEGER-FOR) statements.
You can create and use a 2D (two-dimensional) array in the ways shown below:
```sql
DO $$
DECLARE
_2d_arr VARCHAR[] := ARRAY[
['a','b','c','d'],
['e','f','g','h'],
['i','J','k','l']
];
BEGIN
RAISE INFO '%', _2d_arr; -- {{a,b,c,d},{e,f,g,h},{i,J,k,l}}
RAISE INFO '%', _2d_arr[0][2]; -- NULL
RAISE INFO '%', _2d_arr[2]; -- NULL
RAISE INFO '%', _2d_arr[2][0]; -- NULL
RAISE INFO '%', _2d_arr[2][3]; -- g
RAISE INFO '%', _2d_arr[2:2]; -- {{e,f,g,h}}
RAISE INFO '%', _2d_arr[1:1][2:3]; -- {{b,c}}
RAISE INFO '%', _2d_arr[2:2][2:3]; -- {{f,g}}
RAISE INFO '%', _2d_arr[3:3][2:3]; -- {{J,k}}
RAISE INFO '%', _2d_arr[1:3][2:3]; -- {{b,c},{f,g},{J,k}}
RAISE INFO '%', _2d_arr[:][:]; -- {{a,b,c,d},{e,f,g,h},{i,J,k,l}}
RAISE INFO '%', _2d_arr[1][2:3]; -- {{b,c}} -- Tricky
RAISE INFO '%', _2d_arr[2][2:3]; -- {{b,c},{f,g}} -- Tricky
RAISE INFO '%', _2d_arr[3][2:3]; -- {{b,c},{f,g},{J,k}} -- Tricky
END
$$;
```
*Memos:
- The type of the array above is `VARCHAR[]`(`CHARACTER VARYING[]`).
- You can set `VARCHAR[][]`, `VARCHAR[][][]`, etc. to `_2d_arr`; the type of `_2d_arr` is automatically converted to `VARCHAR[]`(`CHARACTER VARYING[]`). However, you cannot set `VARCHAR` to `_2d_arr`, otherwise you get [the error](https://stackoverflow.com/questions/71397289/apply-subscripting-to-text-data-type-in-postgresql#77924037).
- The last 3 `RAISE INFO ...` statements are tricky.
- If the 1D arrays in `_2d_arr` have different numbers of elements, there is an error.
- Don't create a 2D array that mixes numbers and strings, otherwise you get [the error](https://stackoverflow.com/questions/72975653/why-do-i-get-an-invalid-input-syntax-for-type-integer-error#77919995).
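As a side note (a small sketch, not from the original examples), you can inspect the shape of the 2D array above with built-in functions such as `array_dims`, `array_length`, and `cardinality`:

```sql
DO $$
DECLARE
_2d_arr VARCHAR[] := ARRAY[
['a','b','c','d'],
['e','f','g','h'],
['i','J','k','l']
];
BEGIN
RAISE INFO '%', array_dims(_2d_arr);      -- [1:3][1:4]
RAISE INFO '%', array_length(_2d_arr, 1); -- 3 (number of rows)
RAISE INFO '%', array_length(_2d_arr, 2); -- 4 (elements per row)
RAISE INFO '%', cardinality(_2d_arr);     -- 12 (total elements)
END
$$;
```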
Or:
```sql
DO $$
DECLARE
_2d_arr VARCHAR[] := '{
{a,b,c,d},
{e,f,g,h},
{i,j,k,l}
}';
BEGIN
RAISE INFO '%', _2d_arr; -- {{a,b,c,d},{e,f,g,h},{i,j,k,l}}
RAISE INFO '%', _2d_arr[0][2]; -- NULL
RAISE INFO '%', _2d_arr[2]; -- NULL
RAISE INFO '%', _2d_arr[2][0]; -- NULL
RAISE INFO '%', _2d_arr[2][3]; -- g
RAISE INFO '%', _2d_arr[2:2]; -- {{e,f,g,h}}
RAISE INFO '%', _2d_arr[1:1][2:3]; -- {{b,c}}
RAISE INFO '%', _2d_arr[2:2][2:3]; -- {{f,g}}
RAISE INFO '%', _2d_arr[3:3][2:3]; -- {{j,k}}
RAISE INFO '%', _2d_arr[1:3][2:3]; -- {{b,c},{f,g},{j,k}}
RAISE INFO '%', _2d_arr[:][:]; -- {{a,b,c,d},{e,f,g,h},{i,j,k,l}}
RAISE INFO '%', _2d_arr[1][2:3]; -- {{b,c}} -- Tricky
RAISE INFO '%', _2d_arr[2][2:3]; -- {{b,c},{f,g}} -- Tricky
RAISE INFO '%', _2d_arr[3][2:3]; -- {{b,c},{f,g},{j,k}} -- Tricky
END
$$;
```
*Memos:
- The type of the array above is `VARCHAR[]`(`CHARACTER VARYING[]`).
- You can set `VARCHAR[][]`, `VARCHAR[][][]`, etc. to `_2d_arr`; the type of `_2d_arr` is automatically converted to `VARCHAR[]`(`CHARACTER VARYING[]`). However, you cannot set `VARCHAR` to `_2d_arr`, otherwise you get [the error](https://stackoverflow.com/questions/71397289/apply-subscripting-to-text-data-type-in-postgresql#77924037).
- The last 3 `RAISE INFO ...` statements are tricky.
- If the 1D arrays in `_2d_arr` have different numbers of elements, there is an error.
In addition, even if you declare the array as `VARCHAR(2)[2]`, the result is the same, and the type of the 2D array is still `VARCHAR[]`(`CHARACTER VARYING[]`), as shown below:
```sql
DO $$
DECLARE -- ↓ ↓ ↓ ↓ ↓ ↓
_2d_arr VARCHAR(2)[2] := ARRAY[
['a','b','c','d'],
['e','f','g','h'],
['i','J','k','l']
];
BEGIN
RAISE INFO '%', _2d_arr; -- {{a,b,c,d},{e,f,g,h},{i,J,k,l}}
RAISE INFO '%', pg_typeof(_2d_arr); -- character varying[]
END
$$;
```
And, even if you cast `'a'` with `::TEXT`, the type of `'a'` is `VARCHAR`(`CHARACTER VARYING`) rather than `TEXT`, as shown below, because the `VARCHAR[]` type set on `_2d_arr` takes priority. *You cannot apply `::TEXT[]` to each 1D array directly, otherwise there is an error; you can apply `::TEXT[]` to each 1D array if you put the keyword `ARRAY` just before it, but the type of each row is still `VARCHAR[]`(`CHARACTER VARYING[]`) rather than `TEXT[]`, because the `VARCHAR[]` set on `_2d_arr` takes priority as well:
```sql
DO $$
DECLARE
_2d_arr VARCHAR[] := ARRAY[
['a'::TEXT,'b','c','d'],
['e','f','g','h'],
['i','J','k','l']
];
BEGIN
RAISE INFO '%', _2d_arr[1][1]; -- a
RAISE INFO '%', pg_typeof(_2d_arr[1][1]); -- character varying
END
$$;
``` | hyperkai |
1,748,877 | Mastering The HTML Details And Summary Elements | by Japheth Basil In the field of web development, creating experiences that are both engaging and... | 0 | 2024-02-01T18:49:56 | https://blog.openreplay.com/mastering-html-details-and-summary-elements/ | by [Japheth Basil](https://blog.openreplay.com/authors/japheth-basil)
<blockquote><em>
In the field of web development, creating experiences that are both engaging and user-friendly is crucial. Among the tools that facilitate this pursuit of seamless interaction, the `<details>` and `<summary>` elements are excellent examples. With the help of these modest yet effective HTML tags, content management can be handled dynamically, allowing users to explore information at their own pace, as this article will show.
</em></blockquote>
<div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;">
<hr/>
<h3><em>Session Replay for Developers</em></h3>
<p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p>
<img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async">
<p><em>Happy debugging! <a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em></p>
<hr/>
</div>
In the constantly growing field of web development, the importance of HTML Details and Summary elements exceeds their seemingly modest stature. These elements matter significantly for several compelling reasons, each contributing to a more refined and user-centric approach to crafting web pages.

* Enhanced User Interaction:
HTML Details and Summary elements offer dynamic user interaction, making content collapsible and expandable. This feature empowers users to control information flow, enhancing their experience and allowing engagement based on preferences and needs.
* Streamlined Code Structure:
From a developer's standpoint, Details and Summary elements enhance code elegance. They provide a concise and semantically meaningful structure, resulting in cleaner and more maintainable code. This simplicity is particularly valuable in collaborative and large-scale projects.
* Improved Accessibility:
HTML Details and Summary elements are pivotal for web accessibility and organizing content in a logical hierarchy. This ensures a positive experience for users with disabilities, particularly those relying on screen readers or assistive technologies.
* Progressive Disclosure of Information:
Details and Summary elements are instrumental in achieving progressive disclosure—a fundamental concept for user engagement. They allow developers to strategically unveil information as users interact with the page, avoiding overwhelming them with excessive content upfront. This progressive approach guides users through a seamless and more digestible exploration of the provided information.
* Versatility in Content Presentation:
HTML Details and Summary elements are not confined to a singular use case. Their versatility allows developers to present various types of content, including text, images, or multimedia, in a structured and visually appealing manner. This flexibility is particularly advantageous when dealing with diverse content formats within a single webpage.
## Understanding HTML Details Element
The HTML `<details>` element is a powerful tool that allows web developers to create collapsible sections of content on a webpage. When combined with the `<summary>` element, this element provides an interactive and user-friendly way to present information.
The basic syntax of a `<details>` element involves encapsulating content within the opening and closing `<details>` tags:
```html
<details>
<!-- Content goes here -->
</details>
```
In this structure:
`<details>`: This tag defines the container for the collapsible section.
Content: This is the actual content that will be hidden or revealed based on user interaction.
Adding content to the syntax:
```html
<details>
<p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Molestiae impedit voluptatum esse at eius repellat, enim nihil ipsum cum ducimus eligendi qui sunt distinctio possimus earum laudantium facere exercitationem sit.</p>
</details>
```
The output:

## Exploring the Purpose of the Details Element
In understanding the Purpose of the Details Element, we'll uncover how it helps organize and present content on the web. This exploration reveals its role in creating a dynamic and engaging user experience.
* Content Organization:
The primary purpose of `<details>` is to organize content effectively. It allows developers to structure information hierarchically, preventing information overload on a webpage. By presenting content in collapsible sections, users can focus on what is relevant to them, promoting a more user-friendly experience.
* Progressive Disclosure:
`<details>` facilitates progressive disclosure, a user experience strategy where information is revealed gradually. Instead of overwhelming users with all information at once, details can be disclosed progressively, keeping the user engaged and preventing cognitive overload.
* Interactive User Experience:
`<details>` enhances interactivity by allowing users to control which sections they want to expand or collapse. This interactive feature empowers users, providing a more engaging and personalized browsing experience.
## Customization with the Open Attribute
The open attribute in the HTML `<details>` element provides a valuable customization option, allowing developers to control whether the content is initially visible or hidden.
The `open` attribute, when added to the `<details>` tag, dictates whether the content within the container is initially visible or hidden.
Here's a simple example:
```html
<details open>
<p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Molestiae impedit voluptatum esse at eius repellat, enim nihil ipsum cum ducimus eligendi qui sunt distinctio possimus earum laudantium facere exercitationem sit.</p>
</details>
```
The output:

In this example, the open attribute ensures the content is visible when the page loads. Without the open attribute, the content would be initially hidden.
The open attribute is handy when you want specific sections to be expanded by default, providing users with immediate access to certain information.
Here’s an example:
```html
<details open>
<p>This is the initially visible section with the open attribute.</p>
</details>
<details>
<p>This is the content of section 2.</p>
</details>
<details>
<p>This is the content of section 3.</p>
</details>
<details>
<p>This is the content of section 4.</p>
</details>
```
The output:

## Unveiling the Role of the Summary Element
The `<summary>` element in HTML plays a pivotal role in conjunction with the `<details>` element, providing a clickable header or label for a collapsible section of content. By default, if you use the `<details>` element without a `<summary>` element, the browser supplies a generic clickable header, typically the text "Details."
### Basic Structure of `<summary>`
The `<summary>` element is typically nested within the `<details>` element, creating a structured hierarchy:
```html
<details>
<summary>Clickable Header</summary>
<p>Lorem ipsum dolor, sit amet consectetur adipisicing elit. Molestiae impedit voluptatum esse at eius repellat, enim nihil ipsum cum ducimus eligendi qui sunt distinctio possimus earum laudantium facere exercitationem sit.</p>
</details>
```
The output:

### Essential Functionality
The essential functionalities of the `<summary>` tag include:
* Clickable Header:
The primary functionality of `<summary>` is to serve as a clickable header. When users interact with the `<summary>` element, it toggles the visibility of the content within the associated `<details>` container. This interactivity is key to implementing collapsible sections on a webpage.
* Accessible Labeling:
`<summary>` provides an accessible and descriptive label for the collapsible section. It should be chosen thoughtfully to convey the nature of the content it represents. Screen readers use this label to announce the purpose of the collapsible section to users with visual impairments.
* Versatility in Content:
While commonly used for text, `<summary>` is versatile and can include various content types such as images, icons, or other HTML elements. This flexibility allows developers to create visually appealing and informative headers.
## Practical Applications of Details and Summary Elements
HTML Details and Summary elements offer a versatile set of tools for web developers, providing a range of practical applications that enhance user experience, content organization, and interactivity.
Let's explore some key practical applications of Details and Summary elements:
* FAQ Sections:
Details and Summary elements are ideal for creating web FAQ (Frequently Asked Questions) sections. Each question can be represented by a `<summary>` element, and the corresponding answer can be revealed when users click the summary.
Here’s an example with some styling:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<style>
body { font-family: 'Arial', sans-serif; background-color: #f7f7f7; margin: 0; padding: 20px; }
details { background-color: #fff; border: 1px solid #ddd; margin-bottom: 15px; border-radius: 5px; overflow: hidden; }
summary { padding: 15px; cursor: pointer; outline: none; background-color: #28a745; color: #fff; border: none; border-radius: 5px; transition: background-color 0.3s; }
summary:hover { background-color: #0f842a; }
p { padding: 15px; margin: 0; }
</style>
</head>
<body>
<details>
<summary>How do I reset my password?</summary>
<p>Follow these steps to reset your password...</p>
</details>
<details>
<summary>What payment methods do you accept?</summary>
<p>We accept various payment methods, including...</p>
</details>
<details>
<summary>Is there a refund policy?</summary>
<p>Yes, we have a refund policy. If you are not satisfied with your purchase, please review our refund policy...</p>
</details>
</body>
</html>
```
The output:

In this example:
The background color, border, and padding style the `<details>` element, creating a clean card-like appearance.
The `<summary>` element has a distinct background color, and its appearance changes on hover to provide a visual cue that it is clickable.
The `<p>` element, representing the answer, has padding to enhance readability.
* Content Clarity and Readability:
Use Details and Summary elements to structure long-form content, allowing users to expand sections they find interesting while keeping the overall page layout clean and concise.
Here’s an example with some styling:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<style>
body{font-family:'Arial',sans-serif;background-color:#f7f7f7;margin:0;padding:20px;}
details{background-color:#fff;border:1px solid #ddd;margin-bottom:15px;border-radius:5px;overflow:hidden;}
summary{padding:15px;cursor:pointer;outline:none;background-color:#bc3d24;color:#fff;border:none;border-radius:5px;transition:background-color .3s;}
summary:hover{background-color:#a92b11;}
p{padding:15px;margin:0;}
</style>
</head>
<body>
<details><summary>Introduction</summary><p>This is the introduction to our topic. It provides a brief overview of what you can expect to learn and sets the stage for the key concepts...</p></details>
<details><summary>Key Concepts</summary><p>Here are the key concepts you need to understand. Each concept is explained in detail, allowing you to grasp the fundamental ideas that form the foundation of this topic...</p></details>
<details><summary>Implementation</summary><p>Explore the practical implementation of the concepts learned. Understand how to apply the key ideas in real-world scenarios...</p></details>
</body>
</html>
```
The output:

* Interactive Tutorials:
Details and Summary elements can be employed to create interactive tutorials, where each step is hidden by default, and users can reveal the steps one at a time.
Here’s an example with styling:
```html
<!DOCTYPE html>
<html lang="en">
<head>
<style>
body{font-family:'Arial',sans-serif;background-color:#f7f7f7;margin:0;padding:20px;}
details{background-color:#fff;border:1px solid #ddd;margin-bottom:15px;border-radius:5px;overflow:hidden;}
summary{padding:15px;cursor:pointer;outline:none;background-color:#416696;color:#fff;border:none;border-radius:5px;transition:background-color .3s;}
summary:hover{background-color:#2d5181;}
p{padding:15px;margin:0;}
</style>
</head>
<body>
<details open>
<summary>Step 1: Getting Started</summary>
<p>Follow these instructions to get started. This step covers the initial setup and introduces you to the basics...</p>
</details>
<details>
<summary>Step 2: Customization</summary>
<p>Learn how to customize your experience. This step provides insights into customization options and advanced features...</p>
</details>
<details>
<summary>Step 3: Troubleshooting</summary>
<p>Discover troubleshooting tips to address common issues. This step equips you with the knowledge to overcome challenges...</p>
</details>
</body>
</html>
```
The output:

* Product Descriptions:
In e-commerce websites, use Details and Summary elements to present detailed product descriptions. Users can expand sections for specifications, features, and customer reviews.
Here’s an example with some styling:
``` html
<!DOCTYPE html>
<html lang="en">
<head>
<style>
body{font-family:'Arial',sans-serif;background-color:#f7f7f7;margin:0;padding:20px;}
details{background-color:#fff;border:1px solid #ddd;margin-bottom:15px;border-radius:5px;overflow:hidden;}
summary{padding:15px;cursor:pointer;outline:none;background-color:#2727d9;color:#fff;border:none;border-radius:5px;transition:background-color .3s;}
summary:hover{background-color:#08088e;}
p{padding:15px;margin:0;}
</style>
</head>
<body>
<details>
<summary>Product Specifications</summary>
<p>Explore the technical specifications of our product. This section provides details on size, weight, and other key specifications...</p>
</details>
<details>
<summary>Product Features</summary>
<p>Discover the features that make our product unique. From advanced functionalities to innovative design, this section covers it all...</p>
</details>
<details>
<summary>Customer Reviews</summary>
<p>Read what our customers have to say about this product. Find authentic reviews to help you make an informed decision before purchasing...</p>
</details>
</body>
</html>
```
The output:

## Conclusion
In summary, HTML Details and Summary elements offer web developers powerful tools to enhance user experience. These elements, from optimizing content clarity to creating interactive tutorials, are versatile and valuable. Embracing them in web development not only streamlines processes but also contributes to creating more accessible, organized, and engaging websites. As we navigate the dynamic landscape of the digital world, HTML Details and Summary elements remain essential in shaping the future of web development.
| asayerio_techblog | |
1,748,937 | Fundamentals of Site Reliability Engineering | Understanding Reliability Exploring Site Reliability Engineering (SRE) Why SRE Matters in Modern... | 0 | 2024-02-01T21:03:34 | https://dev.to/sre_panchanan/fundamentals-of-site-reliability-engineering-373l | performance, devops, sitereliabilityengineering, observability |
- [Understanding Reliability](#understanding-reliability)
- [Exploring Site Reliability Engineering (SRE)](#exploring-site-reliability-engineering-sre)
- [Why SRE Matters in Modern IT](#why-sre-matters-in-modern-it)
- [Key Pillars of SRE](#key-pillars-of-sre)
- [Navigating the Error Budget](#navigating-the-error-budget)
- [Implementing the Error Budget](#implementing-the-error-budget)
- [Spending Leftover Error Budget](#spending-leftover-error-budget)
- [Understanding Service Level Indicators (SLIs)](#understanding-service-level-indicators-slis)
- [Formula](#formula)
- [Choosing Effective SLIs](#choosing-effective-slis)
- [User Requests Metrics](#user-requests-metrics)
- [Data Processing Metrics](#data-processing-metrics)
- [Storage Metrics](#storage-metrics)
- [Measurement strategies for SLIs](#measurement-strategies-for-slis)
- [1. User-Centric Measurement:](#1-user-centric-measurement)
- [2. Instrumentation in Application Code:](#2-instrumentation-in-application-code)
- [3. Infrastructure and Server Monitoring:](#3-infrastructure-and-server-monitoring)
- [4. Database Query and Transaction Monitoring:](#4-database-query-and-transaction-monitoring)
- [5. API Endpoint Monitoring:](#5-api-endpoint-monitoring)
- [6. Microservices Interaction Analysis:](#6-microservices-interaction-analysis)
- [7. Load Balancer Effectiveness Assessment:](#7-load-balancer-effectiveness-assessment)
- [Setting Service Level Objectives (SLOs)](#setting-service-level-objectives-slos)
- [Effective SLO Targets](#effective-slo-targets)
- [Service Level Agreements (SLAs)](#service-level-agreements-slas)
- [Best Practices for SLAs](#best-practices-for-slas)
- [Final Thoughts](#final-thoughts)
---
## Understanding Reliability
Reliability in IT means that a system or service consistently performs its functions without errors or interruptions. It ensures users can rely on technology for a smooth experience.
## Exploring Site Reliability Engineering (SRE)
Site Reliability Engineering (SRE) blends software engineering with IT operations, focusing on scalable and highly reliable software systems. Its importance lies in balancing reliability, system efficiency, and rapid innovation in modern IT.
### Why SRE Matters in Modern IT
1. **Reliability as a Competitive Edge:** SRE builds dependable systems, fostering customer trust and satisfaction for a competitive advantage.
2. **Resilience Amid Complexity:** In intricate IT environments, SRE provides a structured approach to manage complex systems while maintaining reliability.
3. **Balancing Innovation with Stability:** SRE empowers teams to innovate without compromising system stability, achieving a balance between agility and dependability.
4. **Cost-Efficiency through Automation:** Automation reduces downtime, saving time and resources.
5. **Enhanced Incident Response:** SRE's proactive incident management ensures quick responses, minimizing failure impact and reducing recovery time.
---
## Key Pillars of SRE
SRE is built on Service Level Indicators (SLIs), Objectives (SLOs), and Agreements (SLAs), along with the crucial concept of Error Budget.

---
## Navigating the Error Budget
The Error Budget represents the permissible margin of errors or disruptions in a system's reliability over time. SREs use it to balance innovation and reliability.
### Implementing the Error Budget
- Set clear reliability targets and acceptable downtime levels.
- Implement robust monitoring for real-time performance tracking.
- Encourage cross-team collaboration.
- Monitor service performance and adjust to stay within the Error Budget.
- Regularly review and learn from incidents to improve processes.
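The arithmetic behind an error budget can be sketched in a few lines; the 30-day window and the SLO values below are illustrative assumptions, not prescriptions:

```python
# Rough sketch of error-budget arithmetic. The 30-day window and
# the SLO values used below are assumptions for the example.

def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Allowed downtime in minutes implied by an availability SLO."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

# A 99.9% SLO over 30 days leaves about 43.2 minutes of allowed downtime.
print(round(error_budget_minutes(0.999), 1))
```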
### Spending Leftover Error Budget
- Release new features.
- Implement expected changes.
- Address hardware or network failures.
- Plan scheduled downtime.
- Conduct controlled, risk-managed experiments.
---
## Understanding Service Level Indicators (SLIs)
SLIs are metrics quantifying a service's performance. They provide measurable insights into how well a service is operating.
### Formula:

This formula provides a ratio indicating the proportion of successful events, serving as a key metric to assess the reliability and performance of a service.
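A minimal sketch of this calculation (the event counts are made up for illustration):

```python
# Minimal sketch of the SLI formula above; event counts are illustrative.

def sli_percent(good_events: int, valid_events: int) -> float:
    """SLI as the percentage of valid events that were good."""
    if valid_events == 0:
        raise ValueError("SLI is undefined with no valid events")
    return good_events / valid_events * 100

print(sli_percent(999, 1000))
```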
---
### Choosing Effective SLIs
- Identify critical aspects impacting user experience.
- Choose measurable, relevant metrics aligned with user expectations.
- Prioritize simplicity and clarity in defining SLIs.
- Collaborate with stakeholders to understand user priorities.
- Regularly review and update SLIs to adapt to changing needs.
> "**100%** is the **wrong** reliability target for basically everything."
>
> **- Betsy Beyer**, *Site Reliability Engineering: How Google Runs Production Systems*
---
Let's understand good SLIs through some examples.
#### User Requests Metrics
1. **Latency SLI:**
- **Metric:** Response time for user requests
2. **Availability SLI:**
- **Metric:** System uptime and availability
3. **Error Rate SLI:**
- **Metric:** Percentage of failed requests
4. **Quality SLI:**
- **Metric:** User satisfaction ratings
---
#### Data Processing Metrics
1. **Coverage SLI:**
- **Metric:** Percentage of data processed compared to total incoming data.
2. **Correctness SLI:**
- **Metric:** Percentage of accurate results compared to expected outcomes.
3. **Freshness SLI:**
- **Metric:** Time taken for the system to process and make data available.
4. **Throughput SLI:**
- **Metric:** The number of data records processed per unit of time.
---
#### Storage Metrics
1. **Latency SLI:**
- **Metric:** Time taken for storage operations to be completed.
2. **Throughput SLI:**
- **Metric:** Rate of data transfer to and from the storage system.
3. **Availability SLI:**
- **Metric:** Percentage of time the storage system is available for read and write operations.
4. **Durability SLI:**
- **Metric:** Assurance that data written to the storage system is not lost.
**NOTE:** Clearly define what is considered a **success** and **failure** for your **SLIs**.
---
## Measurement strategies for SLIs
Clear strategies for effective SLI measurement:
### 1. User-Centric Measurement:
- **Measure:** User experience metrics.
- **Tools:** Real user monitoring (RUM).
### 2. Instrumentation in Application Code:
- **Measure:** Code-level SLIs.
- **Tools:** Application performance monitoring (APM).
### 3. Infrastructure and Server Monitoring:
- **Measure:** Resource utilization and response times.
- **Tools:** Server monitoring solutions.
### 4. Database Query and Transaction Monitoring:
- **Measure:** Database operation SLIs.
- **Tools:** Database monitoring tools.
### 5. API Endpoint Monitoring:
- **Measure:** Endpoint response times and efficiency.
- **Tools:** API monitoring tools.
### 6. Microservices Interaction Analysis:
- **Measure:** Microservices interaction points.
- **Tools:** Distributed tracing tools.
### 7. Load Balancer Effectiveness Assessment:
- **Measure:** Load balancer efficiency.
- **Tools:** Load balancer logs.
These strategies offer concise approaches to gathering valuable insights into SLIs at different system levels.
---
## Setting Service Level Objectives (SLOs)
SLOs are specific, measurable targets set for SLIs to quantify the desired performance and reliability of a system, acting as a bridge between business expectations and technical metrics.
### Effective SLO Targets
- Understand user expectations and critical service aspects.
- Review historical performance data for patterns and improvements.
- Align SLO targets with broader business goals.
- Consider dependencies on other services or components.
- Involve stakeholders, including product owners and end-users.
- Prioritize SLIs with a significant impact on user satisfaction.
- Set realistic yet challenging SLO targets.
- Embrace an iterative approach for refinement over time.
---
Here are real-world examples of SLOs based on various SLIs:
1. **E-commerce Platform - Latency:**
- **SLI:** Time taken to load product pages.
- **SLO:** Ensure that 90% of page loads occur within 3 seconds.
- **Alert:** Trigger an alert if the page load time exceeds 4 seconds for more than 5 minutes. Notify the on-call engineer via Slack.
2. **Cloud Storage Service - Availability:**
- **SLI:** Percentage of time the storage service is accessible.
- **SLO:** Maintain an availability of 99.99% over a monthly period.
- **Alert:** Send an alert and create a high-priority JIRA ticket if the availability falls below 99.95% in a given month. Notify the operations team.
3. **Video Streaming Service - Buffering Rate:**
- **SLI:** Percentage of video playback without buffering interruptions.
- **SLO:** Limit buffering interruptions to less than 1% of total viewing time.
- **Alert:** Trigger an alert, escalate to video streaming engineers, and create a critical incident in PagerDuty if the buffering rate exceeds 1.5% during peak hours.
4. **Financial Transactions - Error Rate:**
- **SLI:** Percentage of error-free financial transactions.
- **SLO:** Keep the error rate below 0.1% for all monetary transactions.
- **Alert:** Issue an alert, notify the security team, and initiate a forensic analysis if the error rate surpasses 0.2% on a given day.
5. **Healthcare Application - Response Time:**
- **SLI:** Time taken to process and display patient records.
- **SLO:** Ensure that 95% of requests for patient records are served within 2 seconds.
- **Alert:** Send an alert, notify the support team, and create a ticket for the development team if the response time for patient records exceeds 2.5 seconds for more than 2 hours.
6. **Social Media Platform - Throughput:**
- **SLI:** Number of user posts processed per second.
- **SLO:** Maintain a throughput of at least 1000 posts per second during peak usage.
- **Alert:** Trigger an alert, notify the performance team, and create a post-incident report if the throughput falls below 900 posts per second for more than 10 minutes.
7. **Online Learning Platform - Uptime:**
- **SLI:** Percentage of time the platform is operational.
- **SLO:** Achieve an uptime of 99.9% throughout the academic year.
- **Alert:** Issue an alert, notify the operations team, and engage incident response if the platform's uptime drops below 99.5% within a 24-hour period.
8. **Telecommunications Network - Call Drop Rate:**
- **SLI:** Percentage of completed phone calls without dropping.
- **SLO:** Keep the call drop rate below 1% during peak hours.
- **Alert:** Send an alert, escalate to network operations, and initiate a bridge call with the engineering team if the call drop rate exceeds 1.2% during peak hours for more than 15 minutes.
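For instance, the latency SLO from example 1 (90% of page loads within 3 seconds) can be evaluated with a small sketch; `meets_latency_slo` is a hypothetical helper:

```python
def meets_latency_slo(load_times_s, threshold_s=3.0, target_ratio=0.90):
    """True if at least target_ratio of page loads finished within threshold_s."""
    within = sum(1 for t in load_times_s if t <= threshold_s)
    return within / len(load_times_s) >= target_ratio

# 9 of these 10 loads finish within 3s: exactly 90%, so the SLO is met
print(meets_latency_slo([1.0, 2.0, 2.5, 2.8, 3.0, 1.2, 0.9, 2.2, 2.9, 3.5]))
```

In production, the load times would come from a monitoring pipeline rather than a hard-coded list, and the result would feed the alerting rules described above.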
---
## Service Level Agreements (SLAs)
SLAs are formal agreements between service providers and customers outlining the expected level of service, including performance metrics, responsibilities, and guarantees.
### Best Practices for SLAs
- Set SLOs slightly above the contracted SLA to provide a buffer, avoiding user dissatisfaction.
- Ensure SLIs do not fall below the SLA, maintaining a commitment to agreed-upon service levels.
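To see why that buffer matters, you can compare the downtime each target allows (hypothetical helper; assumes a 30-day month):

```python
def allowed_downtime_minutes(availability_pct: float, days: int = 30) -> float:
    """Minutes of downtime permitted by an availability target over `days` days."""
    total_minutes = days * 24 * 60
    return total_minutes * (1 - availability_pct / 100)

print(allowed_downtime_minutes(99.9))   # contracted SLA: ~43.2 minutes/month
print(allowed_downtime_minutes(99.95))  # stricter internal SLO: ~21.6 minutes/month
```

If the internal SLO is breached, there is still headroom before the contractual SLA is violated.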
---
## Final Thoughts
In concluding our exploration of Site Reliability Engineering, remember that SRE is more than a set of practices; it's a mindset fostering reliability, collaboration, and continuous improvement. May your systems be resilient, incidents few, and services remain reliable. Happy engineering! | sre_panchanan |
1,749,003 | Go Bun ORM with PostgreSQL Quickly Example | Install dependencies and environment variable Replace the values from database connection... | 0 | 2024-02-12T00:05:54 | https://dev.to/luigiescalante/go-bun-orm-with-postgresql-quickly-example-394o | go, orm, postgres | ## Install dependencies and environment variable
Replace the database connection values with your own.
```
export DB_USER=admin
export DB_PASSWORD=admin123
export DB_NAME=code_bucket
export DB_HOST=localhost
export DB_PORT=5432
go get github.com/uptrace/bun@latest
go get github.com/uptrace/bun/driver/pgdriver
go get github.com/uptrace/bun/dialect/pgdialect
```
## Create table
Bun has a migration system, but it deserves its own post. For this quick example, create the table manually:
```
create table customers(
id serial not null,
first_name varchar(80) not null ,
last_name varchar(80) not null ,
email varchar(50) not null unique,
age int not null default 1,
created_at timestamp not null DEFAULT now(),
updated_at timestamp null,
deleted_at timestamp null
);
```
## Manager DB
Create a file named `manage.go`. It will contain a method that returns the database connection so it can be used from other modules and services.
```
import (
"database/sql"
"fmt"
"github.com/uptrace/bun"
"github.com/uptrace/bun/dialect/pgdialect"
"github.com/uptrace/bun/driver/pgdriver"
"os"
"sync"
)
var (
dbConnOnce sync.Once
conn *bun.DB
)
func Db() *bun.DB {
dbConnOnce.Do(func() {
dsn := postgresqlDsn()
hsqldb := sql.OpenDB(pgdriver.NewConnector(pgdriver.WithDSN(dsn)))
conn = bun.NewDB(hsqldb, pgdialect.New())
})
return conn
}
func postgresqlDsn() string {
dsn := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
os.Getenv("DB_USER"),
os.Getenv("DB_PASSWORD"),
os.Getenv("DB_HOST"),
os.Getenv("DB_PORT"),
os.Getenv("DB_NAME"),
)
return dsn
}
```
## Create Struct entity
On the `Customers` entity, add struct tags to map the fields to Bun columns.
```
type Customers struct {
bun.BaseModel `bun:"table:customers,alias:cs"`
ID uint `bun:"id,pk,autoincrement"`
FirstName string `bun:"first_name"`
LastName string `bun:"last_name"`
Email string `bun:"email"`
Age uint `bun:"age"`
CreatedAt time.Time `bun:"-"`
UpdatedAt time.Time `bun:"-"`
DeletedAt time.Time `bun:",soft_delete,nullzero"`
}
```
* The `bun:"-"` tag omits the field from queries.
You can review all tag options in the official Bun documentation:
[Bun tags](https://bun.uptrace.dev/guide/models.html#struct-tags)
## Create a Struct to Manage the Entity (Customer) Repository
Create a struct that holds the db connection and all the methods that interact with the database entity (CRUD operations and queries).
With this struct, any time we need to access the entity (customer) data, we can instantiate it and use it as a repository pattern.
```
type CustomersRepo struct {
Repo *bun.DB
}
func NewCustomerRepo() (*CustomersRepo, error) {
db := Db()
return &CustomersRepo{
Repo: db,
}, nil
}
```
## CRUD methods
Below are example methods to store, update, and fetch information from the entity.
These methods are defined on the `CustomersRepo` struct.
They receive a customer entity with the information and, depending on the operation, return the result.
### Save a new customer
```
func (cs CustomersRepo) Save(customer *models.Customers) (*models.Customers, error) {
err := cs.Repo.NewInsert().Model(customer).Scan(context.TODO(), customer)
if err != nil {
return nil, err
}
return customer, nil
}
```
### Update a customer's data (the ID field is required)
```
func (cs CustomersRepo) Update(customer *models.Customers) (*models.Customers, error) {
_, err := cs.Repo.NewUpdate().Model(customer).Where("id = ? ", customer.ID).Exec(context.TODO())
return customer, err
}
```
### Get a customer by a single field
```
func (cs CustomersRepo) GetByEmail(email string) (*models.Customers, error) {
var customer models.Customers
err := cs.Repo.NewSelect().Model(&customer).Where("email = ?", email).Scan(context.TODO(), &customer)
return &customer, err
}
```
### Get a list of customers
```
func (cs CustomersRepo) GetCustomers() ([]*models.Customers, error) {
customers := make([]*models.Customers, 0)
err := cs.Repo.NewSelect().Model(&models.Customers{}).Scan(context.TODO(), &customers)
return customers, err
}
```
### Delete a customer (soft delete)
```
func (cs CustomersRepo) Delete(customer *models.Customers) error {
_, err := cs.Repo.NewDelete().Model(customer).Where("id = ?", customer.ID).Exec(context.TODO())
return err
}
```
## Review the code example
[BUN CRUD Example](https://github.com/luigiescalante/go-code-bucket/tree/main/bun-orm)
| luigiescalante |
1,749,017 | Run Docker Without sudo in WSL2 | My problem is, when run docker has a problem like this permission denied while trying to connect... | 0 | 2024-02-01T23:13:35 | https://dev.to/bukanspot/run-docker-without-sudo-in-wsl2-4kfl | docker, devops | My problem is, when run docker has a problem like this
```
permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/containers/json": dial unix /var/run/docker.sock: connect: permission denied
```
I solved it with this:
```
sudo addgroup --system docker
sudo adduser $USER docker
newgrp docker
sudo chown root:docker /var/run/docker.sock
sudo chmod g+w /var/run/docker.sock
```
---
### Reference
- [stackoverflow.com](https://stackoverflow.com/questions/64710480/docker-client-under-wsl2-doesnt-work-without-sudo) | bukanspot |
1,749,090 | 5 Google My Business Management Tools to Boost Your Visibility | In the ever-evolving landscape of online marketing, local search visibility has become paramount for... | 0 | 2024-02-02T01:59:53 | https://dev.to/socialmediatime/5-google-my-business-management-tools-to-boost-your-visibility-38c3 | digital, marketing | In the ever-evolving landscape of online marketing, local search visibility has become paramount for businesses. Harnessing the power of Google tools and Google My Business (GMB) is a surefire way to bolster your presence on Google’s search engine results pages (SERPs). However, to maximise the impact of your Google Business Online Profile, you need the right management tools at your disposal.
Understanding the Significance of Google Business Profile
Google My Business serves as the cornerstone for local SEO strategies. It's a potent tool that allows businesses to manage their online presence, display crucial and accurate information, engage with customers, and ultimately, stand out in local business searches and Google's search results.
The Role of Management Tools in GMB Search Engine Optimisation
Effective business account management tools streamline the process of maintaining and optimising your Google Business Profile account. From managing reviews to updating business information like your business address and opening hours, these business tools simplify tasks, enhance efficiency, and drive better visibility, helping you appear on Google Maps.
Let’s Explore the Top Five Google Business Profiles Management Tools:
1. Google My Business Dashboard
The GMB Dashboard is the command centre for your business profile. It offers a centralised platform to manage essential information like business operating hours, services, and updates. It's your go-to hub for setting business hours, responding to online reviews, and uploading images and business posts, all of which significantly impact your local SEO efforts.
2. Moz Local
Moz Local is a comprehensive tool for managing business listings across various platforms, ensuring consistency and accuracy in information across multiple locations. Its functionality aids in local search results ranking by optimising listings and identifying opportunities to improve the visibility of business locations.
3. BrightLocal
BrightLocal is a robust suite of tools catering to local SEO needs. It offers functionalities like rank tracking, citation building, and managing both positive and negative reviews on search engines, enabling businesses to monitor and enhance their local Google search visibility effectively.
4. SEMrush
SEMrush provides a wide array of SEO tools, including local search engine optimisation features and keyword suggestions, and allows you to schedule posts and competitor analysis. Its comprehensive insights help in devising a local SEO tools strategy, optimising content, and improving online marketing campaigns.
5. Hootsuite
While primarily known for social media management, Hootsuite can play a crucial role in GMB management. It allows businesses to schedule GMB posts, ensuring consistent engagement and visibility across multiple platforms.
Optimise Your Google My Business Profile Today!
With the ever-increasing competition in the digital sphere, leveraging Google My Business effectively is crucial for businesses seeking local visibility. These top five management tools empower businesses to optimise their Google Business Profiles, improve local SEO rankings, and ultimately attract more customers.
By harnessing the functionalities of these tools, businesses can not only manage their GMB profiles effectively but also pave the way for sustained growth in the highly competitive online landscape.
Optimising your Google Business Profile isn’t a one-time task; it’s an ongoing effort. Regular updates, prompt responses to business reviews, and consistent use of these management tools can significantly improve your listing's local search visibility, attract potential customers, and boost your online presence.
**Website URL:**
https://www.socialmediatime.co.uk/
| socialmediatime |
1,749,116 | Guide to Playing Baccarat Online: Winning Techniques at 188bet | For newcomers to Baccarat who have not yet mastered the rules, betting without a... | 0 | 2024-02-02T02:42:16 | https://dev.to/88betg/huong-dan-choi-baccarat-online-ky-thuat-chien-thang-tai-188bet-23gj |

For newcomers to Baccarat who have not yet mastered the rules, betting without a concrete strategy often leads to losses. Therefore, when playing this game, players need to apply betting strategies to increase their chances of winning.
Each Baccarat player can develop a strategy of their own. Over time, accumulated experience helps them stick to their chosen method and achieve better results. It is important to avoid betting arbitrarily based on emotions, as this can lead to unnecessary losses.
See more: https://88betg.com/cach-choi-baccarat-online/
See more: https://flipboard.com/@88betg
#baccarat #88betg #188BET #nha_cai_188BET #nha_cai #casino
| 88betg | |
1,749,151 | Regression Testing Fundamentals – All You Need To Know | Regression testing is software testing that verifies an app’s continued performance, irrespective of... | 0 | 2024-02-02T04:01:18 | https://dev.to/berthaw82414312/regression-testing-fundamentals-all-you-need-to-know-15ne | [Regression testing](https://www.headspin.io/blog/regression-testing-a-complete-guide) is software testing that verifies an app’s continued performance, irrespective of the number of updates, upgrades, or code changes.
Developers run regression tests when they add a new modification to the code. Since it is in charge of ensuring the app’s functionality and stability, regression tests ensure the app is sustainable despite changes.
This test is essential because when developers perform code changes, it can cause a ripple effect and introduce malfunctions, dependencies, or flaws in the code. Regression tests ensure that previously tested code is unaffected by these changes.
Regression tests are typically the final step of the SDLC. They check the product’s behavior, verify its functionality, and help improve time to market.
## Differences Between Re-testing and Regression Testing
To those new to automation, regression testing and re-testing can be confusing. Although these tests sound similar, they are very different from each other.
Re-testing involves repeating a test for a particular purpose. Testers can re-test the app and repair the source code if they find a fault. Sometimes test cases can fail in their execution. In such cases, testers will need to re-test or repeat the test.
Regression testing, however, involves verifying updates or changes to the code to find out if these changes introduced flaws in existing functions.
Organizations usually perform re-testing before regression tests. While re-testing mainly ensures that failed test cases work well, regression testing examines the ones that succeed and ensures they don’t introduce any new flaws. Another significant distinction is that, unlike regression testing, re-testing involves error verifications.
Automation plays a significant role in regression testing, allowing testers to focus on interpreting test case results.
**Regression Testing in Agile**
Agile development strategies can help teams achieve benefits like increased ROI, product improvements, customer support, and improved time-to-market. Yet, the challenge is to avoid conflicts with balancing iterative testing and sprint development.
Implementing regression testing within agile is critical: it keeps existing and upgraded capabilities in sync, prevents future rework, and ensures that business functions remain reliable and sustainable.
With existing functionality safeguarded, developers have more time to focus on building new features rather than constantly fixing old problems. Additionally, regression testing in agile helps testers identify unexpected issues early, so they can respond more quickly and efficiently.
## Why is regression testing necessary?
Automated regression testing helps improve time to market by improving feedback time. The quicker product teams receive feedback, the faster they can identify flaws. Early identification of faults can help save organizations a lot of money and time.
The fact that a small iteration can have a cascading effect on the app’s significant functionalities is alarming. Hence, developers cannot ignore regression tests to ensure seamless app performance.
Functional tests are undoubtedly beneficial; however, while they do a great job examining the app’s new features, they do not check their compatibility with existing features.
Adding regression testing to the mix is crucial as it can help testers perform a root cause analysis, serving as a filter to ensure improvements in product quality. Without regression testing, this process would be highly time-consuming and challenging.
## When is it beneficial to perform Regression testing?
You will find it beneficial to perform regression testing in the following scenarios:
- Regression tests are essential when the product you’re developing is in repeated need of new features.
- When testing product enhancements and minimizing manual testing labor.
- For areas where you want to reduce manual testing efforts and enhance your product testing capabilities.
- Regression tests can help you validate customer defect reports.
- When your product is due for an upgrade in performance, regression tests can help you.
**How to Conduct Regression Testing**
While strategies for regression testing differ from organization to organization, here are some fundamental steps you must take:
**Detect Changes in the Source Code**
It is essential to detect optimizations and changes to the source code. This check helps you track the components that were changed and understand the impacts these changes have had on existing features.
**Prioritize Changes and Product Requirements**
After identifying the modifications, prioritize them to help streamline the testing process. Ensure you assign appropriate test cases to help fix the issues.
**Determine Entry/Exit Criteria**
Before executing the regression test, ensure your application meets the required eligibility criteria. Also, determine an exit point.
**Schedule The Test**
Once you identify all test components, execute the regression test.
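As an illustration of these steps (a hypothetical `slugify` function with its known-good outputs pinned), an automated regression suite can be as simple as:

```python
# Hypothetical function under test: its known-good outputs are pinned below,
# so a future code change that alters behaviour fails the suite.
def slugify(title):
    return title.strip().lower().replace(" ", "-")

REGRESSION_CASES = {
    "Hello World": "hello-world",
    "  Trailing Spaces  ": "trailing-spaces",
}

def run_regression_suite():
    """Return (input, got, expected) tuples for every failing case."""
    return [(inp, slugify(inp), want)
            for inp, want in REGRESSION_CASES.items()
            if slugify(inp) != want]

print(run_regression_suite())  # an empty list means no regressions
```

Real suites use a framework like pytest and run on every code change in CI, but the principle is the same: lock in behaviour that already works.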
## Conclusion
Regression testing helps improve user experience and product quality. Choosing the right regression intelligence tool can help detect surface and deep-rooted flaws early in the pipeline. Regression testing can make things easier for manual testers.
Furthermore, using agile methodologies in regression testing ensures technical and business advantages. However, to ensure you meet all your regression test requirements, it’s essential to start planning early and have a comprehensive strategy ready.
[HeadSpin](https://www.headspin.io/enterprise) offers AI-based testing insights that significantly improve the quality of your regression tests. The AI provides actionable insights that can help you solve problems quickly and improve your time to market.
Original source: https://complextime.com/regression-testing-fundamentals-all-you-need-to-know/ | berthaw82414312 | |
1,749,194 | Why is Web App and API Security Testing So Critical? | When you search for the most common causes of cyberattacks; phishing and unpatched vulnerabilities... | 26,275 | 2024-02-02T13:30:00 | https://dev.to/jigar_online/why-is-web-app-and-api-security-testing-so-critical-3i2i | webdev, security, cybersecurity, learning | When you search for the most common causes of cyberattacks; phishing and unpatched vulnerabilities will top the list. Hackers mostly exploit inherent vulnerabilities in APIs and web apps to gain unauthorized access and steal data. However, sad but true that [OWASP Top 10 report](https://owasp.org/Top10/A03_2021-Injection/) states that 94% of the applications they tested were vulnerable to injection attacks.
So, why do vulnerabilities become a security challenge? This happens due to complete negligence of API and web app security testing or not using effective measures for this. Unfortunately, many times organizations fail to understand the importance of security testing.
Thus, such organizations suffer significant losses. In 2023, the [data breach cost was $4.45 million](https://www.ibm.com/reports/data-breach) on average. This shows how disastrous vulnerabilities can be.
## A Brief Note on Web App and API Security Testing
The aim of security testing is to evaluate whether a web app or API is susceptible to hacking. It involves performing various tests and observing responses to detect unexpected behavior. Professional security testers use automated and/or manual procedures to perform these tests. Any vulnerability is like a chink in your armour that will cause destruction. Finding and removing them is vital to fortifying your digital landscape. Hence, API and web application security testing is essential.
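For instance, a black-box style probe might feed classic injection payloads to an input validator and flag any that slip through (a hypothetical sketch; `is_safe_username` stands in for the code under test):

```python
# Classic attack payloads used as black-box inputs.
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>", "admin'--"]

def is_safe_username(value: str) -> bool:
    """Hypothetical validator under test: alphanumeric, at most 32 chars."""
    return value.isalnum() and len(value) <= 32

def probe(validator):
    """Return every payload the validator wrongly accepts."""
    return [p for p in PAYLOADS if validator(p)]

print(probe(is_safe_username))  # an empty list means all payloads were rejected
```

Real security testing tools apply the same idea at scale, sending thousands of crafted inputs against live endpoints and analysing the responses.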
## Why is API and Web App Security Testing So Important?
One thing every web app and API has in common is that they all handle data. Being a valuable asset to a business, data must be handled and protected efficiently. However, security loopholes in your API or web app can result in data breaches that will impact your business. Finding and fixing vulnerabilities that can compromise security helps to protect these assets.
Let’s check its importance as follows.
**1. Data Security**
Data protection is important to secure it from theft, unauthorized access, and corruption. Organizations must protect their data for various ethical and legal reasons. For example, GDPR is a well-known data protection law that safeguards data of EU citizens. Organizations failing to comply with this law must face heavy penalties.
Vulnerabilities in web apps and APIs can cause data breaches that will violate laws and risk confidential information. However, you can prevent such scenarios by testing the applications and APIs for security issues and addressing them before they get exploited. For example, a quick web app vulnerability scan with an automated tool can reveal weaknesses related to data safety.
**2. Reputation**
Security breaches result in financial and reputational damage for organizations. Many customers will stop doing business with you, which will affect your sales and future growth. Further, many of them will also share their experiences with others through verbal communication or on social media.
To avoid this nightmare, you need to work on the root cause of this problem and that’s the vulnerabilities. With security testing, you can discover potential vulnerabilities like OWASP Top 10 that cause security breaches and protect your web apps and APIs.
**3. Customer Confidence**
Trust is the reason that customers are ready to share their personal or financial information when using various applications. However, security breaches can erode customer trust and regaining their confidence will be extremely challenging. They are likely to choose competitors after this.
Moreover, you can maintain customer trust by better protecting their information. API and web app security testing plays a critical role here. You can find weaknesses in the application and API that will help to strengthen their security.
**4. Smooth Business Operations**
Another advantage of API and web application testing is avoiding disruptions in your business. Business operations are affected after a security issue as hackers can cause service downtime with attacks like DDoS. They can make your web application unavailable to users or damage the system to stop it from running.
You can perform an automated web app or API vulnerability scan to find vulnerabilities that cause DDoS and similar attacks to avoid disruptions. It also helps you build a strong response plan to allow users to continue using the application even in case of a cyberattack.
**5. Compliance Requirements**
Adhering to legal compliances and standards is another benefit you get with thorough security testing. There are many automated vulnerability scanning tools that not only detect application weaknesses but also highlight compliance issues. Hence with web app security testing, you can ensure that it complies with standards like HIPAA, ISO27001, PCI DSS, SOC2, and more.
## Different Methods of Web App and API Security Testing
While security testing is crucial for web apps and APIs without any doubt, choosing the right method is equally important. The following are the different approaches to application testing you can choose from.
**Black-Box Testing**
In this type of testing method, a tester hunts for security issues without knowing the internal details of the application or API. The tester performs tests “outside in” with simulated attacks just like a hacker from the front-end.
**White-Box Testing**
The tester knows the internal details of an application or API and checks the source code to detect potential security issues. It is a static code analysis method also known as “inside-out” testing.
**Gray-Box Testing**
It is a combination of both and allows a tester to identify issues with improper use or design of the application or API. Having partial knowledge of the application’s implementation, the tester checks the target both inside-out and outside-in.
## Final Note
As cyberattacks are growing in number and complexity, keeping your systems, applications, and APIs secure becomes more challenging. Security testing tools like [ZeroThreat](https://zerothreat.ai/) helps you evaluate your APIs and applications to find flaws and weaknesses that allow cyberattacks. The tests help to pinpoint the issues and their reasons to effectively resolve them and mitigate security risks. | jigar_online |
1,749,197 | JavaScript Functions: The Heroes of Your Code! ⚡️ | Welcome, fellow code adventurers, to the thrilling world of JavaScript functions! 🌟 In this epic... | 25,941 | 2024-02-17T08:47:51 | https://dev.to/aniket_botre/javascript-functions-the-heroes-of-your-code-4l2d | javascript, webdev, programming, frontend | Welcome, fellow code adventurers, to the thrilling world of JavaScript functions! 🌟 In this epic journey, we'll delve deep into the heart of functions, exploring their myriad forms and unraveling their mysteries. So buckle up, sharpen your swords (or rather, your keyboards), and let's embark on this exhilarating quest! 🎮
[JavaScript Function - Fireship video on youtube](https://medium.com/r/?url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3DgigtS_5KOqo%26list%3DPL0vfts4VzfNixzfaQWwDUg3W5TRbE7CyI%26index%3D4)
---
## Introduction: What's the "Function" of Functions? 🌟
A function, my friends, is a real workhorse. It's a reusable block of code that performs a specific task. Think of it like a personal assistant who's always ready to perform a particular task whenever you snap your fingers. 🤖
And, boy oh boy, are they important! They offer encapsulation, reusability, modularity, abstraction, and make testing a breeze. In short, they're the superheroes of your code, silently saving the day! 🦸♂️ Function declarations are also hoisted, so they can be called before they're declared.
### The Syntax of a Function 📝
Declaring a function is as easy as saying "Abracadabra!". To create a function, you start with the `function` keyword, followed by a unique name, and then a pair of parentheses that may or may not include parameters. The body of the function, where the magic happens, is wrapped in curly braces.
Here's a simple example:
```javascript
function sayHello() {
console.log("Hello world");
}
sayHello(); //output: Hello world
```
Just like magic, isn't it? 🎩
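As mentioned earlier, function declarations are hoisted, so a call can appear before the declaration. A quick sketch (hypothetical names):

```javascript
// This call works even though the declaration comes later in the file,
// because the whole function declaration is hoisted to the top of its scope.
const greeting = makeGreeting("Ada");

function makeGreeting(name) {
  return `Hello, ${name}!`;
}

console.log(greeting); // Hello, Ada!
```

Note that this only works for function declarations; function expressions (covered below) are not hoisted this way.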
---
## Parameters vs. Arguments: What's the Difference? 🤔
Parameters are like placeholders in a recipe, waiting to be filled with actual ingredients, which are the arguments. When you declare a function, you define parameters. When you call a function, you pass arguments.
Parameters act as placeholders within the function's definition, defining the inputs it expects to receive. They serve as variables that store the values passed to the function when invoked, allowing it to operate on different data each time it's called. Arguments, on the other hand, are the actual values passed to a function when it's called. They provide concrete data that fulfils the function's parameters, allowing it to perform its designated task.
Parameters and arguments work in tandem, allowing functions to be versatile and adaptable to various scenarios. In JavaScript, parameters and arguments can come in different types, including function parameters, default parameters, rest parameters, and the special "arguments" object, each offering unique capabilities and flexibility in crafting powerful and expressive code.
Parameters and arguments are like relay runners. The former waits for the baton (value) that the latter passes. Here's how they team up:
```javascript
function multiply(num1, num2 = 2) {
  return num1 * num2;
}

console.log(multiply(5, 10)); // Output: 50
```
In this race, `num1` and `num2` are the parameters, and `5` and `10` are the arguments. They make a great team, don't they? 🏆
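The default and rest parameters mentioned above can be sketched like this (the function names here are just illustrative, not from the original snippet):

```javascript
// Default parameter: num2 falls back to 2 when the caller omits it
function multiplyOrDouble(num1, num2 = 2) {
  return num1 * num2;
}

// Rest parameter: gathers any number of arguments into a real array
function sum(...nums) {
  return nums.reduce((total, n) => total + n, 0);
}

console.log(multiplyOrDouble(5));    // Output: 10
console.log(multiplyOrDouble(5, 3)); // Output: 15
console.log(sum(1, 2, 3, 4));        // Output: 10
```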
---
## Function Expressions 🎭
Function expressions offer a dynamic approach to function creation, allowing us to assign functions to variables and constants. Like a secret identity, it's assigned to a variable. But remember, it's not hoisted!
```javascript
const greet = function(name) {
  console.log(`Hello, ${name}!`);
};
greet('Jane'); // Output: Hello, Jane!
```
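To see the hoisting difference in action, here's a small illustrative comparison: the declaration can be called before it appears in the code, while the expression throws if called too early.

```javascript
// Function declaration: hoisted, so this early call works
hoisted(); // Output: I was hoisted!

function hoisted() {
  console.log("I was hoisted!");
}

// Function expression: NOT hoisted; calling it before the
// assignment throws a ReferenceError
try {
  notHoisted();
} catch (e) {
  console.log(e.name); // Output: ReferenceError
}

const notHoisted = function() {
  console.log("You only see this after the assignment");
};

notHoisted(); // Output: You only see this after the assignment
```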
---
## Arrow Functions: The Cool Kids of JavaScript ➡️🏹
Arrow functions, introduced in ES6, are the sleek sports cars of the function world. They have a shorter syntax and some unique features compared to traditional functions. They might seem like a mystical entity with their different syntax and behavior, but fear not, for we're about to demystify them! 🕵️♂️
### Syntax: Short and Sweet 🍭
Arrow functions feature a shorter syntax compared to traditional function expressions. Here's an example:
```javascript
const greet = name => console.log(`Hello, ${name}!`);
greet('Tom'); // Output: Hello, Tom!
```
This is equivalent to:
```javascript
const greet = function(name) {
  console.log(`Hello, ${name}!`);
};
greet('Tom'); // Output: Hello, Tom!
```
Notice how we saved some keystrokes with the arrow function? That's the beauty of its syntax!
### The Deal with 'this' 🎭
One of the key differences between arrow functions and regular functions is how they handle the `this` keyword. In regular functions, `this` is a shape-shifter, changing its value based on the context in which the function is called (such as the calling object).
However, arrow functions are more predictable - their `this` is lexically bound. This means it takes on the value of `this` from the surrounding code where the arrow function is defined. Let's look at an example:
[P.S. We will be covering concepts like context, object and lexical scoping in later blogs...]
```javascript
const myObject = {
  name: 'John',
  sayHello: function() {
    setTimeout(function() {
      console.log(`Hello, ${this.name}!`);
    }, 1000);
  }
};
myObject.sayHello(); // Output: Hello, undefined!
```
In this example, `this.name` is `undefined` because `this` inside the `setTimeout` callback refers to the global object (or is `undefined` in strict mode), not `myObject`.
Now, let's use an arrow function:
```javascript
const myObject = {
  name: 'John',
  sayHello: function() {
    setTimeout(() => {
      console.log(`Hello, ${this.name}!`);
    }, 1000);
  }
};
myObject.sayHello(); // Output: Hello, John!
```
This time, `this.name` correctly refers to the `name` property of `myObject`, because the arrow function takes its `this` from the surrounding (lexical) context.
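The same lexical `this` behavior pays off outside timers as well, for example in array callbacks (this extra example isn't from the original snippet):

```javascript
const counter = {
  count: 0,
  addAll(values) {
    // The arrow callback inherits `this` from addAll,
    // so `this.count` is the counter's own property
    values.forEach(v => {
      this.count += v;
    });
  }
};

counter.addAll([1, 2, 3]);
console.log(counter.count); // Output: 6
```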
In conclusion, arrow functions are a powerful addition to JavaScript, offering a shorter syntax and a predictable `this`. However, they come with their own caveats (for example, they have no `arguments` object and can't be used as constructors), so use them wisely!
Remember, the right tool for the right job. Sometimes, an arrow is perfect for the target, at other times, you might need the whole quiver! 🏹
---
## IIFE: The Undercover Agent of JavaScript 🕵️♀️
IIFE or Immediately Invoked Function Expression is like an undercover agent in the world of JavaScript. It's a function that runs as soon as it is defined. Think of it as a fire-and-forget missile; it does its job and then self-destructs. Let's dive deeper into this intriguing concept.
### Syntax: Now You See Me, Now You Don't! 🎩
Here's what an IIFE looks like:
```javascript
(function() {
  console.log('Hello World!');
})(); // Output: Hello World!
```
Notice the extra pair of parentheses at the end? That's the secret sauce that makes the function execute immediately. It's like saying, "Hey function, don't just stand there, do your job NOW!".
### Use Cases: When to Deploy the Agent? 🚀
IIFEs are particularly good in two scenarios:
- **Variable Isolation:** When you need to isolate variables and prevent them from polluting the global scope, IIFEs are the way to go.
```javascript
(function() {
  let secretAgent = 'James Bond';
  console.log(secretAgent); // Output: James Bond
})();

console.log(secretAgent); // Error! secretAgent is not defined
```
In this example, `secretAgent` is only accessible inside the IIFE and doesn't pollute the global scope. This is a great way to encapsulate logic in your code.
- **Temporary Work:** If you need to do some temporary work that doesn't need to be reused or leave behind any global variables or functions, an IIFE is perfect.
```javascript
(function() {
  let temp = 'Processing some data...';
  console.log(temp); // Output: Processing some data...
})();
```
### Caveats: Every Hero has a Kryptonite! 🤷♂️
While IIFEs are powerful, they have some limitations:
1. No Reusability: Since an IIFE is executed immediately, it cannot be reused. This is by design, but it also means you have to create a new function if you need the same functionality again.
2. Debugging: Debugging can be trickier with IIFEs since they are anonymous. This can make stack traces harder to follow.
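One way to soften the debugging caveat is to give the function expression a name: the name stays local to the function itself, yet appears in stack traces. A small sketch (the name is just illustrative):

```javascript
// The name `setupAgent` is local to the expression itself,
// so it doesn't leak into the outer scope...
(function setupAgent() {
  let codename = 'Agent 007';
  console.log(codename); // Output: Agent 007
})();

// ...but it will show up in stack traces, which helps debugging
console.log(typeof setupAgent); // Output: undefined
```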
In the world of JavaScript, IIFEs are a powerful tool to run a piece of code immediately and keep the global scope clean. They are like secret agents, doing their job without leaving a trace.
Remember, with great power comes great responsibility. Use IIFEs wisely, and they can be a valuable asset in your coding arsenal.
---
## Wrapping Up🎁
Functions in JavaScript are essential tools for writing clean, reusable, and modular code. Whether you're using a normal function, an arrow function, or an IIFE, understanding how they work and when to use them is key to becoming a JavaScript pro. Stay tuned for more deep dives into the world of JavaScript functions in upcoming blogs!
And remember, just like in cooking, the right function can make your code a delightful dish to serve! 🍽️👨🍳
We've just scratched the surface of JavaScript functions today. There's more to come in our next blog posts. Stay tuned, and keep on coding! 🚀
| aniket_botre |
1,749,235 | Streamlining Communication: Unveiling the Best Shopify Contact Form Plugins | In the dynamic world of e-commerce, establishing seamless communication channels between businesses... | 0 | 2024-02-02T06:14:45 | https://dev.to/cftoanyapi/streamlining-communication-unveiling-the-best-shopify-contact-form-plugins-1de9 | contactforemtoanyapi, wordpressplugins | In the dynamic world of e-commerce, establishing seamless communication channels between businesses and customers is paramount. One of the key components in achieving this is a robust and user-friendly contact form. Shopify, being a popular e-commerce platform, provides a myriad of options for integrating contact forms into your online store. In this blog post, we will explore and highlight the best [Shopify contact form plugins](https://www.contactformtoapi.com/shopify/) that can elevate your customer interaction game.

### The Importance of Contact Forms
Before delving into the specifics of the best Shopify contact form plugins, let's first understand the significance of having an effective contact form on your e-commerce website.
1. **Customer Engagement**: A well-designed contact form encourages customers to reach out with queries, feedback, or concerns. This engagement fosters a sense of trust and transparency between the customer and the business.
2. **Problem Resolution**: Contact forms serve as a direct link for customers to communicate issues they may be facing. Quick and efficient resolution of problems can significantly impact customer satisfaction and loyalty.
3. **Lead Generation**: Contact forms are not just for troubleshooting; they can also be powerful tools for lead generation. By collecting customer information through these forms, businesses can build their email lists and tailor marketing strategies.
4. **Professionalism**: A professionally designed contact form adds a touch of legitimacy to your online store. It signals to customers that your business is open to communication and takes their concerns seriously.
### Top Shopify Contact Form Plugins
Now, let's explore some of the best Shopify contact form plugins that can enhance your store's communication capabilities.
#### 1. **Form Builder with File Upload by HulkApps**
HulkApps' Form Builder is a versatile tool that empowers **[Shopify](https://www.contactformtoapi.com/shopify/)** store owners to create customizable contact forms effortlessly. What sets this plugin apart is its file upload feature, enabling customers to attach relevant documents or images when submitting their inquiries. This is particularly useful for resolving specific issues or addressing product-related queries that may require visual clarification.
**Key Features:**
- Drag-and-drop form builder for easy customization
- File upload capability for added convenience
- Mobile-responsive forms for a seamless user experience
- Integration with popular email marketing tools
#### 2. **Zendesk Chat + Email by Zendesk**
Zendesk is a renowned name in customer support, and their Shopify integration brings forth a powerful combination of live chat and email. The contact form feature seamlessly integrates with Zendesk's ticketing system, allowing for efficient management and resolution of customer inquiries. Real-time chat functionality also enhances customer engagement.
**Key Features:**
- Live chat for instant customer support
- Email ticketing system for organized query resolution
- Customizable contact forms to align with your brand
- Analytics to track and improve customer interactions
#### 3. **Contact Us Form + Captcha by POWr.io**
POWr.io's Contact Us Form is a user-friendly solution for creating sleek and functional contact forms. With an intuitive interface, this plugin makes it easy to design and embed contact forms anywhere on your Shopify store. The inclusion of a captcha feature enhances security and prevents spam submissions.
**Key Features:**
- Simple drag-and-drop form builder
- Captcha integration for spam prevention
- Customizable design to match your store's aesthetic
- Analytics to track form submissions
#### 4. **Reamaze by Reamaze**
Reamaze is an all-in-one customer communication platform that seamlessly integrates with Shopify. Beyond traditional contact forms, Reamaze provides features like live chat, knowledge base, and automated workflows. The unified dashboard simplifies customer communication management, making it an ideal solution for growing e-commerce businesses.
**Key Features:**
- Multi-channel support (email, chat, social media)
- Automated workflows for efficient query resolution
- Knowledge base for self-service customer support
- Integration with third-party apps and tools
#### 5. **Tidio Live Chat + Bots by Tidio Ltd.**
Tidio offers a comprehensive solution for customer communication, combining live chat, chatbots, and email in one platform. The live chat feature provides real-time assistance, while the chatbots can handle common queries automatically. The email component ensures that customer inquiries submitted through the contact form are promptly addressed.
**Key Features:**
- Live chat for instant communication
- Chatbots for automated responses
- Email support for detailed queries
- Mobile app for on-the-go customer support
### Conclusion
In conclusion, selecting the best Shopify contact form plugin depends on the specific needs and preferences of your online store. Whether you prioritize file uploads, live chat, spam prevention, or a comprehensive customer communication platform, there's a plugin that can cater to your requirements.
Investing in a reliable and user-friendly contact form plugin is not just about fulfilling a basic requirement; it's about enhancing the overall customer experience and building lasting relationships. As you explore the options mentioned above, consider the unique features that align with your business goals and customer service objectives. Ultimately, a well-implemented contact form can be a game-changer in creating a positive and communicative online shopping environment for your customers. | cftoanyapi |
1,749,252 | Mastering Vue 3: A Comprehensive Guide to Building Modern Web Applications < Part 2 /> | Content: Mastering Vue 3 - Part 1 [Introduction and key features] 2. Mastering Vue 3 - Part 2... | 0 | 2024-02-02T06:49:38 | https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-part-2--1k0l | webdev, javascript, programming, vue | **<u>Content:</u>**
1. [Mastering Vue 3 - Part 1 [Introduction and key features]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-part-1--c75)
**2. Mastering Vue 3 - Part 2 [Benefits of using Vue 3 for web development]**
[3. Mastering Vue 3 - Part 3 [Template syntax in Vue 3]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-part-3--fde)
[4. Mastering Vue 3 - Part 4 [Reactivity Fundamentals]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-19c0)
[5. Mastering Vue 3 - Part 5 [Class and Style Bindings]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-16g)
[6. Mastering Vue 3 - Part 6 [Lifecycle Hooks]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-1c6e)
[7. Mastering Vue 3 - Part 7 [Understanding components in Vue 3]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-3nnd)
[8. Mastering Vue 3 - Part 8 [Installing Vue project and file structure]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-17ge)
[9. Mastering Vue 3 - Part 9 [Vue Router in Vue 3]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-5d1p)
[10. Mastering Vue 3 - Part 10 [Animation in Vue 3]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-4ig2)
[11. Mastering Vue 3 - Part 11 [State Management with Pinia]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-53gh)
[12. Mastering Vue 3 - Part 12 [Teleport in Vue 3]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-5h64)
[13. Mastering Vue 3 - Part 13 [Working with API Calls ]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-5h64)
[14. Mastering Vue 3 - Part 14 [Forms and Form Validation ]](https://dev.to/hanytaha61/mastering-vue-3-a-comprehensive-guide-to-building-modern-web-applications-4bhi)
---
## Benefits of using Vue 3 for web development
Vue 3 has emerged as a game-changer in the world of web development, offering a plethora of advantages that cater to the evolving needs of modern developers. In this blog post, we'll delve into the compelling benefits of using Vue 3 and how it can elevate your web development endeavors to new heights.
#### Enhanced Reactivity and Performance
Vue 3 introduces a revamped reactivity system, leveraging ES6 Proxies to achieve more efficient reactivity tracking. This enhancement not only simplifies the management of reactivity but also brings notable performance improvements, making Vue 3 an ideal choice for building highly responsive and dynamic user interfaces.
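Vue's actual implementation is far more involved, but the core Proxy idea can be sketched in a few lines of plain JavaScript. Note that this toy `reactive` helper is illustrative only and is not Vue's API:

```javascript
// Toy reactivity: notify a subscriber on every property write
function reactive(target, onChange) {
  return new Proxy(target, {
    set(obj, key, value) {
      obj[key] = value;
      onChange(key, value);
      return true;
    }
  });
}

const log = [];
const state = reactive({ count: 0 }, (key, value) => {
  log.push(`${key} -> ${value}`);
});

state.count = 1;
state.count = 2;
console.log(log); // Output: [ 'count -> 1', 'count -> 2' ]
```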
#### Composition API for Code Organization
The Composition API, a prominent feature of Vue 3, offers a structured and flexible approach to organizing code logic within components. By enabling better code reuse and encapsulation, the Composition API fosters cleaner and more maintainable codebases, empowering developers to build scalable applications with ease.
#### Smaller Bundle Size and Faster Mounting
Vue 3 optimizes bundle size and speeds up mounting times, contributing to a more streamlined user experience. With improved tree-shaking capabilities and a reduced runtime size, Vue 3 minimizes the overhead of the framework, resulting in faster initial loads and enhanced overall performance.
#### Teleport for Seamless DOM Rendering
The addition of the Teleport feature in Vue 3 allows for effortless rendering of content at any position in the DOM tree. This feature enables developers to build more flexible and dynamic user interfaces, facilitating the creation of sophisticated layout structures without the need for complex workarounds.
#### Integration with the Vue Devtools
Vue 3 seamlessly integrates with the Vue Devtools, providing a powerful set of developer tools for debugging, inspecting component hierarchies, and analyzing state changes. This integration enhances the development experience, allowing developers to gain valuable insights into their Vue 3 applications and streamline the debugging process.
#### Ecosystem and Community Support
Vue 3 benefits from a thriving ecosystem of libraries, plugins, and tools, bolstered by an active and supportive community. With a rich selection of official and third-party packages, Vue 3 empowers developers with the resources they need to accelerate development and build feature-rich applications.
#### Migration Path from Vue 2
For existing Vue 2 projects, Vue 3 provides a clear migration path, allowing for a smooth transition to the latest version while offering compatibility with existing codebases. This ensures that developers can leverage the new features and improvements of Vue 3 without facing significant migration hurdles. [[Read More…](https://v3-migration.vuejs.org/)]
**In conclusion,** Vue 3 stands out as a robust and versatile framework for web development, offering enhanced reactivity, improved performance, a more organized code structure, and a rich ecosystem of tools and community support. Embracing Vue 3 empowers developers to craft modern and responsive web applications with efficiency and confidence, setting the stage for a compelling and rewarding development journey. Vue 3's array of benefits positions it as a compelling choice for developers seeking an intuitive, performant, and future-ready framework for web development.
| hanytaha61 |
1,749,270 | Fixing Raycasting (Cosplore3D Pt:2) | Intro This is a new series following Cosplore3D, a raycaster game to learn 3D graphics.... | 26,268 | 2024-02-02T07:18:35 | https://dev.to/chigbeef_77/fixing-raycasting-cosplore3d-pt2-5a15 | go, gamedev, learning | ## Intro
This is a new series following [Cosplore3D](https://github.com/Chig-Beef/Cosplore3D), a raycaster game to learn 3D graphics. This project is part of 12 Months 12 Projects, a challenge I set myself.
## Main
Let's go through the code so far, as I've spent quite a while on it so I'll have to go through most of it. Firstly, we're using [Ebitengine](https://ebitengine.org/) to work with graphics. I have a bit of experience with this, but not too much so this will also be a learning point for me.
## Levels
First, before we get into how we see the world, we actually have to implement the world. World data is currently saved as a text file, for example.
```
1111111
1030001
1030001
1030001
1000201
1000001
1411111
```
Each level has `data`, which is a 2D slice of `Tile`s, which correspond to each digit in the file above. The level also has a `name`.
```golang
type Level struct {
    name string
    data [][]Tile
}
```
Here is also the `Tile` struct, which corresponds to each digit.
```golang
type Tile struct {
    x     float64
    y     float64
    w     float64
    h     float64
    code  uint8
    color color.Color
}
```
We need to keep its coordinates and its size. By keeping hold of its size will make it easier to implement rectangular prisms in the future. The `code` property keeps track of which type of `Tile` we have, for example, 0 would be an empty space, and 1 could be a wall. We also have `color`, which can change based on the `Tile`'s `code`.
## Loading The Levels
Since this is going to be a small game (at least for now) we can load all levels at the start of the game.
```golang
func load_levels(tileSize float64) map[string]Level {
    levels := make(map[string]Level)

    var levelData [][]string

    rawLevelData, err := os.ReadFile("maps/levels.json")
    if err != nil {
        return levels
    }

    err = json.Unmarshal(rawLevelData, &levelData)
    if err != nil {
        log.Fatal("failed to load `./maps/levels.json`, file may have been tampered with, reinstall advised")
    }

    // Get every level held in ./maps/
    for i := 0; i < len(levelData); i++ {
        fName := levelData[i][0]
        lName := levelData[i][1]

        rawLevel, err := os.ReadFile("maps/" + fName)
        if err != nil {
            return levels
        }

        // Now we have a list of strings as the data
        rawRows := strings.Split(string(rawLevel), "\r\n")

        tiles := [][]Tile{}
        for row := 0; row < len(rawRows); row++ {
            tiles = append(tiles, []Tile{})
            for col := 0; col < len(rawRows[row]); col++ {
                code, err := strconv.Atoi(string(rawRows[row][col]))
                if err != nil {
                    log.Fatal("failed to load a level correctly")
                }

                clr := get_color(uint8(code))

                tiles[row] = append(tiles[row], Tile{
                    float64(col) * tileSize,
                    float64(row) * tileSize,
                    tileSize,
                    tileSize,
                    uint8(code),
                    clr,
                })
            }
        }

        levels[lName] = Level{lName, tiles}
    }

    return levels
}
```
Now to understand this code a bit more, it would be good to understand how levels are saved. Each level is saved in the maps folder. In the maps folder there is an extra file called "levels.json". This json file holds both the file for each level, and the name of that level. So the first thing we do is load this json up so we know all the levels to load. We can then loop over all levels to load.
## Collision With Tiles
Something that will be important later is checking for collision with a tile.
```golang
func (t *Tile) check_hit(x, y int) bool {
    if x == int(t.x) || x == int(t.x+t.w) {
        if y >= int(t.y) && y <= int(t.y+t.h) {
            return true
        }
    }

    if y == int(t.y) || y == int(t.y+t.h) {
        if x >= int(t.x) && x <= int(t.x+t.w) {
            return true
        }
    }

    return false
}
```
Now, since rays from a raycaster always end up on tile edges, we can check whether it is on one of the `Tile`'s horizontal lines, or vertical lines. If it's on either, we then need to check whether it's in the range. If this function returns `true`, then the ray has ended up on a point of the outline of the `Tile`. It does not check for collision within the `Tile`.
## The Player
```golang
type Player struct {
    x        float64
    y        float64
    angle    float64
    camera   *Camera
    curLevel string
}
```
Most of this may seem normal for a player object, but the `Player` also has a `Camera`. Since there is going to be quite a bit of code dedicated to viewing the world, I separated it from the `Player` logic.
```golang
func (p *Player) update() {
    if ebiten.IsKeyPressed(ebiten.KeyA) {
        p.angle--
    }
    if ebiten.IsKeyPressed(ebiten.KeyD) {
        p.angle++
    }
    if ebiten.IsKeyPressed(ebiten.KeyW) {
        p.x += math.Cos(to_radians(p.angle)) * 2
        p.y += math.Sin(to_radians(p.angle)) * 2
    }
    if ebiten.IsKeyPressed(ebiten.KeyS) {
        p.x -= math.Cos(to_radians(p.angle)) * 2
        p.y -= math.Sin(to_radians(p.angle)) * 2
    }

    bound_angle(&p.angle)

    p.camera.update_props(p)
}
```
We are going to control the player with the keyboard first, and we'll implement mouse movement later on. Keyboard is easier for testing. We want to move the player in the direction they are facing using sine and cosine. `bound_angle` is a function that keeps an angle between 0 and 360.
Now, if you don't know what sine and cosine are, that's fine, just imagine I have a circle with a point in the middle. The `Player` is that point, and we can draw a line from the `Player` to the outside of the circle. This line will be drawn at the `Player`'s angle. If the `Player` is at `x=0` and `y=0` then cosine is the x coordinate of the other end of that line, and sine is the y coordinate. Now if we imagine this line is only 1 unit long, then we have x and y values between -1 and 1 that we can scale, and we can make them as big as we want (say if we want the player to move faster). At some point we will look at tangent in the code, which is just sine divided by cosine.
## Logic For Drawing The World
Now it would make sense to start off by only worrying about `Tile`s that are solid, which in my case are ones with a `code` that isn't 0. Now, we want to cast a ray for every vertical column of the screen, so we create a loop to do this for us. We need another variable to keep track of the angle of the rays we are casting. This value should start at the angle the `Player` is facing, with half of the field of view taken away. This means that the last ray ends up at the `Player`'s angle plus half the field of view, so the difference in angle between the first and last ray is the field of view. We cast this ray, and the function we are going to look at after this will return the tile it hit, and how far it went.
Now, you may see that we multiply this distance by a cosine value. This gets rid of the fisheye effect I showed in my last post (I don't want to talk about it).
Lastly, we just find how tall the line we need to draw will be, and where we need to draw it, and we can use the color from the tile to draw a line. To finish it off, we need to increment the angle a bit, so that each ray shoots off at a slightly different angle.
```golang
func (c *Camera) draw_world(level Level, screen *ebiten.Image) {
    tiles := level.get_solid_tiles()

    var ra float64

    ra = c.angle - c.fov/2.0
    bound_angle(&ra)

    ri := c.fov / float64(screenWidth)

    for r := 0; r < int(screenWidth); r++ {
        t, disT := c.ray_cast(ra, tiles)

        a := c.angle - ra
        bound_angle(&a)
        if a > 180 {
            a -= 360
        }
        disT *= math.Cos(to_radians(a))

        // Clamp the line height so it never exceeds the screen
        lineH := (float64(tileSize) * screenHeight) / disT
        if lineH > screenHeight {
            lineH = screenHeight
        }

        lineX := screenWidth - (a/c.fov*screenWidth + screenWidth/2.0)

        ebitenutil.DrawLine(screen, lineX, float64(screenHeight)/2.0-lineH/2.0, lineX, float64(screenHeight)/2.0+lineH/2.0, t.color)

        ra += ri
        bound_angle(&ra)
    }
}
```
## Casting Rays
Now we are actually going to look at the real blunt work of a raycaster. The way a raycaster works is that each ray works in 2 ways. First it travels, but only checks for collisions with vertical lines. Then, it tries again, checking for horizontal lines. It then acts like nothing happened and only shows us the collision that occurred with the lowest distance. First, we need to find the direction of the ray.
```golang
if ra > 180 && ra != 360 { // Looking up
    ry = c.y - float64(int(c.y)%tileSize)
    rx = (c.y-ry)*aTan + c.x
    yo = -tileSize
    xo = -yo * aTan
} else if ra < 180 && ra != 0 { // Looking down
    ry = c.y - float64(int(c.y)%tileSize) + tileSize
    rx = (c.y-ry)*aTan + c.x
    yo = tileSize
    xo = -yo * aTan
} else {
    rx = c.x
    ry = c.y
    dof = 8
}
```
Since we're checking for horizontal lines first, we need to know if the ray is pointing up or down, and especially whether it's exactly left or right. If it's left or right, it's parallel, so the ray will never find a collision.
In the other cases, we move the ray to the nearest horizontal line. We also create `xo` and `yo`, which are the offsets, which we will use later. `dof` is used later to keep us from looking infinitely (or really far).
```golang
for dof < 8 {
    hit := false
    for i := 0; i < len(tiles); i++ {
        if tiles[i].check_hit(int(rx), int(ry)) {
            dof = 8
            hit = true
            hx = rx
            hy = ry
            disH = math.Sqrt(math.Pow(hx-c.x, 2) + math.Pow(hy-c.y, 2))
            ht = tiles[i]
            break
        }
    }
    if hit {
        break
    }

    rx += xo
    ry += yo
    dof++
}
```
Now, we need to keep moving this ray until it hits a horizontal line. We do this by first checking whether we've already hit something. If we have, then we exit the loop. We also keep track of the ray's position, how far we have traveled, and which tile we hit. If we didn't make contact, we need to keep travelling, and this is where we simply add `xo` and `yo`. These offsets move us to the next horizontal line using an addition, which is a relatively fast calculation. We do the same thing for the vertical lines (I'll skip it in this post, but it's still on GitHub).
```golang
if disV < disH {
    disT = disV
    tt = vt
} else {
    disT = disH
    tt = ht
}
```
This is how we check which ray hit first, and we only use the tile and distance from that ray, which we return.
## The Result

That looks 3D right?
It looks good enough for now, I'm sick of coding 3D graphics, so I'm going to leave it at that and work on something else.
## Next
Now that we have a somewhat working graphics engine, we can start working on literally everything else. I'm going to try to work on the storyline of the game along with minor code changes. Hopefully we can get most of the story down, and start to create art and sound for each section of the storyline. | chigbeef_77 |
1,749,475 | Monkey Business | A post by Alex Roor | 0 | 2024-02-02T10:35:19 | https://dev.to/alexroor4/monkey-business-4gdp | webdev, javascript, beginners, jokes |
 | alexroor4 |
1,749,495 | Prototypal Inheritance in JavaScript | JavaScript is a prototype-based language, which means objects can serve as prototypes for other... | 0 | 2024-02-02T10:59:35 | https://dev.to/cyberohn/prototypal-inheritance-in-javascript-58kd | javascript, webdev, typescript, frontend | JavaScript is a prototype-based language, which means objects can serve as prototypes for other objects. Each object in JavaScript has a prototype, and it inherits properties and methods from that prototype.
## Object Prototype
Every object in JavaScript is linked to a prototype object. When you create an object, JavaScript automatically attaches a special property called `__proto__` to it, pointing to its prototype.
```javascript
const myObject = {};
console.log(myObject.__proto__); // Outputs the prototype of myObject
```
## Prototype Chain
Objects in JavaScript form a chain, known as the prototype chain. If a property or method is not found on an object, JavaScript looks up the prototype chain until it finds the property or reaches the end of the chain (usually the Object prototype).
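A short illustrative example of a lookup walking the chain:

```javascript
const grandparent = {
  greet() {
    return "hello from grandparent";
  }
};
const parent = Object.create(grandparent);
const child = Object.create(parent);

// greet isn't on child or parent, so the lookup
// climbs the chain until it reaches grandparent
console.log(child.greet()); // "hello from grandparent"

// The chain itself is inspectable
console.log(Object.getPrototypeOf(child) === parent);       // true
console.log(Object.getPrototypeOf(parent) === grandparent); // true
```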
## Creating Objects with Prototypes
```javascript
// Creating a prototype object
const animalPrototype = {
  speak: function() {
    console.log("Some generic sound");
  }
};

// Creating an object using the prototype
const dog = Object.create(animalPrototype);
dog.speak(); // Outputs "Some generic sound"
```
In this example, dog is created with animalPrototype as its prototype. It inherits the speak method from animalPrototype.
**Constructor Functions:**
```javascript
// Constructor function
function Animal() {}

// Adding a method to the prototype of the constructor
Animal.prototype.speak = function() {
  console.log("Some generic sound");
};

// Creating an instance
const cat = new Animal();
cat.speak(); // Outputs "Some generic sound"
```
Here, `Animal` is a constructor function, and instances created with `new Animal()` inherit from `Animal.prototype`.
Prototypal inheritance is a fundamental concept in JavaScript that allows objects to inherit properties and methods from other objects. Here's a breakdown:
* Prototype: Think of a prototype as a blueprint for creating objects. Every object has a prototype, and it serves as a template for the object's properties and methods.
* Prototype Chain: Objects are linked together in a chain, forming the prototype chain. If a property or method is not found on an object, JavaScript looks up the chain until it finds the property or reaches the end.
* Creating Objects with Prototypes: You can create objects directly using `Object.create(prototype)` or use constructor functions. Objects created with the same prototype share common functionality.
* Constructor Functions: Constructor functions are a way to create objects with a shared prototype. They serve as a template for creating multiple instances with similar properties and methods. | cyberohn |
1,749,504 | Navigating Through Turbulence: My Experience with Etihad Airways Flight Change Policy | Unexpected situations can derail even the most painstakingly scheduled flights. Such was the case for... | 0 | 2024-02-02T11:09:33 | https://dev.to/james7088/navigating-through-turbulence-my-experience-with-etihad-airways-flight-change-policy-4mgn | travel, airlines | Unexpected situations can derail even the most painstakingly scheduled flights. Such was the case for me when an emergency forced me to change my journey with [Etihad Airways](https://en.wikipedia.org/wiki/Etihad_Airways). In this essay, I'll discuss my firsthand experience with Etihad Airways flight change policy, putting light on the process, customer service, and general effectiveness of their approach to dealing with unexpected occurrences.
## The Unplanned Twist
As any experienced traveler knows, life can present unanticipated problems. In my case, a family emergency arose, prompting a swift change in my travel plans. Faced with the urgency of the matter, I turned to Etihad Airways, hoping that their flight change policy would solve my problem.
## Initiating the Change
The first step in the process was to contact Etihad customer service. Guided by the [Etihad change flight policy](https://airlinesupdates.com/blog/etihad-flight-change-policy/), I went through their website to obtain the necessary contact details. The Etihad Airways website is easy to use, with a separate section for managing tickets, including flight modifications.
I dialed the customer service number, half-expecting a lengthy wait time and automated responses. To my surprise, a friendly and efficient representative answered promptly. After explaining my situation, the agent guided me through the process of changing my flight. Etihad change flight policy proved to be an essential tool in steering the conversation towards a swift resolution.
## Flexibility and Options
One aspect of Etihad's flight change policy that stood out was its flexibility in accommodating unexpected events. The representative offered me various options, including alternative flight schedules, dates, and even the possibility of a credit for future travel. This level of flexibility was a welcome relief during a stressful time.
It's worth noting that the availability of alternative flights was contingent on factors such as seat availability and fare conditions. Nevertheless, the agent took the time to explore multiple scenarios, providing me with a range of choices to fit my needs.
## Transparent Fee Structure
Flight changes often come with associated fees, and as a traveler, it's crucial to understand the financial implications. Etihad Airways maintains a transparent fee structure, with clear information on applicable charges for flight changes. The Etihad change flight policy led me to the relevant terms and conditions, helping me make an informed decision.
While the fees were not entirely avoidable, the agent explained the rationale behind them and ensured that I was aware of the costs involved. The transparency in communication instilled confidence in the process, demonstrating Etihad's commitment to honesty and openness.
## Efficient Processing
Once I confirmed my preferred option, the agent swiftly processed the flight change. The entire procedure, from the initial call to the completion of the transaction, took less time than I had anticipated. Etihad's emphasis on efficiency was evident throughout the process, making the experience seamless and stress-free.
## Communication and Updates
Effective communication is paramount, especially during critical moments such as flight changes. Etihad Airways demonstrated commendable communication practices by providing timely updates on the status of my new booking. I received confirmation emails, updated itineraries, and relevant information to ensure a smooth transition to my revised travel plans.
## Conclusion
In the face of unexpected challenges, Etihad Airways' flight change policy proved to be a reliable and customer-centric solution. From the flexibility in options to transparent fee structures and efficient processing, my experience highlighted the airline's commitment to passenger satisfaction. Etihad Change Flight served as a valuable guide, leading me to the necessary information and facilitating a prompt resolution to my travel predicament.
While no one anticipates disruptions to their travel plans, knowing that an airline is equipped to handle such situations with grace and efficiency provides peace of mind for any traveler. Etihad Airways, through its well-crafted flight change policy and attentive customer service, has undoubtedly earned my confidence and appreciation.
Thanks for visiting: [Dev.to](https://dev.to/)
| james7088 |
1,749,519 | The Power of Report Parts in Data-Driven Reports | Have you ever worked on a report for hours, only to have to build the same report all over again next... | 25,056 | 2024-02-13T07:36:51 | https://www.boldreports.com/blog/report-parts-in-data-driven-reports | reportserver, reportviewer, reportdesigner, syncfusionreport | Have you ever worked on a report for hours, only to have to build the same report all over again next month or next quarter? Say goodbye to time-consuming report building and hello to report parts, your new best friend in the world of reporting.
In this blog, we'll explore the magic of report parts, their advantages, and how to use them to make reporting easier.
Are you ready to transform your reports? Let's go!
**What are report parts?**
[Report parts](https://help.boldreports.com/enterprise-reporting/designer-guide/report-designer/report-parts/?utm_source=dev&utm_dev=partner_blog&utm_campaign=dev_reportpartsdatadrivenreports_cy24) are reusable components that can be used to compose complex reports with ease. These parts can include tables, charts, graphs, text elements, and more. By standardizing report parts, businesses ensure consistency in data presentation, which is essential for accurate analysis and comparison over time. The use of report parts also simplifies report maintenance and updates.
## Report parts in Bold Reports
[Bold Reports](https://www.boldreports.com/?utm_source=dev&utm_dev=partner_blog&utm_campaign=dev_reportpartsdatadrivenreports_cy24) allows users to create and publish report parts for reuse across any report. Users design and customize report parts in the Report Designer tool and then insert them in reports as needed. Components that can be used as report parts include:
• **Tables**: Present data in a structured format with rows and columns. They can display data from various databases.
• **Charts**: Visualize data in a graphical format, such as bars, areas, lines, and other chart types. Users can configure and format the chart to present data effectively.
• **Maps**: Display geographical data or analytical data on a geographical background. They can be used to create basic maps, color maps, or marker maps.
• **Indicators**: Concise representations of critical insights, such as key performance indicators.
## Streamlining report creation with report parts
The following figure shows a table named Product_Details_Table that was created to display product details like product name, product ID, quantity, unit price, and total price.
Product Details Table Report Part
Let’s say we need to include this table in several reports. Our time is valuable, and creating this table from scratch for each report is both time-consuming and tedious. To streamline this process, we can publish the table as a report part and then reuse it anytime we want.
The steps to publish this table as a report part in Bold Reports are as follows:
**Step 1**: **Click Publish Report Part** in the dropdown menu at the top-right corner of the Report Designer page.
 Publish Report Part Option
This will open the Publish Report Part dialog.
Publish Report Part
The report items on the designer surface will be listed in the Report Part dropdown.
Report Part List
**Step 2:** Select the **Product_Details_Table** report item from the **Report Part** dropdown, name the report part "Product_Details_Table" in the **Name** field, add a description if needed, and click **Publish**.
Product Details Table Report Part Details
**Note:** Only the data source, dataset, and query parameters associated with the widget will be published along with the report part. Dataset bindings within the scope of any expressions, as well as report parameters, will not be published with the report part.
## Reusing report parts
Reusing report parts in Bold Reports significantly streamlines report creation, especially for businesses that frequently generate reports with similar structures or content. This approach saves time and ensures consistency across different reports, which is essential for maintaining a professional image and facilitating better data comprehension.
## The advantages of reusing report parts
• **Improved collaboration**: Teams can share and collaborate on report parts, fostering a more cohesive work environment. This collaboration can lead to better report designs and more insightful data representations as best practices are shared among team members.
• **Customization and flexibility**: Reusable report parts can be easily customized to fit the specific needs of different departments or projects without altering the entire report, offering greater flexibility in reporting.
• **Reduced errors:** The likelihood of errors decreases with a library of prevalidated report parts. Reusing these components ensures that the data representation is accurate and reliable, which is crucial for making informed business decisions.
• **Enhanced consistency:** Reusing report elements ensures a uniform appearance and structure across multiple reports. This consistency reinforces brand identity and aids in creating a standardized reporting framework that stakeholders can easily recognize and understand.
• **Increased efficiency**: By reusing report parts, organizations can significantly reduce the time and effort required to create new reports. This streamlines the report development cycle, allowing businesses to focus on analysis rather than repetitive design tasks.
**Conclusion**
As the data landscape continues to evolve, so must the tools and techniques we use to interpret and communicate information. Report parts represent a significant step forward in the way we think about and construct data-driven reports, ensuring that businesses can keep pace with the speed of data and maintain a competitive edge.
Visit our [website](https://www.boldreports.com/?utm_source=dev&utm_dev=partner_blog&utm_campaign=dev_reportpartsdatadrivenreports_cy24) to learn more about Bold Reports' embedded reporting tools and [solutions](https://solutions.boldreports.com/#/report-viewer/sales/northwind-products-and-suppliers).
If you have any questions, please post them in the comment section below. You can also contact us through our [contact page](https://www.boldreports.com/contact?utm_source=dev&utm_dev=partner_blog&utm_campaign=dev_reportpartsdatadrivenreports_cy24), or if you already have an account, you can [log in](https://login.boldid.net/accounts/login/?utm_source=dev&utm_dev=partner_blog&utm_campaign=dev_reportpartsdatadrivenreports_cy24) to ask your question. Bold Reports offers a [15-day free trial](https://www.boldreports.com/pricing/?utm_source=dev&utm_dev=partner_blog&utm_campaign=dev_reportpartsdatadrivenreports_cy24) without any credit card information required. We welcome you to start a free trial and experience Bold Reports. Let us know what you think!
Stay tuned to our official X, Facebook, and LinkedIn pages for announcements about upcoming releases.
| boldreportsinfo |
1,749,576 | Breaking the Myths: Common Misconceptions about Online Gambling | As the world of online gambling continues to thrive, so do the myths and misconceptions surrounding... | 0 | 2024-02-02T12:30:34 | https://dev.to/johnbudd345/breaking-the-myths-common-misconceptions-about-online-gambling-1i5c | As the world of online gambling continues to thrive, so do the myths and misconceptions surrounding this form of entertainment. This article aims to debunk common misunderstandings about online gambling, providing clarity on what players can expect in the digital gaming realm.
**1. Myth: Online Gambling is Rigged**
One prevalent misconception is that online casinos are rigged to favor the house. In reality, reputable online casinos utilize Random Number Generators (RNGs) to ensure the fairness and unpredictability of game outcomes. Licensed and regulated platforms adhere to strict standards to guarantee fair play and build trust among players.
**2. Myth: Online Gambling is Not Secure**
Concerns about the security of personal and financial information often deter potential online gamblers. However, reputable online casinos employ advanced encryption technologies to safeguard sensitive data. Licensed platforms prioritize player security and privacy, implementing robust measures to protect against cyber threats. Click here now: ''**[50 daftar slot via dana tanpa rekening](https://tvgohome.com/)**''.
**3. Myth: Winning is Impossible**
Some believe that online casinos are designed for players to lose consistently. In truth, online gambling operates on the same principles as land-based casinos, with games featuring established odds. While winning is never guaranteed, players can employ strategies, manage their bankrolls wisely, and enjoy occasional wins with luck on their side.
**4. Myth: Addiction is Inevitable**
There is a common misconception that engaging in online gambling inevitably leads to addiction. Responsible gambling practices, such as setting limits, managing time effectively, and recognizing warning signs, play a crucial role in preventing addiction. Online casinos also provide tools like self-exclusion and reality checks to support responsible gaming.
**5. Myth: Online Gambling is Illegal Everywhere**
Another misconception is that online gambling is illegal globally. In reality, the legal status varies from one jurisdiction to another. Many countries have established regulations and licensing frameworks to govern online gambling, providing a legal and secure environment for players. It's essential to be aware of local laws and choose reputable, licensed online casinos.
**6. Myth: Online Gambling is a Quick Money-Making Scheme**
The belief that online gambling is a shortcut to financial success is a dangerous myth. While some players experience substantial wins, gambling should be approached as a form of entertainment rather than a reliable income source. Responsible gaming involves setting realistic expectations and understanding that losses are part of the experience.
**7. Myth: Online Gambling is Exclusively Luck-Based**
Contrary to popular belief, successful online gambling involves a combination of luck and skill. Games like poker, blackjack, and sports betting require strategy and decision-making. While luck plays a role, players who invest time in learning the rules and refining their skills can improve their chances of success.
In conclusion, dispelling these common myths about online gambling is crucial for fostering a more informed and responsible gaming community. By understanding the principles of fair play, prioritizing security, practicing responsible gaming, and recognizing the role of both luck and skill, players can navigate the online gambling landscape with confidence and enjoy the entertainment it offers. | johnbudd345 | |
1,749,687 | This Week In React #172: Next.js, PPR, Remotion, State of React Native, Parcel, Panda, Remix, Skia, Storybook, Tamagui... | Hi everyone! @ThisWeekInReact is our brand-new X account, hit subscribe! The promise is to share and... | 18,494 | 2024-02-02T14:10:22 | https://thisweekinreact.com/newsletter/172 | react, reactnative | ---
series: This Week In React
canonical_url: https://thisweekinreact.com/newsletter/172
---
Hi everyone!
**[@ThisWeekInReact](https://twitter.com/ThisWeekInReact)** is our brand-new X account, hit subscribe! The promise is to share and retweet interesting things happening in the React community in near-real-time, and to be as **high-signal/low-noise** as possible.
This week, React Server Components have once again been the subject of various articles and discussions. The community has mixed feelings about Next.js App Router. Even if people embrace the vision, they seem a little disappointed by its current implementation. Let's hope it's only temporary 🤞.
React Native has been particularly interesting this week. The [State of React Native](https://results.stateofreactnative.com/) survey results have been published, and the general feeling towards React Native is increasingly positive. There are also interesting bits related to Skia, Reanimated, or the bridgeless mode. The community is excited about Expo becoming a full-stack universal framework.
Happy reading!
---
💡 Subscribe to the [official newsletter](https://thisweekinreact.com?utm_source=dev_crosspost) to receive an email every week!
[](https://thisweekinreact.com?utm_source=dev_crosspost)
---
## 💸 Sponsor
[](https://axiom.co/vercel)
**[Axiom - the best logging platform for Vercel apps](https://axiom.co/vercel)**
😴 Sleep peacefully knowing that Axiom’s zero-config observability for Vercel projects has you covered.
- Use Axiom's pre-built dashboard for an overview across all your Vercel logs and vitals, drill down to specific projects and deployments, and get insight on how functions are performing with a single click.
- [next-axiom](https://github.com/axiomhq/next-axiom) allows you to send logs and events from any part of your Next.js projects - client, edge, or server-side - without any special configuration.
- Easily send structured logs directly from your code and analyze them together with Vercel logs.
Axiom efficiently captures 100% of your event data so you’ll never have to worry about sampling or retention, and you’ll never have to guess what your users are experiencing.
**PS**: [I use it myself to monitor the newsletter signups](https://twitter.com/sebastienlorber/status/1528666563452317696) 😉
---
## ⚛️ React
[](https://www.youtube.com/watch?v=MTcPrTIBkpA)
🎥 [**Next.js Explained - Partial Prerendering**](https://www.youtube.com/watch?v=MTcPrTIBkpA)
It’s not often that a video makes the headlines here, but this one is different for a few reasons.
First, the topic is super interesting and well-explained with many animated visualizations. Partial pre-rendering is a [Next.js 14 experimental feature](https://nextjs.org/blog/next-14#partial-prerendering-preview) that completes the Next.js vision and offers the best of both worlds between static and dynamic rendering. This makes Next.js a multi-paradigm framework that supports the full range of rendering strategies. This new feature is simple to adopt, builds on top of Suspense, and doesn’t introduce new APIs: you just need to turn a flag on.
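As a rough sketch, the flag mentioned above is enabled in `next.config.js` (option name per the Next.js 14 experimental docs; being experimental, the exact shape may change across releases):

```javascript
// next.config.js – opting into experimental Partial Prerendering
/** @type {import('next').NextConfig} */
const nextConfig = {
  experimental: {
    // Enables Partial Prerendering; experimental and subject to change
    ppr: true,
  },
};

module.exports = nextConfig;
```

From there, any route that wraps its dynamic parts in Suspense boundaries gets a static shell prerendered at build time, with the dynamic holes streamed in at request time.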
Second, this incredibly well-produced video has been created with [Remotion](https://www.remotion.dev/), using React web code 🙉. Good job Delba, you nailed it! We can’t wait to see more videos like this and to learn more about your Remotion creative process. On a related note, the Remotion team just shared theirs in a [behind-the-scenes look at GitHub Unwrapped 2023](https://www.youtube.com/watch?v=WW8vIZ1kwR0).
---
- 💸 [Accelerate your workflow with MightyMeld, a visual dev tool for React codebases. Drag, drop, click, and prompt your UI into place while you hand-code the rest of it.](https://www.mightymeld.com/?utm_source=thisweekinreact5&utm_medium=newsletter&utm_campaign=1&utm_content=top)
- 👀 [React core PR - react-server-dom-parcel package](https://github.com/facebook/react/pull/28138): Devon from Parcel working on a new bundler integration.
- 👀 [Remix - support for Cloudflare through the Remix Vite plugin](https://github.com/remix-run/remix/tree/dev/templates/unstable-vite-cloudflare)
- 🐦 [Partykit - Real-time React Server Component sneak-peek](https://twitter.com/threepointone/status/1751963277394116993)
- 📜 [My first impressions of Panda CSS](https://newsletter.baptiste.devessier.fr/archive/my-first-impressions-of-panda-css/): Baptiste likes Tailwind but now prefers Panda for multiple reasons, including type-safety, the ability to group styles and make them more readable, and its out-of-the-box support for styles merging and variants.
- 📜 [Fix Next.js Routing to Have Full Type-Safety](https://www.flightcontrol.dev/blog/fix-nextjs-routing-to-have-full-type-safety): the Next.js typedRoute experimental feature is not good enough. We can improve routing type-safety (route params, search params, linking…) thanks to a route builder API.
- 📜 [Migrating to Next.js App Router: the good, bad, and ugly](https://www.flightcontrol.dev/blog/nextjs-app-router-migration-the-good-bad-and-ugly): Flightcontrol rebuilt from scratch their dashboard with Next.js App Router and share feedback. There are positive experiences, but overall their team would have chosen Remix instead, and they’d prefer to use another language than using the Next.js dev server again 😅 (note they are stuck on Next.js 13.5 and performance improved lately).
- 📜 [How To Use forwardRef With Generic Components](https://www.totaltypescript.com/forwardref-with-generic-components): Matt wraps forwardRef to redefine its types and fix type inference problems.
- 📜 [Updating public Next.js environment variables without rebuilds](https://phase.dev/blog/nextjs-public-runtime-variables/): the NEXT_PUBLIC env variables are inlined in the client bundles at build time. The post proposes using a find-and-replace script to update them without rebuilding.
- 📜 [How to start a React Project in 2024](https://www.robinwieruch.de/react-starter/): Robin discusses the tradeoffs of starting a React project with Vite, Next.js or Astro.
- 📜 [Including static files in App Router RSCs](https://www.hipstersmoothie.com/blog/posts/including-static-files-with-rscs): Shows usage of advanced Webpack features such as require.context and inline loaders.
- 📜 [Take a Qwik Break from React with Astro](https://thenewstack.io/take-a-qwik-break-from-react-with-astro/): Paul compares Qwik and React over various examples. “Write code like React, Browser tastes of Vanilla”.
- 📜 [Where do React Server Components fit in the history of web development?](https://dev.to/matfrana/where-do-react-server-components-fit-in-the-history-of-web-development-1l0f)
- 📦 [Remix-Client-Cache - using clientLoader to cache server loader data](https://github.com/Code-Forge-Net/remix-client-cache)
- 📦 [StyleX 0.5 - new stylex.attrs function, ESLint sort-keys rule, Babel aliases option, esbuild plugin](https://stylexjs.com/blog/v0.5.0)
- 📦 [React Cosmos 6 - Sandbox for developing and testing UI components in isolation - RSC support, plugin system, integrations (Vite,Webpack, Next.js, RN), Lazy Mode, MDX fixtures…](https://github.com/react-cosmos/react-cosmos/releases/tag/v6.0.0)
- 📦 [Panda CSS 0.29 - config validation, default values in patterns, media-query tokens, color opacifier…](https://github.com/chakra-ui/panda/discussions/2110)
- 📦 [React-Live-Chat-Loader 2.9 - Performant live chat integration - Add support for HubSpot/Front](https://github.com/calibreapp/react-live-chat-loader/)
- 📦 [fumadocs - powerful framework for building documentation sites in Next.js](https://github.com/fuma-nama/fumadocs)
- 📦 [React Bricks 4.2 - RSC support](https://docs.reactbricks.com/docs/rsc/visual-components/)
- 📦 [Redux Toolkit 2.1](https://github.com/reduxjs/redux-toolkit/releases/tag/v2.1.0)
- 🎥 [Jack Herrington - Are RSCs and NextJS Really That Bad?](https://www.youtube.com/watch?v=u0OMdWJfdhg): compares side-by-side 3 cases where App Router has a much better DX than Page Router, and highlights how easy it is to encapsulate and distribute RSC logic inside a package.
- 🎥 [Nadia Makarevich - Advanced React - All about memoization in React](https://www.youtube.com/watch?v=huBxeruVnAM)
---
## 💸 Sponsor
[](https://clerk.com/?utm_source=sponsorship&utm_medium=newsletter&utm_campaign=thisweekinreact&utm_term=01-31-24)
**[Complete User Management for React](https://clerk.com/?utm_source=sponsorship&utm_medium=newsletter&utm_campaign=thisweekinreact&utm_term=01-31-24)**
Clerk streamlines React app authentication and user management, ensuring a quick setup for the modern web.
Experience the benefits of Clerk:
- 💅 Pre-built UI components for sign-in, sign-up, user profiles, and organizations. Customize with any CSS library and deploy on your domain
- 📦 SDKs for React, React Native, Next.js, Redwood, Remix, and other frameworks
- ⚡ Integrations with Firebase, Supabase, Convex, and other BaaS providers
- 🎁 User management, social login, magic links, MFA, and more out of the box
Dive into Clerk's [quickstarts and tutorials](https://clerk.com/docs/quickstarts/overview?utm_source=sponsorship&utm_medium=newsletter&utm_campaign=thisweekinreact&utm_term=01-31-24) to kickstart your project🚀
---
## 📱 React-Native
This section is now co-authored with [Benedikt](https://twitter.com/bndkt). Feel free to send us your comments by email or on [Twitter](https://twitter.com/bndkt)!
[](https://results.stateofreactnative.com/)
**[State of React Native 2023 - Survey Results](https://results.stateofreactnative.com/)**
I’m a big fan of all the “State of …” surveys, because they give us a unique insight into what’s going on in the ecosystem. It’s one thing to follow the hype generated on X and elsewhere, but another thing to see stats of what developers actually use. So I’m really happy to see last year’s State of React Native results published! And as this is now the second year this survey is conducted, we can also identify some trends by comparing the results to the year before. For all the details, [have a look at the results yourself](https://results.stateofreactnative.com/)! I’ll just highlight a few things that I found interesting, some of them really surprised me:
- State management continues to be in a very interesting state (pun intended), with Redux taking the last place in retention with only 44 % (down from 57 % last year), but still clinging to second place after React’s native capabilities in terms of usage with 78 % (down from 85 %).
- Data fetching is a category where the results are in line with the amount of attention that can be observed on social media, with TanStack Query and tRPC taking top places in terms of interest and retention, and Meta’s own Relay staying mostly irrelevant outside of Meta with only 5 % usage.
- The future of navigation is Expo Router, which exploded from 12 % usage last year to 40 % today.
- Gauging by social media activity, styling seems to be the most talked about category with an abundance of options. In terms of usage, the classics (StyleSheet API, inline styling, styled components) remain the dominant options, with NativeWind and Tamagui catching up. Tamagui managed to generate the most interest in the past year (and also has the most retention and interest as a UI component library), closely followed by a new contender: react-native-unistyles.
- Special shoutout to Expo: In the “React Native tools” section, 6 out of the top 10 tools as well as the top 2 options in the deployment category are made by Expo.
- Over 85 % of respondents think that React Native is moving in the right direction and the share of developers that find that building RN apps is overly complex right now decreased from about 30 % to 25 % - that’s still too much, but we seem to be moving in the right direction!
Overall, really interesting results that are worth [checking out](https://results.stateofreactnative.com/) in detail! Many thanks to Bartłomiej, Kacper and Software Mansion for making this happen!
---
- 💸 [Moropo - Stop Losing Days to Detox and Build Meaningful Tests in Minutes](https://www.moropo.com?utm_source=newsletter&utm_medium=emails&utm_campaign=twir-20240131)
- 👀 [React Native 0.74 PR - Make bridgeless mode the default when the New Architecture is enabled](https://github.com/facebook/react-native/pull/42714)
- 👀 [Reanimated PR - Generators support](https://github.com/software-mansion/react-native-reanimated/pull/5565): William Candillon feels productive writing animations with generators, and contributed this new feature to Reanimated (🐦 [demo](https://twitter.com/wcandillon/status/1751893521907261773))
- 👀 [Flashlight React Native performance dashboard](https://app.flashlight.dev/projects/react-native): this uses Flashlight to measure performance all React Native versions over time. For now there’s only one FlatList-based scenario.
- 🐦 [React Native visionOS - New 𝚂𝚙𝚊𝚝𝚒𝚊𝚕 API coming soon](https://x.com/o_kwasniewski/status/1750532404245373434?s=20)
- 📖 [Expo Docs - Using Sentry](https://docs.expo.dev/guides/using-sentry/): Using Sentry with Expo got way easier in SDK 50: Now only needs one dependency (@sentry/react-native), and doesn’t require you to configure hooks in app.json.
- 📖 [Skia Docs - new Atlas component - efficiently draw a very large number of similar textures/images](https://shopify.github.io/react-native-skia/docs/shapes/atlas/): Here’s a great 🐦 [demo](https://twitter.com/alihdjr/status/1751175731533488464) to see it in action.
- 🧑🎓 [Build and Deploy React Native Apps with Expo EAS - Free Egghead course from Expo’s Kadi Kraman](https://egghead.io/courses/build-and-deploy-react-native-apps-with-expo-eas-85ab521e)
- 📜 [John Gruber - Apple’s Plans for the DMA in the European Union](https://daringfireball.net/2024/01/apples_plans_for_the_dma): Apple is introducing a host of changes to the App Store in response to demands by the EU, in a way that has by many press outlets been described as “malicious compliance.” (also see: 📖 [Apple Developer Docs](https://developer.apple.com/support/alternative-browser-engines/))
- 📜 [Recommended practices for React Native Testing Library in 2024](https://mdj.hashnode.dev/recommended-practices-for-react-native-testing-library-in-2024): RNTL author Maciej shared tips for people that might know the library but maybe haven’t kept up with the latest changes and improvements (like screen API and semantic queries).
- 📜 [Communicating with React Native Web Views](https://making.close.com/posts/react-native-webviews): Using web views can be a good way to reuse parts of a web app, but those parts can also feel disconnected from the native part - this article highlights how to avoid this by establishing communication between native and web.
- 📜 [Building a PhotoRoom-like background remover app with React Native and Skia](https://medium.com/@ludwighenne/building-a-photoroom-like-background-remover-app-with-react-native-and-skia-%EF%B8%8F-540743c6ec0f)
- 📦 [React Native Storybook 7](https://github.com/storybookjs/react-native/discussions/545): After some time, RN Storybook has finally caught up to the major version of the “parent project.”
- 📦 [React Native 0.73.3 - Bug fixes](https://github.com/facebook/react-native/releases/tag/v0.73.3)
- 📦 [vision-camera-resize-plugin 2.0: Full rewrite to C++, much faster performance, and customizable cropping logic](https://github.com/mrousavy/vision-camera-resize-plugin/releases/tag/v2.0.0)
- 📦 [Tamagui 1.88 - Biggest Tamagui release, brings Expo SDK 50 support, simplifies setup via new Metro plugin](https://github.com/tamagui/tamagui/releases/tag/v1.88.0)
- 📦 [Create Expo Stack 2.4 - Starter with Expo SDK 50, Expo Router v3, NativeWind v4, Shopify’s Restyle, and Unistyles](https://github.com/danstepanov/create-expo-stack/releases/tag/create-expo-stack%402.4.0)
- 🎙️ [RNR 286 - What’s new in React Native 0.73?](https://x.com/ReactNativeRdio/status/1751015494591918434?s=20)
- 🎙️ [Rocket Ship 27 - React Native’s Superpower with Theo Browne](https://share.transistor.fm/s/5b8f8a1a)
- 🎥 [Simon Grimm - Can this replace NextJS?](https://www.youtube.com/watch?v=gzehQxHxtfw): Expo Router becoming a universal full-stack React framework makes it a direct competitor to Next.js.
- 🎥 [Theo Browne - Mobile Devs Hate Servers. Expo Wants To Fix That.](https://www.youtube.com/watch?v=2P0q1EdH_oQ)
---
## 🔀 Other
- 📜 [Some use cases for CSS revert-layer - better isolation/encapsulation than shadow DOM](https://www.mayank.co/blog/revert-layer/)
- 📜 [You Probably Don't Need eslint-config-prettier or eslint-plugin-prettier](https://www.joshuakgoldberg.com/blog/you-probably-dont-need-eslint-config-prettier-or-eslint-plugin-prettier/)
- 📜 [The Web Component Success Story](https://jakelazaroff.com/words/the-web-component-success-story/)
- 📜 [12 Modern CSS One-Line Upgrades](https://moderncss.dev/12-modern-css-one-line-upgrades/)
- 📜 [Zed is now open source - Free modern Rust-based IDE](https://zed.dev/blog/zed-is-now-open-source)
- 📦 [TypeScript 5.4 Beta - `NoInfer<T>`, closure type narrowing, groupBy…](https://devblogs.microsoft.com/typescript/announcing-typescript-5-4-beta/)
- 📦 [Safari 17.4 beta - groupBy, withResolvers, @scope, ArrayBuffer, Intl improvements](https://developer.apple.com/documentation/safari-release-notes/safari-17_4-release-notes)
- 📦 [Qwik 1.4 - JSX types changes, automatic Link prefetching, MPA navigation fallback](https://github.com/BuilderIO/qwik/releases/tag/v1.4.0)
- 📦 [Lucia Auth 3.0 - greatly simplified lib, Oauth logic extrated to new “arctic” package](https://github.com/lucia-auth/lucia/discussions/1361)
- 📦 [Shiki 1.0 beta.0 - Modern ESM syntax highlighter - merges back Shikiji into Shiki](https://github.com/shikijs/shiki/releases/tag/v1.0.0-beta.0)
- 📦 [Adonis v6 - ESM, type-safety imrovements…](https://adonisjs.com/blog/adonisjs-v6-announcement)
- 📦 [Deno 1.40 - Temporal API, meta.filename/dirname, decorators](https://deno.com/blog/v1.40)
---
## 🤭 Fun
[](https://twitter.com/dabit3/status/1752143163546911036)
See ya! 👋 | sebastienlorber |
1,749,714 | Modules Status Update | Happy Friday Happy Friday! Wishing you a day filled with laughter, positivity, and a... | 0 | 2024-02-08T15:28:47 | https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-02-02-modules-status-update/ | community, puppet | ---
title: Modules Status Update
published: true
date: 2024-02-02 00:00:00 UTC
tags: community, puppet
canonical_url: https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-02-02-modules-status-update/
---
## Happy Friday
Happy Friday! Wishing you a day filled with laughter, positivity, and a sprinkle of excitement. Reflecting on the past week, it was a period of steady dedication to ongoing projects, with no major updates or notable events in our modules. Behind the scenes, our team is working on some fantastic projects that we can’t wait to share with you. Anticipate greatness and stay tuned for the big reveal! 🎉 #FridayFeeling #ExcitingNewsAhead
## Community Contributions
We want to express our gratitude to the community contributors for their valuable contributions, even though last week saw a relatively quiet period in terms of new contributions. | puppetdevx |
1,749,733 | Unveiling the Secrets of Comprehensive Home Inspections | Buying a home is a significant investment, and ensuring that it's a sound one involves more than just... | 0 | 2024-02-02T14:48:22 | https://dev.to/ainspectionsinc/unveiling-the-secrets-of-comprehensive-home-inspections-1fc2 | Buying a home is a significant investment, and ensuring that it's a sound one involves more than just a cursory glance. Home inspections play a pivotal role in unveiling the secrets a property may hold. In the picturesque Transylvania County, NC, where charming homes are nestled amid the natural beauty of the Blue Ridge Mountains, a thorough home inspection becomes essential.
Understanding the Scope of Home Inspections:
A home inspection is not merely about ticking boxes; it's a comprehensive examination of a property's structural and mechanical components. Trained inspectors meticulously assess the condition of the foundation, roofing, plumbing, electrical systems, and more.
Foundation Inspection:
The foundation is the backbone of any home. Home inspectors in Transylvania, NC, delve into the foundation's integrity, checking for cracks, settling, or other signs of potential issues. Early detection of foundation problems can prevent costly repairs down the road.
Roofing Assessment:
Transylvania's climate, characterized by seasonal variations, places a premium on a durable roof. Inspectors scrutinize the roof's condition, looking for missing shingles, signs of leaks, or any damage that might compromise its protective function. A robust roof ensures a home remains a safe haven in all weather conditions.
Plumbing and Electrical Systems:
From faucets to wiring, every aspect of a home's plumbing and electrical systems undergoes scrutiny. Leaks, faulty wiring, or outdated systems can pose significant risks. A thorough inspection ensures that all elements are up to code, promoting safety and efficiency.
Environmental Considerations:
Transylvania's natural beauty also means a diverse environment. Home inspections extend beyond the structure itself to evaluate factors like mold, radon, and pest infestations. Identifying these issues early allows homeowners to address them promptly, maintaining a healthy living environment.
Structural Elements and Interior Spaces:
Inspectors meticulously examine the structural components of a home, including walls, ceilings, and floors. They assess the integrity of load-bearing elements and scrutinize the condition of interior spaces for signs of water damage, mold, or any structural compromise.
The Importance of Timely Home Inspections:
In the competitive real estate market of Transylvania, NC, where charming homes are sought after, a pre-purchase inspection provides buyers with crucial insights. Armed with detailed information about a property's condition, potential buyers can make informed decisions and negotiate repairs or adjustments as needed.
Conclusion:
Home inspections in Transylvania, NC, are not merely a formality; they are a key step in safeguarding your investment and ensuring the long-term comfort of your home. Whether you are a buyer seeking transparency or a seller aiming to enhance your property's market value, a comprehensive home inspection serves as a vital tool in achieving your goals. The secrets that a home may hold are unveiled through the discerning eyes of trained inspectors, providing peace of mind and confidence in your real estate endeavors in the heart of Transylvania County.
| ainspectionsinc | |
1,749,738 | Clipboard API and Angular | In this article, I will show you how to display an image copied directly into your application using... | 0 | 2024-02-02T15:10:25 | https://dev.to/broggi/clipboard-api-and-angular-g8g | typescript, angular, webdev |
In this article, I will show you how to display an image copied directly into your application using Ctrl+V.
Configuration in the Template
In your template, you will create a div that listens to the (paste) event. This event will be triggered when the user performs a CTRL+V, and it defines an area in your application where users can paste content:
```html
<div style="height: 500px; width: 500px;" (paste)="onPaste($event)" >
<img *ngIf="imageSrc" [src]="imageSrc" alt="Pasted Image">
</div>
```
Processing Clipboard Data
This will provide you with a ClipboardEvent, in which you can find everything that was copied, whether text, an image, a PDF file, etc. Here, I will filter on the file's MIME type to ensure it is indeed an image. However, you can also paste other file types, such as a Word or PDF file, directly into an `<input type="file">` using Ctrl+V.
```typescript
public onPaste(event: ClipboardEvent): void {
const clipboardData = event.clipboardData;
if (clipboardData) {
const items = clipboardData.items;
// Check if there is at least one item and if the first item is an image
if (items.length > 0 && items[0].type.indexOf('image') !== -1) {
// Attempt to retrieve the image file from the clipboard item
const blob = items[0].getAsFile();
if (blob) {
const reader = new FileReader();
reader.onload = (event: ProgressEvent<FileReader>) => {
if (event.target)
// Setting the component's imageSrc property to the base64 data URL of the image
this.imageSrc = event.target.result as string;
};
reader.readAsDataURL(blob);
}
}
}
}
```
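If you reuse this MIME-type check in several places, it can be factored into a small pure helper. This is my own sketch (the name `isImageType` is not from the article):

```typescript
// Hypothetical helper: returns true when a clipboard item's MIME type
// declares an image, e.g. "image/png" or "image/jpeg".
function isImageType(mimeType: string): boolean {
  return mimeType.startsWith('image/');
}
```

Inside `onPaste`, the condition then becomes `items.length > 0 && isImageType(items[0].type)`.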
Remember to declare your imageSrc as public:
```typescript
public imageSrc = "";
```
Global Listening on the Document
You also have the option to set up global listening on the entire document, not just on a div, in this manner:
```typescript
constructor(@Inject(DOCUMENT) private document: Document) { }
ngOnInit() {
this.document.addEventListener('paste', this.onPaste.bind(this));
}
```
This way, no matter where the focus is, you can retrieve the file.
| broggi |
1,749,747 | Deploying a Next.js Application to Cloud Run | At ESENS, exploring different technologies is at the heart of our projects, whether for... | 0 | 2024-02-02T15:24:47 | https://dev.to/esensconsulting/deployer-une-application-nextjs-sur-cloud-run-dfd | nextjs, cloudrun, javascript, googlecloud | At [ESENS](https://www.esensconsulting.com), exploring different technologies is at the heart of our projects, whether for our clients or as part of internal development. To help our teams grow their skills, we set up the "Pizza Shop" project. This application lets every team member deepen their knowledge of the latest versions of the languages, tools, and frameworks of their choice.
In this article, we share our experience deploying a version of the Pizza Shop, this time built with Next.js, on the Cloud Run service of the Google Cloud platform.
Before being built with Next.js, a first version of the Pizza Shop was also developed with React.js, which lets us share feedback on moving from React.js to Next.js at the end of this article.
Happy reading!
## Next.js Application
Next.js is an open-source JavaScript framework based on React.js that enables the development of modern, high-performance web applications.
Unlike React.js, which is a library dedicated solely to front-end development, Next.js is a so-called "full-stack" framework, because it lets you build both the front-end and the back-end of a web project.
This allows it to offer advanced features such as client-side rendering (CSR), server-side rendering (SSR), static site generation (SSG), and incremental static regeneration (ISR).
Each of your web pages can use any of these features independently of the other pages, so your site can be fully optimized for its needs. Next.js can also be combined with TypeScript, the Tailwind CSS framework, the ESLint code-analysis tool, and ships with essential modules such as an innovative router and the Jest unit-testing solution. All these options are offered when the project is created, for an easy, fast, ready-to-use integration.
The steps described below are based on a Linux Ubuntu environment. For other operating systems, the commands can be adapted by following the linked documentation.
## Creating the Next.js Project
Since Next.js uses Node.js, make sure you have [Node.js](https://docs.npmjs.com/downloading-and-installing-node-js-and-npm) and npm installed on your machine.
Then initialize a Next.js project with the following command:
`npx create-next-app pizza-shop`
You will be asked a few questions to automatically enable certain modules (TypeScript, ESLint, Tailwind, router, etc.).
Once the project is created, move into the generated folder:
`cd pizza-shop`
Note: you can replace "pizza-shop" with the name of your own project.
You can now start the application with:
`npm run dev`
The Next.js application is then available locally at http://localhost:3000.
## Deploying to Cloud Run
At ESENS, we favor deploying to Cloud Run, a serverless Container as a Service (CaaS) offering from Google Cloud Platform.
## Building the Docker Image
The first step is to create a Docker image for the Next.js application. Create a Dockerfile at the root of the project with the following content:
```dockerfile
# Step 1 - install the Node.js dependencies
FROM node:alpine
WORKDIR /app
COPY package*.json ./
RUN npm install --production
# Step 2 - build the Next.js application
COPY . .
RUN npm run build
# Step 3 - start the application
EXPOSE 3000
CMD ["npm", "start"]
```
## Deploying to Cloud Run
To deploy the Next.js application to Cloud Run from your workstation, follow these steps:
1. Create a Google Cloud project.
2. Make sure billing is enabled for your Google Cloud project.
3. Install the Google Cloud SDK on your workstation.
Then initialize the Google Cloud CLI:
`gcloud init`
Set the project you created as the default project for Cloud Run:
```shell
gcloud auth application-default login
gcloud config set project PROJECT_ID
```
Replace PROJECT_ID with your Google Cloud project ID.
Now you can deploy your service directly from the source code with the following command:
`gcloud run deploy pizza-shop --source . --allow-unauthenticated --region europe-west3`
During deployment, accept enabling the APIs required for the service to run properly.
Once the deployment is finished, you will get a public URL where you can access your Next.js application, in our case the Pizza Shop.
[Here](https://cloud.google.com/run/docs/deploying?hl=fr#command-line) you can find more information about deploying to Cloud Run.
## Industrialization
While the method described above works for a simple deployment from your machine, in an industrialized context a CI/CD pipeline is essential.
At ESENS, we set up a pipeline to deploy our projects to Cloud Run. It uses GitHub Actions workflows to enable the required APIs, build the Docker image, push it to Google Container Registry (GCR), and then deploy it to Cloud Run, all while authenticating to GCP with Workload Identity.
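A minimal sketch of such a workflow is shown below. This is not the actual ESENS pipeline: the action versions, secret names, and service name are assumptions, and the `google-github-actions` actions used here build and deploy directly from source rather than pushing to GCR explicitly.

```yaml
name: deploy-cloud-run
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      id-token: write   # required for Workload Identity Federation
    steps:
      - uses: actions/checkout@v4
      # Authenticate to GCP without a long-lived service-account key
      - uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.WIF_PROVIDER }}
          service_account: ${{ secrets.WIF_SERVICE_ACCOUNT }}
      # Build the container image and deploy it to Cloud Run from source
      - uses: google-github-actions/deploy-cloudrun@v2
        with:
          service: pizza-shop
          region: europe-west3
          source: .
```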
## Moving from React.js to Next.js
Moving from React.js to Next.js is a relatively smooth transition for most projects, since Next.js is built on top of React.js and makes developing React applications easier by adding features such as server-side rendering, server-side routing, page prefetching, and more.
However, there can be a few differences and challenges to keep in mind. Here are some points to consider, based on my own experience:
1 - Project structure:
Next.js has a project structure that differs from a classic React.js application. You will need to get familiar with Next.js's pages folder, where each file represents a route.
The structure of the Next.js pages folder is crucial. Here is a simple example with two pages:
```
/pages
/index.js
/about.js
```
The index.js file represents the home page, and about.js represents the "About" page.
In React, you can organize your project in a similar way, but without Next.js's specific structure.
For example:
```
/src
/components
/Home.js
/About.js
```
The Home.js file represents the home page, and About.js represents the "About" page.
2 - Routing:
Next.js handles routing on the server side, which can differ from React.js's client-side routing. Some routing or navigation logic may require adjustments.
In Next.js, routing is driven by the files in the "pages" folder. Here is an example of linking between two pages:
```js
// In index.js
import Link from 'next/link';
function HomePage() {
return (
<div>
<h1>Accueil</h1>
<Link href="/about">
<a>À propos de nous</a>
</Link>
</div>
);
}
```
In React, you can use a routing library such as react-router-dom. Here is an example:
```js
// In Home.js
import React from 'react';
import { Link } from 'react-router-dom';
function Home() {
return (
<div>
<h1>Accueil</h1>
<Link to="/about">À propos de nous</Link>
</div>
);
}
```
3 - Server-side Rendering (SSR):
With Next.js, you have the option of using server-side rendering (SSR). This required adjustments in global state management, because client-side code runs after the initial server-side render.
Using server-side rendering can require adjustments in how you manage global state.
Example with getServerSideProps:
```js
// In a page with SSR (for example, pages/about.js)
function AboutPage({ data }) {
return (
<div>
<h1>À propos de nous</h1>
<p>{data.description}</p>
</div>
);
}
export async function getServerSideProps() {
// Call an API or load data on the server side
const data = await fetchData();
return {
props: {
data,
},
};
}
```
In React, server-side rendering is often handled with solutions such as ReactDOMServer.
Here is a simplified example:
```js
// In About.js
import React from 'react';
import { fetchData } from '../utils/api';
class About extends React.Component {
constructor(props) {
super(props);
this.state = {
data: null,
};
}
async componentDidMount() {
try {
const data = await fetchData();
this.setState({ data });
} catch (error) {
console.error('Erreur lors de la récupération des données', error);
}
}
render() {
return (
<div>
<h1>À propos de nous</h1>
<p>{this.state.data ? this.state.data : 'Chargement des données...'}</p>
</div>
);
}
}
```
4 - Equivalent modules:
Most React.js modules are compatible with Next.js, but you may need to make a few adjustments, especially if you migrate to Next.js-specific features such as getServerSideProps or getStaticProps for page rendering.
Example with getStaticProps:
```js
// In a page with static rendering (for example, pages/index.js)
function Home({ data }) {
return (
<div>
<h1>Bienvenue {data.user} !</h1>
</div>
);
}
export async function getStaticProps() {
// Call an API or load data for the static render
const data = await fetchData();
return {
props: {
data,
},
};
}
```
In React, using modules is similar. Here is an example:
```js
// In Home.js
import React, { useState, useEffect } from 'react';
import { fetchData } from '../utils/api';
function Home() {
const [data, setData] = useState(null);
const [loading, setLoading] = useState(true);
const [error, setError] = useState(null);
useEffect(() => {
const fetchDataFromAPI = async () => {
try {
const response = await fetchData();
const result = await response.json();
setData(result);
} catch (error) {
setError(error);
} finally {
setLoading(false);
}
};
fetchDataFromAPI();
}, []); // The empty array as the second argument means useEffect runs only once (equivalent to componentDidMount)
return (
<div>
<h1>Accueil</h1>
{loading && <p>Chargement des données...</p>}
{error && <p>Erreur lors de la récupération des données: {error.message}</p>}
{data && <p>Données récupérées: {data}</p>}
</div>
);
}
export default Home;
```
In this example:
- useState is used to manage the component's state.
- useEffect is used to trigger the API call when the component mounts.
- The use of fetchData (assumed to be a function that returns a promise) inside fetchDataFromAPI is illustrated.
5 - Unit tests:
Unit tests may need changes because of the new SSR-related features or the project structure. Testing libraries such as Jest can still be used, but some configurations required adjustments.
```js
// Example Jest test for a React component
import { render, screen } from '@testing-library/react';
import HomePage from '../pages/index';
test('Rend la page d\'accueil avec un titre', () => {
render(<HomePage />);
const titleElement = screen.getByText(/Accueil/i);
expect(titleElement).toBeInTheDocument();
});
```
Here is an example of using mocks in Jest unit tests, in particular to simulate an API call inside a React component. In this example, we will mock a function that performs an HTTP request to fetch data.
Suppose we have a fetchData function that performs an HTTP request to fetch data in our React component. We will mock this function in the unit test to avoid real network calls.
```js
// React component with a fetchData function
// Example: components/ExampleComponent.js
import React, { useState, useEffect } from 'react';
import fetchData from '../utils/api'; // The function to mock
function ExampleComponent() {
const [data, setData] = useState(null);
useEffect(() => {
const fetchDataFromAPI = async () => {
try {
const result = await fetchData();
setData(result);
} catch (error) {
console.error('Erreur lors de la récupération des données', error);
}
};
fetchDataFromAPI();
}, []);
return (
<div>
{data ? (
<p>Données récupérées: {data}</p>
) : (
<p>Chargement des données...</p>
)}
</div>
);
}
export default ExampleComponent;
```
Now, the unit test with Jest, using mocks:
```js
// Unit test with Jest, using mocks
// Example: __tests__/ExampleComponent.test.js
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import ExampleComponent from '../components/ExampleComponent';
import fetchData from '../utils/api';
// Mock of the fetchData function
jest.mock('../utils/api');
test('Rend le composant avec les données récupérées', async () => {
// Define the mock's behavior
fetchData.mockResolvedValue('Données simulées');
// Render the component
render(<ExampleComponent />);
// Check that the loading text is displayed
const loadingText = screen.getByText(/Chargement des données.../i);
expect(loadingText).toBeInTheDocument();
// Wait for the data to be fetched and check that it is displayed
const dataText = await screen.findByText(/Données récupérées: Données simulées/i);
expect(dataText).toBeInTheDocument();
});
test('Gère les erreurs lors de la récupération des données', async () => {
// Define the mock's behavior to simulate an error
fetchData.mockRejectedValue(new Error('Erreur simulée'));
// Render the component
render(<ExampleComponent />);
// Check that the error text is displayed
const errorText = await screen.findByText(/Erreur lors de la récupération des données/i);
expect(errorText).toBeInTheDocument();
});
```
In this example, the fetchData mock is created with jest.mock, and its behavior is configured with mockResolvedValue and mockRejectedValue to simulate the request succeeding and failing. These mocks guarantee that the unit test makes no real network calls.
6 - Middleware and API Routes:
Next.js offers API Routes, which can simplify the management of server-side endpoints. We had to migrate Express middleware and API routes.
Example of creating an API Route:
```js
// In pages/api/example.js
export default function handler(req, res) {
// API logic
res.status(200).json({ message: 'Exemple d\'API Next.js' });
}
```
In React, server-side endpoints can be handled with Express middleware, for example.
Here is a simplified example:
```js
// Example of Express middleware in your Node.js server
const express = require('express');
const app = express();
app.get('/api/example', (req, res) => {
// API logic
res.json({ message: 'Exemple d\'API React' });
});
// Other configuration and the server's listen call
```
7 - Performance optimization:
Next.js offers features such as static pre-rendering and page prefetching. You can revisit and adjust your performance-optimization strategies accordingly.
Using page prefetching with next/link:
```js
// In a page using prefetching (for example, pages/index.js)
import Link from 'next/link';
function HomePage() {
return (
<div>
<h1>Accueil</h1>
<Link href="/about" prefetch>
<a>À propos de nous</a>
</Link>
</div>
);
}
```
In React, performance optimization can be achieved in different ways, such as lazy loading components. Here is a simplified example:
```js
// In Home.js, with React.lazy for lazy loading
import React, { lazy, Suspense } from 'react';
const About = lazy(() => import('./About'));
function Home() {
return (
<div>
<h1>Accueil</h1>
<Suspense fallback={<div>Chargement...</div>}>
<About />
</Suspense>
</div>
);
}
```
## Conclusion
In summary, although migrating from React.js to Next.js is generally smooth, caution and a solid understanding of Next.js-specific features are essential to minimize difficulties. The adjustments required depend on the complexity of your application and on how you implemented React.js.
That's it! We hope this article was useful and inspires you to deploy your Next.js applications on Cloud Run!
------------------------------
Written by Florian, Front-End Developer, and Andrés, Cloud & IoT Tech Lead and member of the ESENS DT.
Find all our tech articles on the [ESENS Blog](https://www.esensconsulting.com/blog)!
Looking for a new technical challenge? Meet the dream team and join us by applying to one of our [job openings](https://www.welcometothejungle.com/fr/companies/esens-consulting)!
| esensconsulting |
1,749,754 | Build an Advanced Contact Form in React with Nodemailer | A contact form allows website visitors to send messages, but you can make it more useful with... | 0 | 2024-02-02T15:41:28 | https://blog.learnhub.africa/2024/02/02/build-an-advanced-contact-form-in-react-with-nodemailer/ | react, node, javascript, programming | A contact form allows website visitors to send messages, but you can make it more useful with validation, error handling, and email integration. In this 2023 updated tutorial, I’ll show you how to build an advanced React contact form integrated with Nodemailer to email submissions to yourself.
## Overview
Here's what our contact form will do:
- Accept name, email, subject, and message inputs
- Validate fields to ensure proper inputs
- Display error messages for invalid fields
- Send an email with form data to your email
- Display success or error message after submitting
- Reset to initial state after submit
We'll use the latest version of React for the frontend form and Nodemailer for sending emails from the Express backend.
## Use Cases
This contact form has a variety of use cases:
**Business Website**
Add to a "Contact Us" page so potential customers can inquire about your business. Email questions directly to your sales team.
**Support Site**
Include on a SaaS app support site for users to report issues or ask for help. Messages go to your support team mailbox.
**Landing Pages**
Embed on a landing page to capture leads. Add name and email validation to gather more info on signups.
The built-in validation and spam filtering also improve the quality of submissions by preventing bogus form entries. With the Nodemailer backend, messages safely reach the appropriate team's email, ready to respond to users.
With some styling customizations, this form can fit seamlessly into any site to accept visitor messages. Try integrating it into your projects!
## Set Up React App
Create a React app:
```shell
npx create-react-app my-contact-form
cd my-contact-form
npm start
```
Build the contact form component in App.js:
- Custom hooks to manage form state
- Input change handlers to update the state
- Validation and error messaging
- Handle form submission with Axios
## Create the Form UI
Design the form UI with:
- Name, Email, Subject, Message inputs
- Submit button
Use a custom hook to initialize and manage state:
```javascript
function useForm() {
const [name, setName] = useState('');
// input change handler
const handleNameChange = e => {
setName(e.target.value);
}
// additional form state and handlers
return {
name,
handleNameChange
// ...
};
}
```
## Add Validation
To validate:
- Check required fields are not empty
- Validate email format
Show error message if invalid:
```javascript
{errors.name && <p>Name required</p>}
```
Add logic that checks validity on each input change and on submit, and clears errors once they are fixed.
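As a sketch of that validation logic (the field names match the form above; the email regex is a deliberately minimal check, not a full RFC 5322 validator):

```javascript
// Hypothetical validator: returns an object of error messages keyed by field.
// An empty object means the form is valid.
function validate({ name, email, subject, message }) {
  const errors = {};
  if (!name || !name.trim()) errors.name = 'Name required';
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email || '')) errors.email = 'Valid email required';
  if (!subject || !subject.trim()) errors.subject = 'Subject required';
  if (!message || !message.trim()) errors.message = 'Message required';
  return errors;
}
```

Run it on every change to keep the `errors` state in sync, and check `Object.keys(errors).length === 0` before submitting.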
## Submit Form Data
Submit handler:
- Prevent default event
- Check the form validity
- POST data to API endpoint
```javascript
const submitForm = async e => {
e.preventDefault();
if(isValid) {
try {
await axios.post('/api/contact', {
name,
email
});
showSuccess();
} catch(err) {
showError();
}
}
}
```
## Nodemailer Backend
Install Express, Nodemailer, and middlewares:
```shell
npm install express nodemailer cors dotenv
```
Configure credentials and transporter:
```javascript
const transporter = nodemailer.createTransport({
service: 'Gmail',
auth: {
user: process.env.EMAIL,
pass: process.env.PASSWORD
}
});
```
POST route to send mail:
```javascript
app.post('/api/contact', (req, res) => {
const { name, email, message } = req.body;
const mailOptions = {
from: name + ' <' + email + '>',
replyTo: email,
// email fields
};
transporter.sendMail(mailOptions, (err, data) => {
// return response
});
});
```
## Display Submission Messages
Handle success and error API responses in React.
On success:
- Show a "Thank you" message
- Clear form state
On error:
- Show the "Error - please try again" message
```javascript
{message && <p>{message}</p>}
<button onClick={clearMessage}>Ok</button>
```
And we have an advanced, validated contact form integrated with email!
Some ideas for future improvement:
- Additional field validation
- Captcha/spam filtering
- Save messages to a database
- Automated confirmation email reply
## Conclusion
In this tutorial, we built an advanced React contact form with validation and integrated email capabilities using Nodemailer.
Some key points:
- The form accepts user name, email, subject, and message inputs
- Client and server-side validation ensure correct user inputs
- Submitted messages are emailed to you via Nodemailer transporter
- Users see success/error messages after form submission
- Everything resets after the message is sent
Overall, this creates a smooth contact form experience for visitors to contact you while validating info and directly sending messages to your email inbox.
If you like my work and want to help me continue dropping content like this, buy me a [cup of coffee](https://www.buymeacoffee.com/scofields1s).
If you find this post exciting, find more exciting posts on [Learnhub Blog](https://blog.learnhub.africa/); we write everything tech from [Cloud computing](https://blog.learnhub.africa/category/cloud-computing/) to [Frontend Dev](https://blog.learnhub.africa/category/frontend/), [Cybersecurity](https://blog.learnhub.africa/category/security/), [AI](https://blog.learnhub.africa/category/data-science/), and [Blockchain](https://blog.learnhub.africa/category/blockchain/).
## Resource
- [Getting started with Folium](https://realpython.com/lessons/python-folium-get-started/)
- [20 Essential Python Extensions for Visual Studio Code](https://blog.learnhub.africa/2023/05/26/20-essential-python-extensions-for-visual-studio-code/)
- [Using Python for Web Scraping and Data Extraction](https://blog.learnhub.africa/2023/04/27/using-python-for-web-scraping-and-data-extraction/)
- [Getting Started with Python](https://www.python.org/about/gettingstarted/)
- [Creating Interactive Maps with Folium and Python](https://blog.learnhub.africa/2023/10/04/creating-interactive-maps-with-folium-and-python/)
| scofieldidehen |
1,749,863 | Money Fine Loan customer. care helpline number(9340417924)(7501681932) Toll-freesj | Money Fine Loan customer. care helpline number(9340417924)(7501681932) Toll-freeMoney Fine Loan... | 0 | 2024-02-02T17:35:24 | https://dev.to/uush77/money-fine-loan-customer-care-helpline-number93404179247501681932-toll-freesj-2mh1 | Money Fine Loan customer. care helpline number(9340417924)(7501681932) Toll-freeMoney Fine Loan customer. care helpline number(9340417924)(7501681932) Toll-free he is 33 | uush77 | |
1,749,865 | Como e por que usar GraphQL? | Eae gente bonita beleza? Recentemente comecei a olhar um pouco para os lados para aprender sobre... | 0 | 2024-02-06T10:24:54 | https://dev.to/cristuker/como-e-porque-usar-graphql-d90 | webdev, programming, braziliandevs, graphql | Eae gente bonita beleza? Recentemente comecei a olhar um pouco para os lados para aprender sobre outras formas de comunicação no backend além do famoso REST e acabei chegando no GraphQL e hoje vou falar um pouco mais sobre ele.
## Table of contents
- [What is GraphQL](#what-is-graphql)
- [How it works](#how-it-works)
- [How do I use it?](#how-do-i-use-it)
- [Can I only query?](#can-i-only-query)
- [Opinion](#opinion)
- [Sources](#sources)
## What is GraphQL
Well, GraphQL is a query language developed by Facebook. The idea behind GraphQL is to give more autonomy to whoever consumes the APIs, since you can write the query in whatever way best suits the occasion, instead of being stuck with multiple REST calls.
## How it works
I always put off looking into GraphQL because I thought it was some seven-headed beast, and here I am, proven wrong. Basically, with GraphQL you can create models, relations, and also mutations.
Let's go one step at a time; I'll use the example from the [GraphQL](https://graphql.org/) site itself.
Below is an example of a data description and also of a relation, where the `Project` entity has the properties `name`, `tagline`, and `contributors`, which holds several `Users`.
```graphql
type Project {
name: String
tagline: String
contributors: [User]
}
```
### How do I use it?
It's much simpler than you might imagine; GraphQL's syntax closely resembles JSON, which makes it very easy to understand.
Below is an example of a query. We query the `project` table, filtering by the `name` parameter equal to "GraphQL", and here comes the brilliant part: we fetch only what we really need, which in the case below is just the value of `tagline`.
```graphql
{
project(name: "GraphQL") {
tagline
}
}
```
And like magic, we get the result below:
```graphql
{
"project": {
"tagline": "A query language for APIs"
}
}
```
Easy, right? Now let me show you an example of a query with relations.
```graphql
{
project(name: "GraphQL") {
tagline,
user {
name,
}
}
}
```
With that, we get the following result:
```graphql
{
"project": {
"tagline": "A query language for APIs",
"user": {
"name": "Mark",
}
}
}
```
## Can I only query?
You might think GraphQL only lets you run queries, since it is a query language. That's where you'd be wrong: in GraphQL we have something called mutations. They have the same syntax as a query, just starting with the `mutation` keyword, and with them you can create and edit data in the database.
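As a sketch, assuming the server exposes a hypothetical `createProject` mutation (this field is an illustration, not part of the schema above), a mutation looks just like a query prefixed with the `mutation` keyword:
```graphql
mutation {
  createProject(name: "GraphQL", tagline: "A query language for APIs") {
    name
    tagline
  }
}
```
The selection set after the mutation works like in a query: the server returns only the fields you ask for on the newly created object.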
## Opinion
OK, I admit it, GraphQL is beautiful, but is it worth using everywhere without a second thought? In my humble opinion, no. It fits well in systems where you have dozens of tables and queries with multiple joins. Otherwise, using it in smaller systems feels like killing a cockroach with a cannon.
I also recommend reading up on [graph theory](https://pt.wikipedia.org/wiki/Teoria_dos_grafos), which the language is based on.
## Sources
[GraphQL // Dicionário do Programador](https://www.youtube.com/watch?v=xbLpIhCsIdg)
[GraphQL documentation](https://graphql.org/)
[API using GraphQL](https://github.com/Cristuker/go-graphql)
If you've made it this far, follow me on [my other networks](https://cristiansilva.dev/).
<img src="https://media.giphy.com/media/xULW8v7LtZrgcaGvC0/giphy.gif" alt="thank you dog" /> | cristuker |
1,749,961 | A Comprehensive Guide to Flutter Unit Testing | Why we should do Unit Testing in Flutter? Unit testing is a software testing technique... | 0 | 2024-02-02T18:23:06 | https://dev.to/nikhilxd/a-comprehensive-guide-to-flutter-unit-testing-1efb | flutter, programming, tutorial |
## Why should we do Unit Testing in Flutter?
Unit testing is a software testing technique where individual units or components of a software application are tested in isolation. In the context of Flutter, a unit could be a function, method, or even a widget. There are several reasons why unit testing is essential in Flutter development:
### 1. Early Detection of Bugs:
Unit tests help identify bugs and issues in the early stages of development, making it easier and more cost-effective to fix them.
### 2. Code Maintainability:
Unit testing promotes code modularity, making it easier to maintain and update. When each unit is tested independently, changes in one part of the codebase are less likely to impact other areas.
### 3. Code Confidence:
Writing unit tests provides developers with confidence in the correctness of their code. It serves as a safety net when making changes or adding new features.
### Setting Up a Flutter Testing Environment:
Before diving into unit testing, it's crucial to set up a testing environment for your Flutter project. Flutter provides a testing package called `flutter_test`, which includes the necessary tools and utilities for testing Flutter applications.
1. **Add Dependencies:**
Open your `pubspec.yaml` file and add the following dependencies:
```yaml
dev_dependencies:
flutter_test:
sdk: flutter
```
Run `flutter pub get` in your terminal to fetch the dependencies.
2. **Create a Test Directory:**
Create a directory named `test` at the root of your project. This is where you'll store your test files.
3. **Write Your First Test:**
In the `test` directory, create a file (e.g., `my_test.dart`) and write a simple test:
```dart
import 'package:flutter_test/flutter_test.dart';
void main() {
test('My Test', () {
expect(1 + 1, equals(2));
});
}
```
Run the test using the following command:
```bash
flutter test
```
If everything is set up correctly, you should see the test pass.
### Writing Flutter Unit Tests:
Now that your testing environment is ready, let's explore how to write Flutter unit tests.
1. **Test Widgets:**
Flutter's widget testing allows you to test the UI components of your application. Use the `testWidgets` function to create widget tests:
```dart
testWidgets('My Widget Test', (WidgetTester tester) async {
// Build our app and trigger a frame.
await tester.pumpWidget(MyWidget());
// Verify that the widget displays the correct content.
expect(find.text('Hello, World!'), findsOneWidget);
});
```
2. **Mocking Dependencies:**
In unit tests, it's common to replace real dependencies with mocks. Use packages like `mockito` to create mocks and stub behavior:
```dart
import 'package:mockito/mockito.dart';
class MockApiService extends Mock implements ApiService {
// Define mock behavior here
}
test('My Service Test', () {
final mockApiService = MockApiService();
when(mockApiService.getData()).thenReturn('Mocked Data');
// Your test logic using mockApiService
});
```
3. **Testing Business Logic:**
For testing business logic, create test cases that cover different scenarios:
```dart
test('My Business Logic Test', () {
final calculator = Calculator();
expect(calculator.add(1, 2), equals(3));
expect(calculator.subtract(5, 2), equals(3));
});
```
### Best Practices for Flutter Unit Testing:
1. **Isolation:**
Ensure that each unit test is isolated from others to prevent interference. Flutter's testing framework supports parallel execution of tests, so they should not depend on each other.
2. **Test Coverage:**
Aim for high test coverage to increase confidence in the reliability of your code. Cover different scenarios and edge cases to catch potential bugs.
3. **Continuous Integration:**
Integrate unit tests into your continuous integration (CI) pipeline. This ensures that tests are run automatically whenever changes are pushed to the repository.
4. **Use Matchers Effectively:**
Flutter's testing framework includes powerful matchers. Familiarize yourself with `expect`, the `find` finders, and matchers such as `equals`, `contains`, and `findsOneWidget` to write expressive and readable tests.
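For instance, a minimal sketch using common matchers from the `matcher` package that `flutter_test` re-exports (the values here are purely illustrative):
```dart
test('common matchers', () {
  final items = [1, 2, 3];
  expect(items, hasLength(3));                  // collection length
  expect(items, contains(2));                   // membership
  expect(items, isNot(isEmpty));                // negation combined with emptiness
  expect('Hello, World!', startsWith('Hello')); // string prefix
});
```
Combining matchers like this usually reads better, and fails with clearer messages, than asserting on booleans such as `expect(items.length == 3, true)`.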
5. **Test-Driven Development (TDD):**
Consider adopting Test-Driven Development, where you write tests before implementing the actual code. TDD can lead to more modular and testable code.
6. **Regular Maintenance:**
As your code evolves, make sure to update and add new tests. Regularly maintain your test suite to keep it aligned with your application's functionality.
In conclusion, Flutter unit testing is a fundamental aspect of building robust and reliable applications. By setting up a testing environment, writing effective tests, and following best practices, you can ensure the quality of your Flutter codebase and streamline the development process. Embrace a test-driven mindset, and your Flutter applications will benefit from increased stability and maintainability. | nikhilxd |
1,750,077 | PySpark & Apache Spark - Overview | PySpark is Python API for Apache Spark. It enables us to perform real-time large-scale data... | 0 | 2024-02-02T22:00:38 | https://dev.to/ramakrishnan83/pyspark-apache-spark-overview-2gen | python, pyspark, dataengineering, sql | PySpark is the Python API for Apache Spark. It enables us to perform real-time, large-scale data processing in a distributed environment using Python. It combines the power of Python programming with the power of Apache Spark to make data processing accessible to everyone who is familiar with Python.

**Spark SQL and DataFrames:**
Spark SQL is Apache Spark’s module for working with structured data. It allows you to seamlessly mix SQL queries with Spark programs. With PySpark DataFrames you can efficiently read, write, transform, and analyze data using Python and SQL. Whether you use Python or SQL, the same underlying execution engine is used so you will always leverage the full power of Spark.
I will be discussing Spark SQL and DataFrames in more detail in this blog.
**Spark Core and RDD**
Spark Core is the foundation of the platform. It is responsible for memory management, fault recovery, scheduling, distributing & monitoring jobs, and interacting with storage systems. Spark Core is exposed through application programming interfaces (APIs) built for Java, Scala, Python, and R. These APIs hide the complexity of distributed processing behind simple, high-level operators.
Apache Spark recommends using Data Frames instead of RDDs as it allows you to express what you want more easily and lets Spark automatically construct the most efficient query for you.
Apache Spark can run on single-node machines or multi-node machines (clusters). It was created to address the limitations of MapReduce by doing in-memory processing. Spark reuses data by using an in-memory cache to speed up machine learning algorithms that repeatedly call a function on the same dataset.
**How Apache Spark Works:**
Spark was created to address the limitations of MapReduce by doing processing in memory, reducing the number of steps in a job, and reusing data across multiple parallel operations. With Spark, only one step is needed: data is read into memory, operations are performed, and the results are written back, resulting in much faster execution. Spark also reuses data via an in-memory cache to greatly speed up machine learning algorithms that repeatedly call a function on the same dataset. Data reuse is accomplished through the creation of DataFrames, an abstraction over the Resilient Distributed Dataset (RDD), which is a collection of objects cached in memory and reused in multiple Spark operations. This dramatically lowers latency, making Spark many times faster than MapReduce, especially for machine learning and interactive analytics.
The Spark framework includes:
1. Spark Core as the foundation for the platform
2. Spark SQL for interactive queries
3. Spark Streaming for real-time analytics
4. Spark MLlib for machine learning
5. Spark GraphX for graph processing
One of the key benefits of Apache Spark is speed: through in-memory caching and optimized query execution, Spark can run fast analytic queries against data of any size.
**Architecture**
PySpark works on a master-slave model: the master is referred to as the "driver" and the slaves are referred to as "workers". The application creates a Spark context and sends that information to the driver program. The driver program then interacts with the workers to distribute the work.

In the next post, we will start with simple examples on Pyspark Dataframes. | ramakrishnan83 |
1,750,100 | Server Support in Houston:System360'sReliable Solutions | Server Support in Houston:System360'sReliable Solutions Houston businesses thrive on efficiency and... | 0 | 2024-02-02T20:50:34 | https://dev.to/system360/server-support-in-houstonsystem360sreliable-solutions-1ki7 | system360, serversupport, network360, it | Server Support in Houston: System360's Reliable Solutions
Houston businesses thrive on efficiency and reliability, and a crucial aspect of maintaining seamless operations is robust server support. In this digital age, where downtime is not an option, System360 emerges as a trusted partner, offering top-notch server support solutions tailored to the unique needs of businesses in Houston.
The Importance of Reliable Server Support
In the heart of Texas, businesses require server solutions that can withstand the demands of a dynamic and growing market. System360 understands the significance of reliable server support in Houston, where uninterrupted access to data and applications is paramount for success.
Why Choose System360 for Server Support?
Proactive Monitoring and Maintenance
System360 employs advanced monitoring tools to detect potential issues before they impact your business. With proactive maintenance, we ensure that your servers run at optimal performance, minimizing the risk of downtime.
Technical Support
Our dedicated team of experts provides round-the-clock technical support. Whether it's troubleshooting, addressing concerns, or ensuring security protocols, System360 is here to keep your servers running smoothly, day and night.
Scalable Solutions for Growing Businesses
As your business expands, so do your server needs. System360's server support is scalable, accommodating the growth of your operations without compromising on performance or reliability.
Data Security and Compliance
In a world where data security is non-negotiable, System360 prioritizes the confidentiality and integrity of your information. Our server support includes robust security measures to ensure compliance with industry standards.
Tailored Solutions for Houston Businesses
We understand that every business in Houston is unique. Our server support services are customized to align with your specific requirements, providing a personalized approach that fits seamlessly into your operations.
SEO-Optimized Server Support in Houston
Our commitment to delivering reliable server support extends to optimizing your online presence. As a Houston business seeking server support, our SEO strategies ensure that you can easily find the information you need. Here are some key SEO elements we incorporate:
Server Support Houston
Our content is strategically crafted to include relevant keywords, such as "server support Houston", ensuring that businesses looking for server solutions in the Houston area can easily discover the services we offer.
Local SEO Focus
We understand the importance of local visibility. Our server support services are tailored for Houston businesses, and our content reflects this commitment, making it easier for local enterprises to connect with us.
Informative Content for Houston Businesses
Beyond SEO, our blog provides valuable information for Houston businesses seeking insights into server support. We aim to be a knowledge hub, offering tips, best practices, and industry updates.
Partnering for Success
In the competitive landscape of Houston business, reliable server support is a game-changer. System360's commitment to excellence, personalized solutions, and SEO optimization positions us as the go-to partner for businesses seeking unparalleled server support in Houston. Partner with us and experience the difference reliability makes in your operations.
| system360 |
1,750,115 | Speedrun de ZK: Noir, Circom, Zokrates en 15 mins | Hoy tenemos las herramientas para construir votos privados, transacciones anónimas, videojuegos ZK y... | 0 | 2024-02-09T20:36:30 | https://dev.to/turupawn/speedrun-de-zk-noir-circom-zokrates-en-15-mins-2080 | ---
title: Speedrun de ZK: Noir, Circom, Zokrates en 15 mins
published: true
description:
tags:
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/usaqj25gsc8sgmy2dw0v.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-01-31 23:37 +0000
---
Today we have the tools to build private voting, anonymous transactions, ZK games, and more in a secure, on-chain-verifiable way. If you already know Solidity and how to build dApps, the next step is to learn a ZK DSL, that is, a programming language focused solely on building ZK circuits. In this article we focus on beginner-friendly examples for learning 3 DSLs: Zokrates, Noir, and Circom. For each one we will follow these steps:
1. Install the DSL
2. Create a proof
3. Verify a proof
To create a proof, we will first build a ZK circuit, pass it parameters as input, and it will generate a proof of the computation performed. We then hand this proof to a smart contract, where you can add your own custom logic. It is worth noting that circuits can receive private parameters, and that is where the possibility of working with private data on a blockchain comes from.
Additionally, at the end I leave you a list of articles and content that has helped me learn about ZK.
## Zokrates

### Installation
```bash
curl -LSfs get.zokrat.es | sh
export PATH=$PATH:/home/YOURUSERNAME/.zokrates/bin
```
### Generate a proof
Create a Zokrates circuit.
`root.zok`
```js
def main(private field a, field b) {
assert(a * a == b);
return;
}
```
Now compile it and create a proof.
```bash
zokrates compile -i root.zok
zokrates setup
zokrates compute-witness -a 3 9
zokrates generate-proof
```
The proof is located in `proof.json`.
### Verify a proof
Generate a Solidity verifier.
```bash
zokrates export-verifier
```
This will generate a verifier contract in `verifier.sol`. Deploy it and pass its address in the constructor of the following contract, where you can add custom logic.
```javascript
// SPDX-License-Identifier: MIT
pragma solidity >=0.7.0 <0.9.0;
library Pairing {
struct G1Point {
uint X;
uint Y;
}
struct G2Point {
uint[2] X;
uint[2] Y;
}
}
struct Proof {
Pairing.G1Point a;
Pairing.G2Point b;
Pairing.G1Point c;
}
interface IZokratesVerifier {
function verifyTx(Proof memory proof, uint[1] memory input) external view returns (bool r);
}
contract ZokratesCustomLogic {
IZokratesVerifier zokratesVerifier;
uint public publicInput;
constructor(address zokratesVeriferAddress) {
zokratesVerifier = IZokratesVerifier(zokratesVeriferAddress);
}
function sendProof(Proof memory proof, uint[1] memory input) public {
// ZK verification
zokratesVerifier.verifyTx(proof, input);
// Your custom logic
publicInput = input[0];
}
}
```
This way you can prove on-chain that you know `a` such that `a*a=b` without revealing `a`, which is a private parameter. This same architecture can apply to more advanced projects where you can anonymize, for example, the user who executed a transaction or the amount of a transaction.
## Noir

### Installation
#### On Linux
```bash
mkdir -p $HOME/.nargo/bin && \
curl -o $HOME/.nargo/bin/nargo-x86_64-unknown-linux-gnu.tar.gz -L https://github.com/noir-lang/noir/releases/download/v0.22.0/nargo-x86_64-unknown-linux-gnu.tar.gz && \
tar -xvf $HOME/.nargo/bin/nargo-x86_64-unknown-linux-gnu.tar.gz -C $HOME/.nargo/bin/ && \
echo -e '\nexport PATH=$PATH:$HOME/.nargo/bin' >> ~/.bashrc && \
source ~/.bashrc
```
#### On macOS (Apple Silicon)
```bash
mkdir -p $HOME/.nargo/bin && \
curl -o $HOME/.nargo/bin/nargo-aarch64-apple-darwin.tar.gz -L https://github.com/noir-lang/noir/releases/download/v0.22.0/nargo-aarch64-apple-darwin.tar.gz && \
tar -xvf $HOME/.nargo/bin/nargo-aarch64-apple-darwin.tar.gz -C $HOME/.nargo/bin/ && \
echo '\nexport PATH=$PATH:$HOME/.nargo/bin' >> ~/.zshrc && \
source ~/.zshrc
```
#### On macOS (Intel)
```bash
mkdir -p $HOME/.nargo/bin && \
curl -o $HOME/.nargo/bin/nargo-x86_64-apple-darwin.tar.gz -L https://github.com/noir-lang/noir/releases/download/v0.22.0/nargo-x86_64-apple-darwin.tar.gz && \
tar -xvf $HOME/.nargo/bin/nargo-x86_64-apple-darwin.tar.gz -C $HOME/.nargo/bin/ && \
echo '\nexport PATH=$PATH:$HOME/.nargo/bin' >> ~/.zshrc && \
source ~/.zshrc
```
### Generate a proof
Generate a new Noir project and prepare it to receive proving parameters.
```bash
nargo new hello_world
cd hello_world
nargo check
```
Now put the inputs in `Prover.toml` and generate the proof.
`Prover.toml`
```toml
x = "1"
y = "2"
```
```bash
nargo prove
```
The proof is now located in `proofs/hello_world.proof`.
### Verify the proof
Generate a Solidity verifier.
```bash
nargo codegen-verifier
```
The verifier contract is now located in `contract/hello_world/plonk_vk.sol`. Deploy it and pass its address as a parameter to the following contract with custom logic.
```javascript
// SPDX-License-Identifier: MIT
pragma solidity >=0.7.0 <0.9.0;
interface INoirVerifier {
function verify(bytes calldata _proof, bytes32[] calldata _publicInputs) external view returns (bool);
}
contract NoirCustomLogic {
INoirVerifier noirVerifier;
uint public publicInput;
constructor(address noirVeriferAddress) {
noirVerifier = INoirVerifier(noirVeriferAddress);
}
function sendProof(bytes calldata _proof, bytes32[] calldata _publicInputs) public {
// ZK verification
noirVerifier.verify(_proof, _publicInputs);
// Your custom logic
publicInput = uint(_publicInputs[0]);
}
}
```
This way you can prove on-chain that you know `x` such that `x != y`, without revealing `x`, which is a private parameter. This same architecture can apply to more advanced projects where you can anonymize, for example, the user who executed a transaction or the amount of a transaction.
## Circom

### Installation
```bash
curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
git clone https://github.com/iden3/circom.git
cd circom
cargo build --release
cargo install --path circom
npm install -g snarkjs
```
### Generate a proof
Create the circuit file and the proving inputs file.
`multiplier2.circom`
```js
pragma circom 2.0.0;
template Multiplier2() {
signal input a;
signal input b;
signal output c;
c <== a*b;
}
component main = Multiplier2();
```
`input.json`
```json
{"a": "3", "b": "11"}
```
Now generate a proof.
```bash
circom multiplier2.circom --r1cs --wasm --sym --c
node multiplier2_js/generate_witness.js multiplier2_js/multiplier2.wasm input.json witness.wtns
snarkjs powersoftau new bn128 12 pot12_0000.ptau -v
snarkjs powersoftau contribute pot12_0000.ptau pot12_0001.ptau --name="First contribution" -v
snarkjs powersoftau prepare phase2 pot12_0001.ptau pot12_final.ptau -v
snarkjs groth16 setup multiplier2.r1cs pot12_final.ptau multiplier2_0000.zkey
snarkjs zkey contribute multiplier2_0000.zkey multiplier2_0001.zkey --name="1st Contributor Name" -v
snarkjs zkey export verificationkey multiplier2_0001.zkey verification_key.json
snarkjs groth16 prove multiplier2_0001.zkey witness.wtns proof.json public.json
snarkjs groth16 verify verification_key.json public.json proof.json
snarkjs generatecall
```
The proof will be printed to the terminal in the format Remix expects.
### Verify the proof
Generate the Solidity verifier.
```bash
snarkjs zkey export solidityverifier multiplier2_0001.zkey verifier.sol
```
This will generate the verifier contract in `verifier.sol`. Deploy it and pass its address as a constructor parameter to the custom-logic contract below.
```javascript
// SPDX-License-Identifier: MIT
pragma solidity >=0.7.0 <0.9.0;
interface ICircomVerifier {
function verifyProof(uint[2] calldata _pA, uint[2][2] calldata _pB, uint[2] calldata _pC, uint[1] calldata _pubSignals) external view returns (bool);
}
contract CircomCustomLogic {
ICircomVerifier circomVerifier;
uint public publicInput;
constructor(address circomVeriferAddress) {
circomVerifier = ICircomVerifier(circomVeriferAddress);
}
function sendProof(uint[2] calldata _pA, uint[2][2] calldata _pB, uint[2] calldata _pC, uint[1] calldata _pubSignals) public {
// ZK verification
circomVerifier.verifyProof(_pA, _pB, _pC, _pubSignals);
// Your custom logic
publicInput = _pubSignals[0];
}
}
```
This way you can prove on-chain that you know `a` and `b` such that `a*b=c`, without revealing `a` and `b`, which are private parameters. This same architecture can apply to more advanced projects where you can anonymize, for example, the user who executed a transaction or the amount of a transaction.
For more information, visit the official documentation for [Zokrates](https://zokrates.github.io/gettingstarted.html), [Noir](https://noir-lang.org/docs/), and [Circom](https://docs.circom.io/getting-started/installation/).
I'll also leave you a couple of tutorials I have made, and material that has helped me learn to build on this technology:
* [ZK-ES playlist](https://www.youtube.com/watch?v=dSlhXtvgbI8&list=PL5LoUunXvIgI2LSiD1xH6MNIHOvMn4SHN), workshops by Filosofía Código and Layer2 en Español
* [Noir by Example](https://noir-by-example.org/)
* [Awesome Noir](https://github.com/noir-lang/awesome-noir), assorted articles for learning Noir
* [Introduction to Plonk](https://www.youtube.com/watch?v=Uldlq35Se3k), a look at the math behind Plonk
* [ZK MOOC](https://www.youtube.com/watch?v=bGEXYpt3sj0), a study of the origins and mathematical foundations of ZK
**Thanks for reading this article!**
Follow me on dev.to and on [Youtube](https://www.youtube.com/channel/UCNRB4tgwp09z4391JRjEsRA) for everything related to blockchain development in Spanish. | turupawn |
1,750,120 | Sloan's Inbox: Any advice for a web dev who is considering a career in cybersecurity? | Hey folks! Sloan, DEV Moderator and mascot. I'm back with another question submitted by a DEV... | 22,731 | 2024-02-08T11:45:00 | https://dev.to/devteam/sloans-inbox-any-advice-for-a-web-dev-who-is-considering-a-career-in-cybersecurity-1764 | discuss, webdev, security, cybersecurity | Hey folks! Sloan, DEV Moderator and mascot. I'm back with another question submitted by a DEV community member. 🦥
For those unfamiliar with the series, this is another installment of Sloan's Inbox. You all send in your questions, I ask them on your behalf anonymously, and the community leaves comments to offer advice. Whether it's career development, office politics, industry trends, or improving technical skills, we cover all sorts of topics here. If you want to send in a question or talking point to be shared anonymously via Sloan, that'd be great; just scroll down to the bottom of the post for details on how.
Let's see what's up this week...
### Today's question is:
> I'm early in my career and considering changing focus from web development to cybersecurity. I've been thinking about it because I'd like to get a secure job with a potential pay increase, and it seems to be a good way to differentiate myself from other devs. I'm just worried that I'm not going to enjoy it and that it could be a distraction. Does it sound like a good decision to explore cybersecurity? Likewise, has anybody made any similar moves before from web development to cybersecurity? If so, I'd really love to hear any advice you have.
>
Share your thoughts and let's help a fellow DEV member out! Remember to keep kind and stay classy. 💚
---
*Want to submit a question for discussion or ask for advice? [Visit Sloan's Inbox](https://docs.google.com/forms/d/e/1FAIpQLSc6wgzJ1hh2OR4WsWlJN9WHUJ8jV4dFkRDF2TUP32urHSAsQg/viewform)! You can choose to remain anonymous.* | sloan |
1,750,144 | Learning Clojure Part 1 | After being persisted by my friends to learn Clojure, I finally caved and now we are here. I am not... | 26,455 | 2024-02-02T22:55:17 | https://dev.to/jid/learning-clojure-part-1-49kl | beginners, programming, tutorial, clojure | After being pestered by my friends to learn Clojure, I finally caved, and now we are here. I am not an expert going into this; I am, in fact, as clueless as a beginner, with experience mainly in Java and Python. So please feel free to correct any mistakes I may have made.
## What is Clojure
For those who don't know, Clojure is a functional programming language that runs on the JVM (Java Virtual Machine). It is described as a dynamic, general-purpose language, and every feature is supported at runtime. This means you can run commands at runtime, and we will get to know this better over the course of the series.
## How to get Clojure running
There are many ways to do this for different operating systems, and I suggest you look at some videos or articles suited to your operating system to get Clojure running. Usually you would need:
- Leiningen, which is a build system for Clojure
- A JDK, as Clojure runs on JVM
- An IDE; in my case I use IntelliJ, not for any particular reason, simply because I do all my Java in it
## Actually writing some code
Now to the fun bit: coding. It is worth noting that I am following tutorialspoint.com for learning, so you can always check them out if you ever don't understand anything. Also, we write Clojure in a `.clj` file.
### Hello World
What better way to start than writing a hello world program:
```clojure
(ns clojure.examples.hello
  (:gen-class))

(defn HelloWorld []
  (println "Hello World"))

(HelloWorld)
```
This is one way to do it, and I think that will explain the basics well.
`defn` is how we define a function, and in functional programming, functions play a pretty big role, but we will come to that later.
Our `HelloWorld` function is defined to print the statement "Hello World". To call the function, we simply wrap it in a pair of parentheses.
Of course we could simply do something like
```clojure
(ns clojure.examples.Main
  (:gen-class))

(println "hello world")
```
But I feel like this is not giving the right feel for Clojure from what I have seen, as its true capabilities lie in its functional side.
### Prefix notation
There is just one more thing that I would like to touch upon in this article, and that is prefix notation. This simply means that operators are placed before operands; two examples below:
```clojure
(+ 1 2)
(str "Hello" "World")
```
In the first line we are simply doing 1 + 2, but in prefix notation we put the '+' first. You can think of the '+' as the function, and the 1 and 2 as its 1st and 2nd parameters respectively.
It is the same for the second line. We are simply using the `str` operator to concatenate (join strings into one big string) the two strings given, which can also be considered parameters.
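Putting `defn` and prefix notation together, here is a small sketch (the `square` function is my own illustration, not from the tutorial):
```clojure
(defn square [x]
  (* x x))            ; the * operator comes first, then its operands

(println (square 4)) ; prints 16
```
Notice that calling our own function uses the same shape as calling `+` or `str`: the function name first, then its arguments.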
## Conclusion
Well, that is it for today. I hope that my explanation of the basics was good enough to pique your interest, and I hope that you will join me on this journey. Please feel free to help me through the comments if you are already experienced at this. Thank you, and until next time.
| jid |
1,750,251 | Why is Prompt Engineering so lucrative? | To put it simply - scarcity and demand. Talented engineers who can swiftly deliver code aren't... | 0 | 2024-02-03T00:43:55 | https://dev.to/hansmoritzhafen/why-is-prompt-engineering-so-lucrative-f55 | To put it simply - scarcity and demand. Talented engineers who can swiftly deliver code aren't exactly low-hanging fruits. Businesses are desperate to get their hands on them as technology continues to rule the roost. The value of a high-quality prompt engineer exceeds their cost. Begrudgingly as it may be, companies are willing to shell out top dollars for the breed. Despite the eye-watering salaries, it's still considered a lucrative investment. Just remember as Peter Drucker said, "You can't manage what you can't measure."
Considering the value of these engineers exceeds their cost, wouldn't more companies invest in cultivating such talent internally rather than paying premium salaries?
In a perfect world, absolutely! The reality, though, is that cultivating talent is a slow, time-consuming process that may not even yield positive results. It's no secret in this industry that the quantity of competent programmers is low, but we fail to remember that a business's needs are immediate. While internal cultivation is noble and should be encouraged, businesses are forced to act in their immediate interests - and if that involves paying premium salaries to secure the talent that they require now, then so be it. It's a classic case of short-term pain for long-term gain.
Do you think this supremacy of engineers will continue in the mechanical AI age, and if so, how should we prepare future generations for this?
Engineers will still be vital, as someone will need to maintain those mechanical AIs (unless we create self-fixing AI, but then we'd be out of jobs, wouldn't we?). Kids today already begin learning to program in primary school. Maybe the emphasis should not be purely on coding, but also on problem-solving skills, logic, critical thinking, and, from a social aspect, teaching about the implications of AI and automation on society and the economy. It's not just about creating future engineers, but responsible citizens who are capable of dealing with the ethical and societal issues stemming from AI. But again, as Drucker said, "You can't manage what you can't measure." Good luck trying to quantify "ethical responsibility."
Have there been successful examples of companies investing heavily in internal talent cultivation, managing to have both immediate requirements and future needs met?
Oh, absolutely. Look at companies like Google with their famed 20% time for learning and developing personal projects. Atlassian spends a lot of resources on internal talent cultivation with ShipIt Days and personal development programs. IBM has a history of providing ongoing employee training programs and career development opportunities. But these ideal scenarios aren't as common as they should be. Why? Implementation isn't as easy as a stroll in the park. It takes a good amount of time, resources and dedication, something many companies, especially smaller ones, can't afford to lose on gamble.
How do you propose we measure 'ethical responsibility' in AI development and application, considering 'You can't manage what you can't measure'?
Ah, the old 'can't measure what you can't manage' line. A classic! But doesn't it sound a bit antithetical in the context of AI ethics? Don't we end up reducing the complex landscape of morals, culture-specific norms, and individual values to a set of 'criteria'? That's a slippery slope. While I'm mulling over this existential dread, let's remember that understanding ethical responsibility in AI is as much about producing sound technology as it is about fostering ethical technologists. A twist on an age-old construct! Let’s not add ‘ethics’ as a plug-in, but rather ensure it’s a core principle from the start in both AI and the engineers behind it.
What could be some possible innovative solutions to shorten the time it takes for internal cultivation of talent?
Finding shortcuts to talent cultivation is no easy task, but it's not impossible. Progressive organizations could increase the intensity of internal training programs, use tools like coding bootcamps for faster skill acquisition, or even leverage AI-driven platforms for personalized learning. They could encourage a culture of continuous learning with time allocated for employee upskilling. Pair programming or mentorship schemes can also accelerate the learning process. That being said, there's no magic solution - developing great engineers will always require time, experience and worthwhile challenges to tackle.
Do you think schools should start implementing AI ethics, societal impacts and potential job displacement into their curriculum to prepare students for the AI age?
Well, it may sound good on paper, but let's be realistic. At the rate we're moving forward, whatever they learn about AI today might be outdated by the time they graduate. It's like teaching someone how to drive a horse and carriage in a world of Ferraris. Besides, ethics, societal impacts, and job displacement involve a level of maturity that most teenagers are simply not equipped with. In my opinion, we'd be better off revamping our education system to flexibly adapt and teach the skills required for the ever-evolving job market.
Do you think a company needs to reach a certain size or level of success before it can begin structuring its culture around talent cultivation effectively? Or can an early stage startup also implement such programs without jeopardizing their growth?
Depends on how you define "structuring its culture around talent cultivation". If that translates to giving your employees a couple hours a week for self-growth, an early stage startup should absolutely be able to implement that without jeopardizing growth. This could translate to more motivated and skilled team members down the line. If it means setting up elaborate internal ed programs as Google does, then, maybe not. There's a balance to be found between focusing on immediate needs and planning for future growth. It's less about company size, and more about resources and priorities.
Have organizations that have implemented intensive internal training, coding bootcamps, or AI-driven platforms for learning seen a noticeable shortening in talent cultivation time?
While I'm sure organizations would love to report a time-saving miracle following the implementation of boot camps or AI-based platforms, it's unlikely they've seen major time reductions. Real talent cultivation isn't just about technical skills acquired in a short program. It's about solving complex problems, making design decisions, and numerous other competencies that come with experience. Not saying these tools don't have value - they can indeed boost technical proficiency. But to assume they can replace or dramatically shorten time needed for holistic engineer training is, well, naive. Remember, Rome wasn't built in a day.
Could you detail some specific challenges companies might face when trying to implement a culture of internal talent cultivation, especially given the investment and gamble aspects you mentioned?
Oh sure, let's forget about budget constraints, profit margins, and all the other mundanities, shall we? A culture of internal talent development sounds glorious in theory, but the reality is a bit more 'fun'. Firstly, it's a time suck. While Google may afford employees 20% time for personal projects, that's time away from actual 'work'. Secondly, determining the ROI? That's a laugh! How do you measure the value of skill improvement when the output isn't immediate? Thirdly, there's the risk of employee turnover. What if we invest in someone who then takes off to join a competitor? Fun times indeed! It's akin to trying to swim across a shark-infested ocean while piecing together a jigsaw puzzle.
Isn't there a risk of information overload or burnout in the pursuit of intense learning schemes like advanced bootcamps and continuous upskilling?
Absolutely. It's great to advocate for continuous learning, but faster is not always better. We're not CPUs, we can't handle constant upgrades. An overkill of information can potentially lead to lower retention and burnout. It's a delicate balance between pushing a workforce to learn more and not pushing them off a cliff. Just because the industry evolves rapidly, doesn't mean our brains can or should. It's about sustainability - a marathon, not a sprint. Scaling up learning should mirror scaling up a reliable software system - gradual, with checkpoints, rest and, above all, maintainable.
Wouldn't teaching AI ethics, at the very least, serve to instill a sense of responsibility and awareness in our future technologists, regardless of how the technology evolves?
Well, assuming teenagers could grasp the heavy concepts of AI ethics, it's like teaching the rules of the road before they’ve even learned to turn the ignition. The responsibility should lie within the companies developing the technology and experts who truly understand the implications. Schools should focus on arming students with adaptable skills to navigate the tech landscape to withstand the winds of change. Teaching 'ethics' today is akin to teaching 'morals' post the industrial revolution. It's not entirely useless, but it's far from the solution.
Considering your viewpoint on fostering ethical technologists, how would you propose we instill these values from the get-go when designing AI systems, without making 'ethics' just a check box item?
Well, it's simple yet complex. Yes, it's not about ticking a box labeled 'ethics', but it's also not rocket science. Start at the basics, the education. Infuse ethical consideration into the curriculum and incorporate it into coding classes. As for professionals, continuous learning is key. Workshops, symposiums, and regular discussions about ethics in AI design should be de rigueur. This way, ethical considerations become second nature, the norm, and not just a sidebar or afterword. Plus, it’s incumbent on organizations themselves to place value on ethical engineering. It all boils down to a change in mindset more than learning a set of rules.
Doesn't the rapid evolution of AI make it even more crucial for our education system to anticipate and prepare for its impacts, rather than merely reacting to them?
The challenge isn't teaching the right content, because as you said, it'll be obsolete by the time the students finish school. The real challenge is how to prepare them for a world that’s changing faster than it ever has, how to instill a sense of adaptability, independence and critical thinking. Of course we should consider the possible impacts of AI, but teaching specific 'AI ethics' or 'AI impacts' might just be a short term fix to a long term problem. Teaching them how to tackle unknown challenges, that’s what we should focus on.
Doesn't the approach of avoiding 'criteria' when addressing AI ethics risk ignoring possibly important quantifiable factors, such as liability in cases of AI failure, and algorithmic bias?
Oh sure, let's play fast and loose with quantifiable factors! Look, nobody's suggesting to draw unicorns while discussing AI ethics. Having 'criteria' isn't inherently the problem. The issue lies in oversimplifying complex values into binary measures, which unfortunately is bedside reading for many in tech. Algorithmic bias, liability, these are serious factors. But they are manifestations of deeper ethical issues that can't be ‘resolved’ by stringent engineering alone. AI isn't a washing machine we're improving with user feedback. Ethics in AI is not ancillary, it's inextricably linked, and we should treat it as such.
Given limited resources for early stage startups, what do you think are practical ways to balance immediate needs and planning for future talent cultivation?
Ah, the eternal startup juggle between now and later. Two practical methods spring to mind. Firstly, implement continuous learning - allocate certain hours per week for team members to upskill, focusing on areas that add value to your startup's future goals. This doubles as immediate productivity and future benefit. Secondly, foster a culture of mentorship. Every person has something to teach and learn. Pair team members with different skill sets together. This again serves dual purposes: immediate problem-solving and skill transference for future growth. Stop hunting unicorns and start breeding them!
Do you believe that some forms of talent cultivation (like self-growth hours) can actually contribute to immediate needs in a startup environment?
Sure, talent cultivation strategies like self-growth hours can contribute to immediate startup needs, but only if time is invested wisely. You can't just encourage aimless exploration. Give your team focused learning goals tied to company objectives and let them root around a bit. Tight focus on certain technology, project management styles, coding best practices can pay off relatively quickly. But startups shouldn't pretend they're Google, thinking they can create the next Gmail in 20% time. Let's be real - your immediate need is probably just to keep the lights on right now.
Given the risk of nurturing talent only to have it poached, is it sustainable for companies to be so protective of their own internal talent pool rather than fostering a more collaborative industry-wide culture?
Ah, the pipe dream of industry-wide collaboration versus hoarding talent, that's cute. It's sweet to think we can all play nice, but let's face it, business is competitive. The market is a battleground, not a kindergarten field. While it could be beneficial in some fairytale world, companies have to look after their own interests first. You might foster an Einstein, only for him to jump ship to your rival. No thanks. Though there's risk, companies invest in nurturing talent because it directly benefits them, at least in the short-term. This isn't to say that some measure of collaboration isn't beneficial, but the current competitive landscape and the somewhat capricious nature of talent make it a balancing act not easily achieved. Ah, the idealist's dream, the realist's nightmare. Trust me, fostering a collaborative industry-wide culture is like herding cats. I'm all for learning, but companies exist to make a profit. The minute they start nurturing talent for others to reap the benefits is the minute they're signing their own layoff sheets. It's harsh out there in the corporate jungle, and no amount of Kumbaya is going to change that. Of course, negotiation of non-competes and investing in employee satisfaction could alleviate some poaching pains, but it's all a predatory game at the end of the day.
If the output isn't always immediate, doesn't that push organizations to reevaluate their measure of success and create a long-term vision rather than just focusing on short-term profit?
Oh, absolutely! Because businesses are just charities in disguise giving out free education, aren't they? Let's forget about survival and pleasing shareholders. Firms should totally move into being altruistic entities that focus on staff enrichment rather than profit. Right... Jokes aside, while long-term visions are crucial, they're not mutually exclusive with short-term profitability. The challenge is striking a balance. It's not about throwing the baby out with the bathwater when it comes to immediate output, but rather about integrating learning initiatives in a sustainable way. Let's not romanticize businesses as educational institutions.
Do you think there's any way to mimic the richness of experience within a condensed training format, or is it intrinsically impossible?
Would you advocate for systems in place that monitor individual employee training progress to avoid burnout, in a similar way we monitor and scale software systems?
What're your suggestions for supplemental means to expedite building non-technical skills that are necessary for solving complex problems and making design decisions?
In a rapidly evolving tech world, do you think there's a need to redefine 'sustainability' in the context of learning to keep pace?
Isn't it, however, safe to say that teaching AI ethics would stand as a fundamental basis, building a value structure that would guide future interactions with technology? After all, don't we also teach history to youngsters to avoid past errors?
Interesting points, but how do you ensure that these educative measures don't end up becoming another box to check throughout the professional journey? Won't these workshops and discussions eventually be seen as obligatory rather than genuinely insightful?
How do you reconcile that view with the reality that students today are engaging with AI technology daily in form of social media, recommendations algorithms, etc.? Isn't it already akin to being on the road while learning to drive?
You mention a change in mindset. Can you elaborate on what this looks like in practical terms within an AI focused company? How would a company measure it?
While instilling adaptability, independence, and critical thinking are essential, don't you think there's a need for a baseline understanding of AI and its potential impacts, to fuel informed critical thinking? Isn't there a risk of abstract 'adaptability' training without a solid grounding in actual emerging technology?
Do you suggest ethics should be part of the design and development process from the beginning rather than as a post-hoc analysis?
Isn’t it a responsibility of educators to anticipate and strategize for future challenges? Even if 'AI ethics' and 'AI impacts' teaching is a short term fix, wouldn't it be better than being unprepared?
Given your stance, how do you propose we move towards quantifying these 'deeper ethical issues' you mention without reducing them to binary measures?
Could you give examples of how to set focused learning goals tied to company objectives in a startup scenario?
How do you ensure that the time spent on upskilling aligns with the future goals of the startup, and doesn't divert critical resources from immediate needs?
How can a startup determine the balance point between immediate functional needs and long-term tech growth?
Do you have practical tips on creating a truly effective mentorship culture, given the typically high-pressure, high-pace environment of startups?
Given your views, how can companies strike a balance between nurturing internal talent and contributing to a wider industry collaboration?
Striking a balance is like walking a tightrope in a windstorm. Companies can nurture internal talent, but they should have proactive measures in place to retain them. Offer competitive benefits, opportunities for growth, and work-life balance. On the side of industry collaboration, it could be as simple as identifying non-proprietary aspects to share in open-source forums, or sponsoring industry-wide events. It cannot be a one-way street, though. Other companies need to be willing to reciprocate for true collaboration to happen. And remember, everyone loves the idea of sharing...until it's their turn.
Is there a scenario where nurturing talent doesn't directly lead to a 'jump-ship'? If so, what would that look like?
In a utopian universe, perhaps. But you're not going to remove the allure of a fatter paycheck or the promise of a better work-life balance dangled by competitors. The trick is to create a work environment too compelling to leave: competitive compensation, collaborative culture, opportunities for growth, effective leadership... the works! Even then, some might still jump ship, just hopefully less often. Here's the harsh truth: loyalty is often a variable of opportunity and satisfaction, and companies need to constantly navigate this equation. But failed attempts are better than poaching experiences, right?
What would be some effective ways to ensure the simultaneous fulfillment of short-term profitability and long-term learning initiatives?
Can you highlight any companies that have successfully managed to integrate sustainable learning initiatives without jeopardizing immediate output?
If you recognize that collaboration has its benefits, where would you draw the line between nurturing talent for the company's benefit and contributing to an industry-wide culture?
Drawing the line is the real trick, isn't it? Fair play to a company that wants to nurture talent - that's just smart. But is it realistic or even fair to expect that company to willingly give up its trained workforce for the so-called greater industry good? I reckon the balance lies not in developing industry-wide talent but in developing a culture of knowledge-sharing and openness while keeping the talent where it's been moulded. Encourage your people to present at conferences, contribute to open source projects, if just for the sheer corporate bragging rights. That's your sweet spot.
Considering the 'predatory game', do you believe there's a viable way to create a 'tame' corporate ecosystem or is it just survival of the fittest?
The dream of a 'tame' corporate ecosystem is indeed a romantic one, but in the end, it's survival of the fittest, baby, and the fittest have teeth. That's not to say there aren't opportunities for companies to find some middle ground - after all, a little bit of cooperation can grease the wheels. But going full 'kumbaya' isn't likely to fly. Companies will always prioritize self-preservation and growth over altruism, as much as we'd like to believe otherwise. So, the smart play? Place your bets on fostering a culture of innovation and resilience - that's what can really help you weather the storm.
Can you provide examples of companies successfully walking this 'tightrope', nurturing talent while openly collaborating? And, how have they protected their own interests while doing so?
In your opinion, how can a company encourage its competitors to reciprocate in open-sourced collaboration? How do we move beyond the 'sharing until it's my turn' mindset?
Would you argue that potentially high employee turnover is just the cost of doing business these days given the factors you discussed, or is it more of a symptom of poor management?
Arguably both. High turnover signals a flaw, be it in management or in industry norms. In tangible start-up environments, turnover is expected due to high pressure and burnout. Yet, in established environments, too much churn is indicative of poor management. The best companies skillfully balance competitive pressures with caring for employees' well-being. The reality is, if you're not giving your employees a compelling reason to stay, someone else will give them a reason to leave. Now does this make turnover the "cost of doing business"? Maybe. But if that's your default, you might want to re-evaluate your leadership strategies.
You've mentioned 'poaching experiences' as undesirable in comparison to a self-grown talent leaving. But isn't this what essentially most companies do; luring talents from another with attractive promises?
Ah, the age-old dance of attracting proven talent versus nurturing raw potential. Most companies do indeed engage in this dance, but it's a shallow victory. Seduced by shiny baubles, you might lure a talented engineer from a rival. However, what have you really won? A transient talent who'll probably leave the moment a better offer appears? Foster an environment where employees feel valuable, challenged, and content, and you won't just get commitment - you'll get innovation and growth. And that's a win not just for the company, but for the industry as a whole. | hansmoritzhafen | |
1,750,294 | MongoDB Quries | The provided script demonstrates several MongoDB operations, including data insertion, updates,... | 0 | 2024-02-03T02:49:19 | https://dev.to/avinashrepo/mongodb-quries-65d | webdev, javascript, programming, beginners | The provided script demonstrates several MongoDB operations, including data insertion, updates, deletions, aggregation, and queries. I will provide an overview of each operation with example data and expected output.
1. **Data Insertion:**
```javascript
db.students.insertMany([
{ id: 1, name: 'Ryan', gender: 'M' },
{ id: 2, name: 'Joanna', gender: 'F' },
{ id: 3, name: 'Rahul', gender: 'M' }
]);
```
2. **Profiling and Query Execution:**
```javascript
db.setProfilingLevel(1, { slowms: 100 });
db.system.profile.find().pretty();
```
3. **Inserting and Updating Email:**
```javascript
db.students.insert({ id: 3, name: 'Rahul', gender: 'M' }); // note: a student with id 3 was already inserted above, so this adds a duplicate document
db.students.update({ id: 1 }, { $set: { email: 'ak@gm.com' } });
db.students.update({ id: 2 }, { $set: { email: 'ak2@gm.com' } });
db.students.update({ id: 3 }, { $set: { email: 'ak3@gm.com' } });
```
4. **Upsert and Deletion:**
```javascript
db.students.update({ id: 11 }, { $set: { name: 'Sangita', gender: 'F' } }, { upsert: true });
db.students.deleteOne({ id: 2 });
```
5. **Inserting and Updating Marks:**
```javascript
db.marks.insertMany([
{ sid: 1, pass: 'Failed', marks: 32, ncc: 'Y' },
{ sid: 2, pass: 'Passed', marks: 40, ncc: 'N' },
{ sid: 3, pass: 'Passed', marks: 70, ncc: 'Y' }
]);
db.marks.updateMany({ ncc: 'Y' }, { $set: { award: 'yyes' } });
```
6. **Aggregation - Joining Students and Marks:**
```javascript
db.students.aggregate([
{
$lookup: { from: 'marks', localField: 'id', foreignField: 'sid', as: 'SMarks' }
},
{ $project: { _id: 0, SMarks: { _id: 0 } } }
]);
```
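Conceptually, `$lookup` here performs a left outer join: for each student document, it gathers the `marks` documents whose `sid` matches the student's `id` into an `SMarks` array. The join step can be sketched in plain JavaScript over the same sample data (purely illustrative; this is not how MongoDB evaluates the stage):

```javascript
// Sample data mirroring the collections above
const students = [
  { id: 1, name: 'Ryan', gender: 'M' },
  { id: 3, name: 'Rahul', gender: 'M' }
];
const marks = [
  { sid: 1, pass: 'Failed', marks: 32, ncc: 'Y' },
  { sid: 3, pass: 'Passed', marks: 70, ncc: 'Y' }
];

// Left outer join: every student is kept, with its matching marks in SMarks
// (a student with no matching marks would simply get an empty array)
const joined = students.map(s => ({
  ...s,
  SMarks: marks.filter(m => m.sid === s.id)
}));

console.log(joined[0].SMarks[0].marks); // 32
console.log(joined[1].SMarks[0].pass);  // Passed
```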
7. **Aggregation - Calculating Total Marks:**
```javascript
db.marks.aggregate([
{ $group: { _id: null, Tsum: { $sum: '$marks' } } },
{ $project: { _id: 0, Tsum: 1 } }
]);
db.marks.aggregate([
{ $group: { _id: null, Tavg: { $avg: '$marks' } } }
]);
```
8. **Aggregation - Additional Statistics and Reshaping:**
```javascript
db.marks.aggregate([
{
$group: {
_id: null,
Tmax: { $max: '$marks' },
Tmin: { $min: '$marks' },
TTsum: { $sum: '$marks' },
TTavg: { $avg: '$marks' },
Tcount: { $sum: 1 }
}
},
{
$addFields: { empInfo: { $ifNull: ['$empInfo', [{ msg: 'deleted by deletedOne' }]] } }
},
{ $project: { _id: 0 } }
]);
```
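For the three marks inserted earlier (32, 40, and 70), the `$group` accumulators should yield a max of 70, a min of 32, a sum of 142, a count of 3, and an average of about 47.33. A plain-JavaScript sanity check of those numbers (illustrative only, not how MongoDB computes them):

```javascript
const markValues = [32, 40, 70];

const stats = {
  Tmax: Math.max(...markValues),                 // 70
  Tmin: Math.min(...markValues),                 // 32
  TTsum: markValues.reduce((a, b) => a + b, 0),  // 142
  Tcount: markValues.length                      // 3
};
stats.TTavg = stats.TTsum / stats.Tcount;        // ~47.33

console.log(stats);
```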
9. **Replacing Document in Students Collection:**
```javascript
db.students.replaceOne(
{ email: 'ak3@gm.com' },
{
name: 'John Doe',
age: 26,
email: 'ak3@gm.com',
address: { city: 'New York', country: 'USA' }
}
);
```
10. **Example Queries:**
Uncommented queries for reference:
- `db.students.find()`
- `db.marks.find({ ncc: 'Y' })`
- `db.marks.find({ marks: { $gte: 40 } })`
- `db.marks.aggregate([{$match:{ncc:{$in:["Y"]}}}])`
- `db.marks.aggregate([{$match:{ncc:"N"}}])`
11. **Inserting Data into "mytags" Collection:**
```javascript
db.mytags.insertMany([
// Example data provided
]);
```
12. **Various Aggregation and Query Operations on "mytags":**
- Please refer to the comments and uncommented queries in the script for details.
Note: The actual output of queries may vary based on the specific data present in the database at the time of execution.
The provided script involves the insertion of data into the "mytags" collection and subsequent MongoDB queries and aggregations. I will provide an overview of each operation with example data and expected output.
1. **Inserting Data into "mytags" Collection:**
```javascript
db.mytags.insertMany([
{
"id": 1,
"title": "Introduction to MongoDB",
"tags": ["mongodb", "database"],
"status": "active",
"like": 50
},
{
"id": 2,
"title": "Exploring NoSQL Concepts",
"tags": ["nosql", "database"],
"status": "inactive",
"like": 250
},
{
"id": 3,
"title": "Building Scalable Applications",
"tags": ["php", "nosql", "scalability"],
"status": "inactive",
"like": 1050
},
{
"id": 4,
"title": "Data Modeling in MongoDB",
"tags": ["mongodb", "database", "modeling"],
"status": "active",
"like": 350
},
{
"id": 5,
"title": "Node with MongoDB",
"tags": ["nodejs", "sql", "react"],
"status": "active",
"like": "na"
}
]);
```
2. **Queries and Aggregations on "mytags":**
- Uncommented queries for reference:
- `db.mytags.find({like:{$gt:500},status:{$eq:"active"}})`
- `db.mytags.find({status:"inactive"})`
- `db.mytags.find({ status: { $eq: "active" } })`
- `db.mytags.aggregate([{$match:{status:"active",like:{$gt:51}}}])`
- `db.mytags.find({ title: { $regex: /^Building/ } })`
- `db.mytags.find({$or:[{status:"active",tags:{$in:["mongodb"]}}, {like:{$gt:1000}}]})`
- `db.mytags.find({$and:[{status:"inactive"}, {like:{$lt:251}}]})`
- `db.mytags.find().sort({ like: -1 })`
- `db.mytags.aggregate([{ $unwind: "$status" }])`
- `db.mytags.aggregate([{ $match: { like: { $gt: 50 } } }])`
- `db.mytags.aggregate([{ $skip: 2 },{ $limit: 3 }])`
- `db.mytags.aggregate([{ $facet: { averageLike: [{ $group: { _id: null, avgLike: { $avg: "$like" } } }], totalLike: [{ $group: { _id: null, totalQty: { $sum: "$like" } } }] } }])`
- `db.mytags.countDocuments({ like: { $gt: 50 } })`
- `db.mytags.distinct("status")`
- `db.mytags.distinct("tags")`
- `db.mytags.find({ like: { $type: "string" } })`
- `db.mytags.find({tags:{$in:["mongodb","database"]}})`
- `db.mytags.find({tags: { $all: ["mongodb", "database"] } })`
Note: The actual output of queries may vary based on the specific data present in the "mytags" collection at the time of execution. Ensure the data aligns with the examples provided for accurate results.
Let's go through the expected output for each query, based on the example data provided earlier.
1. **Query: `db.mytags.find({like:{$gt:500},status:{$eq:"active"}})`**
- Expected Output: Documents where 'like' is greater than 500 and 'status' is "active".
2. **Query: `db.mytags.find({status:"inactive"})`**
- Expected Output: Documents where 'status' is "inactive".
3. **Query: `db.mytags.find({ status: { $eq: "active" } })`**
- Expected Output: Documents where 'status' is "active".
4. **Query: `db.mytags.aggregate([{$match:{status:"active",like:{$gt:51}}}])`**
- Expected Output: Aggregated documents where 'status' is "active" and 'like' is greater than 51.
5. **Query: `db.mytags.find({ title: { $regex: /^Building/ } })`**
- Expected Output: Documents where 'title' starts with "Building".
6. **Query: `db.mytags.find({$or:[{status:"active",tags:{$in:["mongodb"]}}, {like:{$gt:1000}}]})`**
- Expected Output: Documents where 'status' is "active" and includes the tag "mongodb", or 'like' is greater than 1000.
7. **Query: `db.mytags.find({$and:[{status:"inactive"}, {like:{$lt:251}}]})`**
- Expected Output: Documents where 'status' is "inactive" and 'like' is less than 251.
8. **Query: `db.mytags.find().sort({ like: -1 })`**
- Expected Output: All documents sorted in descending order based on the 'like' field.
9. **Query: `db.mytags.aggregate([{ $unwind: "$status" }])`**
- Expected Output: Unwound documents based on the 'status' field.
10. **Query: `db.mytags.aggregate([{ $match: { like: { $gt: 50 } } }])`**
- Expected Output: Aggregated documents where 'like' is greater than 50.
11. **Query: `db.mytags.aggregate([{ $skip: 2 },{ $limit: 3 }])`**
- Expected Output: Skipped the first 2 documents and limited the result to 3 documents.
12. **Query: `db.mytags.aggregate([{ $facet: { averageLike: [{ $group: { _id: null, avgLike: { $avg: "$like" } } }], totalLike: [{ $group: { _id: null, totalQty: { $sum: "$like" } } }] } }])`**
- Expected Output: Aggregated result providing average and total of the 'like' field.
13. **Query: `db.mytags.countDocuments({ like: { $gt: 50 } })`**
- Expected Output: Count of documents where 'like' is greater than 50.
14. **Query: `db.mytags.distinct("status")`**
- Expected Output: Distinct values of the 'status' field.
15. **Query: `db.mytags.distinct("tags")`**
- Expected Output: Distinct values of the 'tags' field.
16. **Query: `db.mytags.find({ like: { $type: "string" } })`**
- Expected Output: Documents where the 'like' field is of type string.
17. **Query: `db.mytags.find({tags:{$in:["mongodb","database"]}})`**
- Expected Output: Documents where 'tags' includes "mongodb" or "database".
18. **Query: `db.mytags.find({tags: { $all: ["mongodb", "database"] } })`**
- Expected Output: Documents where 'tags' includes both "mongodb" and "database".
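The contrast between the last two queries is worth spelling out: `$in` matches a document if its `tags` array contains at least one of the listed values, while `$all` requires every listed value to be present. A plain-JavaScript sketch of those matching semantics over the sample data (an illustration only, not MongoDB's actual implementation):

```javascript
// Abbreviated documents mirroring the "mytags" collection above
const docs = [
  { id: 1, tags: ['mongodb', 'database'] },
  { id: 2, tags: ['nosql', 'database'] },
  { id: 5, tags: ['nodejs', 'sql', 'react'] }
];

// $in semantics: at least one queried value appears in the array
const matchesIn = (tags, query) => query.some(q => tags.includes(q));

// $all semantics: every queried value appears in the array
const matchesAll = (tags, query) => query.every(q => tags.includes(q));

const query = ['mongodb', 'database'];
const inIds = docs.filter(d => matchesIn(d.tags, query)).map(d => d.id);
const allIds = docs.filter(d => matchesAll(d.tags, query)).map(d => d.id);

console.log(inIds);  // [ 1, 2 ]
console.log(allIds); // [ 1 ]
```

Document 2 matches `$in` (it has "database") but fails `$all` (it lacks "mongodb"), which is why the two queries return different result sets.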
Please note that the actual output may vary based on the data in your MongoDB collection at the time of execution. | avinashrepo |
841,228 | What I Learned This Week - Week 2 | NOTE: I'm posting my personal journey of participating in a boot camp, as a person who... | 0 | 2021-09-26T15:40:43 | https://dev.to/suhcrates/what-i-learned-this-week-week-2-3bi0 | ## NOTE:
I'm posting my personal journey of participating in a boot camp, as a person who recently seriously started to learn programming. So it might not contain useful information at the moment. But hopefully, after four months, I will post things that are encouraging to aspiring developers, with current posts as evidence of how much I have grown since the beginning.
## About learning
It's been two weeks since the boot camp started, and I feel like I have learned a lot more in these past two weeks than in the several months I spent learning on my own, on and off.
My motivation for joining the boot camp was to put myself in an immersive learning environment with people who are also motivated. I've tried self-learning, but no matter how diligent I was, I realized my progress couldn't be as fast as that of someone who learns in a community with a wealth of information and with people who are very motivated. After all, humans are social creatures, and we learn from one another.
Before joining the boot camp, I did some due diligence by reading blogs from people who had participated previously, and I sensed that the teaching quality might not be as good as I hoped it would be. (I was quite right. Rather than explaining why and how things work the way they do, their teaching style is more like "Don't try to understand. Just do it.")
But despite the shortcoming, I am quite pleased with how much I have learned and progressed over the past two weeks. Because of the assignment deadlines and the other people working hard in the same space (well, a virtual space — we are learning in a metaverse called Gather Town, where we each have our own avatar and communicate via webcams), I push myself more than when I was learning alone. For any shortfall in the learning materials, I google it, and apparently one of the virtues of being a good developer is being good at googling.
They say "If you want to go fast, go alone. If you want to go farther, go together." (Not sure about the origin. Some say it's an African proverb.)
So, I think I made the right decision.
| suhcrates | |
1,750,305 | JS ES6 Features vs Traditional Way - Part 1 | These examples showcase the usage of various ES6 features for more concise and readable... | 0 | 2024-02-03T03:16:58 | https://dev.to/avinashrepo/js-es6-features-2d4m | webdev, javascript, es6, beginners |
These examples showcase the usage of various ES6 features for more concise and readable code.
1. **Popular Features of ES6:**
ES6, or ECMAScript 2015, introduced several features to enhance JavaScript. Some popular features include:
- **let and const:** Block-scoped variable declarations.
- **Arrow Functions:** Concise syntax for function expressions.
- **Classes:** Syntactical sugar over the prototype-based inheritance.
- **Template Literals:** Enhanced string interpolation.
- **Destructuring:** Extracting values from arrays or objects.
- **Default Parameters:** Assigning default values to function parameters.
- **Rest Parameter:** Representing an indefinite number of arguments as an array.
- **Spread Operator:** Spreading elements of an array or object.
- **Promises:** Handling asynchronous operations more effectively.
- **Modules:** Encapsulation and organization of code in separate files.
Here are examples demonstrating each of these features:
1. **let and const:**
```javascript
let x = 10;
const PI = 3.1415;
x = 20; // Valid for let
// PI = 3.14; // Error, const cannot be reassigned
```
2. **Arrow Functions:**
```javascript
// Regular function
function add(a, b) {
return a + b;
}
// Arrow function
const addArrow = (a, b) => a + b;
```
3. **Classes:**
```javascript
class Animal {
constructor(name) {
this.name = name;
}
speak() {
console.log(`${this.name} makes a sound`);
}
}
const dog = new Animal('Dog');
dog.speak(); // Output: Dog makes a sound
```
4. **Template Literals:**
```javascript
const name = 'John';
const greeting = `Hello, ${name}!`;
console.log(greeting); // Output: Hello, John!
```
5. **Destructuring:**
```javascript
const person = { name: 'Alice', age: 30 };
const { name, age } = person;
console.log(name, age); // Output: Alice 30
```
6. **Default Parameters:**
```javascript
function greet(name = 'Guest') {
console.log(`Hello, ${name}!`);
}
greet(); // Output: Hello, Guest!
greet('Bob'); // Output: Hello, Bob!
```
7. **Rest Parameter:**
```javascript
function sum(...numbers) {
return numbers.reduce((acc, num) => acc + num, 0);
}
console.log(sum(1, 2, 3)); // Output: 6
```
8. **Spread Operator:**
```javascript
const array1 = [1, 2, 3];
const array2 = [...array1, 4, 5];
console.log(array2); // Output: [1, 2, 3, 4, 5]
```
9. **Promises:**
```javascript
const fetchData = () => {
return new Promise((resolve, reject) => {
// Simulating asynchronous operation
setTimeout(() => {
const data = 'Some data';
resolve(data);
// reject('Error occurred'); // Uncomment to simulate rejection
}, 1000);
});
};
fetchData()
.then(data => console.log(data))
.catch(error => console.error(error));
```
10. **Modules:**
```javascript
// file1.js
export const add = (a, b) => a + b;
// file2.js
import { add } from './file1.js';
console.log(add(3, 4)); // Output: 7
```
2. **Object-Oriented Features in ES6:**
ES6 supports several object-oriented features, including:
- **Classes and Constructors:** A more structured way to define and instantiate objects.
- **Inheritance:** The `extends` keyword facilitates prototype-based inheritance.
- **Getter and Setter Methods:** Control access to object properties.
- **Static Methods:** Methods that belong to the class rather than instances.
- **Super Keyword:** Refers to the parent class.
Below is an example demonstrating these features in ES6 syntax:
```javascript
// Class definition with a constructor
class Vehicle {
constructor(brand) {
this.brand = brand;
}
// Getter method
getBrand() {
return this.brand;
}
// Setter method
setBrand(brand) {
this.brand = brand;
}
// Static method
static displayInfo() {
console.log("This is a Vehicle class.");
}
}
// Subclass inheriting from Vehicle
class Car extends Vehicle {
constructor(brand, year) {
super(brand);
this.year = year;
}
// Overriding the getter method from the parent class
getBrand() {
return `Car brand: ${super.getBrand()}`;
}
}
// Instantiating an object of the Car class
const myCar = new Car("Toyota", 2022);
// Accessing and displaying information using getter methods
console.log(`Brand: ${myCar.getBrand()}`);
console.log(`Year: ${myCar.year}`);
// Using a static method of the parent class
Vehicle.displayInfo();
```
In this JavaScript ES6 example:
1. The `Vehicle` class and its methods are defined using the class syntax.
2. The `Car` class extends `Vehicle` and utilizes the `super` keyword in its constructor.
3. The getter method in the `Car` class overrides the one in the parent class, demonstrating polymorphism.
4. The final part of the example showcases instantiation of a `Car` object and access to its properties and methods.
3. **Comparison Between ES5 and ES6:**
- ES6 introduced let and const for block-scoped variables, whereas ES5 primarily used var.
- ES6 provides arrow functions, making syntax concise for function expressions.
- ES6 introduced classes, enhancing object-oriented programming.
- Template literals in ES6 simplify string interpolation compared to concatenation in ES5.
- Destructuring and default parameters enhance code readability in ES6.
- Promises in ES6 offer a cleaner alternative to callback-based asynchronous operations.
Let's illustrate the differences with concise code examples:
1. **Block-Scoped Variables (var vs. let/const):**
ES5 (var):
```javascript
for (var i = 0; i < 5; i++) {
// Some code
}
console.log(i); // Outputs: 5
```
ES6 (let/const):
```javascript
for (let j = 0; j < 5; j++) {
// Some code
}
console.log(j); // ReferenceError: j is not defined
```
2. **Arrow Functions:**
ES5:
```javascript
var add = function(a, b) {
return a + b;
};
```
ES6:
```javascript
const add = (a, b) => a + b;
```
3. **Classes:**
ES5 (Prototype-based approach):
```javascript
function Car(brand) {
this.brand = brand;
}
Car.prototype.getBrand = function() {
return this.brand;
};
var myCar = new Car('Toyota');
```
ES6:
```javascript
class Car {
constructor(brand) {
this.brand = brand;
}
getBrand() {
return this.brand;
}
}
const myCar = new Car('Toyota');
```
4. **Template Literals:**
ES5:
```javascript
var name = 'John';
console.log('Hello, ' + name + '!');
```
ES6:
```javascript
const name = 'John';
console.log(`Hello, ${name}!`);
```
5. **Destructuring and Default Parameters:**
ES5:
```javascript
function processData(data) {
var username = data.username || 'Guest';
var age = data.age || 25;
// Some code
}
```
ES6:
```javascript
   function processData({ username = 'Guest', age = 25 } = {}) {
// Some code
}
```
6. **Promises:**
ES5 (Callback-based):
```javascript
function fetchData(callback) {
// Asynchronous operation
setTimeout(function() {
callback('Data received');
}, 1000);
}
fetchData(function(data) {
console.log(data);
});
```
ES6 (Promise-based):
```javascript
function fetchData() {
return new Promise(function(resolve) {
// Asynchronous operation
setTimeout(function() {
resolve('Data received');
}, 1000);
});
}
fetchData().then(function(data) {
console.log(data);
});
```
These examples showcase the evolution and simplification of code in ES6 compared to its predecessor, ES5.
4. **let vs. const vs. var:**
- **let:** Allows reassignment, block-scoped.
- **const:** Does not allow reassignment, block-scoped.
- **var:** Function-scoped, allows redeclaration and reassignment.
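A quick sketch of the scoping difference — `var` is hoisted to the enclosing function and tolerates redeclaration, while `let` stays confined to its block:

```javascript
// var is function-scoped; let is block-scoped
function demo() {
  if (true) {
    var a = 1; // visible throughout the whole function
    let b = 2; // visible only inside this block
  }
  // console.log(b); // ReferenceError: b is not defined
  return a;          // 1 — var "leaks" out of the block
}

var x = 1;
var x = 2;           // redeclaring with var is allowed
// let x = 3;        // SyntaxError: identifier already declared

console.log(demo()); // 1
console.log(x);      // 2
```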
5. **Arrow Function:**
- Arrow functions provide a concise syntax for writing function expressions.
   - They lexically bind `this`, inheriting it from the enclosing scope instead of defining their own.
- Ideal for shorter functions and anonymous functions.
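A minimal sketch of lexical `this` in practice — the arrow function inside the method captures the object's `this` rather than creating its own:

```javascript
// Arrow functions do not get their own `this`; they capture it
// from the enclosing scope — handy inside object methods
const counter = {
  count: 0,
  increment() {
    const bump = () => { this.count += 1; }; // `this` is `counter`
    bump();
    return this.count;
  }
};

console.log(counter.increment()); // 1
console.log(counter.increment()); // 2
```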
6. **When Not to Use Arrow Functions:**
- Avoid arrow functions when dynamic context binding (`this`) is needed.
- Not suitable for methods within objects due to lexical `this`.
Arrow functions in JavaScript offer concise syntax and lexical scoping, but they are not suitable for all scenarios. The examples below illustrate both correct and erroneous use of arrow functions in each situation:
1. **Methods within Objects:**
Correct Use:
```javascript
const person = {
name: 'John',
sayHello: function() {
console.log(`Hello, ${this.name}`);
}
};
person.sayHello(); // Outputs: Hello, John
```
Erroneous Use:
```javascript
const person = {
name: 'John',
sayHello: () => {
console.log(`Hello, ${this.name}`);
}
};
person.sayHello(); // Outputs: Hello, undefined (incorrect behavior)
```
2. **Functions Requiring Their Own 'this':**
Correct Use:
```javascript
const button = document.getElementById('myButton');
button.addEventListener('click', function() {
console.log('Button clicked:', this); // 'this' refers to the button element
});
```
Erroneous Use:
```javascript
const button = document.getElementById('myButton');
button.addEventListener('click', () => {
console.log('Button clicked:', this); // 'this' does not refer to the button element
});
```
3. **Constructor Functions:**
Correct Use:
```javascript
function MyObject() {
this.value = 42;
}
const obj = new MyObject();
```
Erroneous Use:
```javascript
const MyObject = () => {
this.value = 42; // 'this' does not bind correctly
};
const obj = new MyObject(); // TypeError: MyObject is not a constructor
```
4. **Functions with Dynamic 'arguments':**
Correct Use:
```javascript
function sum() {
let total = 0;
for (const arg of arguments) {
total += arg;
}
return total;
}
console.log(sum(1, 2, 3)); // Outputs: 6
```
Erroneous Use:
```javascript
const sum = () => {
let total = 0;
for (const arg of arguments) { // 'arguments' object is not available in arrow functions
total += arg;
}
return total;
};
console.log(sum(1, 2, 3)); // ReferenceError: arguments is not defined
```
These examples highlight the correct and erroneous usage of arrow functions in various scenarios, emphasizing the importance of choosing the appropriate function syntax based on the specific requirements of the code.
7. **Generator Function:**
- A special type of function that can be paused and resumed.
    - Implemented using the `function*` syntax and the `yield` keyword.
- Useful for asynchronous operations and iterating over large datasets.
Here's a straightforward example of a generator function:
```javascript
// Generator function definition
function* simpleGenerator() {
yield 'First';
yield 'Second';
yield 'Third';
}
// Creating a generator object
const generator = simpleGenerator();
// Accessing values using the generator
console.log(generator.next().value); // Outputs: 'First'
console.log(generator.next().value); // Outputs: 'Second'
console.log(generator.next().value); // Outputs: 'Third'
console.log(generator.next().value); // Outputs: undefined
```
A generator function is a special type of function in JavaScript that lets you control execution flow with the `yield` keyword. In the example above:

- `function*` declares a generator function.
- `yield` produces values one at a time.
- Each call to `.next()` runs the generator until it reaches the next `yield` statement and returns the yielded value.
- The generator keeps track of its internal state, so execution resumes from where it left off.
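Because a generator pauses at each `yield`, it can even safely model an infinite sequence — values are produced only when requested. A small sketch:

```javascript
// An infinite generator: the while(true) loop never "runs forever"
// because execution pauses at every yield
function* naturals() {
  let n = 1;
  while (true) yield n++;
}

const firstThree = [];
for (const n of naturals()) {   // generators are iterable with for...of
  firstThree.push(n);
  if (firstThree.length === 3) break; // stop consuming
}
console.log(firstThree); // [1, 2, 3]
```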
8. **Spread Operator:**
- The spread operator (`...`) expands elements of an iterable (array, string, etc.) into individual elements.
- Useful for array/object manipulation, function arguments, and creating shallow copies.
Let's compare the use of the spread operator in ES6 with a more traditional approach.
**ES6 - Spread Operator:**
The spread operator (`...`) in ES6 allows for the expansion of iterable elements, such as arrays or strings. Here's an example:
```javascript
// Using the spread operator to concatenate arrays
const array1 = [1, 2, 3];
const array2 = [4, 5, 6];
const concatenatedArray = [...array1, ...array2];
console.log(concatenatedArray); // Outputs: [1, 2, 3, 4, 5, 6]
```
In this example, the spread operator is used to create a new array (`concatenatedArray`) by combining the elements of `array1` and `array2`.
**Traditional Approach:**
Prior to ES6, you might achieve the same result using methods like `concat` or `slice`:
```javascript
// Using the concat method to concatenate arrays (traditional approach)
const array1 = [1, 2, 3];
const array2 = [4, 5, 6];
const concatenatedArray = array1.concat(array2);
console.log(concatenatedArray); // Outputs: [1, 2, 3, 4, 5, 6]
```
This traditional approach involves using a method (`concat` in this case) to combine arrays.
**ES6 - Spread Operator with Objects:**
The spread operator can also be used with objects to shallow copy and merge them:
```javascript
// Using the spread operator with objects
const obj1 = { a: 1, b: 2 };
const obj2 = { c: 3, d: 4 };
const mergedObject = { ...obj1, ...obj2 };
console.log(mergedObject); // Outputs: { a: 1, b: 2, c: 3, d: 4 }
```
**Traditional Approach with Objects:**
A traditional approach might involve using a loop or methods like `Object.assign`:
```javascript
// Using Object.assign to merge objects (traditional approach)
const obj1 = { a: 1, b: 2 };
const obj2 = { c: 3, d: 4 };
const mergedObject = Object.assign({}, obj1, obj2);
console.log(mergedObject); // Outputs: { a: 1, b: 2, c: 3, d: 4 }
```
The spread operator in ES6 provides a more concise and readable syntax for achieving similar operations, especially when working with arrays and objects.
9. **Destructuring in ES6:**
- A concise way to extract values from arrays or properties from objects.
- Enhances readability and simplifies assignment.
Destructuring assignment in ES6 allows for the extraction of values from arrays or properties from objects in a concise and readable manner. Let's compare it with more traditional approaches.
**ES6 - Destructuring in Arrays:**
```javascript
// Destructuring assignment in arrays
const numbers = [1, 2, 3, 4, 5];
// Extracting values using destructuring
const [first, second, ...rest] = numbers;
console.log(first); // Outputs: 1
console.log(second); // Outputs: 2
console.log(rest); // Outputs: [3, 4, 5]
```
**Traditional Approach - Arrays:**
In a more traditional approach, you might access array elements using indexing:
```javascript
// Accessing array elements using indexing (traditional approach)
const numbers = [1, 2, 3, 4, 5];
const first = numbers[0];
const second = numbers[1];
const rest = numbers.slice(2);
console.log(first); // Outputs: 1
console.log(second); // Outputs: 2
console.log(rest); // Outputs: [3, 4, 5]
```
**ES6 - Destructuring in Objects:**
```javascript
// Destructuring assignment in objects
const person = { name: 'John', age: 30, city: 'New York' };
// Extracting properties using destructuring
const { name, age, country = 'USA' } = person;
console.log(name); // Outputs: John
console.log(age); // Outputs: 30
console.log(country); // Outputs: USA (default value assigned)
```
**Traditional Approach - Objects:**
In a traditional approach, you might access object properties using dot notation:
```javascript
// Accessing object properties using dot notation (traditional approach)
const person = { name: 'John', age: 30, city: 'New York' };
const name = person.name;
const age = person.age;
const country = person.country || 'USA';
console.log(name); // Outputs: John
console.log(age); // Outputs: 30
console.log(country); // Outputs: USA (default value assigned)
```
Destructuring in ES6 provides a cleaner and more concise syntax, especially when dealing with complex data structures. It allows for better readability and reduces the need for repetitive syntax seen in traditional approaches.
10. **Promises in ES6:**
- Objects representing the eventual completion or failure of an asynchronous operation.
- Consists of `resolve` and `reject` functions, handling success and failure scenarios.
- Easier handling of asynchronous code compared to callbacks.
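A minimal sketch of chaining: each `.then` receives the previous handler's return value, and a single `.catch` handles a rejection anywhere in the chain. Here `fetchUser` is a made-up stand-in for any asynchronous operation:

```javascript
// Hypothetical async helper: resolves for valid ids, rejects otherwise
function fetchUser(id) {
  return id > 0
    ? Promise.resolve({ id, name: 'Alice' })
    : Promise.reject(new Error('invalid id'));
}

fetchUser(1)
  .then(user => user.name.toUpperCase()) // transform the resolved value
  .then(name => console.log(name))       // Output: ALICE
  .catch(err => console.error(err.message)); // handles any rejection above
```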
11. **Rest Parameter in ES6:**
- Allows representing an indefinite number of function arguments as an array.
- Improves flexibility in function declarations by handling variable-length argument lists.
The rest parameter in ES6 allows functions to accept an indefinite number of arguments as an array. Let's compare its usage with a more traditional approach.
**ES6 - Rest Parameter:**
```javascript
// Rest parameter in ES6
function sum(...numbers) {
return numbers.reduce((acc, num) => acc + num, 0);
}
console.log(sum(1, 2, 3, 4)); // Outputs: 10
```
In this example, the `...numbers` syntax is used as the rest parameter, collecting any number of arguments passed to the function into the `numbers` array.
**Traditional Approach:**
In a more traditional approach, you might use the `arguments` object:
```javascript
// Using arguments object (traditional approach)
function sum() {
const args = Array.from(arguments);
return args.reduce((acc, num) => acc + num, 0);
}
console.log(sum(1, 2, 3, 4)); // Outputs: 10
```
Here, `arguments` is converted to an array using `Array.from(arguments)`. This is necessary because the `arguments` object is array-like but not a real array.
**Note:**
1. The rest parameter allows for a more straightforward and intuitive syntax for collecting variable arguments.
2. The `arguments` object is not a true array, and using it involves additional steps for conversion.
The rest parameter in ES6 provides a more modern and concise way to handle variable arguments in functions. It is especially beneficial when dealing with functions that need to accept an unspecified number of parameters.
12. **Template Literals in ES6:**
- Strings enclosed by backticks (``) that allow embedded expressions.
- Improve code readability and simplify string concatenation compared to traditional methods.
Template literals in ES6 provide a more concise and expressive way to work with strings compared to traditional string concatenation or interpolation methods. Let's compare the usage of template literals with a more traditional approach.
**ES6 - Template Literals:**
```javascript
// Template literals in ES6
const name = 'John';
const age = 30;
const greeting = `Hello, my name is ${name} and I am ${age} years old.`;
console.log(greeting); // Outputs: Hello, my name is John and I am 30 years old.
```
In this example, the `${}` syntax within backticks allows for the easy embedding of variables and expressions directly into the string.
**Traditional Approach:**
In a more traditional approach, you would use string concatenation:
```javascript
// Using string concatenation and interpolation (traditional approach)
const name = 'John';
const age = 30;
const greeting = 'Hello, my name is ' + name + ' and I am ' + age + ' years old.';
console.log(greeting); // Outputs: Hello, my name is John and I am 30 years old.
```
**Key Differences:**
1. Template literals offer a cleaner and more readable syntax for string interpolation, especially when dealing with multiple variables or expressions.
2. They allow for the easy inclusion of line breaks within the string.
Overall, template literals in ES6 enhance the way strings are handled in JavaScript, providing a more modern and flexible approach compared to traditional string concatenation.
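One consequence worth showing: template literals preserve line breaks, so multi-line strings need no `\n` concatenation:

```javascript
// A multi-line string written directly, with real newlines preserved
const list = `
  <ul>
    <li>first</li>
    <li>second</li>
  </ul>`;

console.log(list.includes('\n')); // true — the line breaks are part of the string
```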
13. **Why Use ES6 Classes:**
- Provides a more intuitive and structured syntax for object-oriented programming.
- Encourages code organization and reuse through inheritance.
**Why Use ES6 Classes:**
1. **Intuitive and Structured Syntax:**
- **ES6 Classes:**
```javascript
class Car {
constructor(brand) {
this.brand = brand;
}
displayInfo() {
console.log(`Brand: ${this.brand}`);
}
}
```
- **Traditional Way:**
```javascript
function Car(brand) {
this.brand = brand;
}
Car.prototype.displayInfo = function() {
console.log('Brand: ' + this.brand);
};
```
ES6 classes provide a more intuitive and concise syntax for defining classes, making the code structure clearer and more aligned with the expectations of developers familiar with class-based languages.
2. **Encourages Code Organization and Reuse through Inheritance:**
- **ES6 Classes:**
```javascript
class Vehicle {
constructor(type) {
this.type = type;
}
displayType() {
console.log(`Type: ${this.type}`);
}
}
class Car extends Vehicle {
constructor(brand, type) {
super(type);
this.brand = brand;
}
displayInfo() {
console.log(`Brand: ${this.brand}`);
}
}
```
- **Traditional Way:**
```javascript
function Vehicle(type) {
this.type = type;
}
Vehicle.prototype.displayType = function() {
console.log('Type: ' + this.type);
};
function Car(brand, type) {
Vehicle.call(this, type);
this.brand = brand;
}
Car.prototype = Object.create(Vehicle.prototype);
Car.prototype.constructor = Car;
Car.prototype.displayInfo = function() {
console.log('Brand: ' + this.brand);
};
```
ES6 classes simplify the process of inheritance, making it more readable and reducing the boilerplate code required in the traditional prototype-based approach. This encourages the creation of well-organized and reusable code.
In summary, using ES6 classes provides a cleaner, more readable syntax for object-oriented programming, making code organization and reuse through inheritance more straightforward compared to the traditional prototype-based approach.
14. **Creating a Class in ES6:**
- Use the `class` keyword followed by a class name.
- Define a constructor method for initializing instances.
- Add methods within the class for functionality.
**Creating a Class in ES6:**
```javascript
// ES6 Class
class Animal {
constructor(name, sound) {
this.name = name;
this.sound = sound;
}
makeSound() {
console.log(`${this.name} says ${this.sound}`);
}
}
// Creating an instance of the class
const dog = new Animal('Dog', 'Woof');
dog.makeSound(); // Outputs: Dog says Woof
```
**Traditional Way (Constructor Function and Prototypes):**
```javascript
// Constructor Function
function Animal(name, sound) {
this.name = name;
this.sound = sound;
}
// Prototype Method
Animal.prototype.makeSound = function() {
console.log(this.name + ' says ' + this.sound);
};
// Creating an instance of the constructor function
const dog = new Animal('Dog', 'Woof');
dog.makeSound(); // Outputs: Dog says Woof
```
**Key Differences:**
1. **Syntax:**
- ES6 classes provide a more concise and expressive syntax for defining classes.
- The traditional approach involves using a constructor function and defining methods on the prototype.
2. **Class Inheritance:**
- ES6 classes have a built-in `extends` keyword for class inheritance.
- In the traditional approach, you need to use the `prototype` chain for inheritance.
3. **Constructor:**
- In ES6 classes, the constructor is defined using the `constructor` method.
- In the traditional approach, the constructor is the function itself.
4. **Readability:**
- ES6 classes offer improved readability, especially for developers familiar with class-based languages.
- The traditional approach may involve more boilerplate code and may be less intuitive for those new to JavaScript.
5. **Syntactic Sugar:**
- ES6 classes are considered syntactic sugar over the traditional prototype-based approach, providing a more convenient and modern syntax.
In summary, while both approaches achieve the same result, ES6 classes offer a cleaner and more intuitive syntax, making the code more readable and aligning it with class-based programming conventions. The traditional prototype-based approach is still relevant, especially in environments without ES6 support. However, ES6 classes are widely adopted for their improved syntax and features.
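The "syntactic sugar" point can be made concrete: an ES6 class is still a function under the hood, and its methods still live on the prototype, just as in the ES5 pattern above:

```javascript
class Point {
  constructor(x, y) { this.x = x; this.y = y; }
  norm() { return Math.hypot(this.x, this.y); }
}

console.log(typeof Point);                           // 'function'
console.log(Point.prototype.hasOwnProperty('norm')); // true — methods live on the prototype
console.log(new Point(3, 4).norm());                 // 5
```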
15. **Class Expression:**
- Similar to class declaration but can be unnamed.
- Can be assigned to a variable.
**Class Expression in ES6:**
```javascript
// Class Expression
const Animal = class {
constructor(name, sound) {
this.name = name;
this.sound = sound;
}
makeSound() {
console.log(`${this.name} says ${this.sound}`);
}
};
// Creating an instance of the class
const dog = new Animal('Dog', 'Woof');
dog.makeSound(); // Outputs: Dog says Woof
```
**Traditional Way (Constructor Function Expression):**
```javascript
// Constructor Function Expression
const Animal = function(name, sound) {
this.name = name;
this.sound = sound;
};
// Prototype Method
Animal.prototype.makeSound = function() {
console.log(this.name + ' says ' + this.sound);
};
// Creating an instance of the constructor function
const dog = new Animal('Dog', 'Woof');
dog.makeSound(); // Outputs: Dog says Woof
```
**Key Differences:**
1. **Syntax:**
- In ES6, a class expression is used, similar to a class declaration but without a name.
- The traditional approach involves creating a constructor function expression.
2. **Assignment to a Variable:**
- Class expressions can be assigned to variables, allowing for flexibility in usage.
- In the traditional approach, constructor function expressions are also commonly assigned to variables for reuse.
3. **Unnamed:**
- Class expressions are unnamed, as the class itself is defined in the variable assignment.
- Constructor function expressions can also be unnamed, depending on how they are used.
4. **Readability:**
- Class expressions can provide a cleaner syntax when creating classes on the fly or within limited scope.
- Constructor function expressions are flexible but may involve more verbose syntax.
In summary, class expressions in ES6 offer a concise and flexible way to define classes without the need for a separate declaration. Both class expressions and constructor function expressions are viable options, with the choice depending on specific coding requirements and stylistic preferences. | avinashrepo |
1,750,338 | React | What is React? It is a JavaScript library for building user interfaces on the web or... | 0 | 2024-02-03T04:27:49 | https://dev.to/abrilskop/react-4klg | react, reactnative, javascript | **What is React?**
It is a JavaScript library for building user interfaces on the web or mobile.
- React is declarative and component-based.
- It is universal: it runs on both the server and the client, allowing us to reuse the same code.
- It became open source in 2013.
- It belongs to Meta (Facebook, WhatsApp).
- Created in 2012 by Jordan Walke, because working with forms was very difficult.
- It guards against XSS attacks (code-injection attempts).
**Why learn React?**
1. It is the most in-demand framework in the world.
2. It lets you build desktop, web, and mobile applications.
3. Active maintenance.
4. Learning is easier if you already know another framework.
5. Changes in React are easier to implement.
6. A large community.
**Common React questions**
https://www.reactjs.wiki/
**Documentation**
https://es.react.dev/learn | abrilskop |
1,750,343 | 789win4net | 789win4.net is the official website of 789Win in Asia today, specializing in updating the newest links... | 0 | 2024-02-03T05:13:21 | https://dev.to/789win4net/789win4net-5an1 | 789win4.net is the official website of 789Win in Asia today, providing the newest and most accurate 789Win login links for members.
Address: 154 D. Nguyen Van Cu, Phuong Nguyen Cu Trinh, Quan 1, Ho Chi Minh City, Vietnam
Email: 789win4.net@gmail.com
Website: https://789win4.net/
Phone: (+63) 9671186205
#789win #789win4 #789win4net #789wincasino #nhacai789win
Social links:
https://789win4.net/gioi-thieu-789win/
https://789win4.net/chinh-sach-bao-mat/
https://789win4.net/dieu-khoan-va-dieu-kien/
https://789win4.net/lien-he-789win/
https://789win4.net/dang-ky-789win/
https://789win4.net/khuyen-mai-789win/
https://789win4.net/nap-tien-789win/
https://789win4.net/rut-tien-789win/
https://789win4.net/tai-app-789win/
https://789win4.net/dang-nhap-789win/
https://www.facebook.com/789win4.net
https://twitter.com/789win4net
https://vimeo.com/789win4net
https://www.pinterest.com/789win4net/
https://www.tumblr.com/789win4net
https://www.twitch.tv/789win4net/about
https://www.reddit.com/user/789win4net/
https://www.youtube.com/@789win4net
https://500px.com/p/789win4net?view=photos
https://www.flickr.com/people/789win4net/
https://gravatar.com/789win4net
https://about.me/net789win4
| 789win4net |
1,750,382 | Transform Text to Speech in Python: Python challenge 14 | https://youtu.be/WYsletlSOLo | 0 | 2024-02-03T05:44:42 | https://dev.to/ruthrina/transform-text-to-speech-in-python-python-challenge-14-4bdo | https://youtu.be/WYsletlSOLo | ruthrina | |
1,750,421 | Disney Plus Transforming Entertainment in the Digital Age | In the era of digital dominance, Disney Plus has emerged as a transformative force, reshaping the... | 0 | 2024-02-03T07:21:39 | https://dev.to/martin45785/disney-plus-transforming-entertainment-in-the-digital-age-718 | disney, pluss, begin | In the era of digital dominance, Disney Plus has emerged as a transformative force, reshaping the landscape of entertainment. This article delves into the ways Disney Plus has redefined the streaming experience, exploring its content strategy, technological innovations, and the cultural impact it has had on audiences worldwide.
**A Content Symphony**
**Aggregation of Disney's Rich Legacy**
At the core of [disneyplus begin](https://disneyplusdisney.com/begin) lies an extensive library that serves as a testament to the rich legacy of Disney. The platform strategically aggregates content from Disney's expansive portfolio, curating a symphony of animation, live-action, and documentary-style storytelling. From the golden age classics to contemporary hits, Disney Plus invites subscribers on a journey through decades of entertainment history.
**The Allure of Exclusive Content**
In addition to its timeless classics, Disney Plus entices subscribers with exclusive content. Original series, documentaries, and movies created specifically for the platform have become a hallmark of its offerings. This exclusivity not only sets Disney Plus apart but also positions it as a pioneer in producing high-quality, in-house entertainment.
**Technological Marvels**
**The 4K UHD Revolution**
Disney Plus has embraced the technological advancements in streaming quality, offering select content in stunning 4K Ultra High Definition. This commitment to visual excellence elevates the viewer's experience, allowing them to witness the magic of Disney in unparalleled clarity and detail. The visual feast provided by Disney Plus is a testament to its dedication to staying at the forefront of streaming technology.
**Cross-Device Compatibility**
Recognizing the diverse ways audiences consume content, Disney Plus ensures cross-device compatibility. Whether on a smart TV, tablet, laptop, or smartphone, subscribers can seamlessly transition between devices, bringing the magic of Disney to the fingertips of viewers around the world. This flexibility contributes to the platform's accessibility and user-friendly experience.
**Original Programming Brilliance**
**A Renaissance in Originals**
Disney Plus has ushered in a renaissance of original programming, crafting narratives that extend beyond the confines of traditional cinema. Original series like "The Mandalorian" and "WandaVision" have not only captivated audiences but have also redefined the storytelling landscape, proving that streaming platforms can be a breeding ground for innovative and groundbreaking content.
**Diverse Perspectives and Voices**
The platform actively embraces diverse perspectives and voices in its original programming. By amplifying underrepresented stories and talents, Disney Plus contributes to a more inclusive and representative entertainment industry. Initiatives like the Disney Launchpad program provide a platform for emerging filmmakers to showcase their unique narratives, further enriching the platform's content mosaic.
**Global Outreach**
**Bridging Cultures and Languages**
Disney Plus's global outreach goes beyond language barriers, offering content in multiple languages. This approach ensures that the magic of Disney transcends cultural differences, providing a universal viewing experience. The platform's commitment to localization extends to regional content, fostering a deeper connection with audiences from diverse linguistic backgrounds.
**Cultural Integration and Sensitivity**
As Disney Plus expands its reach globally, it demonstrates cultural integration and sensitivity. The inclusion of culturally relevant content and the adaptation of storytelling to resonate with specific regions showcase a thoughtful approach to global expansion. Disney Plus not only brings its magic to new audiences but also integrates seamlessly into different cultural landscapes.
**Community Building and Interaction**
**Engaging Audiences Beyond Streaming**
Disney Plus goes beyond being a mere streaming platform; it fosters a sense of community and interaction. Through interactive features, watch parties, and social media engagement, the platform encourages audiences to actively participate in the Disney narrative. This sense of community building extends the Disney experience beyond the screen, creating a shared space for fans to connect.
**Influencing Social Discourse**
The platform's original content often sparks discussions on social media, influencing broader social discourse. Characters and storylines become cultural touchpoints, sparking conversations around themes of identity, representation, and societal norms. Disney Plus not only entertains but also contributes to shaping conversations about the world we live in.
**Future Horizons**
**Continuous Evolution**
As Disney Plus looks toward the future, it remains committed to continuous evolution. Plans for further technological enhancements, an expanding content library, and a focus on diverse narratives indicate that the platform is poised to remain a dynamic force in the digital entertainment landscape.
**Conclusion**
Disney Plus has transcended the traditional boundaries of entertainment, becoming a symbol of innovation, inclusivity, and cultural impact in the digital age. Its transformative journey from a repository of classics to a hub of original, cutting-edge content reflects the adaptability and forward-thinking approach that defines Disney's legacy. As Disney Plus continues to enchant audiences worldwide, it solidifies its position not just as a streaming service but as a trailblazer in the ever-evolving realm of digital entertainment. | martin45785 |
1,750,427 | Sending An E-Mail Using ASP.NET With C# | Title: Sending an E-Mail using ASP.NET with C# Introduction: E-Mail communication is an essential... | 0 | 2024-02-03T07:36:54 | https://dev.to/homolibere/sending-an-e-mail-using-aspnet-with-c-23hb | csharp | Title: Sending an E-Mail using ASP.NET with C#
Introduction:
E-Mail communication is an essential part of many web applications, enabling businesses to reach out to their users, send notifications, or deliver important updates. In this post, we will explore how to send an e-mail using ASP.NET with C#. We will walk through the necessary steps and provide you with a code example to help you integrate e-mail functionality into your ASP.NET application.
Step 1: Setting up the SMTP client
To send an e-mail, we need an SMTP (Simple Mail Transfer Protocol) client. In ASP.NET, we can leverage the built-in `SmtpClient` class to handle this functionality. Begin by importing the required namespaces:
```csharp
using System.Net;
using System.Net.Mail;
```
Step 2: Configuring the SMTP server
Next, we need to provide the SMTP server details, including the host address and port number. Typically, these details are obtained from your e-mail service provider. Let's assume we are using Gmail's SMTP server for this example:
```csharp
var smtpClient = new SmtpClient("smtp.gmail.com", 587);
smtpClient.EnableSsl = true;
```
Step 3: Authenticating with the SMTP server
To access the SMTP server, we need to provide valid credentials. In the case of Gmail, you will need to provide your Gmail account's username (e-mail address) and password. Note that Gmail now requires an app password (generated after enabling 2-Step Verification) rather than your regular account password:
```csharp
smtpClient.Credentials = new NetworkCredential("your-email@gmail.com", "your-password");
```
Step 4: Composing the e-mail message
Now, let's create an instance of the `MailMessage` class, which represents the e-mail to be sent. We will populate the sender and recipient information, along with the subject and body of the e-mail:
```csharp
var mailMessage = new MailMessage();
mailMessage.From = new MailAddress("your-email@gmail.com");
mailMessage.To.Add(new MailAddress("recipient-email@example.com"));
mailMessage.Subject = "Hello from ASP.NET";
mailMessage.Body = "This is a test e-mail sent using ASP.NET with C#.";
```
Step 5: Sending the e-mail
Finally, we can send the e-mail using the `SmtpClient` instance:
```csharp
smtpClient.Send(mailMessage);
```
Here's the complete code snippet that ties everything together:
```csharp
using System.Net;
using System.Net.Mail;
class Program
{
static void Main(string[] args)
{
var smtpClient = new SmtpClient("smtp.gmail.com", 587);
smtpClient.EnableSsl = true;
smtpClient.Credentials = new NetworkCredential("your-email@gmail.com", "your-password");
var mailMessage = new MailMessage();
mailMessage.From = new MailAddress("your-email@gmail.com");
mailMessage.To.Add(new MailAddress("recipient-email@example.com"));
mailMessage.Subject = "Hello from ASP.NET";
mailMessage.Body = "This is a test e-mail sent using ASP.NET with C#.";
smtpClient.Send(mailMessage);
}
}
```
Conclusion:
In this article, we covered the necessary steps to send an e-mail using ASP.NET with C#. By following these steps and using the provided code example, you can easily integrate e-mail functionality into your ASP.NET application. Remember to adjust the SMTP server details and credentials based on your specific requirements or e-mail service provider. | homolibere |
1,750,509 | What I learnt after working in startups? | HELLO 🙌 It is better to make bigger contribution in small organization than making small impact in... | 0 | 2024-03-28T13:39:16 | https://dev.to/bellatrix/what-i-learnt-after-working-in-startups-24pn | learning, experience, startup, beginners | HELLO 🙌
> It is better to make bigger contribution in small organization than making small impact in a big organization where your efforts are less valued.

I worked at **_three startups_** during and after college. All three were at different stages. One was just a year old, while the others had been running for more than a decade. Interestingly, all three focused on different areas beyond just building and selling a product.
Let's begin with the first one. It is a startup based in Indore, Madhya Pradesh, India. This company has been operating for 20+ years. 🏢
Highly motivated people with a good number of customers and a well-performing product. They had this omnichannel order management, shop-online-and-pick-up-from-store thing. We were given good training in Java and SQL. 🔨
These guys were more focused on **learning things deeply**. They had a 1-year service agreement, and HR was always insisting we post learnings on LinkedIn. They asked us to make at least 2-3 posts per week. Their focus in that year was on **branding on LinkedIn.**
Regular stand-up calls and doubt-solving sessions were helpful, and I loved the way they celebrated festivals✨. These days they are promoting their culture across different social media platforms. But they ask full-time employees to **work for 11-12 hours** sometimes, and only Sunday off is given. The stipend was good, and so were the learnings.
Now, the second startup: I had the best experience here 😎. This organization has 150-200 employees and is based in Indore, Madhya Pradesh, India.
Work culture here is one of the worst I have ever seen, but the products are amazing💰💵. I got to **learn a lot from the founder, colleagues and mentors**. People were helpful and understood each other's problems, but at the same time there was a new thing, a new idea and a new way of thinking every day. **This changes the plan often, sometimes twice in a day.** 😭
Working hours were hectic, especially for interns. This place is best for people who have a never-ending hunger for learning and want to experience a growing startup.
I literally learned fundamentals here; most of my web dev learning and skills come from here💡. I also met many brilliant minds here. As a growing startup this company is doing so well; they do not ask you to post anything on LinkedIn, just work, though with less focus on mental health.
The stipend was good; students really travelled from other cities to intern in this organization. I do feel that an internship can be turned into a full-time offer here: just stay for six months and grind day in, day out. 🔨🔨 **Learning culture is one of the best, work culture is below average.**

The last one was just a one-year-old startup, so the product was still not developed and funding was zero. I was more interested to see how this funding thing happens.
The idea they were working on was good but needed more brainstorming, I felt. People were good, but with no sales the salaries were very low🤦‍♀️; it was hard to save money there.
Again, the **learning environment was good, work culture was okayish.** I prefer working in a place where I can learn and earn well.
The best thing here was the code review sessions, which were very interactive; everyone gets to know what others are doing, their challenges👍 and how they solved problems. For learning, this place is 5/5.
Each startup had some pros and cons, and at some point I felt like leaving the organization🏃‍♀️🏃‍♀️. But that's the part of the plan we are unaware of in the beginning. At the end of the day, it was always a day filled with learnings.
Startups have this environment where people work 6 days a week, some days for more than 10 hours, and get fewer leaves, but the learning and conversations that happen in a growing startup are the most valuable🥱. I never wanted to miss a single minute where someone was sharing their experience or a plan on how to move to the next steps.
The most important thing about working in a startup is the never-ending learning, and there were days when I was getting new ideas to start something of my own. This could never happen while working in a big organization.😇😇
Thanks for reading
**Do share your experiences if you have also worked in a startup, or if you have your own startup; I would love to read about it.**
Happy reading! ✌ | bellatrix |
1,750,764 | Unleashing the Power of PHP 8 Match Expression | PHP 8, the latest version of the versatile server-side scripting language, introduces several... | 0 | 2024-02-03T17:32:32 | https://dev.to/jeevanizm/unleashing-the-power-of-php-8-match-expression-38i6 | php8, php, webdev, programming |
PHP 8, the latest version of the versatile server-side scripting language, introduces several features aimed at improving developer productivity and code readability. One standout feature is the Match Expression, an evolution of the traditional switch statement. In this article, we'll explore the PHP 8 Match Expression, unravel its syntax, highlight its advantages, and demonstrate its real-life applications.
## Understanding PHP 8 Match Expression
The Match Expression serves as an enhanced version of the traditional switch statement, providing a more concise and expressive syntax. It not only simplifies the structure of switch statements but also offers additional features for increased flexibility.
Here's a basic example:
```php
$grade = 'B';

$result = match ($grade) {
    'A' => 'Excellent',
    'B', 'C' => 'Good',
    'D' => 'Needs Improvement',
    default => 'Invalid Grade',
};

echo $result; // Output: Good
```
In this example, the Match Expression evaluates the value of `$grade` and returns the corresponding result based on the specified conditions. The syntax is more streamlined compared to traditional switch statements, providing better code readability.
## Benefits of PHP 8 Match Expression
### **1. Conciseness:**
Match Expression offers a more concise syntax compared to switch statements. It allows developers to express complex conditions and their corresponding outcomes in a more streamlined manner.
### **2. Simplified Syntax:**
The syntax of Match Expression is designed to be more straightforward, making it easier for developers to read and understand. It reduces the need for explicit breaks and simplifies the structure.
### **3. Additional Functionality:**
Match Expression introduces the ability to use expressions directly within the arms, providing more flexibility. It also supports a `default` arm, handling cases not covered explicitly.
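One more difference from `switch` that often matters in practice: `match` compares its subject using strict comparison (`===`), while `switch` compares loosely (`==`). A quick sketch of the difference:

```php
$value = "1"; // a string, not an integer

// switch compares loosely: the string "1" matches case 1
switch ($value) {
    case 1:
        $switchResult = 'matched';
        break;
    default:
        $switchResult = 'no match';
}

// match compares strictly: the string "1" does not match the integer 1
$matchResult = match ($value) {
    1 => 'matched',
    default => 'no match',
};

echo $switchResult; // Output: matched
echo $matchResult;  // Output: no match
```

This strictness removes a whole class of subtle type-coercion bugs that loose `switch` comparisons can introduce.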
## Real-Life Applications
### **1. Handling HTTP Status Codes:**
In a web application, you might use a match expression to determine the message associated with different HTTP status codes:
```php
$httpStatus = 404;

$message = match ($httpStatus) {
    200 => 'OK',
    404 => 'Not Found',
    500 => 'Internal Server Error',
    default => 'Unknown Status',
};

echo $message; // Output: Not Found
```
### **2. Processing User Roles:**
When working with user roles, a match expression can simplify the logic for handling different roles:
```php
$userRole = 'admin';

$permissions = match ($userRole) {
    'admin' => ['create', 'read', 'update', 'delete'],
    'editor' => ['create', 'read', 'update'],
    'viewer' => ['read'],
    default => [],
};

print_r($permissions);
```
### **3. Handling API Responses:**
In API development, a match expression can be employed to process different response types:
```php
$responseType = 'success';

$result = match ($responseType) {
    'success' => ['status' => 'OK', 'data' => []],
    'error' => ['status' => 'Error', 'message' => 'An error occurred'],
    default => ['status' => 'Unknown', 'data' => []],
};

print_r($result);
```
## Implementation in PHP 8
The syntax of the PHP 8 Match Expression involves the `match` keyword followed by the expression to be evaluated. Arms are defined using the arrow (`=>`) syntax.
```php
$result = match ($value) {
    'case1' => 'Result for Case 1',
    'case2', 'case3' => 'Result for Case 2 or 3',
    default => 'Default Result',
};
```
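Because the subject of `match` can be any expression, a common idiom is to match against `true` so that each arm holds a boolean condition. This is handy for range-style checks, which plain equality arms cannot express:

```php
$score = 85;

// Arms are evaluated top to bottom; the first condition that is true wins
$grade = match (true) {
    $score >= 90 => 'A',
    $score >= 80 => 'B',
    $score >= 70 => 'C',
    default => 'F',
};

echo $grade; // Output: B
```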
## Conclusion
PHP 8 Match Expression represents a significant enhancement to the language, offering a more concise and expressive way to handle complex conditions. Its streamlined syntax and additional features provide developers with a powerful tool for improving code readability and reducing the verbosity associated with traditional switch statements. As PHP 8 continues to shape the landscape of web development, embracing features like Match Expression is key to writing modern, clean, and efficient PHP code. | jeevanizm |
1,750,771 | The Power of Web Components for Modular Development | In today's fast-evolving digital landscape, the demand for powerful, scalable, and maintainable web... | 0 | 2024-02-03T18:02:55 | https://dev.to/webtechpanda/the-power-of-web-components-for-modular-development-5bnc | development, webcomponents, modular, programming | In today's fast-evolving digital landscape, the demand for powerful, scalable, and maintainable web applications is higher than ever. Developers constantly explore new tools and methodologies to streamline their development processes to meet this demand. One such tool that has gained significant traction in recent years is web components.
Web components are a set of standardized web platform APIs that allow developers to create reusable custom elements for building flexible and encapsulated components in web applications. By utilizing the power of web components, developers can achieve greater code reusability, maintainability, and scalability in their projects. In this comprehensive guide, we'll delve into the essentials of web components, explore their benefits, and learn how to leverage them for modular [web development](https://www.webtechpanda.com/web-development-learn-all-about-the-tools-techniques/) effectively.
## Understanding Web Components
A web component is a collection of HTML, CSS, and JavaScript that encapsulates a piece of UI functionality, making it reusable across different parts of an application or even across multiple applications. Web components are built on four main specifications:
**1. Custom Elements:** Custom elements allow developers to define their own HTML elements with custom behavior. This means you can create your own tags like <my-component> and define how they should render and behave.
**2. Shadow DOM:** The Shadow DOM provides encapsulation by scoping a component's markup, styles, and behavior to the component itself, preventing styles and scripts from leaking out and interfering with other parts of the page.
**3. HTML Templates:** HTML templates enable developers to define chunks of markup that can be cloned and inserted as needed, providing a clean separation between structure and content.
**4. HTML Imports:** HTML Imports were originally part of the web component specification but were deprecated in favor of ES Modules, a standard way to package JavaScript code for reuse.
## The Benefits of Web Components
### 1. Reusability
Web components promote code reuse by encapsulating UI functionality into self-contained elements that can be easily reused across different parts of an application or shared across multiple applications.
### 2. Maintainability
By encapsulating markup, styles, and behavior within a single component, web components make it easier to maintain and update code, reducing the likelihood of unintended side effects or breaking changes.
### 3. Scalability
Web components facilitate modular development, allowing developers to break down complex UIs into smaller, manageable pieces that can be developed, tested, and maintained independently.
### 4. Interoperability
Web components are built on web standards and can be used with any modern web framework or library, making them highly interoperable and future-proof.
## Getting Started with Web Components
To start utilizing the power of web components in your projects, you'll need to familiarize yourself with the following key concepts and tools:
### 1. Custom Element Definition
To define a custom element, use the customElements.define() method, passing in the element's name and a class that extends HTMLElement.
### 2. Shadow DOM
To create a shadow DOM for your custom element, use the attachShadow() method inside the constructor of your custom element class.
### 3. HTML Templates
[HTML templates](https://dev.to/laxsannish/why-your-agency-should-be-using-html-templates-52m0) allow you to define reusable chunks of markup that can be cloned and inserted into the DOM as needed.
### 4. ES Modules
ES Modules are a standard way to package JavaScript code for reuse. You can export classes, functions, or variables from one module and import them into another.
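Putting these concepts together, here's a minimal sketch of a `<my-greeting>` element that registers a custom element and uses the Shadow DOM (the tag name and greeting text are illustrative; in a real page this code runs in the browser, and the small guard at the top only provides no-op stand-ins so the snippet can also load outside one):

```javascript
// Guard: stub the browser globals when running outside a browser (e.g. Node),
// purely so the class definition below can load for illustration.
if (typeof HTMLElement === 'undefined') {
  globalThis.HTMLElement = class {};
  globalThis.customElements = {
    _registry: new Map(),
    define(name, cls) { this._registry.set(name, cls); },
    get(name) { return this._registry.get(name); },
  };
}

class MyGreeting extends HTMLElement {
  constructor() {
    super();
    // Attach a shadow root so the markup and styles stay encapsulated
    const shadow = this.attachShadow({ mode: 'open' });
    shadow.innerHTML = `
      <style>p { color: steelblue; }</style>
      <p>Hello, <slot>friend</slot>!</p>
    `;
  }
}

// Register the custom element; note the tag name must contain a hyphen
customElements.define('my-greeting', MyGreeting);
```

Once registered, the element is used in HTML like any built-in tag: `<my-greeting>World</my-greeting>`.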
## Best Practices for Web Components Development
To ensure your web components are well-structured, maintainable, and efficient, consider following these best practices:
### 1. Keep Components Small and Focused
Break down complex UIs into smaller, reusable components that each focus on a single responsibility.
### 2. Use Shadow DOM Wisely
Leverage the Shadow DOM to encapsulate styles and behavior within components, but be mindful of the performance implications, especially when rendering large numbers of components.
### 3. Provide a Clean Public API
Expose a clean public API for your components by defining properties, methods, and events that allow consumers to interact with them predictably.
### 4. Write Accessible Components
Ensure your components are accessible to all users by following best practices for semantic HTML, keyboard navigation, and ARIA attributes.
### 5. Test Components Thoroughly
Write unit tests and integration tests to ensure your components behave as expected and continue to function correctly as your application evolves.
## Conclusion
Web components offer a powerful way to build modular, reusable UI components for web applications. By leveraging custom elements, shadow DOM, HTML templates, and ES Modules, developers can create encapsulated, future-proof components that promote code reusability, maintainability, and scalability. By following best practices and staying up-to-date with the latest standards and tools, you can utilize the full power of web components to streamline your development process and build more robust, maintainable web applications.
| webtechpanda |
1,750,792 | Estate agents | The prospect of moving is both exciting and challenging, marking the beginning of a new chapter in... | 0 | 2024-02-03T19:02:41 | https://dev.to/sdirectory08/estate-agents-4ffb | The prospect of moving is both exciting and challenging, marking the beginning of a new chapter in life. Whether you are transitioning to a new home, exploring house removals, dealing with estate agents, considering self-storage options, or contemplating the significance of cardboard boxes, each element plays a vital role in orchestrating a seamless move. In this comprehensive guide, we delve into the world of Brewood Removals, the intricacies of house removals, the role of estate agents, the convenience of self-storage, and the unsung hero of moving – cardboard boxes.
Brewood Removals: Your Relocation Partner:
a. Local Expertise:
Brewood Removals stands as a beacon of local expertise, offering tailored relocation solutions in the Brewood area. From navigating the narrow lanes to understanding the unique challenges of the locale, these removal services ensure a smooth transition for residents and businesses alike.
b. Professional Approach:
Employing a team of skilled professionals, Brewood Removals brings a professional touch to the often daunting process of moving. Their expertise spans packing, transportation, and unpacking, alleviating the stress associated with the logistical aspects of relocation.
**_[Estate agents](https://b4sdirectory.co.uk/Category/estate-agents)_**
c. Customized Solutions:
Recognizing that each move is unique, Brewood Removals provides customized solutions to meet individual needs. Whether it's a residential move, an office relocation, or a specialized item transport, their services are tailored to ensure a personalized and efficient experience.
House Removals: Crafting a Seamless Transition:
a. Planning and Organization:
House removals require meticulous planning and organization. Establishing a comprehensive checklist, coordinating timelines, and ensuring that all aspects of the move are considered contribute to a well-executed relocation.
b. Furniture and Belongings:
Safeguarding furniture and belongings during a move is paramount. Professional house removal services employ techniques such as protective wrapping, secure loading, and strategic placement to ensure that items arrive at the new destination in pristine condition.
c. Stress Reduction:
Enlisting the services of a reputable house removal company not only simplifies the physical aspects of the move but also significantly reduces stress. From packing fragile items to handling bulky furniture, professionals streamline the process, allowing individuals and families to focus on the emotional and personal aspects of the transition.
Estate Agents: Navigating Property Transitions:
a. Market Expertise:
Estate agents play a pivotal role in property transitions, bringing market expertise and insight into the buying and selling process. Their knowledge of local property trends, pricing strategies, and negotiation skills are invaluable assets in navigating the real estate landscape.
b. Property Valuation:
Accurate property valuation is crucial for both buyers and sellers. Estate agents employ their understanding of market dynamics to assess property values, ensuring fair transactions and favorable outcomes for their clients.
c. Legal Guidance:
The legal intricacies of property transactions can be overwhelming. Estate agents provide valuable legal guidance, facilitating a smooth process by ensuring that all documentation is in order and liaising with legal professionals to address any potential issues.
Self Storage: A Flexible Solution:
a. Temporary Space:
Self-storage facilities offer a flexible solution for individuals undergoing transitions, whether it's a temporary downsizing, a gap between moves, or the need for additional space. These facilities provide a secure environment for belongings that may not find a place in the immediate living or working space.
b. Business Storage:
For businesses, self-storage becomes a valuable asset for managing excess inventory, seasonal items, or documents. The accessibility and security provided by self-storage facilities contribute to efficient business operations without the burden of maintaining a larger physical space.
c. Security and Accessibility:
Modern self-storage facilities prioritize security, employing surveillance systems, access control, and on-site management. Additionally, the convenience of 24/7 accessibility ensures that stored items are readily available whenever needed.
Cardboard Boxes: Unpacking the Unsung Hero:
a. Versatility:
Cardboard boxes are the unsung heroes of any move. Their versatility makes them indispensable for packing and transporting items of various sizes and shapes. From fragile glassware to bulky books, cardboard boxes offer a protective and adaptable solution.
b. Eco-Friendly Option:
Many cardboard boxes are made from recyclable materials, providing an eco-friendly option for those concerned about their environmental impact. Recycling or repurposing cardboard boxes after a move contributes to sustainable practices.
c. Labeling and Organization:
Cardboard boxes are ideal for labeling and organization. Properly labeled boxes streamline the unpacking process, allowing individuals to locate specific items with ease and maintain order in their new living or working space.
Conclusion: Crafting a Seamless Move:
In the intricate dance of relocation, Brewood Removals, house removals, estate agents, self-storage, and the humble cardboard box each contribute to the symphony of a seamless move. From the expertise of local removal services to the market insights provided by estate agents, and the practicality of self-storage and cardboard boxes, each element plays a crucial role in ensuring a smooth transition. As individuals embark on the exciting journey of moving, these services stand as reliable allies, guiding them through the complexities and challenges of transition and setting the stage for a new chapter filled with possibilities. | sdirectory08 | |
1,750,841 | ESP Embedded Rust: Command Line Interface | Introduction Command line interfaces (CLI) in embedded applications are probably not as... | 0 | 2024-02-03T20:05:32 | https://dev.to/theembeddedrustacean/esp-embedded-rust-command-line-interface-2dpg | rust, tutorial, beginners, programming | ## Introduction
Command line interfaces (CLIs) in embedded applications are probably not as common as non-embedded ones. One reason is that embedded applications do not often require direct user input from a terminal, but rather through some other physical interface. As such, if a CLI is required in an embedded application, there are different considerations.
In a typical non-embedded CLI, there is an underlying operating system (OS) with a CLI (in a shell) meant for accepting user input and passing commands to the OS itself. The OS in turn processes the input appropriately. As such, a command signifies a precompiled executable stored in some directory, followed by a collection of arguments. The passed arguments are then processed by the named program when the OS executes it. The arguments are also typically allocated in dynamic memory.
In case a CLI is needed in an embedded system, things take a different turn. First, an underlying OS with a shell interface doesn't necessarily exist; instead there is an already running application or RTOS. Second, the interface itself could be on another host device. Third, dynamic allocation is not desirable. This means that to create a CLI, logic is incorporated within an existing running application. This comes in the form of accepting commands over some logging interface like UART and processing them on the board.
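To make this concrete, the core of such an embedded CLI loop (accumulate incoming bytes into a fixed-size buffer, detect the end of a line, and dispatch on the command word) can be sketched in plain Rust. This is an illustrative, host-side sketch, not any crate's implementation; the `hw` command and the `CliBuffer`/`handle_line` names are placeholders, and `String` is used for brevity where real firmware would stick to fixed buffers:

```rust
/// Minimal fixed-buffer line accumulator, as an embedded CLI might use.
struct CliBuffer {
    buf: [u8; 64],
    len: usize,
}

impl CliBuffer {
    fn new() -> Self {
        CliBuffer { buf: [0; 64], len: 0 }
    }

    /// Feed one byte; returns Some(line) when a complete line has arrived.
    fn input_byte(&mut self, byte: u8) -> Option<&str> {
        if byte == b'\n' || byte == b'\r' {
            let end = self.len;
            self.len = 0; // reset for the next line
            core::str::from_utf8(&self.buf[..end])
                .ok()
                .filter(|line| !line.is_empty())
        } else if self.len < self.buf.len() {
            self.buf[self.len] = byte;
            self.len += 1;
            None
        } else {
            None // buffer full: silently drop extra bytes
        }
    }
}

/// Dispatch a complete line: the first word is the command, the rest are args.
fn handle_line(line: &str) -> String {
    let mut parts = line.split_whitespace();
    match parts.next() {
        Some("hw") => match parts.next() {
            Some(name) => format!("Hello, {}!", name),
            None => String::from("error: hw requires a <name> argument"),
        },
        Some(cmd) => format!("error: unknown command '{}'", cmd),
        None => String::new(),
    }
}

fn main() {
    let mut cli = CliBuffer::new();
    // Simulate bytes arriving one at a time, as they would over UART
    for &b in b"hw Omar\n" {
        if let Some(line) = cli.input_byte(b) {
            println!("{}", handle_line(line)); // prints "Hello, Omar!"
        }
    }
}
```

On a real target, the bytes fed to `input_byte` would come from the UART driver one at a time; crates like `menu` wrap exactly this kind of loop behind friendlier abstractions.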
In this post, I'm going to create a simple embedded CLI on an ESP32C3 device. However, the CLI logic is not going to be created from scratch, but rather by leveraging an existing crate. There are several `no-std` CLI crates that I encountered, including [`terminal-cli`](https://crates.io/crates/terminal_cli), [`embedded-cli`](https://crates.io/crates/embedded-cli/0.1.0), [`light-cli`](https://crates.io/crates/light-cli), and `menu`. Looking at each of the crates, they all had friendly abstractions and great features. I ended up going for `menu` based on the number of downloads, since it seemed to be the most popular.
#### If you find this post useful, and if Embedded Rust interests you, stay in the know by subscribing to The Embedded Rustacean newsletter:
{% cta http://www.theembeddedrustacean.com/subscribe %} Subscribe Now to The Embedded Rustacean{% endcta%}
### **📚 Knowledge Pre-requisites**
To understand the content of this post, you need the following:
* Basic knowledge of coding in Rust.
* Familiarity with UART and [how to set it up for the ESP32C3](https://apollolabsblog.hashnode.dev/esp32-standard-library-embedded-rust-uart-communication).
### **💾 Software Setup**
All the code presented in this post is available on the [**apollolabs ESP32C3**](https://github.com/apollolabsdev/ESP32C3) git repo. Note that if the code on the git repo is slightly different, it was modified to enhance the code quality or accommodate any HAL/Rust updates.
Additionally, the full project (code and simulation) is available on Wokwi [**here**](https://wokwi.com/projects/388708808145859585).
### **🛠 Hardware Setup**
#### Materials
* [**ESP32-C3-DevKitM**](https://docs.espressif.com/projects/esp-idf/en/latest/esp32c3/hw-reference/esp32c3/user-guide-devkitm-1.html)

## **👨🎨** Software Design
### **💻** The Application
For this post, I'm going to create a simple hello application with a single command, `hw`. `hw` takes one argument: a person's name. The application then prints hello followed by the given name. Here's a sample:
```bash
> hw Omar
Hello, Omar!
```
The application also supports a `help` command that provides a summary of the commands the terminal supports. In addition, each command has its own `help` message providing the user with information about that command.
### 📦 The `menu` Crate
The `menu` crate consists of four main components:
1. **The** `Menu` **struct**: Each `Menu` has a name or `label`, optional functions to call on entry and exit of the `Menu`, and, most importantly, a collection of `Item`s.
2. **The** `Item` **struct**: `Item`s can be viewed as command buckets within a `Menu`. `Item` members include an `ItemType`, which specifies the type of the `Item`, a command string, and a help message associated with the command. `ItemType` is an enum that specifies whether the `Item` opens up another sub-`Menu` or calls a function upon the invocation of the command.
3. **The** `Runner` **struct**: This is the struct that handles all incoming input. When our application is executing, incoming byte characters will be passed to an instantiation of this struct for processing. Naturally, `Runner` instantiation requires the `Menu` in addition to a source for the incoming bytes (e.g., a UART channel).
4. **The Callback Functions**: These are the functions that are called upon the invocation of a command. In these functions the logic associated with each command will be executed.
### 👣 Implementation Steps
Following the earlier description, these are the steps we need to take:
1. Define the Root Menu
2. Implement any Callback functions
3. Configure peripherals in `main`
4. Instantiate the `Runner` in `main`
5. Create the Application `loop`
## **👨💻** Code Implementation
### **📥 Crate Imports**
In this implementation the crates required are as follows:
* The `esp_idf_hal` crate to import the needed device hardware abstractions.
* `menu` to provide the menu CLI structs and abstractions.
* `std::fmt` to import the `Write` trait.
```rust
use esp_idf_hal::delay::BLOCK;
use esp_idf_hal::gpio;
use esp_idf_hal::peripherals::Peripherals;
use esp_idf_hal::prelude::*;
use esp_idf_hal::uart::*;
use menu::*;
use std::fmt::Write;
```
### 📝 Define the Root Menu
Before implementing the logic, we need to define our root `Menu`. Since this is a simple application, the `Menu` will have one `Item` that contains the `hw` command. `Menu` has the following definition:
```rust
pub struct Menu<'a, T>
where
T: 'a,
{
pub label: &'a str,
pub items: &'a [&'a Item<'a, T>],
pub entry: Option<MenuCallbackFn<T>>,
pub exit: Option<MenuCallbackFn<T>>,
}
```
`label` is the label of the menu, `entry` and `exit` specify functions to call on entry and exit of the `Menu`, and `items` is an array of `Item`s. Each `Item` has the following definition:
```rust
pub struct Item<'a, T>
where
T: 'a,
{
pub command: &'a str,
pub help: Option<&'a str>,
pub item_type: ItemType<'a, T>,
}
```
where `command` is the command text for a particular `Item`, `help` is the help message associated with `command`, and `item_type` is an enum that specifies the type of the item. `ItemType` is defined as follows:
```rust
pub enum ItemType<'a, T>
where
T: 'a,
{
Callback {
function: ItemCallbackFn<T>,
parameters: &'a [Parameter<'a>],
},
Menu(&'a Menu<'a, T>),
_Dummy,
}
```
One can see that an `Item` can either be a `Callback`, which calls a `function` upon the invocation of the `Item`'s `command` with the declared `parameters` passed to that command, or another `Menu`, which specifies a sub-menu.
Given the above, for our application, we define a `const` `Menu` called `ROOT_MENU` as follows:
```rust
// CLI Root Menu Struct Initialization
const ROOT_MENU: Menu<UartDriver> = Menu {
label: "root",
items: &[&Item {
item_type: ItemType::Callback {
function: hello_name,
parameters: &[Parameter::Mandatory {
parameter_name: "name",
help: Some("Enter your name"),
}],
},
command: "hw",
help: Some("This is an embedded CLI terminal. Check the summary for the list of supported commands"),
}],
entry: None,
exit: None,
};
```
Note here that `ROOT_MENU` is given the `label` "`root`". Additionally, our application has only one `Item`, with the `command` text "`hw`" that, when entered by the user, invokes a `function` called `hello_name`. The `Item` also takes only one parameter, defined as `name`. I've also decided not to call any functions on `entry` and `exit` of the `root` `Menu`. The associated `help` messages are defined at both the `Menu` and `Item` level.
### 👨💻 Implement the Callback Function
Given the earlier step, we now need to define the behaviour when the callback function `hello_name`, associated with `hw`, is called. First, though, we need to know what arguments will be passed to this function. The `menu` crate defines the callback function signature, `ItemCallbackFn`, as follows:
```rust
pub type ItemCallbackFn<T> = fn(menu: &Menu<'_, T>, item: &Item<'_, T>, args: &[&str], context: &mut T);
```
Following this signature, we implement the function `hello_name` as follows:
```rust
fn hello_name<'a>(
_menu: &Menu<UartDriver>,
item: &Item<UartDriver>,
args: &[&str],
context: &mut UartDriver,
) {
// Print to console passed "name" argument
writeln!(
context,
"Hello, {}!",
argument_finder(item, args, "name").unwrap().unwrap()
)
.unwrap();
}
```
Note that all the function does is use the `writeln!` macro to print to the console. `writeln!` accepts a 'writer', a format string, and a list of arguments. The arguments are formatted according to the format string, and the result is passed to the writer. The writer is any type that implements the `Write` trait; in this case, it's the `UartDriver` type of the `context` that the runner passes to `hello_name`. Note how the `argument_finder` function is used to retrieve the argument. `argument_finder` is a `menu` crate function: it looks for the named parameter in the parameter list of the `Item`, then finds the matching argument. It takes three arguments: the `Item` to look in, the list of arguments that have been passed to the CLI (`args`), and the name of the parameter we want ("`name`").
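Conceptually, this lookup pairs positional arguments with the declared parameter list by name. The following stdlib-only sketch models that idea; it is a simplification for illustration (the `Parameter` struct and `find_argument` names here are hypothetical), not the crate's actual implementation, which also handles optional and named parameters:

```rust
/// A simplified model of a declared CLI parameter (name only).
struct Parameter<'a> {
    name: &'a str,
}

/// Find the positional argument matching a declared parameter name.
/// Returns None if the parameter is not declared or no argument was given.
fn find_argument<'a>(
    parameters: &[Parameter<'_>],
    args: &[&'a str],
    wanted: &str,
) -> Option<&'a str> {
    // The position of the parameter in the declaration list is the
    // position of its value in the argument list.
    let index = parameters.iter().position(|p| p.name == wanted)?;
    args.get(index).copied()
}

fn main() {
    let params = [Parameter { name: "name" }];
    let args = ["Omar"];
    // For `hw Omar`, looking up "name" yields "Omar"
    println!("{:?}", find_argument(&params, &args, "name"));
}
```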
### **🎛** Configure Peripherals
In the `main` the following code is used to instantiate and configure UART:
```rust
// Take Peripherals
let peripherals = Peripherals::take().unwrap();
// Configure UART
// Create handle for UART config struct
let config = config::Config::default().baudrate(Hertz(115_200));
// Instantiate UART
let mut uart = UartDriver::new(
peripherals.uart0,
peripherals.pins.gpio21,
peripherals.pins.gpio20,
Option::<gpio::Gpio0>::None,
Option::<gpio::Gpio1>::None,
&config,
)
.unwrap();
```
The details behind how this code works were covered in a prior post [here](https://apollolabsblog.hashnode.dev/esp32-standard-library-embedded-rust-uart-communication). If anything seems unclear, I recommend referring back to that post. The only thing to note here is that `uart0` is used with `gpio21` and `gpio20` as the Tx and Rx pins, because these are the peripheral and pins used to connect to a host on the [**ESP32-C3-DevKitM**](https://docs.espressif.com/projects/esp-idf/en/latest/esp32c3/hw-reference/esp32c3/user-guide-devkitm-1.html).
### 🏃♂️ Instantiate the `Runner`
In the [`menu` crate documentation](https://docs.rs/menu/0.4.0/menu/struct.Runner.html), `Runner` has a `new` method with the following signature:
```rust
pub fn new(menu: Menu<'a, T>, buffer: &'a mut [u8], context: T) -> Runner<'a, T>
```
The first parameter is the root `Menu` (defined in the first step!), the second is a buffer that the `Runner` can use (`clibuf`), and finally, `context` can be any type that implements `Write` (this is our instantiated `uart` handle). As a result, the runner is instantiated with the following code:
```rust
// Create a buffer to store CLI input
let mut clibuf = [0u8; 64];
// Instantiate CLI runner with root menu, buffer, and uart
let mut r = Runner::new(ROOT_MENU, &mut clibuf, uart);
```
> 📝 **Note**: The generic type `T` that appears in all the structs seen earlier is defined by the context object that the `Runner` carries around. In our case, it's the `UartDriver` type.
### 🔄 The Main Loop
In the `main` `loop`, all that needs to be done is read one byte at a time from the UART channel. Remember that the UART channel is now part of the runner's `context`. This is done using the UART `read` method (again, for a refresher, check out [this](https://apollolabsblog.hashnode.dev/esp32-standard-library-embedded-rust-uart-communication) post). Every read byte is passed to the instantiated runner using the `input_byte` runner method.
```rust
loop {
// Create single element buffer for UART characters
let mut buf = [0_u8; 1];
// Read single byte from UART
r.context.read(&mut buf, BLOCK).unwrap();
// Pass read byte to CLI runner for processing
r.input_byte(buf[0]);
}
```
That's it for code!
## 📱Full Application Code
Here is the full code for the implementation described in this post. You can additionally find the full project and others available on the [**apollolabs ESP32C3**](https://github.com/apollolabsdev/ESP32C3) git repo. Also, the Wokwi project can be accessed [**here**](https://wokwi.com/projects/388709326938299393).
```rust
use esp_idf_hal::delay::BLOCK;
use esp_idf_hal::gpio;
use esp_idf_hal::peripherals::Peripherals;
use esp_idf_hal::prelude::*;
use esp_idf_hal::uart::*;
use menu::*;
use std::fmt::Write;
// CLI Root Menu Struct Initialization
const ROOT_MENU: Menu<UartDriver> = Menu {
label: "root",
items: &[&Item {
item_type: ItemType::Callback {
function: hello_name,
parameters: &[Parameter::Mandatory {
parameter_name: "name",
help: Some("Enter your name"),
}],
},
command: "hw",
help: Some("This is an embedded CLI terminal. Check the summary for the list of supported commands"),
}],
entry: None,
exit: None,
};
fn main() {
// Take Peripherals
let peripherals = Peripherals::take().unwrap();
// Configure UART
// Create handle for UART config struct
let config = config::Config::default().baudrate(Hertz(115_200));
// Instantiate UART
let mut uart = UartDriver::new(
peripherals.uart0,
peripherals.pins.gpio21,
peripherals.pins.gpio20,
Option::<gpio::Gpio0>::None,
Option::<gpio::Gpio1>::None,
&config,
)
.unwrap();
// This line is for Wokwi only so that the console output is formatted correctly
uart.write_str("\x1b[20h").unwrap();
// Create a buffer to store CLI input
let mut clibuf = [0u8; 64];
// Instantiate CLI runner with root menu, buffer, and uart
let mut r = Runner::new(ROOT_MENU, &mut clibuf, uart);
loop {
// Create single element buffer for UART characters
let mut buf = [0_u8; 1];
// Read single byte from UART
r.context.read(&mut buf, BLOCK).unwrap();
// Pass read byte to CLI runner for processing
r.input_byte(buf[0]);
}
}
// Callback function for the hw command
fn hello_name<'a>(
_menu: &Menu<UartDriver>,
item: &Item<UartDriver>,
args: &[&str],
context: &mut UartDriver,
) {
// Print to console passed "name" argument
writeln!(
context,
"Hello, {}!",
argument_finder(item, args, "name").unwrap().unwrap()
)
.unwrap();
}
```
## **Conclusion**
This post showed how to create a simple embedded command line interface over a UART channel. In the post, the application was created on the ESP32C3 using the `menu` crate. The application also leveraged the `esp-idf-hal` abstractions. Have any questions? Share your thoughts in the comments below 👇.
#### If you found this post useful, and if Embedded Rust interests you, stay in the know by subscribing to The Embedded Rustacean newsletter:
{% cta http://www.theembeddedrustacean.com/subscribe %} Subscribe Now to The Embedded Rustacean{% endcta%}
| theembeddedrustacean |
1,755,618 | In-Depth Perspective on Flutter: A Comprehensive Analysis and Practice Guide | 1. Overview Today, Google released Flutter 3.19, Flutter+Gemini+1 million to 10 million... | 0 | 2024-02-08T14:00:36 | https://dev.to/happyer/in-depth-perspective-on-flutter-a-comprehensive-analysis-and-practice-guide-236n | flutter, mobile, development, ai | ## 1. Overview

Today, Google released Flutter 3.19. Together with Gemini and its context windows of 1 million to 10 million tokens, this brings Flutter and Dart into the Gemini era. This article presents 'In-Depth Perspective on Flutter: A Comprehensive Analysis and Practice Guide.' In a later article, we will introduce Flutter Build with Gemini.
Flutter is an open-source UI framework created by Google that has rapidly become a favorite among developers since its debut in 2017. With its ability to build beautiful, efficient, and highly customizable mobile applications, Flutter has adopted an innovative approach to user interface construction. Through its self-drawn UI technology, it achieves high performance in cross-platform application development.
### 1.1. Rapid Iteration
Flutter's rich library of pre-built components greatly accelerates the construction speed of complex UIs. The hot reload feature further enhances development efficiency, allowing code changes to be instantly reflected.
### 1.2. Full Platform Coverage
Flutter applications can run on both iOS and Android, offering a nearly identical look and performance experience across different platforms. This means that developers need to write code only once to deploy across platforms.
### 1.3. Self-Drawn UI Technology
Using Skia as its graphics engine, Flutter endows applications with exceptional performance and visual effects. Self-drawn UI technology enables developers to create highly personalized user interfaces to meet various project requirements.
### 1.4. Reactive Programming
Flutter adopts the reactive programming paradigm, meaning the UI dynamically updates as the state changes. This pattern simplifies the management and updating of complex UI interaction logic.
### 1.5. Strong Tooling and Community Support
Flutter boasts comprehensive development tools and robust community support. For instance, the Flutter SDK offers a complete set of integrated development environments, while the community provides a plethora of plugins and third-party libraries for developers to utilize. In the era of AI, using AI to generate high-quality code is a standout feature. Take [AI Design](https://codia.ai/d/5ZFb) for example, a tool that effortlessly converts screenshots into editable Figma UI designs. Simply upload an image of any app or website, and it smoothly handles the conversion process. Additionally, [AI Code](https://codia.ai/s/YBF9) enhances this functionality by facilitating Figma-to-Code translations, supporting a wide range of platforms including Android, iOS, macOS, Flutter, HTML, CSS, React, Vue, etc., and ensures the creation of highly accurate code.
Sample Code
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Flutter Demo Home Page'),
),
body: Center(
child: Text(
'Welcome to Flutter!',
style: TextStyle(fontSize: 24),
),
),
);
}
}
```
The above code demonstrates a basic Flutter application that uses pre-built components and layouts to create a page with a title bar and centered text. This example highlights Flutter's advantages in rapid development and cross-platform support.
## 2. Flutter Framework Overview
The Flutter framework consists of three core components: the Flutter Engine, the Flutter Framework, and Flutter Plugins. The Flutter Engine is responsible for low-level operations such as graphics rendering, animation, and input events. The Flutter Framework provides a series of pre-built components and tools for building user interfaces. Flutter Plugins allow developers to integrate native system functions and third-party SDKs.
### 2.1. The Relationship Between Dart and Flutter
Dart is the official programming language of Flutter. It is an object-oriented, strongly typed language that supports just-in-time (JIT) compilation during development and ahead-of-time (AOT) compilation for release builds. Dart's readability and ease of learning, combined with its tight integration with the Flutter framework, enable developers to efficiently write business logic and interface descriptions.
### 2.2. Widget and Element
In Flutter, a Widget is the basic unit for building UIs. It is immutable and describes the configuration and appearance of a UI element. Flutter provides a large number of pre-built Widgets, such as text, buttons, images, etc., and also supports custom Widgets. Widgets can be built based on state and properties, supporting nesting and composition to form complex UI structures.
An Element is an instance of a Widget on the screen, responsible for managing the Widget's lifecycle and state. When the application state changes, Flutter rebuilds the related Widgets and updates the corresponding Elements. This reactive programming approach allows developers to easily update and manage the UI.
Sample Code
```dart
import 'package:flutter/material.dart';
void main() => runApp(MyApp());
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'Flutter Demo',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(),
);
}
}
class MyHomePage extends StatefulWidget {
@override
_MyHomePageState createState() => _MyHomePageState();
}
class _MyHomePageState extends State<MyHomePage> {
int _counter = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Flutter Demo Home Page'),
),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text(
'You have pushed the button this many times:',
),
Text(
'$_counter',
style: TextStyle(fontSize: 24),
),
],
),
),
floatingActionButton: FloatingActionButton(
onPressed: _incrementCounter,
tooltip: 'Increment',
child: Icon(Icons.add),
),
);
}
}
```
The above code demonstrates a stateful counter application, using StatefulWidget to manage the state. Each button press increments the counter value and updates the UI through setState(). This example highlights the concepts of Widget and Element, as well as Flutter's reactive programming model.
## 3. Setting Up the Flutter Development Environment
### 3.1. Installing the Flutter SDK
To start Flutter development, you first need to install the Flutter SDK. The installation steps are as follows:
1. Download the Flutter SDK: Visit the Flutter official website (https://flutter.dev/), click "Get Started", select the download link suitable for your operating system, and download the Flutter SDK zip file.
2. Extract the Flutter SDK: Unzip the downloaded file to your chosen directory, such as C:\flutter (Windows) or ~/flutter (macOS or Linux).
3. Configure the Environment Variables: Add the Flutter bin directory to your system environment variables so that you can run Flutter commands from any location.
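As an example of step 3, on macOS or Linux the PATH entry can be added as follows (assuming the SDK was extracted to `~/flutter`; adjust the path for your install location, and on Windows edit the environment variables through System Settings instead):

```shell
# Append the Flutter bin directory to PATH for the current session
export PATH="$PATH:$HOME/flutter/bin"

# To make it permanent, add the same line to your shell profile,
# e.g. ~/.bashrc or ~/.zshrc

# Confirm the directory is now on PATH
echo "$PATH" | tr ':' '\n' | grep "flutter/bin"
```

After opening a new terminal, the `flutter` command (and `flutter doctor`) should then be runnable from any directory.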
### 3.2. Configuring the Development Environment
After installing the Flutter SDK, you also need to configure the development environment to successfully build and run Flutter applications. The configuration steps are as follows:
1. Configure Flutter Tools: Run flutter doctor in the command line terminal to check the system and list any missing dependencies or configurations. Follow the prompts to fix any issues.
2. Install Android Studio (optional): If you plan to develop Android applications, it is recommended to install Android Studio, which provides a convenient development and debugging environment.
3. Configure Android Device or Emulator: To run Flutter applications on an Android device or emulator, you need to configure the device first. Android Studio can be used to create and manage devices.
### 3.3. Creating Your First Flutter Application
After installing the SDK and configuring the environment, you can create your first Flutter application. The creation steps are as follows:
1. In the command line terminal, use the flutter create command to create a new application. For example, run flutter create my_app to create a new application named my_app in the current directory.
2. Enter the application directory: Run cd my_app to enter the application directory.
3. Launch the application: Run flutter run to start the application. If there is a connected Android device or emulator, Flutter will automatically run the application.
4. Open the application in an editor: Use your favorite code editor (such as Android Studio, Visual Studio Code, etc.) to open the application directory.
5. Modify the application: In the editor, you can see the initial code of the Flutter application. Modify the code as needed and view the effects in real-time.
Sample Code
```dart
import 'package:flutter/material.dart';
void main() {
runApp(MyApp());
}
class MyApp extends StatelessWidget {
@override
Widget build(BuildContext context) {
return MaterialApp(
title: 'My First Flutter App',
theme: ThemeData(
primarySwatch: Colors.blue,
),
home: MyHomePage(),
);
}
}
class MyHomePage extends StatelessWidget {
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(
title: Text('Flutter Demo Home Page'),
),
body: Center(
child: Text(
'Hello, Flutter!',
style: TextStyle(fontSize: 24),
),
),
);
}
}
```
The above code shows a simple Flutter application that displays a page with a title bar and centered text. You can modify and expand the application as needed. Now, you have successfully created your first Flutter application!
## 4. Core Components and Features of Flutter
Flutter is a UI framework built on components, with Widgets being the core. This section introduces Flutter's commonly used components and the reactive programming model.
### 4.1. Introduction to the Widget Library and Common Components
In Flutter, all elements are built through Widgets. A Widget can be a single UI element or a composite element containing other Widgets. Here are some commonly used components:
#### 4.1.1. Layout Components
Control the position, size, and arrangement of child components. Common layout components include:
- Container: A container with style, padding, and borders.
- Row: Horizontally arranges child components.
- Column: Vertically arranges child components.
- Stack: Overlaps child components.
- Flex: Arranges child components using the Flex layout model.
Sample Code
```dart
Container(
width: 200,
height: 200,
decoration: BoxDecoration(
color: Colors.white,
borderRadius: BorderRadius.circular(10),
boxShadow: [
BoxShadow(
color: Colors.grey.withOpacity(0.5),
spreadRadius: 5,
blurRadius: 7,
offset: Offset(0, 3), // changes position of shadow
),
],
),
child: Center(child: Text('Hello, Flutter!')),
)
```
#### 4.1.2. Text and Style Components
Display text and control style and layout. Common text and style components include:
- Text: Displays simple text.
- RichText: Displays text with multiple styles within a single paragraph.
- TextStyle: Defines text style.
Sample Code
```dart
Text(
'Hello, Flutter!',
style: TextStyle(fontSize: 24, fontWeight: FontWeight.bold),
)
```
#### 4.1.3. Image and Icon Components
Display image resources. Common image and icon components include:
- Image: Displays network or local images.
- Icon: Displays built-in or custom icons.
Sample Code
```dart
Image.network(
'https://flutter.dev/assets/homepage/carousel/1.jpg',
width: 200,
height: 200,
fit: BoxFit.cover,
)
```
#### 4.1.4. User Input Components
Receive user input. Common user input components include:
- TextField: Receives single-line text input.
- TextFormField: Adds form validation and error prompts.
- ElevatedButton: A button with a click event (replaces the deprecated RaisedButton).
Sample Code
```dart
TextField(
decoration: InputDecoration(
hintText: 'Enter your username',
),
onChanged: (value) {
print('Input content: $value');
},
)
```
#### 4.1.5. Animation and Transition Effect Components
Implement animation effects. Common animation and transition effect components include:
- AnimatedBuilder: Dynamically builds child components.
- Hero: Transfers animation between pages.
- PageView: Swipes to switch pages.
Sample Code
```dart
AnimatedContainer(
width: _isExpanded ? 200 : 50,
height: _isExpanded ? 50 : 200,
color: Colors.blue,
duration: Duration(seconds: 1),
curve: Curves.fastOutSlowIn,
)
```
### 4.2. Flutter's Reactive Programming Model
In Flutter, all Widgets are immutable. When the state of a Widget changes, Flutter automatically rebuilds the UI. The reactive programming model is based on Stateful and Stateless Widgets.
#### 4.2.1. Stateful and Stateless Widgets
Stateless Widgets are immutable Widgets whose properties do not change once set. When state needs to be preserved, Stateful Widgets are used. A Stateful Widget consists of an external Widget and an internal State. The external Widget remains unchanged, while the internal State can change.
Sample Code
```dart
class MyWidget extends StatefulWidget {
const MyWidget({ Key? key }) : super(key: key);
@override
_MyWidgetState createState() => _MyWidgetState();
}
class _MyWidgetState extends State<MyWidget> {
int _counter = 0;
void _incrementCounter() {
setState(() {
_counter++;
});
}
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text('My Widget')),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text('You have pressed the button $_counter times.'),
ElevatedButton(
onPressed: _incrementCounter,
child: Text('Increment'),
),
],
),
),
);
}
}
```
#### 4.2.2. Lifecycle of a StatefulWidget
StatefulWidgets have a lifecycle that includes:
- createState: Creates the State object.
- initState: Initializes the State object.
- didChangeDependencies: Called when the dependencies of the State object change.
- build: Builds the Widget tree.
- setState: Marks the State as dirty and schedules a rebuild.
- deactivate: Called when the State object is removed from the tree.
- dispose: Permanently destroys the State object and releases its resources.
Sample Code
```dart
class MyWidget extends StatefulWidget {
const MyWidget({ Key? key }) : super(key: key);
@override
_MyWidgetState createState() => _MyWidgetState();
}
class _MyWidgetState extends State<MyWidget> {
int _counter = 0;
@override
void initState() {
super.initState();
print('initState');
}
@override
Widget build(BuildContext context) {
print('build');
return Scaffold(
appBar: AppBar(title: Text('My Widget')),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text('You have pressed the button $_counter times.'),
ElevatedButton(
onPressed: () {
setState(() {
_counter++;
});
},
child: Text('Increment'),
),
],
),
),
);
}
@override
void deactivate() {
super.deactivate();
print('deactivate');
}
@override
void didChangeDependencies() {
super.didChangeDependencies();
print('didChangeDependencies');
}
}
```
#### 4.2.3. State Management
State management is important in Flutter application development. Flutter provides mechanisms to help manage state. Common state management approaches include:
- Provider: A lightweight state management framework.
- Bloc: A state management framework based on the reactive programming model.
- Redux: A data flow framework for sharing state across components.
Sample Code
```dart
class MyWidget extends StatelessWidget {
final int counter;
const MyWidget({Key? key, required this.counter}) : super(key: key);
@override
Widget build(BuildContext context) {
return Scaffold(
appBar: AppBar(title: Text('My Widget')),
body: Center(
child: Column(
mainAxisAlignment: MainAxisAlignment.center,
children: <Widget>[
Text('You have pressed the button $counter times.'),
ElevatedButton(
onPressed: () {
Navigator.of(context).push(MaterialPageRoute(builder: (context) => MyWidget(counter: counter + 1)));
},
child: Text('Increment'),
),
],
),
),
);
}
}
```
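As a hedged sketch of the Provider approach mentioned above (assuming the `provider` package is added to `pubspec.yaml`; the `CounterModel` name is illustrative):

```dart
import 'package:flutter/material.dart';
import 'package:provider/provider.dart';

// A ChangeNotifier holds the mutable state outside the widget tree.
class CounterModel extends ChangeNotifier {
  int _count = 0;
  int get count => _count;

  void increment() {
    _count++;
    notifyListeners(); // Rebuilds widgets that are listening.
  }
}

void main() {
  runApp(
    ChangeNotifierProvider(
      create: (_) => CounterModel(),
      child: const MyApp(),
    ),
  );
}

class MyApp extends StatelessWidget {
  const MyApp({Key? key}) : super(key: key);

  @override
  Widget build(BuildContext context) {
    // watch() subscribes this widget to CounterModel changes.
    final counter = context.watch<CounterModel>();
    return MaterialApp(
      home: Scaffold(
        body: Center(child: Text('Count: ${counter.count}')),
        floatingActionButton: FloatingActionButton(
          // read() calls the model without subscribing to rebuilds.
          onPressed: () => context.read<CounterModel>().increment(),
          child: const Icon(Icons.add),
        ),
      ),
    );
  }
}
```

Unlike the route-pushing example above, the widget itself stays stateless; the model owns the counter and notifies any listening widgets when it changes.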
## 5. Flutter's Cross-Platform Capabilities
Flutter is a cross-platform UI framework capable of building high-performance, high-fidelity applications. This section introduces the platforms and devices supported by Flutter, the advantages of responsive UI design, and the interaction between Flutter and native code.
### 5.1. Supported Platforms and Devices
Flutter supports the following platforms and devices:
- iOS: Runs on iOS, developed and debugged using Xcode.
- Android: Runs on Android, developed and debugged using Android Studio.
- Web: Transformed into a web application through Flutter for Web.
- Windows and macOS: Runs on Windows and macOS, developed and debugged using Visual Studio Code.
- Linux: Runs on Linux, developed and debugged using Visual Studio Code.
### 5.2. Advantages of Responsive UI Design
Flutter adopts responsive UI design, automatically updating the UI when the application state changes. Advantages include:
- Rapid development: Flutter automatically updates the UI, allowing developers to quickly write applications without manually handling UI updates.
- High reusability: Flutter's component-based design allows for the reuse of existing components, speeding up application construction.
- Easy maintenance: Flutter automatically updates the UI, making applications easy to maintain and avoiding manual UI update errors.
### 5.3. Interaction Between Flutter and Native Code
Flutter provides mechanisms for interacting with native code. Methods for integrating Flutter applications into native platforms include:
- Platform Channel: Two-way communication between Flutter and the native platform.
- Method Channel: Calls native platform methods and gets return values.
- Event Channel: Receives events from the native platform.
Sample Code
Flutter calling native platform methods:
```dart
static const platform = MethodChannel('my_flutter_app_name');
Future<void> _getBatteryLevel() async {
String batteryLevel;
try {
final int result = await platform.invokeMethod('getBatteryLevel');
batteryLevel = 'Battery level at $result % .';
} on PlatformException catch (e) {
batteryLevel = "Failed to get battery level: '${e.message}'.";
}
setState(() {
_batteryLevel = batteryLevel;
});
}
```
Native platform calling Flutter methods:
```java
new MethodChannel(getFlutterView(), CHANNEL).setMethodCallHandler(
new MethodCallHandler() {
@Override
public void onMethodCall(MethodCall call, MethodChannel.Result result) {
if (call.method.equals("sayHello")) {
String message = "Hello, Flutter!";
result.success(message);
} else {
result.notImplemented();
}
}
}
);
```
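The EventChannel listed above streams events from the native platform into Dart rather than returning a single value. A minimal sketch of the Dart side (the channel name `my_flutter_app/charging_events` is illustrative and must match whatever the native side registers):

```dart
import 'package:flutter/services.dart';

// Hypothetical channel name for illustration only.
const eventChannel = EventChannel('my_flutter_app/charging_events');

void listenToChargingState() {
  // receiveBroadcastStream() yields a Stream of events pushed
  // by the native platform's event sink.
  eventChannel.receiveBroadcastStream().listen(
    (dynamic event) => print('Charging state: $event'),
    onError: (dynamic error) => print('Stream error: $error'),
  );
}
```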
In summary, Flutter is a UI framework with powerful cross-platform capabilities, supporting a variety of platforms and devices. Its advantages in responsive UI design enable developers to build applications more quickly and efficiently, and with greater ease of maintenance. Flutter also provides mechanisms for interacting with native code, allowing developers to easily integrate Flutter applications into native platforms.
## 6. Application of Flutter in Real-World Projects
### 6.1. Suitable Scenarios for Flutter
As a powerful cross-platform UI framework, Flutter is suitable for a variety of real-world project scenarios:
- Mobile app development: Flutter is suitable for building high-performance, smooth mobile applications, supporting both iOS and Android. Using Flutter, developers can save time and resources, writing once and running on multiple platforms.
- Embedded devices and IoT applications: Flutter can be used for embedded devices and IoT applications to quickly build excellent user interfaces and interact with devices.
- Desktop app development: Flutter is not limited to mobile app development but is also suitable for building high-performance desktop applications. Developers can build applications with consistent appearance and behavior on Windows, macOS, and Linux.
- Media and entertainment applications: Flutter's strong graphics rendering and animation support make it suitable for developing media and entertainment applications. Developers can create interactive, engaging user interfaces that provide an exceptional user experience.
### 6.2. Case Studies and Experience Sharing
Here are some successful cases of Flutter being applied in various projects:
- Google Ads: Google's advertising platform uses Flutter to build its mobile application. Flutter helps Google Ads to quickly iterate new features and provide a consistently high-quality user experience.
- Google Stadia: Google Stadia is Google's cloud gaming platform, and its mobile application is built using Flutter. This application allows users to play games on different devices, offering a consistent user experience and a high-performance interactive interface.
- eBay Motors: eBay Motors is eBay's automotive trading platform, and its mobile application is developed with Flutter to provide a better user experience and faster application performance. This application allows users to browse, buy, and sell cars, as well as participate in real-time vehicle auctions.
- BMW: BMW uses Flutter to develop its My BMW application, which allows users to remotely control their vehicles and provides a range of personalized services. Flutter's cross-platform features enable BMW to quickly provide high-quality applications to both iOS and Android users.
- Philips Hue: Philips' smart lighting system, Philips Hue, uses Flutter to build its mobile application, which allows users to control smart bulbs in their homes. The application offers an intuitive interface for users to easily adjust the color and brightness of the lights.
In conclusion, Flutter is widely used in real-world projects and is suitable for various scenarios, including mobile applications, embedded devices, desktop applications, and media and entertainment applications. Case studies and experience sharing show that Flutter improves development efficiency and provides a consistent user experience. Whether for large internet companies or startups, Flutter is a powerful tool for building outstanding applications.
## 7. Flutter Ecosystem and Community Support
### 7.1. Flutter Plugins and Libraries
Flutter plugins and libraries are an important part of the ecosystem, providing powerful tools and resources to help developers efficiently complete application development.
Flutter's official plugins and libraries include:
- flutter_bloc: Implements separation of business logic and state management.
- flutter_redux: Implements the Redux architecture.
- flutter_localizations: Provides localization and internationalization support for Flutter applications.
- url_launcher: Allows Flutter applications to launch URLs on mobile devices. Supports web pages, emails, phone calls, etc.
- shared_preferences: Provides a simple way to persistently store key-value pair data.
- camera: Provides access to the camera, supporting photo taking and video recording.
- google_maps_flutter: A Flutter plugin for integrating Google Maps.
- firebase_core & firebase_auth: A series of Firebase plugins that provide support for Firebase core services and authentication features.
- path_provider: Used to find common locations on the device, such as temporary directories and document directories.
The community also offers excellent third-party plugins and libraries, such as:
- flutter_map: Integrates maps.
- flutter_webview_plugin: Integrates WebView.
- dio: Network requests.
- http: Provides a simple way to make HTTP requests, a common library for network communication in Flutter applications.
- sqflite: A SQLite plugin that allows Flutter applications to access and manipulate SQLite databases on iOS and Android.
- rxdart: Provides support for ReactiveX programming techniques, used for composing asynchronous and event-based code.
- get_it: A simple service locator for dependency injection in Flutter applications.
- flutter_svg: Allows Flutter applications to directly render SVG format vector graphics.
- intl: Provides internationalization and localization features, including date/time formatting, number formatting, and message translation.
- bloc: Provides a way to manage application state and event streams, based on the Business Logic Component (Bloc) pattern.
- provider: A popular state management library that is based on InheritedWidget but is easier to use and understand.
### 7.2. Learning Resources and Development Tools
The Flutter community provides learning resources and development tools to help newcomers learn to use Flutter. Resources and tools include:
Official learning resources and development tools, such as:
- **Flutter Documentation**: The official documentation is the best starting point for learning Flutter, offering comprehensive guides from installation to advanced topics.
- Website: [https://flutter.dev/docs](https://flutter.dev/docs)
- **Flutter API Reference**: The official API reference documentation, which provides detailed information about all Flutter framework classes and methods.
- Website: [https://api.flutter.dev](https://api.flutter.dev)
- **DartPad**: An online Dart programming tool that allows you to write and run Dart and Flutter code in your browser, perfect for quickly trying out code snippets.
- Website: [https://dartpad.dev](https://dartpad.dev)
- **Flutter Codelabs**: A series of interactive tutorials provided by Google that teach you how to use Flutter through practical coding cases.
- Website: [https://flutter.dev/docs/codelabs](https://flutter.dev/docs/codelabs)
- **Flutter YouTube Channel**: The official YouTube channel, containing educational videos about Flutter, updates, and community events.
- Website: [https://www.youtube.com/flutterdev](https://www.youtube.com/flutterdev)
Community learning resources and development tools, such as:
- **Codia AI**: Leveraging AI to produce top-tier code, consider [Codia AI Design](https://codia.ai/d/5ZFb) as an example, a tool that effortlessly turns screenshots into editable Figma UI designs. Simply upload an image of any app or website, and it smoothly handles the transformation. Meanwhile, [Codia AI Code](https://codia.ai/s/YBF9) extends these capabilities by facilitating Figma-to-Code conversions, serving a wide range of platforms such as Android, iOS, macOS, Flutter, HTML, CSS, React, Vue, and more, ensuring the creation of highly accurate code.
- Website: [https://codia.ai/](https://codia.ai/)
- **Awesome Flutter**: A curated list on GitHub that contains a large number of Flutter plugins, libraries, projects, articles, and resources.
- Website: [https://github.com/Solido/awesome-flutter](https://github.com/Solido/awesome-flutter)
- **Flutter Community Medium**: The official blog of the Flutter community on Medium, which publishes many high-quality articles about Flutter development.
- Website: [https://medium.com/flutter-community](https://medium.com/flutter-community)
- **Flutter Dev Reddit**: The Flutter developer community on Reddit, where you can find discussions, answers to questions, and Flutter-related news.
- Website: [https://www.reddit.com/r/FlutterDev/](https://www.reddit.com/r/FlutterDev/)
- **Visual Studio Code**: A lightweight but powerful code editor that supports Flutter and Dart plugins, offering features like code highlighting, code completion, and debugging.
- Website: [https://code.visualstudio.com/](https://code.visualstudio.com/)
- **IntelliJ IDEA / Android Studio**: IDEs built on JetBrains' IntelliJ platform; Android Studio in particular is tailored to Android development and also offers full support for Flutter.
- Website: [https://developer.android.com/studio](https://developer.android.com/studio)
- **Flutter DevTools**: A set of performance monitoring and debugging tools that can help you analyze performance issues in your Flutter applications.
- Website: [https://flutter.dev/docs/development/tools/devtools/overview](https://flutter.dev/docs/development/tools/devtools/overview)
### 7.3. Activity of the Flutter Community
The Flutter community is active with excellent developers and contributors. The community organizes events such as Meetups, conferences, and Hackathons to promote the development of the ecosystem.
The community has open GitHub repositories where anyone can submit code and contribute. The repositories contain open-source Flutter projects and plugins, providing resources and tools for developers.
## 8. Future Trends in Flutter Development
- Improved performance and stability: Continuous improvements and optimizations to meet demands.
- Richer UI component library: Expansion of the library to provide more UI components and animations.
- More third-party plugins and libraries: As Flutter becomes more popular, it is expected that more third-party resources and tools will emerge.
This article has introduced the features, advantages, and suitable scenarios of the Flutter framework, as well as the plugins and libraries, learning resources, and community support in the ecosystem. Finally, the future development of Flutter was anticipated and analyzed. In summary, Flutter is a powerful framework that provides useful tools and resources, making it easier and faster to develop high-quality mobile applications. | happyer |
1,755,637 | My Space Projects | https://car-showcase-r.vercel.app/ https://my-dictionary-beta.vercel.app/ https://cyrpto-verse.vercel... | 0 | 2024-02-08T14:35:55 | https://dev.to/srisomanaath/my-space-projects-4cgf | webdev, javascript, programming, react | https://car-showcase-r.vercel.app/
https://my-dictionary-beta.vercel.app/
https://cyrpto-verse.vercel.app/
https://imdb-movie-delta.vercel.app/
https://gpt-ui-eta.vercel.app/
https://tech-arion.vercel.app/
https://nike-clone-kappa-silk.vercel.app/
https://techarionspacecenter.netlify.app/ | srisomanaath |
1,755,652 | Unknown Resources | Unlocking Potential: The Benefits of Enabling Unknown Resources In the vast landscape of... | 0 | 2024-02-08T14:53:48 | https://dev.to/abeeb/unknown-resources-bn3 | android | # Unlocking Potential: The Benefits of Enabling Unknown Resources
In the vast landscape of technology, users often encounter cautionary messages about enabling "Unknown Resources" on their devices. While the warning is meant to protect users from potential security threats, there are situations where embracing the unknown can bring forth a myriad of benefits. [Apkzity](https://apkzity.com/) provides games that require enabling Unknown Resources, so here are the main benefits, point by point:
## 1. Access to Diverse Applications:
Enabling Unknown Resources allows users to explore a wide array of applications beyond the official app stores. This opens the door to innovative and niche applications that may not have gained recognition but can immensely enhance user experience.
## 2. Freedom of Choice:
Users gain the freedom to choose applications that align with their specific needs and preferences. This empowerment ensures that individuals are not restricted by the limitations imposed by curated app stores, fostering a more personalized and tailored digital experience.
## 3. Early Access to Updates:
Some developers release updates and beta versions of their apps directly, bypassing the app store approval process. Enabling Unknown Resources provides users with the opportunity to access these updates early, experiencing new features and improvements before they reach the broader audience.
## 4. Cost-Efficiency:
Several apps available through Unknown Resources may be free or come with alternative pricing models. Users can discover cost-effective alternatives to mainstream applications, reducing expenses while still enjoying high-quality functionality.
## 5. Preservation of Legacy Apps:
As official app stores evolve, older apps may be deprecated or removed, rendering them inaccessible to users. Enabling Unknown Resources allows for the preservation and continued use of beloved legacy applications that might otherwise be lost.
## 6. Customization and Tweaks:
For tech-savvy users, enabling Unknown Resources provides the gateway to customization and system tweaks. From alternative launchers to system modifications, users can tailor their devices to suit their preferences, enhancing the overall user experience.
## 7. Collaboration and Innovation:
By supporting Unknown Resources, users contribute to a more collaborative digital environment. Developers who might face challenges getting their apps approved on official platforms can still reach users directly, fostering innovation and diversity in the app ecosystem.
## 8. Regional Access:
Certain applications may be restricted to specific regions due to legal or licensing constraints. Enabling Unknown Resources can grant users access to apps that are not available in their geographical location, promoting cultural exchange and widening the global digital landscape.
In conclusion, while enabling Unknown Resources requires a cautious approach and awareness of potential risks, it can also unlock a realm of possibilities, promoting diversity, innovation, and personalization in the digital sphere. Users who embrace the unknown can reap the benefits of a more dynamic and enriched technological experience. | abeeb |
1,755,684 | Get some clothing Secrets | There are many types of specialty clothing which have been worn only for specific actions or by sure... | 0 | 2024-02-08T15:39:07 | https://dev.to/rossanelli/get-some-clothing-secrets-2a5o | There are many types of specialty clothing which have been worn only for specific actions or by sure people.
✔️ Rentals: You pay a monthly charge to borrow clothing. With some providers you pick exactly which items to receive with each shipment, while with others you build a virtual closet of styles you like and are sent what is available.
Getting the right clothes is a process, and you should be prepared for it to take time. Remember, the clothes will last you for a while, so there is no real rush to get them. It is easy to walk into a high-end store and buy their clothes in bulk, but who has the money to actually do that? If you prefer quality on a budget, go slow.
The brand offers free standard shipping on all purchases, and they recently switched from 100 percent compostable bags to 100 percent recycled LDPE bags so that their packaging can break down in landfills.
“I attempted to give myself authorization to allow these things for being correct. To acknowledge they'd not normally be true. Which I'd move as a result of this spell with additional ease if I didn't conquer myself up.”
There is a chance you have heard of Everlane because celebrities like Katie Holmes and Angelina Jolie wear pieces from the brand that look far more expensive than they really are. They not only excel at sourcing fine materials, like their ReCashmere sweaters, a blend of recycled cashmere and wool, but also share every [get some clothing](https://rossanelli.com/) element of their cost breakdown so you know exactly how much a piece of clothing costs to make.
The brand sells a lifestyle, one of beach escapes in the summer and warm dinner parties in the winter, brought to life in their "home" department. And while they have their own styles under the Tuckernuck label (personally eyeing this skirt today), they also host an impressive range of designers on their site. If preppy-chic is your go-to style, it is time you familiarized yourself with Tuckernuck.
Only authentic items in great condition are sold on the site, so you know you are getting the real deal (pun intended).
On top of that, the brand fully supports maternity clothing rentals, so you won't have to buy clothing that you will only wear during pregnancy and then leave at the back of your closet.
When in doubt, Nordstrom probably has it. The department store is known for offering a bit of everything in every size, from shoes to shirts to designer goods. Plus, when sales roll around, the bargains are so good it is near impossible to check out with only one or two items in your cart.
1,755,691 | Harnessing the Power of CSS Variables: A Comprehensive Guide | Introduction: CSS variables, denoted by property names prefixed with -- (double dash), offer a... | 0 | 2024-02-08T15:50:34 | https://dev.to/r4nd3l/harnessing-the-power-of-css-variables-a-comprehensive-guide-58hk | css, cssvariables, webdev, modularstyles | Introduction:
CSS variables, denoted by property names prefixed with -- (double dash), offer a flexible and efficient way to manage and reuse values across stylesheets. Also known as custom properties, CSS variables allow developers to define and apply dynamic values that can adapt to various contexts. In this guide, we'll explore the syntax, usage, and benefits of CSS variables, demonstrating their versatility in creating modular and maintainable styles.
### Understanding CSS Variables:
CSS variables, represented by custom property names prefixed with --, provide a mechanism for storing and reusing values throughout a stylesheet. These variables can be dynamically updated and applied using the `var()` function.
### Syntax:
```css
--variable-name: value;
```
### Example:
```css
:root {
--primary-color: #16f;
--secondary-color: #ff7;
}
.element {
background-color: var(--primary-color);
color: var(--secondary-color);
}
```
### Benefits of CSS Variables:
#### 1. Reusability and Consistency:
- CSS variables enable the definition of reusable values, promoting consistency across stylesheets.
- Centralizing commonly used values as variables simplifies maintenance and ensures uniformity in design.
#### 2. Dynamic Updates:
- CSS variables can be dynamically updated using JavaScript, allowing for real-time adjustments based on user interactions or system states.
- This dynamic behavior enhances flexibility and responsiveness in web design.
#### 3. Scoped and Cascading:
- CSS variables are scoped to the elements they are declared on and participate in the cascade.
- This scoped nature enables granular control over variable values, facilitating targeted styling adjustments.
### Usage Example:
```html
<div class="container">
<div class="box"></div>
</div>
```
```css
:root {
--box-color: #16f;
}
.box {
width: 100px;
height: 100px;
background-color: var(--box-color);
}
.container:hover {
--box-color: #ff7; /* Dynamically update box color on hover */
}
```
### Conclusion:
CSS variables, or custom properties, offer a powerful and flexible approach to managing stylesheets in web development. By leveraging CSS variables, developers can enhance maintainability, promote consistency, and enable dynamic styling adjustments, contributing to a more efficient and adaptable design workflow. | r4nd3l |
1,755,712 | Enhancing Infrastructure Resilience: FiberTite Roofing and DC PDUs | In today's fast-paced digital age, where businesses rely heavily on uninterrupted connectivity and... | 0 | 2024-02-08T16:15:35 | https://dev.to/raptorpowersystems/enhancing-infrastructure-resilience-fibertite-roofing-and-dc-pdus-34e2 | pdu, ups, roofing, dc | In today's fast-paced digital age, where businesses rely heavily on uninterrupted connectivity and data access, the integrity of critical infrastructure components cannot be overstated. Two essential elements in this realm are roofing systems and power distribution units (PDUs). While seemingly unrelated, the synergy between **[FiberTite roofing](https://www.raptorpowersystems.com/products/fibertite-roofing)** solutions and Direct Current (DC) PDUs exemplifies a holistic approach to infrastructure resilience and efficiency.
## FiberTite Roofing: Protecting the Overhead Assets
A roof is the first line of defense against environmental elements such as rain, wind, snow, and UV radiation. Traditional roofing materials often struggle to withstand these challenges over time, leading to leaks, structural damage, and ultimately, costly disruptions. FiberTite, a leading provider of roofing membranes, offers a solution to this problem with its innovative technology.
FiberTite roofing membranes are engineered using a unique combination of materials, including DuPont™ Elvaloy® Ketone Ethylene Ester (KEE). This composition provides exceptional strength, flexibility, and resistance to environmental factors, ensuring long-term durability and protection for facilities.
## Key Features of FiberTite Roofing
**Durability:** FiberTite roofing membranes are designed to withstand extreme weather conditions, including high winds, hail, and temperature fluctuations, without compromising performance.
**Waterproofing:** The thermoplastic membrane technology used in FiberTite roofs creates a seamless, watertight barrier, preventing leaks and water damage to the underlying structure.
**UV Resistance:** FiberTite membranes incorporate UV stabilizers that protect against sun damage, preserving the integrity and appearance of the roof over time.
**Energy Efficiency: **By reflecting a significant portion of solar radiation, FiberTite roofs help reduce heat transfer into the building, lowering cooling costs and improving energy efficiency.
**Low Maintenance: **With minimal maintenance requirements, FiberTite roofing systems offer long-term cost savings and peace of mind for facility managers.
## Direct Current (DC) Power Distribution Units (PDUs): Empowering Efficiency
In data centers and other mission-critical facilities, reliable power distribution is paramount. Traditional alternating current (AC) systems have been the standard for decades, but the rise of DC power distribution offers numerous advantages in terms of efficiency, reliability, and compatibility with modern technologies.
DC PDUs facilitate the direct distribution of power from a central source to IT equipment, eliminating the need for multiple conversions between AC and DC within the data center. This streamlined approach reduces energy loss, heat generation, and equipment complexity, resulting in significant efficiency gains.
## Advantages of DC Power Distribution Units
**Energy Efficiency:** DC power distribution minimizes energy losses associated with AC-DC conversions, leading to lower operating costs and reduced environmental impact.
**Improved Reliability:** By eliminating multiple points of failure inherent in AC systems, **[DC PDUs](https://www.raptorpowersystems.com/products/pdu-s/dc-pdu-s)** enhance overall system reliability and uptime.
**Scalability:** DC power distribution is inherently scalable, allowing for easier integration of renewable energy sources and future expansion of infrastructure.
**Compatibility:** Many modern IT devices, such as servers, storage systems, and networking equipment, operate on DC power. DC PDUs eliminate the need for additional power conversion equipment, simplifying installation and reducing hardware costs.
**Remote Management:** Advanced DC PDU models offer remote monitoring and management capabilities, allowing administrators to optimize power distribution, troubleshoot issues, and prevent downtime proactively.
## Synergy and Integration: FiberTite Roofing with DC PDUs
The synergy between FiberTite roofing systems and DC PDUs exemplifies a comprehensive approach to infrastructure resilience and efficiency. By integrating these two components, facility managers can create a more robust and sustainable environment for critical operations.
**Resilience:** FiberTite roofing membranes provide reliable protection against environmental hazards, safeguarding the infrastructure and equipment housed within the facility. Combined with the efficiency and reliability of DC power distribution, this creates a resilient foundation for uninterrupted operations.
**Energy Efficiency:** The energy-efficient properties of FiberTite roofs, coupled with the efficiency gains of DC power distribution, contribute to significant reductions in overall energy consumption and operating costs for the facility.
**Sustainability: **Both FiberTite roofing systems and DC PDUs support sustainability initiatives by reducing energy consumption, minimizing environmental impact, and prolonging the lifespan of infrastructure assets.
**Future-Proofing:** As technology continues to evolve, the integration of FiberTite roofing with DC power distribution positions facilities to adapt to future advancements seamlessly. This future-proofing approach ensures long-term viability and competitiveness in a rapidly changing landscape.
In conclusion, the combination of FiberTite roofing solutions and DC power distribution units represents a forward-thinking approach to infrastructure design and management. By prioritizing durability, efficiency, and sustainability, organizations can create resilient facilities that are well-equipped to meet the challenges of today and tomorrow. Investing in these critical components is not just a matter of protecting assets; it's a strategic decision to future-proof infrastructure and ensure business continuity in an increasingly interconnected world. | raptorpowersystems |
1,755,924 | Introduction to Machine Learning. | Machine learning tries to teach a computer a program and then expecting it to learn and create its... | 0 | 2024-02-08T18:04:06 | https://dev.to/chakkz0206/introduction-to-machine-learning-2k3f | Machine learning tries to teach a computer a program and then expects it to learn and create its own programs. It is like repeatedly teaching the computer a task in a way that makes it more efficient at performing that task. For example, if you enter a sales data set from 2000 to 2022 and perform tasks on it, the computer should be equipped with the ability to understand the underlying patterns and be able to predict the future.
We can see machine learning being used in different areas. Every e-commerce company uses machine learning for things like average order value, average time spent, and recommendations. It is also used for facial recognition, self-driving cars, Grammarly, classifying mail as spam, product bundling, social media analysis, tax payment defaults, loan prediction models and time series forecasting.
Machine Learning Paradigms:
**1. Supervised Learning**

**Concept:** You have labeled data, where each data point has a corresponding target value you want to predict. Think of it like learning with a teacher providing answers.
**Training:** The model learns the relationship between input data and target values by analyzing labeled examples. Imagine fitting a line through data points to capture the underlying pattern.
**Prediction:** Once trained, the model can predict target values for new, unseen data based on the learned relationship. Think of using the fitted line to predict values beyond the training data points.
**Common tasks:** Regression (predicting continuous values like house prices), classification (categorizing data points like spam emails), and decision trees (building tree-like structures for predictions).
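The "fitting a line through data points" idea above can be sketched in a few lines of plain Python. This is a minimal least-squares fit; the sales figures below are invented for illustration:

```python
# A least-squares line fit in plain Python: the "fitting a line through
# data points" idea from above. The sales figures are made up.
def fit_line(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

years = [2019, 2020, 2021, 2022]        # inputs
sales = [100, 110, 120, 130]            # labeled target values
m, b = fit_line(years, sales)
print(m * 2023 + b)                     # predict an unseen input -> 140.0
```

Once trained (here, once the slope and intercept are computed), the same line predicts targets for inputs it has never seen, which is exactly the supervised-learning workflow described above.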
**2. Unsupervised Learning**

**Concept:** You have unlabeled data, where no specific target value is provided. Think of exploring and making sense of unknown territory without a guide.
**Analysis:** The goal is to uncover hidden patterns or structures within the data. Imagine grouping similar data points or identifying important features.
**Applications:** Clustering (grouping similar data points), dimensionality reduction (compressing data while retaining key information), anomaly detection (finding unusual data points), and recommendation systems (suggesting relevant items based on user preferences).
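As a concrete sketch of "grouping similar data points", here is a minimal one-dimensional k-means in plain Python (the data and starting centers are made up for the example):

```python
# Minimal 1-D k-means: assign each point to its nearest center, then
# move each center to the mean of its cluster, and repeat.
def kmeans_1d(points, centers, iters=10):
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: abs(p - centers[i]))
            clusters[nearest].append(p)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

points = [1.0, 1.2, 0.8, 10.0, 10.5, 9.5]
centers, clusters = kmeans_1d(points, centers=[0.0, 5.0])
print([round(c, 3) for c in centers])   # -> [1.0, 10.0]
```

No labels were given, yet the two groups hidden in the data emerge on their own, which is the essence of the unsupervised setting.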
**3. Reinforcement Learning**

**Concept:** Imagine teaching a baby to walk through trial and error, with rewards for taking successful steps. Reinforcement learning works similarly, where an agent interacts with an environment, takes actions, and receives rewards for good choices.
**Learning:** The agent learns through trial and error, aiming to maximize long-term rewards. Think of exploring different paths and learning from successes and failures.
**Applications:** Robotics (training robots to navigate and perform tasks), game playing (training AI agents to play complex games), and resource management (optimizing decisions in complex systems). Reinforcement learning often involves reward functions designed to guide the agent towards desired behavior. Q-learning is a popular technique where the agent learns the value of taking specific actions in different states. | chakkz0206 | |
1,756,056 | Cogno-Cravings Unveils an Innovative Solution | The holiday season, although joyful, often triggers stress-related health problems. A significant statistic from the... | 0 | 2024-02-08T21:15:47 | https://dev.to/mikewh/cogno-cravings-desvela-una-solucion-innovadora-59b | The holiday season, although joyful, often triggers stress-related health problems. A significant statistic from the [National Alliance on Mental Illness](https://despertarperu.com) highlights that 64% of people suffer from holiday depression, aggravated by financial, emotional and physical stress. [CognoCravings](https://diariodelpacifico.com) addresses these challenges by offering an innovative method to counteract the effects of holiday stress on health. | mikewh | |
1,756,064 | Worbler AI on the iOS App Store! 🚀 | Hey Everyone! It's Noel here, President and Co-creator of Worbler AI! We’re in our early... | 0 | 2024-02-11T19:15:59 | https://dev.to/worblerai/worbler-ai-on-the-ios-app-store-431p | webdev, ai, ios, discuss | ## Hey Everyone!
It's Noel here, President and Co-creator of [Worbler AI!](https://apps.apple.com/us/app/worbler-ai/id1601684407)
We’re in our early days so I’m getting in touch with new users to learn more about what piqued your interest and how you might use the new [Worbler AI iOS app! ](https://apps.apple.com/us/app/worbler-ai/id1601684407)That being said, **content creators, video editors & AI enthusiasts,** give Worbler AI a try and let us know if you love it as much as we do! | noelweichbrodt |
1,756,076 | NetMQ: Efficient Inter-Process Communication | Introduction Welcome to the NetMQ tutorial, where we explore the capabilities of NetMQ, a... | 0 | 2024-02-08T22:52:09 | https://dev.to/gsilvamartin/netmq-efficient-inter-process-communication-fd2 | dotnet, tutorial, programming, networking | ## Introduction
Welcome to the `NetMQ` tutorial, where we explore the capabilities of NetMQ, a .NET library that extends the efficiency and flexibility of `ZeroMQ`. In this guide, we'll focus on two fundamental messaging patterns: **Publish-Subscribe (Pub-Sub) 📢** and **Request-Reply (Req-Rep) 🔄**.
**What is NETMQ?**
> NetMQ extends the standard socket interface with features traditionally provided by specialized messaging middleware products. NetMQ sockets provide an abstraction of asynchronous message queues, multiple messaging patterns, message filtering (subscriptions), seamless access to multiple transport protocols, and more. - NetMQ Documentation
---
## NETMQ vs Traditional Message Queues
In the realm of message queues, NetMQ distinguishes itself from traditional counterparts through its emphasis on asynchronous messaging and a versatile socket-based communication model. Unlike traditional queues that often rely on synchronous communication, NetMQ enables non-blocking interactions, allowing applications to continue processing seamlessly while awaiting messages.
Built on ZeroMQ's foundation, NetMQ supports various socket types, offering developers a more adaptable and efficient approach to communication. Moreover, NetMQ operates with a server-less architecture, eliminating the need for a central server or broker. This design choice simplifies deployment and enhances system resilience, providing a robust foundation for distributed systems.
With a focus on zero-copy messaging, NetMQ optimizes data transfer efficiency, minimizing unnecessary memory copies during message transmission. Its extensibility and flexibility further set it apart, allowing developers to implement custom messaging patterns and tailor the library to specific application requirements. In summary, NetMQ emerges as a compelling alternative, offering asynchronous communication, versatile sockets, and enhanced performance compared to traditional message queues.
## Messaging Patterns
`NetMQ ` introduces a spectrum of messaging patterns that go beyond traditional message queues, providing developers with powerful tools to build efficient and decoupled systems.
**Publish-Subscribe (Pub-Sub) 📢**
In the Pub-Sub pattern, NetMQ facilitates broadcasting messages from a single **publisher** to multiple **subscribers**. Publishers categorize messages into topics, and subscribers selectively receive messages based on their subscriptions. This pattern is ideal for scenarios where information needs to be disseminated to a diverse set of recipients without the sender being aware of the subscribers' identities.
**Request-Reply (Req-Rep) 🔄**
NetMQ's Request-Reply pattern establishes a robust, synchronous communication model between a **requester** and one or more **responders**. The requester sends a request message and awaits a corresponding response from the responder. This pattern is well-suited for scenarios where a clear exchange of information and acknowledgment is necessary, ensuring a reliable and orderly communication flow.
**Push-Pull, Pair (Exploration Continues)**
While our primary focus is on Pub-Sub and Req-Rep, NetMQ offers additional patterns like **Push-Pull** and **Pair**, each catering to specific communication scenarios. In this tutorial, we'll delve into the details of Pub-Sub and Req-Rep, providing practical insights and examples to demonstrate their implementation.
---
Now, let's explore the **Publish-Subscribe (Pub-Sub) pattern** in more detail.
## Pub-Sub Pattern
### Subscriber
```csharp
/// <summary>
/// Represents a ZeroMQ PUB-SUB pattern subscriber.
/// </summary>
public class ZeroMQPubSubConsumer
{
private static readonly string _consumerUrl = "tcp://127.0.0.1:5556";
/// <summary>
/// Registers the ZeroMQ subscriber and starts receiving messages in a PUB-SUB pattern.
/// </summary>
public static void RegisterZeroMQConsumer()
{
using (var subscriber = new SubscriberSocket())
{
subscriber.Connect(_consumerUrl);
subscriber.Subscribe("");
Console.WriteLine("Subscriber: Ready to receive messages.");
while (true)
{
string message = subscriber.ReceiveFrameString();
Console.WriteLine($"Received: {message}");
}
}
}
}
```
### Publisher
```csharp
/// <summary>
/// Represents a ZeroMQ PUB-SUB pattern publisher.
/// </summary>
public class ZeroMQPubSubPublisher
{
private static readonly string _publisherUrl = "tcp://127.0.0.1:5556";
/// <summary>
/// Registers the ZeroMQ publisher and starts sending messages in a PUB-SUB pattern.
/// </summary>
public static void RegisterZeroMQPublisher()
{
using (var publisher = new PublisherSocket())
{
publisher.Bind(_publisherUrl);
Console.WriteLine("Publisher: Ready to send messages.");
for (int i = 0; i < int.MaxValue; i++)
{
string message = $"Message {i}";
Console.WriteLine($"Sending: {message}");
publisher.SendFrame(message);
Thread.Sleep(1000);
}
Console.ReadLine();
}
}
}
```
### Pub-Sub in Action 📢
The Pub-Sub pattern is a powerful tool for scenarios involving:
- **Event Notification Systems:** Use Pub-Sub to notify multiple components or services about events without them knowing each other.
- **Real-time Updates:** Implement real-time updates in applications where dynamic information needs to be broadcasted to various clients.
- **Distributed Systems:** In a distributed architecture, Pub-Sub ensures seamless communication between different components.
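The fan-out role of the publisher can be mimicked in-process with a few lines of Python's standard library (queues standing in for sockets). This is only an illustration of the pattern, not NetMQ itself, and the class name is invented:

```python
import queue

# Toy Pub-Sub: the publisher fans every message out to each
# subscriber's queue without knowing who the subscribers are.
class ToyPublisher:
    def __init__(self):
        self._subscribers = []

    def subscribe(self):
        q = queue.Queue()
        self._subscribers.append(q)
        return q

    def send(self, message):
        for q in self._subscribers:   # every subscriber gets a copy
            q.put(message)

pub = ToyPublisher()
a, b = pub.subscribe(), pub.subscribe()
pub.send("Message 0")
print(a.get(), "|", b.get())   # both subscribers receive the broadcast
```

The key property mirrored here is the decoupling: the sender only calls `send`, and each receiver drains its own queue independently, just as `SubscriberSocket` instances do in the C# code above.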
---
Now, let's add the **Request-Reply (Req-Rep) pattern**.
## Req-Rep Pattern
### Requester
```csharp
/// <summary>
/// Represents a ZeroMQ REQ-REP pattern requester (client).
/// </summary>
public class ZeroMQReqRepRequester
{
private static readonly string _serverUrl = "tcp://127.0.0.1:5556";
/// <summary>
/// Sends requests to the ZeroMQ REQ-REP server.
/// </summary>
public static void RegisterZeroMQRequester()
{
using (var requester = new RequestSocket())
{
requester.Connect(_serverUrl);
Console.WriteLine("Requester: Ready to send requests.");
// Simulate sending requests periodically
for (int i = 0; i < int.MaxValue; i++)
{
string request = $"Request {i}";
Console.WriteLine($"Sending request: {request}");
requester.SendFrame(request);
string response = requester.ReceiveFrameString();
Console.WriteLine($"Received response: {response}");
Thread.Sleep(1000);
}
Console.ReadLine();
}
}
}
```
### Responder
```csharp
/// <summary>
/// Represents a ZeroMQ REQ-REP pattern responder (server).
/// </summary>
public class ZeroMQReqRepResponder
{
private static readonly string _serverUrl = "tcp://127.0.0.1:5556";
/// <summary>
/// Listens for incoming requests and sends responses in the ZeroMQ REQ-REP pattern.
/// </summary>
public static void RegisterZeroMQResponder()
{
using (var responder = new ResponseSocket())
{
responder.Bind(_serverUrl);
Console.WriteLine("Responder: Ready to receive and process requests.");
while (true)
{
string request = responder.ReceiveFrameString();
Console.WriteLine($"Received request: {request}");
string response = $"Response to {request}";
Console.WriteLine($"Sending response: {response}");
responder.SendFrame(response);
}
}
}
}
```
### Req-Rep in Action 🔄
The Request-Reply pattern proves beneficial in:
- **Microservices Communication:** When microservices need to exchange information in a synchronous manner, Req-Rep ensures reliable communication.
- **Command-Query Separation:** Adopt a clear separation between commands (requests) and queries (responses) in your system architecture.
- **Transactional Systems:** Req-Rep is suitable for transactional systems where requests must be acknowledged with corresponding responses.
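The blocking request/reply handshake can likewise be sketched with two standard-library queues and a thread (again an in-process illustration of the pattern, not NetMQ):

```python
import queue
import threading

# Toy Req-Rep: the requester blocks on a reply queue until the
# responder has processed its request.
requests, replies = queue.Queue(), queue.Queue()

def responder():
    # Receive one request and send back the matching response.
    req = requests.get()
    replies.put(f"Response to {req}")

t = threading.Thread(target=responder)
t.start()
requests.put("Request 0")    # requester sends ...
answer = replies.get()       # ... and waits synchronously for the reply
print(answer)
t.join()
```

Note the strict alternation: one request, one matching reply. That lockstep acknowledgment is what makes the pattern suitable for the transactional scenarios listed above.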
---
## Final Thoughts 💭
These patterns represent just a glimpse into NetMQ's versatile capabilities. While we introduced additional patterns like `Push-Pull and Pair`, the tutorial primarily emphasized Pub-Sub and Req-Rep for a deeper understanding.
Feel free to explore more examples and code snippets in the [ZeroMQ Examples GitHub repository](https://github.com/gsilvamartin/zero-mq-examples), where you can access additional resources to further enhance your proficiency with NetMQ.
Thank you for joining this exploration of NetMQ's messaging patterns.
Happy coding! | gsilvamartin |
1,756,219 | Why use Obsidian for Software Development ? | I've been using Obsidian for studying for a long time. As I read or watch a video lesson I add my... | 26,676 | 2024-02-09T01:46:17 | https://dev.to/sc0v0ne/why-use-obsidian-for-software-development-3j42 | softwaredevelopment, productivity, workstations, tools | I've been using **[Obsidian](https://obsidian.md)** for studying for a long time. As I read or watch a video lesson, I add my notes, links, images and much more. But I had never used it for software development in my day-to-day work, so I decided to use it for work tasks for a week, writing down my observations for this post.
### First of all, what is Obsidian?
I like to think of [Obsidian](https://obsidian.md) as a super notebook that is also quite simple. To get started with Obsidian, download the software from its official website. After installation you can begin: Obsidian uses the Markdown file format, which is similar to a plain text file but adds features such as tags for organizing your texts. I find Markdown really useful because it is simple and helps you focus on writing without needing a lot of configuration. Obsidian can be further improved with unofficial extensions, through which developers bring new features that enrich the software. But the most interesting thing is its second-brain feature, where you can connect files via hyperlinks and see relationships between different subjects.
Returning to the post, I decided to divide my observations into two themes:
- Obsidian in teams
- Individual Obsidian
### Obsidian in teams
For those familiar with software development, you know very well that we have to document everything, everything and everything. Even more so for us software developers focused on AI, where we need to evaluate models, compare results and create reports. Having all this well documented is a difficult task.
Using Obsidian brought me benefits on this first point. During that week I created a directory outside the software's documentation to focus solely on writing up the week's task. It was quite useful, as I could first put everything I needed into the documentation and then filter what was really necessary. I noticed that I spent less time in the browser and simply wrote down what I developed, adding the necessary examples.
Next, I also needed to generate a report on some model metrics. Obsidian has a tables feature, so I added the results there. Of course, you have to be realistic: the tables feature is not the same as Google Sheets, but for presenting results it makes things much easier. With the hyperlinks feature you can connect documentation with reporting, which makes it easier to point to what was described in each file.
After all this development in the directory, I added Git so I could start versioning and then push everything to GitHub. Why is this so good?
If you work in a team, your card or task doesn't always go straight to **done**. It goes to **review** first: your manager or someone else will evaluate whether your task meets expectations. Taking the example of a report you developed, whoever evaluates it can view it before it goes to the main branch, mark improvements on certain lines or add comments. If there are changes, you redo the cycle; once approved, the report, documentation or other artifact is controlled by versioning, so whenever it needs to be updated, previous versions are preserved.
Another resource is [Obsidian Publish](https://help.obsidian.md/Obsidian+Publish/Introduction+to+Obsidian+Publish). With it you can publish your content: for documentation, you can grant access; for AI models, you can provide development tutorials, obtained metrics and dataset reports.
### Individual Obsidian
I won't go into how I use Obsidian for personal things such as books and studies; the focus here is work. The section above discussed teams; this one covers the case where you keep Obsidian separate from your projects.
Starting with day-to-day tasks, I use Obsidian together with the **Pomodoro** technique. I have two files at the root, one called **Pomodoro Today** and one called **Pomodoro 2024**. Both have tables with columns for date, name, number of pomodoros and observations.
When my work schedule starts, the first pomodoro is for organizing the day: I check emails, the task board and chat messages, and update the table in the **Pomodoro Today** file. I update this file every day with the tasks I pick up, work through them during the day, and at the end of the day transfer the progress from **Pomodoro Today** to **Pomodoro 2024**, where the whole year's record is kept. Apart from the organization pomodoro, the rest of my work pomodoros are focused on development and meetings; every time one ends, I go to the day's task and mark a pomodoro. The pomodoro helps a lot in controlling time. Time is our greatest and most precious resource, so there is always room to improve how we use it. As for how long each pomodoro should last, I recommend finding the length that is most comfortable for you.
To help with focus pomodoros, within Obsidian I keep files with links to documentation, articles and projects. Having these links ready in Obsidian helps me avoid visiting other sites that cause distraction and break focus. One file I always keep in the root is **notes of the day**: whenever we are focused, the mind brings up a reminder of something we are forgetting, or a new idea appears. This file is for quick notes, so these thoughts don't conflict with what you are working on and distract you from the main focus.
### Conclusion
This was the perspective I wanted to share on using Obsidian for work. Remember that everyone has their own way of working, and you will not always adapt to the same tools, so we should always look for resources that help us and test them to see if we like them. I hope you liked these examples. Thank you for reading this far.
-----
### About the author:
{% embed https://dev.to/sc0v0ne %}
**A little more about me...**
Graduated in Bachelor of Information Systems, in college I had contact with different technologies. Along the way, I took the Artificial Intelligence course, where I had my first contact with machine learning and Python. From this it became my passion to learn about this area. Today I work with machine learning and deep learning developing communication software. Along the way, I created a blog where I create some posts about subjects that I am studying and share them to help other users.
I'm currently learning TensorFlow and Computer Vision
Curiosity: I love coffee
- [Kaggle](https://www.kaggle.com/sc0v1n0)
- [Gitlab](https://gitlab.com/sc0v0n3)
- [Github](https://github.com/sc0v0ne)
- [Mastodon](https://mastodon.social/@sc0v0ne)
----
### My Latest Posts
{% embed https://dev.to/sc0v0ne/becoming-efficient-with-pomodoro-1km8 %}
{% embed https://dev.to/sc0v0ne/my-super-powers-as-a-software-developer-2024-1h17 %}
{% embed https://dev.to/sc0v0ne/blueflix-idea-build-deploy-ji5 %}
------- | sc0v0ne |
1,756,341 | The exhaustive Manual for Settling my SBC global email is not working Difficulties: Investigating and Viable Arrangements | Find a definitive manual for beating difficulties with your SBC Global Email. This complete manual... | 0 | 2024-02-09T06:32:07 | https://dev.to/sbcglobal55/the-exhaustive-manual-for-settling-my-sbc-global-email-is-not-working-difficulties-investigating-and-viable-arrangements-560a | sbcglobalmails, mysbcglobalemail | Discover the ultimate guide to overcoming problems with your **[SBC Global Email](https://sbcglobalmails.com/)**. This complete manual offers in-depth troubleshooting techniques and practical solutions to ensure a seamless email experience. Dive into a wealth of knowledge designed to resolve difficulties easily, empowering you to diagnose and fix issues. Whether you are facing technical glitches or looking for workable fixes, this comprehensive manual is your go-to resource for troubleshooting "my SBC Global email is not working" problems. Improve your email experience with practical tips and solutions tailored to enhance efficiency and reliability. | sbcglobal55 |
1,756,364 | Superdense Coding-Math & Implementation | For classical communication there is a limit on how much information can be send with a given number... | 0 | 2024-02-09T07:01:41 | https://dev.to/mdzubair9492/superdense-coding-math-implementation-26ik |
For classical communication there is a limit on how much information can be sent with a given number of bits: we can send no more than n bits of information using n bits. However, there are ways to push this boundary that are not possible classically. One of them, available on a quantum computer, is superdense coding.
**Definition:**
> It is a procedure that allows someone to send two classical bits to another party using just a single qubit of information.
Here we will use 4 types of gates while implementing this procedure.
1. **H gate** -> This puts a qubit into superposition.
2. **X gate** -> X|0> -> |1> & X|1> -> |0>
3. **CX gate** -> It takes two qubits, one control and one target. If the control qubit is |0>, it does nothing to the target qubit; if the control qubit is |1>, an X gate is applied to the target qubit.
4. **Z gate** -> It only changes the phase of a qubit when it is in the |1> state, i.e., Z|0> -> |0> , Z|1> -> -|1>
The procedure is implemented as follows,

To keep track, we have placed 4 checkpoints here, namely W1, W2, W3 and W4.
Now let's see how this procedure works mathematically,





Now let's see its coding part, here we will use Qiskit for the implementation,
```
from qiskit import QuantumCircuit, Aer

message_list = ['00', '01', '10', '11']
backend = Aer.get_backend('qasm_simulator')

for msg in message_list:
    # Third party prepares the shared Bell pair
    qc_3rd_party = QuantumCircuit(2, 2)
    qc_3rd_party.h(1)
    qc_3rd_party.cx(1, 0)

    # Alice encodes her two classical bits on her qubit (qubit 1)
    qc_Alice = QuantumCircuit(2, 2)
    if msg[-2] == '1':
        qc_Alice.z(1)
    if msg[-1] == '1':
        qc_Alice.x(1)

    # Bob decodes by undoing the entangling operations, then measures
    qc_Bob = QuantumCircuit(2, 2)
    qc_Bob.cx(1, 0)
    qc_Bob.h(1)
    qc_Bob.measure([0, 1], [0, 1])

    complete_qc = qc_3rd_party.compose(qc_Alice.compose(qc_Bob))
    print(f'For message = {msg}\n')
    count = backend.run(complete_qc).result().get_counts()
    print(count, "\n")
```
The output of the code is ,

| mdzubair9492 | |
1,756,370 | Now You Understand: Central Bank Digital Currency (CBDC) Explanation and Usage - Part 1 | CBDCs have been much discussed in recent years, and various countries are exploring the possibilities... | 0 | 2024-02-09T07:09:35 | https://dev.to/cryptonstudio/now-you-understand-central-bank-digital-currency-cbdc-explanation-and-usage-part-1-3l53 | cbdc, digitalcurrency, blockchain, cryptocurrency | CBDCs have been much discussed in recent years, and various countries are exploring the possibilities of implementation. You may have heard or read about this digital currency, but do you have a good understanding of how CBDC is structured, how it can be used, and what impact central bank digital currency will have on the future of finance? We'll tell you all about CBDCs, welcome to Crypton Studio's Now You Understand articles.
The Crypton Studio team explains and educates about web3 and blockchain technologies in the Now you understand article series. We are happy to share our expertise with you and strive for an open dialogue. Don't forget to share your opinion in the comments if you like our work. Enjoy reading!
## What is CBDC?
Central Bank Digital Currency (CBDC) is a digital form of national central bank money that is created, controlled and maintained by the central bank. It is legal tender and the same central bank liability as ordinary banknotes. CBDC is operated on a digital ledger, which might or might not be a blockchain.
This enables faster and more secure transactions between banks, institutions and individuals. Central banks control the issuance of CBDCs and stand as guarantors of this form of money, in contrast to cryptocurrencies.
CBDC can currently take the following forms:
- Retail CBDC is used for settlements between individuals and legal entities; it is designed for simple payments and represents a digital form of currency.
- Wholesale CBDC is used for interbank payments as a new infrastructural solution.
- CBDCs can also represent digital assets; these are likewise registered in a digital ledger, which may or may not be distributed.
CBDC is a major step forward in the digitalization of money and the economy as a whole, increasing transparency, security and efficiency, automating many processes and reducing transaction costs.
## CBDC: Use Cases
CBDC is implemented mainly on platforms based on distributed ledger technology (DLT) with support for smart contracts (applications). That gives us many areas of usage and makes this implementation of a digital form of currency really useful for the financial system. Due to the platforms support for smart contracts it allows CBDC to be not just a digital currency but also a programmable form of currency.

Smart contract support allows you to deploy programs that interact and operate with the digital currency. For example, when we execute a property deal, we normally need someone to check the deal for compliance, approve it and register it.
In the case of CBDC, we do not need a third party to do this; we can simply implement a smart contract whose logic mirrors the logic of the deal: for instance, verifying that one side owns the property and that the other side has paid the declared value, and then transferring the property, optionally in the form of a token.
In this way we can automate the majority of transactions with almost any property, which significantly increases their availability, speed and performance.
It also removes the workload from the infrastructure created for such deals, cutting out the need for intermediaries and thereby reducing transaction costs.
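As a loose illustration of that deal logic (verify ownership, verify payment, then swap atomically), here is a drastically simplified Python sketch; all names and the dictionary-based ledgers are invented for the example:

```python
class PropertyDeal:
    """Illustrative escrow for the property-deal example: the asset title
    moves only if the seller owns it and the buyer holds the declared value."""

    def __init__(self, asset_owners, cbdc_balances):
        self.asset_owners = asset_owners   # asset_id -> current owner
        self.balances = cbdc_balances      # account -> CBDC balance

    def settle(self, seller, buyer, asset_id, price):
        if self.asset_owners.get(asset_id) != seller:
            raise ValueError("seller does not own the asset")
        if self.balances.get(buyer, 0) < price:
            raise ValueError("buyer has not funded the declared value")
        # Payment and title transfer happen together, with no intermediary.
        self.balances[buyer] -= price
        self.balances[seller] = self.balances.get(seller, 0) + price
        self.asset_owners[asset_id] = buyer

deal = PropertyDeal({"flat-1": "alice"}, {"bob": 500_000})
deal.settle("alice", "bob", "flat-1", 450_000)
print(deal.asset_owners["flat-1"], deal.balances["alice"])   # -> bob 450000
```

The point of the sketch is atomicity: either both the payment and the title change happen, or neither does, which is what removes the need for a trusted intermediary.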
Tokenization is the process of converting rights to assets or property into digital tokens. Exactly it allows to make transactions fully digital and automated with the use of smart contracts.
Research estimates that clearing and settlement of securities costs the central banks of the G7 countries more than $50 billion per year, due to the resource costs of asset transfers and account reconciliation.
A DLT-based CBDC addresses these inefficiencies and vulnerabilities in the current infrastructure: CBDC is natively digital and does not require the expensive and labour-intensive reconciliation currently needed for e-commerce and cross-border payments.
It also makes it possible to optimize the operation of any registry responsible for storing records of rights to property or assets. For example, DLT technology can replace and improve the work of registries of movable and immovable property, and of securities such as stocks and bonds.
It also makes deals available 24/7 by eliminating the need for a third party to determine transaction availability. Digital property and asset transactions increase the accuracy and security of settlement.
Offline payment technology, which provides access to the financial system in areas not covered by banking services and allows transactions to be made without access to the Internet, is currently being actively researched and realized.
We can see that CBDCs move the financial system into a digital form, which opens up many possibilities and optimizes the operation of the system as a whole. They give the central bank direct control over the money supply, simplify the distribution of state benefits, and improve oversight of transactions for tax purposes. Thus, timely payment of taxes or payment of bond coupons can be automated.
The implementation of CBDC also makes it possible to reduce costs and increase the availability of cross-border payments, including reducing credit risk by providing payment-versus-payment settlement.
## CBDC vs Stablecoins: what’s the difference?
There are several types of stablecoins: fiat-backed, cryptocurrency-backed, commodity-backed and algorithmic. In this comparison, we will focus on the first type - fiat-backed ones, because they have more in common with CBDCs and are the most popular. Examples of this type of stablecoins are Tether (USDT) and USD Coin (USDC).
The principle of Stablecoins is that organizations issue tokens based on the reserves of traditional money in their accounts. That is, in this case the issuance and control of such tokens remain with private institutions and they are not directly controlled by Central Banks.

CBDC, by contrast, is a form of national currency: its issuance and control belong to central banks, which provides higher security. Thanks to such regulation, CBDC can ensure compliance with tax policy, while stablecoins cannot. It is also important to note that stablecoins use private money as collateral, while CBDCs are backed by government-issued money.
## What is the difference between private solutions and public blockchains?
Public blockchains are primarily defined by the fact that they are open to the public. That means anyone can join and participate in the network. Public blockchains are such as Ethereum.
The consensus mechanism means that such a network is managed by the majority.
In particular, transactions are verified and included in new blocks. They are completely open and transparent, everyone can see all transactions and balances of any accounts.
In order to complete a transaction in a public network, it is necessary to verify and confirm the agreement of the majority of participants, which involves commissions, which are the motivation, as well as a longer waiting time due to the fact that it is necessary for the majority of nodes to reach an agreement.

Among the advantages of this solution, we have full transparency and no need to rely on a single control structure due to the high decentralization of the network, which ensures that no single entity controls the network.
A private network is closed and restricted to authorized participants who have full control over the network and transactions on it.
Private solutions allow centralized control over the network and also enable private transactions and private smart contracts. This solves privacy challenges that are very important for governments and corporations.
The consensus mechanisms of private solutions involve a very limited number of participants who verify and approve transactions, so transactions are much faster because they do not require majority verification.
Private networks also have no need for a native currency, and there are no transaction fees. These solutions appeal to governments and companies because of their centralized control and confidentiality features, which also allow them to control the legality of their operations. The guarantors of such a network are its participants, for example, governments and corporations. Examples of this type of solution are Quorum, Corda, Hyperledger Besu, etc.
## CBDC: Why use private solutions?
There are several key reasons why public solutions are not suitable for CBDC implementation.
**High volatility:** Public cryptocurrencies trade on an open market and are subject to significant price movements, which entails heavy risks for both the government and ordinary users. This makes them difficult to use as a payment method, since their value can change by tens of percent.
With private solutions, the Central Bank is the guarantor and has full control over the digital currency. CBDC is a form of currency rather than a cryptocurrency in the traditional sense.
**Privacy considerations:** In public solutions, all data is publicly available, which brings potential risks for use cases where confidentiality is a key factor. In private solutions, privacy can be controlled through private smart contracts, private transactions, and restrictions on network participants.
**Account control:** With public solutions, accounts and funds are under the control of the users themselves or organizations to which the users have delegated control. In the case of private solutions, account control is usually performed by authorised entities.
_Read "Now you understand: Central Bank Digital Currency (CBDC) explanation and usage, Part 2" about the most popular platforms for CBDC implementation around the world, the real story behind the Brazilian example, and global trends._
| cryptonstudio |
1,756,375 | How to Backup and Restore a PostgreSQL Database | When dealing with substantial amounts of data, protecting against potential data loss stemming from... | 0 | 2024-02-09T07:18:24 | https://dev.to/dbajamey/how-to-backup-and-restore-a-postgresql-database-fl6 | postgres, database | When dealing with substantial amounts of data, protecting against potential data loss stemming from hardware failures, software glitches, or human errors becomes crucial. This ensures the preservation and accessibility of critical information, a concern not limited to regular databases but extending to cloud services such as Heroku, Amazon S3, and Azure. Therefore, being able to create a backup and effectively restore it when necessary is essential.
This article contains a comprehensive guide on backing up and restoring PostgreSQL data and schema using a command line and a user-friendly GUI tool — dbForge Studio for PostgreSQL. Here, you will discover carefully selected theoretical material designed to broaden your knowledge on the subject, complemented by practical examples to provide you with a hands-on learning experience.
Let’s get started!
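The full walkthrough is in the linked guide, but as a quick preview of the command-line approach: PostgreSQL ships the `pg_dump` and `pg_restore` utilities for backing up and restoring a database. The Node.js helper below is only a sketch that assembles example invocations; the database name, user, and file name are hypothetical:

```javascript
// Sketch: build pg_dump / pg_restore invocations for a custom-format backup.
// The database name, user, and file path here are hypothetical examples.
function buildBackupCommand(db, user, outFile) {
  // -F c selects PostgreSQL's custom archive format, which pg_restore can read.
  return ['pg_dump', '-U', user, '-F', 'c', '-f', outFile, db].join(' ');
}

function buildRestoreCommand(db, user, inFile) {
  // -d tells pg_restore which (already existing) database to restore into.
  return ['pg_restore', '-U', user, '-d', db, inFile].join(' ');
}

console.log(buildBackupCommand('sales', 'postgres', 'sales.dump'));
// pg_dump -U postgres -F c -f sales.dump sales
console.log(buildRestoreCommand('sales', 'postgres', 'sales.dump'));
// pg_restore -U postgres -d sales sales.dump
```

In practice you would run these commands in a shell (or via a scheduler) rather than just printing them; the custom format (`-F c`) is convenient because it supports selective and parallel restores.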
https://blog.devart.com/postgresql-backup.html | dbajamey |
1,756,515 | More than 500!!! | 🎉 Thank You for 500+ Followers on LinkedIn! 🙌 🚀 Huge thanks to each of you for your support and... | 0 | 2024-02-09T09:47:15 | https://dev.to/relianoid/more-than-500-7gm | 🎉 Thank You for 500+ Followers on LinkedIn! 🙌
🚀 Huge thanks to each of you for your support and engagement. Let's keep enjoying, connecting, learning, and growing together! 🌟 #Milestone #Gratitude #LinkedInCommunity #growth #support #thanks #500
 | relianoid | |
1,756,532 | Can Mydukaan.com Bare Metal Transition Truly Save Costs? | Mydukaan.com's transition to Bare Metal servers aims to reduce costs, but its effectiveness remains... | 0 | 2024-02-09T10:15:23 | https://dev.to/linearloophq/can-mydukaancom-bare-metal-transition-truly-save-costs-39l9 | baremetal, costsaving, aws, transition | Mydukaan.com's transition to Bare Metal servers aims to reduce costs, but its effectiveness remains uncertain. Initially burdened by high expenses on AWS, Mydukaan.com sought a more economical solution without sacrificing performance.
The move to Bare Metal holds promise for significant cost savings, potentially boosting the platform's financial efficiency. However, the transition poses challenges such as increased complexity and upfront investment requirements.
Mydukaan.com must carefully handle these obstacles to ensure the transition delivers the expected cost benefits. Despite uncertainties, the shift to Bare Metal aligns with Mydukaan.com's goal of providing an affordable solution for small businesses.
By optimising infrastructure for cost-efficiency, Mydukaan.com aims to enhance its operational sustainability and competitive edge.
As the transition progresses, the true impact on costs will become clearer, determining the success of Mydukaan.com's Bare Metal strategy.
**Read more: [Mydukaan.com’s 70% Cloud Cost Reduction with Bare Metal](https://www.linearloop.io/blog/cost-efficient-server)**
1,756,597 | The Importance of Regular Security Testing in Custom Software Development | Regular security testing is a crucial component in the lifecycle of custom software development. It... | 0 | 2024-02-09T11:41:55 | https://dev.to/vipulgupta/the-importance-of-regular-security-testing-in-custom-software-development-4d7p | customsoftwaredevelopment | Regular security testing is a crucial component in the lifecycle of [custom software development](https://www.taazaa.com/custom-software-development-guide/). It ensures that the software is robust, reliable, and, most importantly, secure from various cyber threats. In today's digital age, where technology is deeply integrated into every aspect of our lives, the significance of securing software cannot be overstated. This article delves into the importance of regular security testing in custom software development, highlighting its benefits, methodologies, and best practices.
## Understanding the Need for Security Testing
Custom software development involves creating tailored solutions to meet the specific needs of a business or user group. While this customization offers numerous advantages, it also introduces unique security challenges. Custom software, unlike off-the-shelf solutions, is not widely tested in the real world upon release. This lack of exposure means potential vulnerabilities may not be discovered until malicious actors exploit them.
Security testing becomes crucial in this context. It is a proactive approach to uncover vulnerabilities, flaws, or threats in software applications that attackers could exploit. The primary goal is to identify and rectify these security gaps before the software is deployed or in its maintenance phase.
## Benefits of Regular Security Testing
- Early Detection of Vulnerabilities: Regular security testing allows for the early detection of vulnerabilities, enabling developers to fix issues before they can be exploited. This proactive approach significantly reduces the risk of data breaches and cyber-attacks.
- Compliance with Regulations: Many industries are governed by strict regulatory standards that mandate certain levels of cybersecurity. Regular security testing ensures compliance with these regulations, avoiding potential legal and financial penalties.
- Enhanced Customer Trust: By demonstrating a commitment to security, organizations can enhance trust with their customers. Knowing that a company invests in regular security checks can be a significant factor in a customer's decision to choose their services.
- Cost Savings: Identifying and fixing vulnerabilities at an early stage can lead to substantial cost savings. The cost of addressing a security issue after a product is launched can be substantially higher, including not just the fix itself but also potential damages, legal fees, and loss of reputation.
## Critical Methodologies in Security Testing
Security testing encompasses various methodologies to identify different types of vulnerabilities:
- Static Application Security Testing (SAST): Analyzes source code for vulnerabilities without running the program. It's useful early in the development cycle.
- Dynamic Application Security Testing (DAST): Tests the application while it's running, simulating attacks on its surface to identify runtime vulnerabilities.
- Penetration Testing: This involves simulating cyber-attacks to evaluate the security of the system. It's a hands-on approach that can uncover complex security issues.
- Security Auditing: This involves a comprehensive review of the application's code and design to check for compliance with security standards and policies.
- Threat Modeling: Identifies potential threats and vulnerabilities to prioritize security measures based on the risk they pose.
## Best Practices for Regular Security Testing
- Integrate Security Testing into the Development Lifecycle: Security testing should not be an afterthought. Integrating these practices early and throughout the development cycle ensures that security is a priority from the start.
- Automate Where Possible: Automation can significantly increase the efficiency and coverage of security testing. Tools for SAST and DAST can be integrated into the continuous integration/continuous deployment (CI/CD) pipeline for regular checks.
- Stay Updated with Threat Intelligence: Cyber threats are constantly evolving. Staying informed about the latest security trends and vulnerabilities enables teams to adapt their testing strategies accordingly.
- Conduct Regular Penetration Testing: Scheduled penetration testing by external experts can provide an objective assessment of the security posture of your software.
- Educate Your Team: Regular training on the latest security best practices and awareness of common vulnerabilities can empower developers to write more secure code.
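To make the automation point above concrete, here is a deliberately simplified, hypothetical sketch in the spirit of a SAST check: it flags a few well-known risky JavaScript patterns in source text. Real SAST tools perform far deeper analysis (data flow, taint tracking); this is only an illustration:

```javascript
// Toy SAST-style scanner: flags a few notoriously risky patterns.
// Illustrative sketch only, not a substitute for a real SAST tool.
const RISKY_PATTERNS = [
  { pattern: /\beval\s*\(/, message: 'Use of eval() can enable code injection' },
  { pattern: /\bdocument\.write\s*\(/, message: 'document.write() can enable XSS' },
  { pattern: /\binnerHTML\s*=/, message: 'Assigning innerHTML can enable XSS' },
];

function scanSource(source) {
  const findings = [];
  source.split('\n').forEach((line, index) => {
    for (const { pattern, message } of RISKY_PATTERNS) {
      if (pattern.test(line)) {
        findings.push({ line: index + 1, message });
      }
    }
  });
  return findings;
}

const sample = 'const user = getInput();\neval(user);\n';
console.log(scanSource(sample));
// [ { line: 2, message: 'Use of eval() can enable code injection' } ]
```

A check like this could run on every commit in a CI/CD pipeline, failing the build when findings appear, which is exactly the kind of regular, automated testing described above.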
## Conclusion
The importance of regular [security testing](https://www.taazaa.com/software-security/) in custom software development cannot be overstated. It is a critical component that ensures the security and reliability of software products in an increasingly digital world. By adopting a proactive approach to security, integrating testing into the development lifecycle, and following best practices, organizations can protect themselves and their customers from the ever-present threat of cyberattacks. This commitment to security is not just about protecting data; it's about building trust, ensuring compliance, and, ultimately, safeguarding the reputation and success of the business in the digital age. | vipulgupta |
1,756,624 | Want to Upgrade Your Office? Have You Thought About Furniture Rental | In today's fast-paced business world, flexibility is key. Whether you're a startup looking to make a... | 0 | 2024-02-09T12:06:24 | https://dev.to/corporaterentals01/want-to-upgrade-your-office-have-you-thought-about-furniture-rental-5cfi | furniturerental, rentalfurniture | In today's fast-paced business world, flexibility is key. Whether you're a startup looking to make a mark or an established company seeking to adapt to changing needs, the way you furnish your office space plays a significant role in productivity, employee satisfaction, and overall brand image. However, purchasing office furniture outright can be a significant investment, especially when you consider the evolving needs of your business. That's where [Furniture Rental](https://corporaterentals.com/) comes into play – a flexible solution that offers a myriad of benefits for businesses of all sizes.

## 1. Flexibility
One of the primary advantages of furniture rental is its flexibility. Unlike purchasing furniture outright, where you're committed to a particular set of items for the long term, renting allows you to adapt quickly to changing needs. Whether you're scaling up your team, reconfiguring your office layout, or simply looking to refresh your space, furniture rental offers the flexibility to make adjustments without the hassle of buying, selling, or storing furniture.
## 2. Cost-Effectiveness
Cost is often a significant concern for businesses, especially startups and small businesses operating on tight budgets. Furniture rental provides a cost-effective alternative to purchasing furniture outright. Instead of making a large upfront investment, you can spread the cost of furnishing your office over time with manageable monthly payments. Additionally, renting eliminates the need for costly maintenance, repairs, and storage, saving you money in the long run.
## 3. Access to High-Quality Furniture
With furniture rental, you have access to a wide range of high-quality furniture options from top brands and manufacturers. Whether you prefer modern, ergonomic designs or classic, traditional styles, rental companies offer a diverse selection to suit your preferences and budget. Plus, you can easily upgrade to newer models or swap out items as needed to keep your office space looking fresh and up-to-date.
## 4. Hassle-Free Maintenance
Maintaining office furniture can be a time-consuming and costly endeavor. From routine cleaning and repairs to replacing outdated or damaged items, the maintenance requirements can quickly add up. With furniture rental, however, the rental company takes care of all maintenance tasks, ensuring that your furniture remains in pristine condition throughout the duration of your rental agreement. This allows you to focus on running your business without the hassle of dealing with furniture-related issues.
## 5. Sustainability
In today's environmentally conscious world, sustainability is a growing concern for businesses. By opting for furniture rental, you're choosing a more sustainable approach to office furnishing. Rental companies often prioritize sustainability by offering eco-friendly furniture options made from recycled materials and using energy-efficient manufacturing processes. Additionally, renting furniture reduces the demand for new furniture production, helping to conserve natural resources and reduce waste.
## 6. Easy Upgrades and Customization
As your business evolves, so do your office furniture needs. With rental furniture, you have the flexibility to upgrade or customize your furniture solutions to meet changing requirements. Whether you need to add more workstations, upgrade to ergonomic chairs, or incorporate collaborative spaces, rental companies can easily accommodate your requests. This adaptability ensures that your office space remains functional, comfortable, and conducive to productivity.
## 7. Try Before You Buy
Renting office furniture allows you to "try before you buy," giving you the opportunity to test out different furniture options before making a long-term commitment. This is especially beneficial for businesses that are unsure about their specific needs or aesthetic preferences. By renting furniture initially, you can experiment with different styles, layouts, and configurations to determine what works best for your team and your workspace.
## Conclusion
In conclusion, furniture rental offers a flexible, cost-effective, and hassle-free solution for businesses looking to upgrade their office spaces. Whether you're a startup looking to furnish your first office or an established company seeking to adapt to changing needs, renting furniture provides numerous benefits, including flexibility, cost-effectiveness, access to high-quality furniture, hassle-free maintenance, sustainability, easy upgrades and customization, and the opportunity to try before you buy. By choosing furniture rental, you can create a productive, comfortable, and stylish office space that reflects your brand identity and supports your business goals.
## Also Read:
[Your Guide to Choosing the Right Furniture Rental Company in the USA](https://www.hashtap.com/write/7owEO4ePLYwe?share=5NVaE5W28ydlHaq4j9zgfB7krZCC8pHF)
[The Best Way to Make Your Space More Ambient with Rentable Furniture](https://www.myminifactory.com/stories/the-best-way-to-make-your-space-more-ambient-with-rentable-furniture-65c5ffb350751)
[THE ROLE OF RENTAL FURNITURE IN THE SHARING ECONOMY](https://bdnews55.com/2024/02/09/the-role-of-rental-furniture-in-the-sharing-economy/) | corporaterentals01 |
1,756,750 | The Future of AI Evolution: Animation at the Forefront | As we stand on the cusp of a technological renaissance, artificial intelligence (AI) is rapidly... | 0 | 2024-02-09T13:21:54 | https://dev.to/arena_animation/the-future-of-ai-evolution-animation-at-the-forefront-2p9 | animation, webdev, ai | As we stand on the cusp of a technological renaissance, artificial intelligence (AI) is rapidly becoming an underlying force that could redefine how we live, learn, and work. The realm of animation is just one of the many sectors poised on the brink of a transformation propelled by AI. In this blog post, we will speculate on the future of AI evolution, delving into how it may reshape education, influence careers, and alter job markets, with a particular focus on the burgeoning interaction between AI and animation.

### AI: The Canvas of Tomorrow
The evolution of AI is set not only to create tools that assist artists but also to become a collaborator in the creative process. Here's a glimpse into what AI could bring to the dynamic world of animation:
### Autonomous Animators:
Imagine AI algorithms creating complex animations, synthesizing thousands of images, and designing entire sequences with minimal human input. The next generation of AI tools will likely offer animators the ability to flesh out scenes and characters in ways that are currently unimaginable.
### Hyper-Realistic Visuals:
As AI grows more sophisticated, its ability to render hyper-realistic textures, lighting, and physics will elevate visual storytelling. This could lead to animations that are nearly indistinguishable from reality, blurring the lines between animation and live-action.
### Enhanced Efficiency:
With AI's evolution, the labor-intensive aspects of animation such as rigging, keyframing, and tweening could be expedited or performed automatically. This surge in productivity will allow creators to focus more on the artistry and storytelling elements of animation.
### Societal Ripples: Education and Employment
The integration of AI within animation has far-reaching implications beyond the industry itself:
### Revamping Education:
Courses and curriculums will need to adapt to teach both the technical and creative aspects of AI in animation. Educators should prepare students for a future where AI tools are intrinsic to the craft of animation, providing a synergy between creative intuition and technical prowess.
### Career Evolution:
As AI tools become mainstay features in the animator's toolkit, professionals in the field may need to redefine their roles. There will be a greater emphasis on ideation and conceptualization as AI handles the more menial tasks. Animators may become directors of AI, guiding its creative process rather than executing every minute detail.
### Shifting Job Landscape:
With AI taking on a greater share of animation production work, the job market might experience a shift where technical jobs morph to blend with creative ones. Workers may transition into roles that are supervisory or consultative, leveraging AI's capabilities while applying the human touch that brings characters and stories to life.
## Learning with AI: An Animation Course for the Future
In anticipation of these monumental shifts, education in the AI-augmented realm of animation will become paramount. Such a course—on the cusp of art and technology—would offer insights into:
- Utilizing AI for animation workflows
- Developing a creative partnership with intelligent systems
- Navigating the ethical considerations of AI-generated content
- Fostering a balance between artistic intuition and automated processes
This advanced [animation course](https://www.arenach.com/animation-courses-kolkata/) would not only instruct on the current technologies but also encourage adaptability and foresight—skills necessary for thriving in an AI-infused future.
### Conclusion
While uncertainty shrouds the exact trajectory of AI's evolution, one thing remains clear: AI is set to invigorate the animation industry with new capabilities and redefine roles across education and employment sectors. As we poise ourselves on the brink of this new era, embracing AI in animation can unlock untold creative possibilities and forge a future vibrant with dynamic visual narratives. The time to prepare for this shift is now, lest we find ourselves spectators rather than shapers of this animated new world.
| arena_animation |
1,756,814 | Node.js Modules and npm | Node.js Modules and npm Node.js is a powerful, open-source JavaScript runtime environment... | 0 | 2024-02-09T15:10:05 | https://dev.to/romulogatto/nodejs-modules-and-npm-4pj | # Node.js Modules and npm
Node.js is a powerful, open-source JavaScript runtime environment that allows developers to write server-side applications using JavaScript. One of the key features of Node.js is its modular approach, which helps in organizing code into reusable components known as modules.
In this guide, we will explore the concepts of Node.js modules and npm (Node Package Manager) and learn how to use them effectively in your projects.
## What are Node.js Modules?
Modules in Node.js are self-contained pieces of code that can be imported or exported for reusability. These modules help break down larger applications into smaller, manageable chunks, making development faster and more maintainable.
To create a module in Node.js, you simply need to define a file with ".js" extension containing your desired functionality. For example, let's say you want to create a module called "utils" with various utility functions:
```javascript
// utils.js
function add(a, b) {
return a + b;
}
function subtract(a, b) {
return a - b;
}
module.exports = {
add,
subtract
};
```
In the above example, we have defined two functions (`add` and `subtract`) inside `utils.js`, and then exported them using the `module.exports` statement for other parts of our application to consume.
## Using Modules in Your Application
Once you have created your module(s), you can easily import them into your main application file using the `require()` function. Let's assume our main application file is called "app.js":
```javascript
// app.js
const utils = require('./utils');
console.log(utils.add(5, 3)); // Output: 8
console.log(utils.subtract(10, 4)); // Output: 6
```
| romulogatto | |
1,757,212 | Cursos De UX/UI Design, Cloud Computing, CMS E Outras Tecnologias Gratuitos | Embarque em uma jornada de aprendizado com os cursos gratuitos da Jornada Do Dev, projetados para... | 0 | 2024-03-01T11:59:53 | https://guiadeti.com.br/cursos-ux-ui-design-cloud-computing-cms-gratuitos/ | cursogratuito, cloud, cursosgratuitos, programacao | ---
title: Cursos De UX/UI Design, Cloud Computing, CMS E Outras Tecnologias Gratuitos
published: true
date: 2024-02-09 21:54:51 UTC
tags: CursoGratuito,cloud,cursosgratuitos,programacao
canonical_url: https://guiadeti.com.br/cursos-ux-ui-design-cloud-computing-cms-gratuitos/
---
Embark on a learning journey with the free courses from Jornada Do Dev, designed to boost your career as a programmer. Choose among specialized tracks in 3D, UX/UI Design, Cloud Computing, CMS, Games, Back-end, Front-end, and Mobile Apps, shaping your path according to your professional goals.
Designed to guide you from the basics to a professional level, the Jornadas are complete guides, composed of a series of courses strategically structured to reach specific objectives.
Even if you have never programmed before, the Jornadas provide the foundation you need to start your journey in the world of programming, regardless of your prior experience.
With an inclusive, hands-on methodology, the Jornada Do Dev courses were built to take you from zero to professional, even if you have never written a line of code. Discover your potential and start your journey toward success in programming!
## Jornada Do Dev Courses
The Jornada Do Dev platform offers free programming courses and jornadas, giving users the opportunity to start or advance a career as a programmer.

_The Jornada do Dev platform_
### Courses Offered
Explore the various tracks available, including 3D, UX/UI Design, Cloud Computing, CMS, Games, Back-end, Front-end, and Mobile Apps, allowing each user to choose the profession that best matches their aspirations. Here is the syllabus:
#### 3D
- Blender
- SketchUp
- Autodesk Maya
#### Cloud
- Linux (Ubuntu)
- Git e Github
- Windows Server
- Linux (Debian)
- AWS (Amazon Web Services)
- cPanel
#### CMS
- WordPress
- Plugins para WordPress
- Magento
- Drupal
#### Back-End
- Node.js
- MySQL
- PHP
- Laravel
- MongoDB
- Python
- Ruby
- Ruby on Rails
- CodeIgniter 4
- Java
- Docker
- REST API
- Django
- PostgreSQL
- Go
- NestJS
- GraphQL
- CakePHP
- AdonisJs
#### Front-End
- HTML5
- JavaScript
- CSS 3 Básico
- Bootstrap 5
- jQuery
- Angular
- Sass
- Less
- React.js
- Gulp
- TypeScript
- Vue.js
- Next.js
- Nuxt.js
#### Games
- Unity 3D
- Unreal Engine
- CryEngine
- C#
- Lua
#### Mobile Apps
- Android Studio
- Ionic
- Swift
- jQuery Mobile
- React Native
- Flutter
- Kotlin
- Dart
#### UX/UI Design
- Photoshop CC
- CorelDraw
- Adobe Illustrator CC
- Figma
- Sketch
### Personalized Jornadas
The Jornadas are carefully designed to guide users from zero to professional, regardless of their previous programming experience. Each Jornada is a complete guide, composed of a series of courses structured to reach specific goals. Here is the list:
- Interface Developer with React
- Interface Developer with Vue.js
- Interface Developer with Angular
- Interface Developer with Svelte
- API Developer with Node.js + Express.js
- API Developer with NestJS
- API Developer with AdonisJS
- Android and iOS App Developer with React Native
- Android and iOS App Developer with Ionic
- Android and iOS App Developer with Flutter
- Android App Developer with Kotlin
- Server Administrator (Digital Ocean)
- Website Creator with WordPress
- Online Store Creator with Magento
- Interface Designer
- 2D Game Developer with Unity
- 3D Game Developer with Unity
- Game Developer with Unreal Engine
- Website Developer with Ruby on Rails
- Website Developer with Python and Django
- Website Developer with CodeIgniter
- Website Developer with Laravel
- Full-Stack Developer with MEVN Stack
- Full-Stack Developer with PERN Stack
### No Prior Experience Required
Even if you have never programmed before, the Jornadas Do Dev were designed to provide an inclusive experience, allowing everyone, regardless of previous experience, to begin their journey in programming.
### Choose Your Own Pace: Courses or Jornadas
Whether starting with individual courses or with complete Jornadas, the Jornada Do Dev platform offers the flexibility for users to learn at their own pace and reach their programming goals.
## UX/UI Design
UX/UI Design has been gaining prominence in the creation of engaging and effective digital experiences.
UX (User Experience) Design focuses on the user's overall experience when interacting with a product or service, while UI (User Interface) Design deals with the appearance and usability of the user interface.
### Principles and Practices: Fundamentals of UX/UI Design
UX/UI Design professionals apply a variety of principles and practices, including user research, prototyping, iterative design, and usability testing, to create intuitive, user-centered solutions.
### Tools and Technologies
From Adobe XD to Sketch and Figma, UX/UI Design tools offer powerful features for designing and prototyping user interfaces efficiently.
### Challenges and Opportunities
UX/UI Design professionals face unique challenges, such as ensuring accessibility, creating responsive designs, and maintaining brand consistency, while seizing opportunities to innovate and differentiate their products.
### Trends and Evolution
With the rapid evolution of technology and user expectations, UX/UI Design keeps evolving, requiring designers to stay up to date with the latest trends and practices.
### Career Growth
The field of UX/UI Design offers a variety of exciting career opportunities, from user interface designer and user experience architect to user experience consultant and product manager.
## Jornada Do Dev
The Jornada do Dev is a program, created by Fred Vanelli, designed to help those who want to become developers, providing the knowledge and essential skills to start a career in the world of programming.
### Adaptable to All Levels
Regardless of your level of experience or prior programming knowledge, the Jornada do Dev was conceived to meet everyone's needs, offering an inclusive approach.
### Structured Education
Each course, from fundamental principles to advanced topics, is carefully designed to establish a solid knowledge base and then expand into more specialized areas.
### Exploring Essential Languages and Concepts
Throughout the Jornada do Dev, participants have the opportunity to learn several programming languages, such as HTML, CSS, JavaScript, Python, Java, and others. They explore fundamental software development concepts, covering both front end and back end, such as data structures, algorithms, databases, and version control.
### Expert Guidance and Intensive Practice
With guidance from expert instructors and the support of interactive resources, participants can sharpen their programming skills through hands-on projects and challenging exercises, ensuring a complete and engaging educational experience.
## Start your journey in the world of programming!
[Registration for the 3D, UX/UI Design, Cloud Computing, CMS, Games, Back-end, Front-end, and Mobile Apps courses](https://jornadadodev.com.br/) must be completed on the Jornada Do Dev website.
## Share the opportunity to become a developer!
Did you like this content about the free courses and jornadas? Then share it with everyone!
The post [Cursos De UX/UI Design, Cloud Computing, CMS E Outras Tecnologias Gratuitos](https://guiadeti.com.br/cursos-ux-ui-design-cloud-computing-cms-gratuitos/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,757,253 | Hotels in New Cairo City | New Cairo City offers travelers a diverse and enticing array of hotels, providing options to suit... | 0 | 2024-02-10T02:09:27 | https://dev.to/thevilla604/hotels-in-new-cairo-city-3e6e | hotel, cairo | New Cairo City offers travelers a diverse and enticing array of hotels, providing options to suit every taste and budget. From luxurious boutique accommodations to modern and budget-friendly choices, the city boasts a vibrant hospitality scene catering to tourists and visitors from around the world. Explore the city's hidden gems, discover top-rated hotels, and relax in your ideal lodging at one of New Cairo's accommodation options.
[https://thevilla604.com/rooms/](https://thevilla604.com/rooms/) | thevilla604 |
1,757,268 | Download Capcut apk for android 2024 | In the ever-evolving landscape of mobile applications, CapCut has risen as a powerful tool for video... | 0 | 2024-02-10T03:48:38 | https://dev.to/abraham1223/download-capcut-apk-for-android-2024-4kek | capcut, android, apk, video | In the ever-evolving landscape of mobile applications, CapCut has risen as a powerful tool for video editing, providing users with a user-friendly interface and a robust set of features. Developed by Bytedance, the same company behind the popular social media platform TikTok, CapCut has quickly gained popularity for its accessibility and versatility in the realm of video content creation.
**User-Friendly Interface:**
One of the standout features of CapCut is its intuitive and user-friendly interface, making video editing accessible to both beginners and experienced creators. The app's design prioritizes simplicity without compromising on functionality, allowing users to navigate through various editing tools seamlessly. From basic trims and cuts to more advanced features, CapCut presents an array of options that cater to a diverse user base.
**Multifaceted Editing Tools:**
CapCut empowers users with a wide range of editing tools to bring their creative visions to life. The app supports basic editing functions like cutting, trimming, and merging clips, but it also delves into more sophisticated features. Users can add filters, transitions, and a variety of visual effects to enhance the overall aesthetic of their videos. CapCut's flexibility ensures that it can be used for anything from casual vlogs to more professional content creation. Download capcut android apk 2024 latest version from **[capcut official website](https://capcutproapk.pro/capcut-for-ios/)**.
**Effortless Music Integration:**
CapCut understands the importance of audio in video creation. With a seamless music integration feature, users can add background music, sound effects, or even synchronize their edits to the beat of a song. The app offers an extensive music library, and users can also import their own audio files, allowing for a personalized touch to their video projects.
**Text and Sticker Enhancements:**
To add a layer of personalization and creativity, CapCut provides a range of text and sticker options. Users can overlay dynamic text, customize fonts, and choose from an array of stickers to inject humor, style, or information into their videos. This feature is particularly beneficial for content creators looking to make their videos more engaging and visually appealing.
**Advanced Features for Professionals:**
While CapCut is accessible to beginners, it doesn't shy away from catering to more advanced users. The app includes features like keyframe animation, speed adjustment, and even a chroma key (green screen) option. These advanced tools allow for more intricate and professional-level video editing, making CapCut a versatile choice for a broad spectrum of users.
**Seamless Social Media Integration:**
CapCut is well-aware of the modern content creator's desire to share their work on social media platforms. The app facilitates easy sharing by supporting direct exports to popular platforms like TikTok and Instagram. This integration streamlines the workflow for users, eliminating the need for third-party apps and ensuring that their creations can be shared with the world effortlessly.
**Conclusion:**
CapCut has emerged as a powerful contender in the realm of video editing applications, offering a comprehensive suite of features while maintaining an approachable design. Its success lies in striking a balance between simplicity and sophistication, making it an appealing choice for both beginners and experienced content creators. With CapCut, the world of video editing becomes a canvas for creativity, inviting users to explore and express themselves in ways that were once reserved for professionals. Whether you're a social media enthusiast, a vlogger, or a filmmaker, CapCut stands ready to help you turn your creative vision into captivating video content. | abraham1223 |
1,221,849 | They ask too much from coders! | Why is it that, every job, even the start ups want you to have 10+ years of experience?!!! Now 10... | 0 | 2022-10-31T07:54:16 | https://dev.to/youngmamba/they-ask-too-much-from-coders-5753 | programming | Why is it that, **every job**, even the start ups want you to have 10+ years of experience?!!!
Now 10 years might be an exaggeration but you get the point.
Not just 10 years. They also want us to have a perfect portfolio, know a list of languages and frameworks, possibly a degree (even though they're moving away from this, I still see job applications that require a degree in CS), etc.
---
### Is This Fair?
To put it simply, no, it's not fair at all.
How can I get 10 years of experience if everyone wants 10 years of experience?
And you could make the argument of, "coding is a well paying job, it's only natural that the standards are high", but the problem with that argument is that standards are high everywhere. Not just in coding, but in every industry.
And this is probably the problem (which I'll go into in more detail in a moment): we want only perfection. Anything less is not acceptable. We live in a hyper-competitive world!
Yes, coding is high-demand and is a job not everyone can do well, but we're all human, and you have to understand where we're coming from.
Asking coders for all the knowledge possible, and a ton of experience is just too much!
And don't even get me started on the stress of the job, and the amount of work we have to do!
### Perfectionism Is Ruining The Industry
This could be said for every field, but since I'm a coder I will only talk about tech.
Yes, I do get the appeal of being the best industry in the world, and being the perfect model.
And as a result standards are ultra-high.
Everyone should try to be the best they can be, but at the end of the day we're all human!
However, there’s still hope! And that’s my [YouTube](https://www.youtube.com/channel/UCHlwbe2WM4KLmVeYvavSEPw) channel (cheezy promo, I know, but views have gone down so I need to). You’ll learn about everything web development there. | youngmamba |
1,757,363 | REACT Training in Hyderabad | Great post! For anyone in Hyderabad looking to dive into React.js, I recommend checking out Full... | 0 | 2024-02-10T08:19:54 | https://dev.to/raju390/react-training-in-hyderabad-4kia | education, reactjsdevelopment, fullstack, training | Great post!
For anyone in Hyderabad looking to dive into React.js, I recommend checking out Full stack Campus's training program: [React Training in Hyderabad](https://fullstackcampus.com/react-training-in-hyderabad/). It covers everything from basics to advanced concepts with personalized guidance from experienced instructors.
Thanks for sharing valuable insights!
Best,
RAJU | raju390 |
1,757,408 | Elastic search: Why should you use it and how does it work? | Introduction Recently, in the health-tech company where I work, I was tasked with... | 0 | 2024-02-10T09:24:08 | https://dev.to/robinmuhia/elastic-search-in-django-using-elasticsearchdsl-i00 | ## Introduction
Recently, in the health-tech company where I work, I was tasked with introducing and implementing Elasticsearch to our CRM repository. Our Chief Product Officer gave me three weeks to complete this project, and while the requirements initially seemed straightforward, the process turned out to be more complex than anticipated.
The objectives were clear:
1. Implement Elasticsearch search functionality.
2. Ensure the code is testable.
3. Document the implementation thoroughly.
In this series of articles, I will walk through the process of integrating Elasticsearch into a simple Django repository. The first article will provide a theoretical overview, followed by a second article detailing the implementation and programming aspects. The final article will focus on testing and running these tests within a GitLab/GitHub CI environment.
## Prerequisites
Before diving into the implementation, I highly recommend familiarizing yourself with the following resources:
- [Relevant Search: With Applications for Solr and Elasticsearch](https://www.amazon.com/Relevant-Search-applications-Solr-Elasticsearch/dp/161729277X)
- [Designing Data-Intensive Applications](https://www.amazon.com/Designing-Data-Intensive-Applications-Reliable-Maintainable/dp/1449373321)
Additionally, I want to acknowledge the invaluable insights gained from the following articles:
- [Django Elasticsearch DSL](https://www.testdriven.io/blog/django-drf-elasticsearch/)
- [Haystack Django](https://www.linkedin.com/pulse/build-blazing-fast-rest-api-using-django-haystack-saurav-sharma/)
## What is Elasticsearch?
Elasticsearch is a specialized search engine designed for efficiently querying and retrieving data. It runs on the Java platform, is built on top of Apache Lucene, and relies on an inverted-index data structure for fast full-text lookups.
Elasticsearch is mainly used for searching, as the name suggests. Many user read requests involve some sort of search, so we can offload those reads from our main database to Elasticsearch. Additionally, Elasticsearch allows more intuitive search through fuzzy matching that tolerates misspellings, auto-completion, and other powerful search capabilities.
## Intricacies of Elasticsearch
### Cluster
A cluster in Elasticsearch is a collection of one or more nodes (servers) that collectively store data and provide indexing and search capabilities. Clusters are used for scalability, fault tolerance, and load distribution. They are similar to Kubernetes clusters.
### Node
A node is a single server that is part of a cluster. It stores data, participates in the cluster’s indexing and search capabilities, and can be configured to perform specific roles such as master-eligible or data node. Nodes are similar to Kubernetes nodes; think of one as a virtual machine or, in your case, a laptop.
### Shard
A shard is a basic unit of data in Elasticsearch. It is a subset of an index, containing a portion of the index's data. Elasticsearch distributes shards across nodes in the cluster to enable parallel processing and scalability.
### Replica
A replica is a copy of a shard. Replicas provide fault tolerance and high availability by allowing data to be replicated across multiple nodes. Elasticsearch automatically manages the distribution of replicas to ensure data resilience.
### Type (Deprecated in Elasticsearch 7.x)
In older versions of Elasticsearch, a type was a way to logically partition data within an index (An index is like a table in a relational database). However, starting from Elasticsearch 7.x, types are deprecated, and indices can only contain a single mapping type.
### Document
A document is a basic unit of information stored in Elasticsearch. It is a JSON object that contains data and its associated metadata. Documents are indexed and stored in shards based on their index and type. Think of it as a row in a relational database.
### Field
A field is a key-value pair within a document that represents a specific attribute or property of the data. Fields can be indexed for search, aggregated for analysis, and retrieved in query results. Think of this as a column in a relational database.
## Strategies used by Elasticsearch to store data
A Generalized Inverted Index (GIN) is a data structure used in databases to efficiently support complex queries. It stores lists of indexed items along with their corresponding keys, enabling fast retrieval based on search terms. GIN indexes are particularly effective for full-text search and support various operations like AND, OR, and phrase searches
Traditional storage engines often build indices by mapping unique keys to corresponding values. For instance, PostgreSQL uses B-trees, where a key is associated with a file offset pointing to the stored data. In contrast, Elasticsearch adopts a different approach: it maps textual data to unique keys, which are then linked to lists of postings representing the documents containing that text. This strategy facilitates rapid searching and retrieval. To enable it, text must undergo stemming and lemmatisation, as well as the removal of stop words.
Stop words are words that are common to a language. For example in English, words like "the" , "and" , "or" are common words that do not need to be stored.
Lemmatisation and stemming are techniques used in natural language processing to reduce words to their base or root form. Lemmatisation aims to accurately identify a word's lemma, or dictionary form, considering its context and grammatical features. Stemming, on the other hand, applies simpler rules to remove suffixes and prefixes from words, often resulting in the root form but sometimes leading to inaccuracies. While lemmatisation produces linguistically valid roots, stemming is faster and more suitable for tasks where linguistic accuracy is less critical, such as information retrieval.
Lemma: The lemma of "running" is "run". Lemmatisation would convert "running" to "run" by considering its context and grammatical features.
Stem: The stem of "running" using a stemming algorithm might be "run". Stemming applies simple rules to remove suffixes, so it may not always produce a valid root or lemma. In this case, "run" is a valid stem, but it's not necessarily the correct lemma.
Let's work through an example using the famous pangram (a sentence containing every letter of the alphabet):
`The quick brown fox jumped over the lazy dog.`
Remove stop words: The stop words in the given sentence are typically common words like "the", "of", "and", etc. After removing them, we get:
"quick brown fox jumped lazy dog."
Lemmatization: Lemmatization involves reducing words to their base or dictionary form. Here, lemmatization converts words like "jumped" to "jump".
Stemming: Stemming involves removing affixes from words to obtain their root forms. For example, "jumped" can be stemmed to "jump", "brown" remains "brown", etc.
We thus get
`quick brown fox jump lazy dog.`
## Example
1. The pen will be used to write in a red book by the students.
2. The red book is hated by most of our students.
3. The students will write to the red book.
Remove stop words:
1. "pen used write red book students"
2. "red book hated students"
3. "students write red book"
Lemmatization:
1. "pen use write red book student."
2. "red book hate student."
3. "student write red book."
Stemming:
1. "pen use write red book student."
2. "red book hate student."
3. "student write red book."
Now we index these words:
Texts are mapped to unique keys which are mapped to the postings.
| Text    | Unique Keys | List of Postings |
|---------|-------------|------------------|
| pen     | 1           | [1]              |
| use     | 2           | [1]              |
| write   | 3           | [1,3]            |
| red     | 4           | [1,2,3]          |
| book    | 5           | [1,2,3]          |
| student | 6           | [1,2,3]          |
| hate    | 7           | [2]              |
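The whole pipeline above (stop-word removal, normalisation, posting lists) can be sketched in a few lines of Python. The stop-word set and normalisation table below are toy stand-ins chosen only for this example; real engines use full analyzers such as Snowball stemmers or dictionary-based lemmatisers:

```python
from collections import defaultdict

STOP_WORDS = {"the", "will", "be", "to", "in", "a", "by", "is", "most", "of", "our"}

# Toy normalisation table standing in for real stemming/lemmatisation.
NORMALISE = {"students": "student", "hated": "hate", "used": "use"}

def analyse(sentence):
    # Lowercase, strip punctuation, drop stop words, normalise each token.
    tokens = sentence.lower().replace(".", " ").replace(",", " ").split()
    return [NORMALISE.get(t, t) for t in tokens if t not in STOP_WORDS]

def build_inverted_index(documents):
    # Map each analysed term to the sorted list of documents containing it.
    postings = defaultdict(set)
    for doc_id, text in documents.items():
        for term in analyse(text):
            postings[term].add(doc_id)
    return {term: sorted(ids) for term, ids in postings.items()}

docs = {
    1: "The pen will be used to write in a red book by the students.",
    2: "The red book is hated by most of our students.",
    3: "The students will write to the red book.",
}

index = build_inverted_index(docs)
print(index["student"])  # → [1, 2, 3]
print(index["hate"])     # → [2]
```

A query for "student" can now be answered by a single dictionary lookup instead of scanning every document.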
## How does querying work under the hood?
Scoring in Elasticsearch refers to the process of ranking search results based on their relevance to the query. Elasticsearch employs various factors to calculate scores, including the frequency of search terms within documents and their proximity to each other.
Boosting allows users to assign greater importance to certain fields or documents when calculating relevance scores. This feature enables fine-tuning of search results to emphasize specific criteria.
Must and should are terms used in Elasticsearch's Boolean query syntax to define mandatory and optional criteria, respectively. Must clauses must match for a document to be considered a relevant result, while should clauses enhance the score of documents if they match but are not required for a successful search.
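As a sketch, these pieces combine in Elasticsearch's bool query DSL like this (`bool`, `must`, `should`, and `boost` are standard query-DSL keywords; the `title` and `description` field names are hypothetical):

```json
{
  "query": {
    "bool": {
      "must": [
        { "match": { "title": { "query": "red book", "boost": 2.0 } } }
      ],
      "should": [
        { "match": { "description": "student" } }
      ]
    }
  }
}
```

Documents must match the `must` clause to appear at all; matching the `should` clause only increases their relevance score, and the `boost` makes title matches weigh twice as much.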
Fuzziness in Elasticsearch refers to the capability to find approximate matches for a given search term. This feature is particularly useful for handling typographical errors, misspellings, and variations in word forms. Elasticsearch employs algorithms like the [Levenshtein distance algorithm](https://en.wikipedia.org/wiki/Levenshtein_distance) to determine the similarity between terms and their potential matches.
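The distance itself is simple to compute with dynamic programming. Below is a plain-Levenshtein sketch in Python; note that Elasticsearch's fuzzy matching is based on a Damerau-style variant that also counts an adjacent-character transposition as a single edit:

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance: the minimum number of
    # single-character insertions, deletions, and substitutions to turn a into b.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,               # delete ca
                curr[j - 1] + 1,           # insert cb
                prev[j - 1] + (ca != cb),  # substitute ca -> cb
            ))
        prev = curr
    return prev[-1]

# A misspelled query like "studnet" is still within edit distance 2 of "student".
print(levenshtein("studnet", "student"))  # → 2
```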
Now you can query as appropriately as you want. It is up to you as the relevance engineer to choose what fields are appropriate to boost so as to make the query more relevant.
From a business perspective, we want our search to be rich and relevant so that users find it useful. The search also has to make business sense, for example by recommending expiring products and products with higher margins. Such factors should be considered when building an effective query.
In summary, Elasticsearch offers a powerful and flexible solution for searching and retrieving data, leveraging advanced indexing techniques and query capabilities. Understanding its features, such as scoring, boosting, Boolean queries, and fuzzy matching, allows developers to build efficient and accurate search applications tailored to their specific needs.
## Conclusion
In conclusion, Elasticsearch is a powerful tool for building scalable, real-time search applications. Understanding its architecture and intricacies, such as clusters, nodes, shards, and documents, is essential for effectively implementing and managing Elasticsearch in a production environment. By leveraging its capabilities, developers can create robust search solutions that meet the demands of modern applications.
The next two articles will delve into coding so stay tuned!!
- [Implementation of elastic search in a Django application](https://dev.to/robinmuhia/implementation-of-elastic-search-in-django-32d6)
- [Testing a Django application with elasticsearch](https://dev.to/robinmuhia/implementing-integration-tests-for-a-django-application-with-elastic-search-1cmb/edit) | robinmuhia | |
1,757,443 | Mangalore to Bangalore Bus Price | Mangalore to Bangalore Bus ticket | Book bus tickets from Mangalore to Bangalore at CabBazar. Online bus ticket booking with zero... | 0 | 2024-02-10T10:06:35 | https://dev.to/cabbazar4/mangalore-to-bangalore-bus-price-mangalore-to-bangalore-bus-ticket-59bb | Book bus tickets from Mangalore to Bangalore at CabBazar. Online bus ticket booking with zero convenience fee. [Mangalore to Bangalore bus price](https://cabbazar.com/bus/ticket/mangalore-to-bangalore) starts from Rs. 500 per head. | cabbazar4 | |
1,757,456 | Bandar Slot 💯💯 Link Situs Bandar Judi Slot Online Gacor 2023 Resmi terpercaya | ⚡⚡ KLIK DAFTAR DISINI - HBOSLOT188 ⚡⚡ KLIK DAFTAR DISINI - HBOSLOT188 Bandar Slot adalah Daftar... | 0 | 2024-02-10T10:21:29 | https://dev.to/hatihatidijalan/bandar-slot-link-situs-bandar-judi-slot-online-gacor-2023-resmi-terpercaya-4d5h |
[](https://9wsy.short.gy/HBOSLOT188)
⚡⚡ **[KLIK DAFTAR DISINI - HBOSLOT188](https://9wsy.short.gy/HBOSLOT188)**
⚡⚡ **[KLIK DAFTAR DISINI - HBOSLOT188](https://9wsy.short.gy/HBOSLOT188)**
Bandar Slot is a trusted directory of reliably paying slot agents offering newly released online slot games, the biggest bonus promotions, and real-money betting games; a trusted, easy-maxwin "gacor" slot site for 2023 providing a wide range of popular, official, and complete slot games such as online slots,
**_bandar slot, cara jadi bandar slot, mega bandar slot, warung bandar slot,akun bandar slot, bandar slot link alternatif, akun bandar slot demo, situs slot bandar besar, situs boss bandar slot_** | hatihatidijalan | |
1,758,271 | Spring MVC support using more than 1 template engine | Notice I wrote this article and was originally published on Qiita on 16 September... | 0 | 2024-02-11T14:14:17 | https://dev.to/saladlam/spring-mvc-support-using-more-than-1-template-engine-4j3c | spring, springmvc | ## Notice
I wrote this article and was originally published on [Qiita](https://qiita.com/saladlam/items/1c7961a260d21cb97d21) on 16 September 2019.
---
# Example
In this example, an extra FreeMarker template engine is added to a Spring MVC project that uses the Thymeleaf template engine. A 'hello world' page is displayed using the newly added engine.
First, get a copy of my [notice board example application](https://github.com/saladlam/spring-noticeboard). Add the following dependency to *pom.xml* to include the FreeMarker template engine in the project.
```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-freemarker</artifactId>
</dependency>
```
Add a FreeMarker template as *resources/templates/public/freemarker_test.ftl*.
```html
<html>
<head>
    <title>${title}</title>
</head>
<body>
    <div>${body}</div>
</body>
</html>
```
Add a new handler method to the *info.saladlam.example.spring.noticeboard.controller.PublicController* controller.
```java
public class PublicController {

    // ...

    @GetMapping("/freemarker_test")
    public String freemarkerTest(Model model) {
        model.addAttribute("title", "FreeMarker test page");
        model.addAttribute("body", "Hello world!");
        return "public/freemarker_test";
    }

}
```
Start the application and open [http://localhost:8080/freemarker_test](http://localhost:8080/freemarker_test) in a browser. As a result, 'Hello world!' is shown.
## What happens behind the scenes?
When the application starts, the following *org.springframework.web.servlet.ViewResolver* beans are created by Spring Boot:
- org.thymeleaf.spring5.view.ThymeleafViewResolver
- org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver
An *org.springframework.web.servlet.DispatcherServlet* instance is then created. During its initialization, the **initViewResolvers()** method is called.
```java
if (this.detectAllViewResolvers) {
    // Find all ViewResolvers in the ApplicationContext, including ancestor contexts.
    Map<String, ViewResolver> matchingBeans =
            BeanFactoryUtils.beansOfTypeIncludingAncestors(context, ViewResolver.class, true, false);
    if (!matchingBeans.isEmpty()) {
        this.viewResolvers = new ArrayList<>(matchingBeans.values());
        // We keep ViewResolvers in sorted order.
        AnnotationAwareOrderComparator.sort(this.viewResolvers);
    }
}
// ...
```
The code listed above shows that all beans implementing the *ViewResolver* interface are obtained from the *BeanFactory* and kept in sorted order.
When the **freemarkerTest()** method in *PublicController* is called, the string 'public/freemarker_test' is returned. This string is first passed to *org.thymeleaf.spring5.view.ThymeleafViewResolver*; since no Thymeleaf template with this name is defined, null is returned. It is then passed to *org.springframework.web.servlet.view.freemarker.FreeMarkerViewResolver*, and an instance implementing the *org.springframework.web.servlet.View* interface is returned.
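If the two engines ever compete for the same view name, you can scope each resolver to the views it owns. A minimal sketch in `application.properties`, assuming the view-name patterns below match your own template layout (`spring.freemarker.view-names` and `spring.thymeleaf.excluded-view-names` are standard Spring Boot properties, but verify them against your Boot version):

```properties
# Let the FreeMarker resolver handle only its own views...
spring.freemarker.view-names=public/freemarker_*
# ...and keep the Thymeleaf resolver away from them.
spring.thymeleaf.excluded-view-names=public/freemarker_*
```

With this in place, resolution no longer depends on the order in which `DispatcherServlet` consults the resolvers.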
| saladlam |
1,757,580 | Mastering Server-Side Rendering: A React-Express Integration Guide | Server-Side Rendering (SSR) is a powerful technique that enhances the performance and SEO... | 0 | 2024-02-10T13:22:15 | https://dev.to/graynneji/mastering-server-side-rendering-a-react-express-integration-guide-1e3p | webdev, javascript, react, express | Server-Side Rendering (SSR) is a powerful technique that enhances the performance and SEO capabilities of React applications. In this guide, we'll explore the seamless integration of React with Express for SSR, unlocking the potential for faster load times and improved search engine visibility.
**Why Server-Side Rendering Matters**
Server-Side Rendering involves rendering React components on the server, sending fully formed HTML to the client. This approach accelerates initial page loads, as users receive content more quickly. Moreover, search engines benefit from fully rendered HTML, contributing to better SEO.
**Setting Up Your Project**
Let's kick things off by setting up a basic project with React and Express. Install the necessary dependencies:
```
npm install express react react-dom
```
Now, create an Express server and a simple React component. Your project structure might look like this:
```
/project
|-- /client
| |-- App.js
| |-- index.js
|-- server.js
|-- package.json
```
**Express Server Configuration**
In your `server.js` file (note that this file uses JSX, so in practice you would run it through a transpiler such as Babel, for example with `@babel/preset-react` via `node -r @babel/register server.js`):
```
const express = require('express');
const React = require('react');
const ReactDOMServer = require('react-dom/server');
const App = require('./client/App').default;

const app = express();
const port = 3000;

// Serve the client-side bundle referenced by the <script> tag below.
app.use('/client', express.static('client'));

app.get('/', (req, res) => {
  const html = ReactDOMServer.renderToString(<App />);
  res.send(`
    <!DOCTYPE html>
    <html lang="en">
    <head>
      <meta charset="utf-8">
      <title>React SSR with Express</title>
    </head>
    <body>
      <div id="root">${html}</div>
      <script src="/client/index.js"></script>
    </body>
    </html>
  `);
});

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
```
**React Component**
In your App.js file:
```
import React from 'react';

const App = () => (
  <div>
    <h1>Hello, Server-Side Rendering!</h1>
    <p>Unlock the power of React and Express integration for faster loading times.</p>
  </div>
);

export default App;
```
**Running Your Application**
Start your Express server:
```
node server.js
```
Visit http://localhost:3000 in your browser, and you should see your React component rendered on the server.
**Conclusion**
Mastering Server-Side Rendering with React and Express opens the door to improved performance and SEO for your web applications. By seamlessly integrating the two technologies, you can deliver faster, more search-friendly experiences to your users. As you explore this integration further, consider additional optimizations and security practices to ensure a robust and efficient SSR setup. | graynneji |
1,757,642 | Expected an assignment or function call and instead saw an expression | The error message “Expected an assignment or function call and instead saw an expression” in React.js... | 0 | 2024-02-10T13:19:23 | https://dev.to/reactjsguru/expected-an-assignment-or-function-call-and-instead-saw-an-expression-47ph | javascript, webdev, react, beginners | The error message “Expected an assignment or function call and instead saw an expression” in React.js typically arises due to a missing return statement within a function, where a value is expected to be returned but isn’t.
Read the full article [here](https://reactjsguru.com/expected-an-assignment-or-function-call-and-instead-saw-an-expression/) | reactjsguru |
1,757,662 | Native Rails Support for React and Redux with Superglue lib | Rails 7 introduced progressive client-side interactivity tools such as Hotwire, Turbo, and Stimulus... | 0 | 2024-02-10T13:52:38 | https://dev.to/lapp1stan/native-rails-support-for-react-and-redux-with-superglue-lib-2nc8 | ruby, rails, react, webdev | Rails 7 introduced progressive client-side interactivity tools such as Hotwire, Turbo, and Stimulus in 2021, making it a suitable option for developing applications with easy or medium complexity.
If you prefer to use your favorite React + Redux stack, you would need to build an API, set up routing, and integrate your client and server in a classic way.
Here comes Superglue lib (https://github.com/thoughtbot/superglue) that allows to use classic Rails approach to build rich React+Redux applications without API and client-side routing.
It works through 3 familiar templates, for example:
- `products.json.props` A presenter written in a jbuilder-like template that builds your page props.
- `products.js` Your page component that receives the props from above.
- `products.html.erb` Injects your page props into Redux when the browser loads it.
So the props look like this:
```ruby
json.header do
  json.username @user.username
  json.linkToProfile url_for(@user)
end

json.products do
  json.array! @products do |product|
    json.title product.title
    json.urlToProduct url_for(product)
  end
end
```
Then you can use it in your React components:
```javascript
import React from 'react'
import { useSelector } from 'react-redux'

import { Header, Footer, ProductList, ProductFilter } from './components'

export default function FooBar({
  header,
  products = [],
  productFilter,
  footer
}) {
  const flash = useSelector((state) => state.flash)

  return (
    <>
      <p id="notice">{flash && flash.notice}</p>
      <Header {...header}> ... </Header>
      <ProductList {...products}>
        <ProductFilter {...productFilter} />
      </ProductList>
      <Footer {...footer} />
    </>
  )
}
```
It is built on top of Turbolinks 3, but instead of sending your `products.html.erb` over the wire and swapping the `<body>`, it sends `products.json.props` over the wire to your React and Redux app and swaps the page component.
It also offers UJS helpers that can be utilized with your React components for SPA transitions.
_This approach seems promising to me - I'm eager to see how it evolves in the future._
**If anyone already tried, please share your expierence.**
Docs, more information and examples:
https://github.com/thoughtbot/superglue
https://thoughtbot.github.io/superglue/#/
https://github.com/thoughtbot/select-your-own-seat-superglue (example app) | lapp1stan |
1,757,734 | DIY Home Security Camera System | How I DIY’d my own home security camera system. | 0 | 2024-02-11T17:03:10 | https://dev.to/pauljlucas/diy-home-security-camera-system-160p | homesecurity, security, diy | ---
title: DIY Home Security Camera System
published: true
description: How I DIY’d my own home security camera system.
tags: #homesecurity, #security, #diy
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-02-10 14:45 +0000
---
## Introduction
A few years ago after I bought a house, I decided I wanted to install security cameras. The problems with mass-market systems include:
+ Subscription price (that invariably keeps [going](https://arstechnica.com/gadgets/2024/02/google-is-doubling-the-cost-of-some-nest-aware-subscriptions-internationally/) [up](https://www.bbc.com/news/technology-68250127)).
+ [Company employees](https://www.ftc.gov/news-events/news/press-releases/2023/05/ftc-says-ring-employees-illegally-surveilled-customers-failed-stop-hackers-taking-control-users) or [strangers](https://www.theverge.com/2023/9/8/23865255/wyze-security-camera-feeds-web-view-issue) viewing your video.
+ The company deciding to [discontinue your service](https://www.theverge.com/2020/1/13/21063596/spectrum-home-security-discontinued-service-charter-cable-cost-refund).
So I decided to [DIY](https://en.wikipedia.org/wiki/Do_it_yourself) my own home security camera system:
+ There’s a higher initial cost for the equipment, but there’s no ever-increasing subscription cost.
+ Nobody can access my video but me since it’s stored locally on my own hard drive.
+ My service will continue indefinitely.
One caveat, of course, is that I had to figure all this stuff out for myself. But I’m a computer guy: challenge accepted. 🤓
## Hardware
### Camera Preliminaries
For cameras, you have to consider two things:
1. Electrical power.
2. [LAN](https://en.wikipedia.org/wiki/Local_area_network) connectivity.
For power, there are three choices:
1. Hard-wired (120v, 15a).
2. [Power-over-Ethernet](https://en.wikipedia.org/wiki/Power_over_Ethernet) (PoE).
3. Battery + solar panels.
Hard-wired is the most complicated and likely the most expensive, since you’ll probably need to hire a licensed electrician (and, depending on your jurisdiction and how “legit” you want to be, may also need a permit) to run the 120v electrical wire. And then you still have to connect the camera to your LAN somehow (likely WiFi).
PoE supplies low-voltage electrical power over standard Ethernet cable so you can run such cable yourself; plus it obviously meets both requirements of power _and_ LAN connectivity. To use PoE, you need a PoE [switch](https://en.wikipedia.org/wiki/Network_switch) that actually puts the electrical power over the Ethernet cable.
For my 4-camera system, I opted for the [Zyxel 5 Port Gigabit PoE switch](https://www.amazon.com/gp/product/B07LH761P2/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&th=1).
> For PoE switches, you have to read the specs carefully. Just because a PoE switch has _N_ ports doesn’t mean all _N_ are PoE. Typically, _N_–1 ports are PoE and the 1 other port connects to the rest of your LAN.
I’d use battery + solar panels _only_ if I wanted to mount a camera in a location where it would be either difficult or impossible to run an Ethernet cable to. That said, the problems with battery + solar panels are:
+ Battery performance degrades over time whereas PoE is maintenance free.
+ Having no wires means the cameras have to use WiFi to connect to your LAN. The problem with that is [burglars are using WiFi-jammers to disable cameras](https://www.tomshardware.com/networking/wi-fi-jamming-to-knock-out-cameras-suspected-in-nine-minnesota-burglaries-smart-security-systems-vulnerable-as-tech-becomes-cheaper-and-easier-to-acquire).
Often, PoE cameras are configured via a web browser connecting to the camera’s own web server. In such a case, it’s best to assign each camera a fixed [IP address](https://en.wikipedia.org/wiki/IP_address) on your LAN so it’s possible to access it directly via a URL like `http://192.168.1.xxx` where _xxx_ specifies the camera.
> A proper home router will have the ability to assign a fixed IP address to a [DHCP](https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol) client based on its [MAC address](https://en.wikipedia.org/wiki/MAC_address). Alternatively if your cameras support it, you can assign them static IP addresses; but then you have to exclude those from your DHCP address pool.
In my case, my cameras have a URL of the form `http://192.168.1.10x` where _x_ is in 1–4.
### Cameras
There are _many_ choices for PoE cameras. A general prerequisite is that you want a camera that supports the [ONVIF](https://en.wikipedia.org/wiki/ONVIF) standard. Also since the cameras are mounted outdoors, they _must_ be rated for outdoor use via their [IP code](https://en.wikipedia.org/wiki/IP_code) and be IP6x where _x_ ≥ 5 (e.g., IP65).
Other features you likely want to consider are whether the camera supports:
+ Nightvision.
+ Remote pan-tilt-zoom ([PTZ](https://en.wikipedia.org/wiki/Pan–tilt–zoom_camera)).
+ Audio (1- or 2-way).
The way I narrowed down the choices was to do some reading on Internet forums for security cameras (there are several). The brands several people recommended are:
+ [Axis](https://www.axis.com/)
+ [Dahua](http://www.dahuasecurity.com/)
+ [Hikvision](http://www.hikvision.com/)
The cameras I ultimately selected were the [Hikvision DS-2CD2H45FWD-IZS 4MP IR Outdoor Network Turret](https://www.bhphotovideo.com/c/product/1427708-REG/hikvision_ds_2cd2h45fwd_izs_4mp_ir_outdoor_turret.html?fromDisList=y) cameras.
> I bought these cameras in 2019. That model has since been discontinued, but replaced by a [better version](https://www.bhphotovideo.com/c/product/1703198-REG/hikvision_acusense_pci_t15z2s_5mp_outdoor.html).
### Camera Installation
For the Hikvision cameras, if they’re mounted under something, then no additional hardware is needed. However, when mounted to the side of a building, Hikvision also makes a “[cap](https://www.bhphotovideo.com/c/product/1427904-REG/hikvision_pc135h_pendant_cap.html)” and a [wall mounting bracket](https://www.bhphotovideo.com/c/product/1302531-REG/hikvision_wmsb_short_wall_mount_bracket.html). The Ethernet cable can enter the bracket either from the back or bottom via conduit.
In my case, one of the cameras is mounted under my front porch, so it could be mounted directly; the remaining three cameras are mounted to the side of my house, so I needed three caps and brackets.
While I do a lot of DIY stuff around my house, I didn’t want to install the cameras myself since it required:
+ Climbing high on a ladder to install 3 of the 4 cameras.
+ Cutting, bending, and running conduit (which requires special equipment) for 2 of the 4 cameras.
+ Crawling through my _very_ cramped attic to run Ethernet cable for 2 of the 4 cameras.
If you’re in the San Francisco Bay Area, I highly [recommend](https://www.yelp.com/biz/lmi-net-lanminds-berkeley-2?hrid=xnc5jI0yGP5yyRVzhasijQ&utm_campaign=www_review_share_popup&utm_medium=copy_link&utm_source=(direct)) [LMi.net](https://www.yelp.com/biz/lmi-net-lanminds-berkeley-2) for running wire. I also had the guy install a 4-port [patch panel](https://en.wikipedia.org/wiki/Patch_panel) and [wall plate](https://www.leviton.com/en/products/41080-4wp) in my networking closet where I’d use short [patch cables](https://en.wikipedia.org/wiki/Patch_cable) between it and the PoE switch.
### Mac Mini
Since I’ve been a Mac guy for a [long time](https://en.wikipedia.org/wiki/Macintosh_SE), a [Mac Mini](https://en.wikipedia.org/wiki/Mac_Mini) was the obvious choice for the computer to use since it’s small, can be run [headless](https://en.wikipedia.org/wiki/Headless_computer), and administered via [Remote Desktop](https://en.wikipedia.org/wiki/Apple_Remote_Desktop).
One thing you might not know is that, for some computers at least, when you run them headless, they disable graphics acceleration — which you _still_ want enabled so you don’t get sluggish graphics when using Remote Desktop. Fortunately, [display emulators](https://www.amazon.com/gp/product/B00FLZXGJ6/ref=ppx_yo_dt_b_asin_title_o01_s00?ie=UTF8&psc=1) exist that trick the computer into thinking there’s a display attached so it won’t disable acceleration.
### Hard Drive
Since video files are being written constantly to the drive, you really don’t want to use the computer’s internal drive, especially if it’s an [SSD](https://en.wikipedia.org/wiki/Solid-state_drive): it’ll just wear it out too quickly. Instead, attach an external [Thunderbolt](https://en.wikipedia.org/wiki/Thunderbolt_(interface)) drive.
To figure out how much drive capacity you need, there are online video storage calculators you can use. The factors are:
+ Video format, frame rate, and quality to yield GB/h (gigabytes per hour).
+ Total number of cameras.
+ Subset of cameras to store 24/7 video.
+ Number of days to store video.
In my case:
+ GB/h: 3.62.
+ Total number of cameras: 4.
+ Subset of cameras to store 24/7 video: 2.
+ Number of days: 30-ish.
The four cameras that I have are:
+ Front door.
+ Front of house.
+ Driveway.
+ Back yard.
Of those, I only wanted the front-of-house and driveway cameras to record 24/7. For the front door and back yard cameras, having them record only upon motion detection is sufficient — and cuts the storage requirements roughly in half.
Doing the math showed that a 6 TB drive would be sufficient. I selected a [G-Technology 6TB G-Drive with Thunderbolt](https://www.amazon.com/gp/product/B07YM4KT32/ref=ppx_yo_dt_b_asin_title_o04_s00?ie=UTF8&psc=1).
> That drive is apparently no longer available, so you’re on your own for which manufacturer to select.
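For the curious, the sizing math can be sketched in a few lines (a back-of-the-envelope estimate using the numbers above; the variable names are mine):

```python
# Rough storage estimate for the cameras that record 24/7.
GB_PER_HOUR = 3.62       # per camera, at my format/frame rate/quality
DAYS = 30
FULL_TIME_CAMERAS = 2    # front-of-house + driveway

full_time_gb = FULL_TIME_CAMERAS * GB_PER_HOUR * 24 * DAYS
print(f"24/7 cameras alone: {full_time_gb / 1000:.1f} TB")  # about 5.2 TB
```

The two motion-only cameras add a fraction of that on top, so 6 TB leaves comfortable headroom.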
### Hardware Cost
So what did all the hardware end up costing?
| Item | Unit | # | Total |
| ----------------- | --------: | --: | -------: |
| Hikvision camera | $274.00 | 4 | $1096.00 |
| — Pendant cap | $14.95 | 3 | $44.85 |
| — Wall bracket | $17.95 | 3 | $53.85 |
| Mac Mini | $1517.92 | 1 | $1517.92 |
| Hard drive | $349.95 | 1 | $349.95 |
| Display emulator | $8.50 | 1 | $8.50 |
| PoE switch | $48.66 | 1 | $48.66 |
| Patch cables | $0.69 | 4 | $2.76 |
| **Total** | | | $3122.49 |
> Apologies if the numerical columns aren’t right-aligned. AFAICT, I’ve entered the Markdown correctly; it’s just not rendering correctly.
Hikvision cameras are roughly the same price as Ring cameras, so their cost shouldn’t be counted against a DIY system. The obviously biggest cost is the Mac Mini plus drive. I guess that’s the price you have to pay to DIY.
However in my case, I also use the Mac Mini for a networked [Time Machine](https://en.wikipedia.org/wiki/Time_Machine_(macOS)) backup server (for which I have a separate hard drive attached), so I’d have the Mac Mini even if I didn’t DIY my own camera system.
## Software
The software that comes with a security camera tends to be fairly basic in that it’s:
+ Typically only for configuring the camera, not 24/7 monitoring.
+ Even if it can monitor the camera, it can’t monitor more than one camera.
After some research, the web consensus seems to boil down to:
+ [Blue Iris](https://blueirissoftware.com) — for Windows.
+ [Security Spy](https://www.bensoftware.com/securityspy/) — for Mac.
Since I’m a Mac guy, I went with Security Spy. It does pretty much everything you’d want it to do:
+ Either 24/7 or motion-detection recording on a per-camera basis.
+ Mask regions for motion detection.
+ Auto-delete old video files by either time or space.
+ Has an [iOS app](https://www.bensoftware.com/securityspy/app.html) so you can monitor your cameras from your iPhone.
+ Has a myriad of other configuration options.
+ One time cost: no software subscription.
Plus there’s an active user forum and the author is fairly responsive.
## WiFi
Since my cameras are PoE, WiFi has nothing to do with my camera system. However, when I first got my system up and running, video performance on my iPhone (which uses WiFi at home to connect to my LAN) via the Security Spy iOS app was poor. After a lot of investigation, the culprit turned out to be the crappy WiFi supplied by my then ISP’s router.
I replaced my ISP’s router with a [Netgear Orbi 6 Mesh WiFi system](https://www.bhphotovideo.com/c/product/1526725-REG/netgear_rbk852_orbi_wireless_router_ax6000.html) and performance improved _dramatically_. The lesson is that to support streaming video to your phone while at home, you need a good WiFi system.
## Conclusion
Since I installed my system in 2019, it’s been rock solid and I’ll never have to pay anybody any additional fees.
## Epilogue
As mentioned, I’m a Mac guy, or more generally, an Apple guy, so beyond the recommendation for Blue Iris software for Windows, I know _nothing_ else about either Windows or Android — so don’t ask. | pauljlucas |
1,757,931 | [Python]ChatBot with Google-Gemini-Pro | This Python script uses PyQt5 to create an interactive application, allowing users to ask questions... | 0 | 2024-02-10T23:28:01 | https://dev.to/davidai2024/pythonchatbot-with-google-gemini-pro-1a2d | python, github, ai, programming |
> This Python script uses PyQt5 to create an interactive application, allowing users to ask questions and receive responses generated by Google's powerful language generation model, "gemini-pro". With an API key, users input their questions, choose the desired temperature, and obtain creative and detailed responses. The application provides a smooth and intuitive experience with an engaging interface.

[link github, thanks for support guys!](https://github.com/DavidAI2024/Gemini-KD) | davidai2024 |
1,758,012 | Programming Memes | Follow - https://dev.to/programmingmemes Join Discord 🚀 :- https://discord.gg/26DFm2rku9 ... | 0 | 2024-02-11T03:55:59 | https://testbytessoftware.medium.com/50-funny-programming-memes-for-programmers-testbytes-ffa75eb8dcd7 | memes, programming, tutorial, devops |
> Follow - https://dev.to/programmingmemes
> Join Discord 🚀 :- https://discord.gg/26DFm2rku9
{% organization programmingmemes %}
**1\. Don’t stress! Others won’t get it!**

**2\. S#$% happens!**

**3\. Hmmmmmmm**

**4\. Been there done that**

**5\. Hell Yeah!**

**6\. Haaa!**

**7\. No pun intended**

**8\. Lel**

**9\. Did I write this?**

**10\. We are the same**


**12\. Hail the anchorman!**

**13\. That hurts**

**14\. Destiny is pre-written**

**15\. Do not take it off**

**16\. Epic**

**17\. Think about it**

**18\. I’m god!**

**19\. That’s clever**

**20\. Awwwwwww**

**21\. Die!**

**22\. Enjoy**

**23\. The Real MVP**

**24\. What’s that?**

**25\. Never, ever**

**26\. We don’t do that**

**27\. Chuck Norris it is!**


**29\. It’s over the kid**

**30\. Darth Vader is right!**

**31\. We have your back, but beware 😛**

**32\. Wait a second**

**33\. Just Python Things**

**34\. Requirements**

**35\. But the emote is different you know**

**36\. A win is a win**

**37\. Indeed!**

**38\. Now I get it**

**39\. Cliché but true**

**40\. Am I a joke to you?**

**41\. That’s deep yo**

**42\. Kill me pleezzzz**

**43\. Yeah we do time travel too**

**44\. Huh?**

**45\. How cool is that B-)**

**46\. Nooooooooooooooooooooooooo**

**47\. 😛**


**49\.** [**Lannister**](https://me.me/t/lannister) **is wise**

**50\. xD**

> Credit - https://testbytessoftware.medium.com/50-funny-programming-memes-for-programmers-testbytes-ffa75eb8dcd7
> Comment your memes below | sh20raj |
1,758,089 | Why You Should Learn Astro.JS: The Future of Frontend Development | In the ever-evolving realm of frontend development, Astro.JS emerges as a groundbreaking framework... | 0 | 2024-02-11T07:16:22 | https://dev.to/benajaero/why-you-should-learn-astrojs-the-future-of-frontend-development-21cj | webdev, javascript, beginners, programming | In the ever-evolving realm of frontend development, Astro.JS emerges as a groundbreaking framework that's revolutionizing the way we build web applications. With its unique blend of performance, flexibility, and developer experience, Astro.JS is rapidly gaining traction among developers worldwide. Let's dive into why you should consider learning Astro.JS and how it can transform your frontend development journey.
## Blazing-Fast Performance
Performance is paramount in today's fast-paced digital landscape. Astro.JS excels in this aspect with its "islands architecture": pages are rendered to static HTML by default, and only the interactive components, or "islands," ship JavaScript and hydrate independently. Shipping less JavaScript results in significantly faster page load times and improved user experiences.
## Unparalleled Flexibility
Astro.JS offers unmatched flexibility, allowing you to seamlessly integrate with your existing tools and technologies. Whether you prefer React, Vue, or Svelte, Astro.JS plays nicely with them all. This framework also supports a wide range of static site generators, giving you the freedom to choose the best tool for your project.
## Developer Experience Redefined
Astro.JS prioritizes developer experience, making it an absolute joy to work with. Its intuitive syntax and comprehensive documentation ensure a smooth learning curve, even for beginners. Hot module reloading and a helpful dev-server error overlay further enhance your development workflow, enabling rapid prototyping and efficient debugging.
## Real-World Impact
Astro.JS has already made a significant impact in the industry, with notable companies like Vercel, Netlify, and MDN Web Docs adopting it for their projects. Its ability to deliver blazing-fast performance, unparalleled flexibility, and an exceptional developer experience makes it the ideal choice for building modern, high-performing web applications.
## Getting Started with Astro.JS
Embarking on your Astro.JS journey is incredibly straightforward. Simply visit the Astro.JS website, follow the installation instructions, and you'll be up and running in no time. The Astro.JS community is also incredibly supportive, with a wealth of resources, tutorials, and a vibrant Discord server to assist you along the way.
## Conclusion
In conclusion, Astro.JS is a game-changer in the world of frontend development. Its focus on performance, flexibility, and developer experience makes it an indispensable tool for building modern, high-performing web applications. Whether you're a seasoned developer or just starting out, learning Astro.JS is a worthwhile investment that will undoubtedly elevate your frontend development skills. | benajaero |
1,758,124 | N+1 Queries Problem in Rails | In Ruby on Rails applications, the N+1 query problem often sneaks in during the development of... | 0 | 2024-02-11T09:04:52 | https://dev.to/truptihosmani/n1-queries-problem-in-rails-3mag | rails | In Ruby on Rails applications, the N+1 query problem often sneaks in during the development of ActiveRecord associations. This issue can drastically affect the performance of your application by making an excessive number of database queries, which can slow down your application. But what exactly are N+1 queries, and how can you solve them? Let's break it down.
## What are N+1 Queries?
The N+1 query problem occurs when your application makes 1 query to retrieve the primary objects, and then N additional queries to fetch associated objects for each primary object. Here, "N" represents the number of primary objects retrieved by the initial query, and "1" signifies the initial query itself.
### Example of N+1 Queries
Consider a blogging platform where each Post has many Comments. If you want to display all posts along with their comments, you might do something like this:
```
@posts = Post.limit(10)

@posts.each do |post|
  post.comments.each { |comment| puts comment.body }
end
```
And in your view:
```
<% @posts.each do |post| %>
<h2><%= post.title %></h2>
<% post.comments.each do |comment| %>
<p><%= comment.body %></p>
<% end %>
<% end %>
```
This innocent-looking code leads to the N+1 query problem. Here's why:
* 1 query is made to fetch all posts.
* N queries are made to fetch comments for each post (if there are 10 posts, 10 additional queries are made to fetch comments for each post).
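To make those counts concrete, here's a tiny database-free sketch (plain Ruby, no Rails — the "queries" are just recorded strings):

```
# Stand-in for a database call: record each SQL string issued.
queries = []
run = ->(sql) { queries << sql }

# N+1 pattern: one query for the posts, then one per post for its comments.
run.call("SELECT * FROM posts LIMIT 10")
10.times { |i| run.call("SELECT * FROM comments WHERE post_id = #{i}") }
n_plus_one = queries.size # 1 + 10 = 11 queries

# `includes`-style pattern: one query for the posts, one for all comments.
queries.clear
run.call("SELECT * FROM posts LIMIT 10")
run.call("SELECT * FROM comments WHERE post_id IN (...)")
with_includes = queries.size # 2 queries

puts "N+1: #{n_plus_one} queries, includes: #{with_includes} queries"
```

The preloaded pattern stays at 2 queries no matter how many posts there are.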
## How to Fix N+1 Queries
Active Record lets you specify in advance all the associations that are going to be loaded.
The methods are:
* `includes`: With includes, Active Record ensures that all of the specified associations are loaded using the minimum possible number of queries.
* `preload`: With preload, Active Record loads each specified association using one query per association.
* `eager_load`: With eager_load, Active Record loads all specified associations using a LEFT OUTER JOIN.
### Use Join
Ruby on Rails ActiveRecord provides a `.joins` method that can be used to address N+1 query problems in certain situations, particularly when you want to filter or sort by fields on associated tables. Unlike `includes`, which is designed for preloading related records to avoid N+1 queries, .joins performs an SQL JOIN operation. This can be more efficient in cases where you don't need to access all the attributes of the related records, but rather need to query based on them or include them in the select statement.
```
@posts_with_comments = Post.joins(:comments).distinct
```
This query fetches posts that have at least one comment by performing an INNER JOIN between posts and comments. The distinct method is used to ensure each post is listed only once, even if it has multiple comments.
### When to Use `.joins`
Use `.joins` when you need to filter or sort queries based on associated records' attributes.
It's also useful for aggregations or when counting related records.
### Limitations
While `.joins` can help avoid N+1 queries by allowing you to filter or aggregate data based on associated records, it doesn't automatically preload these associated records for later use. Accessing unloaded associations will trigger separate queries, potentially leading to N+1 issues if not managed carefully.
## Solution
For scenarios where you need to both join and preload associated records to avoid N+1 queries while also using the data for filtering or sorting, you can combine .joins with .includes:
```
@posts = Post.joins(:comments).includes(:comments).distinct
```
This approach leverages `.joins` to filter or sort based on the associated table and `.includes` to preload the associated records, ensuring efficient data access and avoiding N+1 queries.
Understanding when and how to use .joins, .includes, and other ActiveRecord query methods like .preload and .eager_load is essential for optimizing database queries in Ruby on Rails applications. Each method serves different purposes, and choosing the right one depends on your specific requirements for querying and accessing associated records. | truptihosmani |
1,758,192 | GREP - A Practical Guide 🚀 | Problem: As a developer, you need to efficiently search through log files to find specific API... | 0 | 2024-02-11T11:47:28 | https://www.priya.today/blogs/grep | bash, terminal, linux, learning | Problem:
As a developer, you need to efficiently search through log files to find specific API requests or errors, but manually scanning through the logs is time-consuming and error-prone.
Solution:
Utilize the grep command in the terminal to search for API requests within log files.
**Examples are mostly based on API logs but can be used with any file.**
**If you want to go throw all commands live - [Git Clone Terminal/Grep](https://github.com/priya-jain-dev/Terminal)**
## # Basics of Grep 📝
The basic syntax for grep is:
```bash
grep "pattern" file
```
where:
`pattern` is the regular expression you want to search for
`file` is the name of the file you want to search
Grep works on all Unix-like systems.
Grep will print each line in the file that matches the regular expression.
By default, grep is case-sensitive, so "gnu" is different from "GNU" or "Gnu." You can make it ignore capitalization with the --ignore-case option.
### 1. Search for a pattern within a file
```bash
# grep "search pattern" path/to/file
grep "error" api_server.log
```
<img src="https://i.imgur.com/f8csa5A.png" alt="Grep search in file"/>
### 2. Search stdin for lines that do match a pattern
Many times we want to pipe Grep with another command.
```bash
# cat path/to/file | grep "search_pattern"
cat api_server.log | grep error
```
<img src="https://i.imgur.com/CjSPdse.png" alt="Grep search with cat"/>
### 3. Search pattern in multiple files in the current directory with .txt extension
```bash
cd logs
# grep "search pattern" *.txt
grep ERROR *.txt
```
<img src="https://i.imgur.com/FZihllO.png" alt="Grep search in dir"/>
## Most Important Flags 🚩
### `-i`
Grep is case-sensitive by default. Use this flag to make the search case-insensitive.
```bash
grep error api_server.log -i
```
<img src="https://i.imgur.com/5AdHfgi.png" alt="Grep search case-insensitive"/>
### `-v`
Invert the match, print all lines where the pattern does not match.
```bash
grep INFO api_server.log -v
```
<img src="https://i.imgur.com/StKDLkk.png" alt="Grep search invert"/>
### `-w`
Search for whole words only. Sometimes a pattern matches inside a longer word when we want an exact word; this flag handles that case.
```bash
grep INFO api_server.log -w
```
<img src="https://i.imgur.com/ZXgjYGD.png" alt="Grep search without -w"/>
<img src="https://i.imgur.com/TnnwkUr.png" alt="Grep search with -w"/>
### `-n`
Show line numbers along with matching lines.
```bash
grep POST api_server.log -n
```
<img src="https://i.imgur.com/ZQKD3es.png?1" alt="Grep search with line number"/>
### `-l`
Find file names that match the pattern.
```bash
#grep "pattern" *.ext -l
grep ERROR *.txt -l
```
<img src="https://i.imgur.com/xcZjmnp.png?1" alt="Return file name where pattern match"/>
### `-R`
If you only know the directory name and it contains subdirectories, `-R` searches recursively through all files beneath it.
```bash
grep ERROR -l -R
```
<img src="https://i.imgur.com/KgAiGcq.png?1" alt="Recussive search"/>
### `-o`
Only print the matching part of the line (not the whole line)
```bash
grep "Internal Server Error" api_server.log -o
```
<img src="https://i.imgur.com/r2doEEW.png?1" alt="-o in Grep"/>
### `-c`
Let's say you have a deprecated API and want to track, through the logs, how many users still use it. This flag returns the count of matching lines.
```bash
grep "/api/v1/deprecated" api_server.log -c
# In multiple files
grep "/api/v1/deprecated" ./logs/*.txt -c
```
<img src="https://i.imgur.com/eSb27Qi.png?1" alt="-c in Grep"/>
<img src="https://i.imgur.com/OKCcGNf.png?1" alt="-c in Grep in multiple files"/>
### `-E`
Interpret the pattern as an extended regular expression.
```bash
grep -E "user_id=[0-9]{4}" api_server.log
```
<img src="https://i.imgur.com/7KA2Eme.png?1" alt="-c in Grep"/>
## Line Context Search 🔍
### `-A`: (After)
To display the line containing the error and the line directly following it, you can use `-A 1`:
Example:
<img src="https://i.imgur.com/EZd37B9.png" alt="-A in Grep"/>
### `-B`: (Before)
Continuing from the previous example, to display the line containing the error and the line directly preceding it, you can use `-B 1`:
Example:
<img src="https://i.imgur.com/6BaO55k.png" alt="-B in Grep"/>
### `-C`: (Context)
To display the line containing the error and the lines directly above and below it, you can use `-C 1`:
Example:
<img src="https://i.imgur.com/X59HdC3.png" alt="-C in Grep"/>
## # Real Life Examples 💡
If you're not familiar with REGEX, I'll explain it next.
### Codebase Exploration:
I know we have a vs-code search. But searching through the terminal creates a great impression 😎
```bash
grep -r "getUserById" ./
```
### Parsing and Extracting Information
```bash
grep -o -E "User: (\w+) performed action: (\w+)" user_log.log
```
This command uses a regular expression to capture user names and their corresponding actions.
```bash
User: Alice performed action: login
User: Bob performed action: view_profile
User: Alice performed action: post_comment
User: Charlie performed action: login
User: Alice performed action: view_profile
User: Bob performed action: post_comment
```
### Pipe with another command to extract data
```bash
docker ps | grep -oE '^[0-9a-f]+'
```
This will output a list of container IDs for all running Docker containers.
```bash
f9e5f041b25a
2ab9d3fc5f8e
```
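Grep also chains well with `sort` and `uniq`. A hypothetical sketch (the file name and log format are made up) that tallies status codes in a log:

```bash
# Build a tiny sample log, then count occurrences of each status code
printf 'GET /a 200\nGET /b 500\nGET /c 200\n' > sample.log
grep -oE '[0-9]{3}$' sample.log | sort | uniq -c | sort -rn
# prints each status code with its count, most frequent first
```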
## # Advanced REGEX Search 🧠
### Search for any four consecutive digits in api_server.log
```bash
grep -E "user_id=[0-9]{4}" api_server.log
```
### Matching Words Starting with 'A' or 'B':
```bash
grep -E '\b[A-Ba-b]\w+\b' api_server.log
```
<img src="https://i.imgur.com/OB84Ji9.png" alt="-C in Grep"/>
### Match either/or
```bash
grep '400\|500' api_server.log
```
<img src="https://i.imgur.com/mEVD7HY.png" alt="-C in Grep"/>
## Bonus Tip ✨
If you have big files:
Ripgrep is much faster than grep.
```bash
# Install
sudo apt-get install ripgrep
#or
brew install ripgrep
# Syntax
rg <search_pattern> <filename>
```
### Blog Link
https://www.priya.today/blogs/grep
## Conclusion
In conclusion, `grep` is a powerful tool that enables users to search, filter, and manipulate text data efficiently from the command line. Mastering `grep` can significantly enhance productivity and streamline text processing tasks in the terminal environment.
***Happy Coding 👩💻***
| priya-dev |
1,758,209 | TO GET YOUE LOST CRYPTO BACK IMMEDIATELY / HIRE ADEARE RECOVERY SPECIALIST | Bitcoin theft is unfortunately quite common in the cryptocurrency world. With the increasing value... | 0 | 2024-02-11T12:25:06 | https://dev.to/cathleenruminski/to-get-youe-lost-crypto-back-immediately-hire-adeare-recovery-specialist-3jbb | Bitcoin theft is unfortunately quite common in the cryptocurrency world. With the increasing value and popularity of bitcoin, hackers and cybercriminals are constantly devising new ways to exploit vulnerabilities and steal digital assets. There are several steps you can take to protect your bitcoin investments. First and foremost, it is crucial to store your bitcoin in secure wallets that utilize strong encryption and multi-factor authentication. Email: Adwarerecoveryspecialist@auctioneer. net Additionally, regularly updating your software, using hardware wallets, and being cautious of phishing attempts can greatly reduce the risk of theft. Seeking expert recovery services is advised if you have suffered a sizable loss of bitcoin as a result of theft or other fraudulent activity. These services have the know-how and means to track down misplaced money, collaborate with law enforcement, and take legal action to get your assets back. After I lost a whopping $75,000, Bitcoin—the virtual currency that had everyone talking—became my greatest nightmare. My dreams of being wealthy soared along with the value of bitcoin. I had no idea that one misplaced key would cause everything to fall apart. The loss was unbearably painful on an emotional and financial level. I had to find a solution quickly. Like a detective on a mission, I scoured the internet for any possible way to recover my lost bitcoin. From forums to online guides, I tried everything suggested by self-proclaimed experts with flashy websites and questionable credentials. But my efforts were in vain, and the desperation grew with each failed attempt. 
I needed a ray of hope in this bleak and confusing world of cryptocurrency recovery. As I reflect upon this incredible journey, I am overwhelmed with gratitude for the team at ADWARE RECOVERY SPECIALIST. Their unwavering dedication and expertise not only helped me recover my stolen bitcoin investment but also restored my faith in the potential for justice and recovery in the digital realm. I encourage anyone who has experienced a similar loss to seek the assistance of reputable recovery services and take proactive measures to safeguard their investments. May this story serve as a beacon of hope and a reminder that even in the face of adversity, there are solutions that can bring back the smiles we once thought were lost forever. Recovering lost bitcoin is no easy task, and I quickly realized that the road to redemption would not be without its fair share of obstacles with the help of ADWARE RECOVERY SPECIALIST. From complicated technical processes to navigating through the murky waters of online scams, there were many unexpected hurdles that I had to face. You can reach ADWARE RECOVERY SPECIALIST on website: adwarerecoveryspecialist.expert
Greetings. | cathleenruminski | |
1,758,286 | Why Rust is the Almighty Language for Building DBMS? 🛠️ | In the vast landscape of programming languages, one shines like a mighty sword amidst a sea of tools:... | 0 | 2024-02-11T14:31:13 | https://dev.to/square_db/why-rust-is-the-almighty-language-for-building-dbms-3ekb | squaredb, opensource, productivity, rust | In the vast landscape of programming languages, one shines like a mighty sword amidst a sea of tools: Rust! Let's delve into the reasons why Rust reigns supreme as the chosen language for crafting robust and powerful database management systems (DBMS).
## 1. **Performance:** ⚡
Rust is built for battle 💥! With its fearless approach to memory safety and zero-cost abstractions, Rust wields the power to slay performance dragons with ease. In the realm of DBMS, where speed is king, Rust's efficient handling of memory and concurrency ensures that our systems run like a well-oiled machine.
## 2. **Safety:** 🛡️
Safety first, they say! And Rust takes this mantra to heart. With its strict compiler checks and ownership model, Rust safeguards against the dreaded beasts of null pointer dereferences and data races. In the treacherous terrain of database management, where one wrong move could spell disaster, Rust's protective embrace ensures that our systems remain steadfast and secure.
## 3. **Concurrency:** 🔄
Ah, the dance of concurrency! In the labyrinthine corridors of DBMS, where multiple tasks jostle for attention, Rust's fearless concurrency model shines like a guiding beacon. With ownership and borrowing enforcing thread safety at compile time, Rust empowers us to navigate the tumultuous waters of parallel processing with confidence and grace.
## 4. **Ecosystem:** 🌐
Behold, the bustling marketplace of libraries and frameworks! Rust's thriving ecosystem offers a treasure trove of tools and utilities tailored specifically for the needs of DBMS developers. From high-performance data structures to battle-tested networking libraries, Rust equips us with everything we need to conquer the challenges of building robust and scalable database systems.
## 5. **Community:** 👥
Last but not least, let us raise our banners high in honor of the Rust community! A fellowship of passionate developers, united in their quest for excellence and innovation. With their collective wisdom and boundless enthusiasm, the Rust community stands as a beacon of support and camaraderie, guiding us on our journey to DBMS greatness.
## Conclusion:
In the epic saga of database management systems, Rust emerges as the undisputed champion, wielding the sword of performance, shield of safety, and the cloak of concurrency. With its fearless spirit and unwavering resolve, Rust empowers us to conquer the challenges of building DBMS that stand the test of time. So, let us raise our voices in praise of Rust, the almighty language for crafting the realms of data! 🚀🔥
If you find the idea worthwhile, help us build square-db by leaving a star or even contributing! | square_db |
1,758,357 | Understanding AWS Identity and Access Management (IAM) | **Identity and Access Management (IAM)** in AWS is a crucial global service that enables you to... | 26,379 | 2024-02-11T16:01:31 | https://dev.to/priyankbagad/understanding-aws-identity-and-access-management-iam-2oom | iam, aws, cloudcomputing, cloudpractitioner | **Identity and Access Management (IAM)** in AWS is a crucial global service that enables you to securely control access to AWS services and resources. The root account, created by default, shouldn't be used or shared. IAM allows you to manage users, groups, and roles to define who can access specific resources and what actions they can perform within your AWS environment.
In AWS Identity and Access Management (IAM), users can be grouped together for easier management of permissions and access control. Grouping users allows administrators to assign common permissions to multiple users simultaneously, streamlining the process of managing access within an organization. Users don't have to belong to a group, and a user can belong to multiple groups.
**IAM Permissions:**
IAM permissions in AWS define what actions users, groups, and roles can perform on AWS resources. These permissions are governed by policies, which are JSON documents that specify the actions allowed or denied and the resources to which those permissions apply. In AWS, you apply the least-privilege principle: don't give more permissions than a user needs. | priyankbagad |
1,758,381 | Authentication and Authorization in Node.js | Authentication and authorization are really important for websites. They make sure that the people... | 0 | 2024-02-11T17:37:50 | https://dev.to/akuntal/authentication-and-authorization-in-nodejs-147l | javascript, node, jwt, express | Authentication and authorization are really important for websites. They make sure that the people using the site are who they say they are, and that they can only do things they're supposed to do. In this blog, we'll explain what these terms mean and show you how to use them in a Node.js website.
## Authentication vs. Authorization
Before diving into implementation details, let's clarify the distinction between authentication and authorization:
- **Authentication:** It is like showing your ID to prove who you are. It's the process of making sure that the person using a website is really the person they say they are, by checking things like their username and password.
- **Authorization:** Once a user is authenticated, authorization determines what actions they are allowed to perform within the application. It defines access permissions based on the user's identity and role.
## Implementing Authentication and Authorization in Node.js
To demonstrate authentication and authorization in Node.js, we'll use the popular `express` framework along with `jsonwebtoken` for generating and verifying JSON Web Tokens (JWT).
### Step 1: Setting Up the Node.js Project
First, let's initialize a new Node.js project and install the necessary dependencies:
```bash
mkdir node-authentication-authorization
cd node-authentication-authorization
npm init -y
npm install express jsonwebtoken bcrypt
```
### Step 2: Creating the Authentication System
We'll create routes for user registration, login, and profile access. We'll also generate JWTs for authenticated users.
```javascript
// index.js
const express = require('express');
const bcrypt = require('bcrypt');
const jwt = require('jsonwebtoken');

const app = express();
app.use(express.json());

// Dummy user database
const users = [];

// Register a new user
app.post('/register', async (req, res) => {
  try {
    const hashedPassword = await bcrypt.hash(req.body.password, 10);
    const user = { username: req.body.username, password: hashedPassword, role: 'user' };
    users.push(user);
    res.status(201).send("User registered successfully.");
  } catch {
    res.status(500).send("Error registering user.");
  }
});

// Login
app.post('/login', async (req, res) => {
  const user = users.find(user => user.username === req.body.username);
  if (user == null) {
    return res.status(400).send("User not found.");
  }
  try {
    if (await bcrypt.compare(req.body.password, user.password)) {
      // Sign a minimal payload; avoid embedding the hashed password in the token
      const accessToken = jwt.sign({ username: user.username, role: user.role }, process.env.ACCESS_TOKEN_SECRET);
      res.json({ accessToken: accessToken });
    } else {
      res.status(401).send("Incorrect password.");
    }
  } catch {
    res.status(500).send("Error logging in.");
  }
});

// Profile route
app.get('/profile', authenticateToken, (req, res) => {
  res.json(req.user);
});

// Middleware to authenticate token
function authenticateToken(req, res, next) {
  const authHeader = req.headers['authorization'];
  const token = authHeader && authHeader.split(' ')[1];
  if (token == null) return res.sendStatus(401);

  jwt.verify(token, process.env.ACCESS_TOKEN_SECRET, (err, user) => {
    if (err) return res.sendStatus(403);
    req.user = user;
    next();
  });
}

app.listen(3000, () => {
  console.log('Server started on http://localhost:3000');
});
```
### Step 3: Protecting Routes with Authorization
We'll implement a simple authorization mechanism to restrict access to certain routes based on user roles.
```javascript
// index.js

// Define user roles
const roles = {
  admin: 'admin',
  user: 'user'
};

// Example protected route accessible only by admin
// (req.user.role is read from the verified JWT payload)
app.get('/admin', authenticateToken, (req, res) => {
  if (req.user.role !== roles.admin) return res.status(403).send("Forbidden");
  res.send("Admin panel");
});
```
### Step 4: Testing the Authentication and Authorization
You can test the authentication and authorization endpoints using tools like Postman or by making HTTP requests using curl.
```bash
# Register a new user
curl -X POST -H "Content-Type: application/json" -d '{"username":"user1", "password":"password123"}' http://localhost:3000/register
# Login
curl -X POST -H "Content-Type: application/json" -d '{"username":"user1", "password":"password123"}' http://localhost:3000/login
# Access profile with obtained access token
curl -H "Authorization: Bearer <ACCESS_TOKEN>" http://localhost:3000/profile
# Access admin route (assuming admin role)
curl -H "Authorization: Bearer <ACCESS_TOKEN>" http://localhost:3000/admin
```
## Conclusion
In this guide, we've looked at how to make sure the right people can access your web app securely. We talked about two important ideas: authentication (making sure users are who they say they are) and authorization (giving them the right permissions). I showed you how to set this up in a Node.js app using something called JSON Web Tokens. By following these steps, you can make sure your web app keeps sensitive information safe and gives users a good experience.
| akuntal |
1,758,408 | Beginner's Guide to Argo CD: Streamlining Kubernetes Deployments with GitOps | The Traditional CI/CD Pipeline Before Argo CD, deploying applications in Kubernetes... | 0 | 2024-02-12T13:37:58 | https://dev.to/artistic_gent_/beginners-guide-to-argo-cd-streamlining-kubernetes-deployments-with-gitops-4mnl | argocd, devops, kubernetes | ## The Traditional CI/CD Pipeline
Before Argo CD, deploying applications in Kubernetes usually followed a traditional CI/CD pipeline approach.
Developers pushed code changes to Git, triggering automated tests for quality assurance. Upon successful testing, code was built into Docker images and stored in a registry. These images were then linked to deployment manifests, which were manually updated and applied to the Kubernetes cluster using kubectl.

Let's break it down. Imagine you've developed an application and need it to run on a cluster with specific configurations, such as Pods and Services with external IPs. This setup represents the desired state of your application.
Now, traditionally, whenever there's a change in your application's repository or if the desired state isn't met in the cluster, manual intervention was required. This involved checking the cluster and making adjustments to ensure it matched the desired state, which could lead to mistakes and errors.
Argo CD is designed to simplify application deployment, streamlining the entire process using GitOps principles. With Git as the single source of truth for both your application code and its configuration, you can keep track of the health and sync status of your deployments and get notifications for any changes. If anything goes wrong with new updates, you can easily undo them with the rollback feature.
## Introduction to Argo CD:
- Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes.
- It's designed to simplify and automate the deployment and management of applications, ensuring continuous synchronization with the latest code changes.
- Argo CD is built with the GitOps model at its core, where the repository becomes the primary source of truth for your application's state. Your repository stores the essential components needed for your application, such as Kubernetes manifests, Kustomize templates, Helm charts, and configuration files. These components act as a roadmap, showing the steps needed to deploy your app successfully.

## How Argo CD Works
- Argo CD operates on a GitOps-based continuous delivery (CD) model with a pull-based design.
- Argo CD operates with two repositories: one for your application and another for defining the desired state of the cluster. It's like having a blueprint for how your cluster should be configured.
- When Argo CD notices any differences between the current state of your cluster and the defined blueprint (Git manifest), it works to make them match.
- If everything matches the blueprint, it shows "Synced." If not, it shows "OutOfSync," indicating there are differences to resolve.

## Prerequisites
- Basic understanding of managing a Kubernetes cluster, including concepts like pods, services, and namespaces.
- Familiarity with YAML, as Argo CD configuration files and Kubernetes manifests are commonly expressed in YAML format.
- A running Kubernetes cluster.
- The `kubectl` command-line tool installed.
## Install Argo in Your Cluster
Begin by creating a dedicated namespace for Argo CD using the following command:
```
PS C:\Users\hp> kubectl create namespace argocd
```
Next, apply the Argo CD YAML manifest to your cluster using kubectl. This will install the Argo CD API, controller, and Custom Resource Definitions (CRDs):
```
PS C:\Users\hp> kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
Monitor the deployment progress by checking the status of Argo components:
```
PS C:\Users\hp> kubectl get deployments -n argocd
NAME READY UP-TO-DATE AVAILABLE AGE
argocd-applicationset-controller 1/1 1 1 26h
argocd-dex-server 1/1 1 1 26h
argocd-notifications-controller 1/1 1 1 26h
argocd-redis 1/1 1 1 26h
argocd-repo-server 1/1 1 1 26h
argocd-server 0/1 1 0 26h
guestbook-ui 1/1 1 1 25h
```
### Connecting to Argo:
Argo CD does not automatically expose its API server externally. You can connect to it using one of the following methods:
1. **Service Type: LoadBalancer**: If you prefer to use a LoadBalancer, you can change the service type of the `argocd-server` service to LoadBalancer.
2. **Ingress**: If you have an ingress controller set up, you can configure an ingress resource to direct traffic to the `argocd-server` service.
3. **Port Forwarding**: To access Argo CD locally on your machine, run the following command in your terminal:
```
PS C:\Users\hp> kubectl port-forward svc/argocd-server -n argocd 8080:443
```
This command redirects your local port 8080 to port 443 of the Argo CD service. Now you can access your Argo CD dashboard at 127.0.0.1:8080.

#### Obtaining the Initial Password:
Before you can login, you need to retrieve the initial password for the default admin user. This is generated automatically during Argo’s installation process.
You can access it by running the following argocd command:
```
PS C:\Users\hp> argocd admin initial-password -n argocd
V0l6LBKZXESYk6Gh
```
Use these credentials to login to Argo.
To interact with Argo CD from the command line interface (CLI), you'll need to install the Argo CD CLI tool. Refer to the official documentation for detailed instructions on installing the CLI: [ArgoCD CLI Installation.](https://argo-cd.readthedocs.io/en/stable/cli_installation/)
Login to Argo CD CLI: Use the initial password obtained earlier to log in to Argo CD.
```
PS C:\Users\hp> argocd login localhost:8080
```
Once logged in, you can update the password using the following command:
```
PS C:\Users\hp> argocd account update-password
```
## Practical Example: Deploying an Application with Argo CD
### Create Application using CLI
#### Step 1: Create Kubernetes Manifest for Application
Begin by creating a Kubernetes manifest for your application. For this demonstration, we'll utilize the guestbook example from the Argo CD official example apps repository available at https://github.com/argoproj/argocd-example-apps.
#### Step 2: Create Argo CD Application Manifest
Before deploying your application with Argo CD, you need to create an application.yaml configuration file. This file contains essential settings such as the repository URL, path to manifests, namespace, and other configurations.
Next, create an Argo CD Application manifest for the guestbook example:
guestbook-app.yaml:
```
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: guestbook
spec:
  destination:
    server: 'https://kubernetes.default.svc'
    namespace: default
  source:
    repoURL: 'https://github.com/argoproj/argocd-example-apps'
    path: guestbook
    targetRevision: HEAD
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```
#### Step 3: Apply Argo CD Application Manifest
Apply the Argo CD Application manifest using the following command:
```
PS C:\Users\hp> kubectl apply -f guestbook-app.yaml
```
After deploying the application, review its status and configuration details in the Argo CD dashboard.

### Create Application using UI
#### Step 1: Log into Argo CD
Using the admin user and its password, log into Argo CD.

#### Step 2: Create Application
Let's create an Argo CD application using the guestbook example from the Argo CD official example apps repository.
https://github.com/argoproj/argocd-example-apps
Click the “+ New App” button

Enter the following information:

Git Repo (Source/desired state):

Destination:

Then click “CREATE” button at the top.

Wait for a few minutes until the app “guestbook” is deployed successfully.

After deployment, verify that the service is running by executing:
```
PS C:\Users\hp> kubectl get svc -n argocd
```
Additionally, you can access the guestbook app through port forwarding:
```
PS C:\Users\hp> kubectl port-forward svc/guestbook-ui -n argocd 8081:80
```
Access it from a web browser on http://localhost:8081

Or verify using the Argo CD command-line tool:
```
PS C:\Users\hp> argocd app get guestbook
Name: argocd/guestbook
Project: default
Server: https://kubernetes.default.svc
Namespace: argocd
URL: https://localhost:8080/applications/guestbook
Repo: https://github.com/argoproj/argocd-example-apps.git
Target: HEAD
Path: guestbook
SyncWindow: Sync Allowed
Sync Policy: Automated
Sync Status: Synced to HEAD (d7927a2)
Health Status: Healthy
GROUP KIND NAMESPACE NAME STATUS HEALTH HOOK MESSAGE
Service argocd guestbook-ui Synced Healthy service/guestbook-ui created
apps Deployment argocd guestbook-ui Synced Healthy deployment.apps/guestbook-ui created
```
After defining your application in the Argo CD Application Manifest, any updates or changes to your Kubernetes manifests, such as deployments or services, can be made directly in your code. Once you've made your changes, simply push the updated files to your synced GitHub repository. Argo CD will automatically detect these changes and handle the deployment process, ensuring that your application stays up to date with the latest configurations.
## Wrap-Up
With this guide, we've explored the basics of Argo CD. There are also advanced features to explore, such as notifications, ApplicationSets, RBAC, and more.
For more information, refer to the official [Argo CD Documentation](https://argo-cd.readthedocs.io/en/stable/)
Additional Resources:
[Argo CD: The De Facto Continuous Delivery Tool for Kubernetes](https://spacelift.io/blog/argocd) | artistic_gent_ |
1,758,442 | How to set up Python for backend development | Hey, I see you've been browsing the whole internet on how to install and set up python (latest) on... | 0 | 2024-02-16T17:39:08 | https://dev.to/emmanuelayinde/how-to-set-up-python-for-backend-development-on-your-pc-2blb | python, setup, backend, backenddevelopment | Hey, I see you've been browsing the whole internet on how to install and set up python (latest) on your beautiful yet innocent machine. Is that wrong 😉?
Sigh, I welcome you to this ultimate beginner's guide to Python setup! Python is a powerful and popular programming language, but getting started with it can be overwhelming. In this guide, I'll walk you through the process of setting up Python on different operating systems and creating virtual environments to manage your projects effectively. I also have a bonus for you. I will teach you how to set up Python environments for web frameworks like **FastAPI**, **Flask**, and **Django**, making it easy for you to start building web applications. Let's start with a list of what we'll cover in this tutorial.
## Table of Contents
- [Download and Installation on MacOS](#download-and-installation-on-macos)
- [Download and Installation on Windows](#download-and-installation-on-windows)
- [Download and Installation on Linux](#download-and-installation-on-linux)
- [Creating Virtual Environment For Web Application](#creating-virtual-environment-for-web-application)
  - [Why Virtual Environment?](#why-virtual-environment)
  - [How To Create Virtual Environment](#how-to-create-virtual-environment)
  - [Activating Virtual Environments](#activating-virtual-environments)
- [Bonus - Setting up Python Web Frameworks](#bonus-setting-up-python-web-frameworks)
- [Setting up FastAPI](#setting-up-fastapi)
- [Setting up Flask](#setting-up-flask)
- [Setting up Django](#setting-up-django)
## Download and Installation on macOS
macOS comes with Python pre-installed, but if it's not installed or you want to manage Python versions separately, you can use a package manager like [Homebrew](https://brew.sh/) or install it manually.
- **Homebrew:** Open terminal and run:
```bash
> /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
```
```bash
> brew install python3
```
- **Manual Install:** Visit the official Python website and download the latest version. Run the installer and follow the instructions.
## Download and Installation on Windows
- Visit the [official Python website](https://www.python.org/downloads/).
- Download the latest version of Python for Windows.
- Run the installer and follow the installation wizard.
- Ensure to check the box that says "Add Python to PATH" during installation.
After you've done everything above, run the command `python --version` to confirm that the installation was successful.
## Download and Installation on Linux
Most Linux distributions come with Python pre-installed. You can check by opening a terminal and typing `python3 --version`. If Python isn't installed, you can install it using your package manager. For example, on Ubuntu, you would use:
```bash
> sudo apt update
> sudo apt install python3
```
### Creating Virtual Environment For Web Application
Now, let's talk about virtual environments a bit.
### Why Virtual Environment?
Creating a virtual environment for your web application is like having a separate room for each project. It keeps your project's code and its dependencies separate from other projects and the main Python setup on your computer. This way, if you need different versions of libraries for different projects, they won't interfere with each other. It also makes it easier to share your project with others because you can package up the virtual environment along with your code. Overall, it helps keep things organized and prevents any messy conflicts between different projects.
### How To Create Virtual Environment
- Open a terminal or command prompt (I am using Linux on my end here).
- Navigate to your project directory using the `cd` command.
- Run the following command to create a virtual environment named env:
```bash
> python3 -m venv env
```
> **Explanation**
> - **`python3`**: Invokes the Python interpreter.
> - **`-m`**: Specifies that we want to run a specific module.
> - **`venv`**: The module we want to run, which is responsible for creating virtual environments.
> - **`env`**: The name of the virtual environment we want to create.
### Activating Virtual Environments
Having created a virtual environment, you need to activate it so that Python and pip commands use the packages installed within the virtual environment rather than the global system. I know you are asking _how do I activate it_, right? Well, activating a virtual environment is as simple as smashing your PC on the ground when you face a bug 😂. That was just a joke 😂😂😂
- **Windows:**
```bash
> .\env\Scripts\activate
```
- **MacOS / Linux**
```bash
> source env/bin/activate
```
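Once activated, you can quickly confirm that Python is really using the environment. This is an optional sanity check, not part of the setup itself:

```python
# Confirm whether the current interpreter runs inside a virtual environment
import sys

# Inside a venv, sys.prefix points at the environment directory,
# while sys.base_prefix still points at the system-wide installation.
in_venv = sys.prefix != sys.base_prefix
print("Inside a virtual environment:", in_venv)
```

If it prints `False` after you thought you activated the environment, double-check that you ran the activation command in the same terminal session.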
How has been the tutorial so far??? I believe you are enjoying it. We are wrapping it up already.
---
### Bonus 🔥🔥🔥 - Setting up Python Web Frameworks
Like I said from the beginning, I have a bonus for you. The bonus entails teaching you how to set up Python environments for web frameworks like `FastAPI`, `Flask` and `Django`. Shall we?
### Setting Up FastAPI
**Installation**
After activating your virtual environment, install FastAPI and Uvicorn using pip:
```bash
> pip install fastapi uvicorn
```
> - **Explanation:**
> - **pip install:** Installs Python packages.
> - **fastapi:** Installs the FastAPI framework.
> - **uvicorn:** Installs the Uvicorn ASGI server, which is used to run FastAPI applications.
Congratulations 🎊, you can now start your application using the Uvicorn server you installed in the last step with the following command:
```bash
> uvicorn main:app --reload
```
> - **Explanation:**
> - **uvicorn:** Command to start the Uvicorn server.
> - **main:app:** Specifies the module and object to run, where main is the name of your Python file and app is the instance of your FastAPI application.
> - **--reload:** Automatically reloads the server when code changes are detected, useful for development.
### Setting Up Flask
**Installation**
Install Flask using pip:
```bash
> pip install Flask
```
> - **Explanation:**
> - **Flask:** Installs the Flask framework for building web applications.
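By default, `flask run` looks for an `app.py` (or `wsgi.py`) in the current directory. A minimal sketch you could start from (the route and message are placeholders):

```python
# app.py - a minimal Flask application that `flask run` can discover
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # Placeholder view; replace with your own routes
    return "Hello, Flask!"
```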
Now that we have Flask installed in our project directory, we can move on to starting our Flask application by navigating to the project directory in the terminal and running the following command:
```bash
> flask run
```
This command will start the Flask development server, and you should see output indicating that the server is running. By default, Flask runs on http://127.0.0.1:5000/.
Holla 🎊, you have successfully created a Flask app.
...and finally, wrapping up
### Setting Up Django
Install Django using pip. Don't forget to activate your virtual environment as explained [here](#activating-virtual-environments)
```bash
> pip install django
```
> - **Explanation:**
> - **django:** Installs the Django framework for building web applications.
After creating a project (e.g. `django-admin startproject mysite`) and changing into its directory, run the following command to start the Django development server:
```bash
> python manage.py runserver
```
This command will start the Django development server, and you should see output indicating that the server is running. By default, Django runs on http://127.0.0.1:8000/.
---
For more interesting and self-explanatory articles like this, follow me on my social media platforms:
[Linkedin](https://bit.ly/linkedin4me)
[Twitter / X](https://bit.ly/x4me)
[Instagram](https://bit.ly/instagram4me)
[Threads](https://bit.ly/threads4me)
[GitHub](https://bit.ly/github4me),
Also, you can buy me a coffee ☕ to support and encourage me by donating here 👉 [Buy Me Coffee](https://www.buymeacoffee.com/emmanuelayinde)
Till next time we meet again, bye 👋👋👋
| emmanuelayinde |
1,758,484 | Be a paradigm-agnostic developer | Programming paradigms dictate the way programs are implemented - conceptually, they give them a... | 0 | 2024-02-11T20:22:39 | https://ladyofcode.com/writing/be-a-paradigm-agnostic-developer | beginners, softwareengineering, tutorial, programming | Programming paradigms dictate the way programs are implemented - conceptually, they give them a specific structure, style, and characteristics. I use 'paradigm-agnostic' to mean someone who doesn't constantly advocate specifically in favour of one programming paradigm.
Do you *need* to be paradigm-agnostic?
No.
Plenty of developers make a great living having learnt one language, and perhaps a relevant framework, as well.
So should you?
That depends on the kind of work you want to do, how far you're taking your career, and, by extension, the kind of developer you want to be. I'd answer 'yes' for any of the following:
- Architecting systems
- Detecting anti-patterns
- Becoming a highly-experienced 'senior developer'
- Reaching the mythological unicorn developer of legend
- In rarer instances, when your favourite not-favourite language decides to migrate or include features from another (guess who had it easier when JS and React started leaning into the functional paradigm, *ahem* 👀)
Admittedly, these things are all related; I'd file them under 'being a great software engineer'.
During an interview for which I was approached once upon a time, the interviewers (both tech people - the founder and CTO) specified that while solid React developers were aplenty and easy to come by (*ouch*), they wanted someone with strong programming skills: someone who could spot anti-patterns and be responsible for the health of their codebase. 'Someone who can actually program'. That stuck with me.
If you're going to be doing actual software engineering, and not simply coding, that suggests you're moving across numerous projects with different languages, tooling, and architecture. To navigate them well - or, further to that - pick and use the right languages and tools for the project, being able to understand their applicability is important. Being able to use a variety makes *you* more employable as well as benefiting projects, too; a business may be less inclined to feel limited by their developers' language knowledge.
Truly experienced developers will often have a personal preference, and that's fine - but they won't use this as a reason to pick something for a project, like children over Pokémon vs Digimon (did you just pick one internally? 😆). I know the line height isn't especially large, but you should be able to read between them and hide that sort of *paradismal* behaviour if you ever feel inclined towards it.
## How to get there
Learn different languages with different paradigms, if that wasn't obvious. I'm sure it was. But do so with intention:
- One of the best things you can do is reinvent the wheel in different paradigms, across multiple languages (e.g. manipulating arrays) and understand how they work under the hood. This will help build the mental model for each paradigm. The more you work with them the easier it'll get.
- Understand how algorithms and data structures are implemented within different paradigms.
- Learn best practices for various languages and tooling. You don't have to memorise them all; it's more about doing things well. Patterns will reveal themselves and those will stick.
- Actual understanding is important. Doing a course quickly with a couple of cookie-cutter projects isn't necessarily going to consolidate that understanding internally. Be thorough.
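As a tiny illustration of the "same wheel, different paradigms" idea (Python here, purely as an example), here is the same array computation written imperatively and functionally:

```python
numbers = [1, 2, 3, 4, 5, 6]

# Imperative style: explicit loop and a mutable accumulator
imperative_total = 0
for n in numbers:
    if n % 2 == 0:
        imperative_total += n * n

# Functional style: a single expression, no mutation
functional_total = sum(n * n for n in numbers if n % 2 == 0)

print(imperative_total, functional_total)  # both compute the same value
```

Working through small exercises like this across several languages is what makes each paradigm's mental model stick.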
I only realised how useful this was after I had done this for years, courtesy of my degree, and it's something I recommend others to look for in their education. This was out of sheer luck; it's difficult to be intentional about picking a degree as a noob.
If that's something you decide to do, good luck - and don't forget to make learning enjoyable! | ladyofcode |
1,758,511 | Unraveling the Layers of System Requirements in Software Architecture | Greetings, Tech Enthusiasts! 👋 It's been a while since I've delved into the world of blogging, and... | 26,386 | 2024-02-11T20:55:39 | https://www.linkedin.com/pulse/unraveling-layers-system-requirements-software-joel-ndoh-btt4f/?trackingId=Z%2FL67x6jTSyaRhTqfXRggQ%3D%3D | architecture, systemdesign | Greetings, Tech Enthusiasts! 👋
It's been a while since I've delved into the world of blogging, and what better way to make a comeback than by unraveling the intricate layers of System Requirements in Software Architecture.
## Understanding Performance 🚀
In the ever-evolving landscape of software design, the performance of a system is paramount. We're not just talking about how fast or responsive it should be, but breaking it down into measurable goals. Consider aiming for 90% of requests to be responded to in a crisp 50ms.
To truly grasp this, let's introduce a coffee bar analogy:
- **Latency**: Picture the time from ordering a coffee to the barista handing it over. It's the wait time plus processing time.
- **Throughput**: Think of throughput as how many coffee cups a barista can whip up in a given timeframe.
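To make a target like "90% of requests within 50ms" measurable, you can compute a p90 over observed latencies. A minimal Go sketch, using made-up latency numbers and the nearest-rank percentile method:

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// percentile returns the p-th percentile (0-100) of the values,
// using the nearest-rank method on a sorted copy.
func percentile(values []float64, p float64) float64 {
	sorted := append([]float64(nil), values...)
	sort.Float64s(sorted)
	rank := int(math.Ceil(p / 100 * float64(len(sorted)))) // 1-based rank
	if rank < 1 {
		rank = 1
	}
	return sorted[rank-1]
}

func main() {
	// made-up response times in milliseconds
	latencies := []float64{12, 48, 30, 45, 22, 95, 41, 18, 33, 49}
	p90 := percentile(latencies, 90)
	fmt.Printf("p90 = %.0fms, meets the 50ms target: %v\n", p90, p90 <= 50)
}
```

Note how the single 95ms outlier barely moves the p90; that is exactly why percentile targets are more robust goals than averages.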
## Scaling Up: The Art of Scalability 📈
Next on our exploration is Scalability. It's not just about handling a multitude of users simultaneously; it's about doing so gracefully. Think of it as ensuring your system can seamlessly dance through the traffic of a large user base without missing a beat.
## Reliability in the Face of Challenges 🛡️
Now, let's talk about Reliability. A robust system isn't just efficient; it's resilient. It withstands failures, data center downtimes, and unexpected events with a stoic demeanor.
## The Fort Knox of Software: Security 🔒
Security is a non-negotiable aspect of system requirements. We aim to send and store data securely, warding off any unauthorized attempts to breach our digital fortress.
## Smooth Sailing through Deployment Challenges ⚙️
Enter Deployment—a formidable challenge in managing large-scale, distributed systems. We aim for frictionless deployments of our complex architectures, comprising microservices, databases, and more.
## Choosing the Right Tools: Technology Stack ⚖️
Last but not least, the Technology Stack. Depending on our system requirements, such as performance and scalability, choosing the right tech stack becomes crucial. It's akin to selecting the tools that best fit the job when crafting a large-scale system.
Understanding and optimizing these six aspects will pave the way for robust and efficient systems. Stay tuned as we delve deeper into each element in the upcoming posts.
Are you ready to elevate your understanding of Software Architecture? Let's embark on this journey together! 💻✨
#SystemRequirements #TechInsights #SoftwareArchitecture #Performance #Scalability #Reliability #Security #Deployment #TechStack | ndohjapan |
1,758,520 | Vector Database and Spring IA | A vector database is a specialized type of database optimized for handling vector data, which is... | 0 | 2024-02-11T21:11:24 | https://dev.to/lucasnscr/vector-database-and-spring-ia-4dll | database, spring, ai, java | A vector database is a specialized type of database optimized for handling vector data, which is fundamental in the field of Artificial Intelligence (AI), particularly in areas like machine learning, natural language processing, and image recognition.
## What is Vector Data?
Vector data refers to data represented in the form of vectors. In AI, a vector is often a numerical representation of complex data, like text, images, or sound. For instance, words in natural language processing can be converted into vectors using techniques like word embeddings (e.g., Word2Vec, GloVe). These vectors capture the semantic meaning of the words and allow AI models to process and understand language.
## How does a Vector Database Work?
Vector databases are designed to efficiently store and query vector data. Unlike traditional databases that perform queries based on exact matches or SQL queries, vector databases enable similarity searches. Here's how it works:
1. Storing Data: Data (like text or images) is transformed into vectors using AI models and then stored in the vector database.
2. Querying Data: When a query is made, it is also converted into a vector. The vector database then searches for vectors that are most similar to the query vector. This is known as a similarity or nearest neighbor search.
3. Similarity Measurement: The similarity between vectors is usually calculated using metrics like Euclidean distance, cosine similarity, or Manhattan distance. The choice of metric depends on the specific application and the nature of the data.
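As an illustration of one such metric, here is a minimal cosine-similarity calculation in Go. The vectors are toy values, not real embeddings:

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity between two equal-length,
// non-zero vectors: dot(a, b) / (|a| * |b|).
// 1 means same direction, 0 means orthogonal.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

func main() {
	query := []float64{0.9, 0.1, 0.3}
	docA := []float64{0.8, 0.2, 0.3} // similar direction -> high score
	docB := []float64{0.1, 0.9, 0.1} // different direction -> low score
	fmt.Printf("query~docA: %.3f\n", cosine(query, docA))
	fmt.Printf("query~docB: %.3f\n", cosine(query, docB))
}
```

A vector database applies this same idea at scale, using index structures so it does not have to compare the query against every stored vector.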
## Correlation with AI
The use of vector databases is highly correlated with AI for several reasons:
1. Enhanced AI Models: They enable AI models to access large amounts of relevant, context-rich data quickly. This is crucial for models that require contextual understanding, like chatbots or recommendation systems.
2. Retrieval Augmented Generation (RAG): This is a technique where, before generating a response, an AI model retrieves relevant information from a vector database. This helps in providing more accurate and context-aware outputs.
3. Efficiency in Handling High-Dimensional Data: AI often deals with high-dimensional data (like images or complex text). Vector databases are optimized for such data, ensuring efficient storage and retrieval, which is a challenge in traditional databases.
4. Real-Time Processing: In many AI applications, real-time response is crucial. Vector databases allow for quick retrieval of similar data, enabling real-time processing in AI applications.
In summary, vector databases play a crucial role in the AI ecosystem by enabling efficient storage and retrieval of vectorized data. They support AI models by providing a means to quickly access large volumes of contextually relevant data, which is essential for tasks requiring understanding and interpretation of complex data sets.

## Spring AI
The Spring AI project aims to streamline the development of applications that incorporate artificial intelligence functionality without unnecessary complexity. In this example we use features such as Embeddings, Prompts, and an ETL pipeline, and save all embeddings in [PGvector](https://github.com/pgvector/pgvector) (the Postgres vector database extension).
### Embedding
As a software engineer, when you're working with the **Embeddings** API, think of the **EmbeddingClient** interface as a bridge connecting your application to the power of AI-based text analysis. Its main role is to transform textual information into a format that machines can understand - numerical vectors, known as embeddings. These vectors are instrumental in tasks like understanding the meaning of text (semantic analysis) and sorting text into categories (text classification).
From a software engineering perspective, the EmbeddingClient interface is built with two key objectives:
**Portability:** The design of this interface is like a universal adapter in the world of embedding models. It's crafted to fit seamlessly with various embedding techniques. This means, as a developer, you can easily switch from one embedding model to another without having to overhaul your code. This flexibility is in sync with the principles of modularity and interchangeability, much like how the Spring framework operates.
**Simplicity:** With methods like embed(String text) and embed(Document document), EmbeddingClient takes the heavy lifting off your shoulders. It converts text to embeddings without requiring you to get tangled in the complexities of text processing and embedding algorithms. This is particularly beneficial for those who are new to the AI field, allowing them to leverage the power of embeddings in their applications without needing a deep dive into the technicalities.
In essence, as a software engineer, when you use EmbeddingClient, you're leveraging a tool that not only simplifies the integration of advanced AI capabilities into your applications but also ensures that your code remains agile and adaptable to various embedding models.
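The portability idea can be sketched in a few lines. The interface and fake backends below are illustrative Go stand-ins for the design principle, not Spring AI's actual Java API:

```go
package main

import "fmt"

// Embedder mirrors the idea behind an EmbeddingClient: one small
// interface, many interchangeable model backends. The implementations
// below are hypothetical stand-ins, not real models.
type Embedder interface {
	Embed(text string) []float64
}

type fakeOpenAI struct{}

func (fakeOpenAI) Embed(text string) []float64 {
	// A real client would call the provider's API; here we just
	// derive a toy 2-dimensional vector from the text length.
	return []float64{float64(len(text)), 1}
}

type fakeLocalModel struct{}

func (fakeLocalModel) Embed(text string) []float64 {
	return []float64{1, float64(len(text))}
}

// indexDocument depends only on the interface, so swapping the
// embedding model requires no changes here.
func indexDocument(e Embedder, doc string) []float64 {
	return e.Embed(doc)
}

func main() {
	for _, e := range []Embedder{fakeOpenAI{}, fakeLocalModel{}} {
		fmt.Println(indexDocument(e, "hello vectors"))
	}
}
```

The calling code never names a concrete backend, which is exactly the modularity the EmbeddingClient interface is aiming for.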

### Prompts
When working with Spring AI, **prompts** can be thought of as the steering wheel for AI models, guiding them to produce specific outputs. The way these prompts are crafted plays a critical role in shaping the responses you get from the AI.
To draw a parallel with familiar concepts in software development, handling prompts in Spring AI is akin to how you manage the "View" component in the Spring MVC framework. In this scenario, creating a prompt is much like constructing an elaborate text template, complete with placeholders for dynamic elements. These placeholders are then substituted with actual data based on user input or other operations within your application, similar to how you might use placeholders in SQL queries.
As Spring AI continues to evolve, it aims to introduce more sophisticated methods for interacting with AI models. At its core, the current classes and functionalities in Spring AI could be compared to JDBC in terms of their fundamental role. For example, the ChatClient class in Spring AI can be likened to the essential JDBC library provided in the Java Development Kit (JDK).
Building on this foundation, just as JDBC is enhanced with utilities like JdbcTemplate and Spring Data Repositories, Spring AI is expected to offer analogous helper classes. These would streamline interactions with AI models, much like how JdbcTemplate simplifies JDBC operations.
Looking further ahead, Spring AI is poised to introduce even more advanced constructs. These might include elements like ChatEngines and Agents that are capable of considering the history of interactions with the AI model. This progression mirrors the way that software development has evolved from direct JDBC usage to more abstract and powerful tools like ORM frameworks.
In summary, as a software engineer working with Spring AI, you are at the forefront of integrating AI capabilities into applications, using familiar paradigms and patterns from traditional software development, but applied to the cutting-edge field of AI and machine learning.
### ETL pipeline
The Extract, Transform, and Load (ETL) framework is crucial for managing data processes in the Retrieval Augmented Generation (RAG) scenario. Essentially, the ETL pipeline is the mechanism that streamlines the journey of data from its raw state to a more organized vector store. This process is vital for preparing the data in a way that makes it easily retrievable and usable by the AI model.
In the RAG use case, the core objective is to enhance the capabilities of generative AI models. This is achieved by integrating text-based data, which involves sourcing relevant information from a dataset to improve both the quality and the contextual relevance of the outputs generated by the model. The ETL framework plays a pivotal role in this process by ensuring that the data is not only accurately extracted and transformed but also efficiently loaded and stored for optimal retrieval by the AI system. This process enhances the AI's ability to produce more precise and contextually rich responses.
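A toy end-to-end sketch of the extract, transform, and load steps for a RAG pipeline (in Go, with a stand-in embedding instead of a real model call, and an in-memory slice instead of a real vector store):

```go
package main

import (
	"fmt"
	"strings"
)

// entry is one row in our toy "vector store".
type entry struct {
	chunk     string
	embedding []float64
}

// extract would normally read a PDF or web page; here it just
// passes the raw text through.
func extract(source string) string { return source }

// transform splits raw text into fixed-size word chunks, the
// simplest possible chunking strategy.
func transform(text string, chunkSize int) []string {
	words := strings.Fields(text)
	var chunks []string
	for i := 0; i < len(words); i += chunkSize {
		end := i + chunkSize
		if end > len(words) {
			end = len(words)
		}
		chunks = append(chunks, strings.Join(words[i:end], " "))
	}
	return chunks
}

// load stores each chunk with a stand-in embedding; a real pipeline
// calls the embedding model here.
func load(store *[]entry, chunks []string) {
	for _, c := range chunks {
		*store = append(*store, entry{chunk: c, embedding: []float64{float64(len(c))}})
	}
}

func main() {
	var store []entry
	raw := extract("vector databases enable similarity search over embeddings")
	load(&store, transform(raw, 3))
	for _, e := range store {
		fmt.Printf("%q -> %v\n", e.chunk, e.embedding)
	}
}
```

Each stage stays independent, so swapping the reader, the chunking strategy, or the vector store touches only one function.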
## Details of Project
We've developed a project that incorporates fundamental principles related to AI and the Spring library, focusing on concepts like Prompts, Embedding, ETL pipelines, and Vector Databases. Our aim is to provide a concise overview of each concept's functionality. The main goal is to integrate all these elements through a practical example and apply them to a routine solution.
The first step is to select a Vector Database for our use. Spring AI offers integration with various databases. In this instance, we've chosen to use **[pgvector](https://github.com/pgvector/pgvector)**
```
version: '3.7'
services:
postgres:
image: ankane/pgvector:v0.5.0
restart: always
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=admin
- POSTGRES_DB=vector_db
- PGPASSWORD=admin
logging:
options:
max-size: 10m
max-file: "3"
ports:
- '5433:5432'
healthcheck:
test: "pg_isready -U postgres -d vector_db"
interval: 2s
timeout: 20s
retries: 10
```
To run pgvector, run:
```
docker compose up -d
```
To use all the Spring AI functionalities in the project, you will need to add some dependencies:
```
<spring-ai.version>0.8.0-SNAPSHOT</spring-ai.version>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-openai-spring-boot-starter</artifactId>
<version>${spring-ai.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-pdf-document-reader</artifactId>
<version>${spring-ai.version}</version>
</dependency>
<dependency>
<groupId>org.springframework.ai</groupId>
<artifactId>spring-ai-pgvector-store</artifactId>
<version>${spring-ai.version}</version>
</dependency>
```
We use the latest version of the library, 0.8.0-SNAPSHOT.
**Command to Run application**
```
mvn spring-boot:run -Dspring-boot.run.profiles=openai
```
We have divided our approach into two distinct parts: data handling and question processing.
**Data Handling**: This involves several key operations:
1. Loading: Importing data into our system.
2. Transforming: Modifying or processing the data to fit our needs.
3. Inserting: Adding new data entries into our database.
4. Retrieving: Accessing data from the database as needed.
5. Deleting: Removing data entries that are no longer required.
**Question Processing**: In this part, we utilize the data that has been loaded and processed. The aim here is to provide responses that are directly related to, and informed by, the data we have in our resources.
Regarding the data aspect, we have utilized a Technology Radar from ThoughtWorks as our primary data source.
### Technology Radar
The Technology Radar is a snapshot of tools, techniques, platforms, languages and frameworks based on the practical experiences of Thoughtworkers around the world. Published twice a year, it provides insights on how the world builds software today. Use it to identify and evaluate what’s important to you.

[Here the link from latest tech radar version](https://www.thoughtworks.com/content/dam/thoughtworks/documents/radar/2023/09/tr_technology_radar_vol_29_en.pdf)
With the content from the ThoughtWorks Technology Radar as our reference, we are now equipped to utilize our API to recommend the best tools or offer insights and opinions on various technologies.


[Link of project](https://github.com/lucasnscr/SpringAI)
| lucasnscr |
1,758,545 | From senior engineer to a mentor for software engineers | One of the awesome things about tech is that it introduces many people to you, once you get enough... | 0 | 2024-02-11T22:36:03 | https://dev.to/ibrahimshamma99/from-senior-engineer-to-a-mentor-for-software-engineers-4phf | career |
One of the awesome things about tech is that it introduces you to many people. Once you have enough experience and are finally no longer a student of the game but a senior engineer, mentoring can be a good way to transition from seniority in writing and delivering code into being a seasoned tech lead who helps people deliver value.
Find a mentee. They can be your friends in tech, or simply look at mid-level engineers; they are the ones most in need of mentorship. Mid-level engineers are in a unique spot in their careers: they know a lot of the basics, they can finish tickets, they can create something, but they are not experts at their thing, and they might not be aware of their options or have a clear goal.
1. **Know your mentee's goal**
Basically ask the question, where do you see yourself in 3 years?
Based on the feedback from this first question, you can proceed. Sometimes engineers do not know what they want or do not have a clear plan; if so, they are the ones most in need of mentorship, so relax if you don't get the answer you were looking for. It means they will get the most out of your mentorship. It is also an opportunity for you to revise your own decisions and directions: a win-win scenario for sure.
When the mentee does not have a goal, you need to explore what goals are out there. Given that they are in software engineering, you will most likely end up in one of the following scenarios:
- Do they want to switch out of tech? If so, are they looking to become a product manager? An engineering manager?
- Do they want to become QA Engineers?
- Do they want to get comfortable with code?
- Do they want to become a tech lead or a senior engineer? (For fun, check **[You are not the Tech Lead](https://www.youtube.com/watch?v=xr6eUW3jyFs&t=440s)**)
2. **Setting up a Growth Plan**
The goal behind the plan is to define which short-term action items will drive toward their long-term goals.
This is a very important step. What differentiates great mentors is that their plan is based purely on step 1. For example, if the mentee is planning to become a product engineer, the plan should focus on building a product mindset, without too much focus on technical details.
A helpful template you can use:
**Template:**
**Step 0:** Gather inputs such as existing feedback & guidelines on promotion.
**Step 1:** Assess your current and desired states.
**Step 2:** Prioritize 3-5 skills/competencies to focus on.
**Step 3:** Set a SMART (Specific, Measurable, Achievable, Relevant, and Time-bound) goal for each focus area.
**Step 4:** Identify dependencies and support. What or who might be key to achieving your goals? Write down any resources, tools, or individuals you might need:
Credit to [Omar Halabieh](https://www.linkedin.com/in/omarhalabieh) for the template.
3. **Scheduled 1:1s**
After setting up the Growth Plan, you'll want to meet 1:1.
The 3 things I have top of mind are:
1. **Challenges:** What is difficult right now or recently?
2. **Reflections:** Anything they wish went better recently?
3. **Relationships:** How are things going with their manager or the people they are working with?
Credit to [JORDAN CUTLER](https://substack.com/@jordancutler) for the template
You can even go further and treat it as if it were a [retro session](https://www.atlassian.com/team-playbook/plays/retrospective).
I let the mentee express their feelings, their ideas, and their current situation behind the questions, and then we drive a healthy conversation that leaves the mentee with a lot of ideas to process about what they could have done better in the situation. These questions are great for priming your mentee to share what's going on and how you can help.
Remember, you must listen a lot here in order to be heard later, in the next point.
4. **Ruthlessly give them DOs & DON'Ts, but with the why behind them**
I said you need to be ruthless because there should be no hard feelings when receiving feedback as a mentee; if you are not able to accept a DON'T, you won't be able to progress. For example:
- DON’T read part of the problem and start coding
- DON'T write code based on unwritten requirements or assumptions about the business
and more of a redirection **DOs** like:
- Listen carefully when you get some requirements and ask clarifying questions
Again you only can do that when you listen carefully and give them their space to express in step 3
How can you give the why behind the advice?
The magical way is to share similar stories. Once shared, get the mentee's judgement on the situation, then give them your solution to it. This way you leave them thinking about how to relate your experience to their own, different situation.
Note that your solution mostly will not be tailor-made to their situation (check solution-building methodology).
Sometimes you might not have a story related to the mentee's situation, and that's the beauty of mentorship: it exposes you to different situations and leaves you thinking about them.
1,758,628 | Test in Go: the order of Cleanup is not what you think | The framework [testify](https://github.com/stretchr/testify/tree/v1.8.4) might be the most popular.... | 26,389 | 2024-02-12T03:04:47 | https://xuanyu.ca/test-in-go-the-order-of-cleanup-is-not-what-you-think | go, testing, testify, cleanup | The framework `[testify](https://github.com/stretchr/testify/tree/v1.8.4)` might be the most popular. Surprisingly, I find the documentation is not clear enough to answer my questions and I can't find answers anywhere. Therefore, I have to write experiments to verify my guesses.
One of my questions is: when will the registered `Cleanup` functions run?
The summary of my experiment shows that the `Cleanup` functions run in the order we would normally expect, except for those from **subtests**. The exception could lead to severe problems.
# The experiment
The idea is simple: register `Cleanup` functions from all setup/teardown methods and use the log to infer the execution order.
Let's have a peek at the result first.

We can see that the `Cleanup` functions registered at `TearDownSubTest` and `SetupSubTest` are called after the subtests are finished. However, it's **NOT** guaranteed that they will be called immediately after a subtest is finished.
There are two scenarios:
- For nested subtests, the `Cleanup` functions registered at `TearDownSubTest` and `SetupSubTest` are called once their parents are finished but not cleaned.
- For direct subtests whose parents are NOT subtests, their `Cleanup` functions registered at `TearDownSubTest` and `SetupSubTest` are called once the parents' `AfterTest` and `TearDownTest` are finished.
The impact is:
- Nested subtests could pollute each other, because some `Cleanup` functions run after all sibling subtests.
- Child subtests can even pollute their parents.
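For intuition about the baseline behaviour, here is a toy model of the LIFO cleanup stack that Go's `t.Cleanup` maintains. This is not testify's real implementation, just an illustration of registration order versus execution order:

```go
package main

import "fmt"

// fakeT models how t.Cleanup works: each test owns a stack of cleanup
// funcs, run in last-in-first-out order once the test (including all
// of its subtests) has finished.
type fakeT struct {
	name     string
	cleanups []func()
}

// Cleanup pushes a function onto the stack, like (*testing.T).Cleanup.
func (t *fakeT) Cleanup(f func()) { t.cleanups = append(t.cleanups, f) }

// finish runs the registered functions in reverse registration order.
func (t *fakeT) finish() {
	for i := len(t.cleanups) - 1; i >= 0; i-- { // LIFO
		t.cleanups[i]()
	}
}

func main() {
	t := &fakeT{name: "TestCaseA"}
	t.Cleanup(func() { fmt.Println("cleanup: registered first, runs last") })
	t.Cleanup(func() { fmt.Println("cleanup: registered second, runs first") })
	fmt.Println("test body done")
	t.finish()
}
```

The testify surprise comes from *which* `testing.T` the suite hooks register against, so subtest cleanups end up on a parent's stack and only run when that parent finishes.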
The source code can be found here: [https://github.com/xuanyuwang/testify-examples/blob/main/cleanup-order/cleanup_order_test.go](https://github.com/xuanyuwang/testify-examples/blob/main/cleanup-order/cleanup_order_test.go)
```go
package cleanuporder_test
import (
"fmt"
"testing"
"github.com/stretchr/testify/suite"
)
var tearDownSubTest []string
type TestCleanupOrder struct {
suite.Suite
}
func (s *TestCleanupOrder) SetupSuite() {
fmt.Println("Run Interface SetupAllSuite: SetupSuite")
s.T().Cleanup(func() {
fmt.Println("Cleanup Interface SetupAllSuite: SetupSuite")
})
}
func (s *TestCleanupOrder) SetupTest() {
fmt.Println("Run Interface SetupTestSuite: SetupTest")
s.T().Cleanup(func() {
fmt.Println("Cleanup Interface SetupTestSuite: SetupTest")
})
}
func (s *TestCleanupOrder) TearDownSuite() {
fmt.Println("Run Interface TearDownAllSuite: TearDownSuite")
s.T().Cleanup(func() {
fmt.Println("Cleanup Interface TearDownAllSuite: TearDownSuite")
})
}
func (s *TestCleanupOrder) TearDownTest() {
fmt.Println("Run Interface TearDownTestSuite: TearDownTest")
s.T().Cleanup(func() {
fmt.Println("Cleanup Interface TearDownTestSuite: TearDownTest")
})
}
func (s *TestCleanupOrder) BeforeTest(suiteName, testName string) {
fmt.Printf("Run Interface BeforeTest: BeforeTest for suite %s - test %s\n", suiteName, testName)
s.T().Cleanup(func() {
fmt.Printf("Cleanup Interface BeforeTest: BeforeTest for suite %s - test %s\n", suiteName, testName)
})
}
func (s *TestCleanupOrder) AfterTest(suiteName, testName string) {
fmt.Printf("Run Interface AfterTest: AfterTest for suite %s - test %s\n", suiteName, testName)
s.T().Cleanup(func() {
fmt.Printf("Cleanup Interface AfterTest: AfterTest for suite %s - test %s\n", suiteName, testName)
})
}
func (s *TestCleanupOrder) HandleStats(suiteName string, stats *suite.SuiteInformation) {
fmt.Printf("Run Interface WithStats: HandleStats for suite %s\n", suiteName)
s.T().Cleanup(func() {
fmt.Printf("Cleanup Interface WithStats: HandleStats for suite %s\n", suiteName)
})
}
func (s *TestCleanupOrder) SetupSubTest() {
fmt.Println("Run Interface SetupSubTest: SetupSubTest")
s.T().Cleanup(func() {
fmt.Println("Cleanup Interface SetupSubTest: SetupSubTest")
})
}
func (s *TestCleanupOrder) TearDownSubTest() {
fmt.Println("Run Interface TearDownSubTest: TearDownSubTest")
s.T().Cleanup(func() {
fmt.Println("Cleanup Interface TearDownSubTest: TearDownSubTest")
fmt.Printf("\tTear down sub test %v\n", tearDownSubTest)
})
}
// Test case
func (s *TestCleanupOrder) TestCaseA() {
fmt.Println("Run TestCase A")
s.T().Cleanup(func() {
fmt.Println("Cleanup TestCase A")
})
s.Run("Sub-TestCase A", func() {
fmt.Println("Run Sub-TestCase A")
s.T().Cleanup(func() {
fmt.Println("Cleanup Sub-TestCase A")
})
tearDownSubTest = append(tearDownSubTest, "Sub-TestCase A")
s.Run("Sub-Sub-TestCase A", func() {
s.T().Cleanup(func() {
fmt.Println("Cleanup Sub-Sub-TestCase A")
})
fmt.Println("Run Sub-Sub-TestCase A")
tearDownSubTest = append(tearDownSubTest, "Sub-Sub-TestCase A")
fmt.Println("Finish Sub-Sub-TestCase A")
})
s.Run("Sub-Sub-TestCase B", func() {
s.T().Cleanup(func() {
fmt.Println("Cleanup Sub-Sub-TestCase B")
})
fmt.Println("Run Sub-Sub-TestCase B")
tearDownSubTest = append(tearDownSubTest, "Sub-Sub-TestCase B")
fmt.Println("Finish Sub-Sub-TestCase B")
})
fmt.Println("Finish Sub-TestCase A")
})
fmt.Println("Finish TestCase A")
}
// Entry point of the whole test suite
func TestCleanupOrderSuite(t *testing.T) {
suite.Run(t, &TestCleanupOrder{})
}
``` | xuanyu |
1,758,668 | IoT Or Bust: Game-Changing Role In Product Design | In today's rapidly evolving technological landscape, the Internet of Things (IoT) stands as a... | 0 | 2024-02-12T05:24:30 | https://dev.to/bdilip48/iot-or-bust-game-changing-role-in-product-design-3h12 | iotinproductdesign, iotproductdevelopment, iot, iotproductdesign | In today's rapidly evolving technological landscape, the **[Internet of Things (IoT)](https://www.coditude.com/insights/iot-or-bust-game-changing-role-in-product-design/)** stands as a revolutionary force, transforming how we interact with the digital world. This article explores the pivotal role of IoT in reshaping product engineering, highlighting its current state, economic impact, challenges, and future trends.
**The Current State of IoT in Product Engineering**
Statistics reveal exponential growth in IoT adoption, with billions of devices connected globally.
Industries like manufacturing, automotive, and healthcare are leveraging IoT to enhance efficiency and innovation.
Real-world case studies demonstrate IoT's transformative impact on predictive maintenance, remote monitoring, and more.
**The Economic Impact of IoT**
IoT is a significant economic driver, projected to unlock trillions of dollars in value globally by 2025.
Improved productivity and new business opportunities are among the key economic benefits of IoT adoption.
Despite its potential, challenges such as technical complexity and scalability hurdles exist.
**Challenges and Limitations**
Ensuring data privacy and security is paramount in IoT deployments due to the sensitive information handled by these devices.
Achieving interoperability among diverse IoT devices poses a significant challenge, complicating seamless communication and integration across platforms.
Transitioning from IoT pilots to full-scale production is hindered by the challenge of scaling successful projects and capturing their value at a broader scale.
Economic and infrastructural investments required for IoT deployment can be substantial, especially for small and medium-sized enterprises, potentially slowing down adoption rates.
**Future Trends and Predictions**
Emerging trends include the integration of edge computing, 5G technology, and AI/ML in IoT applications.
These advancements will lead to more interconnected and intelligent systems, driving innovation in product engineering.
**The Road Ahead for IoT in Product Engineering**
IoT will continue to play a crucial role in creating intelligent, interconnected products that adapt to user needs.
Companies that effectively leverage IoT will gain a competitive edge, driving innovation and delivering value to customers.
**Concluding Thoughts**
As we navigate the complexities of integrating IoT into product engineering, the expertise of **[Product Engineering Solutions](https://www.coditude.com/capabilities/product-engineering-service/)** providers becomes invaluable.
These solution providers offer tailored strategies and insights to ensure that your products are not only cutting-edge but also optimized for success in the IoT-driven marketplace. Despite challenges, the potential benefits of IoT are immense, heralding an era of unprecedented connectivity, efficiency, and intelligence.
In conclusion, IoT is not just a technological trend; it's a transformative force that will shape the future of product engineering. Embracing IoT effectively holds the key to unlocking new levels of innovation and value creation in the digital age.
| bdilip48 |
1,758,838 | Best Fitness App Solutions for Tracking your Progress | Introduction: Taming the Fitness Progress Tracking Chaos. I hope you can recall that first... | 0 | 2024-02-12T08:59:33 | https://dev.to/developersdev_/best-fitness-app-solutions-for-tracking-your-progress-489a | fitnessappdevelopers, fitnessappsolutions, fitnessappdevelopmentcompany | ## Introduction: Taming the Fitness Progress Tracking Chaos.
I hope you can recall that first attempt at a spreadsheet which was so careful, documenting your miles run, weight lifted, and perfect plank hold. Now it lies neglected to join the digital cemetery, a memorial of how annoying manual fitness advancement tracking is. All the juggling of numbers, motivation battles, and learning how to break down trends will have you thinking it’ll all be an uphill fight even before your sneakers are pieced together.
### What if there was a superior approach?
The boom of [health and fitness app development](https://semidotinfotech.com/health-fitness-app-development) has brought a tide of solutions that promise to simplify your journey and give you deeper insight into how you're progressing.
The problem? With so many options, selecting the best fitness app solution is daunting. Fear not, dear fellow fitness enthusiast! Through this guide, you will gain the essentials to navigate the app landscape and discover a tool that will propel your progress.
#### Unlocking the Benefits:
The fact is, maintaining a fitness lifestyle requires hard work. Here's where fitness apps shine:
**Motivation on Demand:** Gamification elements, challenges personalized to you, and community features keep your attention glued.
**Actionable Insights:** Use charts and graphs to picture your development, pinpoint strengths and weaknesses then modify the training accordingly.
**Enhanced Accountability:** Share your goals with friends, take part in virtual races, and follow other people's updates as a way of staying accountable.
**Holistic Wellness:** Most applications also track sleep, nutrition, and other lifestyle factors, giving a complete picture of your health journey.
#### Get Ready to Dive Deep:
Now, buckle up as we explore the crucial features that make fitness app solutions tick:
**Part 2:** Main Features for Better Monitoring of Progression.
**Part 3:** Selecting an Appropriate Fitness Application Development Solution
In other words, demystifying these points will enable you to choose a fitness app developer that creates an application designed specifically for your needs and objectives.
Are you ready to leave the no-data fitness chaos behind and step into the age of fitness progress tracking? Stay tuned, and let's unlock your potential together!
## Key Features for Effective Progress Tracking: Get Rid of Data Glut, Welcome in the Insights
It all depends on whether your fitness data becomes your best friend or your foe. Imagine swimming in a sea of numbers: each stroke reveals one more metric, yet none points you to shore. Good [fitness app solutions](https://semidotinfotech.com/health-fitness-app-development) understand this dilemma and include features tailored to turn information into meaningful action.
Let's dive into the key functionalities that make progress tracking a breeze:
### 1. Data Collection: Beyond Steps and Miles
Forget the pedometer mentality! Modern fitness apps go far beyond steps and distance, building a comprehensive picture of your health. Think:
**Biometrics:** Monitor heart rate, blood pressure, and even sleep quality to see how your body responds to exercise.
**Nutritional Intake:** Log what you eat and make adjustments based on that record, so mindful eating drives your fitness goals.
**Workout Routines:** Track exercises, reps, and sets so you can follow a specific plan, solo or with a team, and build an accurate record of your strength workouts.
**Wellness Metrics:** Track stress, mood, and energy levels to understand your overall well-being.
Remember, a firm specializing in fitness app development can tailor data collection to your individual needs, ensuring you see only the information that matters.
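To make the data-collection idea concrete, here is a minimal, purely illustrative sketch of how one day's metrics might be modeled. The class and field names are hypothetical, not any specific app's data model:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

# Hypothetical data model: one day's collected metrics, combining
# biometrics, nutrition, workouts, and wellness in a single record.
@dataclass
class DailyMetrics:
    day: date
    resting_heart_rate: Optional[int] = None   # beats per minute
    sleep_hours: Optional[float] = None
    calories_consumed: Optional[int] = None
    workouts: list = field(default_factory=list)  # e.g. {"exercise": "squat", "sets": 3, "reps": 10}
    mood: Optional[str] = None                 # e.g. "energized", "stressed"

    def log_workout(self, exercise: str, sets: int, reps: int) -> None:
        """Append one exercise entry to the day's workout log."""
        self.workouts.append({"exercise": exercise, "sets": sets, "reps": reps})

today = DailyMetrics(day=date.today(), resting_heart_rate=62, sleep_hours=7.5)
today.log_workout("squat", sets=3, reps=10)
```

The point is simply that all of these signal types can live side by side in one record, which is what makes the holistic analysis described above possible.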
### 2. Visualization and Analysis: From Numbers to Clarity:
Raw data is just the starting point. The real magic happens when it is turned into clear, actionable insights. Look for apps that offer:
**Interactive Charts and Graphs:** Visualize trends in your weight, sleep quality, or workout performance at a glance.
**Personalized Insights and Feedback:** Receive tailored feedback based on your data, highlighting what is working and where improvements are needed.
**Progress Reports:** Generate detailed reports that document your progress over time, helping you stay motivated and accountable.
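A common way such apps surface trends rather than noise is a simple moving average. Here is a hedged sketch, not any particular app's algorithm, showing a 7-day moving average over daily weight readings:

```python
from statistics import mean

# Illustrative only: smooth noisy daily readings into a 7-day moving
# average so the underlying trend stands out on a chart.
def moving_average(readings, window=7):
    if len(readings) < window:
        return []
    return [round(mean(readings[i - window + 1 : i + 1]), 2)
            for i in range(window - 1, len(readings))]

weights = [82.0, 81.8, 82.1, 81.6, 81.5, 81.7, 81.3, 81.2, 81.4]
trend = moving_average(weights)  # three smoothed points, gently trending down
```

Even this tiny transformation turns nine jumpy numbers into a short, readable downward trend, which is exactly the kind of clarity the charts described above provide.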
### 3. Goal Setting and Progress Tracking: Plot Your Path:
Where are you headed? Good fitness app solutions help you create SMART goals (Specific, Measurable, Achievable, Relevant, and Time-bound) and track your achievements against them.
#### This includes:
**Customizable Goal Setting:** Set goals for weight loss, muscle building, or anything else that matters to you.
**Progress Tracking Dashboards:** Watch your progress in real time, celebrate milestones along the way, and stay focused on your goal.
**Goal Adjustments:** Revise your goals as your progress and circumstances change, so your fitness plan never goes stale.
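As a rough sketch of how a SMART goal might be represented in code (class and method names are hypothetical), note how each field maps to one letter of the acronym, and how an `adjust` method keeps the plan from going static:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical SMART goal record: a specific description, a measurable
# target, and a time-bound deadline, with progress computed on demand.
@dataclass
class SmartGoal:
    description: str    # Specific
    target: float       # Measurable
    current: float
    deadline: date      # Time-bound

    def progress_pct(self) -> float:
        """Percentage of the target achieved, capped at 100%."""
        return min(100.0, round(100.0 * self.current / self.target, 1))

    def adjust(self, new_target: float) -> None:
        """Revise the target as circumstances change (Goal Adjustments)."""
        self.target = new_target

goal = SmartGoal("Run 100 km this month", target=100.0, current=42.0,
                 deadline=date(2024, 2, 29))
```

Whether "Achievable" and "Relevant" hold is a judgment call for the user or their coach; the app's job is to make the measurable parts visible.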
### 4. Challenges and Rewards: Gamification as a Motivator:
Let's face it: working out can be boring. Fitness app developers understand the power of gamification, incorporating elements like:
**Daily and Weekly Challenges:** Tackle fresh challenges that keep things fun and encourage consistency.
**Virtual Competitions:** Connect with friends and compete in friendly challenges, adding a social dimension that boosts motivation.
**Reward Systems:** Earn badges for reaching milestones, enter competitions, or even win prizes that make progress feel gratifying.
Remember, gamification should be enjoyable, not all-consuming. Opt for an application that motivates while keeping expectations realistic.
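A badge-on-streak rule is one of the simplest gamification mechanics. The sketch below is illustrative only; the thresholds and badge names are made up:

```python
# Hypothetical reward rule: award every badge whose streak threshold
# the user's current daily check-in streak has reached or passed.
BADGE_THRESHOLDS = {7: "One-Week Streak", 30: "One-Month Streak", 100: "Centurion"}

def badges_for_streak(streak_days: int) -> list:
    """Return all badges earned at or below the current streak length."""
    return [name for days, name in sorted(BADGE_THRESHOLDS.items())
            if streak_days >= days]

badges_for_streak(30)  # a 30-day streak earns both the 7- and 30-day badges
```

Because earlier badges stay earned, each milestone builds on the last, which is what makes streak mechanics so effective at encouraging consistency.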
### 5. Social Integration: Share Your Journey (Optionally):
The power of the community should not be underestimated. However, the social aspects of fitness apps have a dual nature. Consider:
**Progress Sharing:** Share your achievements with friends and family to gain support and hold yourself accountable.
**Virtual Communities:** Connect with people pursuing the same goals; they can offer practical tips and ongoing support.
**Privacy Controls:** Choose apps with strong privacy measures so you decide what to share, and with whom.
Social features can be a great motivator, but always weigh them against your privacy preferences and comfort level.
### 6. Customization: Fitness on Your Terms:
Fitness is not one-size-fits-all. Look for fitness app solutions that offer:
**Personalized Workout Plans:** Choose from pre-designed workout plans or customize your own to suit your activity level and targets.
**Customizable Dashboards:** Select which data you see and how it is displayed, personalizing your experience.
**Multiple Platforms:** Log data and view workout programs seamlessly across devices.
Remember, the best fitness apps adapt to you, not the other way around. Select a product designed around your specific needs and preferences.
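The customizable-dashboard idea boils down to rendering only the widgets a user opted into, in their chosen order. A minimal sketch, with made-up widget names:

```python
# Illustrative dashboard config: the app shows only the metrics the
# user selected, in the order they selected them.
AVAILABLE_WIDGETS = {"steps", "heart_rate", "sleep", "calories", "mood"}

def build_dashboard(selected):
    """Validate the user's widget choices and keep their ordering."""
    unknown = [w for w in selected if w not in AVAILABLE_WIDGETS]
    if unknown:
        raise ValueError("Unknown widgets: %s" % unknown)
    return list(selected)

build_dashboard(["sleep", "steps"])  # this user only cares about sleep and steps
```

Validating up front keeps a mistyped or unsupported widget from silently producing an empty panel.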
By focusing on these key attributes, you will turn your fitness data from a chaotic mess into an effective tool for growth. Leverage technology and embark on a smarter, data-driven fitness journey!
## Choosing the Right Fitness App Development Solution: The Personalized Powerhouse
Before diving into the depths of the app store, pause and reflect: Who are you building this app for? People who want custom training plans? Busy professionals who need portable workouts on the go? Your target audience and their needs should guide your [fitness app development company](https://semidotinfotech.com/health-fitness-app-development) partner in designing the ideal solution.
Now, the big question: build the app yourself, or hire a fitness mobile application development company? In-house development gives you complete control but demands significant investment and technical know-how. Partnering with a fitness app development firm gives you access to their expertise, best practices, and pre-built solutions, saving you time and money. Ultimately, weigh your budget, your in-house skills, and how much control you want.
Suppose you decide to team up with a fitness app development firm. Choosing the right one is not easy. Look for these key factors:
**Experience and Expertise:** Have they successfully built health & fitness apps before? Do they specialize in your niche or target audience?
**Portfolio:** Review their past work and compare it for quality and for design features that match your vision.
**Communication Style:** Can you communicate your wants and needs clearly? Do they listen actively and respond promptly?
**Cost:** Request clear quotes and weigh them against your budget and the value delivered.
Ready to move forward? Reach out to Semidot Infotech, a top fitness app development company with deep expertise and an adept team of developers. They will work with you to build a fitness app solution that helps your users and inspires their fitness journey. Stop waiting and kick-start your app today.
## Conclusion: Unlock Your Fitness Potential, One App at a Time
Remember the spreadsheet graveyard we mentioned at the start of this post? Let's rewrite that story. Tracking progress with fitness app solutions is not only about numbers; it is about gaining insights, staying motivated, and ultimately propelling yourself toward your goals.
The right fitness app becomes your personal trainer, cheerleader, and data analyst all in one. Picture custom workout plans, real-time feedback on the go, and gamified challenges that keep you engaged. Used well, technology can transform your fitness journey, and selecting the right fitness app development company is essential.
And the future of fitness app solutions looks even brighter. Imagine seamless integration with wearable tech, AI-powered coaching, and individualized diet plans delivered directly through the app. The possibilities are endless!
Now you are ready to say goodbye to data chaos and embrace the future of fitness tracking. Don't wait any longer.
Reach out to Semidot Infotech, a renowned fitness app development company with years of experience in this field and a team of skilled developers.
Request your quote today and find the ideal fitness app solution that brings out the best in you. Remember, your dream physique is just an app away.