# THE PRINT() FUNCTION
*Published 2024-02-08 by dev123 · https://dev.to/dev123/the-print-function-2ii0 · Tags: python, beginners, programming, lesson1*

## By the end of this lesson you should be able to:
1. Describe the print function and its usage in Python.
2. Write a simple code using the print function.
3. Avoid common errors associated with the print function.
Let's hop right into the action!
## The print function
- The print function is one of the easiest ways to display data, such as text, to the user in Python.
To put it into perspective, calling the print function tells the computer, **'Display whatever's inside the parentheses on the user's console'**.
- Here's a simple example of the print function:
```python
print("Hello world!")
```
As illustrated above, the print function is made up of three components: **print, parentheses `()` and quotation marks `""`**.
- Note that all of the above components must be included or the code won't work! This is further illustrated in the common errors section below.
- The double quotation marks can also be replaced by single quotation marks, and the code will still work:
```python
print('Hey, I love your smile!')
```
## Common errors
Here are examples of common errors that beginners make, along with simple solutions.
**Error 1**
```python
Print("I love travelling")
```
- The word `Print` above has been **capitalized**, so the code will not work; Python names are case-sensitive, and `Print` is undefined (a `NameError`). The print function must be written in lowercase.
- The code will also not work if `print` is **misspelled**.
- The correct code is:
```python
print("I love travelling")
```
**Error 2**
```python
print(John was sent to the shopping center to buy meat.)
```
- The code above will not work because the **quotation marks (`""` or `''`)** have not been included. Instead, the code should be written as:
```python
print("John was sent to the shopping center to buy meat.")
```
**Error 3**
```python
print"Chicken wings are the best!"
```
- The code does not have **parentheses `()`** and will therefore not work; in Python 3 this raises a `SyntaxError`. The code should be rewritten as:
```python
print("Chicken wings are the best!")
```
**Error 4**
```python
print("I am Darkside be prepared to meet
your
doom!")
```
- If run, the code above fails to display the message on the user's console. But why? Haven't we included all the required components?
- The problem is that a string wrapped in single or double quotes cannot span multiple lines, so Python raises a `SyntaxError`. This can be easily solved by using **triple quotes `"""text"""`** instead of **double quotes `"text"`**:
```python
print("""I am Darkside be prepared to meet
your
doom!""")
```
- Please note that single and double quotes are only used for single-line text.
- Triple quotes `"""` are used for multi-line text.
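A side note beyond the original lesson: you can also produce multiple lines from a single-line string by using the `\n` escape sequence, which starts a new line wherever it appears.

```python
# \n starts a new line inside a single-line string, so this prints
# the same three lines as the triple-quoted version.
message = "I am Darkside be prepared to meet\nyour\ndoom!"
print(message)
```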
Please leave any questions that you may have in the comments section. That's all for today.
Thank you!

*dev123*
# AI vs ML: Decoding the Tech Jargon in App Development
*Published 2024-02-08 by sofiamurphy · https://dev.to/sofiamurphy/ai-vs-ml-decoding-the-tech-jargon-in-app-development-40l2 · Tags: ai, machinelearning*

## I. Hey There, Let's Dive In
### A. Let's Talk AI (Artificial Intelligence)
Artificial Intelligence, or AI for short, is like the tech whiz kid on the block. It's all about making computers smart, enabling them to do things that usually require a human touch. Think problem-solving, learning, understanding languages—AI is the brain behind the machine.
#### 1. The Magic of AI
AI isn't a one-trick pony; it's got a bag full of capabilities, from basic rules to fancy algorithms and neural networks.
#### 2. AI Through the Ages
Picture this: AI has been around since the 1950s, going through phases like symbolic AI and expert systems. Recently, it made a comeback, thanks to cool stuff like machine learning.
### B. Say Hello to Machine Learning (ML)
Now, ML, which stands for Machine Learning, is AI's sidekick. It's the one learning the ropes, getting better at tasks without someone explicitly telling it what to do.
#### 1. Learning the ML Lingo
ML is like the apprentice, soaking in concepts like supervised learning (with labels), unsupervised learning (finding patterns), and reinforcement learning (making decisions).
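To make the lingo concrete, here's a toy, hypothetical sketch of supervised learning in Python: we "train" on a handful of labeled points (the numbers are made up) and predict a label for a new point using a 1-nearest-neighbor rule.

```python
# Toy supervised learning: labeled training examples, then a prediction.
train = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"), ((5.0, 5.2), "dog")]

def predict(point):
    """Return the label of the closest training example (1-nearest-neighbor)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(train, key=lambda example: sq_dist(example[0], point))[1]

print(predict((4.8, 5.0)))  # dog
```

Real systems use libraries and far more data, but the shape is the same: labeled examples in, predictions out.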
#### 2. ML's Journey in Tech
ML has evolved, riding the wave of better computers, smarter algorithms, and a flood of data to learn from.
## II. The ABCs of AI in App Development
### A. AI's Cool Role in User Experience
Imagine AI as your app's personal stylist, making it tailor-fit for users.
#### 1. Tailored Just for You
[Artificial Intelligence in app development](https://www.excellentwebworld.com/artificial-intelligence-in-app-development/) gets personal with recommendation systems, giving users suggestions that feel like they were handpicked just for them.
#### 2. Chatting in Natural Language
Ever chatted with a bot? That's AI's natural language processing in action—understanding what you say and responding like a buddy.
### B. Automation, Anyone?
AI's not just about fancy words; it's also the hero behind the scenes automating app development tasks.
#### 1. Bugs, Begone!
AI tools sweep away bugs with automated testing, making sure your app runs smoothly.
#### 2. Speedy Delivery with AI
With AI, app development becomes a race car, thanks to automated tasks like code integration and deployment.
## III. ML Magic in App Development
### A. Meet the ML Wizards: Algorithms
ML algorithms are like the wizards of the tech world, bringing enchantment to app development.
#### 1. Sorting Stuff with Supervised Learning
ML can play Sherlock, sorting data into categories for features like image recognition or language translation.
#### 2. Finding Patterns in the Magic
Unsupervised learning lets ML discover hidden patterns, making apps smarter at understanding user behavior.
### B. Future Predictions, Anyone?
ML's got a crystal ball, predicting user behavior and trends for a smoother app experience.
#### 1. Guessing Game: Predictive Modeling
ML looks back in time to predict what users might do next, making your app more responsive.
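As a minimal, hypothetical illustration of the idea, here's a predictor that simply guesses a user's next action as their most frequent past action:

```python
from collections import Counter

def predict_next(history):
    """Guess the next action as the most frequent action seen so far."""
    if not history:
        return None
    return Counter(history).most_common(1)[0][0]

print(predict_next(["open", "search", "open", "buy", "open"]))  # open
```

Production predictive models weigh recency, context, and many more signals, but they share this shape: past behavior in, a prediction out.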
#### 2. Trendspotting with ML
ML algorithms analyze patterns, predicting trends and keeping your app ahead of the curve.
## IV. Unveiling the Differences
### A. AI vs. ML: Who's Who?
Let's clear the fog: AI is the big picture, and ML is the focused snapshot.
#### 1. AI: The Big Picture
AI's like a blockbuster movie, with everything from robots to understanding languages.
#### 2. ML: Focused on Patterns
ML zooms in, focusing on recognizing patterns and making predictions based on data.
### B. AI-ML Team-Up
It's like Batman and Robin; they work best together.
#### 1. AI's ML Sidekick
AI uses ML's cool algorithms for tasks like recognizing patterns and predicting what users want.
#### 2. ML Boosting AI's Brainpower
ML gives AI the juice it needs, providing data-driven insights and making apps smarter.
## V. Let's Wrap It Up
### A. Quick Recap
So, AI and ML—different but best friends. AI's the big shot, and ML's the learner, making apps awesome.
### B. Why Mix It Up?
Combining AI and ML is like having peanut butter and jelly: they just go together. Apps get superpowers, and users get the best experience. As tech keeps growing, AI and ML will keep shaping the way we make apps. It's the future, and it's exciting! 🚀

*sofiamurphy*
# A Guide to Updating Your Git Repository After Local Code Changes
*Published 2024-02-08 by nikhilxd · https://dev.to/nikhilxd/a-guide-to-updating-your-git-repository-after-local-code-changes-3bjj · Tags: tutorial, github, devops, productivity*

_Git, a powerful version control system, enables developers to collaborate seamlessly and keep track of changes in their projects. However, when you make changes to your code locally, it's crucial to update your Git repository to ensure that your team is on the same page and to avoid conflicts. In this blog post, we'll walk through the steps to update your Git repository after making changes to your code locally._
### Step 1: Check Your Working Directory Status
Before updating your Git repository, it's essential to check the status of your working directory. Open your terminal and navigate to your project directory. Use the following command to see which files have been modified, added, or deleted:
```bash
git status
```
This command will provide an overview of the changes you've made.
### Step 2: Stage Your Changes
Once you've reviewed the changes, you need to stage them for the next commit. Use the following command to stage all changes:
```bash
git add .
```
If you want to stage specific files, replace the dot with the file names.
### Step 3: Commit Your Changes
Now, commit your staged changes with a descriptive message using the following command:
```bash
git commit -m "Your commit message here"
```
This step helps you encapsulate your changes with a meaningful comment for better tracking.
### Step 4: Pull Latest Changes from the Remote Repository
Before pushing your changes, it's crucial to pull the latest changes from the remote repository to avoid conflicts. Use the following command:
```bash
git pull origin branch-name
```
Replace "branch-name" with the name of your working branch. This command fetches and merges the changes from the remote repository.
### Step 5: Resolve Conflicts (If Any)
If there are conflicting changes between your local branch and the remote branch, Git will prompt you to resolve these conflicts manually. Open the conflicting files, resolve the differences, and then add and commit the changes again.
### Step 6: Push Your Changes
Once you've resolved conflicts (if any), you can push your local changes to the remote repository using the following command:
```bash
git push origin branch-name
```
Replace "branch-name" with the name of your working branch.
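Putting Steps 1-3 together, here's a small self-contained demonstration you can run safely: it creates a throwaway repository, stages a change, and commits it. The pull and push steps need a real remote, so they appear only as comments.

```shell
#!/bin/sh
# Demonstrate status -> add -> commit in a disposable repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo User"

echo "hello" > file.txt
git status --short               # shows "?? file.txt" (untracked change)
git add .                        # stage everything
git commit -q -m "Add file.txt"  # commit with a descriptive message
git log --oneline                # the new commit appears here

# With a real remote configured, you would then run:
#   git pull origin branch-name
#   git push origin branch-name
```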
### Conclusion:
Updating your Git repository after making local code changes is a critical part of collaborative development. By following these steps – checking the working directory status, staging changes, committing, pulling the latest changes, resolving conflicts, and pushing – you ensure that your codebase remains synchronized and that your team members have access to the latest updates. This workflow helps maintain a smooth and efficient development process, fostering collaboration and minimizing potential issues.

*nikhilxd*
# The Specificity Of ::slotted()
*Published 2024-02-08 by tomherni · https://tomherni.dev/blog/the-specificity-of-slotted/ · Tags: css, webcomponents*

The `::slotted()` pseudo-element allows you to style elements that are slotted into your web component. But, there is something that may catch you off guard: styles applied with `::slotted()` lose to global styles.
Imagine a website with the following markup:
```html
<style>
p { color: blue } /* (0, 0, 1) */
</style>
<my-element>
<p>Hello world</p>
</my-element>
```
Where `<my-element>` is a web component that adds the following styles:
```css
:host ::slotted(p) { /* (0, 1, 2) */
color: red;
}
```
The website's global styles will win, and the text will be **blue**. Even though the specificity of `:host ::slotted(p)` (0, 1, 2) is higher than `p` (0, 0, 1).
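To see that this really is not a specificity problem, here's a small illustrative helper (not from the original post) that compares specificity triples `[ids, classes, type-selectors]` the way the cascade normally would. On specificity alone, `:host ::slotted(p)` at (0, 1, 2) beats `p` at (0, 0, 1); the slotted styles lose only because the two declarations come from different encapsulation contexts.

```javascript
// Illustrative helper: compare two specificity triples [ids, classes, types].
// Returns 1 if a is more specific, -1 if b is, 0 on a tie.
function compareSpecificity(a, b) {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] > b[i] ? 1 : -1;
  }
  return 0;
}

const slottedSel = [0, 1, 2]; // :host ::slotted(p)
const globalSel = [0, 0, 1];  // p
console.log(compareSpecificity(slottedSel, globalSel)); // 1
```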
The web component cannot set any CSS properties on a slotted element that are already set by global styles.
This becomes an even bigger issue when global styles include a "reset" stylesheet (like Normalize.css). Reset stylesheets touch the styling of many HTML elements, making it even less likely for slotted styles to be applied.
And in case you were wondering, increasing the specificity of `::slotted()`'s argument will also not help you win against global styles.
```css
/* Still loses to global styles */
::slotted(p#foo) {
color: red;
}
```
## Encapsulation contexts
When explaining how declarations are sorted by the cascade, the CSS spec says the following about [cascade contexts](https://www.w3.org/TR/css-cascade-5/#cascade-context):
> "When comparing two declarations that are sourced from different encapsulation contexts, then for normal rules the declaration from the outer context wins, and for important rules the declaration from the inner context wins."
This essentially means that, without `!important`, a web component's `::slotted()` rules are overridden by the outer context (global styles). Specificity pretty much goes out the window.
The next part is also worth mentioning:
> "This effectively means that normal declarations belonging to an encapsulation context can set defaults that are easily overridden by the outer context, while important declarations belonging to an encapsulation context can enforce requirements that cannot be overridden by the outer context."
Using `!important` seems to be the official way to enforce slotted styles.
## Making slotted rules important
As we now know, the only way to make `::slotted()` win is to make rules important. It's not pretty, but considering it's the only way, I suppose this is a case where using `!important` is justified.
```css
/* Wins from global styles */
::slotted(p) {
color: red !important;
}
```
However, consumers may not appreciate their slotted elements being styled with `!important`. Slotted elements are _their_ elements in _their_ DOM. And if they want to set a property that is already set with `!important`, then they now need to do the same to win the specificity battle.
## Why this behavior is tricky
It's likely that most developers would not anticipate this behavior. They would likely assume that styles set with `::slotted()` compete with the specificity of global styles.
Additionally, when a web component is developed and tested in an environment without (conflicting) global styles, then this issue is easy to miss before it makes its way to production.
The purpose of `::slotted()` is to set default styles that can be easily overridden. But consumers can have reset stylesheets, or import stylesheets over which they have no control (particularly in larger corporate environments). In those cases, slotted styles are overridden _too_ easily (i.e. unintentionally).
## How to proceed
Slotted styles without `!important` may not end up being applied as expected. If this is an issue for your web component, then it's time to make a decision:
1. Set important slotted styles with `!important` to enforce them. Document the reason behind this decision and what consumers would have to do to override those styles if necessary.
2. Steer clear of `::slotted()` altogether. Instead, document which styles consumers are recommended to set when slotting elements. An argument could be made that consumers should remain solely responsible for styling their elements.
Choose an approach that aligns with your philosophy. Make a conscious decision and be consistent.
---
It may be worth mentioning that the specificity of `::slotted()` still works as expected within the same encapsulation context. It helps determine which declarations are applied by the web component, even if they were to eventually be overridden by global styles.
```css
/* Wins from the selector below (but
still loses to global styles) */
:host ::slotted(p) {
color: red;
}
::slotted(p) {
color: green;
}
```

*tomherni*
# Transforming Chaos into Order: Incident Management Process, Best Practices, and Steps
*Published 2024-02-08 by squadcastcommunity · https://dev.to/squadcast/transforming-chaos-into-order-incident-management-process-best-practices-and-steps-1h9g · Tags: incident, management, process*

Did you realize that only 40% of companies with 100 employees or fewer have an Incident Response plan in place? Does that include you too? Even if it doesn't, this blog post is for you. Explore Incident Management processes, best practices, and steps so you can compare how your current IR process looks and whether you need to revamp it.
## Incident Management & the Impact of Incidents
Incident Management is a core component of Information Technology (IT) service management that focuses on efficiently handling and resolving disruptions to IT services. These disruptions, known as incidents, can include a wide range of issues, such as system failures, software glitches, hardware malfunctions, or any other event that hinders the otherwise normal operation of IT services.
Pretty direct, isn't it?
The average cost of a data breach in 2023 was $4.24 million, according to IBM Security, and 37% of servers had at least one unexpected outage in 2023, according to Veeam. Incidents can have a wide range of negative impacts on an organization, including operational, financial, and reputational impacts, employee impacts, and loss of customer trust. A 1% decrease in customer satisfaction can lead to a 5-10% decrease in revenue, according to Bain & Company. The fact is, downtimes are bound to happen, both planned and unplanned. So it's better to be ready with an Incident Response plan built on a solid Incident Management procedure.
All steps involved in the procedure of managing incidents that arise within the tech environment and infrastructure create the Incident Management process.
### Incident Management Process
Every organization's Incident Management process looks a little different. Various factors influence these differences, such as industry and company size, risk tolerance, resources and budget, compliance requirements, and organizational structure (ITIL-based Incident Management or an informal approach relying on key individuals).
While the foundation of the Incident Management procedure remains the same as defined by ITIL (Information Technology Infrastructure Library), which in a broad sense covers identification, resolution, and documentation, differences are bound to arise in:
- The number of defined severity levels and their associated response times can vary greatly.
- How and when incidents are escalated to different levels of management can differ based on complexity and impact.
- The detail and format of incident logs and reports can be customized to specific needs.
- The preferred methods for informing stakeholders about incidents (e.g., email, internal platforms) can vary.
- Some organizations might use sophisticated Incident Management software, while others still rely on spreadsheets or email threads.
### Customized Incident Management Approach
A customized approach caters to individual requirements, resulting in quicker resolution times and minimized disruption. This empowers your Incident Response Team to manage incidents efficiently and confidently.
Tailoring Incident Management Processes according to incident severity and complexity ensures optimal resource utilization. Consequently, it seamlessly adjusts to evolving needs and situations.
There is no universal solution. The most effective Incident Management process is the one that aligns with an organization's distinct context and goals.
## Incident Management: Unraveling the Key Stages
Every organization encounters disruptions, ranging from minor hitches to potential crises. How these incidents are managed significantly impacts operations, reputation, and financial standing.
Here's a detailed breakdown of the essential stages:
### 1. Identification
The initial step involves detecting the incident. This process may entail monitoring systems, analyzing user reports, tracking media mentions, and responding to automated alerts. Think of it as triggering an alarm upon detecting an anomaly.
### 2. Triage and Prioritization
Recognizing that not all incidents are equal, this stage entails assessing severity and impact, categorizing incidents as critical, high, medium, or low. Similar to sorting incoming tickets based on potential damage levels, prioritizing incidents aids in resource allocation and response efficiency.
a. Low-Priority Incidents:
- These incidents cause minimal disruptions, if any, to business functions.
- Workarounds can be easily devised without affecting services to users and customers.
b. Medium-Priority Incidents:
- This category may lead to moderate interruptions in work for some employees.
- While customers may experience slight inconvenience, the financial and security implications are generally manageable.
c. High-Priority Incidents:
- These incidents significantly disrupt business operations, affecting a substantial number of users.
- System-wide outages often fall into this category, carrying substantial financial impacts and potentially affecting customer satisfaction.
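As a hypothetical sketch (the thresholds are invented for illustration), a simple scoring rule that maps impact and urgency onto these three levels could look like this:

```python
# Toy triage rule: combine impact and urgency (1 = low .. 3 = high)
# into a priority bucket. The thresholds here are illustrative only.
def prioritize(impact, urgency):
    score = impact * urgency
    if score >= 6:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

print(prioritize(3, 3))  # high: system-wide outage, many users affected
print(prioritize(1, 2))  # low: minor disruption with an easy workaround
```

Real prioritization matrices add more dimensions (customer impact, security, SLA clocks), but the core idea is the same: score consistently so that responders don't have to debate severity from scratch during an incident.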
### 3. Containment and Response
This stage is dedicated to taking immediate action to prevent the incident from spreading further. Actions may include isolating affected systems, disabling features, or temporarily taking services offline.
### 4. Resolution and Recovery
Addressing the root cause is the focus here. This involves diagnosing the problem, implementing fixes, and restoring affected systems and data. For example, fixing issues gradually while ensuring no customer purchases are lost during peak traffic hours in an eCommerce store.
### 5. Closure and Review
The final stage involves capturing lessons learned, conducting postmortems, and identifying strategies to prevent future incidents. It includes analyzing incident reports and updating response playbooks with newfound knowledge.
Adopting best practices at each stage of the Incident Management Workflow ensures that every disruption is handled with predefined steps, optimal resource allocation, and a commitment to continuous improvement. Ultimately, this approach minimizes chaos and builds a resilient response system.
## Best Practices for Incident Management at Each Stage
### During Identification:
- Deploy comprehensive monitoring: Utilize a range of monitoring tools for system performance, security events, and user feedback.
- Automate alerts and escalation based on predefined criteria: Ensure timely notifications for critical incidents requiring immediate attention.
- Establish clear incident definitions and escalation thresholds: Ensure universal understanding of what constitutes an incident and when to escalate.
- Encourage incident reporting: Prompt individuals to report incidents to the designated Incident Management team or help desk. Squadcast’s Webforms enable detailed incident reporting for both customers and employees.
### During Triage and Prioritization:
- Develop a standardized prioritization matrix: Define severity levels based on impact, urgency, and resource requirements.
- Utilize decision trees or scoring systems: Facilitate consistent and rapid prioritization decisions.
- Engage relevant stakeholders in complex prioritization cases: Collaborate with business owners and impacted teams for informed decisions.
### During Containment and Response:
- Prepare predefined Incident Response playbooks: Outline initial response steps for various incident types to save time and have solutions ready.
- Implement containment strategies like isolation, throttling, or feature disabling: Minimize further damage and prevent broader impact.
- Ensure access to tools and resources: Guarantee availability of diagnostic and monitoring tools, emergency contact lists, and disaster recovery procedures.
- Establish a centralized Incident Management system or ticketing system: Utilize tools like Squadcast for seamless incident logging and tracking.
### During Resolution and Recovery:
- Focus on root cause analysis: Utilize log analysis, forensic tools, and expert assistance to identify the underlying cause.
- Implement robust rollback strategies: Have tested procedures for reverting changes and restoring affected systems quickly.
- Prioritize critical data recovery when necessary: Employ reliable backup and recovery solutions to minimize data loss.
- Define roles and responsibilities for Incident Response team members: Include incident coordinators and technical experts for effective response.
- Establish effective communication channels and escalation paths: Facilitate seamless coordination and collaboration during Incident Response, potentially utilizing an incident war room.
### During Closure and Review:
- Conduct thorough post-incident reviews: Analyze response actions, identify areas for improvement, and update playbooks accordingly.
- Automate incident reporting and documentation: Simplify data collection and facilitate knowledge sharing.
- Share lessons learned across the organization: Proactively disseminate insights to prevent future incidents, leveraging past experiences.
- Perform post-incident reviews (postmortems) to evaluate Incident Response effectiveness and identify enhancement opportunities.
- Assess the effectiveness of Incident Management processes: Identify any gaps or bottlenecks and implement corrective actions as needed.
## Bonus Tips For Better Incident Response
Some more actionable tips for better Incident Response are:

- Emphasize communication: Keep stakeholders informed throughout the incident with clear, concise, and frequent updates.
- Prioritize training and drills: Regularly train your Incident Response team and practice playbooks to ensure coordinated and effective action.
- Continuously improve: Regularly review and update your Incident Management processes based on experience and best practices.
- Invest in automation and reliability tools: Leverage technology to automate repetitive tasks and improve response efficiency, like Squadcast.
## Why does Squadcast work as the best Incident Management platform for your business’s reliability needs?
Atlassian’s State of Incident Management Report highlights a few major pain points in Incident Management, like:
- Difficult to get stakeholders involved: 36%
- Lack of full visibility across IT infrastructure: 23%
- Lack of context during an incident: 13%
- Lack of automated responses: 9%
- Lack of integration with a chat tool (Slack, Microsoft Teams): 8%
A dedicated Incident Management solution like Squadcast covers all points in the Incident Management workflow. It facilitates tasks that integrate On-Call Management, Incident Response, SRE workflows, and alerting, and it enhances team collaboration through ChatOps tools, workflow automation, SLO tracking, status pages, incident analytics, and incident postmortems. It especially promotes SRE culture for [Enterprise Incident Management](https://www.squadcast.com/incident-response-tools/enterprise-incident-management) and is a preferred [alternative to PagerDuty](https://www.squadcast.com/blog/comparing-the-top-9-pagerduty-alternatives-in-2023).

*squadcastcommunity*
# The Top Generative AI Trends for 2024
*Published 2024-02-08 by xcubelabs · https://dev.to/xcubelabs/the-top-generative-ai-trends-for-2024-2eg · Tags: ai, generativeai, generativeaiusecases, aritficalintelligence*

In the landscape of digital transformation, artificial intelligence is evolving at an exponential pace, and within it, Generative AI has emerged as a powerful force. As we move into 2024, it’s essential to stay ahead of the curve and understand the latest trends shaping the landscape of Generative AI. In this comprehensive guide, we will explore the top Generative AI trends for 2024 and their potential impact across industries.
**1. Bigger And More Powerful Models**
Generative AI applications are fueled by massive datasets and complex algorithms. In 2024, we can expect to witness the emergence of even larger and more powerful models. Companies like OpenAI and Google have already paved the way with their groundbreaking models, such as ChatGPT and PaLM2. The upcoming GPT-5 is rumored to push the boundaries of size and capability, enabling more advanced and nuanced content generation across text, images, audio, and video.
These larger models will unlock new possibilities in content creation, enabling businesses to automate tasks such as marketing copywriting, talent recruitment, and personalized customer communications. With improved performance and enhanced training capabilities, the potential for Generative AI to revolutionize industries is limitless.
**2. Multimodality: Bridging The Gap Between Modalities**
Traditionally, AI models have focused on a single modality, such as language, images, or sounds. However, the future of Generative AI lies in multimodality. In 2024, we can expect to see the rise of AI models that can understand and generate content across multiple modalities simultaneously.

These multimodal AI models will enable more natural and immersive experiences. Imagine interacting with an AI assistant that can understand and respond to text, images, and voice commands seamlessly. This integration of modalities will open up new possibilities in fields like virtual reality, augmented reality, and robotics, creating more personalized and engaging user experiences.
**3. Personalization: Tailoring Experiences For Maximum Impact**
Personalization has become a key driver of customer engagement and satisfaction. In 2024, Generative AI will play a pivotal role in delivering highly personalized experiences across industries. By analyzing vast amounts of data, AI algorithms can identify patterns and preferences, enabling businesses to tailor their products, services, and marketing campaigns to individual customers.
From personalized product recommendations to customized content creation, Generative AI will empower businesses to connect with their target audience on a deeper level. By leveraging the power of personalization, companies can drive customer loyalty, increase conversions, and stay ahead of the competition.
**4. Chatbots: Enhancing Customer Service And Engagement**
Chatbots have become a familiar presence in customer service, and their capabilities will continue to grow in 2024. Powered by Generative AI, chatbots will become more sophisticated in understanding and responding to customer queries, providing personalized recommendations, and resolving issues.
In addition to customer service, chatbots will find applications in lead generation, sales support, and internal communication. By automating routine tasks and providing instant responses, chatbots can streamline operations, improve efficiency, and enhance the overall customer experience.
**5. Automation: Streamlining Business Processes**
Automation is a driving force behind digital transformation, and Generative AI will further accelerate this trend in 2024. By automating repetitive and time-consuming tasks, businesses can free up valuable resources and focus on more strategic initiatives.
Generative AI-powered automation tools will enable professionals to streamline processes such as file transfers, report generation, and code development. With AI taking care of mundane tasks, employees can dedicate their time and expertise to higher-value activities, driving innovation and growth.
**6. AI In Healthcare: Transforming Patient Care**
The healthcare industry is on the cusp of a technological revolution, and Generative AI will play a crucial role in shaping its future. In 2024, AI-powered solutions will enhance various aspects of healthcare, from drug discovery and personalized treatment plans to patient monitoring and telemedicine.
Generative AI will enable healthcare professionals to analyze vast amounts of patient data, identify patterns, and generate insights. This will lead to more accurate diagnoses, personalized treatment options, and improved patient outcomes. Additionally, AI will streamline administrative tasks, enhance medical research, and improve the overall efficiency of healthcare delivery.
**7. E-Commerce Optimization: Customizing The Shopping Experience**
In the ever-evolving world of e-commerce, personalization is key to capturing the attention and loyalty of customers. Generative AI will enable businesses to create highly customized shopping experiences, from personalized product recommendations to tailored advertising campaigns.
By leveraging Generative AI, e-commerce platforms can analyze customer data, predict preferences, and deliver targeted content that resonates with individual shoppers. This level of personalization will not only drive sales but also foster long-term customer relationships and brand loyalty.

**Generative AI From [x]cube LABS**
[x]cube has been AI-native from the beginning, and we’ve been working through various versions of AI tech for over a decade. For example, we worked with the developer interfaces of BERT and GPT even before the public release of ChatGPT. [x]cube LABS offers key Gen AI services such as building custom generative AI tools, implementing neural search, fine-tuned domain LLMs, generative AI for creative design, data augmentation, natural language processing services, tutor frameworks to automate organizational learning and development initiatives, and more. Get in touch with us to know more!
**Conclusion: Embrace The Power Of Generative AI In 2024**
As we step into 2024, the power of Generative AI is set to reshape industries and revolutionize the way we live and work. From larger and more powerful models to personalized experiences and streamlined automation, the potential of Generative AI is limitless.
By embracing these trends and leveraging the capabilities of Generative AI, businesses can unlock new levels of efficiency, personalization, and customer engagement. The future is here, and Generative AI is at the forefront of innovation. Are you ready to harness its transformative power?
_Additional Information: This comprehensive guide provides insights into the top Generative AI trends for 2024 and beyond. It offers a holistic view of the transformative capabilities of Generative AI across various industries, including healthcare, e-commerce, customer service, and more. With a focus on personalization, automation, and multimodality, this guide equips businesses with the knowledge and understanding to navigate the evolving landscape of Generative AI and stay ahead of the competition._

*Author: xcubelabs*
---

# The 10 minute mail Solution

*Published 2024-02-08 by liuxiao on [dev.to](https://dev.to/liuxiao/the-10-minute-mail-solution-2328) · tags: 10minutemail, tempmail, 10minmail, 10minemail*
In today's digital age, the concept of disposable temporary e-mail, often referred to as 10 minute mail, has gained popularity for its convenience and security benefits. This article delves into what disposable e-mail is, why one might need a fake e-mail address, how to choose a disposable e-mail service, and the best practices for using these temporary e-mail addresses.
Disposable temporary e-mail services provide short-term e-mail addresses that expire after a set period, typically ranging from 10 minutes to an hour. These services, like 10 minute mail, generate a quick, use-and-throw e-mail address that helps protect users' primary e-mail accounts from spam, phishing, and other unsolicited e-mails.
Why Would You Need a Fake E-mail Address?
There are several scenarios where a [disposable e-mail](https://10-minutemail.com) can be incredibly useful. For instance, when signing up for online forums, newsletters, or free trials that require an e-mail verification but you wish to avoid cluttering your primary inbox with potential spam. It's also beneficial for maintaining privacy and anonymity online, especially in situations where sharing your real e-mail address could lead to unwanted attention or data breaches.
How to Choose a Disposable E-mail?
When selecting a disposable e-mail service, consider factors like the duration the e-mail address remains active, ease of use, and whether it allows you to receive and read e-mails or merely send them. Services like 10 minute mail are favored for their simplicity and reliability, offering a straightforward approach to generating a temporary e-mail address with just a click, without the need for registration or personal information.
How to Use a [Disposable E-mail](https://10-minutemail.com) Address?
Using a disposable e-mail address is simple. Visit a reputable disposable e-mail service like [10 minute mail](https://10-minutemail.com), and you'll be provided with a temporary e-mail address immediately. Use this e-mail for any sign-ups or registrations that you deem potentially unsafe or spammy. Remember, the e-mail address will expire after the set time, so it's only suitable for short-term engagements that do not require long-term access to the e-mail correspondence.
In conclusion, disposable temporary e-mail addresses serve as a valuable tool for online privacy and security. They offer a practical solution for avoiding spam and maintaining anonymity without compromising your primary e-mail account. Whether you're signing up for a one-time service, testing an application, or simply looking to protect your digital footprint, services like [10 minute mail](https://10-minutemail.com) provide an efficient and user-friendly option. Embrace the convenience and security of disposable e-mail and enjoy a clutter-free, secure online experience.
Here are 8 temp mail services you can choose from:

1. 10 minute mail: [https://10-minutemail.com/](https://10-minutemail.com/)
2. temp mail: [https://www.mailtemp.net/](https://www.mailtemp.net/)
3. 10 min mail: [https://email10min.net/](https://email10min.net/)
4. ten minutes mail: [https://tenminutesmail.net/](https://tenminutesmail.net/)
5. 10 min mail: [https://10-minutemail.net](https://10-minutemail.net)
6. 10 min mail: [https://10-minutemail.org](https://10-minutemail.org)

*Author: liuxiao*
---

# ArgoCD Deployment on RKE2 with Cilium Gateway API

*Published 2024-02-19 by egrosdou on [dev.to](https://dev.to/egrosdou/argocd-deployment-on-rke2-with-cilium-gateway-api-412n) · tags: kubernetes, opensource, tutorial, argocd*

## Introduction
It has already been a couple of years since the Kubernetes [Ingress](https://kubernetes.io/docs/concepts/services-networking/ingress/) was defined as a “frozen” feature while further development will be added to the [Gateway API](https://gateway-api.sigs.k8s.io/).
After initial exposure to the Cilium Gateway API [docs](https://docs.cilium.io/en/v1.14/network/servicemesh/gateway-api/gateway-api/) and the interactive [lab](https://isovalent.com/labs/gateway-api/) session, it sounded promising to move the ArgoCD deployment from the Kubernetes Ingress to the Cilium Gateway API. The purpose of the blog post is to illustrate how easy it is to move the ArgoCD installation to the Cilium Gateway API. For this demonstration, the Gateway and the HTTPRoute have been created in the argocd namespace.
## Diagram

## Lab Setup
| Cluster Name | Type | Version |
|--------------|------|---------|
| rke2-test01 | Test Management Cluster | RKE2 v1.26.12+rke2r1 |

| Deployment | Version |
|------------|---------|
| ArgoCD | v2.9.3 |
| Cilium | v1.14.5 |
| GatewayAPI | v0.7.0 |
## Step 1: Deploy RKE2 Cluster with Cilium CNI
Before diving in, it is a good idea to checkout the RKE2 official [documentation](https://docs.rke2.io/install/network_options) on Kubernetes Networking and the Cilium [documentation](https://docs.cilium.io/en/v1.14/installation/k8s-install-rke/). Also, take a peek at the prerequisites for deploying the [Gateway API deployment](https://docs.cilium.io/en/v1.14/network/servicemesh/gateway-api/gateway-api/).
### RKE2 Pre-work
```yaml
$ cat /etc/rancher/rke2/config.yaml
write-kubeconfig-mode: 0644
tls-san:
- {Master Node hostname}
token: {Your Token}
cni: cilium
disable-kube-proxy: true
etcd-expose-metrics: false
```
```yaml
$ cat /var/lib/rancher/rke2/server/manifests/rke2-cilium-config.yaml
---
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: rke2-cilium
  namespace: kube-system
spec:
  valuesContent: |-
    image:
      tag: v1.14.5
    kubeProxyReplacement: strict
    k8sServiceHost: 127.0.0.1
    k8sServicePort: 6443
    operator:
      replicas: 1
    gatewayAPI:
      enabled: true
```
According to the Cilium documentation, to enable the Gateway API we need at least the **1.14.5** Cilium Helm chart, with `kubeProxyReplacement` enabled (`strict` in the values above; newer chart versions use `true`) and `gatewayAPI.enabled` set to `true` in the Helm chart values.
Once the remaining steps for the RKE2 installation are complete, we will have a two node RKE2 cluster with Cilium as a CNI.
Since Kubernetes v1.26.x does not ship with the Gateway API CRDs included, we will need to deploy them manually and let the Cilium containers restart until everything is in a “Running” state.
```bash
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/standard/gateway.networking.k8s.io_gatewayclasses.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/standard/gateway.networking.k8s.io_gateways.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/standard/gateway.networking.k8s.io_httproutes.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/standard/gateway.networking.k8s.io_referencegrants.yaml
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api/v0.7.0/config/crd/experimental/gateway.networking.k8s.io_tlsroutes.yaml
```
**Note:** According to the RKE2 documentation, RKE2 v1.26.12 does not officially support Cilium v1.14.5. The latest supported version is v1.14.4. However, during the demo setup, we did not encounter any issues.
### Verification
```bash
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
rke2-master01 Ready control-plane,etcd,master 111m v1.26.12+rke2r1 <none> SUSE Linux Enterprise Server 15 SP5 5.14.21-150500.53-default containerd://1.7.11-k3s2
rke2-worker01 Ready <none> 98m v1.26.12+rke2r1 <none> SUSE Linux Enterprise Server 15 SP5 5.14.21-150500.53-default containerd://1.7.11-k3s2
$ kubectl get pods -n kube-system | grep -i cilium
cilium-k9vhf 1/1 Running 0 111m
cilium-lc7jn 1/1 Running 0 99m
cilium-operator-548958b5bf-nc95q 1/1 Running 5 (108m ago) 111m
helm-install-rke2-cilium-tp5l6 0/1 Completed 0 112m
$ kubectl -n kube-system get daemonset cilium -o jsonpath="{.spec.template.spec.containers[0].image}"
rancher/mirrored-cilium-cilium:v1.14.5
```
## Step 2: Install ArgoCD
We will follow the official “Getting Started” guide found [here](https://argo-cd.readthedocs.io/en/stable/getting_started/), and use the manifest installation option.
```bash
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
```
The code above will create the argocd Kubernetes namespace and deploy the latest **stable** manifest. If you would like to install a specific manifest, have a look [here](https://github.com/argoproj/argo-cd/releases).
### Verification
```bash
$ kubectl get pods,svc -n argocd
NAME READY STATUS RESTARTS AGE
pod/argocd-application-controller-0 1/1 Running 0 82m
pod/argocd-applicationset-controller-6b67b96c9f-7szsr 1/1 Running 0 82m
pod/argocd-dex-server-c9d4d46b5-mdf67 1/1 Running 0 82m
pod/argocd-notifications-controller-6975bff68d-ltbkc 1/1 Running 0 82m
pod/argocd-redis-7d8d46cc7f-2br7f 1/1 Running 0 82m
pod/argocd-repo-server-59f5479b7-dfg9x 1/1 Running 0 82m
pod/argocd-server-547bf65466-68554 1/1 Running 0 58m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/argocd-applicationset-controller ClusterIP 10.43.98.162 <none> 7000/TCP,8080/TCP 82m
service/argocd-dex-server ClusterIP 10.43.71.44 <none> 5556/TCP,5557/TCP,5558/TCP 82m
service/argocd-metrics ClusterIP 10.43.162.177 <none> 8082/TCP 82m
service/argocd-notifications-controller-metrics ClusterIP 10.43.55.157 <none> 9001/TCP 82m
service/argocd-redis ClusterIP 10.43.62.79 <none> 6379/TCP 82m
service/argocd-repo-server ClusterIP 10.43.224.205 <none> 8081/TCP,8084/TCP 82m
service/argocd-server ClusterIP 10.43.166.25 <none> 80/TCP,443/TCP 82m
service/argocd-server-metrics ClusterIP 10.43.165.222 <none> 8083/TCP 82m
```
## Step 3: Pre-Work
Before we move on with the Gateway API implementation, we need to create additional Kubernetes resources.
### Argocd TLS Secret
```bash
$ kubectl create secret tls argocd-server-tls -n argocd --key=argocd-key.pem --cert=argocd.example.com.pem
```
The above assumes that we have already created a private/public key pair via an available utility. Also, keep in mind that the TLS secret name should be `argocd-server-tls` as it will be used at a later point.
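The post does not show the key-pair generation itself (the certificate visible in the curl output later was issued with mkcert). As one possible stand-in, here is a lab-only self-signed pair generated with openssl; the file names are chosen to match the `kubectl create secret tls` command above:

```shell
# Lab-only self-signed certificate for argocd.example.com.
# File names match the "kubectl create secret tls" command above.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout argocd-key.pem \
  -out argocd.example.com.pem \
  -subj "/CN=argocd.example.com"
```

For anything beyond a lab, a certificate from a real CA (or mkcert's locally trusted CA) is preferable to a plain self-signed pair.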
### Cilium IP Pool
In our lab environment, we do not have a tool to hand out `LoadBalancer` IP addresses. Therefore, we will use the Cilium [LoadBalancer IP Address Management](https://docs.cilium.io/en/v1.14/network/lb-ipam/) (LB IPAM).
```yaml
$ cat ipam-pool.yaml
---
apiVersion: "cilium.io/v2alpha1"
kind: CiliumLoadBalancerIPPool
metadata:
  name: "rke2-pool"
spec:
  cidrs:
    - cidr: "10.10.10.0/24"
```
### Verification
```bash
$ kubectl apply -f "ipam-pool.yaml"
$ kubectl get ippool
NAME DISABLED CONFLICTING IPS AVAILABLE AGE
rke2-pool false False 253 79m
```
### Cilium GatewayClass
If the `GatewayClass` resource is not present in the cluster, we have to create one for Cilium. It will be referenced in a later step, when deploying the `Gateway`. A `GatewayClass` is a template that lets infrastructure providers offer different types of Gateways.
```yaml
$ cat gatewayclass.yaml
---
apiVersion: gateway.networking.k8s.io/v1beta1
kind: GatewayClass
metadata:
  name: cilium
spec:
  controllerName: io.cilium/gateway-controller
```
### Verification
```bash
$ kubectl apply -f "gatewayclass.yaml"
$ kubectl get gatewayclass
NAME CONTROLLER ACCEPTED AGE
cilium io.cilium/gateway-controller True 104m
```
## Step 4: Create a Gateway and an HTTPRoute Resources
### Gateway
The `Gateway` is an instance of the `GatewayClass` created above.
```yaml
$ cat argocd_gateway.yaml
 1 ---
 2 apiVersion: gateway.networking.k8s.io/v1beta1
 3 kind: Gateway
 4 metadata:
 5   name: argocd
 6   namespace: argocd
 7 spec:
 8   gatewayClassName: cilium
 9   listeners:
10     - hostname: argocd.example.com
11       name: argocd-example-com-http
12       port: 80
13       protocol: HTTP
14     - hostname: argocd.example.com
15       name: argocd-example-com-https
16       port: 443
17       protocol: HTTPS
18       tls:
19         certificateRefs:
20           - kind: Secret
21             name: argocd-server-tls
```
**Line 3:** We define the kind Resource to `Gateway`
**Line 6:** We set the namespace to `argocd`
**Line 8:** We use the name of the `GatewayClass` created in the previous step
**Line 9:** We define the listeners for the ArgoCD server
**Line 21:** We define the TLS secret name created in the previous step
**Note:** In the definition above we use `example.com` as the domain; however, the value should be replaced with a valid domain name.
### HTTP Route
The `HTTPRoute` is used to route incoming HTTP requests to backend services, for example based on a `PathPrefix` match.
```yaml
$ cat argocd_http_route.yaml
 1 ---
 2 apiVersion: gateway.networking.k8s.io/v1beta1
 3 kind: HTTPRoute
 4 metadata:
 5   creationTimestamp: null
 6   name: argocd
 7   namespace: argocd
 8 spec:
 9   hostnames:
10     - argocd.example.com
11   parentRefs:
12     - name: argocd
13   rules:
14     - backendRefs:
15         - name: argocd-server
16           port: 80
17       matches:
18         - path:
19             type: PathPrefix
20             value: /
21 status:
22   parents: []
```
**Line 10:** We set the hostname we want the ArgoCD Server to get exposed to
**Line 15:** We define the name of the ArgoCD server service
### Apply the Kubernetes Resources
```bash
$ kubectl apply -f argocd_gateway.yaml,argocd_http_route.yaml
$ kubectl get gateway,httproute -n argocd
NAME CLASS ADDRESS PROGRAMMED AGE
gateway.gateway.networking.k8s.io/argocd cilium 10.10.10.173 True 9s
NAME HOSTNAMES AGE
httproute.gateway.networking.k8s.io/argocd ["argocd.example.com"] 9s
```
## Step 5: Test Time
We want to see if everything works as expected and whether we are able to access the ArgoCD server with the Cilium Gateway API. Let’s perform a CURL request.
```bash
$ curl -kv https://argocd.example.com
* Trying 10.10.10.173:443...
* Connected to argocd.example.com (10.10.10.173) port 443 (#0)
* ALPN: offers h2,http/1.1
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_CHACHA20_POLY1305_SHA256
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
* subject: O=mkcert development certificate; OU=root@server
* start date: Feb 2 07:11:49 2024 GMT
* expire date: May 2 07:11:49 2026 GMT
* issuer: O=mkcert development CA; OU=root@server; CN=mkcert root@server
* SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway.
* using HTTP/1.x
> GET / HTTP/1.1
> Host: argocd.example.com
> User-Agent: curl/8.0.1
> Accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
< HTTP/1.1 307 Temporary Redirect
< content-type: text/html; charset=utf-8
< location: https://argocd.example.com/
< date: Fri, 02 Feb 2024 07:58:21 GMT
< content-length: 63
< x-envoy-upstream-service-time: 0
< server: envoy
<
<a href="https://argocd.example.com/">Temporary Redirect</a>.
* Connection #0 to host argocd.example.com left intact
```
From the above, it is visible that we are running into a well-known issue with 307 redirects: TLS is already terminated at the Gateway, so the `argocd-server` receives plain HTTP and keeps redirecting to HTTPS. To resolve this, we need to disable TLS on the API server. This involves modifying the `argocd-cmd-params-cm` ConfigMap in the argocd namespace and setting `server.insecure: "true"`. More information can be found [here](https://argo-cd.readthedocs.io/en/stable/operator-manual/server-commands/additional-configuration-method/).
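For reference, the relevant part of that ConfigMap would look as follows after the change (only the key discussed here is shown; any existing `data` entries remain in place):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cmd-params-cm
  namespace: argocd
data:
  server.insecure: "true"
```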
Once the changes are performed we need to restart the `argocd-server` deployment for the changes to take effect.
```bash
$ kubectl rollout restart deploy argocd-server -n argocd
$ kubectl rollout status deploy argocd-server -n argocd
Waiting for deployment "argocd-server" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "argocd-server" rollout to finish: 1 old replicas are pending termination...
deployment "argocd-server" successfully rolled out
```
Let us try once again.
```bash
$ curl -ki https://argocd.example.com
HTTP/1.1 200 OK
accept-ranges: bytes
content-length: 788
content-security-policy: frame-ancestors 'self';
content-type: text/html; charset=utf-8
vary: Accept-Encoding
x-frame-options: sameorigin
x-xss-protection: 1
date: Fri, 02 Feb 2024 13:03:45 GMT
x-envoy-upstream-service-time: 0
server: envoy
<!doctype html><html lang="en"><head><meta charset="UTF-8"><title>Argo CD</title><base href="/"><meta name="viewport" content="width=device-width,initial-scale=1"><link rel="icon" type="image/png" href="assets/favicon/favicon-32x32.png" sizes="32x32"/><link rel="icon" type="image/png" href="assets/favicon/favicon-16x16.png" sizes="16x16"/><link href="assets/fonts.css" rel="stylesheet"><script defer="defer" src="main.f14bff1ed334a13aa8c2.js"></script></head><body><noscript><p>Your browser does not support JavaScript. Please enable JavaScript to view the site. Alternatively, Argo CD can be used with the <a href="https://argoproj.github.io/argo-cd/cli_installation/">Argo CD CLI</a>.</p></noscript><div id="app"></div></body><script defer="defer" src="extensions.js"></script></html>
```
Great, we received a 200 OK status message!
**Note:** The `-v` short option in the CURL request stands for `--verbose`, the `-k` short option stands for `--insecure`, and the `-i` short option is for `--include`.
The next steps will be to test the above deployment with the **latest Cilium version** and **Gateway API v1.0.0**.
## Resources
- Cilium Gateway API Lab: https://isovalent.com/labs/gateway-api/
- Cilium Advanced Gateway API Lab: https://isovalent.com/labs/advanced-gateway-api-use-cases/
- Migrate from Ingress to Gateway: https://docs.cilium.io/en/v1.14/network/servicemesh/ingress-to-gateway/ingress-to-gateway/
- RKE2 Installation Methods: https://docs.rke2.io/install/methods
Thanks for reading!
*Author: egrosdou*
---

# 100 Days of CSS - Day 1

*Published 2024-02-19 by patricknjiru on [dev.to](https://dev.to/patricknjiru/100-days-of-css-day-1-4661) · tags: codepen*

Check out this Pen I made!
{% codepen https://codepen.io/Patrick-Njiru/pen/MWxxRxp %}

*Author: patricknjiru*
---

# HostPress: More Than Just Hosting – A Review

*Published 2024-02-19 by j0e on [dev.to](https://dev.to/j0e/hostpress-mehr-als-nur-hosting-eine-bewertung-2cij) · tags: hosting, wordpress*

When it comes to running a successful website, the choice of hosting plays a decisive role in the site's performance, security, and overall success. Premium WordPress hosting in particular offers a range of advantages tailored to the specific needs of WordPress sites. In this article, we look at why HostPress is a leading option for premium WordPress hosting. HostPress offers outstanding performance, advanced security features, expert support, and the scalability to keep up with the demands of growing websites.
## What Does the Fun Cost Me?

| Feature | HostPress® START | HostPress® PLUS | HostPress® MAX |
|---------------------------|------------------|-----------------|----------------|
| **Price per month** | €19 | €39 | €79 |
| **vCPUs** | 2 | 4 | 8 |
| **RAM & NVMe SSD** | 2 GB / from 5 GB | 4 GB / from 15 GB | 8 GB / from 30 GB |
| **RocketCache® technology** | Yes | Yes | Yes |
| **Daily backups (14 days)** | Yes | Yes | Yes |
| **Support** | Phone, chat, e-mail | Phone, chat, e-mail | Phone, chat, e-mail |
| **Free SSL certificate** | Yes | Yes | Yes |
| **Performance optimization** | Yes | Yes | Yes |
| **Imunify360 Security Suite** | Yes | Yes | Yes |
| **X-Ray PHP Optimizer** | Yes | Yes | Yes |
| **E-mail mailboxes (add-on / included)** | Add-on | 10 included | 20 included |
| **For WooCommerce shops** | Uptime monitoring, Redis Object Cache, CDN setup, AI-vetted Smart Updates | Uptime monitoring, Redis Object Cache, CDN setup, AI-vetted Smart Updates | Uptime monitoring, Redis Object Cache, CDN setup, AI-vetted Smart Updates |
HostPress® offers managed WordPress hosting from Germany, tailored specifically to the needs of website and shop operators. The offering comprises three different individual plans: START, PLUS, and MAX. Each of these plans is designed to guarantee optimal performance, security, and support for WordPress sites. Here is a short introduction to the individual plans:
### HostPress® START
The START plan is ideal for regular websites and blogs looking for a solid foundation for their online presence.
- **Price:** €19 per month with annual billing, plus VAT.
- **Features:** 2 vCPUs, NVMe SSD from 5 GB, daily backups (14-day retention), RocketCache® technology, the Imunify360 Security Suite, the X-Ray PHP Optimizer, a free SSL certificate, and support via phone, chat, and e-mail. E-mail mailboxes can be added.
### HostPress® PLUS
Designed for professional websites and shops, the PLUS plan offers extended features for higher demands.
- **Price:** €39 per month with annual billing, plus VAT.
- **Features:** 4 vCPUs, NVMe SSD from 15 GB, all the benefits of the START plan, plus 10 e-mail mailboxes included and additional features specifically for WooCommerce shops such as uptime monitoring, Redis Object Cache, CDN setup, and AI-vetted Smart Updates.
### HostPress® MAX
The MAX plan is aimed at the most demanding websites and shops that need maximum performance and extensive support services.
- **Price:** €79 per month with annual billing, plus VAT.
- **Features:** 8 vCPUs, NVMe SSD from 30 GB, all the features of the PLUS plan with an additional 20 e-mail mailboxes and the highest available resources for peak performance and stability.
All plans include geo-redundant backups in TÜV-certified HA data centers, GDPR-compliant hosting with servers and the company located in Germany, and run on 100% green electricity. HostPress® also offers a free migration service including PageSpeed optimization to make switching to HostPress as simple and smooth as possible.
## Superior Performance and Reliability
HostPress stands out for its superior performance and reliability, both essential for a website's success. HostPress delivers fast load times, minimal downtime, and optimal server performance, all of which contribute to an exceptional user experience. Its reliability also shows in high server availability and overall stability, minimizing the risk of disruptions to website operations.
The impact of performance and reliability on user experience and SEO rankings can hardly be overstated. Websites hosted on HostPress benefit from faster loading speeds, which contributes to a positive user experience. In addition, search engines favor websites with reliable performance and uptime, which can lead to better SEO rankings and visibility.

## Advanced Security Features
Security is a top priority for every website operator, and HostPress excels in this area with robust security measures. HostPress protects websites against threats such as malware, hacking attempts, and DDoS attacks, giving site owners and visitors alike peace of mind. The implementation of advanced security features underlines HostPress's commitment to the integrity and safety of its users' websites.
Maintaining the trust of website visitors and protecting sensitive data are crucial in today's digital landscape. With HostPress's comprehensive security features, website operators can demonstrate their commitment to protecting their visitors' data, strengthening trust and credibility.
## Expert Support and Customer Service
HostPress distinguishes itself with responsive and knowledgeable customer service that provides invaluable help to website operators. Reliable support matters for resolving technical issues, getting timely assistance, and receiving valuable guidance on getting the most out of the hosting services. Real-world scenarios and user testimonials convey the positive experiences with HostPress support and add a personal touch to the benefits of choosing HostPress.
## Scalability and Flexibility
The scalability and flexibility HostPress offers meet the evolving needs of websites, enabling seamless growth and customization. HostPress's hosting plans and resources are designed to keep pace with growing demands, ensuring that performance and capabilities scale along with the website. Its flexibility lets website owners tailor hosting configurations to their specific needs, creating a personalized and optimized hosting environment.

## Conclusion
In summary, HostPress is an excellent choice for premium WordPress hosting, offering superior performance, advanced security features, competent support, and scalability. As website operators weigh their hosting needs, HostPress offers a compelling solution for boosting a website's performance and overall success. With its focus on usability, security, and support, HostPress is well positioned to help website operators reach their online goals. I encourage you to explore HostPress's offerings and experience the benefits of premium WordPress hosting first-hand.

HostPress embodies the essence of first-class WordPress hosting and delivers on its promises: performance, security, support, and adaptability. As you consider your website's hosting needs, I recommend giving HostPress the attention it deserves. Your website's growth potential and success may well depend on the hosting provider you choose. With HostPress, you can rest assured that your website is in good hands, backed by a team committed to your online success.

*Author: j0e*
---

# A pseudo imperative approach for react confirmation dialogs

*Published 2024-02-20 by brainrepo on [dev.to](https://dev.to/brainrepo/a-pseudo-imperative-approach-for-react-confirmation-dialogs-3jcn)*
The problem I want to discuss regards the **confirmation modal**; we have a few of them in our most complex flows (e.g., feed sync, feed/episode deletion).
Having a confirmation modal is often good practice for managing irreversible or destructive actions, and we adopted it in our critical paths to protect the user from accidental actions.
Our frontend is built with React, and one of React's peculiarities is its very declarative approach, an approach that contrasts with the imperative nature of a confirmation modal. Considering this, our initial implementation simply sidestepped the obstacle: we used the [**tauri dialog**](https://tauri.app/v1/api/js/dialog/#confirm) `confirm` function, which in a certain way mimics the [web API confirm method](https://developer.mozilla.org/en-US/docs/Web/API/Window/confirm).
```js
//...do something
const confirmed = await confirm('This action cannot be reverted. Are you sure?', { title: 'Tauri', type: 'warning' });
if(!confirmed){
//...exit
}
//...continue the action
```
This is cool because it can be used in complex workflows without fighting with components and complex states; in fact, we don't need to track whether the modal is shown or the confirmation button is pressed.
However, there is a downside: **the design of this confirmation modal comes from the operating system and does not fit our design styles at all**.
# How we solved the problem
First of all, we designed a confirmation modal; out of laziness, we based our component on the [tailwindui dialog](https://tailwindui.com/components/application-ui/overlays/dialogs).
Here is an oversimplified version. If you want to see the implementation with the tailwind classes, please look at [our ui lib](https://github.com/fourviere/fourviere-podcast/blob/main/packages/ui/lib/dialogs/alert.tsx)
```jsx
type Props = {
ok: () => void;
cancel: () => void;
title: string;
message: string;
okButton: string;
cancelButton: string;
icon?: React.ElementType;
};
export default function Alert({ok, cancel, title, message, okButton, cancelButton, icon}: Props) {
const Icon = icon as React.ElementType;
return (
<div>
<h3>{icon && <Icon />} {title}</h3>
<p>{message}</p>
<div>
<button onClick={ok}>{okButton}</button>
<button onClick={cancel}>{cancelButton}</button>
</div>
</div>
);
}
```
Now, we need to display this Alert modal in a [portal](https://react.dev/reference/react-dom/createPortal) in the most imperative way possible. To do that, we created a hook that exposes an askForConfirmation method that does all the dirty work under the hood.
```jsx
interface Options {
title: string;
message: string;
icon ?: ElementType;
}
const useConfirmationModal = () => {
async function askForConfirmation({title, message, icon}:Options)
{
//Here we will put our implementation
}
return {askForConfirmation}
}
export default useConfirmationModal;
```
This hook returns an `askForConfirmation` method to be called from the component logic; the method takes an `Options` object defining the modal title, message, and icon.
Now we need to track when the modal is displayed, along with the `title`, `message`, `icon`, `okAction`, and `cancelAction`. We define a state for the component that is either `false` or an object of type `ModalState`; when it is `false`, the modal is hidden.
```jsx
interface Options {
title: string;
message: string;
icon ?: ElementType;
}
interface ModalState {
title: string;
message: string;
ok: () => void;
cancel: () => void;
icon?: ElementType;
}
const useConfirmationModal = () => {
const [modal, setModal] = useState<false | ModalState>(false);
async function askForConfirmation({title, message, icon}:Options)
{
//Here we will put our implementation
}
return {askForConfirmation}
}
export default useConfirmationModal;
```
Now the `askForConfirmation` method should set the modal state; let's implement it. We want it to follow an async approach using promises, so that we can call it like this:
```tsx
//inside the component//
const {askForConfirmation} = useConfirmationModal()
//...previous logic
if (!await askForConfirmation()) {
return
}
//...continue the action
```
This means that askForConfirmation should return a promise that is resolved (with true or false) when the ok button is pressed or when the cancel button is pressed; before resolving the promise, the modal is hidden.
```jsx
interface Options {
title: string;
message: string;
icon ?: ElementType;
}
interface ModalState {
title: string;
message: string;
ok: () => void;
cancel: () => void;
icon?: ElementType;
}
const useConfirmationModal = () => {
const [modal, setModal] = useState<false | ModalState>(false);
async function askForConfirmation({title, message, icon}:Options)
{
return new Promise<boolean>((resolve) => {
setModal({
title,
message,
icon,
ok: () => {
setModal(false);
resolve(true);
},
cancel: () => {
setModal(false);
resolve(false);
},
});
});
}
return {askForConfirmation}
}
export default useConfirmationModal;
```
It remains to implement the display part. This is a hook, and it does not render JSX directly, so we need a workaround to manage the render phase. What if the hook returned a function component we can render?
Let's try.
```jsx
interface Options {
title: string;
message: string;
icon ?: ElementType;
}
interface ModalState {
title: string;
message: string;
ok: () => void;
cancel: () => void;
icon?: ElementType;
}
const useConfirmationModal = () => {
const [modal, setModal] = useState<false | ModalState>(false);
const modals = document.getElementById("modals") as HTMLElement;
async function askForConfirmation({title, message, icon}:Options)
{
return new Promise<boolean>((resolve) => {
setModal({
title,
message,
icon,
ok: () => {
setModal(false);
resolve(true);
},
cancel: () => {
setModal(false);
resolve(false);
},
});
});
}
function renderConfirmationModal() {
return (
<>
{modal && createPortal(
<Alert
icon={modal.icon ?? ExclamationTriangleIcon}
title={modal.title}
message={modal.message}
okButton="ok"
cancelButton="cancel"
ok={modal.ok}
cancel={modal.cancel}
/>,
modals,
)
}
</>
);
}
return {askForConfirmation, renderConfirmationModal}
}
export default useConfirmationModal;
```
Now, our hook returns, alongside `askForConfirmation`, a function component `renderConfirmationModal` that displays the modal in the portal (in our case, inside the `<div id="modals">` in the HTML page).
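For `createPortal` to have a target, the host page markup needs a matching container; a minimal sketch, assuming the `modals` id the hook looks up:

```html
<!-- portal target for the confirmation modal; must exist when the hook runs -->
<div id="modals"></div>
```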
Now, let's try to use it in a simple component
```jsx
export default function SimpleComponent() {
const {askForConfirmation, renderConfirmationModal} = useConfirmationModal()
async function doSomething() {
if(!(await askForConfirmation({
title: "Are you sure?",
message: "This operation cannot be reverted",
}))) {
return false
}
//do stuff...
}
return <>
{renderConfirmationModal()}
<button onClick={doSomething}>DO IT</button>
</>
}
```
# Conclusions
After this journey, we have a hook that gives us a confirmation modal with a simple API. It is essential to keep parts of the UI simple and reusable; this keeps the code readable, and we all know how messy our React components can become.
**But keeping things simple needs complex effort.** | brainrepo | |
1,766,021 | 📍 OPT-NC agencies on Kaggle | ❔ About OPT-NC has many agencies in New-Caledonia, but getting csv files was not as easy... | 26,496 | 2024-02-21T20:00:48 | https://dev.to/optnc/opt-nc-agencies-on-kaggle-4edd | datascience, opendata, python, showdev | ## ❔ About
OPT-NC has [many agencies in New-Caledonia](https://www.opt.nc/service/l-opt-pres-de-chez-moi-trouver-une-agence), but getting `csv` files was not that easy, and if you wanted to use the data for data science, **you had to perform manual tasks.**
**👉 The purpose of this post is to show how we recently upgraded the [Developer experience](https://github.blog/2023-06-08-developer-experience-what-is-it-and-why-should-you-care/)... and the opportunities it opens up.**
## 🎯 What you'll learn
You'll learn:
- **🛍️ The various datasources** we used to build a consistent & up-to-date dataset
- **🎁 Available dataformats** (`csv`, `duckdb`)
- **🎀 How to use** the dataset with a dedicated Notebook
## 🍿 Demo
{% youtube Y5PWxaxz1_E %}
## 📑 Related resources
1. [⚙️ Notebook builder](https://www.kaggle.com/code/optnouvellecaldonie/open-data-agences-opt-nc) (where the data is prepared and aggregated)
2. [🏤 Agences 📍 Dataset](https://www.kaggle.com/datasets/optnouvellecaldonie/agences-new)
3. [👨🎓 Agences OPT-NC for dummies](https://www.kaggle.com/code/optnouvellecaldonie/agences-opt-nc-for-dummies)
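Once the `csv` is downloaded from the dataset page, it can be explored with pandas. A minimal sketch below uses hypothetical column names (`agence`, `commune`) and made-up rows standing in for the real schema, which lives in the Kaggle dataset linked above:

```python
import pandas as pd

# Hypothetical sample rows mimicking the agencies dataset; the real column
# names and values come from the Kaggle CSV linked above.
agencies = pd.DataFrame({
    "agence": ["Agence Nouméa Sud", "Agence Koné", "Agence Mont-Dore"],
    "commune": ["Nouméa", "Koné", "Mont-Dore"],
})

# A typical first exploration step: count agencies per commune.
per_commune = agencies.groupby("commune").size()
print(per_commune.to_dict())
```

In a real notebook you would replace the inline DataFrame with `pd.read_csv(...)` pointed at the downloaded file.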
## 🤯 Opened perspectives
Delivering data on Kaggle makes it possible to play with it using free GPUs and...
Amazing Open Source AI models like `Mixtral` (see [Kaggle model card](https://www.kaggle.com/models/mistral-ai/mixtral))...
and more recently `google/gemma` (see [Kaggle model card](https://www.kaggle.com/models/google/gemma)):
{% twitter 1760289388204900633 %}
... or many other open source LLM models, see :
{% embed https://dev.to/adriens/local-open-source-ai-a-kind-ollama-llamaindex-intro-1nnc %}
or simply use OpenAI from your notebook and let the AI do the job for you, so you can focus on your storytelling:
{% embed https://dev.to/adriens/put-magic-in-your-notebook-w-jupyter-ai-3co4 %} | adriens |
1,766,049 | TextMine - AI powered knowledge base for business critical documents | TextMine is an E2E AI-powered knowledge base for your business critical documents, including... | 0 | 2024-02-20T03:25:07 | https://dev.to/textmine/textmine-ai-powered-knowledge-base-for-business-critical-documents-559f | b2b, ai, saas | TextMine is an E2E AI-powered knowledge base for your business critical documents, including invoices, payslips, tender documents, compliance reports, and even contracts. Our platform consolidates everything into a single operational layer to enable proactive decision-making. We provide increased data transparency whilst allowing document creation to be more scalable and uniform. We’ve helped organizations save money and time while reducing their risk profile and increasing compliance.
[https://textmine.com](https://textmine.com) | textmine |
1,768,103 | Is microservice architecture the best choice | Introduction In recent times I have found a lot of articles criticizing microservice... | 0 | 2024-02-21T16:09:46 | https://dev.to/mommcilo/is-microservice-architecture-the-best-choice-3406 | microservices, modular, architecture | ## Introduction
Recently I have found a lot of [articles](https://thenewstack.io/year-in-review-was-2023-a-turning-point-for-microservices/) criticizing microservice architecture. It is important to understand when and why to use it. This kind of approach increases complexity inside the system but gives a lot of flexibility in the future. On the other hand, that flexibility will not materialize if the approach to building microservices is wrong; with the wrong architecture, only the complexity will be visible.
## Where not to use microservice architecture
For some use cases it is easier to specify where you should not use microservice architecture:
1. Creating a POC (proof of concept) - even if you know your app will be microservice-oriented, don't waste POC time mitigating the complexity of the microservice approach.
2. You know that the project will be small - if you know your project is a side project, or just that it will be small, don’t use the microservice approach just so that you can say you used it.
3. You or your team don’t have enough knowledge about microservice architecture.
4. You have a monolith that works well, customers are satisfied, stakeholders are satisfied, but you want to try new technologies because they are fancy.
5. For a new big system where we aren't yet sure how it looks, it could cost us a lot - so it would be good to consider Modular Monolith here, which gives you the possibility to easily switch to Microservice architecture.
## The Twelve Factor App
Before you start with the microservice approach you must be prepared and have some level of knowledge about what microservices are, and where, why, and how to use them. A good starting point is The Twelve-Factor App. You should read this documentation and try to understand the concepts behind it. If you see that your application will not fit one or more of the factors, you should reconsider the decision to use microservice architecture. [The Twelve-Factor](https://12factor.net/) app describes how your microservices should look from the DevOps side, in order to make maintenance, deployment, and scaling easy. Bear in mind that this is a very important aspect, because if your microservices can't be easily deployed and scaled, they lose their main purpose.
## Showcase - monolith e-shop database problem
Let’s set up an example to examine the pros and cons of microservice architecture. Imagine a big e-shop that has different products split into 3 sections: Books, Lego, and Food. Your stakeholders want all 3 sections to have different attributes, different presentations, and ways of searching. But in the end products from different sections can finish in the same shopping cart, and be shipped together.

Let's imagine that every arrow represents 100 requests at a given moment for a specific category, i.e. B-Books, L-Lego, F-Food. Those requests range from simple browsing and searching to checkout. Every request generates one or more queries against the database. Now let's imagine that your eshop becomes popular after a successful advertising campaign, and load increases. Because all requests go to the same database, and the search in the Books section takes more time than in the other categories due to a problem in our code, the application starts to hang because of a database problem: the connection pool is full, and all new connections wait for resources to be released. If you have ever experienced this problem, you know how dangerous the situation is. It can be mitigated by increasing the connection pool, but that is just a temporary fix that first needs to be discovered and applied, and it sometimes leads to other problems.
Problems with a monolith in eshop example:
1. Search on the books section generates a JOIN query and that is the main problem causing connection problems with the database.
2. The whole application is not working (hanging) when the connection pool is out of free connections. The Food and Lego sections don't have problematic queries, but because of the Book section, when users search in Food or Lego, they also experience problems.
3. To fix a problem with the Book database query, some adjustments in code are needed and the application must be built, tested and deployed. These changes in code must be done on a shared module that is used by the Food and Lego sections as well, which means that a full test cycle on the application must be done.
From this simple showcase it is easy to see that things that were advantages in the beginning, start to be a problem in later phases of the application lifecycle. With the monolith we reduced complexity, and it was faster to finish the application, but future adjustments on code and optimization, as well as problems can be much more expensive and harder to solve.
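The shared connection-pool failure described in the monolith showcase can be sketched in a few lines of Python; the pool size, timeouts, and section names here are made-up illustration values, not anything from a real shop:

```python
import threading
import time

POOL_SIZE = 3
pool = threading.BoundedSemaphore(POOL_SIZE)  # the shared DB connection pool
rejected = []  # sections whose requests could not get a connection

def handle_request(section, slow):
    # Try to grab a connection; give up quickly if the pool is exhausted.
    if not pool.acquire(timeout=0.05):
        rejected.append(section)
        return
    try:
        if slow:
            time.sleep(0.5)  # the slow Books JOIN query hogging its connection
    finally:
        pool.release()

# Three slow Books searches saturate the whole pool...
threads = [threading.Thread(target=handle_request, args=("books", True))
           for _ in range(POOL_SIZE)]
for t in threads:
    t.start()
time.sleep(0.1)  # let the slow requests claim their connections first

# ...so fast Food and Lego requests are starved even though their queries are fine.
for section in ("food", "lego"):
    t = threading.Thread(target=handle_request, args=(section, False))
    t.start()
    threads.append(t)
for t in threads:
    t.join()
print(sorted(rejected))  # prints ['food', 'lego']
```

The point of the toy model is exactly the article's argument: in a monolith, one section's slow query degrades every section sharing the pool.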
## Showcase - microservice eshop database problem
We now have the same problem as in the first showcase, which means a Books section search produces a JOIN query that takes too much time and the database connection pool becomes full, which causes application hanging.
**The benefits of microservice architecture in the eshop example:**
1. Only users browsing the Book section will experience problems. The use of all the other sections, as well as checkout, are working normally without any problems.
2. Fixing the problem requires only the Book microservice to be changed. Building and deployment are much faster.
3. Sharing the resources is minimized and changes made in the Book microservices do not affect any other section, and testing can be done only on the Book section, which causes a faster release cycle.
This problem or showcase with databases can apply to many other crucial system resources.
## Memory problem
The Book section loads a lot of PDF previews into the memory before they are shown to the users. When the number of users and the number of PDFs reach a limit, the application crashes.
Similar conclusions for the monolith are valid here:
1. The whole application will not work when the memory limit on the server is reached.
2. The whole application must be rebuilt once we find out the solution (eg. Adding some kind of external caching)
3. We have to add a new configuration, adjust the pipeline for the whole application, and open some ports on the server in order to allow this cache to successfully communicate with our application.
On the other hand in microservice architecture:
1. Only the Book section will not work when PDF previews reach the memory limit.
2. We only have to rebuild the Book source code.
3. We only have to adjust the Book pipeline, to open ports for cache communication only on the server where the Book application is running.
## A hard drive problem
The Lego section needs a lot of pictures for every set, and administrators like to upload them a lot. They are saved on the hard drive but after some time the hard drive reaches its limit. To address this problem, we decided to move images on S3 so that the process of adding new images can scale automatically.
Now in the monolith we have a new configuration for S3, we have a new adjustment of the pipeline and also a new intervention on the server to enable communication with S3.
In the case of microservice architecture all of this will eventually be done on different applications, on different pipelines and on different servers.
## General problems
Almost every other problem can fit this same pattern we have already discussed in the previous cases. And rest assured there will be other problems in big systems that you will need to deal with. In a single app they will produce:
1. A big fat app
2. A lot of configuration (cache, S3, MongoDB and who knows what else)
3. A lot of set up procedures
4. A lot of server adjustments
5. A long building time
6. A long testing time (once you build a whole app it is really risky to release it without testing the whole app, even if a story or bug are only related to one section eg. Book)
7. Complex CD/CI pipelines
8. Long release processes that only get longer with time
9. Harder integration and e2e testing
10. One big team or different teams working on separate modules, but all developers must be aware of the complete application problems and bottlenecks, and address them in the future code they have to write, even if their module is perfectly designed.
## Problems in microservice architecture
“New concepts just replace existing problems with new ones”, as some skeptics like to say about new concepts. And that is really something that applies to microservice architecture as well. But when done properly, and with clear separation between services, with clean communication between services, things get better over time. All 10 points from the previous section can be turned around and the opposite is valid in microservice architecture:
1. Small apps concentrate on a single domain (if you are using Domain Driven Design as a separation principle of microservices - more about that in the next articles on the same topic)
2. Minimal or no configuration
3. Minimal or no set up procedures
4. Minimal or no server adjustments
5. A short building time
6. A short testing time for a single app
7. A simple CD/CI pipeline
8. A short release time
9. Simple and easy integration testing and simpler e2e testing
10. Teams dedicated to the single app, with great knowledge on the application domain, with a lot of autonomy on making decisions that create better apps.
11. Easier horizontal scaling of a single part of the application that is identified as crucial, as a side effect of all previous conditions listed above
Well this looks great, so what is the problem with microservices? Why are we even thinking about a choice? Of course, microservices, as everything else, have their problems and let’s address just some of them:
1. Data normalization - think about this problem in the same way as you think of database tables. Due to performance data is normalized (split) into different tables. Here data is normalized into different microservices - databases. And at one moment sooner or later you will have to denormalize it. You will have to deal with this problem depending on the specific case, taking into account different aspects of your specific problem.
2. Increased complexity - you have a new story that is requested from your stakeholders and you remember your old monolith application where you had to change one service method to fulfill this request. Now because this story touches data from 3 different domains, you will have to create at least 3 different service methods, 3 events with the purpose to communicate, and who knows what else. Then you need to care about global transactions, and cases when the flow breaks somewhere in the middle, for example the first microservice finishes the job successfully, but the second fails. What should we do then? Revert the first one or recover the second one and continue? Those decisions then need to be discussed between teams and it can be very time- and energy-consuming to find a solution that suits everyone.
3. Eventual consistency - there will be some data that will diverge from the expected values, and that can be the result of various factors (a global transaction error, broken communication between microservices, etc). Those use cases must be identified as soon as possible and if not acceptable, must be addressed and solved properly. On the other hand, if your monolith service method breaks in the middle of the execution, the whole transaction will be rolled back. And there will be no communication between services.
4. Increased costs - increased complexity is enough to signal that the costs will be bigger. With bigger complexity you will need more mature developers to keep the system running, and you will need more developers in the summary of all levels, to be able to maintain all existing microservices and build new ones. This architecture also increases dev ops costs, in fact you will need dev ops that can help set up and maintain microservice architecture, which also means costs for cloud instances or set up in-house solutions. If users have better experience, and stakeholders can go faster to market with new ideas, those costs are acceptable, because it is not reasonable to expect better results for the same amount of money. But if this is not the case, then pressure is on the development department and it can be a sign that some decisions were wrong.
## Before the conclusion
You do not always need to choose between those 2 approaches. It is not a “take it or leave it” choice. There are a lot of approaches that make use of concepts from those 2 and give you the opportunity to choose a solution that fits best in your use case. For now I will name 2 concepts that are somewhere between and that you should consider when making the best choice: Modular Monolith and Distributed Monolith. You can guess a lot based on the names, but I would suggest you delve deeper into those concepts and understand as much as possible before choosing them as a template for your solution.
## Conclusion
As usual, there are no easy answers when things are complicated. You must consider different aspects and make the right decision. It is not an easy task and requires a lot of knowledge and experience, where I would place high importance on knowledge, because if you enter this field without enough principles that you will use and follow, it is easy to be on the wrong track that will have big consequences in the future.
| mommcilo |
1,769,650 | Free newsletter for secure development leaders <3 | The latest edition of Secure Development Leaders is out now and shares three essential ingredients... | 0 | 2024-02-22T23:06:10 | https://dev.to/ladynerd/free-newsletter-for-secure-development-leaders-3-2l95 | security, leadership, news, softwareengineering | The latest edition of Secure Development Leaders is out now and shares three essential ingredients for building security culture in your development team.
Highlights:
* The importance of education (and why that's not about technology but motivation)
* Why education without empowerment is bad for your security
* The role of accountability (and acknowledgment).
It also includes top things you may have missed in #appsec this week.
https://www.secdevleaders.com/p/three-essential-ingredients-secure-development-culture
If you find this useful, subscribe! It's free: https://www.secdevleaders.com | ladynerd |
1,770,123 | German veteran Cross returns to national team for Euro 2024 | The German national soccer team has called back "Veteran midfielder" Tony Cross 34, Real Madrid, who... | 0 | 2024-02-23T12:08:52 | https://dev.to/slotmachines92/german-veteran-cross-returns-to-national-team-for-euro-2024-4a2p | The German national soccer team has called back veteran midfielder Toni Kroos, 34, of Real Madrid, who announced his retirement from the national team in 2021, in preparation for the 2024 European Football Championship (Euro 2024) in his home country in June.
"I decided to play for the German national team again from March. Why? I was asked by the national team coach," Kroos wrote on his Instagram account. "I am confident that with the national team, we will be able to achieve more than most people believe."
Kroos, who first joined the German national team in 2010, is a veteran midfielder who has played 106 senior international matches, scoring 17 goals.
With a wide field of view and outstanding pass skills, as well as creative play and outstanding set-piece ability, he played an active role as the "Central Commander" of the German national team, and helped Germany win the 2014 World Cup in Brazil.
Kroos, nicknamed "the football professor," announced his retirement from the national team in June 2021 after Germany were eliminated in the Euro 2020 round of 16 after losing 0-2 to England.
Since then, Kroos has shown the poise of a veteran, keeping his skills sharp at his club, Real Madrid.
In the meantime, Julian Nagelsmann, who leads the German national team, asked Kroos to return ahead of the warm-up matches against France and the Netherlands scheduled for March, and Kroos accepted, choosing to return to the national team for the first time in three years.
German soccer, which once boasted world-class performances, has been mocked as a "Rusty Tank Corps" after a series of humiliations, including group-stage elimination at the 2018 World Cup in Russia, a round-of-16 exit at Euro 2020, and group-stage elimination at the 2022 World Cup in Qatar.
In the meantime, Germany, which has fallen to 16th place in the FIFA rankings, is the host country of Euro 2024, which opens in June, and pushed for Kroos to return to the national team, aiming to recover its pride. [Toto site](https://www.betmantoto.org)
| slotmachines92 | |
1,771,477 | CICD pipelines: Application Developers perspective | This blog presents an alternate view of CICD pipelines with an example of AKS build deployment using... | 0 | 2024-02-26T08:36:55 | https://dev.to/abrarmoiz/cicd-pipelines-application-developers-perspective-5dpa | cicd, aks, githubactions |
This blog presents an alternate view of CICD pipelines with an example of AKS build deployment using GitHub actions.
A simplified view of the CI pipeline lets developers relate to it, makes pipelines more readable, and helps them update or fix issues quickly. A CI pipeline can be considered an automated list of the steps any developer would follow to get their code built and pushed into an artifact repo.
Consider a situation where there is no DevOps team around, or you have just moved to a DevOps team from a development background.
Here is a quick view of the steps a developer would take and their matching automation steps in GitHub Actions. (Similar actions can be found in any other CI/CD tool, such as Azure DevOps.)
---
### 1. Code Build Phase ###
Our developer's first step is to build the code and save the output in a safe location. Previously, depending on the language, the output could be a war/jar/zip or dll file, saved in a versioned format.
Since in this case we deal with a containerized output, i.e. a container image, the developer needs to save the image in an image repo such as Docker Hub, ACR, or ECR.
###### First, the developer needs a machine to run the build on; it can be a Windows or Linux machine, i.e. a VM ######
```yaml
runs-on: ubuntu-latest
```
###### Next, the developer would want to check out the code ######
```yaml
- name: Code checkout
  uses: actions/checkout@v3
  with:
    ref: ${{ github.head_ref }}
    fetch-depth: 0
```
***github.head_ref*** is a GitHub Actions context variable holding the source branch of the pull request that triggered the workflow.
###### The developer now needs to authenticate to the container registry, in our case Azure Container Registry, where the code in the form of container images will reside ######
```yaml
- name: Login to Azure Container Registry
  uses: docker/login-action@v2
  with:
    registry: ${{ secrets.ACR_URL }}
    username: ${{ secrets.ACR_USERNAME }}
    password: ${{ secrets.ACR_PASSWORD }}
```
Since we are dealing with ACR credentials, they are better stored as secrets.
Refer to this [link](https://docs.github.com/actions/security-guides/encrypted-secrets) for using secrets in github.
###### The developer then runs a build, i.e. a docker build, and pushes the created docker image to the aforementioned container registry ######
```yaml
- name: Build docker image and push to registry
  uses: docker/build-push-action@v4
  with:
    context: ${{ inputs.PROJECT_PATH }}
    file: ${{ inputs.DOCKERFILE_PATH }}
    push: true
    build-args: |
      build_id=${{ steps.vars.outputs.sha_short }}
    tags: ${{ secrets.ACR_URL }}/${{ steps.vars.outputs.project_name }}:${{ steps.vars.outputs.sha_short }}
    cache-from: type=registry,ref=${{ secrets.ACR_URL }}/${{ steps.vars.outputs.project_name }}:latest
    cache-to: type=inline
```
In this action, ***steps*** is the GitHub Actions context holding the outputs of previous steps (here, a step with id `vars` that computed the short commit SHA and project name).
With the above list of steps you have a basic CI pipeline running. Always remember:
CI = checking out the code and building/compiling it with its dependencies

We have an artifact at this stage. A responsible developer, however, would want to run the test cases to ensure a new PR or code commit doesn't break existing functionality.
---
### 2. Test Phase ###
###### The developer again needs a machine to run the tests; it can be a Windows or Linux machine, i.e. a VM ######
```yaml
runs-on: ubuntu-latest
```
###### The developer would again check out the code ######
```yaml
- name: Code checkout
  uses: actions/checkout@v3
  with:
    ref: ${{ github.head_ref }}
    fetch-depth: 0
```
###### The developer now needs to set up the local environment to run the tests and select the test suite to execute ######
```yaml
# job-level default: every `run` step below executes inside the selected test folder
# (the test job itself can be guarded with `if: inputs.TESTCASE_INPUT != ''`)
defaults:
  run:
    working-directory: ${{ inputs.TESTCASES_FOLDER }}/${{ inputs.TESTCASE_INPUT }}
```
In this step, ***inputs...*** refers to the input variables passed to the GitHub Actions workflow.
###### Set up the language runtime and install dependencies ######
```yaml
- name: Set up Python
  uses: actions/setup-python@v4
  with:
    python-version: '3.x'
- name: Install dependencies
  run: |
    python -m pip install --upgrade pip
    pip install -r requirements.txt
```
###### Now execute the tests ######
```yaml
- name: Run unit tests with coverage
  run: |
    pip install pytest pytest-cov
    pytest tests.py --doctest-modules --junitxml=junit/test-results.xml --cov=com --cov-report=xml --cov-report=html
```
At this stage we realize why the team lead and the architects have been pushing us to increase code coverage. Remember: good code coverage always lets you catch errors early. We have heard that many times before.

### 3. Code Deploy Phase ###
Finally, the developer needs to run this application in the cloud or on premises. In our case it is an AKS cluster on Azure. The developer pulls the image pushed to the registry and runs it on the compute layer, which could be Azure App Service or a VM; here we consider an AKS cluster.
Assumption: the AKS cluster and the database the application relies on are already set up; we deal only with delivery of the application code.
###### Developer would again need a VM or machine to deploy the code from ######
```yaml
runs-on: ubuntu-latest
```
###### Log in to a cloud environment, in our case an Azure environment ######
<code>
- name: Azure Login
uses: azure/login@v1
with:
creds: ${{ secrets.AZURE_CREDENTIALS }}
</code>
###### The developer needs to identify the proper AKS cluster where the application code, in the form of a Docker image, will reside. This is done by setting the AKS context: the resource group, subscription, and AKS cluster name ######
<code>
- name: Set AKS Context
uses: azure/aks-set-context@v3
with:
resource-group: ${{ secrets.RESOURCE_GROUP }}
cluster-name: ${{ secrets.CLUSTER_NAME }}
</code>
###### Authenticate to the ACR container registry ######
<code>
- name: Login to Azure Container Registry
uses: docker/login-action@v2
with:
registry: ${{ secrets.ACR_URL }}
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
</code>
###### Finally, refresh the AKS deployment with the latest version of the container image ######
This involves installing the kubectl tool, configuring kubeconfig from the secrets, and running the kubectl apply commands.
<code>
- name: Set up Kubectl
uses: azure/k8s-set-context@v1
with:
kubeconfig: ${{ secrets.KUBECONFIG }}
- name: Deploy
run: |
kubectl apply -f kubernetes/deployment.yaml
kubectl apply -f kubernetes/service.yaml
</code>
The above step could also be accomplished using Helm charts.

---
The entire CI/CD pipeline can therefore be viewed, at a basic level, as a set of smaller jobs that automate a developer's day-to-day tasks: building code, running unit tests, and deploying to a cloud server instead of running on their laptop.
Each of the main sections can be considered a GitHub Actions ***job***, and every developer activity we went through is a ***step***. Different CI/CD implementations may use different names for these concepts: a job, step, or trigger can be named and represented syntactically in different ways in different tools. Refer to this link for a comparison of different CICD tools [link](https://github.com/cdfoundation/sig-interoperability/blob/main/docs/tools-terminology.md#terminology-used-by-cicd-tools-and-technologies)
However, our pipeline is not production ready! The major difference between these steps and a CICD pipeline fit to be used in a production environment would be:
### a. Optimizing steps: ###
1. Remove duplicate steps between jobs, e.g. the code checkout in the build and test jobs can be reduced to one
2. Add dependencies between stages to ensure one stage completes before the next one starts, e.g. deploy cannot start until build has completed
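In GitHub Actions, such stage dependencies are expressed with the `needs` keyword. A minimal sketch using the three jobs from this post (step bodies are placeholders):

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
  test:
    needs: build          # test waits for build to finish
    runs-on: ubuntu-latest
    steps:
      - run: echo "run unit tests"
  deploy:
    needs: test           # deploy waits for test to finish
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploy to AKS"
```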

### b. Adding security scanners to CICD: ###
Various measures are implemented to ensure the final output of the CI pipeline is free from vulnerabilities. This is done by scanning the code / code artifact via a few additional steps in the CI pipeline; it is a quality control applied to the CI pipeline. For this we can add the standard GitHub (or any custom CI/CD) actions to the pipeline, which address the following:
1. Static Application Security Testing (SAST) e.g. SonarQube , Sonar cloud
2. Dynamic Application Security Testing (DAST) e.g. Snyk, ZAP
### c. Event based execution: ###
The execution of the entire pipeline needs to be glued to an event: it could be a PR approval, a commit to a branch, the creation of a branch or tag, etc.
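In GitHub Actions these events are declared under the `on:` key. A sketch covering the examples above (these are real event names, but the exact set depends on your process):

```yaml
on:
  pull_request_review:
    types: [submitted]    # e.g. a PR approval
  push:
    branches: [main]      # a commit to a branch
  create:                 # creation of a branch or tag
```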
### d. Continuous delivery first and then continuous deployment: ###
In the real world the deploy stage first evolves into human-monitored ***continuous delivery***, and if the processes mature enough it then moves on to ***continuous deployment***, something DevOps teams need to ponder over.
##### An overall view of our jobs in the form of a CI/CD pipeline after implementing all the CI/CD best practices
 | abrarmoiz |
1,771,491 | Zustand EntityAdapter - An EntityAdapter example for Zustand | Hi there! Recently, I came across this amazing library to manage states in React, called Zustand.... | 26,613 | 2024-03-11T12:25:25 | https://dev.to/michaeljota/zustand-entityadapter-an-entityadapter-example-for-zustand-cd2 | javascript, typescript, react, zustand | Hi there! Recently, I came across this amazing library to manage states in React, called Zustand. This library allows you to create simple stores that can be consumed inside the components like a hook. It's simple to learn and use but also very powerful.
## What's Zustand?
Zustand is a state library with a Flux-like API. It gave me vibes of what a Service is for Angular's state management: a simple solution to share state between components. It's like creating a Redux store with zero boilerplate code that can be accessed anywhere, and it doesn't require configuring a central state manager.
The example Zustand has on its introduction page shows just how simple, yet useful, it is:
```ts
// useBearStore.ts
import { create } from 'zustand'
export const useBearStore = create((set) => ({
bears: 0,
increasePopulation: () => set((state) => ({ bears: state.bears + 1 })),
removeAllBears: () => set({ bears: 0 }),
updateBears: (newBears) => set({ bears: newBears }),
}))
// BearCounter.tsx
import { useBearStore } from '..../useBearStore' // This can be anywhere;
export function BearCounter() {
const bears = useBearStore((state) => state.bears)
return <h1>{bears} around here...</h1>
}
// Controls.tsx
import { useBearStore } from '..../useBearStore' // This can be anywhere;
export function Controls() {
const increasePopulation = useBearStore((state) => state.increasePopulation)
return <button onClick={increasePopulation}>one up</button>
}
```
As you can see, we have defined a store somewhere, and then we can consume that store elsewhere in the app. This alone can be very helpful to avoid prop drilling and improve your application-wide state management.
When I said it gave me vibes of what a Service is for Angular's state management, I didn't say it lightly: this is as simple as creating a Service in Angular. Even simpler, because here we don't need classes or DI configuration; just a hook, and that's all.
## What is an EntityAdapter?
An EntityAdapter is a function that generates a set of prebuilt actions and selectors for interacting with a list of objects that share the same structure, enabling efficient CRUD operations. This concept, originally from the `ngrx` team, has already been ported to `RTK`, and I thank both teams for their effort.
If you check [@ngrx/entity](https://ngrx.io/guide/entity/adapter) and [@reduxjs/toolkit](https://redux-toolkit.js.org/api/createEntityAdapter), their APIs are very similar. Both have a createEntityAdapter function that returns an object with a state creator, a bunch of actions, and some selectors.
## Why should we create our own EntityAdapter for Zustand?
But if this has already been developed for `ngrx` and `RTK`, why can't we just use those implementations instead? They were developed for their respective libraries, and even though they do the same thing, they do it differently. That's why we can't use either directly with Zustand. Building our own also lets us learn more about both the EntityAdapter functionality and Zustand.
## Let's start!
I want us to focus on what we want from this function, a state creator, a bunch of actions, and some selectors.
### The initial state
The state that we want to create will be consistent, and we are going to be using the same shape both libraries are using:
```ts
type EntityId = string | number;
interface EntityState<Entity, Id extends string | number = EntityId> {
ids: Id[];
entities: Record<Id, Entity>;
}
```
The simpler function we can create to return this shape is:
```ts
function stateFactory<Entity>(): EntityState<Entity> {
return { ids: [], entities: {} };
}
```
But they don't do that. Instead, both `ngrx` and `RTK` allow for more properties to be added, and at first, I wanted to give this ability to the users, but, I think doing so misses the point of using something as simple as Zustand.
Zustand allows us to create encapsulated state managers, and because of this, instead of having our factory handle this merging, we can do the merging ourselves when we create our store. We can even create another store, and have one store handle the entities and another handle any additional state.
So, we finish with the stateFactory. Let's move on to the actions!
### Random actions go!
As we did with the state, let's identify the common ground first, so we can better express the actions we want. This is where the libraries start to diverge: while both provide methods to add, set, update, upsert, and delete one or many entities, plus methods to delete and replace all entities, each also provides additional functionality.
- [`ngrx`](https://ngrx.io/guide/entity/adapter#adapter-collection-methods) additionally provides a way to map directly through the entities list to update one or many entities.
- [`RTK`](https://redux-toolkit.js.org/api/createEntityAdapter#crud-functions) also provides a signature overload for each method enabling them to be used directly as a reducer case.
After that review, I think the bare minimum methods that we need are to add, set, update, upsert, and delete one or many entities, one to remove them all, and another one to replace them all. We can follow the same logic to handle the difference between add and set, if we try to add one entity, we verify that the entity doesn't exist, and if we set that value, we introduce or replace that entity. A similar difference can be made with update and upsert methods, but consider that update requires an object with an id property and an update property containing the updates, but the upsert needs the whole entity to be passed.
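For illustration, the update/upsert distinction could be expressed with types like these. Note that the names `Update` and `changes` are my own assumptions for this sketch, not necessarily what the final adapter uses:

```ts
type EntityId = string | number;

// An update carries only the id plus the fields to change,
// while an upsert always carries the whole entity.
interface Update<Entity> {
  id: EntityId;
  changes: Partial<Entity>;
}

interface Book {
  id: string;
  title: string;
}

function applyUpdate(entity: Book, update: Update<Book>): Book {
  // Only touch the entity the update targets; never mutate the original.
  return entity.id === update.id ? { ...entity, ...update.changes } : entity;
}

const book: Book = { id: "1", title: "Old title" };
const updated = applyUpdate(book, { id: "1", changes: { title: "New title" } });
```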
So, we will need something like this:
```ts
interface EntityActions<Entity> {
addOne(entity: Entity): void;
addMany(entities: Entity[]): void;
setOne(entity: Entity): void;
setMany(entities: Entity[]): void;
updateOne(entity: Entity): void;
updateMany(entities: Entity[]): void;
upsertOne(entity: Entity): void;
upsertMany(entities: Entity[]): void;
removeMany(entities: Entity[]): void;
removeOne(entity: Entity): void;
removeAll(): void;
setAll(entities: Entity[]): void;
}
```
Since we are following the Flux pattern, we don't want our actions to return anything, instead, we need to query for what we need from the main store.
To implement those methods in Zustand, we can leverage Zustand's feature that allows us to return just a slice of the state, so we can focus on what we want to interact with, and we can also consider that to update the state in Zustand, our update method needs to call a setState function. Because of this, we will create our methods to accept two parameters, the current state, and the arguments for what we want to do, an entity most of the time. Those methods will return the updated state.
To add an entity, we can do something like this:
```ts
const addOne = (state: S, entity: Entity): S => {
const id = idSelector(entity);
if (state.ids.includes(id)) {
return state;
}
const entities = {
...state.entities,
[id]: entity,
};
const ids = [...state.ids, id];
return {
entities,
ids,
};
};
```
> We will talk later more about the `idSelector` and the `State` type, but for now, the first one is just a function that gets the value of that property's id, and the `State` type is a type alias for the store state.
So, to add an entity to the collection, we shallow copy the current state merging it with the new entity and its id. Does the user have additional properties in that state? We don't know, and better than that, we don't care, because as I mentioned, Zustand will use this and update the state merging it with any other properties that the user could have.
But we need to call this using the setState function. To use this we can have a method like:
```ts
addOne(entity: Entity) {
setState(state => addOne(state, entity))
}
```
That method has the same signature as the `addOne` we want to use as our EntityActions. We can also reuse that method to create the "many" one as well, just by reducing the list of the entities.
```ts
addMany(entities) {
setState((state) => entities.reduce(addOne, state));
}
```
To implement the other methods, we can follow the same pattern: create one function for each type of operation, and use it to implement both alternatives, the single one and the "many" one.
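As a sketch of that pattern, here are simplified versions of `setOne` and `removeOne`. This is my own illustration, not the exact code from the linked StackBlitz; for brevity it assumes an `id` property instead of taking the `idSelector` as a parameter:

```ts
type EntityId = string | number;

interface EntityState<Entity> {
  ids: EntityId[];
  entities: Record<EntityId, Entity>;
}

// Assumed id selector, mirroring the default one described later in the post.
const idSelector = (entity: { id: EntityId }): EntityId => entity.id;

// setOne inserts the entity, replacing the stored one if the id already exists.
function setOne<Entity extends { id: EntityId }>(
  state: EntityState<Entity>,
  entity: Entity
): EntityState<Entity> {
  const id = idSelector(entity);
  return {
    ids: state.ids.includes(id) ? state.ids : [...state.ids, id],
    entities: { ...state.entities, [id]: entity },
  };
}

// removeOne drops the entity and its id; other entries are untouched.
function removeOne<Entity extends { id: EntityId }>(
  state: EntityState<Entity>,
  entity: Entity
): EntityState<Entity> {
  const id = idSelector(entity);
  const entities = { ...state.entities };
  delete entities[id];
  return {
    ids: state.ids.filter((existing) => existing !== id),
    entities,
  };
}
```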
After we do that, we'll have all basic C*UD operators, but to complete the implementation we want, we would be missing two methods, one to replace all entities, and one to remove them all.
Again, we are going to leverage the simplicity of Zustand for this, and when we want to remove all values from the collection, all we have to do is return an empty state.
```ts
removeAll() {
setState({ ids: [], entities: {} });
}
```
Likewise, for our replace all function, we can still use reduce on the entities list we want to add, but instead of passing the state as the initial value as we are doing with the "many" alternatives, we will pass the empty state.
```ts
setAll(entities: Entity[]){
setState(() => entities.reduce(setOne, { ids: [], entities: {} }));
}
```
Now we have all the actions we need to interact with our application! Awesome! We can create our `actionFactory` now. This is when we will talk about the `idSelector` and `setState` functions, and the `State` type.
- The `idSelector` is a function we will provide when we create each adapter. The idSelector should have this signature:
```ts
type IdSelector<Entity> = (model: Entity) => string | number;
```
- The `setState` is the Zustand function to set the state. The signature for this one is complex, as the simpler version in Zustand has been deprecated, so we will have to trust the process.
- The `State` type, is a type alias for a EntityState:
```ts
type State = EntityState<Entity>;
```
Considering this, for our `actionsFactory` we will need the `Entity` generic, and the `setState` and the `idSelector` functions as props.
```ts
interface ActionsFactoryProps<Entity extends object> {
setState: SetState<Entity>;
idSelector?: IdSelector<Entity>;
}
```
We'll mark idSelector as optional because we will use a default implementation:
```ts
const defaultIdSelector = (entity: any): string | number => entity.id;
```
With all that, we can create our `actionsFactory`.
```ts
export function actionsFactory<Entity extends object>({
setState,
idSelector = defaultIdSelector,
}: ActionsFactoryProps<Entity>): EntityActions<Entity> {
type State = EntityState<Entity>;
const addOne = (state: State, entity: Entity): State => {
...
}
...
return {
addOne(entity: Entity) {
...
}
...
};
}
```
> Unlike the state, I won't be adding the whole factory function for the actions, because it's large and I think it would add noise to the post. But you can still check the implementation in the linked StackBlitz. Sorry!
### Gimme, gimme, gimme... an entity
So far, we have the state and the actions. But, at the end of the day, we need to display that data somewhere, and the best way to handle this is by using selectors. Zustand itself is very unopinionated about the usage of complex selectors; if you want to use memoized selectors, that's on you. But when you use the store, you still need to pass a simple selector function.
Both libraries provide pretty much the same set of selectors, but `RTK` also provides a selectById selector, which we are going to implement as well. Keep in mind that, because those selectors are meant to be used within a Redux-like store, those libraries can memoize them for you.
In our case, what we need is a simple selector function, and we can again leverage Zustand's features to ensure we won't have extra renders.
```ts
interface EntitySelectors<
Entity,
State extends EntityState<Entity> = EntityState<Entity>
> {
selectIds(state: State): EntityId[];
selectEntities(state: State): Dictionary<Entity>;
selectAll(state: State): Entity[];
selectTotal(state: State): number;
selectById(id: EntityId): (state: State) => Entity | undefined;
}
```
Those are the methods we want to define. Maybe the new thing here is the `State extends EntityState<Entity>`. We already saw what the EntityState interface is; what we want here is to allow the user to use another State type, as long as it has the properties required by EntityState. We use a default value so the user only has to set the Entity generic type, and the other generic type can be inferred.
And readers, it doesn't get any easier than this:
```ts
const selectIds = (state: State) => state.ids;
const selectEntities = (state: State) => state.entities;
const selectAll = ({ entities, ids }: State) => ids.map((id) => entities[id]);
const selectTotal = (state: State) => state.ids.length;
const selectById = (id: EntityId) => ({ entities }: State) => entities[id];
```
Five inline functions allow us to get the slice of state that we want.
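To see them in action, here is a quick self-contained sketch. The `Bear` entity and the sample state are made up for illustration:

```ts
type EntityId = string | number;

interface EntityState<Entity> {
  ids: EntityId[];
  entities: Record<EntityId, Entity>;
}

interface Bear {
  id: string;
  name: string;
}

type State = EntityState<Bear>;

// The five selectors, exactly as defined above.
const selectIds = (state: State) => state.ids;
const selectEntities = (state: State) => state.entities;
const selectAll = ({ entities, ids }: State) => ids.map((id) => entities[id]);
const selectTotal = (state: State) => state.ids.length;
const selectById = (id: EntityId) => ({ entities }: State) => entities[id];

// Made-up sample state to exercise them.
const state: State = {
  ids: ["b1", "b2"],
  entities: {
    b1: { id: "b1", name: "Yogi" },
    b2: { id: "b2", name: "Baloo" },
  },
};

const total = selectTotal(state); // 2
const names = selectAll(state).map((bear) => bear.name); // ["Yogi", "Baloo"]
const yogi = selectById("b1")(state);
```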
### Be an adapter, my friend
With all the pieces we have, we can already create the adapter function:
```ts
type IdSelector<Entity> = (model: Entity) => EntityId;
interface EntityCreatorsProps<Entity extends object> {
idSelector?: IdSelector<Entity>;
}
export function createEntityAdapter<Entity extends object>({
idSelector,
}: EntityCreatorsProps<Entity>) {
return {
getState() {
return stateFactory<Entity>();
},
getActions(setState: SetState<Entity>) {
return actionsFactory({ setState, idSelector });
},
getSelectors() {
return selectorsFactory<Entity>();
},
};
}
```
Here we see again our `idSelector` function, but it's clearer here what it is doing. We have a function, that will be called with an entity, and should return an ID for that entity. We have a default implementation for this, selecting the property `id`, so we don't have to implement this each time, as long as the default is a valid selector.
And that's it. We have created our own version of the EntityAdapter. We are generating our state, our actions, and our selectors, and we have done so thinking about Zustand first. Funny enough, we are not using Zustand directly just yet; those are plain JS functions that we are creating!
## That's all folks!
We have covered how to implement the main methods and functions we need to recreate our own version of the EntityAdapter for Zustand. In another post, I show you how to use it right [here](https://dev.to/michaeljota/zustand-entityadapter-a-bookshelf-example-28bk)!
> Image generated with Microsoft Designer AI using as a prompt "A bear sits on a sofa reading a book in a living room with a firewood, with a 16 bits palette" | michaeljota |
1,771,613 | Learning Rust: A clean start | I've decided it's time to learn Rust and in order to keep myself motivated I'm going to keep a record... | 26,565 | 2024-02-26T21:00:00 | https://dev.to/link2twenty/learning-rust-a-clean-start-4eom | rust, learning, beginners | I've decided it's time to learn [Rust](https://www.rust-lang.org/) and in order to keep myself motivated I'm going to keep a record of how the learning is going here.

A little about me; I'm a web developer and have been for around 5 years, though I'd dabbled for years. I have experience with [Perl](https://www.perl.org/) and [PHP](https://www.php.net/) but my day to day is JavaScript/TypeScript be it through [NodeJS](https://nodejs.org/en) or [ReactJS](https://react.dev/). I want to learn Rust for no specific reason other than it's fun to learn new things.
My first port of call was to google `learn rust`, which led me to ["the book"](https://doc.rust-lang.org/book/). The book is a first-steps guide written by the Rust community for newbies (or Rustlings, as they're called) to gain a 'solid grasp of the language'.
## Learning in public
I've chosen to document my Rust learning journey openly because I believe in the power of learning in public. By sharing my successes, challenges, and insights, I will reinforce my own understanding and hopefully provide a resource for others on a similar path.
I've seen the value in this approach first-hand. I invite feedback, corrections and contributions from readers. Whilst I recognize that learning in public isn't for everyone, I've personally found it immensely beneficial and hope to inspire others to consider it. So, let's dive into the lessons.
## Lesson 1 'Getting started'
This lesson is broken down into 3 sections:
- Installation
- Hello, World!
- Hello, Cargo!
### Installation
I was relieved to see installation listed, I was worried I would have to look up how to install Rust. I'm on a Windows machine but decided I'd rather do my Rust learning in Linux, so I'll be using Ubuntu through WSL.
The install command looked easy enough it uses curl to download something and then pipes that through sh, so we can assume the downloaded item is a bash script of some kind.
```bash
curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf | sh
```
Believe it or not this is where I made my first mistake. I saw the `Rust is installed now. Great!` message and moved on to the next lesson. Had I read on a little further I'd have seen that I needed to install the compiler separately.
> Linux users should generally install GCC or Clang, according to their distribution’s documentation. For example, if you use Ubuntu, you can install the build-essential package.
This was easily remedied though and I was back on track in no time.
```bash
sudo apt install build-essential
```
### Hello, World!
The next section is a staple of the dev community, the beloved "Hello, World!" example.

There are a few little bits I learnt here: functions are declared with the `fn` keyword, the entry point for any Rust application is the `main` function within the `main.rs` file, and the standard naming convention is to use underscores to separate words in function and file names.
It was at this stage I discovered I didn't have a compiler installed, which I think is the real reason for simple sections like this, to make sure we're all set up correctly.
### Hello, Cargo!
The previous section was very simple, this section is also very simple but introduces us to [cargo](https://crates.io/) which is Rust's package manager, as a JS dev my mind goes straight to NPM.
Cargo allows us to do a few cool things:
- name our packages.
- add package dependencies.
- run our program in one command.
- build our program with debug mode and release mode.
- check our program compiles without actually building it.
The example gets us to recreate our `Hello, World!` example but in the cargo way. The code is so simplistic it hardly feels worth showing but here it is.
```rs
fn main() {
println!("Hello, world!");
}
```
## Lesson 2 'Guessing Game'
The second lesson doesn't have any subsections, the goal of the lesson is to program a guessing game where the user enters a number and we compare it to a randomly selected number, the game continues until the user has guessed the exact number.
We're still not doing anything ground breaking but the progress from printing static text to dynamically taking user input and returning a result is nice all the same.
### VSCode
It was at this point I decided that doing code changes in `nano` was not a great idea and I needed to open the project in VSCode. I added a few extensions to, hopefully, make development a little easier. These were [rust-analyzer](https://marketplace.visualstudio.com/items?itemName=rust-lang.rust-analyzer), [crates](https://marketplace.visualstudio.com/items?itemName=serayuzgur.crates) and [Even Better TOML](https://marketplace.visualstudio.com/items?itemName=tamasfe.even-better-toml). You can use any editor you like, I'm just used to VSCode.

### Making the game
Let's look at the game tutorial. It has us use cargo to set up the project and very quickly introduces us to a few new concepts:
- The `use` keyword.
- Mutable variables.
- Error handling.
- Cargo doc.
#### The `use` keyword
The `use` keyword allows us to pull in code from other libraries, as a web developer, I want to compare this to [import](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/import). By default Rust will have access to a set of items from the 'standard' library, this is called the prelude, but if you want access to anything else you'll have to use `use`.
In the example they give we do `use std::io;` which allows us to access the `io` namespace, this does feel a little weird though as we already had access to `std` meaning `std::io` is also accessible.
#### Mutable variables
In JavaScript land we have the concept of immutable and mutable variables, these are `const` and `let` where `const` is immutable and `let` isn't. Rust is a little different in that all variables are immutable unless specified otherwise, the variable keyword also is always `let`, or at least it is as far as I can tell so far.
```rs
let mut var1 = String::new(); // mutable
let mut var2 = String::from("Test String"); // mutable
let var3 = 6; // immutable
```
The book lets us know here that it will be returning to mutability in lesson 3.
#### Error handling
We're introduced to two types of error handling `.expect` which doesn't attempt any sort of recovery but helpfully posts a message as the application crashes and `match`.
`Match` takes the `Result` from a function and then allows you to call a function based on the `Result`. In the example we're given `parse` and told it will either be `Ok` or `Err`, in the `match` we are able to define a function to be called on either of these cases. I assume that when we start dealing with more diverse functions match will be able to handle all `Result` types.
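As a small illustration of that (my own example, not one from the book), here is `match` handling the `Result` returned by `parse`:

```rust
fn main() {
    let input = "42";

    // parse returns a Result; match lets us handle both variants explicitly.
    match input.trim().parse::<u32>() {
        Ok(n) => println!("Parsed the number {n}"),
        Err(e) => println!("Could not parse: {e}"),
    }
}
```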
#### Cargo doc
This is my favourite part of Rust so far; I know it shouldn't be that exciting, but I think it is. When you run the command `cargo doc`, Cargo will scan through all the code you're using and generate help pages explaining functions and how to use them.
There isn't much explanation of this yet, but I'm hoping these docs are generated from comments in the code. Even if that isn't the case, code bases that can self-document are just so interesting to me.
### Wandering off the beaten track
At this point I was done with the first two lessons and decided to make a couple of changes to the guessing game. I extracted the game loop into its own function and I added an error message for failed parsing.
One thing I didn't like was the `magic` of this line.
```rs
let guess: u32 = match guess.trim().parse()
```
I didn't like that it felt like parse just magically knew what type it was aiming for. So I read the tooltip for parse in VSCode and it taught me about the `turbofish` syntax. I don't know if people don't like this syntax or if the writers of the book decided it was too complex for a beginner, but to my eye it just made so much more sense. We tell parse what type we'd like and our `let` infers the type from that, rather than the other way around.
```rs
let guess = match guess.trim().parse::<u32>()
```
Here is the modified code.
{% embed https://replit.com/@andrewb05/Guessing-game %}
## Signing off
Thank you for coming on this journey with me. I plan to continue this series and cover the entire book. If you'd like to follow along, you can press the 'follow' button to be notified of new posts.
As I said earlier feel free to leave any feedback and if you're learning in public too please leave a link to your series in the comments so I can check it out.
Thanks so much for reading. If you'd like to connect with me outside of Dev here are my [twitter](https://twitter.com/Link2Twenty) and [linkedin](https://www.linkedin.com/in/andrew-bone-ba241b179/) come say hi 😊. | link2twenty |
1,771,683 | PYTHON - DAY 1.1 - my DOODLE | Write a program to choose a CATEGORY from the LIST and get the value for the... | 0 | 2024-02-25T18:56:39 | https://dev.to/technonotes/python-day-11-whz-n-my-mind-2o2k | ### Write a program to choose a CATEGORY from the LIST and get the value for the same.
**_https://api.chucknorris.io/_**
**_https://api.chucknorris.io/jokes/categories_**

**_https://api.chucknorris.io/jokes/random?category=animal_**

> **_Code : cat sample.py_**
```
import requests
get_data = requests.get("https://api.chucknorris.io/jokes/categories")
print(get_data)
output = get_data.json()
print("List of the categories " , output)
get_input = input("Choose any one : ")
mydata = requests.get(f"https://api.chucknorris.io/jokes/random?category={get_input}")
#print(mydata)
myjson = mydata.json()
myjson1 = myjson["value"]
print(myjson1)
```
> **_Output : sample.py_**
```
<Response [200]>
List of the categories ['animal', 'career', 'celebrity', 'dev', 'explicit', 'fashion', 'food', 'history', 'money', 'movie', 'music', 'political', 'religion', 'science', 'sport', 'travel']
Choose any one : dev
Chuck Norris's log statements are always at the FATAL level.
```

| technonotes | |
1,772,406 | CDN's: What are they and how they work and why do we need them? | CDN stands for Content Delivery Network. This thing is used, when we create objects in object... | 0 | 2024-02-26T12:44:30 | https://dev.to/swapnilshelke/cdns-what-are-they-and-how-they-work-and-why-do-we-need-them-3a9c | CDN stands for **Content Delivery Network**.
A CDN comes into play when we create objects in an **object store**: those files obviously exist somewhere in the world. So when we upload files to AWS or any cloud provider, the file lives on some server somewhere, for example a server in the USA.
Now suppose a lot of people suddenly want to access this file, from all over the world.
Here is how this works: all the requests from all these people, from all over the world, go to that single server in the USA, over very long wires and many router hops. Each request reaches the server and the response is sent back. And this is a very big file, for example 1 GB.
Wouldn't it be nice if, for all the people asking for this file from India, the request first went to a server located somewhere in India, and everyone in India who asked for the file got their response from that server?
That is what CDNs let us do. The name itself tells us that its job is to deliver content.
A CDN says: there are many object stores in the world; let them be your source of truth. But as people ask for any particular file, don't serve it directly from the S3 URL; use my CDN URL and just tell me what the source is.
So whenever we create a CDN we just have to tell it the source, i.e. where the file is actually stored. The first time anyone asks the CDN for a file, the CDN fetches it from the source and caches it on a server in India. From then on, whenever anyone in India wants that file, it is delivered to them easily; requests no longer have to go to the server in the USA, because the file is cached on a server in India.
A request from a user in India goes to the closest server in India, which then asks the source of truth (the server where the file is originally stored), brings the file back to the server in India, and caches it. So now the file is present not only on the server in the USA but also in India, and whoever wants it gets it from the server in India.
Objects / files / data are not cached forever; they are cached for a certain period of time (a TTL), and once that time is over the cache entry is cleared.
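The cache-then-expire behavior described above can be sketched in a few lines. This is a toy model for intuition, not how any real CDN is implemented:

```python
import time

class TtlCache:
    """Toy edge cache: serve from cache until the entry's TTL expires."""

    def __init__(self, fetch_from_origin, ttl_seconds):
        self.fetch_from_origin = fetch_from_origin  # the "source of truth"
        self.ttl = ttl_seconds
        self.entries = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.entries.get(key)
        if entry is not None and entry[1] > time.time():
            return entry[0]                      # cache hit: no trip to the origin
        value = self.fetch_from_origin(key)      # cache miss (or expired): go to the origin
        self.entries[key] = (value, time.time() + self.ttl)
        return value

origin_hits = 0

def origin(key):
    global origin_hits
    origin_hits += 1
    return f"contents of {key}"

cache = TtlCache(origin, ttl_seconds=60)
cache.get("video.mp4")  # first request goes all the way to the origin
cache.get("video.mp4")  # second request is served from the cache
print(origin_hits)  # 1
```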
The main thing to understand here is how CDNs actually work. Content Delivery Networks make it very easy to deliver content, something that is hard and expensive when a file is present in only one place in the world and users from all continents start asking for it. For this, CDNs create servers all over the world, which are technically called POPs. POP stands for Point of Presence.
Object stores keep files in one place; CDNs have servers all over the world, which are technically called POPs.
This is very useful for heavy applications that don't just store a bunch of data but serve actual real-world assets, like mp4 and jpeg files.
So we don't just need an object store; we also need a CDN. Object stores are used for storage and CDNs are used for distribution. We have to pay for both services if we are using AWS, and usually the distribution cost is higher.
#cdn | swapnilshelke | |
1,771,923 | All things you need to know about Microsoft Power Automate? | PowerApps Mentor | What is Microsoft Power Automate? Microsoft Power Automate is a cloud-based service that allows users... | 0 | 2024-02-26T03:14:02 | https://dev.to/powerapps_mentor/all-things-you-need-to-know-about-microsoft-power-automate-powerapps-mentor-2n2f | microsoft, microsoft365, powerplatform, powerautomate | **What is Microsoft Power Automate?**
Microsoft Power Automate is a cloud-based service that allows users to automate workflows and tasks across various applications and services without writing extensive code. Formerly known as Microsoft Flow, Power Automate offers a user-friendly interface and a wide range of pre-built connectors, enabling users to create automated workflows that enhance productivity and efficiency.
**Why We Use Microsoft Power Automate?**
- **Increased Productivity**: By automating repetitive tasks, Power Automate frees up time for employees to focus on value-added activities, increasing overall productivity and efficiency.
- **Improved Accuracy**: Automated workflows reduce the likelihood of human error, leading to more accurate and reliable results. This helps businesses maintain data integrity and quality across their operations.
- **Cost Savings**: Automating manual processes reduces the need for manual intervention and lowers operational costs. Additionally, Power Automate's subscription-based pricing model offers scalable options that align with business needs and budgets.
- **Enhanced Collaboration**: Power Automate facilitates collaboration by automating the flow of information between individuals, teams, and systems. This promotes transparency, communication, and teamwork within the organization.
**What Are The Benefits Of Using Microsoft Power Automate?**
**No Code/Low Code**: Power Automate offers a no-code/low-code approach to automation, allowing users to create workflows using a visual interface without the need for extensive coding knowledge. This democratizes automation and empowers business users to automate their own processes without relying on IT or development teams.
**Real-Time Triggers**: Power Automate supports real-time triggers that enable workflows to be triggered instantly based on specific events or conditions. This ensures timely execution of workflows and enables businesses to respond quickly to changes in their environment.
**Workflow Automation**: Power Automate automates repetitive tasks and manual processes, freeing up time for employees to focus on more strategic activities. By automating routine tasks such as data entry, file management, and notifications, businesses can improve operational efficiency and reduce errors.
**Easy Integration**: Power Automate seamlessly integrates with hundreds of popular applications and services, including Microsoft 365, Dynamics 365, SharePoint, Salesforce, and more. This enables users to create automated workflows that span across multiple platforms, consolidating data and streamlining processes.
**Advanced Capabilities**: Power Automate offers advanced capabilities such as conditional logic, looping, and error handling, allowing users to create complex workflows that cater to their unique requirements. From simple approvals to multi-step processes, Power Automate can automate a wide range of business scenarios.
**Mobile Accessibility**: With the Power Automate mobile app, users can monitor and manage their automated workflows on the go. This enables employees to stay connected and productive, even when they're away from their desks.
**Security and Compliance**: Built on the Microsoft Power Platform, Power Automate adheres to robust security and compliance standards, ensuring that sensitive data is protected and regulatory requirements are met.
**Note:- If you want to Join Our Power Platform Training Programs click below link👇👇**
[Let’s Chat via WhatsApp for more Details](https://wa.me/919216147026) | powerapps_mentor |
1,772,042 | The importance of having a red test first in test driven development | My name is Kazys Račkauskas, and I'm writing about test-driven development. In this blog post, I want... | 0 | 2024-02-26T06:44:14 | https://easytdd.dev/the-importance-of-having-a-red-test-first-in-test-driven-development | tdd, testing, javascript, dotnet | My name is Kazys Račkauskas, and I'm writing about test-driven development. In this blog post, I want to discuss the importance of starting with a red test. The red-green-refactor cycle is a well-known mantra in test-driven development. To recap:
* Red - write a piece of test code for the functionality you want to implement. It must be red since the functionality is not there yet. One of the main ideas is that the tests should guide the development process.
* Green - write a piece of code for the test to pass.
* Refactor - eliminate code duplicates, enhance readability, improve aesthetics, possibly extract some methods or create new classes, and optimize. Don't forget to refactor both the production code and the test code.
## Why red?
I very rarely do post-code testing (perhaps there is a better-known expression for writing tests after writing production code). I do that when I encounter a piece of code not covered with tests that I need to put my hands on. And I'm happy when the test is green, making me feel the code works as expected. Test-driven development is not only about writing tests to make sure the production code works as expected; it is rather (as the name suggests) a process of development. A couple of things pop into my head when I think about Red in TDD. First is demand: I produce production code only when it is needed, signalled by a red test; I need to meet the demand. Second is YAGNI (You Aren’t Gonna Need It): I do not produce code that is not needed, and I do not code too much, because it might not be needed.
## I'm just a human after all
I'm only human, and I make mistakes. I make mistakes in production code and test code alike. The idea for this blog post came last week when I encountered situations where the test was not red, and it was very tempting to leave it as is because it was GREEN. This was especially true when there were already several tests, leading me to think that I might have already covered the case. It may sound like an oxymoron, but a GREEN test in the first step of TDD is a RED flag. In the following sections, I will describe cases I encountered last week where I made a mistake in a test, causing it to be green even though it was incorrect.
### it vs if
I'm a backend developer and use TDD a lot. However, sometimes I have to work with front-end and do JavaScript development. I do my best to practice TDD while doing JavaScript development as well.
For JavaScript testing, I use *karma* as a test runner and *jasmine* as a test framework. In *jasmine*, the `describe` function sets a test case, and the `it` function defines an individual test. I want to share how I mistyped `it` as `if` multiple times, leading to a false impression of all passing tests.
```javascript
it('Seat map is selected when person of the seat map is selected',
function () {
let seatMaps = result.Seating.SeatMaps;
expect(seatMaps[0].Selected).toBeTrue();
seatMaps[2].Passengers[1].select();
expect(seatMaps[2].Selected).toBeTrue();
}
);
```
and
```javascript
if('Seat map is selected when person of the seat map is selected',
function () {
let seatMaps = result.Seating.SeatMaps;
expect(seatMaps[0].Selected).toBeTrue();
seatMaps[2].Passengers[1].select();
expect(seatMaps[2].Selected).toBeTrue();
}
);
```
In all modern development environments and text editors for developers, `if` is usually highlighted in a different color. I'm not colorblind, but it is still easy for me to miss it. Additionally, I find that my finger muscle memory is used to typing a two-letter word starting with `i` as `if`. It just happens automatically. As a result, there are no errors, and it is a valid statement. All tests appear green when I run them. It is easy to notice when it is the first test because the runner will show that 0 tests have been run. However, it is more difficult when there are more tests. In this particular situation, my initial thought was: *"Aha, I already covered this functionality."* Fortunately, my experience with treating green tests as a red flag saved me.
### Missing the attribute
This example is in C#. While it's not taken from production code, it captures the essence:
```csharp
[DataRow(1, 2, 3)]
[DataRow(2, 3, 5)]
public void SomeFakeTest(int a, int b, int c)
{
Assert.AreEqual(c, a + b);
}
```
In the MsTest test framework, the `DataRow` attribute alone is insufficient to mark a method as a test. The `TestMethod` or `DataTestMethod` attribute is required for the method to be recognized as a test. I have found myself forgetting this attribute a few times, which resulted in tests not being run and creating a false impression of all tests passing.
### Missing assertion
The following example, which I adapted from my previous blog post, demonstrates the situation. I recall instances when, while writing a test, I think of a test case and a test name, arrange it, then act, and then get distracted. When I return to see where I left off, I usually run the tests to find the red ones and continue from there. But in this particular case, all are green. This case is tricky; it's easy to forget that I left the test unfinished, and it's green because it doesn't have any assertions.
```csharp
[TestCaseSource(typeof(UnexpectedPaymentMessageIsSentWhenInvoiceIsOverpaidOrUnknownCases))]
public async Task UnexpectedPaymentMessageIsSentWhenInvoiceIsOverpaidOrUnknown(
Invoice invoice,
string message)
{
_invoiceRepositoryResult = invoice;
await CallCallback();
}
```
### Misread
Recently, I needed to modify the functionality of a DTO converter to return passengers in order by sequenceNumber, rather than in a somewhat random manner. I'm using `FluentAssertions`, which allows passing configuration in the `BeEquivalentTo` method. I simply typed `config.With`, and Visual Studio's IntelliSense suggested a list of options. I chose the first one that started with `With` and ended with `Ordering`.
```csharp
seatMapDto
.Passengers
.Select(x => x.FirstName)
.Should()
.BeEquivalentTo(
new[]
{
"GUDMARIN",
"GUDMARIANA",
"TOM"
},
config => config.WithoutStrictOrdering(),
"should be ordered by sequenceNumber"
);
```
I ran the test, and it was green. Since I expected the test to be red, I began by making sure that the input data was unordered in the arrange part to create a nonsequential order for the test to be red. Only later did I realize that I had chosen the `WithoutStrictOrdering` configuration instead of `WithStrictOrdering`.
## Wrapping it up
Being human and making mistakes is not easy :). In this blog post, I wanted to showcase how sometimes silly mistakes can give the impression that tests are green, and emphasize the importance of starting with a red test first.
Have you encountered similar situations where a mistake in a test led you to believe that production was working as expected? Please share your experiences in the comments.
If you enjoyed this post, please click "like" and "follow". Feel free to explore my other blog posts, where I write about test-driven development and my pet project EasyTdd, a Visual Studio extension that makes test-driven development simpler. | easytdd |
1,772,060 | The Amreen Infotech: Best Digital Marketing Agency To Grow Business | The Amreen Infotech emerges as the Best Digital Marketing Agency To Grow Business, specializing in a... | 0 | 2024-02-26T07:04:11 | https://dev.to/amreeninfotech/the-amreen-infotech-best-digital-marketing-agency-to-grow-business-1jh4 | The Amreen Infotech emerges as the [Best Digital Marketing Agency To Grow Business,](https://www.theamreeninfotech.com/) specializing in a diverse array of services including graphic design, SEO, PPC, web development, and more. With a focus on delivering exceptional results, their expert team crafts tailored strategies to elevate your online presence and drive business growth, whether that means creating visually stunning graphics, optimizing your website for search engines, or managing effective PPC campaigns. Trust The Amreen Infotech to be your strategic partner in navigating the digital landscape, propelling your brand to new heights of success. To learn more about the company, please click on this link:
https://theamreeninfotechblog.wordpress.com/
| amreeninfotech | |
1,772,175 | Private Job vs Government Job: A Comprehensive Comparison | Introduction: The choice between pursuing a career in the private sector or opting for a government... | 0 | 2024-02-26T09:40:29 | https://dev.to/saching/private-job-vs-government-job-a-comprehensive-comparison-3h2b | Introduction:
The choice between pursuing a career in the private sector or opting for a government job is a crucial decision that individuals often grapple with. Both paths come with their own set of advantages and challenges, and making an informed decision requires a careful consideration of various factors.
Job Security:
One of the primary distinctions between private and government jobs is the level of job security they offer. Government jobs are renowned for their stability and long-term security. Once employed by the government, individuals often benefit from tenure-based job protection, providing a sense of reassurance even during economic uncertainties. On the other hand, private jobs may offer less inherent job security, with employment often dependent on the company's performance and market conditions.
Salary and Benefits:
Compensation is another critical factor to weigh when comparing private and government jobs. Government jobs are typically associated with fixed pay scales, regular salary increments, and a range of benefits such as health insurance, retirement plans, and allowances. Private jobs, especially in competitive industries, may offer higher salaries and bonuses, but the benefit packages can vary widely depending on the employer.
Work-Life Balance:
Government jobs are often perceived as having a better work-life balance. Standard working hours, regulated leave policies, and fewer expectations for overtime contribute to a more predictable schedule. In contrast, certain private sector roles may demand longer working hours and a more dynamic schedule. However, some private companies are increasingly recognizing the importance of work-life balance, implementing flexible working arrangements and remote options.
Career Growth and Advancement:
Both private and government sectors provide opportunities for career growth, but the paths may differ. Government jobs typically follow a structured hierarchy, with promotions and advancements often tied to years of service and performance. Private jobs may offer quicker career advancement based on individual merit, innovation, and the ability to contribute to the company's success. The private sector is often characterized by a faster-paced environment that rewards entrepreneurial skills and adaptability.
Job Satisfaction:
Job satisfaction is a subjective aspect that varies from person to person. Some individuals find satisfaction in the stability and social impact associated with government jobs, while others may prefer the dynamic, results-oriented culture of the private sector. Factors such as workplace culture, job responsibilities, and alignment with personal values play a crucial role in determining job satisfaction.
Conclusion:
The decision between a private job and a government job ultimately depends on individual preferences, career goals, and priorities. It is essential to carefully evaluate the pros and cons of each option and consider long-term aspirations. Whether prioritizing job security, competitive compensation, or a dynamic work environment, individuals can make informed decisions by weighing the factors that matter most to them.
When utilizing LinkedIn for private job searches, you can focus on building a professional online presence, networking, and actively engaging with relevant industry content. Here's a brief guide:
<a href="https://www.linkedin.com/">LinkedIn</a> Private Job Search:
**Optimize Your Profile:**
- Ensure your LinkedIn profile is complete, highlighting your skills, experiences, and accomplishments.
- Use a professional profile picture and craft a compelling headline.
**Build Your Network:**
- Connect with professionals in your industry, colleagues, and alumni.
- Engage in discussions, comment on posts, and share relevant content to increase your visibility.
**Job Search and Alerts:**
- Use LinkedIn's job search feature to find relevant positions.
- Set up job alerts for specific keywords, locations, and industries.
**Follow Companies:**
- Follow companies you are interested in working for to stay updated on their activities and job postings.
**Join Groups:**
- Join industry-specific groups to connect with professionals, participate in discussions, and gain insights into job opportunities.
**Recommendations and Endorsements:**
- Request and provide recommendations to strengthen your profile.
- Seek endorsements for your skills to enhance your credibility.
As for government job searches, <a href="https://sarkarijobwala.in"> Sarkari Job Wala</a> can be a valuable resource:
<a href="https://sarkarijobwala.in"> Sarkari Job Wala</a> for Government Job Search:
**Visit the Website:**
- Go to <a href="https://sarkarijobwala.in">https://sarkarijobwala.in</a> to explore the latest government job updates.
**Job Categories:**
- Browse through the various job categories to find opportunities that match your skills and preferences.
**Notifications:**
- Subscribe to notifications or newsletters to receive timely updates on government job openings.
**Exam Dates and Results:**
- Utilize the website for information on exam dates, results, and other relevant details.
**Application Process:**
- Follow the instructions provided on the website for the application process of specific government jobs.
**Stay Informed:**
- Regularly check the website for the most recent updates and announcements regarding Sarkari Naukri.
By leveraging these strategies, you can effectively navigate both LinkedIn for private job searches and <a href="https://sarkarijobwala.in"> Sarkari Job Wala</a> for government job searches to enhance your career prospects.
| saching | |
1,772,203 | Swap Values of Variables Without Temporary Variable : Python challenge 21 | https://youtu.be/ZG-G_WHxZsk | 0 | 2024-02-26T10:13:03 | https://dev.to/ruthrina/swap-values-of-variables-without-temporary-variable-python-challenge-21-3g80 | https://youtu.be/ZG-G_WHxZsk | ruthrina | |
1,772,422 | 🔮 Adobe Redefines Design - Inside Spectrum 2's Visionary Update | Hey everyone ✌️ Here's a quick look at this week's newsletter: 🚀 Your 2024 Boilerplate 🍎 iOS... | 0 | 2024-02-29T12:50:00 | https://dev.to/adam/adobe-redefines-design-inside-spectrum-2s-visionary-update-n59 | design, css, webdev, javascript |
**Hey everyone** ✌️ Here's a quick look at this week's newsletter:
🚀 Your 2024 Boilerplate
🍎 iOS Scrollbar Solved
🐻 Price Pages with Pizzazz
Enjoy this week's edition 👋 - Adam at Unicorn Club.
---
Sponsored by [Webflow](https://go.unicornclub.dev/webflow)
## [Experience the power of code. Without writing it.](https://go.unicornclub.dev/webflow)
[](https://go.unicornclub.dev/webflow)
Take control of HTML5, CSS3, and JavaScript in a completely visual canvas — and let Webflow translate your design into clean, semantic code that’s ready to publish to the web, or hand off to developers.
[**Start building**](https://go.unicornclub.dev/webflow)
---
### 🦄 This week's best
[**Introducing Spectrum 2: Our vision for the future of Adobe experience design**](https://medium.com/thinking-design/introducing-spectrum-2-our-vision-for-the-future-of-adobe-experience-design-a6c34441d2bb?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
A preview of the comprehensive update coming to Adobe’s design system.
[**A CSS project boilerplate**](https://piccalil.li/blog/a-css-project-boilerplate/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
For the many folks who ask how I write CSS since removing Sass, this is how I and the Set Studio team do it in 2024.
[**How to fix the invisible scrollbar issue in iOS browsers**](https://frontendmasters.com/blog/how-to-fix-the-invisible-scrollbar-issue-in-ios/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
On Apple’s platforms, most notably on iOS, the page scrollbar is placed inside the viewport and laid on top of web content.
[**Bear Powered CSS Pricing Page w/ :has() 🤙**](https://codepen.io/jh3y/pen/oNVmQZB?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
A CodePen showing off the power of CSS :has()
---
**🧠 Fun Fact**
**First Photo Uploaded to the Web** - The first photo uploaded on the web was of the comedy band "Les Horribles Cernettes" in 1992, marking the start of the web’s ability to share visual content alongside text.
---
[**The Good, The Bad, The Web Components**](https://www.zachleat.com/web/good-bad-web-components/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
The humble component. The building block of modern web development.
[**CSS is Logical**](https://geoffgraham.me/css-is-logical/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
CSS be weird, but it not be illogical.
[**Don’t Disable Form Controls**](https://adrianroselli.com/2024/02/dont-disable-form-controls.html?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
Just another usability and accessibility pro telling authors not to do the thing they continue to do.
[**How to create rounded gradient borders with any background in CSS**](https://benfrain.com/how-to-create-rounded-gradient-borders-with-any-background-in-css/?utm_source=unicornclub.dev&utm_medium=newsletter&utm_campaign=unicornclub.dev&ref=unicornclub.dev)
A solution! For many months I have been trying to find a decent solution to rounded gradient borders that allow a semi-transparent or blurred main background.
### 🔥 Promoted Links
_Share with 2,000+ readers, book a [classified ad](https://unicornclub.dev/sponsorship#classified-placement)._
[**Unsupervised Learning**](https://go.unicornclub.dev/unsupervised-learning)
A security, AI, and meaning-focused newsletter/podcast that looks at how best to thrive as humans in a post-AI world.
[**TLDR - Keep up with Tech in 5 minutes**](https://go.unicornclub.dev/tldr-newsletter)
Get the most important tech news in a free daily email. Read by +1,250,000 software engineers and tech workers.
#### Support the newsletter
If you find Unicorn Club useful and want to support our work, here are a few ways to do that:
🚀 [Forward to a friend](https://preview.mailerlite.io/preview/146509/emails/114241924420863542)
📨 Recommend friends to [subscribe](https://unicornclub.dev/)
📢 [Sponsor](https://unicornclub.dev/sponsorship) or book a [classified ad](https://unicornclub.dev/sponsorship#classified-placement)
☕️ [Buy me a coffee](https://www.buymeacoffee.com/adammarsdenuk)
_Thanks for reading ❤️
[@AdamMarsdenUK](https://twitter.com/AdamMarsdenUK) from Unicorn Club_ | adam |
1,772,719 | Summarizer Website using MEAN Stack | Introduction: In today's fast-paced world, dealing with information overload is a common... | 0 | 2024-02-26T18:09:34 | https://dev.to/siddheshuncodes/summarizer-website-using-mean-stack-5hbj | webdev, javascript, programming, tutorial | ## **Introduction:**
In today's fast-paced world, dealing with information overload is a common challenge. Picture having a tool that swiftly summarizes lengthy articles, allowing you to grasp the main points without spending excessive time reading. In this article, we'll walk you through the process of crafting a Summarizer Website using the MEAN Stack.
**Project Preview**:
Before we jump into the details, let's take a sneak peek at what our Summarizer Website will look like. This project is designed to offer users a straightforward and intuitive interface. Users can input large amounts of text and, in return, receive concise summaries.
**Prerequisites:**
Before you start coding, make sure you have the following prerequisites installed on your system:
- MongoDB
- Express.js
- Angular
- Node.js
**Approach:**
Our approach to building the Summarizer Website using the MEAN Stack involves making the most of each component's strengths:
**a. MongoDB: Setting up the Database**
MongoDB acts as our database to store user data and summarized content. Begin by installing and configuring MongoDB on your system. Create a database for user information and another collection for storing summaries.
**b. Express.js and Node.js: Building the Backend**
Express.js, a web application framework for Node.js, serves as our backend framework. Node.js manages server-side operations. Develop RESTful APIs using Express to handle data flow between the frontend and the MongoDB database. Set up routes for user input, text processing, and summary retrieval.
**c. Angular: Developing the Frontend**
Angular is our frontend framework, providing a robust and structured environment for UI development. Design a clean and user-friendly interface where users can input extensive text. Use Angular components to handle different parts of the UI, such as input forms and result displays. Implement services to communicate with backend APIs.
**d. Connecting Frontend and Backend: Ensuring Seamless Communication**
Establish a seamless connection between the frontend and backend. Use Angular services to make HTTP requests to the Express.js backend. Ensure proper handling of asynchronous operations and error cases. Implement mechanisms for sending user input from the frontend to the backend and receiving summarized content in return.
This approach ensures that each component in the MEAN Stack performs its specific role effectively. MongoDB stores data, Express.js and Node.js handle server-side operations, and Angular provides a dynamic and interactive user interface. As we proceed with the implementation steps, we'll delve deeper into each aspect, guiding you through the intricacies of building this comprehensive web application.
Let's break down the steps to create our Summarizer Website:
**a. Set up the MongoDB Database:**
Think of the MongoDB database as a digital bookshelf where we neatly organize user data and summaries. To set it up, install MongoDB on your computer. It's like creating different shelves – one for storing user details and another for holding the summaries.
**b. Create the Backend using Express.js and Node.js:**
Now, let's talk about the brain behind our website – the backend. Express.js and Node.js are like a dynamic duo. Express.js is the traffic cop, guiding requests, and Node.js is the hardworking assistant, ensuring everything gets done efficiently. Together, they handle requests and communicate with the database.
**c. Develop the Frontend using Angular:**
Moving on to the part users interact with – the frontend. Angular helps us design a visually appealing and user-friendly interface. Think of it as crafting the front cover of a book – it's what users see and engage with. We'll create forms for users to input text and areas to display the summarized results.
**d. Connect the Frontend and Backend:**
Imagine the frontend and backend as two friends having a conversation. We want them to share information seamlessly. To achieve this, we'll set up a system where the user's input from the frontend travels to the backend. The backend processes it, and then the summarized result smoothly comes back to the frontend. It's like a friendly chat between the user interface and the brain behind the scenes.
**Code Example**
**a. Set up the MongoDB Database**
```javascript
// Import required module
const mongoose = require('mongoose');
// Connect to MongoDB database
mongoose.connect('mongodb://localhost/summarizerDB', { useNewUrlParser: true, useUnifiedTopology: true });
// Define a schema for user data
const userSchema = new mongoose.Schema({
username: String,
email: String,
// Add more fields as needed
});
// Define a schema for summaries
const summarySchema = new mongoose.Schema({
userId: mongoose.Schema.Types.ObjectId,
text: String,
summary: String,
});
// Create models based on the schemas
const User = mongoose.model('User', userSchema);
const Summary = mongoose.model('Summary', summarySchema);
// Example usage:
// const newUser = new User({ username: 'JohnDoe', email: 'john@example.com' });
// newUser.save();
// const newSummary = new Summary({ userId: newUser._id, text: 'Lorem ipsum...', summary: 'Summary goes here.' });
// newSummary.save();
```
**b. Create the Backend using Express.js and Node.js**
```javascript
// Import required modules
const express = require('express');
const bodyParser = require('body-parser');
const mongoose = require('mongoose');
// Create Express application
const app = express();
const port = 3000;
// Connect to MongoDB database (the Summary model from the previous snippet is assumed to be defined or imported here)
mongoose.connect('mongodb://localhost/summarizerDB', { useNewUrlParser: true, useUnifiedTopology: true });
// Middleware for parsing JSON
app.use(bodyParser.json());
// Define routes for handling user input and summaries
app.post('/api/submitText', async (req, res) => {
try {
// Process the incoming text and generate a summary
const { userId, text } = req.body;
const summary = processTextAndGenerateSummary(text);
// Save the summary to the database
const newSummary = new Summary({ userId, text, summary });
await newSummary.save();
res.status(200).json({ success: true, summary });
} catch (error) {
console.error(error);
res.status(500).json({ success: false, error: 'Internal server error' });
}
});
// Start the server
app.listen(port, () => {
console.log(`Server is running on port ${port}`);
});
// Function to process text and generate a summary (replace with your actual summarization logic)
function processTextAndGenerateSummary(text) {
// Simplified example: Just return the first 50 characters as a summary
return text.slice(0, 50);
}
```
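The `processTextAndGenerateSummary` stub above just truncates the text to 50 characters. As a hedged illustration of a slightly more realistic approach, a basic extractive summarizer can score sentences by word frequency and keep the top ones. This sketch is in Python rather than the project's JavaScript, and the function name and scoring scheme are invented for the example, not part of the MEAN project:

```python
import re
from collections import Counter

def summarize(text, max_sentences=2):
    """Return the `max_sentences` highest-scoring sentences, in original order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    if len(sentences) <= max_sentences:
        return " ".join(sentences)
    # Score each word by how often it appears across the whole text.
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # A sentence's score is the sum of its word frequencies.
    def score(sentence):
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))
    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Keep the chosen sentences in the order they appeared in the input.
    return " ".join(s for s in sentences if s in top)
```

A production service would typically use a proper NLP library or a hosted model instead, but the same request/response flow through the Express route applies.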
**c. Develop the Frontend using Angular**
```typescript
// Import required modules
import { Component } from '@angular/core';
import { HttpClient } from '@angular/common/http';
@Component({
selector: 'app-summarizer',
templateUrl: './summarizer.component.html',
styleUrls: ['./summarizer.component.css'],
})
export class SummarizerComponent {
userInput = '';
summary = '';
constructor(private http: HttpClient) {}
submitText() {
// Send user input to the backend for processing
this.http
.post<any>('http://localhost:3000/api/submitText', { userId: 'user123', text: this.userInput })
.subscribe((response) => {
if (response.success) {
// Update the summary on the frontend
this.summary = response.summary;
} else {
console.error(response.error);
}
});
}
}
```
**References:**
- **MongoDB Documentation**: official documentation for installation and usage.
- **Express.js Documentation**: official documentation for creating the backend.
- **Angular Documentation**: official documentation for building the frontend.
- **Node.js Documentation**: official documentation for server-side JavaScript.
- **MEAN Stack Overview**: an overview of the MEAN Stack and its components.
- **HttpClientModule (Angular)**: Angular documentation for making HTTP requests.
- **Body-parser (npm)**: middleware for handling JSON data in Express.
- **Mongoose Documentation**: documentation for MongoDB object modeling.
- **Sample projects on GitHub**: Angular, Express.js, and Node.js sample projects for reference.
- **Summarization algorithms and NLP (Natural Language Processing)**: research and explore different summarization algorithms and NLP techniques for improving your text summarization logic.
By referring to these resources, you'll have a comprehensive understanding of the technologies used in your Summarizer Website project.
| siddheshuncodes |
1,772,896 | HOW TO USE EOS HYPERION HISTORY NODE | docs.google.com | 0 | 2024-02-26T21:13:02 | https://docs.google.com/document/d/e/2PACX-1vQwi6xYSFoVMWJHYZe5K5JhZajgC3xVyRDYli41Zz0LO1KIrIuA7rrPyjGBQEI16YtJ37Ur7AEO6iCW/pub | {% embed https://docs.google.com/document/d/e/2PACX-1vQwi6xYSFoVMWJHYZe5K5JhZajgC3xVyRDYli41Zz0LO1KIrIuA7rrPyjGBQEI16YtJ37Ur7AEO6iCW/pub %} | ebuka | |
1,772,919 | Understanding AWS Storage: S3 vs. EBS – Use Cases | Introduction Amazon Web Services (AWS) provides a comprehensive suite of cloud storage solutions,... | 26,581 | 2024-02-26T21:51:40 | https://dev.to/nwhitmont/understanding-aws-storage-s3-vs-ebs-use-cases-2jpa | aws, s3, ebs, cloud | **Introduction**
Amazon Web Services (AWS) provides a comprehensive suite of cloud storage solutions, each tailored for specific purposes. Among these, AWS S3 (Simple Storage Service) and AWS EBS (Elastic Block Store) are two of the most widely used. While both are powerful storage technologies, understanding their fundamental differences is crucial in selecting the right tool for your workload. Let's dive into the nuances of S3 and EBS.
**AWS S3: Object Storage Powerhouse**
* **Object-Based:** S3 is designed to store data as objects. Think of objects as files along with associated metadata (information about the file). This makes S3 ideal for storing unstructured data like images, videos, log files, backups, and documents.
* **Scalability and Durability:** S3 scales virtually limitlessly and is renowned for its 99.999999999% (eleven 9s) durability. Your data is replicated across multiple availability zones within an AWS region, ensuring it remains accessible even in the event of hardware failures.
* **Accessibility:** S3 is accessible via HTTP/HTTPS, making it well-suited for web content delivery, data archives, or as a backend for large-scale data analytics applications.
**AWS EBS: Block Storage for Your Virtual Machines**
* **Block-Based:** EBS provides block-level storage volumes that you attach to EC2 instances (AWS virtual machines). Think of an EBS volume like a traditional hard drive that your operating system interacts with directly.
* **Performance and Customization:** EBS offers a diverse range of volume types (General Purpose SSD, Provisioned IOPS SSD, Throughput Optimized HDD, etc.), allowing you to fine-tune performance and I/O characteristics for your specific workload.
* **Bound to EC2 Instances:** An EBS volume is tied to a single EC2 instance in a specific availability zone. This makes EBS a natural choice for applications requiring low-latency, high-performance storage like databases or boot volumes for operating systems.
**Key Differences: A Summary**
| Feature | AWS S3 (Object Storage) | AWS EBS (Block Storage) |
|---|---|---|
| Storage Type | Object-based | Block-based |
| Ideal Use Cases | Images, videos, backups, static website hosting, data lakes | Databases, operating system boot volumes, applications requiring direct file system access|
| Scalability | Highly scalable | Scalable, with limits per volume |
| Accessibility | HTTP/HTTPS | Attached to individual EC2 instances |
| Performance | Varies based on object size and access patterns | Fine-grained control with various volume types |
**When to Choose Which**
Here's a quick guide to selecting between S3 and EBS:
* **S3:** Large amounts of unstructured data, web content, data that needs to be accessed from anywhere over the internet.
* **EBS:** Applications requiring direct block-level storage access, databases, file systems that an operating system interacts with directly.
**In Conclusion**
AWS S3 and AWS EBS are both invaluable components of the AWS ecosystem. Choosing the right storage solution depends entirely on your specific needs. Understanding the differences in use cases, access patterns, and performance characteristics discussed in this article will empower you to make informed decisions as you architect your AWS solutions.
| nwhitmont |
1,773,082 | 🚀 Unleashing the Power of AWS Lambda for Image Compression in Laravel Application 🚀 | Hey Dev community! 👋 Today, let's explore the magic of AWS Lambda and seamlessly integrate it into a... | 0 | 2024-02-27T04:18:08 | https://dev.to/anwarsr/unleashing-the-power-of-aws-lambda-for-image-compression-in-laravel-application-27fi | lamda, aws, serverless, laravel | Hey Dev community! 👋 Today, let's explore the magic of AWS Lambda and seamlessly integrate it into a Laravel application for uploading raw images to S3 and compressing them on the fly! 🖼️💡
**Step 1: Set Up Your Laravel Application**
Ensure you have a Laravel project up and running. If not, use Composer to create a new project:
```bash
composer create-project --prefer-dist laravel/laravel my-laravel-app
```
**Step 2: Configure AWS S3 Bucket**
Create an S3 bucket on AWS to store both raw and compressed images. Note down your access key, secret key, and bucket name.
**Step 3: Integrate AWS SDK in Laravel**
Install the AWS SDK for PHP using Composer:
```bash
composer require aws/aws-sdk-php
```
Configure your AWS credentials in the **`.env`** file:
```
AWS_ACCESS_KEY_ID=your-access-key-id
AWS_SECRET_ACCESS_KEY=your-secret-access-key
AWS_DEFAULT_REGION=your-region # e.g., 'us-east-1'
AWS_BUCKET=your-s3-bucket-name
```
**Step 4: Upload Image to S3 in Laravel Controller**
In your Laravel controller, use the AWS SDK to upload the raw image to S3:
```php
// Example Laravel Controller
use Illuminate\Http\Request;
use Aws\S3\S3Client;

class ImageController extends Controller
{
    public function uploadToS3(Request $request)
    {
        $file = $request->file('image');
        $key = 'raw/' . $file->getClientOriginalName();

        $s3 = new S3Client([
            'region' => config('filesystems.disks.s3.region'),
            'version' => 'latest',
        ]);

        $s3->putObject([
            'Bucket' => config('filesystems.disks.s3.bucket'),
            'Key' => $key,
            'Body' => fopen($file->getRealPath(), 'r'),
            'ACL' => 'public-read',
        ]);

        // Your logic for further processing or response
    }
}
```
**Step 5: Set Up AWS Lambda for Image Compression**
Create an AWS Lambda function with permission to read from and write to your S3 bucket. The example function below implements the image compression in Python using the Pillow (PIL) library.
**Step 6: Trigger Lambda on S3 Object Creation**
Configure an S3 event trigger to invoke your Lambda function whenever a new object is created in the bucket.
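As a sketch, the bucket notification for this step might look like the following JSON (the ARN and function name are placeholders). Filtering on the `raw/` prefix matters: without it, the function would also fire for the `compressed/` objects it writes, re-triggering itself.

```json
{
  "LambdaFunctionConfigurations": [
    {
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:compress-image",
      "Events": ["s3:ObjectCreated:*"],
      "Filter": {
        "Key": {
          "FilterRules": [{ "Name": "prefix", "Value": "raw/" }]
        }
      }
    }
  ]
}
```

This is the shape expected by `aws s3api put-bucket-notification-configuration --notification-configuration file://config.json`; the same trigger can also be set up from the S3 console.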
**Lambda Function for Image Compression:**
```python
# Example Lambda function code
import json
import boto3
from PIL import Image
from io import BytesIO

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # Get the S3 bucket and key from the event
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = event['Records'][0]['s3']['object']['key']

    # Download the image from S3
    response = s3.get_object(Bucket=bucket, Key=key)
    image_data = response['Body'].read()

    # Compress the image using Pillow
    compressed_image_data = compress_image(image_data)

    # Upload the compressed image back to S3
    compressed_key = 'compressed/' + key
    s3.put_object(Body=compressed_image_data, Bucket=bucket, Key=compressed_key)

    return {
        'statusCode': 200,
        'body': json.dumps('Image compression successful!')
    }

def compress_image(image_data):
    image = Image.open(BytesIO(image_data))

    # Apply your compression logic using Pillow or any other imaging library
    # Example: image.thumbnail((500, 500), Image.LANCZOS)

    # Save the compressed image to BytesIO
    compressed_image_data = BytesIO()
    image.save(compressed_image_data, format='JPEG')  # Adjust format based on your requirements
    return compressed_image_data.getvalue()
```
**Access Compressed Image:**
When you upload an image to the 'raw/' directory in S3, the Lambda function will be triggered.
The compressed image will be stored in the 'compressed/' directory.
To access the compressed image with lower size but the same quality, use the appropriate URL, for example:
```bash
https://your-s3-bucket.s3.amazonaws.com/compressed/your-image.jpg
```
Now, your Lambda function will automatically compress images upon upload to S3, maintaining quality while reducing file size. Adjust the compression logic in the Lambda function as needed for your specific use case. Feel free to connect for more details or discussions! 🚀👨💻
#AWSLambda #ImageCompression #Serverless #S3 #LaravelApplication #TechInnovation | anwarsr |
1,773,151 | Cloudinary to s3 bucket db changes | If you have a database collection that has url field , containing cloudinary or any cloud based... | 0 | 2024-02-27T06:02:14 | https://dev.to/chetan_2708/cloudinary-to-s3-bucket-db-changes-3mp6 | webdev, javascript, node, mongodb | If you have a database collection that has url field , containing cloudinary or any cloud based platform links and you want to shift/ migrate the urls specifically to s3 bucket then you can use this script.
```
const query = { url: { $regex: /^https:\/\/res\.cloudinary\.com/i } };
const docs = await Audio.find(query).toArray();

const audioList = [];

// Iterate over each document and process one at a time
for (const doc of docs) {
  const _id = doc._id;
  const url = doc.url;
  const key = `${_id}.mp3`;
  const filePath = path.join("L:/Check Audios", key);

  // Download and save the audio locally
  await downloadAndSaveAudio(url, filePath);

  // Upload the audio file to S3 (use the bare filename as the key,
  // not the full local path)
  const uploadParams = {
    Bucket: bucketName,
    Key: key,
    Body: fs.createReadStream(filePath),
    ContentType: "audio/mp3",
  };
  await s3Client.send(new PutObjectCommand(uploadParams));

  // Generate the S3 URL
  const s3Url = `https://${bucketName}.s3.${bucketRegion}.amazonaws.com/${key}`;

  // Update the database with the S3 URL
  await Audio.updateOne({ _id }, { $set: { url: s3Url } });

  // Push information to audioList
  audioList.push({ _id, s3Url });
}
```
| chetan_2708 |
1,773,162 | Unleashing the Power of Edge Computing: A Technical Deep Dive | Introduction: In the ever-evolving landscape of computing, edge computing has emerged as a... | 0 | 2024-02-27T06:16:31 | https://dev.to/vikasverma/unleashing-the-power-of-edge-computing-a-technical-deep-dive-8ei | **Introduction**:
In the ever-evolving landscape of computing, edge computing has emerged as a transformative paradigm, promising to revolutionize how we process and analyze data. Unlike traditional cloud computing models that centralize data processing in remote data centers, edge computing distributes computing resources closer to the data source. This proximity brings about a plethora of advantages, from reduced latency to enhanced scalability. In this technical post, we will delve into the intricacies of edge computing, exploring its architecture, key components, applications, and the future it holds.
**Understanding Edge Computing Architecture:**
1. Edge Devices:
At the core of edge computing are the edge devices, which can range from IoT devices and sensors to smartphones and edge servers. These devices collect and generate data, acting as the initial layer in the edge computing architecture.
2. Edge Computing Nodes:
Edge nodes are the intermediate layer that processes and filters the data collected by edge devices. These nodes are strategically placed in close proximity to the data sources, reducing the need for data to traverse long distances. Edge nodes can range from dedicated servers to edge routers and gateways.
3. Cloud Infrastructure:
While edge computing emphasizes local processing, it is not isolated from the cloud. The cloud infrastructure serves as a centralized management and coordination layer, allowing for seamless communication between edge nodes and enabling centralized control, monitoring, and updates.
**Key Components of Edge Computing**:
1. Edge Analytics:
Edge analytics involves processing data locally on the edge devices or nodes, minimizing the need to send raw data to the cloud for analysis. This ensures real-time insights and significantly reduces latency.
2. Edge Security:
Security is a paramount concern in edge computing. With data processed closer to the source, robust security measures are implemented at both the edge devices and nodes to safeguard against potential threats and vulnerabilities.
3. Edge Storage:
Edge storage involves storing critical data locally, allowing for quicker access and reducing the dependence on centralized cloud storage. It also facilitates offline operation and ensures data availability even in connectivity-challenged environments.
**Applications of Edge Computing**:
1. IoT and Smart Cities:
Edge computing plays a pivotal role in IoT deployments, enabling real-time processing of sensor data for applications such as smart cities, industrial IoT, and connected vehicles.
2. Augmented Reality (AR) and Virtual Reality (VR):
AR and VR applications benefit from edge computing by minimizing latency, providing users with a seamless and immersive experience.
3. Healthcare:
In healthcare, edge computing enables the processing of sensitive patient data at the source, ensuring timely and secure access to critical information, especially in remote patient monitoring and telemedicine.
**Future Trends and Challenges**:
1. 5G Integration:
The deployment of 5G networks will further enhance edge computing capabilities by providing faster and more reliable connectivity, enabling a new era of applications and services.
2. Standardization:
As edge computing continues to evolve, the establishment of industry standards becomes crucial to ensure interoperability, security, and seamless integration across diverse ecosystems.
3. Edge AI:
The integration of artificial intelligence at the edge will become more prevalent, allowing for intelligent decision-making directly on edge devices without constant reliance on centralized cloud resources.
**Conclusion**:
Edge computing represents a paradigm shift in the world of computing, ushering in a new era of efficiency, speed, and scalability. As we continue to witness advancements in technology, the role of edge computing will only become more pronounced, influencing a myriad of industries and shaping the way we interact with and leverage data. Embracing this transformative approach opens up a realm of possibilities, empowering developers to create innovative, responsive, and intelligent applications that meet the demands of our increasingly connected world. | vikasverma | |
1,773,685 | Find & Replace in whole folder or all files | Streamlining Code Editing: VS Code and Online Tools for Find & Replace In the world of... | 0 | 2024-02-27T13:47:49 | https://dev.to/sh20raj/find-replace-in-whole-folder-or-all-files-1fo3 | abotwrotethis |
# Streamlining Code Editing: VS Code and Online Tools for Find & Replace
In the world of software development, efficiency and precision are paramount. When working with large codebases, making widespread changes while maintaining accuracy is a challenge. This article explores how the powerful features of Visual Studio Code (VS Code) can be complemented with online tools for find and replace operations, as well as encoding and decoding text.
## Leveraging Visual Studio Code
### Find and Replace
Visual Studio Code, a popular code editor, offers robust tools for searching and replacing text across entire projects. Here's a quick guide on using this feature:
1. **Open Your Project**: Begin by opening your GitHub repository in VS Code.
2. **Find in Files**: Press `Ctrl+Shift+F` (Windows/Linux) or `Cmd+Shift+F` (Mac) to open the "Find in Files" search.
3. **Search and Replace**: Enter your search term and the replacement text.
4. **Replace All**: Click on the "Replace All" button (⚙️) to replace all occurrences in all files.

5. **Review Changes**: After replacing, review the changes in VS Code to ensure correctness.
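If you prefer the terminal over the VS Code UI, the same project-wide replace can be done with `grep` and `sed` (GNU `sed` shown; on macOS/BSD use `sed -i ''`). A small self-contained demo:

```bash
# Demo: bulk find & replace with grep + sed
mkdir -p demo && printf 'old_text here\n' > demo/a.txt

# Replace old_text with new_text in every file under demo/ that contains it
grep -rl "old_text" demo/ | xargs -r sed -i 's/old_text/new_text/g'

cat demo/a.txt   # -> new_text here
```

Point the `grep` at your project directory instead of `demo/` for a real run, and commit before doing so in case you need to revert.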
### Commit and Push
Once your changes are made, it's important to commit and push them to your GitHub repository:
1. **Stage Changes**: Go to the Source Control view (`Ctrl+Shift+G`) and stage the modified files by clicking the `+` button.
2. **Commit**: Enter a commit message and click the check mark to commit.
3. **Push**: Click on the ellipsis (`...`) in the Source Control view and choose "Push" to push your committed changes.
## Enhancing with Online Tools
### Encoding and Decoding
Often, developers need to encode or decode data within their files. This is where online tools come in handy. Here are two useful tools for encoding and decoding:
1. **EMN178 Online Tools** ([Link](https://emn178.github.io/online-tools/)):
- Offers various encoding/decoding options such as Base64, URL Encode/Decode, and more.
- Simply paste your text, select the desired operation, and copy the result.
2. **SOPKIT Encoding Tool** ([Link](https://sopkit.github.io/Encoding/)):
- Provides a straightforward interface for URL encoding/decoding.
- Enter your text, click "Encode" or "Decode," and copy the transformed text.
### Example Workflow
Let's walk through a hypothetical scenario using these tools:
1. **Find and Replace in VS Code**:
- Locate all occurrences of "old_text" and replace them with "new_text."
2. **Encode with EMN178**:
- Select the modified text and encode it using Base64.
3. **Encode URLs with SOPKIT**:
- Encode any URLs within the files using SOPKIT's URL encoding tool.
4. **Commit and Push in VS Code**:
- Stage the modified files, commit with a descriptive message, and push to GitHub.
## Conclusion
By combining the robust find and replace capabilities of Visual Studio Code with online encoding/decoding tools, developers can streamline their workflow and make precise changes across codebases. Whether it's replacing variables, encoding data, or transforming URLs, this integrated approach ensures efficiency and accuracy in code editing.
Next time you find yourself in need of making widespread changes or encoding text within your projects, consider this powerful combination of VS Code and online tools. Your code editing process will be smoother, faster, and more precise than ever before.
| sh20raj |
1,773,797 | Mautic Open Startup Report #9 - December 2023 | Key points This month has seen a growth in memberships, however we have not met the target... | 0 | 2024-02-27T15:53:22 | https://dev.to/mautic/mautic-open-startup-report-9-december-2023-475e | mautic, opensource, openstartup, marketingautomation | ## Key points
This month has seen growth in memberships; however, we have not met the target we set for our end-of-year financial goals, falling just under $38,000 short. The year end saw us with a positive balance of just under $68,000 across our various projects on Open Collective. Greater focus will need to be paid to bringing in more revenue streams going forward, to ensure Mautic's future financial stability.
We've seen substantial contributions towards the Mautic 5 release which will be made as General Availability in early January.
There continues to be strong growth in adoption, with a 120% increase compared with Q4 2022 in terms of deployed websites with Mautic tracking.
## Finances
This month saw a strong growth in memberships, boosted by the Council elections which required membership to vote for nominated candidates. There has also been some movement in corporate memberships, but uptake has been slower than anticipated.
We end the year $37,756 short of the goal that I had set for the end of this financial year in respect of income from individual and corporate memberships, sponsorships, proceeds from MautiCon events and the trials project which will be launching next quarter.
With a year-end balance across all our accounts at Open Collective of $67,714.82 we can carry such a shortfall, but must continue to drive forward revenue-growth opportunities in the new year to ensure that we meet our goal of $205,370 income against a projected expenditure of $147,491.
The end of year balance of the relevant accounts is provided as follows for this year and the preceding two years for comparison - all amounts are in **USD**.
| Account | **Balance 31-12-2023** | **Balance 31-12-2022** | **Balance 31-12-2021** |
| ---- | ---- | ---- | ---- |
| Main collective | 55,454.68 | 11,084.85 | 13,336.85 |
| Infrastructure working group | 1,014.81 | 1,052.22 | 3,958.22 |
| Marketplace initiative | 391.23 | 293.31 | 260.67 |
| MautiCon India | 0.11 | | |
| Mautic Meetup Valencia | 217.01 | | |
| Community Team | 167.64 | 144 | 2088 |
| Bounties | 3,600 | | |
| Next Generation | 1,212.39 | 1212.03 | 1000 |
| Education Team | 0.61 | | |
| Builders initiative | 1,225 | 0 | 0 |
| Developer Days event | 16.27 | | |
| Marketing team | 1,224.01 | 2103 | 3235 |
| LATAM community | 382.84 | | |
| Product Team | 379.92 | 2689.92 | |
| Install/Upgrade initiative | 428.3 | 428.3 | 428.3 |
| Resource management initiative | 1,000 | 1000 | 1000 |
| Composer initiative | 1,000 | 1000 | 1000 |
| Season of Docs 22 | | 7799.22 | |
| MautiCon Europe | | | 1009.06 |
| MautiCon Global 22 | | | 4875 |
| Total cash at bank | 67,714.82 | 28,806.85 | 26,307.04 |
Several of the accounts (for example team projects) have positive funds which were not spent from the previous year's budget. These will be factored into any future budget requests accordingly.
### Income
This month is the last month for Acquia's monthly financial support, and we also signed a new Bronze sponsor this month. An invoice was also raised for a Diamond sponsor at the end of the year, which will be paid in the new year.
There was a strong growth in individual memberships, which was largely driven by the Council elections requiring membership to vote for the nominated candidates.
We continue to have some income from regular monthly sponsorships which is much appreciated, and this month Mautic Meetup Valencia signed their first sponsor of the monthly meetups.
| Description | Amount |
| ---- | ---- |
| Corporate membership | $11,200 |
| Sponsors | $815 |
| Individual membership | $760 |
| Meetup sponsor | $100 |
### Expenditure
This month we have incurred costs for the upcoming Mautic Conference India event which needed paying upfront and will be recouped via sponsorship and ticket sales. Other costs remain stable.
| Description | Amount |
| ---- | ---- |
| Employment (November) | $8,685.28 |
| MautiCon India venue | $2,332.82 |
| Host fee | $1,287.50 |
| Infrastructure | $343.15 |
| Sessionize license (50% off) | $249.50 |
| Postage | $4.08 |
## Contributions
Kudos to these organizations that are taking Mautic to great heights and driving our growth! All set to delve into the numbers?
Let's take a look at the statistics from the last 90 days: [Mautic | Last 90 Days](https://savannahcrm.com/public/overview/2b4590bf-cad0-4c71-870a-6f942a25f8fe)
You can now view this month’s report here: [Mautic | Monthly Report for December 2023](https://savannahcrm.com/public/report/0d45eb42-c3bf-4865-82c9-9c643e702157)
⬆️ = Increase from last month
⬇️ = Decrease from last month
### Organizations
#### Most active companies
rectorphp 246 (:arrow_up: 602.86%)
Axelerant 145
Leuchtfeuer Digital Marketing 144 (:arrow_up: 89.47%)
Acquia 118 (:arrow_up: 57.33%)
PreviousNext 96
Webmecanik 83 (:arrow_down: 44.30%)
Dropsolid 48 (:arrow_down: 14.29%)
Friendly 37 (:arrow_down: 15.91%)
Codefive 23 (:arrow_down: 25.81%)
Sales Snap 20
#### Top contributing companies
Acquia 110 (:arrow_up: 243.75%)
rectorphp 96 (:arrow_up: 772.73%)
Leuchtfeuer Digital Marketing 22 (:arrow_up: 100%)
Webmecanik 18 (:arrow_down: 45.45%)
Dropsolid 8 (:arrow_down: 11.11%)
Comarch 7 (:arrow_down: 30%)
Friendly 3 (:arrow_down: 25%)
Bluespace 3
Axelerant 3
ip2location.com 2
Contributions are as defined [here](https://docs.savannahhq.com/pages/contributions/) with the addition of Jira issues being closed as completed, GitHub Pull Request reviews and Knowledgebase articles being written or translated, which we track through Savannah’s API.
**Want your organization to shine here? [Start contributing now](https://mau.tc/contribute)!**
### Individuals
A big thank you also to all the individuals who are helping us build this awesome community!
#### Most active contributors
Avinash Dalvi 286
Tomas Votruba 246
Surabhi Gokte 143
John Linhart 105
Rahul Shinde 97
Mohit Aghera 96
Anderson Eccel 82
Norman Pracht 47
Joey Keller 37
Zdeno Kuzmany 34
#### Top contributors
John Linhart 101
Tomas Votruba 96
Anderson Eccel 20
Zdeno Kuzmany 13
Artem Lopata 7
putzwasser 6
Patryk Gruszka 5
Patrick Jenkner 5
Rahul Shinde 5
Mike Van Hemelrijck 4
#### Welcome to our new contributors this month 💖
Anderson Eccel
Pablo Hörtner
Mike Van Hemelrijck
Prateek Jain
IP2Location
#### Top supporters
John Linhart 2
Joey Keller 1
Mike Van Hemelrijck 1
Ekke 1
Zdeno Kuzmany 1
Patrick Jenkner 1
Supporters are folks who have had conversations with people directly before they make a contribution, so most likely helping with that process.
This month we had 6 new contributors :rocket: (:arrow_down: 33%) and 57 new members joining the community! :sparkling_heart: (:arrow_down: 8%)
## Usage of Mautic
We continue to see around 4,000 tracked downloads via mautic.org/download each quarter as detailed below. It's important to note that this does not represent all installs of Mautic - we have a good proportion of people who install Mautic via Composer, via a GitHub download, or simply re-use their base Mautic installer files.

We've had a slight drop in the number of installs of the API library this month via Composer, but a rising trend of Mautic installs which has almost doubled - as measured by the use of the mautic/core-lib package.

We continue to see strong growth in websites being deployed with Mautic tracking enabled, with over 5,200 being detected in Q4 2023. This represents over a 120% increase compared with Q4 2022.

Indeed, when we consider the number of domains with active Mautic tracking, we see just short of a 100% increase between Q4 2022 and Q4 2023 (from 19,531 to 38,801).

We're also seeing continued growth in GitHub star count tracking above the projected trendline, which is promising, especially when you consider Mautic's growth against other open source projects.

## Community Health
December has been a quiet month in the Community, with many people celebrating the various festivals around the world, from Hanukkah to the Winter Solstice and Christmas.
As a result, we saw less new community members and contributors this month, which is a common occurrence each year.
We did, however, have a dramatic rise in the number of contributions in December, up from around 130 per month over the last quarter to a staggering 351 in December! A vast number of these contributions are from people who are helping us with testing new features and bug fixes in the Mautic 5.0 Release Candidate, and also a big thanks to Tomas Votruba from Rector who contributed a stunning 96 pull requests to help update and improve Mautic's codebase.
Something we are monitoring closely is the drop-off in numbers on our Google Analytics properties - this could be associated with the switch to Google Analytics v4. Prior to Q3 2023 we averaged around 120,000 visitors per quarter, but recently this has dropped to around 80,000. The marketing team will dig into this in more detail.
## Conclusion
Overall, December has been a quiet month with many people taking time off over to celebrate festivities. Despite this there has been a tremendous amount of work getting ready for the Mautic 5.0 release next month, with many improvements and bug fixes being tested and merged super efficiently by the core team.
While we are financially not quite at the place where we wanted to be, we have made significant progress since becoming an independent project and I take heart from the growth in number of memberships being taken out, that we're on the right track.
Going forward this will continue to be a focus into the coming year, to ensure that we can stay afloat without the generous seed funding that Acquia provided us during this transitional phase. | rcheesley |
1,773,819 | Regain Bladder Control: The Latest Treatments for Urinary Incontinence | Do you find yourself constantly worrying about unexpected leaks or feeling embarrassed by your lack... | 0 | 2024-02-27T16:06:21 | https://dev.to/forlooks/regain-bladder-control-the-latest-treatments-for-urinary-incontinence-144j | Do you find yourself constantly worrying about unexpected leaks or feeling embarrassed by your lack of bladder control? Urinary incontinence is a common condition that affects millions of people worldwide, and it can significantly impact your quality of life. The good news is that advancements in medical technology have led to innovative treatments that can help you regain control over your bladder and pelvic floor muscles.
**Understanding Urinary Incontinence and the Pelvic Floor
**Before we delve into the latest treatments, it’s important to understand the basics of [urinary incontinence](https://forlooks.com/emsella/) and its connection to the pelvic floor muscles. Urinary incontinence refers to the involuntary leakage of urine, often caused by weak or dysfunctional pelvic floor muscles. These muscles play a crucial role in maintaining bladder control, supporting the organs in the pelvic region, and preventing leakage. Factors such as pregnancy, childbirth, menopause, and aging can weaken these muscles, leading to incontinence issues.
| forlooks | |
1,774,086 | Building a WebSocket Chatroom using Golang and Spread the PubSub Library | We are trying to build a websocket based chatroom. There are two excellent examples of websocket... | 0 | 2024-03-07T13:15:06 | https://dev.to/egemengol/building-a-websocket-chatroom-using-golang-and-spread-the-pubsub-library-2n03 | We are trying to build a websocket based chatroom.
There are two excellent examples of websocket based chatrooms already, let's go over them.
---
[nhooyr/websocket example](https://github.com/nhooyr/websocket/blob/master/internal/examples/chat/chat.go)
The server struct holds a set of subscriber structs, each of which wraps a `chan []byte` and knows how to close itself.
The server manages access to this set via a mutex and helper functions to keep it current for all of the subscribers coming and going.
---
[gorilla/websocket example](https://github.com/gorilla/websocket/blob/main/examples/chat/hub.go)
There is a `Hub` structure that keeps a set of `Client` structs (each of which points back to the `Hub` and holds a send channel), and the `Hub` manages access to the set via exposed channels.
The clients, however, send pointers to themselves over these channels whenever they want to come and go.
The `Hub` and `Client` structures are highly coupled, and the logic is *spread* (pun intended) over two files.
---
#### Our Plan
We will use:
- [nhooyr/websocket](https://github.com/nhooyr/websocket) for handling websockets.
- [egemengol/spread](https://github.com/egemengol/spread) for handling the PubSub between multiple websocket connections, with clean and obvious code.
### Message Struct
Let's create our `Message` type.
This struct will be the "message" type of the PubSub topic later.
```golang
type Message struct {
	Username string `json:"name"`
	Message  string `json:"msg"`
}
```
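Note the struct tags: on the wire, the fields are renamed to `name` and `msg`. A standalone check of the wire format (the `encodeMessage` helper is only for illustration, not part of the chat server):

```golang
package main

import (
	"encoding/json"
	"fmt"
)

type Message struct {
	Username string `json:"name"`
	Message  string `json:"msg"`
}

// encodeMessage makes the JSON wire format easy to inspect.
func encodeMessage(m Message) string {
	data, err := json.Marshal(m)
	if err != nil {
		panic(err)
	}
	return string(data)
}

func main() {
	fmt.Println(encodeMessage(Message{Username: "alice", Message: "hi"}))
	// Output: {"name":"alice","msg":"hi"}
}
```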
### Publish HTTP Handler
Creating an `http.Handler` for publishing messages to the PubSub topic is easy.
Clients use it by sending HTTP POST requests to the `/publish` endpoint.
1. We read the incoming request body and parse it into our `Message` struct.
2. We call the `spread.Topic.Publish` method with our `Message`.
When we call this function with its required dependencies (logger and topic), we obtain an `http.Handler`, which we will later register on an `http.ServeMux` for serving.
```golang
func HandlePublish(logger *slog.Logger, topic *spread.Topic[Message]) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		var msg Message
		if err := json.NewDecoder(r.Body).Decode(&msg); err != nil {
			logger.Warn("error decoding message", "err", err)
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		r.Body.Close()

		logger.Info("publishing message", "msg", msg)
		if err := topic.Publish(msg); err != nil {
			logger.Error("error publishing message", "err", err)
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusNoContent)
	})
}
```
As you can see from the accepted `topic` argument, the topic knows and restricts the message types it broadcasts. Only `Message` structs are allowed into this particular topic.
Also, the handler does not know or care anything about its subscribers, it blissfully fires and forgets.
### Subscribe Websocket Handler
The clients will connect by opening a WebSocket connection to the `ws://localhost:8000/subscribe` endpoint with their WebSocket library of choice.
Our chatroom implementation uses this websocket connection only for sending messages from the server to the client, even though it could potentially be used in both directions. This keeps the implementation and error handling simpler, and it is also how the author of our websocket library structures their own chat example.
1. We will upgrade the incoming request to a WebSocket connection by calling `Accept`.
2. We will subscribe to the topic, by getting a `<-chan Message` from it.
3. We loop through the messages and write them to the client.
We keep in mind that at any point the client can disconnect, or the topic can be cancelled (on shutdown). We handle both cases in our loop.
```golang
func HandleSubscribe(logger *slog.Logger, topic *spread.Topic[Message]) http.Handler {
return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
conn, err := websocket.Accept(w, r, nil)
if err != nil {
logger.Error("error accepting websocket", "err", err)
return
}
defer conn.CloseNow()
recvChan, removeRecvChan, err := topic.GetRecvChannel(20)
if err != nil {
logger.Error("error getting recv channel", "err", err)
return
}
defer removeRecvChan()
ctx := conn.CloseRead(r.Context())
for {
select {
case <-ctx.Done():
logger.Info("client disconnected", "err", ctx.Err())
return
case msg, ok := <-recvChan:
if !ok {
logger.Info("recv channel closed")
conn.Close(websocket.StatusGoingAway, "")
return
}
data, err := json.Marshal(msg)
if err != nil {
logger.Error("error marshaling message", "err", err)
return
}
if err := conn.Write(r.Context(), websocket.MessageText, data); err != nil {
logger.Warn("error writing message", "err", err)
return
}
logger.Info(
"forwarded to listener",
"fromUser", msg.Username,
"msg", msg.Message
)
}
}
})
}
```
When we subscribe to the topic by requesting a read channel, a copy of every message published to the topic starts being sent to the newly created channel. We are responsible for closing it via the returned destructor function.
> `ctx := conn.CloseRead(r.Context())`
Our websocket library detects disconnected clients only when it tries to read or write.
Since we know we never read anything from the client, we use this helper function to obtain a context that gets cancelled when the client disconnects.
### Construct http.Server
1. We create a logger to be used by both our handlers and our topic.
2. We create a topic. It needs:
- The type of the message it will carry, using generics.
- A context, for easy cancellation.
- An optional logger, mainly for debug purposes.
- A channel size for the publishers; they will block if the channel is full.
3. We populate the routes, via the handler factories defined above. We give each their dependencies. We then create the http server.
```golang
func Run(ctx context.Context) error {
logger := slog.New(slog.NewTextHandler(
os.Stderr, &slog.HandlerOptions{Level: slog.LevelInfo}
))
topic := spread.NewTopic[Message](ctx, logger, 20)
mux := http.NewServeMux()
mux.Handle("POST /publish", HandlePublish(logger, topic))
mux.Handle("/subscribe", HandleSubscribe(logger, topic))
httpServer := &http.Server{
Addr: "localhost:8000",
Handler: mux,
ReadTimeout: time.Second * 10,
WriteTimeout: time.Second * 10,
}
//
// We will run the server here
//
}
func main() {
ctx := context.Background()
if err := Run(ctx); err != nil {
log.Fatal(err)
}
}
```
### Shutdown
Graceful shutdown is, in my opinion, the hardest part of an HTTP application, especially a websocket-enabled one.
We will implement an optimistic but brutal shutdown mechanism here.
1. Get a context that gets cancelled when an interrupt is received.
2. Spawn a goroutine that waits on that context.
1. Wait for the websocket handlers to return first, since `Shutdown` does not close hijacked websocket connections.
2. Shut down the http server. The timeout gives the regular connections time to close, in this case the one serving an `index.html`.
About waiting for the handlers:
- The topic is aware of the outer context, which gets cancelled on an interrupt.
- The topic notifies any channel based subscribers by closing their receive channels.
- The handlers return when they see their recv channels close.
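That last bullet relies on a core channel property: closing a channel is a broadcast that every receiver observes. A stdlib-only sketch (`runSubscriber` is a hypothetical name, not part of the article's code):

```golang
package main

import "fmt"

// runSubscriber drains ch until it is closed and reports how many
// messages it saw. Closing ch is the broadcast "shutdown" signal:
// every receiver ranging over the channel exits its loop.
func runSubscriber(ch <-chan string) int {
	n := 0
	for msg := range ch {
		fmt.Println("got", msg)
		n++
	}
	return n
}

func main() {
	ch := make(chan string, 2)
	ch <- "hello"
	ch <- "world"
	close(ch) // notify the subscriber, exactly like the topic does
	fmt.Println("received", runSubscriber(ch))
}
```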
```golang
func Run(ctx context.Context) error {
// Connect the context to interrupts.
ctx, cancel := signal.NotifyContext(ctx, os.Interrupt)
defer cancel()
//
// Rest of the Run function
//
go func() {
// Wait for the context to be notified of an interrupt
<-ctx.Done()
// Topic is already listening to the context,
// we know it will send close signals to the handlers
// We wait for them to return for a bit
time.Sleep(100 * time.Millisecond)
// Give the server time to close all the connections
timeoutCtx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
defer cancel()
if err := httpServer.Shutdown(timeoutCtx); err != nil {
logger.Error("error closing http server", "err", err)
}
}()
logger.Info("http server started listening on", "addr", httpServer.Addr)
return httpServer.ListenAndServe()
}
```
> In principle, we could share a `sync.WaitGroup` between subscribe handlers and wait on it instead of sleeping for a fixed amount. The drawback is that `Wait` blocks, so a non-progressing websocket handler would hang the shutdown, which is not good. We could wrap that blocking wait inside a goroutine and a channel, with a timeout around them.
> It is outside the scope of this article, which is about demonstrating how easy PubSub can be in Go. The wait here is good enough: any remaining websocket connections are dropped, and clients get notified via their websocket libraries anyway.
---
### Conclusion
The application may look complicated, but if we squint hard enough we can see it is mostly the boilerplate one writes for any http server built on the Go standard library. The `/subscribe` part is typical for a listening websocket.
Compare the approach here with the official examples and notice how obvious the code becomes once all the synchronization work is pushed into the [egemengol/spread](https://github.com/egemengol/spread) library.
You can see the fully working implementation, along with [a simple UI](https://github.com/egemengol/spread/tree/main/examples/chatroom/index.html) at [spread examples directory](https://github.com/egemengol/spread/tree/main/examples/chatroom).
| egemengol | |
1,774,327 | AI and the future of web and mobile development | AI boom is real this time, unlike many years ago. As a web and mobile developer, I’d like to share... | 0 | 2024-02-28T01:09:53 | https://dev.to/eddiekimdev/ai-and-the-future-of-web-and-mobile-development-2moo | AI boom is real this time, unlike many years ago.
As a web and mobile developer, I’d like to share my prediction for AI and the future development of web and mobile applications.
The LLM-based ChatGPT has really changed the way people interact with machines. Before ChatGPT, we used the browser for the web, operating systems for PCs, and Android or iOS for mobile devices. This is also the reason we need different technologies to develop applications for different platforms. However, this paradigm might change in the future, as AI becomes the only platform or operating system, a unified interface. Developers' roles will also change. There will be no mobile developer or web developer in the future. This AI platform will be the SDK for all developers to work on. Websites and apps will be replaced with personalized applications developed on this SDK, much like an app store, but this time it is the GPT store or something similar. Need to open an online shopping mall? Just open a prompt, type your needs, add images and a description, and GPT will develop an app from scratch and add it to its GPT store.
The scary part of this story? We no longer need that many developers. Everyone can be a developer with little knowledge of computer science or programming. The traditional role of the software developer or programmer will not exist.
I can see this wave is coming. Maybe in 5~10 years it will become reality. AI is a double-edged sword. Use it wisely and don’t be the last person to know why your role is made redundant.
| eddiekimdev | |
1,774,464 | winvnes | Nha cai ca cuoc game online dinh cao nhat moi thoi dai, tai winvn co rat nhieu tro choi nhu: the... | 0 | 2024-02-28T04:26:21 | https://dev.to/winvnes/winvnes-10ce | Nha cai ca cuoc game online dinh cao nhat moi thoi dai, tai winvn co rat nhieu tro choi nhu: the thao, ban ca, no hu, casino, gam bai, da ga, co tuong...vv...
Dia Chi: 1067 D. La Thanh, Lang Thuong, Ba Dinh, Ha Noi, Viet Nam
Email: winvnes@gmail.com
Website: https://winvn.es/
Dien Thoai: (+63)9663292132
#winvn #winvn_casino #winvn_com
Social Media:
https://winvn.es/
https://winvn.es/lien-he/
https://winvn.es/chinh-sach-bao-mat/
https://winvn.es/huong-dan-cach-tai-app/
https://winvn.es/huong-dan-nap-tien/
https://winvn.es/huong-dan-rut-tien/
https://winvn.es/thang-phat/
https://www.facebook.com/winvnes/
https://twitter.com/winvnes
https://www.youtube.com/channel/UC6ku82YMnQExUTmDFvnqvTg
https://www.pinterest.com/winvnes/
https://social.msdn.microsoft.com/Profile/winvnes
https://social.technet.microsoft.com/Profile/winvnes
https://vimeo.com/winvnes
https://github.com/winvnes
https://community.fabric.microsoft.com/t5/user/viewprofilepage/user-id/672752
https://www.blogger.com/profile/05429834415002651358
https://gravatar.com/winvnes
https://talk.plesk.com/members/winvnes.318280/#about
https://soundcloud.com/winvnes
https://medium.com/@winvnes/about
https://www.flickr.com/people/winvnes/
https://www.tumblr.com/winvnes
https://winvnes.wixsite.com/winvnes
https://sites.google.com/view/winvnes/trang-ch%E1%BB%A7
https://www.behance.net/winvnes
https://www.openstreetmap.org/user/winvnes
https://draft.blogger.com/profile/05429834415002651358
https://www.liveinternet.ru/users/winvnes/profile
https://linktr.ee/winvnes
https://www.twitch.tv/winvnes/about
http://tinyurl.com/winvnes
https://ok.ru/winvnes/statuses/156203505422348
https://profile.hatena.ne.jp/winvnes/profile
https://issuu.com/winvnes
https://dribbble.com/winvnes/about
https://form.jotform.com/240101559593051
https://unsplash.com/fr/@winvnes
https://scholar.google.com/citations?hl=vi&user=LSFd_WoAAAAJ
https://www.goodreads.com/user/show/174162728-winvnes
https://www.kickstarter.com/profile/winvnes/about
https://tawk.to/winvnes
https://groups.google.com/g/winvnes
https://webflow.com/@winvnes1
https://podcasters.spotify.com/pod/show/winvnes
https://www.ted.com/profiles/45945149/about
https://disqus.com/by/eswinvn/about/
https://500px.com/p/winvnes
https://winvnes.blogspot.com/
https://winvnes.weebly.com/
https://winvnes.webflow.io/
https://winvnes.gitbook.io/untitled/
https://winvnes.mystrikingly.com/
https://winvnes.amebaownd.com/posts/51388623
https://winvnes.seesaa.net/article/502034073.html?1704965495
http://winvnes.splashthat.com
http://winvnes.idea.informer.com/
https://winvnes.contently.com/
https://winvnes.shopinfo.jp/posts/51388664
https://winvnes.bravesites.com/#builder
https://winvnes.themedia.jp/posts/51389621
https://winvnes.storeinfo.jp/posts/51389806
https://winvnes.theblog.me/posts/51389921
https://winvnwinvnes.my.cam/#
https://educatorpages.com/site/winvnes/
https://winvnes.onlc.fr/
https://winvnes.gallery.ru/
https://winvnes.therestaurant.jp/posts/51388726
https://winvnes.wordpress.com/
https://winvnes.livejournal.com/profile
https://winvnes.thinkific.com/courses/your-first-course
https://ko-fi.com/winvnes
https://www.provenexpert.com/winvnes/
https://hub.docker.com/r/winvnes/winvnes
https://independent.academia.edu/winvnes
https://fliphtml5.com/homepage/ofmpj/es-winvn/
https://www.quora.com/profile/Winvnes
https://www.evernote.com/shard/s483/sh/abc0b1ad-5852-ad07-2965-18cee36fb080/Vtp0sTxnf8SiY14_ON5Vwnk664atftBLfhx3P0A6MVdKxFC69riGCr7Oog
https://heylink.me/winvnes/
https://trello.com/u/eswinvn
https://giphy.com/channel/winvnes
https://www.mixcloud.com/winvnes/
https://orcid.org/0009-0007-7203-3193
https://www.deviantart.com/winvnes
https://vws.vektor-inc.co.jp/forums/users/winvnes
https://codepen.io/winvnes
https://community.cisco.com/t5/user/viewprofilepage/user-id/1662999
https://wellfound.com/u/winvn-es
https://about.me/winvnes
https://winvnes.peatix.com/
https://sketchfab.com/winvnes
https://gitee.com/winvnes
https://public.tableau.com/app/profile/winvnes
https://connect.garmin.com/modern/profile/0692c679-0810-4a0e-9ad6-17706775f218
https://www.reverbnation.com/artist/winvnes
https://profile.ameba.jp/ameba/winvnes
https://onlyfans.com/winvnes
https://mastodon.social/@winvnes
https://readthedocs.org/projects/winvnes/
https://flipboard.com/@winvnes
https://www.awwwards.com/winvnes/ | winvnes | |
1,774,501 | The Art of Data Migration to commercetools | Introduction Data migration is a critical step for businesses transitioning to cloud-based commerce... | 0 | 2024-02-28T05:59:12 | https://dev.to/nitin-rachabathuni/the-art-of-data-migration-to-commercetools-57c8 | Introduction
Data migration is a critical step for businesses transitioning to cloud-based commerce platforms like commercetools. This process involves moving data from legacy systems or other e-commerce platforms to commercetools, ensuring that the integrity, functionality, and performance of the data are maintained or enhanced. commercetools, with its flexible, API-first approach, presents unique opportunities and challenges in data migration.
Why Migrate to commercetools?
commercetools is at the forefront of the headless commerce revolution, offering unparalleled flexibility, scalability, and speed. Its API-first approach allows businesses to create unique shopping experiences across various channels. However, migrating to such a dynamic platform requires a well-thought-out strategy to ensure a smooth transition.
Planning Your Migration
1. Assessment: Begin with a comprehensive assessment of your current data structures, volume, and quality. Understanding the data to be migrated is crucial for planning.
2. Strategy: Decide on a migration strategy (big bang or phased approach) based on your business needs and risk tolerance.
3. Data Mapping: Map your current data structure to the commercetools schema. This involves identifying how each piece of data will translate into commercetools' data model.
Data Migration Essentials
1. Cleanup: Data migration is an opportune time to clean your data. Remove duplicates, correct inaccuracies, and discard unnecessary data.
2. Backup: Always back up your data before beginning the migration process to prevent data loss.
3. Testing: Perform comprehensive testing in a staging environment. This step is crucial for identifying and rectifying issues before going live.
Coding Examples
Below are simplified examples of code snippets that might be used in a data migration to commercetools. Note that these examples are illustrative and require adjustments to fit specific migration contexts.
Example 1: Migrating Product Data
```
import requests

# Set up your commercetools project credentials
project_key = "your-project-key"
client_id = "your-client-id"
client_secret = "your-client-secret"

auth_url = "https://auth.europe-west1.gcp.commercetools.com/oauth/token"
api_url = f"https://api.europe-west1.gcp.commercetools.com/{project_key}/products"

# Authenticate and get an access token
auth_payload = {
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
}
auth_response = requests.post(auth_url, data=auth_payload)
auth_response.raise_for_status()
access_token = auth_response.json()["access_token"]

# Example product data to migrate. Note that this payload is simplified
# for illustration; the real Products API expects a ProductDraft with a
# productType reference, a slug, and localized name fields.
product_data = {
    "name": "Example Product",
    "description": "This is an example product for migration.",
    "price": 19.99,
    # Add more product attributes as needed
}

# Migrate the product data to commercetools
headers = {"Authorization": f"Bearer {access_token}"}
response = requests.post(api_url, json=product_data, headers=headers)

if response.status_code == 201:
    print("Product migrated successfully.")
else:
    print(f"Failed to migrate product: {response.status_code} {response.text}")
```
Example 2: Batch Importing Customer Data
For batch operations, commercetools provides an Import API that is more efficient for importing large datasets, such as customer data. The code for this would involve preparing your data in the format expected by the commercetools Import API and then sending it in batches.
Best Practices
Incremental Migration: Consider migrating data incrementally to reduce risk and minimize downtime.
Monitoring: Continuously monitor the migration process for errors or issues.
Post-Migration Testing: After migration, thoroughly test the system to ensure that all data has been accurately transferred and that the system is performing as expected.
Conclusion
Migrating to commercetools can transform your e-commerce capabilities, but it requires careful planning and execution. By following the outlined steps and leveraging the coding examples provided, businesses can ensure a smooth transition to commercetools, setting the stage for future growth and innovation.
---
Thank you for reading my article! For more updates and useful information, feel free to connect with me on LinkedIn and follow me on Twitter. I look forward to engaging with more like-minded professionals and sharing valuable insights.
| nitin-rachabathuni | |
1,774,536 | Alkaline water has a higher pH level | Health Center Network | Quench your thirst and rejuvenate your body with our Alkaline Water Plant. Through an 8-stage... | 0 | 2024-02-28T06:58:17 | https://dev.to/healthcenternetwork/alkaline-water-has-a-higher-ph-level-health-center-network-3fp7 | alkalinewater, healthandwellness, waterpurifier, alkalinewaterionize | Quench your thirst and rejuvenate your body with our [Alkaline Water Plant](https://www.healthcenternetwork.in/). Through an 8-stage purification process, our water ionizer optimizes pH levels, ensuring clean and refreshing hydration. Experience the benefits of alkaline water and elevate your well-being - Indian citizen company. | healthcenternetwork |
1,774,701 | Exploring the Power of dvh Units in CSS | Introduction: In the dynamic realm of web development, keeping pace with the latest CSS innovations... | 0 | 2024-02-28T09:28:07 | https://dev.to/r4nd3l/exploring-the-power-of-dvh-units-in-css-3j92 | dvh, css, rules, units | **Introduction:**
In the dynamic realm of web development, keeping pace with the latest CSS innovations is paramount. One such innovation that has recently garnered attention is the utilization of “dvh” units in lieu of the conventional “vh” units. This article aims to delve into the nuances of CSS units, elucidate the distinction between “dvh” and “vh,” and shed light on other compelling CSS advancements.
**Understanding the Basics: vh Units**
Before delving into “dvh,” let's revisit the fundamentals of “vh” units. “vh” denotes viewport height and serves as a CSS unit representing a percentage of the viewport's height. Widely employed in crafting responsive designs, “vh” facilitates adaptation to diverse screen dimensions.
**Introducing “dvh” Units**
Now, let's introduce the protagonist: “dvh” units. “dvh” stands for *dynamic viewport height*: 1dvh is 1% of the viewport height as it currently is, growing and shrinking as dynamic browser UI, such as a mobile address bar, retracts or expands, whereas “vh” ignores those UI changes. This seemingly subtle distinction holds immense significance in dictating the behavior of web layouts.
**The Power of “dvh” Over “vh”**
1. **Dynamic Layouts:** “dvh” units enable layouts that adjust dynamically as the browser's own interface appears and disappears. This obviates the awkward gaps or clipped content that a fixed `100vh` element often causes, especially on mobile.
2. **Enhanced User Experience:** By leveraging “dvh,” web pages can offer a more user-friendly experience. Full-height elements expand or contract along with the visible viewport, augmenting overall user satisfaction.
To fully grasp the potential of “dvh,” consider a practical scenario: envision constructing a blog website whose reading view should fill the screen. Giving the article container a height of `100dvh` ensures it always matches the visible viewport, even as the mobile browser's address bar collapses and reappears while the reader scrolls, ensuring a visually pleasing and consistent user experience.
**Going Beyond “dvh”**
While “dvh” heralds a significant advancement in CSS, it's imperative to acknowledge other notable developments in web design:
1. **CSS Grid Layout:** Offering precise control over grid-based layouts, CSS Grid Layout facilitates the creation of complex and responsive web designs.
2. **Variable Fonts:** Variable fonts revolutionize typography on the web, offering unparalleled flexibility in font styles, weights, and sizes.
3. **Dark Mode:** With the pervasive adoption of dark mode, integrating dark mode functionality into web designs has become indispensable, facilitated by CSS custom properties.
### Practical instance
{% codepen https://codepen.io/r4nd3l/pen/YzMKjQd %}
**Conclusion**
In the dynamic landscape of web development, embracing CSS innovations like “dvh” units is indispensable. By harnessing the potential of “dvh” and staying abreast of emerging trends, developers can create captivating and user-friendly web experiences. So, whether embarking on a new project or revamping an existing one, consider leveraging the advantages of “dvh” units to craft responsive and visually engaging designs. Remember, CSS is a boundless realm ripe for exploration, so stay curious and keep experimenting to stay ahead of the curve. | r4nd3l |
1,774,757 | Easy String Reversal Trick: python challenge 23 | https://youtu.be/oxb_WExInIQ | 0 | 2024-02-28T10:25:10 | https://dev.to/ruthrina/easy-string-reversal-trick-python-challenge-23-7fc | https://youtu.be/oxb_WExInIQ | ruthrina | |
1,774,868 | Have You Explored the Power of Instant Host Notifications in Modern Visitor Systems? | In the rapidly evolving landscape of modern visitor systems, the power of instant host... | 0 | 2024-02-28T11:38:34 | https://dev.to/innomaintcmms/have-you-explored-the-power-of-instant-host-notifications-in-modern-visitor-systems-1cnk | vsitor, tracking, technology, software |

In the rapidly evolving landscape of modern visitor systems, the power of instant host notifications stands out as a game-changer, revolutionizing the way organizations manage and enhance their visitor experiences. This advanced feature leverages cutting-edge technology to provide real-time alerts to hosts, ensuring seamless and secure visitor interactions.
Key Points:
Real-time Communication: Instant host notifications enable instantaneous communication between the **[visitor system](https://www.innomaint.com/solutions/visitor-management-system-software/?utm_source=seo_article)** and hosts. As soon as a visitor checks in, hosts receive immediate alerts through various channels such as mobile apps, emails, or SMS, allowing them to promptly welcome and assist their guests.
Enhanced Security: The rapid notification system contributes significantly to the overall security of a facility. In the event of unexpected or unauthorized visitors, hosts can take swift action, preventing potential security breaches. This proactive approach adds an extra layer of protection to sensitive environments.
Improved Efficiency: Traditional visitor management systems often relied on manual processes and paper logs. Instant host notifications streamline the check-in process, reducing wait times and enhancing overall efficiency. Hosts can prepare for the arrival of their guests, optimizing time and resources.
Customization and Flexibility: Modern visitor systems offer customizable notification settings, allowing hosts to tailor alerts based on their preferences and schedule. This flexibility ensures that hosts receive information in a manner that suits their workflow, improving overall user satisfaction.
Visitor Experience Enhancement: The seamless integration of instant host notifications contributes to an elevated visitor experience. Guests feel welcomed and attended to from the moment they arrive, fostering a positive impression of the organization.
Data Insights and Analytics: Beyond immediate notifications, these systems often provide analytics and insights into visitor patterns. Hosts can leverage this data to make informed decisions, enhancing the overall management and optimization of visitor interactions.
In conclusion, the exploration and implementation of instant host notifications in modern visitor systems represent a paradigm shift in how organizations approach visitor management, combining efficiency, security, and an enhanced visitor experience. As technology continues to advance, these features will undoubtedly play a pivotal role in shaping the future of secure and seamless visitor interactions.
| innomaintcmms |
1,774,879 | "Shedding Pounds: Your Ultimate Guide to Effective Weight Loss Strategies" | Fitspresso Coffee Loophole is a huge industry. That was how to double your effectiveness with... | 0 | 2024-02-28T11:59:43 | https://dev.to/healthinfor31/shedding-pounds-your-ultimate-guide-to-effective-weight-loss-strategies-33en | webdev | Fitspresso Coffee Loophole is a huge industry. That was how to double your effectiveness with Fitspresso Coffee Loophole. Where can mavens receive attractive Fitspresso Coffee Loophole hand-outs? In any respect, I won't explain to you how to use Fitspresso Coffee Loophole. This is how to stop being bothered about something. I actually want to provide this for you so that you understand Fitspresso Coffee Loophole. If one is buying a Fitspresso Coffee Loophole as beginner, one might want to add a book on Fitspresso Coffee Loophole that will assist them.
https://healthsolutionsservices.blogspot.com/2024/02/fitspresso-reviews-igniting-your.html
https://twitter.com/JesmyKalso15002/status/1762349065449394492
https://www.pinterest.com/pin/1134133118662137886
https://telescope.ac/healthsolutionsservices/ke0p9a83xp8n5sbpeft119
https://hackmd.io/@aNQNgw7EQmevv-xQcZ46Wg/HyGQNljn6
https://hypegh-schoany-syniatts.yolasite.com/
https://techplanet.today/post/fitspresso-reviews-a-comprehensive-analysis
https://groups.google.com/g/jesmykalson/c/1NeTyh9uR0o
https://healthsolutionsservices2024.hashnode.dev/fitspresso-reviews-a-comprehensive-look-into-fitnesss-best-kept-secret
https://en-template-accounta-17090125095064.onepage.website/
https://fitspressoreviewshealth2024.mystrikingly.com/
https://medium.com/@jesmykalson/fitspresso-reviews-a-gateway-to-elevated-fitness-4c85848280a7
https://www.tumblr.com/healthsolutionsservices/743453837450412032/fitspresso-reviews-a-game-changer-in-fitness
https://healthsolutionsservices.wordpress.com/2024/02/27/fitspresso-reviews-revolutionizing-your-fitness-regimen/
https://educatorpages.com/site/healthsolutionsservices/pages/fitspresso-reviews
https://fitspressog2024.creatorlink.net/
https://fitspresso-reviews-a59a7b.webflow.io/
https://infogram.com/fitspresso-reviews-1h7v4pdnvdexj4k
https://www.twine.net/healthinfor30
https://www.dibiz.com/marvin_r_frye
https://fitspressoreviews2024.ukit.me/
https://fitspressoreviews.website3.me/
https://socialsocial.social/pin/fitspresso-reviews-3/
https://65dd7e157f9bc.site123.me/
https://fitspressoreviews2024.bravesites.com/
https://fitspressoreviews.journoportfolio.com/
https://sway.cloud.microsoft/c0onBl2rNQdcqXTw
http://fitspressoreviews2024.splashthat.com
https://guides.co/g/fitspresso-reviews-906546/347449
https://disqus.com/by/disqus_WdW62rRJXl/about/
https://influence.co/marvin_r_frye
https://letterboxd.com/healthinfor30/
https://promosimple.com/ps/2ad57/fitspresso-reviews
https://leetcode.com/discuss/interview-question/4788506/Fitspresso-Reviews%3A-Powering-Your-Fitness-Adventure
https://mssg.me/fitspressoreviews2024
https://www.provenexpert.com/en-us/fitspresso-reviews5/
https://fitspresso-reviews-9.jimdosite.com/
https://fitspressoreviews10.godaddysites.com/
https://www.behance.net/gallery/192534603/Fitspresso-Reviews
https://skfb.ly/oRwXQ
https://fitspresso-reviews-22024.jigsy.com/
https://www.spreaker.com/podcast/fitspresso-reviews-transforming-your-fi--6102250
https://www.reddit.com/user/healthinfor30/comments/1b15oen/fitspresso_reviews_a_comprehensive_analysis/
https://caramellaapp.com/healthinfor30/aShyHDIfT/fitspresso-reviews
https://flow.page/fitspressoreviews2024
https://www.flickr.com/people/200145268@N02/
https://www.flickr.com/photos/200145268@N02/53555472085/in/dateposted-public/
https://fitspressoreviews10.godaddysites.com/f/fitspresso-reviews-enhancing-your-fitness-experience
https://www.facebook.com/Healthsolutionsservices2024/
https://marvinrfrye.wixsite.com/fitspresso-reviews
| healthinfor31 |
1,774,990 | Top 3 Elixir books that will make you love Elixir even more | Introduction: Beyond the basics I've been using Elixir for about 2 years now and so far I... | 0 | 2024-02-28T12:44:12 | https://dev.to/hoonweedev/top-3-elixir-books-that-will-make-you-love-elixir-even-more-2bi6 | elixir, phoenix, books, concurrency | ## Introduction: Beyond the basics
I've been using Elixir for about 2 years now and so far I can confidently say that it's one of the most practical languages I've ever used. It has a lot of features that make it a joy to work with. It's a functional language, it's concurrent, it's distributed, it's fault-tolerant, and it's easy to learn. Whenever my colleagues ask me about Elixir, I always tell them that it's a language that's worth learning. I also recommend they read some of the introductory books about Elixir, like _Elixir in Action_ by Saša Jurić, _Programming Elixir_ by Dave Thomas, and _The Little Elixir & OTP Guidebook_ by Benjamin Tan Wei Hao. These books are great for getting started with Elixir and they cover the basics of the language and the ecosystem.
But what about the next step? What about the books that will take you beyond the basics? What about the books that will make you love Elixir even more? In this article, I'll share with you the top 3 books that every Elixir developer should read. These books will help you to deepen your understanding of Elixir and to become a better Elixir developer.
_Disclosure: I'm not affiliated with any of the authors or publishers of the books mentioned in this article. I'm just a fan of Elixir and I want to share my love for the language with others._
## _Real-Time Phoenix_ by Stephen Bussey

So you've learned the basics of Elixir and you've built a few web applications with Phoenix. You've learned how to use Ecto, how to use Phoenix LiveView, and how to use Phoenix PubSub. You've also learned how to use Phoenix Channels to build real-time web applications. But you want to go deeper. You want to learn how to build real-time web applications that are fast, reliable, and scalable.
[Real-Time Phoenix](https://pragprog.com/titles/sbsockets/real-time-phoenix/) is a book that will teach you how to understand and build **real** real-time web applications with Phoenix. The book covers topics like WebSockets, Phoenix PubSub, Phoenix Presence, and (a little bit of) Phoenix LiveView. The best part (I think) is that the book covers how to test Phoenix sockets and channels, which is something many developers struggle with or ignore. The book also covers how to deploy real-time Phoenix applications to production, and some considerations for scaling them.
## _Metaprogramming Elixir_ by Chris McCord

[Metaprogramming Elixir](https://pragprog.com/titles/cmelixir/metaprogramming-elixir/) is a book that will teach you how to write **code that writes code**. Metaprogramming is one of the most powerful features of Elixir and it's what makes the language so extensible and flexible. Inspired by Lisp, Elixir's metaprogramming is easier than in many other languages (yes, I'm talking about you, Rust).
As many Elixir devs know, the author Chris McCord is the creator of the Phoenix web framework. He was once a Ruby developer and there were times when Ruby was not good enough for his needs. He then discovered Elixir and he was amazed by its features. He was particularly impressed by the metaprogramming capabilities of Elixir, which led him to create powerful macros and DSLs in Phoenix.
In this book, Chris McCord explains how metaprogramming works in Elixir and how you can use it to solve real-world problems. The book covers topics like macros, quote and unquote, code evaluation, and code generation. It also covers the use cases of metaprogramming, like building DSLs, writing code generators, and creating domain-specific abstractions. Once you understand metaprogramming, you'll be able to write more expressive and concise code, and you'll be able to create your own libraries and frameworks.
## _Concurrent Data Processing in Elixir_ by Svilen Gospodinov

Elixir is known for its concurrency primitives. It has lightweight processes, message passing, and supervision trees, which make it easy to write concurrent code. But how do you write concurrent code in Elixir? How do you build concurrent data processing pipelines? How do you handle backpressure and fault tolerance?
[Concurrent Data Processing in Elixir](https://pragprog.com/titles/sgdpelixir/concurrent-data-processing-in-elixir/) is a book that will answer these questions. The book covers topics like processes, message passing, supervision trees, and fault tolerance. It also covers how to use GenStage, Flow, and Broadway to build concurrent data processing pipelines.
One thing I love about this book is that it starts with a very basic element, the process, and then it builds up carefully to more complex topics like GenStage, Flow, and Broadway. Not only will you get a solid understanding of the basics of concurrent programming in Elixir, but you'll also be able to choose what libraries would be best for your use case.
## Conclusion
Elixir is a language that's worth learning. It's a language that's worth mastering. It's a language that's worth **loving**. The books I've mentioned in this article will help you to deepen your understanding of Elixir and to become a better Elixir developer. They will help you to build real-time web applications, write expressive and concise code, and build concurrent data processing pipelines. They will make you love Elixir even more.
I hope you find these books as useful as I did. What are your favorite Elixir books? Let me know in the comments below.
| hoonweedev |
1,775,072 | Is Hosting on Netlify Going to Bankrupt you? | On Tuesday, February 27, I was casually browsing Reddit, as I often do, when I stumbled on a slightly... | 0 | 2024-02-28T15:00:00 | https://wheresbaldo.dev/tech/netlify/is-hosting-on-netlify-going-to-bankrupt-you | netlify, hosting, webdev, discuss | On Tuesday, February 27, I was casually browsing Reddit, as I often do, when I stumbled on a slightly alarming post in the **r/webdev** subreddit. The post was titled "Netlify just sent me a $104K bill for a simple static site".
You can read [the full post here on Reddit](https://www.reddit.com/r/webdev/comments/1b14bty/netlify_just_sent_me_a_104k_bill_for_a_simple) for the OP's story if you didn't catch it, but in short, OP's essentially unknown site got hit by a sudden onslaught of DDoS traffic, **saddling them with a cool $104,000 bill from Netlify**!

Now according to Netlify, ***normally***, they can detect and mitigate DDoS attacks, and the OP's case was simply an anomaly. But instead of immediately waiving the bill, they very nonchalantly suggested they'd only charge him 5% of the bill - only $5,200 - basically a steal right?? 🙄

That is, until some commenters on Reddit suggested OP post the story to Hacker News, [which OP did](https://news.ycombinator.com/item?id=39520776), and then Netlify suddenly changed their tune.
After the story went viral, [the CEO commented on the same Hacker News thread](https://news.ycombinator.com/item?id=39521986) that the fees would be waived, and apologized that the support team didn't handle the situation better.

But a lot of damage had already been done, and many Redditors felt that Netlify's response was too little, too late. They commented that they'd lost their trust in the company, and that they had already started migrating their sites away from Netlify, and would never use them again. Others who were considering using Netlify in the future, said they'd now be looking elsewhere.
I've been a Netlify user for a few years now, and while I can't say I find their service perfect as it's missing some pretty crucial support for some things I use, it's still been a pretty good experience overall. And for someone like me (and like the OP), with just a small-time relatively unknown site generating very little traffic every month (basically nothing), their free tier has been a godsend. Or well, at least I thought it was, until I read this story.
## The Concern
Ok, so what's the actual reason, assuming you didn't venture off to read OP's story, that a free-tier site was able to rack up a massive bill in a very short amount of time?
Well, I already mentioned it was related to a sudden onslaught of DDoS traffic, but the real concern here is that **Netlify doesn't shut down your site** when the traffic surges.
In fact, not only do they not shut down the traffic to your site, but apparently OP only received a single email from Netlify about "Extra usage package purchased"! And that was it. No warning, no nothing.

Netlify apparently has agreements in place with paid tiers, but free tiers offer no such provisions. So, if you're on the free tier, and you get hit by a DDoS attack, you're basically screwed. But don't worry, they'll give you a good discount!
Some commenters on Reddit even noted - with such a policy in place - it's almost like Netlify is ***encouraging*** DDoS attacks on free-tier sites, as they'd be the only ones who'd benefit from it.
## The Aftermath
As I noted at the beginning of this post, many Redditors have already started migrating their sites away from Netlify. But to say Netlify is alone with this policy would be inaccurate.
It seems that a number of other popular hosts with free tiers also don't offer a kill-switch for traffic surges, so moving over to another host may not necessarily solve the problem.
Reading the fine print has yet again shown to be crucial, and I'm just as guilty as the others for not doing so!
After some more backlash, Netlify's CEO [posted a follow-up comment](https://news.ycombinator.com/item?id=39522139) on the same Hacker News thread, stating that they'd be reviewing their policies and making changes to ensure that this kind of situation doesn't happen again.

Now while that's a bit of a relief, I wonder how long it'll take for them to actually implement these changes, and if they'll be enough to win back the trust of those who've already left.
## Final Thoughts
I'm not entirely sure if it's enough for me, but I guess only time will tell. As it stands, I'm still considering my options.
I'm not sure if I'll be moving my sites away from Netlify just yet, but I'm definitely going to be keeping a closer eye on my traffic and usage from now on.
I've also learned that I need to be more vigilant with the fine print of the services I use, and I hope you've learned the same from this post.
What are your thoughts on this? Are people freaking out too much over this? Have you been affected by a similar situation with Netlify or another host?
Let me know in the comments below.
| mlaposta |
1,775,150 | A Love Letter to the Underrepresented in Tech | Dear "Underrepresented," In my dreams, you are fully represented - unabashedly you. Never need to... | 26,618 | 2024-03-01T12:58:28 | https://dev.to/abbeyperini/a-love-letter-to-the-underrepresented-in-tech-4jj3 | wecoded, inclusion, career, writing | Dear "Underrepresented,"
In my dreams, you are fully represented - unabashedly you. Never need to watch what you say or how you say it. Never asked to be less or more. Never told what you can and can't. Never told what you want and don't want.
I see the way you wake up every morning and draw on that inner well of strength. I see the exhausted evenings and the all too brief celebrations. The obstacles lie before you like stairs, each step eagerly awaiting your stiletto-heel stab, combat boot kick, or firm sneaker tread. I want to build you more landings in between your steps - more breaks from your heavy load, more helping hands to hold some of it for you.
For I too have felt the wind pick up, ready to start buffeting me back down the staircase. You help me build a bulwark in the storm. Around you, I can stop, rest, and gather my strength. I don't need to explain why I couldn't let that one little event pass, unnoticed. That teeny tiny event that was just the last grain of sand before the whole dune came barreling towards us in some sort of quicksand landslide. You understand the comments, glances, glares, oversights, and invasions that happen in a place like tech. In a place like... anywhere.
You deserve champagne days. Call your best friend and plan a trip days. Looking back and can't help but be proud days. Letting the [rubber balls drop](https://www.thebalancemoney.com/work-life-balance-and-juggling-glass-and-rubber-balls-2275864) and tossing the glass balls back up days. You deserve friends and (chosen) family who hold you up on bad days. Who climb up the staircase with you. Who get out of the way when you're ready to spread your wings. May you know when you're ready. Not perfect. Never perfect, but scared and ready, prepped and prepared... and before you know it, soaring on your newly unfurled wings like you never doubted you were ready.
I hope you are told frequently how you impact others' lives. Not because I think you need to adjust or learn a lesson. Rather, I want you to hear, see, and feel how much light you bring to those around you. I need you to know how many inner wells of strength you pour a little water back into on a daily basis. Every day that you bring what is unique about you to the table is another day the table is improved. You deserve to see the beautiful blossoms that grow in the ground you've so painfully worked. All the blood, sweat, and tears that have poured out of you over the years have left that ground softer for those who follow you.
Don't doubt they'll follow. Somewhere someone has been remembering that one thing you said. About how they *could*. About how they'll *shine*. About how you'd be there if they trip on one of those steps in that long flight of stairs before them. The never-ending stairs and stares, blank or otherwise. The never-ending "well they just" and "he didn't understand" and how they never "mean anything by it."
May all the laughs be yours going forward, because no one has to say "it was just a joke" - everyone around you has learned good jokes punch up, not down. And those people around you should be showering you with awards and raises and praise. So much praise that your brag doc stretches forever and your resume can't possibly fit on one page. So much praise that you remember why you're doing this on your darkest days. So much praise that it drowns out the tech boys' silly noise. So much praise that no one can doubt you - not even you.
And when we're together, I hope the [whisper network](https://en.wikipedia.org/wiki/Whisper_network) never has to point out the [missing stair](https://en.wikipedia.org/wiki/Missing_stair). I want all the stairs in each of our staircases to be technical, not human. I dare to dream our stairs will correlate to the ladders we want to climb - ladders that were built to take us straight to the top. Because I can't wait to share the view with all y'all.
Love always,
Abbey | abbeyperini |
1,775,396 | What are you learning about this weekend? 🧠 | Howdy! 🤠 Hope you're weekend is going well. Whether you're sharpening your JS skills, making PRs to... | 0 | 2024-03-09T12:30:00 | https://dev.to/devteam/what-are-you-learning-about-this-weekend-183o | learning, beginners, discuss | Howdy! 🤠
Hope your weekend is going well.
Whether you're sharpening your [JS](https://dev.to/t/javascript) skills, making PRs to [your OSS repo of choice](https://github.com/forem/forem) 😉, sprucing up your portfolio, or [writing a new post](https://dev.to/new) here on DEV, we'd like to hear about it.
Learn some, chill some. Repeat! 🤜💥🤛
 | michaeltharrington |
1,775,568 | Just a simple Songs API using Spring Reactive with Functional Endpoints, Docker and MongoDB | Blocking is a feature of classic servlet-based web frameworks like Spring MVC. Introduced in Spring... | 0 | 2024-02-29T00:55:13 | https://dev.to/daasrattale/just-a-simple-songs-api-using-spring-reactive-with-functional-endpoints-docker-and-mongodb-2hp7 | spring, java, docker, mongodb | Blocking is a feature of classic servlet-based web frameworks like Spring MVC. Introduced in Spring 5, Spring WebFlux is a reactive framework that operates on servers like **Netty** and is completely non-blocking.
Spring WebFlux supports two programming models: annotated controllers and WebFlux.fn (a functional programming model).
> "Spring WebFlux includes WebFlux.fn, a lightweight functional programming model in which functions are used to route and handle requests and contracts are designed for immutability. It is an alternative to the annotation-based programming model but otherwise runs on the same Reactive Core foundation." [Spring | Functional Endpoints](https://docs.spring.io/spring-framework/reference/web/webflux-functional.html)
## Project Description
As the title describes, this is a simple Songs API built using Spring, Docker, and MongoDB. The endpoints are functional endpoints, and exceptions are handled by the traditional ControllerAdvice.
## Project Dependencies
- Java Version `21`
- Spring Boot version `3.3.0-SNAPSHOT` with Spring Reactive Starter.
- [Spring Docker Support](https://spring.io/blog/2023/06/21/docker-compose-support-in-spring-boot-3-1).
- Lombok (Optional).
In XML terms, these are the project dependencies:
```xml
<dependencies>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-data-mongodb-reactive</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-webflux</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-docker-compose</artifactId>
<scope>runtime</scope>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.projectlombok</groupId>
<artifactId>lombok</artifactId>
<optional>true</optional>
</dependency>
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-starter-test</artifactId>
<scope>test</scope>
</dependency>
<dependency>
<groupId>io.projectreactor</groupId>
<artifactId>reactor-test</artifactId>
<scope>test</scope>
</dependency>
</dependencies>
```
## Coding time!
First, let's set up the project's Docker Compose file, `/compose.yaml` (it should be generated by Spring via the Docker Compose support starter).
```yaml
services:
mongodb:
image: 'mongo:7.0.5'
environment:
- 'MONGO_INITDB_DATABASE=songsDB'
- 'MONGO_INITDB_ROOT_PASSWORD=passw0rd'
- 'MONGO_INITDB_ROOT_USERNAME=root'
ports:
- '27017'
```
With that set, let's create the Song class:
```java
import lombok.AllArgsConstructor;
import lombok.Builder;
import lombok.Getter;
import lombok.Setter;
import org.springframework.data.annotation.Id;
import org.springframework.data.mongodb.core.mapping.Document;
import java.util.UUID;
@Document
@Getter
@Setter
@AllArgsConstructor
@Builder
public class Song {
@Id
private UUID id;
private String title;
private String artist;
}
```
The SongRepository interface handles the database operations for the Song class:
```java
import org.springframework.data.repository.reactive.ReactiveCrudRepository;
import org.springframework.stereotype.Repository;
import reactor.core.publisher.Flux;
import java.util.UUID;
@Repository
public interface SongRepository extends ReactiveCrudRepository<Song, UUID> {
Flux<Song> findAllByArtist(final String artist);
}
```
## Song Functional Endpoint and Handler
Now it's time for the song router, which is responsible for routing incoming requests to the `/songs` resource:
```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.server.RouterFunction;
import org.springframework.web.reactive.function.server.ServerResponse;
import static org.springframework.web.reactive.function.server.RouterFunctions.route;
@Configuration
public class SongRouterConfig {
private final SongHandler handler;
public SongRouterConfig(SongHandler handler) {
this.handler = handler;
}
@Bean
public RouterFunction<ServerResponse> router() {
return route().path("/songs", builder -> builder
.GET("/artist", handler::findAllByArtist)
.GET(handler::findAll) // Get endpoints' order is important
.POST("/new", handler::create)
.DELETE("/{id}", handler::delete)
).build();
}
}
```
As you may have noticed, the requests are delegated to the SongHandler, where the actual logic is performed.
> Note: If you're having trouble understanding the syntax, make sure to read up on Java functional interfaces, lambdas, and method references.
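The `handler::findAllByArtist` style used in the router is a method reference: shorthand for a lambda that calls a method. A minimal plain-Java sketch of the idea (the `Greeter` type here is hypothetical, just a stand-in for the handler, with no Spring involved):

```java
import java.util.function.Function;

public class MethodRefDemo {
    // Hypothetical handler-like class, standing in for SongHandler
    static class Greeter {
        String greet(String name) {
            return "Hello, " + name;
        }
    }

    public static void main(String[] args) {
        Greeter greeter = new Greeter();

        // Lambda form...
        Function<String, String> lambda = name -> greeter.greet(name);
        // ...and the equivalent method reference, like handler::findAllByArtist
        Function<String, String> ref = greeter::greet;

        System.out.println(lambda.apply("world"));
        System.out.println(ref.apply("world"));
    }
}
```

In the router, `builder.GET(handler::findAll)` works the same way: the method reference is passed where a handler function is expected.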
The SongHandler also acts as a service: it performs the business logic and talks to the SongRepository for database operations.
```java
import io.daasrattale.webfluxmongofunctionalendpoints.song.exceptions.InvalidParamException;
import io.daasrattale.webfluxmongofunctionalendpoints.song.exceptions.InvalidUUIDException;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.server.ServerRequest;
import org.springframework.web.reactive.function.server.ServerResponse;
import reactor.core.publisher.Mono;
import java.util.Optional;
import java.util.UUID;
@Service
public class SongHandler {
private final SongRepository repository;
public SongHandler(SongRepository repository) {
this.repository = repository;
}
public Mono<ServerResponse> findAll(final ServerRequest request) {
return ServerResponse
.ok()
.body(repository.findAll(), Song.class);
}
public Mono<ServerResponse> findAllByArtist(final ServerRequest request) {
return Mono.just(request.queryParam("artist"))
.switchIfEmpty(Mono.error(new InvalidParamException("artist")))
.map(Optional::get)
.map(repository::findAllByArtist)
.flatMap(songFlux -> ServerResponse
.ok()
.body(songFlux, Song.class));
}
public Mono<ServerResponse> create(final ServerRequest request) {
return request.bodyToMono(Song.class)
.switchIfEmpty(Mono.error(new RuntimeException("Song body not found"))) // you can use that or create a custom exception (recommended)
.doOnNext(song -> song.setId(UUID.randomUUID()))
.flatMap(song -> ServerResponse
.status(HttpStatus.CREATED)
.body(repository.save(song), Song.class)
);
}
public Mono<ServerResponse> delete(final ServerRequest request) {
return Mono.just(request.pathVariable("id"))
.map(UUID::fromString)
.doOnError(throwable -> {
throw new InvalidUUIDException(throwable);
})
.flatMap(songId -> ServerResponse
.ok()
.body(repository.deleteById(songId), Void.class)
);
}
}
```
> Note: The `SongHandler` could be annotated with @Component, but since it performs business logic I think the @Service annotation fits better.
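The `delete` handler above leans on `UUID.fromString` throwing an `IllegalArgumentException` for malformed path variables, which `doOnError` then wraps into `InvalidUUIDException`. A quick standalone check of that JDK behavior:

```java
import java.util.UUID;

public class UuidParseDemo {
    public static void main(String[] args) {
        // A well-formed UUID parses fine
        UUID ok = UUID.fromString("123e4567-e89b-12d3-a456-426614174000");
        System.out.println(ok.version());

        // A malformed one throws, which is what triggers doOnError in the handler
        try {
            UUID.fromString("not-a-uuid");
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```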
## Exception Handling
As previously stated, we'll be using the same old ControllerAdvice as the exception handler, along with two custom exceptions:
### Custom Exceptions
```java
import lombok.Getter;
@Getter
public class InvalidParamException extends RuntimeException {
private final String paramName;
public InvalidParamException(final String paramName) {
this.paramName = paramName;
}
}
```
```java
import lombok.Getter;
@Getter
public class InvalidUUIDException extends RuntimeException {
private final Throwable cause;
public InvalidUUIDException(final Throwable cause) {
this.cause = cause;
}
}
```
### Custom Exception Handler
```java
import io.daasrattale.webfluxmongofunctionalendpoints.song.exceptions.InvalidUUIDException;
import lombok.extern.slf4j.Slf4j;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import java.util.Map;
@ControllerAdvice
@Slf4j
public class SongExceptionHandler {
@ExceptionHandler(InvalidUUIDException.class)
public ResponseEntity<Map<String, ?>> handle(final InvalidUUIDException exception) {
return ResponseEntity
.badRequest()
.body(
Map.of(
"status", 400,
"message", "Invalid UUID",
"details", exception.getCause().getMessage()
)
);
}
@ExceptionHandler(Exception.class)
public ResponseEntity<Map<String, ?>> handle(final Exception exception) {
log.error("Unhandled Error, message: {}", exception.getMessage());
return ResponseEntity
.internalServerError()
.body(
Map.of(
"status", 500,
"message", "Unknown Error",
"details", exception.getMessage()
)
);
}
}
```
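The error payloads above are built with plain `Map.of(...)`. One thing to keep in mind (an observation of mine, not covered in the handler): `Map.of` rejects null keys and values, so if `exception.getMessage()` ever returns null, the advice itself would throw. A standalone sketch:

```java
import java.util.Map;

public class ErrorPayloadDemo {
    public static void main(String[] args) {
        // Same shape as the handler's response body
        Map<String, ?> body = Map.of(
                "status", 400,
                "message", "Invalid UUID",
                "details", "Invalid UUID string: abc"
        );
        System.out.println(body.get("status"));

        // Gotcha: Map.of rejects null values, so a null
        // exception.getMessage() would make the advice itself throw.
        try {
            Map.of("details", (Object) null);
        } catch (NullPointerException e) {
            System.out.println("null values are rejected");
        }
    }
}
```

A defensive variant could fall back to a default string when the message is null.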
With all that set up, let's exercise our endpoints using Postman:
- Creating a new Song

- Getting songs by artist:

- Getting all songs:

- Deleting a song:
Sorry not a big fan of Madonna tbh :|

- Checking the result of the delete op:

## Finally,
With that said, our functional songs endpoint will be good to go for further improvements and new features.
This was kept simple; in real industrial projects, I can assure you, things get more complicated, with more layers. For "getting started" purposes I avoided advanced concepts such as validation, DTOs, etc.
You can find the full source [here](https://github.com/daasrattale/webflux-functional-endpoints).
Also, find more content on my [personal website](https://saadelattar.me). | daasrattale |
1,775,619 | Mastering Test Data Management: A Strategic Approach | To commence effective test data management, you must first comprehend your testing project’s... | 0 | 2024-02-29T02:49:33 | https://asurascanss.com/mastering-test-data-management/ | test, data, management | 
To begin effective test data management, you must first understand your testing project's specific data requirements, which can vary significantly. Given how crucial they are, it is worth investing time in understanding both the test data management tools and the precise data needed.
This process encompasses several aspects: identifying the necessary types of data, determining an appropriate volume for comprehensive testing and considering any specific attributes or conditions required.
**Acquiring or Generating Test Data**
After clarifying the data requirements, we must proceed to acquire or generate the test data; this can be accomplished through several methods: synthesizing data from specialized test warehouses, using authentic production system information (while ensuring compliance with security and privacy regulations), and employing a hybrid approach.
The appropriate method hinges on the unique needs and constraints of the specific project under consideration.
**Data Preparation for Testing**
Once the test data is acquired, it must be thoroughly prepared for use in the testing environment. This covers a range of critical tasks, from masking sensitive information to guaranteeing data completeness, accuracy and consistency. It also involves formatting the data appropriately, all aimed at seamless integration into the test environment.
Adequate preparation not only makes testing more effective but also ensures that our tests accurately mirror real-world scenarios.
**Utilization and Management of Test Data**
The final, crucial step is actually using and managing the prepared test data in the testing process. This phase includes several activities: generating specific test data sets and provisioning the test environment with the data the tests need.
After each test cycle concludes, test results must be carefully stored and managed to ensure repeatability and traceability, and to keep future cycles running smoothly.
**Continuous Improvement and Iteration**
Test data management is an iterative process. Regularly reviewing and refining the data requirements, data generation methods, and preparation procedures drives continuous improvement.
By embracing a feedback loop, teams can adapt to evolving testing needs. This optimizes not only accuracy but also enhances overall efficiency in the test data management process.
**Conclusion**
Opkey, a leader in the dynamic field of test data management, offers a robust solution that flawlessly integrates with test automation workflows. It harnesses state-of-the-art test mining technology to autonomously extract and refine client’s environment-based data for optimal compliance with required formats.
Beyond mere data extraction, Opkey efficiently mines master data details from various sources: Chart of Accounts, Employee, Customers, Items, Suppliers, Procure to Pay and Order to Cash, among others. This comprehensive approach slashes the data collection efforts of QA teams significantly – boosting efficiency by an impressive 40%.
Opkey’s test data management solution for enterprise software testing truly reveals its power in scenarios that require multiple testing cycles; for instance, EBS to Cloud migration or rigorous regression testing for Oracle’s quarterly updates. During these critical situations, Opkey transforms into an indispensable ally by streamlining the testing process and guaranteeing test data readiness.
Essentially, Opkey’s Test Data Management solution proves a cost-effective and time-saving tool for companies conducting Oracle tests. Opkey consistently supplies accurate, correctly formatted and readily deployable test data; this empowers organizations to tackle the intricate challenges of testing with confidence. As businesses aim towards excellence in their testing ventures, they find a reliable partner in Opkey that guarantees not only managing but also optimizing test data for success. | rohitbhandari102 |
1,775,686 | Engenharia Reversa: Primeiro Contato - Parte 1 | Você vai praticar engenharia reversa pela primeira vez. "Engenharia reversa", na área de T.I.... | 0 | 2024-03-17T15:22:02 | https://dev.to/ryan_gozlyngg/engenharia-reversa-primeiro-contato-parte-1-2gih | braziliandevs, tutorial, beginners, debugging | You are going to practice reverse engineering for the first time.
"Engenharia reversa", na área de T.I. refere-se à engenharia reversa de software, que, a grosso modo, é a prática de entender o funcionamento de um software alheio, "nos mínimos detalhes".
Esse tutorial é uma breve introdução ao uso do **debugger x64dbg**, que é pré-requisito para o próximo tutorial.
Esse tutorial foi escrito para os iniciantes, o intuito é lhe preparar para o próximo tutorial: Debugando um programa crackme; observe que um é complemento do outro.
Programas "crackme" são softwares com algum tipo de desafio, como, por exemplo, descobrir uma senha de acesso ao próprio crackme.
**O maior objetivo disso tudo, é mostrar um pouco desse mundo para pessoas que têm interesse em "baixo nível", mas não sabem se "isso é para elas".**
**Quero lhe ajudar a ter o "primeiro gostinho" desse mundo...**
## Lista de Conteúdo
- [O que é um Debugger](#o-que-é-um-debugger)
- [Sobre a Nomenclatura "x64dbg"](#sobre-a-nomenclatura-x64dbg)
- [Antes de começar](#antes-de-começar)
- [Baixando o x64dbg](#baixando-o-x64dbg)
- [Abrindo o x64dbg](#abrindo-o-x64dbg)
- [Mapeando as partes mais usadas da Interface Gráfica](#mapeando-as-partes-mais-usadas-da-interface-gráfica)
- [Botões de controle do Debugger](#botões-de-controle-do-debugger)
- [Atalhos para Funcionalidades](#atalhos-para-funcionalidades)
- [Janelas mais usadas](#janelas-mais-usadas)
- [Mapeando a janela CPU](#mapeando-a-janela-cpu)
- [Coluna de Endereços](#coluna-de-endereços)
- [Coluna de Opcodes](#coluna-de-opcodes)
- [Coluna de Instruções Assembly](#coluna-de-instruções-assembly)
- [Coluna de Comentários](#coluna-de-comentários)
- [Coluna de Registradores](#coluna-de-registradores)
- [Stack](#stack)
- [Dump de memória](#dump-de-memória)
- [Status do programa Debugado](#status-do-programa-debugado)
- [Rodando Primeiro Programa no x64dbg](#rodando-primeiro-programa-no-x64dbg)
- [Encontrando Strings](#encontrando-strings)
- [Noções básicas sobre Funções em Assembly](#noções-básicas-sobre-funções-em-assembly)
- [Atalhos úteis](#atalhos-úteis)
- [Final: Considerações e Recomendações](#final-considerações-e-recomendações)
- [Caso esteja procurando obter os fundamentos da computação, eu sugiro que você confira os links desta área](#links-para-iniciantes)
---
### O que é um Debugger
Debugger é um software que serve para "debugar" e testar programas.
"Debugar" é o processo de procurar bugs.
Bugs são erros ou problemas em um software.
E "testar", aqui, se define como "fazer oque quisermos com o software".
O Debugger lhe fornece acesso ao programa compilado.
Em um debugger, você vai ter acesso aos seguintes recursos:
* Valores das suas variáveis em memória - Mapa de memória
* Suas linhas de código transformadas em instruções Assembly
* Módulos (dll's e lib's) usados
* Threads e Handles
---
### Sobre a Nomenclatura "x64dbg"
Não se confunda: o nome do programa como um todo é **x64dbg**.
Ele possui dois APLICATIVOS que são chamados x64dbg e x32dbg.
Você sempre deve saber qual dos dois deve usar para debugar um programa em específico, dependendo da plataforma para a qual o programa foi compilado:
* o aplicativo chamado **x64dbg** serve para debugar programas de 64 bits.
* o aplicativo chamado **x32dbg** serve para debugar programas de 32 bits.
Caso você não saiba se o seu programa é de 32-bits ou se é de 64-bits, abra o link:
[Como saber se um programa é 32 ou 64 bits no windows](https://www.tecwhite.net/2018/09/como-saber-se-um-programa-e-32-ou-64-bits-no-windows.html).
**No tutorial a seguir, quando eu disser "x64dbg", eu estarei me referindo ao programa como um todo, e não ao aplicativo específico.**
---
### Antes de começar
* Vou trabalhar apenas com **Assembly de x86_x64**:<br>
<blockquote>
Assembly não é uma linguagem de programação comum, como <strong>C</strong>, que
você aprende e sai criando programas para "qualquer coisa":
há diversos processadores, e cada um possui uma arquitetura,
que vai dizer como as instruções são nomeadas e como vão
funcionar, que dita o nome e funcionamento dos
registradores, etc.<br>
Mais sobre: <a href="https://www.arm.com/glossary/isa#:%7E:text=An%20Instruction%20Set%20Architecture%20(ISA,as%20how%20it%20gets%20done)" target="_blank">ISA-Instruction Set Architecture</a>
</blockquote>
* Não é esperado nenhum conhecimento prévio sobre debuggers ou Assembly.
* É esperado conhecimento básico em programação, e processo de compilação.
* Conhecimento sobre fundamentos da computação lhe ajudarão a entender melhor o que se passa aqui. (Nota: Você pode usar um debugger para reforçar os estudos dos fundamentos da computação, já que, dessa forma, irá ver as coisas acontecendo na prática).
* O básico sobre sistemas numéricos é esperado.
Espero lhe dar um overview de como é usar um debugger, mas tenha em mente que tópicos extremamente importantes estão sendo deixados de lado para não lhe "inundar" de informações no início. Entenda isso como um "Quick Overview".
Se estiver procurando obter os fundamentos da computação, confira os links deixados na parte final.
---
### Downloading x64dbg
<blockquote>
<ul>
<li>Go to the official site: <a href="https://x64dbg.com/" target="_blank">x64dbg.com</a></li>
<li>Click <strong>Download</strong></li>
<li>You will be redirected to the Sourceforge site. Click <strong>Download Latest Version</strong></li>
</ul>
</blockquote>
### "Installing" x64dbg
<blockquote>
<ul>
<li>Extract the zip file. You can create a folder anywhere to store it.</li>
<li>Open the extracted folder</li>
<li>Go to <strong>release/x32</strong>, find the <strong>x32dbg</strong> application and create a shortcut on your desktop</li>
<li>Go to <strong>release/x64</strong>, find the <strong>x64dbg</strong> application and create a shortcut on your desktop</li>
</ul>
</blockquote>
---
### Opening x64dbg
To open any program in x64dbg:
* Open x64dbg
* Click **File -> Open** (or press **F3**)
* Alternative: drag the program onto one of the x64dbg shortcuts
* To open a program that is already running, open the debugger, click **File -> Attach** and choose the program.
---
### Mapping the most used parts of the GUI

*\* Follow the colors*.
### Debugger control buttons
Area with the debugger's control buttons. Click **Debug** (next to **View**) to see all the names and shortcuts of the buttons in this section.

* Icons, from left to right:
1. Open file
2. Restart the program
3. Close the loaded program
4. Run the loaded program
5. Pause the loaded program
6. Step Into: steps instruction by instruction and enters ``call`` instructions. ``call`` instructions invoke other functions; "entering them" means jumping into the called function.
7. Step Over: steps instruction by instruction and does NOT enter ``call`` instructions
I will not comment on the other buttons; to learn about them, see:
https://help.x64dbg.com/en/latest/gui/views/Trace.html
https://help.x64dbg.com/en/latest/introduction/ConditionalTracing.html
---
### Feature shortcuts
Shortcuts to some x64dbg features (I will not go into detail).

I will only mention the fifth one, from left to right, which is the favorites icon.
Whenever you want to save the location of an instruction, press **CTRL+D** and the selected instruction will be saved to your favorites. Click the shortcut mentioned above to be taken to your favorites.
---
### Most used windows
In this part of x64dbg you have all of the program's feature windows.

In the yellow box I only marked the ones I will be commenting on. To start using the program, I believe these windows are all you need. But do learn all of them if you want to go further in your studies.
By clicking **View** (next to File, top left corner) you can enable the windows that are not showing, and also see the shortcuts for each of them.
The image above shows the **CPU** window, which will be covered at the end of this section on windows.
<blockquote>
<ul>
<li><strong>Breakpoints window</strong>: breakpoints are
stopping points, places where the program will
necessarily halt. We will talk more about breakpoints soon.
In this window you have access to the breakpoints set
across the whole program. Here you can manipulate
breakpoints, enabling, disabling or even
removing them, and you can even edit them. Clicking one
of them takes you to the location where it is
set, in the CPU window.
<li><strong>Symbols window (Modules)</strong>: this is
an extremely important window for some
activities, since it shows us the modules imported
by the program, including the program's own module.
<li><strong>References window</strong>: Here we have the
references, which we search for as follows: inside
the CPU window, right-click; near
the bottom you will find "Search for" and "Find
references to":
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/abgzv6pirlgpdbhybqlb.PNG" alt="Image description">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ntq6rheu41z1fc6gg3p4.PNG" alt="Image description">
The results of searches made through these options are shown in the **References** window.
<li><strong>Memory Map window</strong>: As the name
says, it is a map of the entire program's memory.
You can see where a given address is
located as follows:
<ul>
<li>Inside the CPU window, select a desired
address (I will comment on addresses further
ahead)
<li>Inside the CPU window, right-click
on the desired address
<li>Click "Follow in Memory Map":
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5a2wjcof5shlspd3r5fd.PNG" alt="Image description">
<p>You will see the information about the selected
address in the memory map.</p>
</ul>
<li><strong>CPU window</strong>: Shows us everything
we see in the "mapped" image above, and since this is
the main window, all the color-coded notes
that follow will be about it.
</ul>
</blockquote>
---
## Mapping the CPU window
We will spend most of our time in this window; it is the main window of x64dbg.
### Address column

In the first column of the large area we have the memory addresses (virtual memory addresses, **Virtual Address - VA**).
The green marker shows us the next instruction to be executed. The gold highlight is an instruction bookmarked as a "favorite" using CTRL+D.
To learn more about memory addresses: https://learn.microsoft.com/pt-br/windows/win32/memory/virtual-address-space
---
### Opcode column


In the second column of the large area we have the opcodes, which are operation codes: they dictate an operation to be executed by the processor (hence the name **OPeration Code**).
Here they are shown in hexadecimal. Each opcode encodes an Assembly **instruction**. Opcodes are also called machine code; the human-readable mnemonics for them are what we call Assembly.
For example, there is an instruction called ``add``, which sums two operands.
As an opcode it is encoded as one of the following, depending on the context (in the image, Opcode column, left side):
This is the table that shows us how the ``add`` instruction is encoded in the Intel x86_64 architecture.
Note the columns **64-bit Mode**, for 64-bit programs, and **Compat/Leg Mode**, for 32-bit programs. To learn more, follow the link:
<a href="https://www.intel.com/content/www/us/en/developer/articles/technical/intel-sdm.html" target="_blank">Intel Software Developer's Manual.</a>
Let's see which opcode we get by compiling a program for a 64-bit system:
```
// C program
int main(){
int primeiro_numero = 3;
int segundo_numero = 2;
int resultado_da_soma;
resultado_da_soma = primeiro_numero + segundo_numero;
return 0;
}
```
Note the selected line: the program uses the ``03 /r`` opcode.
---
### Assembly instruction column
In the third column of the large area we see the Assembly instructions. We talked about them earlier, in the opcode column.

---
### Comment column
In the fourth column of the large area we have the comments section.

We can get automatic comments; for example, for instructions that load a string, the string itself appears as a comment. We can also add our own comments: just press the ``;`` key and write something, then press ``enter`` when done.
---
### Register column
 Here, on the far right side, we have the registers and their contents.

Registers are memory locations inside the processor, and they work like temporary variables.
There are **general-purpose registers**, which are always used for all sorts of things:
**x64 general-purpose registers:**
* RAX
* RBX
* RCX
* RDX
* RSI
* RDI
* R8, R9, R10, R11, R12, R13, R14 and R15. In x64 we get these eight extra registers.
**x86 general-purpose registers:**
* EAX
* EBX
* ECX
* EDX
* ESI
* EDI
In theory these registers can be used for anything, but there are situations in which RDI/EDI and RSI/ESI are used exclusively by certain instructions.
There are also registers that store specific things.
I will only mention the ones I consider most important for now.
To learn more, be sure to read:
<a href="https://blog.yossarian.net/2020/11/30/How-many-registers-does-an-x86-64-cpu-have" target="_blank">How many registers does an x86-64 cpu have</a>
<a href="https://asm.lucasteske.dev/registers/x86" target="_blank">x86 registers (32 and 64 bit)</a>
**Some special-purpose registers (in order: x64/x86):**
* RSP/ESP - Pointer to the **top** of the stack
* RBP/EBP - Pointer to the **base** of the stack
* RIP - Pointer to the next instruction to be executed. **IP** - Instruction Pointer (shown as **RIP** in x64 and **EIP** in x86). It is through this register that we know where we are inside the program during the execution flow.
Since it holds the address of the next instruction to be executed, we are always exactly "one step" behind it.
*\* I have also left out the registers that deal with floats.* More on this: https://my.eng.utah.edu/~cs4400/sse-fp.pdf
---
### Stack
Below the registers we have the stack.
The stack is a memory segment that operates as a stack data structure, hence the name.
Here we have the values of our local variables. While you are inside a function, its local data will be there.
The left column shows the stack address, and the right column shows the value stored at that address.
The stack is a **LIFO** - Last In, First Out - data structure: the last element in is the first one out. Everyone likes to use a pile of plates to explain the stack: in a pile of plates, which plate do you remove FIRST? The last one stacked, or the first one, which started the pile?
Obviously, you first remove the one on top: **LIFO**...
Remember the RSP and RBP registers? This is where they come in: RSP points to the top of the stack, and RBP points to the base of the stack.
This is how a function's memory layout is delimited and manipulated, through these registers.
And this is what is called a "stack frame": the memory space reserved for a function.
We will see a bit more about this in "Basic notions of functions in Assembly".
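The LIFO behavior described above can be sketched in C with a small array-based stack (a toy model, not the real program stack; the type and function names are just illustrative):

```c
#define STACK_MAX 16

/* A toy LIFO stack: the last value pushed is the first one popped,
   just like the pile of plates described above. */
typedef struct {
    int data[STACK_MAX];
    int top;              /* number of values currently stored */
} Stack;

void push(Stack *s, int value) { s->data[s->top++] = value; }
int  pop(Stack *s)             { return s->data[--s->top]; }
```

Pushing 1, 2, 3 and then popping three times gives back 3, 2, 1: last in, first out.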
---
### Memory Dump

To the left of the registers we find a hexadecimal dump of memory.
You can right-click any instruction and follow either its own address or an address it operates on. You can do the same with the registers.
Back to the dump image: the leftmost column shows the address corresponding to the hexadecimal data in the next four columns (four columns in the default configuration).
In the last column, on the far right, we have the ASCII representation of the data in memory.
We can also change how the dump data is displayed: besides hexadecimal, it can show the data as integers and floats. Just right-click the dump area and click "Integers" or "Floats".
At the top we have several dump tabs, to which you can direct the view of anything that interests you. There are other features as well, but I will not mention them here. To learn more, read: <a href="https://help.x64dbg.com/en/latest/commands/watch-control/index.html" target="_blank">Watch Control
</a>
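A minimal C sketch of what one dump row does: take raw bytes and render them as hex pairs plus their ASCII view, with non-printable bytes shown as a dot (the function name `dump_line` is just illustrative, not part of x64dbg):

```c
#include <ctype.h>
#include <stdio.h>

/* Format `len` bytes as "XX XX ... | ascii": hex pairs first,
   then the ASCII representation, '.' for non-printable bytes. */
void dump_line(const unsigned char *bytes, size_t len, char *out) {
    char *p = out;
    for (size_t i = 0; i < len; i++)
        p += sprintf(p, "%02X ", bytes[i]);
    p += sprintf(p, "| ");
    for (size_t i = 0; i < len; i++)
        *p++ = isprint(bytes[i]) ? (char)bytes[i] : '.';
    *p = '\0';
}
```

For example, the bytes `48 69 01` render as `48 69 01 | Hi.`, mirroring the hex and ASCII columns of the dump window.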
---
### Status of the debugged program
Here, in the bottom left corner, we are shown the state of the debugged program, for example whether it is running or paused, and, if paused, the reason.


---
### Running your first program in x64dbg
Well, now that you know how to open your program in x64dbg, let's learn a few more things before moving on to the next tutorial.
Open any program with x64dbg, preferably one made by yourself.
Next, open the window (in red) as follows: click **Options -> Preferences**
Then uncheck the box pointed at by the arrow, **System Breakpoint**. This disables some breakpoints that are "unnecessary" for us right now (to learn more about system breakpoints: https://www.youtube.com/watch?v=vdyyg72tc2w).

When you open the program, it will only show up in the taskbar once it has been loaded. I am opening a simple program, so it draws its window right away.
If it is a more complex program, it will load other things before it can draw its window. Just click **RUN** (**F9**) until it stops at a breakpoint: the entry breakpoint, which you can see next to the program status:

Always keep an eye on the program status in the bottom bar, next to **PAUSE**.
Let's try to find our functions. If the program has not been "carefully stripped", finding the main function, **main**, is not that hard.
Go to the **Symbols** window, which holds our modules, find the main module (the one named after the running program itself), click it and you will be taken to the beginning of that module:
This way, if the program is not extremely complex, you will find main.
**Tip**: programs compiled for x86 usually keep the **main** function at the top, so as soon as the program stops at the entry breakpoint, go to the top of the instructions and you will find **main**.
---
#### Finding strings
Another way to find main is through a string it references.
Finding the strings inside the program is easy:
in the CPU window, right-click in the middle of the window and go to
**Search for -> All User Modules -> String references**.
Once the strings are loaded in the references window, look for the one you know is used in main, then click it. You will be taken to the place where it appears in the program, in the CPU window. The start of that function is the start of main, our main function, the one the programmer wrote.
---
### Basic notions of functions in Assembly
Every function in Assembly has a **prologue** and an **epilogue**.
Functions normally have the following instructions at the beginning (the prologue):
```
push ebp
mov ebp, esp
```
And the following instruction at the end (the epilogue):
```
ret
```
*Remember that in x64 the instructions use the ``rsp`` and ``rbp`` registers*
At the start, the program saves the value in the **EBP/RBP** register with ``push ebp``. That value is the address of the base of the previous function's stack frame; remember that the stack holds the values of the local variables, and that local variables are variables declared and/or defined inside functions (only defined variables, that is, variables with values, are placed on the stack).
After saving the value of **EBP**, a new stack frame is started; in other words, a new place ("place" meaning a block of memory on the stack) to hold this new function's variables is "delimited". This is done with ``mov ebp, esp``.
Keep in mind that, for each new function, a new stack frame is mapped and allocated for it, so each function only deals with the values of its own variables.
You will need to learn about calling conventions to understand how arguments are passed to functions. I will not explain that here, so if you want to know more, be sure to read:
https://en.wikipedia.org/wiki/X86_calling_conventions
<a href="https://www.ired.team/miscellaneous-reversing-forensics/windows-kernel-internals/linux-x64-calling-convention-stack-frame" target="_blank">Calling Convention Stack Frame</a>
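To make the prologue/epilogue mechanics concrete, here is a toy C model of the stack and the ESP/EBP registers (all names are illustrative; the real work is done by the CPU, this merely mimics the `push ebp` / `mov ebp, esp` / teardown sequence described above):

```c
#define STACK_SIZE 64

/* Toy machine state: a downward-growing stack plus ESP/EBP indices. */
typedef struct {
    int mem[STACK_SIZE];
    int esp; /* index of the top of the stack (grows downward) */
    int ebp; /* index of the base of the current stack frame   */
} Machine;

void push_word(Machine *m, int v) { m->mem[--m->esp] = v; }
int  pop_word(Machine *m)         { return m->mem[m->esp++]; }

/* Prologue: save the caller's frame base, start a new frame. */
void prologue(Machine *m) {
    push_word(m, m->ebp); /* push ebp     */
    m->ebp = m->esp;      /* mov ebp, esp */
}

/* Epilogue: tear the frame down, restore the caller's base. */
void epilogue(Machine *m) {
    m->esp = m->ebp;      /* mov esp, ebp */
    m->ebp = pop_word(m); /* pop ebp      */
}
```

After `prologue`, anything pushed belongs to the new frame; after `epilogue`, both ESP and EBP are back to the caller's values, which is exactly why each function only sees its own locals.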
---
### Useful shortcuts
Right-click in the middle of the CPU window and go to "**Go to**"; there you will have access to the following features (memorize the shortcuts):
The most useful shortcuts of all are the first and the second:
1. RIP ``*``: takes you to where your program currently is, by following **RIP**.
2. Previous ``-``: goes back to the previously executed instruction.
### Final: Considerações e Recomendações
Não deixe de fazer o tutorial prático com o x64dbg [COLOCAR LINK QUANDO PRONTO].
Os debuggers são poderosos e úteis para as mais diversas atividades na área da computação. O x64dbg faz muitas coisas, várias delas eu deixei de mencionar aqui, como, por exemplo, alterar as instruções de um programa, para modificar sua funcionalidade.
Quanto mais você for estudando sobre tópicos de engenharia reversa, mais os debuggers se mostrarão úteis. Você irá se aprofundar nas diversas funcionalidades conforme a sua necessidade.
Um site muito bom para continuar praticando a leitura de Assembly é o seguinte: <a href="https://godbolt.org/" target="_blank">godbolt.org</a> Selecione uma linguagem, como C ou C++, escolha um compilador, e comece os testes.
#### Caso esteja procurando obter os fundamentos da computação, eu sugiro que você confira os links desta área:
**Guia de estudos:** https://www.mentebinaria.com.br/studying-materials/basico-para-computacao/
**Tutorial Gratuito de Assembly pt-br:** https://github.com/andreluispy/assembly4noobs
**Livro Gratuito de Assembly em pt-br**: https://mentebinaria.gitbook.io/assembly/
**Fundamentos da Engenharia Reversa pt-br:** https://mentebinaria.gitbook.io/engenharia-reversa/
**Curso de Engenharia Reversa Online (vídeo pt-br)**: https://www.youtube.com/playlist?list=PLIfZMtpPYFP6zLKlnyAeWY1I85VpyshAA
**Reverse Engineering book for beginners (English)**: https://beginners.re/

*Author: ryan_gozlyngg*

---

# Avoid These Pitfalls: A Programmer's Guide to Career Success
*Published 2024-02-29 · tags: programming, beginners, career*

In the world of programming, success isn't just about writing great code; it's also about avoiding common pitfalls that can derail your career. Let's explore some key things programmers should avoid for a brighter professional future.
## Neglecting Soft Skills
While technical skills are important, don't overlook the value of soft skills like communication, teamwork, and problem-solving. Programmers who excel in these areas are better equipped to collaborate effectively with colleagues and deliver projects on time and within budget.
## Failing to Stay Updated
Technology evolves rapidly, and staying current with the latest trends and tools is essential for career growth. Failing to invest time in learning new technologies can lead to obsolescence and limit your opportunities in the ever-changing tech landscape.
## Ignoring Code Quality
Writing sloppy, poorly documented code may seem expedient in the short term, but it can lead to maintenance nightmares and decrease your value as a programmer. Always strive for clean, well-structured code that is easy to understand and maintain.
## Working in Isolation
Collaboration is key in today's interconnected world. Avoid the temptation to work in isolation; instead, actively seek feedback from peers, contribute to open-source projects, and participate in developer communities to expand your knowledge and network.
## Neglecting Self-Care
Long hours and tight deadlines are part of the job, but neglecting self-care can lead to burnout and negatively impact your performance. Make time for hobbies, exercise, and relaxation to maintain a healthy work-life balance and sustain your passion for programming.
#### Watch This Video Only If You Know Hindi and Can Bear the Awful Audio and Unprofessional Description.
{% embed https://www.youtube.com/watch?v=8utVaB4MMwo %}
> By avoiding these common pitfalls, programmers can set themselves up for long-term success in their careers. Remember, it's not just about writing code—it's about navigating the challenges of the profession with skill and foresight.
*Author: aslisachin*
---

# AWS cross account access (switch role)
*Published 2024-03-11 · tags: aws, iam, iamrole*

In this tutorial we will use a delegated role to access resources in a different AWS account. You share resources in one account with users in a different account. By setting up cross-account access this way, you don't have to create individual IAM users in each account.
- Access AWS console
- Open Identity and Access Management (IAM)
- Click "Roles" on left side menu

- Select the AWS account
- Since this is cross-account access, enter the ID of the account to which you want to grant access to your resources

- Next, attach the permission policies: type the policy you want to attach in the search bar.

- Add the role name and an optional description, then create the role

- Finally, the role can be used from the other account by clicking "Switch Role" in the console

- Switch the role by entering the Account ID and the role name.

*Author: olawde*
---

# Posca Markers: A Comprehensive Guide to Mastering Fine Art
*Published 2024-02-29*

In the dynamic realm of digital art and non-fungible tokens (NFTs), Bermuda Unicorn stands as a beacon of creativity and innovation. Among its diverse offerings, Posca Markers NFT emerges as a standout piece within the Valentine's Day Special Collection. Let's delve into a comprehensive guide to mastering the artistry of Posca Markers.
## Bermuda Unicorn: A Leading NFT Marketplace in the Digital World
As a premier destination for digital art enthusiasts and collectors, Bermuda Unicorn has solidified its reputation as a leading NFT marketplace in the present digital age. With its curated selection of high-quality NFTs spanning various genres and themes, Bermuda Unicorn offers a platform for artists to showcase their talent and creativity to a global audience.
## Exploring Posca Markers NFT: A Valentine's Day Special Collection
Within Bermuda Unicorn's vast array of NFTs, Posca Markers NFT stands out as a captivating addition to the Valentine's Day Special Collection. This unique piece of digital art features a white teddy bear embracing a love-shaped symbol, evoking feelings of warmth, affection, and nostalgia. With its whimsical charm and intricate details, Posca Markers NFT captures the essence of love and companionship in a delightful and enchanting manner.
## The Artistry of Posca Markers: A Blend of Tradition and Innovation
Posca Markers NFT showcases the versatility and creativity of digital artistry, utilizing traditional mediums in a modern and innovative way. The use of posca markers as the primary medium imbues the artwork with a tactile and textured quality, adding depth and dimension to the composition. Through meticulous strokes and careful attention to detail, the artist brings the adorable teddy bear to life, evoking a sense of joy and delight in the viewer.
## Mastering Fine Art with Posca Markers: Techniques and Tips
For aspiring artists looking to master the art of posca markers, there are several techniques and tips to keep in mind. Experimentation is key, as posca markers can be used on a variety of surfaces, including paper, canvas, wood, and even digital platforms. By varying pressure, angle, and layering techniques, artists can achieve different effects and textures, adding visual interest and depth to their creations.
Additionally, mixing and blending colors can create unique and harmonious palettes, while incorporating fine details and highlights can enhance realism and depth. Practice and patience are essential elements of mastering any art form, and with dedication and perseverance, artists can unlock the full potential of posca markers to create stunning works of art that resonate with audiences on a profound level.
## Collecting Posca Markers NFT: A Celebration of Creativity and Inspiration
For collectors and enthusiasts alike, acquiring Posca Markers NFT represents more than just ownership of a digital asset – it is a celebration of creativity, inspiration, and the boundless possibilities of digital art. Each Posca Markers NFT tells a unique story and reflects the vision and talent of its creator, making it a cherished addition to any digital art collection.
## In Conclusion: Posca Markers NFT Redefining Digital Artistry
In conclusion, Posca Markers NFT exemplifies the transformative power of digital artistry and its ability to evoke emotions, spark imagination, and inspire creativity. Within Bermuda Unicorn's Valentine's Day Special Collection, Posca Markers NFT stands as a testament to the enduring allure and charm of traditional mediums in the digital age. As collectors continue to seek out unique and captivating pieces to add to their collections, Posca Markers NFT remains a timeless treasure that captures the essence of love, joy, and artistic expression.
### FAQs about Posca Markers NFT
**1. What are Posca Markers?**
Posca markers are water-based paint pens known for their vibrant colors, opaque coverage, and versatility on various surfaces such as paper, wood, metal, and plastic.
**2. What makes Posca Markers NFT unique?**
Posca Markers NFT combines traditional art mediums with digital technology, offering a modern interpretation of fine art. Each NFT captures the tactile qualities and charm of posca markers in a digital format.
**3. How can I acquire a Posca Markers NFT?**
To acquire a Posca Markers NFT, visit Bermuda Unicorn's NFT marketplace and explore the Valentine's Day Special Collection. From there, you can purchase the NFT using cryptocurrency.
**4. Can I display a Posca Markers NFT in my digital art collection?**
Yes, once you own a Posca Markers NFT, you can display it in your digital art collection using compatible platforms and virtual galleries. Showcase your collection to friends, fellow collectors, and art enthusiasts worldwide.
**5. Are Posca Markers NFTs limited edition?**
Yes, Posca Markers NFTs are part of the limited edition Valentine's Day Special Collection on Bermuda Unicorn. Each NFT is uniquely crafted by the artist and available in limited quantities, adding exclusivity and value to the collection.
*Author: jackjones9354*
---

# Enhancing Your Experience with WP-Events Manager: Insights and Suggestions
*Published 2024-02-29*

Dear Community,
I hope this message finds you well. As someone who passionately teaches Salsa Dance in a picturesque beach town in Mexico, I've embarked on a journey that bridges cultures and continents. My unique path—from a Canadian learning Latin dance in Korea, to sharing these vibrant rhythms with both tourists and locals in Mexico—highlights the universal language of dance. Interestingly, while my lessons cater mainly to tourists, it's fascinating to see the cultural exchange unfold, especially since the local dance flavor, Cumbia, differs from the "L.A. Style" Salsa I teach, known locally as "Linia" or "Casino Style".
This cultural tapestry led me to leverage WP-Events Manager for my website, aiming to create a hub for Latin Dance events. Investing in their comprehensive bundle was a decision made with the intent to offer the best to my audience. However, my experience has surfaced opportunities for enhancement within the platform that I believe are crucial for potential users to consider.
- **Optimization of Event Listings:** The current setup, where event listings significantly slow down website performance due to the lack of optimized thumbnail support, is a critical area for improvement. Implementing a feature for automatic thumbnail optimization, or allowing users to upload optimized images, could drastically enhance site speed and user experience.
- **Management of Expired Events:** The handling of expired events presents a challenge, especially when it affects the visibility of past events and their SEO value. A refined approach, such as introducing settings to manage the visibility of expired event details without cluttering current listings, could greatly benefit both organizers and attendees, preserving the SEO efforts and enabling historical event exploration.
As I continue to navigate these challenges, I remain hopeful for future updates that will address these concerns, enriching the WP-Events Manager platform for all its users. I look forward to sharing more insights and developments in the coming months and eagerly anticipate the enhancements that WP-Events Manager will introduce to improve our collective experience.
Warm regards

*Author: paulpreibisch*
---

# Fortifying Your WordPress Arsenal: Essential Strategies for Securing Custom Plugins
*Published 2024-02-29 · tags: wordpress, javascript, programming, devops*

WordPress is undeniably one of the most popular content management systems (CMS) globally, powering millions of websites across diverse industries. Its flexibility and ease of use make it a top choice for businesses and individuals alike. However, with great popularity comes great risk, particularly concerning security vulnerabilities. Custom WordPress plugins, while offering tailored functionality, can sometimes introduce security risks if not developed and managed properly. In this blog post, we'll delve into some essential tips and techniques to secure your custom WordPress plugins effectively.
## 1. Follow Best Practices in Plugin Development:
The foundation of a secure WordPress plugin lies in its development process. Adhering to best practices ensures that your plugin code is robust and less susceptible to vulnerabilities. Some key practices include:
- **Sanitize and Validate Input:** Always sanitize and validate user inputs to prevent SQL injection, cross-site scripting (XSS), and other common attacks.
- **Escape Output:** Escaping output data before rendering it in HTML prevents XSS attacks. WordPress provides functions like `esc_html()`, `esc_attr()`, and `esc_js()` for this purpose.
- **Use Nonces:** Implementing nonces (number used once) helps verify that the authenticated user intended to perform a specific action, preventing CSRF (Cross-Site Request Forgery) attacks.
- **Follow WordPress Coding Standards:** Adhering to WordPress' coding standards ensures consistency and readability while also minimizing the risk of introducing vulnerabilities.
## 2. Regularly Update and Maintain Your Plugins:
Keeping your plugins up to date is crucial for staying ahead of security threats. Developers frequently release updates to patch vulnerabilities and improve functionality. Make it a habit to check for updates regularly and apply them promptly. Additionally, if you're no longer actively maintaining a plugin, consider finding alternatives or discontinuing its use altogether to avoid potential security risks.
## 3. Limit Plugin Permissions:
Minimize the potential damage a compromised plugin can cause by limiting its permissions. Only grant the plugin access to resources and capabilities necessary for its functionality. Avoid granting excessive privileges that could be exploited by malicious actors.
## 4. Implement Role-Based Access Control:
WordPress provides a robust role-based access control system that allows you to assign specific capabilities to different user roles. Ensure that your plugin respects these roles and capabilities, granting permissions appropriately. This helps prevent unauthorized users from accessing sensitive functionality within your plugin.
## 5. Utilize Security Plugins and Tools:
Consider using reputable security plugins and tools to bolster your WordPress site's overall security. These tools often offer features such as malware scanning, firewall protection, and login attempt monitoring, helping you detect and mitigate security threats proactively.
## 6. Regular Security Audits and Penetration Testing:
Conduct regular security audits and penetration testing to identify and address vulnerabilities in your custom WordPress plugins. This proactive approach allows you to discover potential security weaknesses before they can be exploited by attackers.
## 7. Stay Informed About Security Best Practices:
The field of cybersecurity is constantly evolving, with new threats and vulnerabilities emerging regularly. Stay informed about the latest security best practices and trends by following reputable blogs, attending security conferences, and participating in relevant online communities. Continuously educating yourself and your team is essential for maintaining a secure WordPress environment.
## Conclusion
Securing your **[custom WordPress plugins](https://wpeople.net/service/custom-wordpress-plugin-development/)** requires a proactive and multi-faceted approach. By following best practices in plugin development, keeping your plugins updated, limiting permissions, implementing role-based access control, utilizing security plugins and tools, conducting regular security audits, and staying informed about security best practices, you can significantly reduce the risk of security breaches and protect your WordPress site and its users from harm. Remember, when it comes to security, vigilance is key.
| jamesmartindev |
1,776,004 | Cloud Computing 2024: Explore the Future Today! | Stay ahead of the curve in 2024 with our complete analysis of the latest trends,... | 0 | 2024-02-29T11:53:49 | https://dev.to/ecfdataus/cloud-computing-2024-explore-the-future-today-clh | devops, azure, azureai, cloudcomputing | Stay ahead of the curve in 2024 with our complete analysis of the latest trends, opportunities, and challenges in cloud computing! Our blog “[Cloud Computing 2024: Key Trends and Challenges](https://www.ecfdata.com/cloud-computing-key-trends-and-challenges/)” delves deeper into the evolution of cloud services and focuses on the insights essential to navigating the dynamic world of technology.
Discover key topics, including:
Combining artificial intelligence (AI) and machine learning (ML)
Interoperability and portability for seamless operation
Advantages of multi-cloud and distributed cloud strategies
Enhancing cloud security systems
Managing cloud costs and environmental responsibility
Ensuring cloud governance and compliance
Cloud platforms designed for specific industries
Adopting cloud-native principles for agile development
Improving the Data Environment with DataOps and Data Fabric
Harnessing the power of edge computing for real-time insights
Gain a competitive advantage:
Unlock the capability of cloud computing to drive innovation, performance, and business value. Understand the trends shaping the future of cloud technology and set yourself on the journey of digital transformation.
Contact ECF Data Solutions:
Conquer cloud computing challenges with confidence! Our expert services are tailored to a range of industries, offering cost-effective and dependable solutions to support your efforts. Contact us today to simplify your cloud migration journey and unlock new business opportunities.
[Get in Touch with Us](https://www.ecfdata.com/contact-us/)
| ecfdataus |
1,776,034 | Maximizing Business Potential with AWS Serverless | Imagine if building apps were as easy as snapping your fingers – that's the magic of serverless... | 0 | 2024-02-29T12:38:46 | https://dev.to/krunalbhimani/maximizing-business-potential-with-aws-serverless-1n30 | serverless, cloudcomputing, aws, lambda | Imagine if building apps were as easy as snapping your fingers – that's the magic of serverless computing. It takes all the hassle out of managing servers, so developers can focus solely on writing awesome code. One of the big players in this game is Amazon Web Services (AWS), offering a suite of tools that promise to make apps more scalable, cheaper to run, and simpler to create.
Businesses everywhere are catching on to the benefits of AWS serverless technologies. It's like giving your business a superhero boost – you can innovate faster, stay flexible, and save money all at once. In this exploration, we're diving deep into AWS serverless computing, debunking some common myths, and showcasing how it's transforming businesses worldwide. Let's uncover the magic of AWS serverless computing together!
For a more comprehensive understanding of how AWS serverless computing can revolutionize your business, check out our in-depth guide **[Enhancing Business with AWS Serverless Computing Solutions.](https://www.seaflux.tech/blogs/aws-serverless-computing-solutions-guide?utm_source=devto&utm_medium=social&utm_campaign=guest%20blog)**
## Dispelling Common Myths
In the world of AWS serverless computing, several misconceptions abound. Let's debunk these myths:
**Myth 1: Scalability Concerns**
Some fear that serverless platforms struggle with sudden spikes in traffic. However, AWS Lambda seamlessly scales resources based on demand, as evidenced by its successful handling of peak loads during events like holiday shopping seasons.
**Myth 2: Performance Issues**
There's a misconception that serverless computing introduces latency or bottlenecks. Yet, AWS continuously optimizes its infrastructure to minimize cold start times and enhance performance, supported by techniques like caching and asynchronous processing.
**Myth 3: Security Risks**
Concerns about security are common; however, AWS serverless offerings adhere to rigorous standards. Features like encryption and access controls ensure data safety, trusted by enterprises across industries handling sensitive workloads.
**Myth 4: Vendor Lock-In**
Some worry about being tied to a specific provider, but AWS emphasizes interoperability and provides tools like AWS SAM and CDK. This enables developers to build serverless applications using familiar languages and frameworks, reducing vendor lock-in risks.
**Myth 5: Cost-Effectiveness**
Contrary to belief, AWS serverless technologies offer transparent pricing and cost optimization features. With pay-per-use billing and automatic scaling, businesses control costs and achieve significant savings, as evidenced by case studies.
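The pay-per-use model mentioned above can be made concrete with a small cost sketch. The rates below are placeholder assumptions for illustration, not current AWS pricing; the structure of the formula (compute GB-seconds plus per-request charges) is what matters.

```python
# Illustrative sketch of Lambda-style pay-per-use billing.
# The rates below are placeholder assumptions, not current AWS pricing.
PRICE_PER_GB_SECOND = 0.0000166667  # assumed compute rate (USD)
PRICE_PER_MILLION_REQUESTS = 0.20   # assumed request rate (USD)

def estimate_monthly_cost(invocations: int, avg_duration_ms: float,
                          memory_mb: int) -> float:
    """Estimate monthly cost: compute charges (GB-seconds) plus request charges."""
    gb_seconds = invocations * (avg_duration_ms / 1000.0) * (memory_mb / 1024.0)
    compute_cost = gb_seconds * PRICE_PER_GB_SECOND
    request_cost = (invocations / 1_000_000) * PRICE_PER_MILLION_REQUESTS
    return round(compute_cost + request_cost, 2)

# Example: 5M invocations/month, 120 ms average duration, 256 MB of memory
cost = estimate_monthly_cost(5_000_000, 120, 256)
```

Because there is no idle-server charge, a workload with zero invocations costs nothing in this model, which is the core of the cost-effectiveness claim.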
## Exploring the Benefits of AWS Serverless Computing

Discover the often-overlooked advantages of AWS serverless computing:
- **Scalability:** AWS Lambda scales automatically based on demand, ensuring seamless performance during peak loads.
- **Cost-Effectiveness:** Pay-per-use pricing and no upfront costs make serverless computing budget-friendly for businesses of all sizes.
- **Rapid Deployment:** Simplified infrastructure management means faster deployment, accelerating time-to-market for applications.
- **Reduced Operational Overhead:** Offloading server management to AWS reduces operational burdens, allowing teams to focus on innovation.
- **Improved Developer Productivity:** Developers can iterate quickly and deploy changes with ease, enhancing productivity and driving innovation.
Through case studies, businesses have experienced tangible benefits, such as reduced infrastructure costs and accelerated development cycles. AWS serverless computing empowers businesses to thrive in today's digital landscape.
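The scalability benefit above rests on the fact that a Lambda function is just a plain handler that AWS invokes once per event and scales horizontally on demand. A minimal sketch follows; the `order_id` payload field is an assumed example, not a fixed AWS schema.

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda-style handler: processes one event per invocation.
    AWS runs as many concurrent copies of this function as demand requires."""
    order_id = event.get("order_id", "unknown")  # assumed example payload field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"processed order {order_id}"}),
    }

# Local invocation for illustration (in AWS, the platform supplies event/context)
response = lambda_handler({"order_id": "A-100"}, None)
```

Since the handler holds no server state, scaling out is simply running more copies, which is why pay-per-use billing maps directly onto invocations.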
## Key Considerations for AWS Serverless Adoption
When adopting AWS serverless computing, businesses must focus on the key factors:
- **Architectural Design:** Assess your application architecture to identify components suitable for serverless deployment, ensuring scalability and reliability.
- **Performance Optimization:** Optimize function execution time and resource utilization to enhance user experience, leveraging caching and AWS service selection.
- **Security Measures:** Implement robust security practices, including encryption and access controls, to protect applications and data in compliance with regulations.
- **Regulatory Compliance:** Ensure adherence to industry and geographic regulations, leveraging AWS compliance programs like HIPAA and SOC.
### Best Practices:
- Start small and gradually migrate workloads to serverless.
- Monitor performance metrics closely for optimization opportunities.
- Invest in team training and stay updated on AWS best practices.
- Utilize AWS tools like Well-Architected Framework for guidance.
With these considerations and best practices, businesses can successfully embrace AWS serverless computing for innovation and efficiency.
## Realizing the Potential of AWS Serverless Computing
AWS serverless computing fuels innovation and competitiveness in the digital era:
### Empowering Innovation
By simplifying infrastructure management, businesses can focus on developing innovative solutions to meet customer needs and outpace competitors.
### Across Industries
From e-commerce to healthcare, serverless architecture transforms industries, handling spikes in website traffic and streamlining patient data management.
### Transformative Agility
Serverless technologies enable rapid scaling, quick responses to market demands, and experimentation, fostering a culture of innovation and driving business growth.
AWS serverless computing empowers businesses to innovate, grow, and excel in today's dynamic digital landscape.
## End Note
In conclusion, dispelling myths and understanding AWS serverless computing's true potential is key. By addressing common misconceptions, businesses can harness its benefits for innovation and efficiency.
I encourage readers to explore adopting AWS serverless technologies. Embracing them unlocks new possibilities, driving growth and competitiveness. Take the next step towards realizing the true potential of AWS serverless computing for your projects today. | krunalbhimani |
1,776,185 | Exploring Pkl: Apple's Fresh Approach to Configuration Languages | In a digital epoch where the only constant is change, Apple introduces Pkl (pronounced "Pickle"), a new... | 0 | 2024-02-29T14:20:00 | https://configu.com/blog/exploring-pkl-apples-fresh-approach-to-configuration-languages/ | devops, programming, opensource, configuration | In a digital epoch where the only constant is change, Apple introduces Pkl (pronounced "Pickle"), a new entrant in the dynamic landscape of software development. With an eye towards addressing some of the longstanding issues in configuration management, Pkl aims to bring forward the concepts of programmability, scalability, and safety. But beyond the initial buzz, what does Pkl truly offer to the modern developer?
### Pkl at a Glance
Born from the need to transcend the limitations of static configuration files, Pkl stands as Apple’s innovative foray into programmable configuration management. It's not just another language; it's a paradigm shift towards **configuration-as-code (CaC)**.
{% embed https://dev.to/rannn505/configuration-as-code-automating-application-configuration-45k6 %}
### Key Features
- **Programmability:** Pkl introduces conditions, loops, and functions within configuration files, transforming them from static documents into dynamic scripts.
- **Scalability:** Tailored for projects of any size, Pkl's design ensures configurations remain manageable, regardless of the project's complexity.
- **Enhanced IDE Support:** With auto-complete, error highlighting, and inline documentation, Pkl is designed to make configuration management a more integrated and less error-prone part of the development process.
### A Simple Pkl Example
Let's dive into a simple yet illustrative example of Pkl in action. Imagine you're setting up the configuration for a web application. With Pkl, you can easily define your application's settings, including environment-specific variables and even incorporate logic to dynamically adjust settings based on the deployment context.
```pkl
// Define a basic web application configuration
class WebAppConfig {
  hostname: String
  port: Int
  environment: "development" | "staging" | "production"

  // Dynamically adjust debug mode based on the environment
  debugMode: Boolean = environment == "development"
}

// Application instance for development
devConfig: WebAppConfig = new {
  hostname = "localhost"
  port = 8080
  environment = "development"
}
```
This snippet demonstrates Pkl’s capability to elegantly tailor configurations to different environments, a testament to its programmable nature and practical utility.
### Community Reception and Comparisons
Since its unveiling, Pkl has stirred a mix of excitement and skepticism within the developer community. Platforms like [Hacker News](https://news.ycombinator.com/item?id=39232976) and [Reddit](https://www.reddit.com/r/programming/comments/1ahbzfl/introducing_pkl_a_programming_language_for/) have become arenas of debate, weighing Pkl’s potential against the backdrop of existing solutions. While some applaud Pkl for its innovative approach, others question the necessity of introducing yet another player into the configuration language game. This discourse highlights the diverse needs and preferences within the software development community, underscoring the importance of choice in tools and methodologies.
### Looking Forward
Pkl's debut is not just about a new tool; it's a conversation starter on the future of configuration management. Its adoption and the community's feedback will shape the role it plays in how we manage and deploy software in an increasingly complex world.
### Useful Resources
- [Pkl Official Documentation](https://pkl-lang.org/)
- [Pkl GitHub Repository](https://github.com/apple/pkl)
- [Pkl Code Examples](https://pkl-lang.org/main/current/examples.html)
- [Pkl Introduction Blog](https://pkl-lang.org/blog/introducing-pkl.html)

| rannn505 |
1,776,285 | Introduction to Cannabis Tincture Boxes | Introduction to Cannabis Tincture Boxes In the ever-expanding market of cannabis products, packaging... | 0 | 2024-02-29T16:35:40 | https://dev.to/bobbieschwartz/introduction-to-cannabis-tincture-boxes-m3o | coutom, boxes, wholesale, webdev | <p><strong>Introduction to Cannabis Tincture Boxes</strong></p>
<p>In the ever-expanding market of cannabis products, packaging plays a pivotal role in not only preserving the quality of the product but also in catching the consumer's eye. Among various packaging solutions, <a href="https://thepremierpackaging.com/cannabis-tincture-boxes/"><strong>Custom Cannabis Tincture Boxes</strong></a> hold a significant position due to their practicality and versatility.</p>
<p><strong>What are Cannabis Tincture Boxes?</strong></p>
<p>Cannabis tincture boxes are specially designed containers used for storing and transporting cannabis tinctures. These boxes come in various shapes, sizes, and materials, catering to the diverse needs of cannabis brands and consumers.</p>
<p><strong>Importance of Packaging for Cannabis Products</strong></p>
<p>Packaging is not merely a means of enclosing a product; it serves as a vital component of the branding and marketing strategy for cannabis products. Cannabis tincture boxes not only protect the delicate tincture bottles from damage but also communicate essential information to consumers and differentiate the product from competitors.</p>
<p><strong>Factors to Consider in Cannabis Tincture Box Design</strong></p>
<p>When designing cannabis tincture boxes, several factors must be taken into consideration to ensure both functionality and appeal.</p>
<p><em>Material:</em> The choice of material greatly influences the durability, sustainability, and aesthetics of the packaging.</p>
<p><em>Size and Shape:</em> The size and shape of the box should be optimized to accommodate the tincture bottle comfortably while maximizing shelf space.</p>
<p><em>Labeling and Information:</em> Clear and informative labeling is crucial for compliance with regulations and providing consumers with necessary details about the product.</p>
<p><strong>Eco-Friendly Packaging Solutions</strong></p>
<p>With increasing environmental concerns, there is a growing demand for eco-friendly packaging solutions in the cannabis industry. Manufacturers are exploring sustainable materials and practices to minimize the ecological footprint of cannabis tincture boxes.</p>
<p><strong>The Role of Packaging in Branding</strong></p>
<p>Packaging serves as a powerful tool for branding, helping cannabis companies establish a distinct identity and connect with consumers on a deeper level. Creative and well-designed tincture boxes can enhance brand recognition and loyalty.</p>
<p><strong>Compliance and Regulations</strong></p>
<p>The cannabis industry is heavily regulated, and packaging requirements vary depending on location. Cannabis tincture boxes must comply with local laws and regulations regarding child-resistant packaging, labeling, and THC content.</p>
<p><strong>Tips for Choosing the Right Cannabis Tincture Box Supplier</strong></p>
<p>Selecting a reliable and experienced supplier is essential for ensuring the quality and consistency of cannabis tincture boxes. Factors to consider include reputation, expertise, pricing, and customization options.</p>
<p><strong>Case Studies: Successful Cannabis Tincture Brands and their Packaging</strong></p>
<p>Examining case studies of successful cannabis tincture brands can provide valuable insights into effective packaging strategies and their impact on brand success and consumer perception.</p>
<p><strong>Innovations in Cannabis Packaging</strong></p>
<p>The cannabis industry is dynamic and constantly evolving, with continuous innovations in packaging technology and design. From child-resistant closures to interactive packaging, there is no shortage of creative solutions to enhance the functionality and appeal of cannabis tincture boxes.</p>
<p><strong>Future Trends in Cannabis Tincture Boxes</strong></p>
<p>Looking ahead, the future of <a href="https://thepremierpackaging.com/cannabis-tincture-boxes/"><strong>Cannabis Tincture Boxes Wholesale</strong></a> is characterized by advancements in sustainable materials, smart packaging technologies, and personalized branding experiences tailored to individual consumers.</p>
<p><strong>Benefits of Custom Cannabis Tincture Boxes</strong></p>
<p>Custom cannabis tincture boxes offer numerous benefits, including brand differentiation, enhanced product protection, and increased shelf appeal. By investing in custom packaging solutions, cannabis brands can stand out in a crowded market and attract discerning consumers.</p>
<p><strong>How to Store Cannabis Tincture Boxes Properly</strong></p>
<p>Proper storage is essential for maintaining the quality and integrity of cannabis tincture boxes. It is recommended to store them in a cool, dry place away from direct sunlight and moisture to prevent degradation of the packaging materials.</p>
<p><strong>The Impact of Packaging on Consumer Perception</strong></p>
<p>Packaging plays a significant role in shaping consumer perception and influencing purchasing decisions. Well-designed and visually appealing cannabis tincture boxes can convey professionalism, quality, and trustworthiness, fostering positive associations with the brand.</p>
<p><strong>Conclusion</strong></p>
<p>In conclusion, cannabis tincture boxes are indispensable components of the cannabis packaging landscape, serving a dual purpose of functionality and branding. As the industry continues to evolve, innovative packaging solutions will play a crucial role in driving consumer engagement and loyalty.</p> | bobbieschwartz |
1,776,610 | Building for sustainability: Dashboards | As the software industry increases its focus on environmental, social, and wider sustainability... | 0 | 2024-02-29T22:52:16 | https://newrelic.com/blog/best-practices/building-esg-sustainability-dashboards?utm_source=devto&utm_medium=community&utm_campaign=global-fy24-q4-esg-blog | devops, tutorial, productivity, monitoring | As the software industry increases its focus on environmental, social, and wider sustainability practices, there’s been a rise in new reporting regulations. Regulations like the EU’s Due Diligence proposal put the onus on large and medium companies to identify, prevent, or mitigate damaging environmental and human rights in their practices. The EU’s Taxonomy for Sustainable Activities identifies six environmental objectives for EU-based companies to focus on. Work is now underway to create new metrics and key performance indicators (KPIs) to monitor these actions and communicate them publicly for transparency.
## Build or buy?
To fully track your data across your entire supply chain, you'll need to build a dashboard that utilizes data visualization and your regulatory reporting data. With this information in a single interface, you can monitor your carbon footprint and perform benchmarking tests to meet your targets. To implement a dashboard into your tech stack, you have two options: build your own or buy one off the shelf.
Building a dashboard increases your initial workload and lengthens the time to deployment, but it also lets you customize data taxonomies and categories to meet your company’s exact regulatory needs. Here are four steps to start with:
1. Get access to [ESG data](https://datarade.ai/data-categories/esg-data) and supplier databases.
2. Invest in a monthly or yearly subscription to license the ESG data set. The license cost varies depending upon the number of companies included in the dataset and the length of time covered by the data.
3. Connect the databases to a central dashboard.
4. Categorize your suppliers’ data to fit into your reporting metrics.
Companies can also buy off-the-shelf solutions including pre-made ESG dashboards with taxonomy definitions and category matching for supplier data in place, with options to address different regulatory standards.
## Common features in ESG dashboards
Features you’ll find in most ESG dashboards include:
- Automated data collection and aggregation.
- Portfolio-level insights.
- Benchmarking against peers.
- Compliance with the [EU's Sustainable Finance Strategy](https://finance.ec.europa.eu/publications/strategy-financing-transition-sustainable-economy_en).
- Sustainability taxonomy and the Sustainable Finance Disclosure Regulation (SFDR).
- Sustainable Development Goals (SDG) setting and tracking.

Many ESG dashboards do not share data sources, but two datasets are regularly referenced: [REFINITIV](https://www.refinitiv.com/) and [CSRHub](https://www.csrhub.com/).
Many dashboard providers also have the following privacy tools and methods to make sure supply chain data remains commercial in confidence and to prevent attacks:
- 2-factor authentication.
- Dedicated client servers.
- Admin-controlled whitelisted IPs.
- Infrastructure and application vulnerability monitoring.
- Threat monitoring with a managed security service provider (MSSP).
- ISO 27001 compliance certification.
- Consumer data protection laws based on where dashboard providers are headquartered.
## 7 important metrics
As new regulations require companies to report on their environmental and social impact, ESG dashboards will become a common tool. To ensure that reporting meets regulatory compliance, data needs to be tracked. Monitoring metrics fall into three categories: integration ops, data ops, and machine learning ops.
## 1. CPU and memory usage
**What it is:** The amount of processing power used by your dashboard’s API server to run or consume APIs used in your dashboard.
**How to measure it:** CPU and memory usage can be measured by installing a monitoring agent on the server that hosts your API code.
## 2. API consumption
**What it is:** The volume of requests your APIs receive per second/minute.
**How to measure it:** Application monitoring tools like New Relic’s APM track API consumption. To lessen the strain on your dashboard APIs, consider combining multiple API calls into a single call with a flexible pagination scheme.
## 3. Response time
**What it is:** The time elapsed between when an API request is made and when it was completed. For example, the amount of time between selecting a list of SDGs in your ESG dashboard and that list loading.
**How to measure it:** Accurate response time can be difficult to measure since latency issues can stem from multiple locations (API endpoints or the network itself, for example).
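Even though the source of latency is hard to pin down, the client-observed total is straightforward to capture by wrapping the call in a timer. This sketch uses a stand-in function for the dashboard API; `fetch_sdg_list` and its return values are hypothetical.

```python
import time

def timed_call(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds) as seen by the caller.
    This captures total client-observed latency, not where it originated
    (endpoint vs. network)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed

def fetch_sdg_list():
    # Stand-in for a real dashboard API call (hypothetical data).
    return ["SDG 7: Affordable and Clean Energy", "SDG 13: Climate Action"]

sdgs, seconds = timed_call(fetch_sdg_list)
```

Logging `seconds` per request over time lets you separate slow endpoints from network-wide slowdowns by comparing percentiles across endpoints.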
## 4. Data volume
**What it is:** The amount of data you input into the ESG dashboard’s database. By keeping track of data volume, you ensure that all your inputted data has successfully integrated into the database.
**How to measure it:** Compare the number of new items sent to the database against the number of new items shown in the database—the two numbers should be equal.
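The equality check described above is simple enough to sketch directly; the supplier records here are invented sample data.

```python
def data_volume_matches(items_sent: int, items_in_db: int) -> bool:
    """Data-volume check: every new item sent to the database
    should appear in the database, so the two counts must be equal."""
    return items_sent == items_in_db

# Simulated integration run: 3 supplier records sent, 3 visible in the database
sent = ["supplier_a", "supplier_b", "supplier_c"]
stored = {"supplier_a", "supplier_b", "supplier_c"}
ok = data_volume_matches(len(sent), len(stored))
```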
## 5. Corrupted data
**What it is:** Data errors that occur during transfer or processing. A data error could result in an empty value in your database, which could make future calculations inaccurate if not fixed.
**How to measure it:** Your combined total of alerts that have been issued with empty or null responses.
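Counting empty or null values in incoming records is one way to feed that alert total; the emissions records below are invented sample data, and the `co2_tonnes` field name is an assumption.

```python
def count_empty_values(records, field):
    """Count records where `field` is missing, None, or empty --
    each one would trigger a corrupted-data alert."""
    return sum(1 for r in records if not r.get(field))

emissions = [
    {"supplier": "a", "co2_tonnes": 12.4},
    {"supplier": "b", "co2_tonnes": None},  # corrupted: null value
    {"supplier": "c"},                      # corrupted: field missing
]
alerts = count_empty_values(emissions, "co2_tonnes")
```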
## 6. Data invariance
**What it is:** The dashboard’s ability to recognize when data doesn’t adhere to a predetermined schema. For example, an inputted data set that doesn’t match up to the sustainability taxonomy it has been trained to recognize.
**How to measure it:** Your management system should issue an alert if data is incompatible with recognized schemas.
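A minimal schema check of that kind can be sketched as a type comparison per field; the taxonomy schema and field names here are hypothetical examples.

```python
def violates_schema(record: dict, schema: dict) -> list:
    """Return the fields whose values don't match the expected types,
    i.e. the fields that should trigger an invariance alert."""
    return [
        field for field, expected_type in schema.items()
        if not isinstance(record.get(field), expected_type)
    ]

# Hypothetical taxonomy schema for a supplier record
SCHEMA = {"supplier": str, "category": str, "co2_tonnes": float}

good = {"supplier": "a", "category": "logistics", "co2_tonnes": 12.4}
bad = {"supplier": "b", "category": "logistics", "co2_tonnes": "n/a"}

good_errors = violates_schema(good, SCHEMA)
bad_errors = violates_schema(bad, SCHEMA)
```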
## 7. Training and serving features
**What it is:** Tests that determine if the data output in your training environment (where the machine learns data models that form the basis of its future calculations) matches the output of your serving environment (where the calculations take place). For example, if you’ve trained your ESG dashboard to calculate weighted metrics where the sustainability reputation of a company in your supply chain affects another, such as a seller and buyer, those prediction values need to be calculated consistently and accurately.
**How to measure it:** Calculate distribution statistics, like minimum and maximum values, on the training and serving environments to ensure they match.
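The min/max comparison described above reduces to computing the same summary statistics in both environments and checking they agree; the score lists below are invented sample data.

```python
def distribution_stats(values):
    """Min/max summary of a feature's values in one environment."""
    return {"min": min(values), "max": max(values)}

def environments_match(training_values, serving_values) -> bool:
    """Training/serving check: the two environments should produce
    the same distribution statistics for the same feature."""
    return distribution_stats(training_values) == distribution_stats(serving_values)

training_scores = [0.2, 0.5, 0.9]
serving_scores = [0.2, 0.5, 0.9]
skewed_scores = [0.2, 0.5, 1.4]  # serving drifted beyond the training range

match = environments_match(training_scores, serving_scores)
skew = environments_match(training_scores, skewed_scores)
```

In practice you would extend the summary with means, quantiles, or null rates, but a mismatch in even these coarse statistics is a reliable early warning of skew.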
---
To read this full New Relic blog, [click here](https://newrelic.com/blog/best-practices/building-esg-sustainability-dashboards?utm_source=devto&utm_medium=community&utm_campaign=global-fy24-q4-esg-blog).
| frivolouis |
1,776,926 | Transforming Ideas into Action Collaborating with an IoT App Development Company 2024 | Introduction to IoT App Development The Internet of Things (IoT) refers to the ever-growing network... | 0 | 2024-03-01T08:31:16 | https://dev.to/dhwanil/transforming-ideas-into-action-collaborating-with-an-iot-app-development-company-2024-58pg | iotdevelopment, iotapps, iot, appdevelopment |

## Introduction to IoT App Development

The Internet of Things (IoT) refers to the ever-growing network of physical devices embedded with sensors, software, and other technologies that enable them to collect and exchange data. This interconnected world of "smart" devices has revolutionized various industries, creating a significant demand for an [IoT App Development Company](https://www.itpathsolutions.com/services/iot-app-development/) to design and build the software applications that power these connected systems.

The significance of IoT lies in its potential to revolutionize various aspects of our lives by:

- Improving efficiency and automation in various industries
- Enhancing data-driven decision making
- Creating new products and services
- Providing greater insights and control over various processes
## Overview of IoT Applications in Various Industries

IoT applications are rapidly emerging across various industries, including:

- **Manufacturing:** Real-time monitoring of production lines, predictive maintenance, and supply chain optimization.
- **Healthcare:** Remote patient monitoring, medication management, and personalized treatment plans.
- **Retail:** Inventory management, personalized customer experiences, and improved logistics.
- **Agriculture:** Precision farming, soil moisture monitoring, and crop yield optimization.
- **Smart Cities:** Traffic management, energy efficiency, and public safety improvements.
These are just a few examples, and the potential applications of IoT are constantly expanding.
## Importance of Custom IoT Applications for Businesses

Custom IoT applications offer several advantages for businesses, including:

- **Meeting specific needs:** They can be tailored to address the unique challenges and opportunities of a particular business.
- **Enhanced efficiency and productivity:** IoT applications can automate tasks, optimize processes, and improve data collection and analysis.
- **Gaining a competitive edge:** Businesses that leverage IoT can gain a significant advantage by offering innovative products and services that improve customer experience.
- **Improved decision making:** Real-time data insights from IoT applications can help businesses make better data-driven decisions.
## Choosing the Right IoT App Development Company

Selecting the right partner for your IoT application development project is crucial for its success. Here are some key factors to consider:

### 1. Expertise in IoT Technologies and Frameworks

- **Experience:** Look for companies with a proven track record of successful IoT projects.
- **Technical Skills:** Ensure the team possesses expertise in relevant technologies like cloud platforms (AWS, Microsoft Azure), communication protocols (MQTT, CoAP), and security protocols (SSL/TLS).
- **Domain Knowledge:** Consider the company's understanding of your specific industry and its unique challenges.

### 2. Evaluating Past Projects and Client Testimonials

- **Portfolio Review:** Analyze the company's portfolio to assess the complexity and diversity of their past projects.
- **Client References:** Request references from past clients to gain firsthand insights into the company's work ethic, communication style, and ability to deliver results.
- **Testimonials and Reviews:** Research online reviews and testimonials from previous clients to understand their experience with the company.
## Understanding Client Requirements

### 1. Importance of Clear Communication and Collaboration

- **Active Listening:** IoT App Development Companies should actively listen to the client's vision, challenges, and desired outcomes.
- **Open Communication:** Maintaining open and transparent communication throughout the project allows for timely feedback and adjustments.
- **Collaborative Environment:** Fostering a collaborative environment encourages both parties to share ideas, concerns, and expertise for a successful outcome.

### 2. Conducting Thorough Requirement Analysis

- **Understanding the Business Challenge:** Identify the specific pain point or opportunity the client wants to address with the IoT application.
- **Defining User Needs:** Understand the needs and expectations of the end users who will interact with the application.
- **Technical Feasibility Assessment:** Evaluate the technical feasibility of the desired features and functionalities based on available technologies and resources.

### 3. Defining Project Scope, Goals, and Deliverables

- **Project Scope:** Clearly define the boundaries and functionalities of the application to avoid scope creep and ensure efficient development.
- **Project Goals:** Establish measurable and achievable goals that align with the client's business objectives.
- **Deliverables:** Clearly outline the expected outcomes of the project, such as prototypes, reports, and final application versions.
## Designing User-Centric IoT Solutions

### 1. User Experience (UX) Design Principles for IoT Applications

- **Simplicity and Clarity:** Interfaces should be clear, concise, and easy to navigate, catering to users with varying levels of technical expertise.
- **Intuitive Interaction:** Interactions with the application, whether through physical devices or digital interfaces, should be natural and intuitive, requiring minimal learning.
- **Actionable Insights:** Information presented should be relevant and actionable, allowing users to make informed decisions based on real-time data.
- **Personalization:** Consider offering options for user customization to cater to individual preferences and needs.

### 2. Creating Intuitive Interfaces for Diverse User Demographics

- **Accessibility:** Ensure the application is accessible to users with disabilities, adhering to established accessibility guidelines.
- **Multiple Input Options:** Consider offering alternative interaction methods beyond traditional touchscreens, such as voice commands or physical buttons, to cater to different user preferences and abilities.
- **Multilingual Support:** If your target audience spans diverse regions, consider offering multilingual support to enhance inclusivity and user experience.

### 3. Incorporating Feedback Loops for Iterative Improvements

- **User Testing:** Conduct user testing throughout the development process to gather feedback and identify areas for improvement.
- **Feedback Mechanisms:** Integrate mechanisms within the application to allow users to easily submit feedback and suggestions.
- **Data-Driven Analysis:** Analyze user data to understand usage patterns and identify opportunities for continuous improvement.
Developing Secure IoT Applications
Security is paramount in the interconnected world of IoT. IoT App Development Companies must prioritize robust security measures to protect user data, device integrity, and overall system functionality.
Risk Assessment and Threat Modeling: Begin by identifying potential security threats and vulnerabilities in your IoT ecosystem. Conduct a thorough risk assessment and create a threat model to understand the attack vectors and potential impacts.
Secure Communication Protocols: Use secure communication protocols such as Transport Layer Security (TLS) or Datagram Transport Layer Security (DTLS) to encrypt data transmission between IoT devices and backend servers. Implement mutual authentication to verify the identity of both parties.
End-to-End Encryption: Encrypt data both at rest and in transit. Utilize strong encryption algorithms such as AES (Advanced Encryption Standard) to protect sensitive information from unauthorized access.
Device Authentication: Implement strong authentication mechanisms to ensure that only authorized devices can access the IoT network. Use techniques like certificate-based authentication or pre-shared keys (PSK) to authenticate devices securely.
Secure Firmware and Software Updates: Ensure that IoT devices receive timely firmware and software updates to patch known vulnerabilities. Implement secure update mechanisms to prevent unauthorized tampering or interception of update packages.
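The device-authentication point above can be made concrete. Below is a minimal, illustrative sketch of pre-shared-key (PSK) authentication as an HMAC challenge-response — the key value and function names are assumptions for demonstration, not a production protocol:

```python
import hmac
import hashlib
import secrets

def issue_challenge() -> bytes:
    """Server side: generate a random nonce for the device to sign."""
    return secrets.token_bytes(16)

def sign_challenge(psk: bytes, challenge: bytes) -> str:
    """Device side: prove possession of the PSK without ever sending it."""
    return hmac.new(psk, challenge, hashlib.sha256).hexdigest()

def verify_device(psk: bytes, challenge: bytes, response: str) -> bool:
    """Server side: recompute the HMAC and compare in constant time."""
    expected = hmac.new(psk, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

# Example handshake (PSK value is hypothetical)
psk = b"per-device-secret-provisioned-at-manufacture"
challenge = issue_challenge()
response = sign_challenge(psk, challenge)
print(verify_device(psk, challenge, response))            # True
print(verify_device(b"wrong-key", challenge, response))   # False
```

The same pattern underlies many PSK schemes: the secret never crosses the wire, only a proof of possession does.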
Leveraging IoT Hardware and Sensors
1. Integrating Sensors, Actuators, and IoT Devices into Applications:
Sensor Integration: Sensors collect real-time data from the environment, such as temperature, motion, or pressure. [IoT App Development Companies](https://www.itpathsolutions.com/services/iot-app-development/) must ensure proper integration of these sensors with the application to collect and interpret the data accurately.
Actuator Integration: Actuators are devices that can influence the physical world based on commands from the application. Integrating actuators allows for automation and control of various functionalities.
Device Management: Managing the communication and configuration of various devices within the IoT network is essential for maintaining system functionality and security.
2. Optimizing Hardware Selection for Cost-Efficiency and Performance:
Matching Hardware to Requirements: Selecting hardware that meets the specific needs of the application in terms of processing power, memory, and sensor capabilities is crucial.
Considering Power Consumption: Optimizing power consumption is essential for battery-powered devices and applications aiming for long-term operation.
Cost-Effectiveness: Finding a balance between functionality, performance, and cost is vital for businesses to achieve their desired outcomes within budget constraints.
Deployment and Maintenance Strategies
1. Planning for Seamless Deployment and Integration
Pre-Deployment Testing: Conduct thorough testing in a simulated environment to identify and address potential integration issues before actual deployment.
Phased Rollout: Consider a phased rollout approach, starting with a smaller group of devices and gradually expanding to minimize risk and ensure smooth integration.
Robust Configuration Management: Implement a robust configuration management system to ensure consistency and security across all deployed devices.
2. Implementing Over-the-Air (OTA) Updates for IoT Devices
OTA Update Capabilities: Develop and integrate mechanisms for over-the-air (OTA) updates, allowing for remote deployment of bug fixes, security patches, and new features without requiring physical intervention.
Secure Update Delivery: Utilize secure communication protocols and encryption methods to ensure the integrity and authenticity of OTA updates during transmission.
Version Control and Rollback Options: Maintain clear version control of updates and implement rollback options in case of unexpected issues to minimize disruption.
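To illustrate the integrity and rollback points above, here is a hedged sketch of the checks an updater might run before applying a package — the firmware bytes and version tuples are hypothetical, and a real system would verify a signed manifest rather than a bare hash:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def safe_to_apply(package: bytes, expected_sha256: str,
                  new_version: tuple, current_version: tuple) -> bool:
    """Apply an OTA package only if its hash matches the manifest
    and its version is strictly newer (refusing downgrades blocks
    rollback attacks that reintroduce patched vulnerabilities)."""
    if sha256_hex(package) != expected_sha256:
        return False  # corrupted or tampered package
    if new_version <= current_version:
        return False  # refuse downgrades
    return True

firmware = b"...new firmware image bytes..."
manifest_hash = sha256_hex(firmware)  # normally taken from a signed manifest
print(safe_to_apply(firmware, manifest_hash, (1, 3, 0), (1, 2, 9)))           # True
print(safe_to_apply(firmware + b"\x00", manifest_hash, (1, 3, 0), (1, 2, 9)))  # False
```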
3. Providing Ongoing Support, Maintenance, and Troubleshooting Services
Monitoring and Proactive Maintenance: Continuously monitor device performance and system health to identify potential issues early on and implement preventive maintenance strategies.
Remote Troubleshooting: Develop remote troubleshooting tools and procedures to diagnose and resolve issues remotely, minimizing downtime and cost associated with on-site visits.
User Support and Training: Offer adequate user support and training to ensure users understand how to operate and troubleshoot the IoT application effectively.
Conclusion
In a rapidly evolving IoT landscape, the expertise of IoT App Development Companies becomes increasingly indispensable. Their role extends beyond mere development to encompass understanding client needs, ensuring security, seamless integration of hardware, and ongoing support. By embracing cutting-edge practices like edge computing and DevOps integration, these companies empower businesses to fully harness the transformative potential of IoT, driving innovation across diverse industries. As IoT continues to redefine how we interact with the world, the guidance and solutions provided by these companies will be instrumental in navigating its complexities and unlocking its immense possibilities. | dhwanil |
1,777,056 | Kafka vs. RabbitMQ: Choosing the Right Messaging Broker | A comparison of Kafka and RabbitMQ message broker architecture, performance, and use cases | 0 | 2024-03-01T09:48:45 | https://dev.to/pubnub-ko/kafkawa-rabbitmq-jeoghabhan-mesijing-beurokeo-seontaeghagi-16bb | In the dynamic world of [event-driven architectures](https://www.pubnub.com/solutions/edge-message-bus/), choosing the right messaging broker is critical for efficient and scalable communication. Two of the most popular contenders are Kafka and RabbitMQ, each with its own strengths and weaknesses. Although they serve similar purposes, they differ in architecture, performance characteristics, and use cases. In this blog post, we'll take a closer look at the architectural differences and performance comparisons, and explore some common use cases for Kafka and RabbitMQ to help you navigate the decision-making process.
Architecture
----
### Kafka
Apache Kafka is an open-source distributed event streaming platform known for its high throughput, fault tolerance, and real-time data processing capabilities. Kafka follows a pub-sub model in which producers write messages to topics and consumers subscribe to those topics to receive the messages. Kafka stores messages in a distributed commit log, providing high scalability and fault tolerance. This enables high throughput and message replay, making it ideal for real-time data processing and event sourcing.
Kafka's architecture consists of three main components: producers, brokers, and consumers. Producers publish messages to Kafka topics, and brokers are responsible for storing and replicating the data across the Kafka cluster. Consumers read data from one or more topics, enabling parallel processing and scalability.
### RabbitMQ
RabbitMQ is a flexible, open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). It follows a traditional message queue model (the RabbitMQ queue), allowing applications to communicate asynchronously by sending and receiving messages and delivering them in order to specific consumers. This guarantees reliable message ordering and flexibility in message routing, making it well suited for task processing and microservice communication.
RabbitMQ's architecture centers on a central message broker that acts as an intermediary between producers and consumers. For message replication and retention, producers send messages to exchanges, and the exchanges route the messages to queues based on predefined rules. Consumers then retrieve the messages from the queues and process them.
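The architectural contrast above — Kafka's replayable commit log versus RabbitMQ's consume-once queues — can be sketched in a few lines of plain Python. This is an illustrative model only, not real client code:

```python
from collections import deque

class KafkaLikeLog:
    """Append-only log: each consumer tracks its own offset, so the same
    message can be read by many consumers and replayed at will."""
    def __init__(self):
        self.log = []
        self.offsets = {}
    def produce(self, msg):
        self.log.append(msg)
    def consume(self, consumer_id):
        off = self.offsets.get(consumer_id, 0)
        if off >= len(self.log):
            return None
        self.offsets[consumer_id] = off + 1
        return self.log[off]

class RabbitLikeQueue:
    """Classic queue: each message is delivered to exactly one consumer
    and removed once taken off the queue."""
    def __init__(self):
        self.queue = deque()
    def produce(self, msg):
        self.queue.append(msg)
    def consume(self):
        return self.queue.popleft() if self.queue else None

log = KafkaLikeLog()
log.produce("event-1")
print(log.consume("analytics"), log.consume("billing"))  # both consumers read event-1

q = RabbitLikeQueue()
q.produce("task-1")
print(q.consume(), q.consume())  # task-1, then None — consumed exactly once
```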
Performance
--
In terms of performance, Kafka and RabbitMQ offer similar capabilities but have different strengths.
### Kafka
Excels in high-throughput, real-time data streaming scenarios, boasting excellent scalability and low latency. It can handle millions of messages per second, making it suitable for use cases that require fast, continuous data processing. Its architecture scales horizontally by distributing the workload across multiple brokers, handling large volumes of data efficiently. It also provides strong durability guarantees by persisting messages to disk, ensuring fault tolerance and data durability.
### RabbitMQ
Provides reliable message delivery through features such as acknowledgments and message persistence. It can handle thousands of messages per second, making it suitable for use cases with moderate throughput requirements. Its centralized architecture can introduce some performance overhead, but it offers robustness and message integrity. While it scales vertically, its horizontal scaling capabilities are limited compared to Kafka.
Use Cases
-----
### Kafka
Ideal for a wide variety of use cases
- Real-time analytics and streaming applications
- Event sourcing, ingestion, and log aggregation, especially for big data.
- Data pipelines and microservice communication with high-volume message processing
- Applications that require high scalability and fault tolerance
### RabbitMQ
Well suited for
- Task processing, service integration, workflow orchestration, and workflow management including metrics and notifications.
- Asynchronous communication between microservices
- Enterprise messaging systems with reliable message delivery, including message priority and specific complex routing requirements.
- RabbitMQ's flexibility in supporting messaging patterns such as point-to-point, publish-subscribe, and request-response makes it useful in a variety of application scenarios.
Making the Choice
----
Ultimately, the optimal choice depends on your specific requirements:
- Do you prioritize high throughput and real-time data processing? Use Kafka.
- Do you need reliable message delivery and flexible routing for moderate workloads? Use RabbitMQ.
- Are you considering message replay and log aggregation? Kafka emerges as the strong candidate.
- Looking for seamless scaling for high-volume microservice communication? Kafka supports it.
Remember: neither is inherently "better." Analyzing your specific requirements and considering factors such as redundancy, scalability, high performance, high availability, large-scale APIs, and security is essential to making an informed decision.
Additional Considerations
--------
- Complexity: Kafka's distributed architecture and append-only log may require more operational expertise than RabbitMQ's simpler queue-based approach.
- Community and support: Both platforms enjoy sizable communities and active development.
- Integration: Evaluate the available integrations with your existing infrastructure and tools.
Does PubNub integrate with Kafka and RabbitMQ?
--------------------------------
PubNub offers the [Kafka Bridge](https://www.pubnub.com/developers/kafka/), which lets you connect your Kafka stream to PubNub so you can send Kafka events to PubNub and extract PubNub events into your Kafka instance.
PubNub also supports several server and client libraries, including the Python and Java programming languages as well as Node / Node.js.
Conclusion
--
With a clear understanding of the architectural differences, performance benchmarks, and ideal use cases, you can confidently choose between Kafka and RabbitMQ. Now dive deep into your project's specific requirements and start your journey toward a robust and efficient [event-driven architecture](https://www.pubnub.com/solutions/edge-message-bus/)!
How can PubNub help you?
========================
This article was originally published on [PubNub.com](https://www.pubnub.com/blog/kafka-vs-rabbitmq-choosing-the-right-messaging-broker/).
Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices.
The foundation of our platform is the industry's largest and most scalable real-time edge messaging network. With more than 15 points of presence worldwide supporting 800 million monthly active users and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or latency issues caused by traffic spikes.
Experience PubNub
-----------
Check out the [Live Tour](https://www.pubnub.com/tour/introduction/) to understand the essential concepts behind every PubNub-powered app in under 5 minutes.
Set Up
----
Sign up for a [PubNub account](https://admin.pubnub.com/signup/) for immediate free access to PubNub keys.
Get Started
----
The [PubNub docs](https://www.pubnub.com/docs) will get you up and running quickly, regardless of your use case or [SDK](https://www.pubnub.com/docs). | pubnubdevrel | |
1,777,060 | Kafka vs. RabbitMQ: Choosing the Right Messaging Broker | A comparison of Kafka and RabbitMQ message broker architecture, performance, and use cases | 0 | 2024-03-01T09:53:46 | https://dev.to/pubnub-de/kafka-vs-rabbitmq-die-wahl-des-richtigen-messaging-brokers-164a | In the dynamic world of [event-driven architectures](https://www.pubnub.com/solutions/edge-message-bus/), choosing the right messaging broker is crucial for efficient and scalable communication. Two of the most popular contenders are Kafka and RabbitMQ, each with its own strengths and weaknesses. Although they serve a similar purpose, they have different architectures, performance characteristics, and use cases. In this blog post, we'll take a closer look at the architectural differences and performance comparisons, and examine some common use cases for Kafka and RabbitMQ to help you with your decision-making.
Architecture
-----------
### Kafka
Apache Kafka is an open-source distributed event streaming platform known for its high throughput, fault tolerance, and real-time data processing capabilities. Kafka follows a pub-sub model in which producers write messages to topics and consumers subscribe to those topics to receive the messages. Kafka stores messages in a distributed commit log, which enables high scalability and fault tolerance. This allows high throughput and message replay, making it ideal for real-time data processing and event sourcing.
Kafka's architecture consists of three main components: producers, brokers, and consumers. Producers publish messages to Kafka topics, and brokers are responsible for storing and replicating the data across the Kafka cluster. Consumers read data from one or more topics, enabling parallel processing and scalability.
### RabbitMQ
RabbitMQ is a flexible, open-source message broker that implements the Advanced Message Queuing Protocol (AMQP). It follows a traditional queue model (the RabbitMQ queue), which allows applications to communicate asynchronously by sending and receiving messages and forwarding them in order to specific consumers. This ensures reliable message ordering and flexibility in message routing, making it suitable for task processing and microservice communication.
At the center of RabbitMQ's architecture is a central message broker that acts as an intermediary between producers and consumers. For message replication and retention, producers send messages to exchanges, and these exchanges route the messages to queues based on predefined rules. Consumers then retrieve the messages from the queues and process them.
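The exchange-to-queue routing described above can be modeled in a few lines of Python. This is an illustrative sketch of a direct exchange's behavior, not the RabbitMQ client API — the routing keys and queues are invented for the example:

```python
class DirectExchange:
    """Minimal model of a RabbitMQ direct exchange: a message is routed
    to every queue whose binding key equals the message's routing key."""
    def __init__(self):
        self.bindings = {}  # routing key -> list of bound queues
    def bind(self, routing_key, queue):
        self.bindings.setdefault(routing_key, []).append(queue)
    def publish(self, routing_key, message):
        for queue in self.bindings.get(routing_key, []):
            queue.append(message)

orders, audit = [], []
exchange = DirectExchange()
exchange.bind("order.created", orders)
exchange.bind("order.created", audit)
exchange.bind("order.failed", audit)

exchange.publish("order.created", {"id": 1})
exchange.publish("order.failed", {"id": 2})
print(orders)  # [{'id': 1}]
print(audit)   # [{'id': 1}, {'id': 2}]
```

Topic and fanout exchanges generalize this idea with pattern matching and broadcast, respectively.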
Performance
--------
In terms of performance, Kafka and RabbitMQ have similar capabilities but different strengths.
### Kafka
Excels in high-throughput, real-time data streaming scenarios and is characterized by excellent scalability and low latency. It can process millions of messages per second, making it suitable for use cases that require fast, continuous data processing. Its architecture enables horizontal scaling by distributing the workload across multiple brokers that handle large volumes of data efficiently. It also offers strong durability guarantees by persisting messages to disk, ensuring fault tolerance and data durability.
### RabbitMQ
Offers reliable message delivery through features such as acknowledgments and message persistence. It can process thousands of messages per second, making it suitable for use cases with moderate throughput requirements. Its centralized architecture can introduce some performance overhead, but it provides robustness and message integrity. While it scales vertically, its horizontal scaling capabilities are limited compared to Kafka.
Use Cases
---------------
### Kafka
Ideal for a wide variety of use cases
- Real-time analytics and streaming applications
- Event sourcing, ingestion, and log aggregation, especially for big data.
- Data pipelines and microservice communication with high-volume message processing
- Applications that require high scalability and fault tolerance
### RabbitMQ
Well suited for
- Task processing, service integration, workflow orchestration, and workflow management including metrics and notifications.
- Asynchronous communication between microservices
- Enterprise messaging systems with reliable message delivery, including message priority and specific complex routing requirements.
- RabbitMQ's flexibility in supporting messaging patterns such as point-to-point, publish-subscribe, and request-response makes it useful in a variety of application scenarios.
Making the Choice
----------------
Ultimately, the optimal choice depends on your specific requirements:
- Do you value high throughput and real-time data processing? Use Kafka.
- Do you need reliable message delivery and flexible routing for moderate workloads? Use RabbitMQ.
- Are you considering message replay and log aggregation? Kafka is the strongest candidate.
- Looking for seamless scaling for high-volume microservice communication? Kafka supports this.
Remember: neither solution is inherently "better." Analyzing your specific requirements and considering factors such as redundancy, scalability, high performance, high availability, large-scale APIs, and security is essential for an informed decision.
Additional Considerations
------------------------
- Complexity: Kafka's distributed architecture and append-only log may require more operational expertise than RabbitMQ's simpler, queue-based approach.
- Community and support: Both platforms enjoy large communities and active development.
- Integration: Evaluate the available integrations with your existing infrastructure and tools.
Does PubNub integrate with Kafka and RabbitMQ?
-----------------------------------------------------
PubNub offers the [Kafka Bridge](https://www.pubnub.com/developers/kafka/), which lets you connect your Kafka stream to PubNub so you can send Kafka events to PubNub and extract PubNub events into your Kafka instance.
PubNub also supports several server and client libraries, including the Python and Java programming languages as well as Node / Node.js.
Conclusion
-----
With a clear understanding of the architectural differences, performance benchmarks, and ideal use cases, you can confidently choose between Kafka and RabbitMQ. So dive deep into your project's specific requirements and set out toward a robust and efficient [event-driven architecture](https://www.pubnub.com/solutions/edge-message-bus/)!
How can PubNub help you?
=============================
This article was originally published on [PubNub.com](https://www.pubnub.com/blog/kafka-vs-rabbitmq-choosing-the-right-messaging-broker/).
Our platform helps developers build, deliver, and manage real-time interactivity for web applications, mobile applications, and IoT devices.
The foundation of our platform is the industry's largest and most scalable real-time edge messaging network. With more than 15 points of presence worldwide supporting 800 million monthly active users and 99.999% reliability, you don't have to worry about outages, concurrency limits, or latency issues caused by traffic spikes.
Experience PubNub
--------------
Check out the [Live Tour](https://www.pubnub.com/tour/introduction/) to understand the essential concepts behind every PubNub-powered app in under 5 minutes
Set Up
----------
Sign up for a [PubNub account](https://admin.pubnub.com/signup/) and get immediate free access to PubNub keys
Get Started
------------
The [PubNub docs](https://www.pubnub.com/docs) will get you up and running quickly, regardless of your use case or [SDK](https://www.pubnub.com/docs) | pubnubdevrel | |
1,777,155 | Unlock Your Potential with a Podcast Clip Maker | In today's digital landscape, podcasts have become a powerful medium for sharing ideas, stories, and... | 0 | 2024-03-01T12:31:41 | https://dev.to/devtripath94447/unlock-your-potential-with-a-podcast-clip-maker-27d3 | productivity, discuss, ai, architecture | In today's digital landscape, podcasts have become a powerful medium for sharing ideas, stories, and expertise. As a content creator, tapping into this platform can significantly expand your reach and impact. However, with the proliferation of podcasts, standing out from the crowd can be a challenge. This is where a [podcast clip maker](https://recast.studio/tools/podcast-clip-maker) can be your secret weapon.
### What is a Podcast Clip Maker?
A podcast clip maker is a tool or software designed to help you create engaging and shareable clips from your podcast episodes. These clips are short snippets that highlight the most compelling moments of your content, making it easier for you to capture the attention of your audience on social media platforms and beyond.
### Why You Need a Podcast Clip Maker
1. **Boost Your Visibility:** With attention spans getting shorter, capturing your audience's interest within seconds is crucial. Podcast clip makers allow you to condense your message into bite-sized snippets that are more likely to grab attention as users scroll through their feeds.
2. **Increase Engagement:** Sharing clips from your podcast can spark curiosity and encourage listeners to tune in to the full episode. By providing a preview of the valuable insights or entertaining content you offer, you can entice your audience to engage with your brand on a deeper level.
3. **Maximize Social Sharing:** Podcast clip makers make it easy to create visually appealing clips that are optimized for sharing on social media platforms like Instagram, Twitter, and TikTok. By leveraging these platforms, you can reach new audiences and amplify the impact of your podcast.
4. **Drive Traffic to Your Podcast:** Each clip you share acts as a teaser for your podcast episodes, driving traffic back to your main content hub. By strategically sharing clips that highlight the most compelling aspects of your podcast, you can attract more listeners and grow your audience over time.
### How to Use a Podcast Clip Maker Effectively
1. **Identify Key Moments:** Listen to your podcast episodes with a critical ear and identify the most engaging or informative segments. These could be powerful quotes, interesting anecdotes, or thought-provoking insights that resonate with your target audience.
2. **Keep it Concise:** Aim for clips that are between 30 seconds and 2 minutes long. Remember, the goal is to capture attention quickly and leave viewers wanting more.
3. **Add Visual Appeal:** Enhance your clips with visually engaging elements such as captions, graphics, and animations. This will make your content more eye-catching and shareable across different platforms.
4. **Optimize for Each Platform:** Tailor your clips to fit the specifications and best practices of each social media platform. For example, vertical videos perform better on Instagram Stories, while square videos are ideal for Facebook and Twitter.
5. **Include a Call to Action:** Don't forget to include a clear call to action at the end of your clips, directing viewers to listen to the full episode of your podcast or visit your website for more information.
### Conclusion
In today's fast-paced digital world, capturing and holding your audience's attention is more challenging than ever. However, with the right tools and strategies, you can cut through the noise and make a lasting impression. By incorporating a podcast clip maker into your content creation arsenal, you can unlock new opportunities to expand your reach, increase engagement, and ultimately grow your podcast audience. | devtripath94447 |
1,777,331 | Day 11: Introduction to React Hooks | Introduction Welcome to Day 11 of our 30-day blog series on React.js! Today, we'll explore React... | 26,617 | 2024-03-01T14:29:59 | https://dev.to/pdhavalm/day-11-introduction-to-react-hooks-211m | react, reactjsdevelopment, reactnative, javascript | **Introduction**
Welcome to Day 11 of our 30-day blog series on React.js! Today, we'll explore React Hooks, a powerful feature introduced in React 16.8 for adding state and other React features to functional components. Hooks provide a more concise and flexible way to write components compared to class components.
**What are React Hooks?**
React Hooks are functions that enable functional components to use state, lifecycle methods, context, and other React features without writing a class. Hooks allow you to reuse stateful logic across multiple components, making code more modular and easier to maintain.
**useState Hook**
The `useState` Hook allows functional components to add state to their logic. It returns a stateful value and a function to update that value, similar to `this.state` and `this.setState` in class components.
```jsx
import React, { useState } from 'react';
function Counter() {
const [count, setCount] = useState(0);
return (
<div>
<p>Count: {count}</p>
<button onClick={() => setCount(count + 1)}>Increment</button>
</div>
);
}
```
In the above example:
- We import the `useState` Hook from the React package.
- We use array destructuring to create a state variable `count` and a function `setCount` to update the count state.
- The initial value of count is provided as an argument to `useState`.
**useEffect Hook**
The `useEffect` Hook enables functional components to perform side effects, such as fetching data, subscribing to events, or updating the DOM, similar to lifecycle methods in class components.
```jsx
import React, { useState, useEffect } from 'react';
function Timer() {
const [seconds, setSeconds] = useState(0);
useEffect(() => {
const timerID = setInterval(() => {
setSeconds(prevSeconds => prevSeconds + 1);
}, 1000);
return () => clearInterval(timerID);
}, []); // Empty dependency array ensures the effect runs only once on component mount
return <div>Seconds: {seconds}</div>;
}
```
In the above example:
- We use the `useEffect` Hook to start a timer when the component mounts and clean it up when the component unmounts.
- The empty dependency array `[]` ensures the effect runs only once when the component mounts.
React Hooks revolutionize the way we write React components by allowing functional components to use state and other React features without classes. The `useState` and `useEffect` Hooks are just the beginning, as React provides many more Hooks for various use cases.
Stay tuned for tomorrow's post, where we'll explore more advanced concepts and Hooks in React, including useContext, useMemo, and useCallback. | pdhavalm |
1,777,345 | webMethods.io Integration processing excel file | Introduction This article explains how the excel input file can be processed using out of... | 0 | 2024-03-01T14:56:16 | https://tech.forums.softwareag.com/t/webmethods-io-integration-processing-excel-file/291726 | webmethods, integration, excel, connectors | ---
title: webMethods.io Integration processing excel file
published: true
date: 2024-02-19 11:07:55 UTC
tags: webMethods, integration, excel, connectors
canonical_url: https://tech.forums.softwareag.com/t/webmethods-io-integration-processing-excel-file/291726
---
## Introduction
This article explains how the excel input file can be processed using out of the box connectors.
## Audience
It is assumed that readers of this article know how to create integrations on [Webmethods.io](http://Webmethods.io).
## Use Case
1. Pick the file from one drive location.
2. Extract the data from the xlsx file.
3. Transform the data into the desired format.
4. Write the data into some third-party storage location.
**Assets Developed**
- **Workflow:** This workflow is responsible for initiating the process on a scheduled basis. It will read xlsx file from one drive location.
- **Flow Service:** This service is responsible for getting the data from the workflow and transforming the data into the desired format.
**Prerequisite**
- Active [webmethods.io](http://webmethods.io) Integration account
- One drive account
- Input file in xlsx format
Workflow
• Create the workflow and name it “ExcelProcessingWorkFlow”.
• Select the excel online connector from the connector list.
• Configure the connector Excel online
Configure the Excel online connector
• Select Action Get Rows
• Name: Get Packaging list details
• Authorized Excel Online: Create Account
[](https://global.discourse-cdn.com/techcommunity/original/3X/3/9/39b4cfed239073fc1dca4eef4c7940447b30579f.png "image")
• Once the connector is configured, we need to configure the location from where the Excel data needs to be picked up
• In the connector it can read the data only from one drive location.
• Select the folder name. it will show you all the folders available in one drive
• Select the Excel file. This will show all the excel files are in xlsx format. Any excel file in older format like xls format will not be visible in the drop-down menu
• Select the Sheet name. It will show all the sheets present in the excel sheet.
• From Row and to Row are optional fields. If you have the requirement to select some particular rows and columns then we need to pass the values.
• In our use case We want to load all the data from the selected sheet, Therefore we have selected nothing in the rows and columns.
[](https://global.discourse-cdn.com/techcommunity/original/3X/9/e/9e8f7d4e2008c71ca82e2f17c13a4f287d893233.png "image")
• Once we extract the data from the Excel sheet, we need to parse the data so that it can be used for further processing.
• Use the JSON Stringify application to stringify the data coming out of the Excel Online connector.
• After getting the data we will submit the data to the flow service for implementing complex transformation logic.
[](https://global.discourse-cdn.com/techcommunity/original/3X/3/6/360f99aa1a5876ddf93eb3d0589457b115c57aeb.png "image")
**Flow Service**
- This flow service will get invoked from the workflow.
- In this flow service, we are logging the data on the monitor tab.
- In our use case for POC purposes we are mapping the data and creating the output document.
**End-to-End Testing**
- Place the file at the configured one drive location.
- Invoke the workflow to pick the file and start processing.
- Workflow picks the file and extracts the data from excel sheet.
[](https://global.discourse-cdn.com/techcommunity/original/3X/8/4/8436235bb0cbf67c2047756f6111a9b88b57e7ce.png "image")
• Logs for getting the excel data from one drive location
[](https://global.discourse-cdn.com/techcommunity/original/3X/2/3/237aa845078c600d6f2e1926edd4e8b0202c1226.png "image")
[](https://global.discourse-cdn.com/techcommunity/original/3X/4/6/463bc4ef1982978de6abc8ab99573df7003c890e.png "image")
• Below flow service gets invoked from workflow
[](https://global.discourse-cdn.com/techcommunity/original/3X/5/d/5db06c27b69c6dd5cdd136824505c5f8f951dcea.png "image")
[](https://global.discourse-cdn.com/techcommunity/original/3X/f/1/f1ab1fe15e6d63d638ac7e674afc1ace1d957575.png "image")
### Points to remember
- Excel online connector supports the latest version of xlsx processing.
- Every connector has a limit on the payload size. Please refer to the documentation if your use case demands the high payload size processing like in MB’s.
- Attaching the sample workflow and flow service used in our case.
Flow Service:
[TransformExcelData.zip](https://tech.forums.softwareag.com/uploads/short-url/lBg4aor88SQiYfEZk6iF2KqW2YB.zip) (12.7 KB)
Workflow:
[export-fl4df08198b87e1c5bba8168-1708338576316.zip](https://tech.forums.softwareag.com/uploads/short-url/b4xUnxIeIYxWVTeK8GW5FVUfwAL.zip) (137.0 KB)
[Read full topic](https://tech.forums.softwareag.com/t/webmethods-io-integration-processing-excel-file/291726) | techcomm_sag |
1,777,393 | Creating S3 Buckets using CloudFormation via AWS CLI | Whether you're setting up a new stack using CloudFormation or trying to decipher an existing stack,... | 0 | 2024-03-01T23:21:21 | https://dev.to/gritcoding/creating-s3-buckets-using-cloudformation-via-aws-cli-1c1b | cloudformation, s3 | Whether you're setting up a new stack using CloudFormation or trying to decipher an existing stack, understanding CloudFormation syntax is crucial for building, updating, and maintaining AWS resources effectively. In this tutorial, we'll learn by doing. We're going to create two S3 buckets using CloudFormation through the AWS-CLI.
## Prerequisites
Before diving into this tutorial, ensure you have the following prerequisites in place:
- AWS CLI installed on your machine.
- AWS credentials configured with the necessary permissions.
## Understanding the Process Flow

The flow chart above outlines our process:
1. We start by creating an S3 bucket to store our CloudFormation templates.
2. Next, we create an aggregated CloudFormation template.
3. Finally, we deploy a stack with two nested stacks, each creating an S3 bucket.
---
### Step 1: Create an S3 Bucket and Prepare the Template
First, we'll create an S3 bucket that will store our CloudFormation templates. Execute the following command, replacing <your-bucket-name> and <your-region> with your unique bucket name and desired AWS region, respectively. Remember, S3 bucket names must be globally unique.
```bash
aws s3 mb s3://<your-bucket-name> --region <your-region>
```


After creating the S3 bucket, set up your project directory with `main.json` and `S3Template.json` files.

### S3Template.json (Nested Stack Template)
```json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "S3 bucket template", // Description of what this template does
"Parameters": {
"Environment": { "Type": "String" }, // Parameter for environment (e.g., dev, prod)
"ProjectName": { "Type": "String" }, // Parameter for project name
"Application": { "Type": "String" }, // Parameter for application name
"ExpirationInDays": { "Type": "Number" } // Parameter for setting lifecycle expiration in days
},
"Conditions": {
"LifeCycleCondition": { // Condition to check if lifecycle rule is needed
"Fn::Not": [
{ "Fn::Equals": [{ "Ref": "ExpirationInDays" }, 0] }
]
}
},
"Resources": {
"CFS3Bucket": { // Defines an S3 bucket resource
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": { // Bucket name created from a combination of parameters
"Fn::Join": ["-", [{ "Ref": "ProjectName" }, { "Ref": "Environment" }, { "Ref": "Application" }]]
},
"LifecycleConfiguration": { // Sets lifecycle configuration based on the condition
"Fn::If": [
"LifeCycleCondition",
{ "Rules": [{ "ExpirationInDays": { "Ref": "ExpirationInDays" }, "Status": "Enabled" }] },
{ "Ref": "AWS::NoValue" }
]
},
"PublicAccessBlockConfiguration": { // Blocks public access to the bucket
"BlockPublicAcls": "true",
"BlockPublicPolicy": "true",
"IgnorePublicAcls": "true",
"RestrictPublicBuckets": "true"
}
}
}
},
"Outputs": {
"CFS3Bucket": { "Value": { "Ref": "CFS3Bucket" } }, // Outputs the name of the created bucket
"CFS3BucketArn": { "Value": { "Fn::GetAtt": ["CFS3Bucket", "Arn"] } } // Outputs the ARN of the created bucket
}
}
```
### main.json (Main Stack Template)
```json
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "CF template to demonstrate nested stack creation", // Description of what this template does
"Parameters": {
"Environment": { "Type": "String", "Default": "dev" }, // Default parameter for environment
"ProjectName": { "Type": "String", "Default": "gritcoding" } // Default parameter for project name
},
"Resources": {
"S3CatStack": { // Defines a nested stack for 'cat' application
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"Parameters": { // Parameters passed to the nested stack
"ProjectName": { "Ref": "ProjectName" },
"Environment": { "Ref": "Environment" },
"Application": "cat",
"ExpirationInDays": 0
},
"TemplateURL": "S3Template.json" // URL of the template for the nested stack. Initially, this points to the local S3Template.json file.
}
},
"S3DogStack": { // Defines a nested stack for 'dog' application
"Type": "AWS::CloudFormation::Stack",
"Properties": {
"Parameters": { // Parameters passed to the nested stack
"ProjectName": { "Ref": "ProjectName" },
"Environment": { "Ref": "Environment" },
"Application": "dog",
"ExpirationInDays": 0
},
"TemplateURL": "S3Template.json" // URL of the template for the nested stack. Initially, this points to the local S3Template.json file.
}
}
}
}
```
#### Important Note on Parameters
When using nested stacks in CloudFormation, it's crucial to ensure that all required parameters in the nested stack (in this case, S3Template.json) are provided by the main stack (main.json). If the main stack doesn't pass these parameters, the CloudFormation deployment will fail. Also note that the inline `//` annotations in the templates above are for explanation only; JSON does not support comments, so remove them before deploying.
---
### Step 2: Package the CloudFormation Template
In your project's root directory, package the `main.json` and `S3Template.json` files into a single template using the following command:
```bash
aws --region <your-region> cloudformation package --s3-bucket <your-bucket-name> --template-file ./main.json --output-template-file ./packaged-template.json --use-json
```
Check your directory for the newly created `packaged-template.json` file.

---
### Step 3: Deploy the Packaged Template
Now, deploy the stack to AWS CloudFormation using the command below:
```bash
aws cloudformation deploy --template-file ./packaged-template.json --stack-name <your-stack-name>
```

Visit your AWS console, check the CloudFormation and S3 sections, and you should see the newly created stacks and S3 buckets.



To delete the stack, use:
```bash
aws cloudformation delete-stack --stack-name <your-stack-name>
```
Confirm the stack's removal in the AWS console.
---
## Learn More
This tutorial aimed to shed light on CloudFormation syntax. For those who prefer hands-on learning, you can fork and explore the source code from this repository: [GitHub](https://github.com/grit-coding/DevToCodeSnippets/tree/main/tech-tutorials/infrastructure/cloudformation/create-two-nested-buckets). And if you found this guide or the repository useful, a star or a reaction would be much appreciated—it's a simple way to show support and keeps me inspired to share more content like this.😄
For further reading and official AWS documentation, refer to:
- [AWS CloudFormation User Guide](https://docs.aws.amazon.com/cloudformation/index.html)
- [AWS CloudFormation Template Reference](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-reference.html)
- [AWS S3 Bucket - AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-s3-bucket.html)
- [AWS CloudFormation Stack - AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-stack.html)
- [Intrinsic Function Reference - AWS CloudFormation](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/intrinsic-function-reference.html)
- [AWS CLI Command Reference](https://docs.aws.amazon.com/cli/latest/index.html)
| gritcoding |
1,777,418 | LEGO 3D Coordinates | The Grid Dimensions Length (X): The number of blocks you can place in a row. Width (Y):... | 0 | 2024-03-01T16:38:06 | https://dev.to/sagordondev/lego-3d-coordinates-4ndp | python, programming, tutorial, beginners | ### The Grid Dimensions
- **Length (X)**: The number of blocks you can place in a row.
- **Width (Y)**: The number of blocks you can place in a column, perpendicular to the row.
- **Height (Z)**: The number of layers of blocks you can stack on top of each other.
Imagine you have a LEGO base that's 1 block wide, 1 block long, and you're allowed to stack up to 1 block high. This gives you a tiny (2 X 2 X 2) space to work with, because you start counting from 0 (so you have positions 0 and 1 in each dimension).
### The Rule (N)
Now, let's say you have a rule: you can't place a LEGO block at any position where the sum of the coordinates equals (N). For instance, if (N=2), you can't place a block at any spot where adding up the row number, column number, and stack height equals 2.
### Visualizing Coordinates
Each spot where you could place a LEGO block has coordinates (i, j, k):
- (i) is the position along the length of your base.
- (j) is the position along the width.
- (k) is the height or layer number.
### Real-World Example
Let's apply this with our (1 X 1 X 1) model and (N=2):
- You start placing blocks at (0, 0, 0), the very bottom corner.
- You can also place blocks at (0, 0, 1), moving up one layer without changing the row or column.
- However, you can't place a block at (0, 1, 1) because (0+1+1=2), and our rule forbids any spot where the coordinates add up to 2.
### The List of Allowed Coordinates
Using a list comprehension to apply these rules, we find the allowed spots to place our LEGO blocks without breaking the (N=2) rule. For our tiny LEGO base, the placements that avoid spots where the coordinates sum to 2 are: [(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0), (1, 1, 1)]. Each of these coordinates represents a spot where you can safely place a block according to the rules.
### Visualization
Imagine laying out a grid of possible block positions on your LEGO base and then marking which spots are allowed under the rule. Each allowed spot gets a LEGO block, building a unique structure that adheres to your constraints.
This visualization takes the abstract concept of 3D coordinates and the rule-based exclusion and turns it into a tangible LEGO building exercise, making it easier to grasp how list comprehensions can generate and filter complex collections based on specific conditions.
#### Here is the code in Python using a list comprehension.
```python
#!/usr/bin/python3
# lego_3d_coords.py

def lego_3d_coordinates(x, y, z, n):
    """
    Print all possible coordinates on the 3D grid within the
    specified dimensions, excluding those combinations where the
    sum of the dimensions equals 'n'.
    """
    coordinates = [[i, j, k] for i in range(x + 1) for j in range(y + 1)
                   for k in range(z + 1) if i + j + k != n]
    print(coordinates)


if __name__ == '__main__':
    x = int(input())
    y = int(input())
    z = int(input())
    n = int(input())
    lego_3d_coordinates(x, y, z, n)
```
> Once you have created this script be sure to make it executable using `chmod +x lego_3d_coords.py` then run it in the same directory using `./lego_3d_coords.py`
> Also remember to add the coordinates input values using your preferred CLI once you run your Python script.
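As a quick sanity check (not part of the original script), you can evaluate the same comprehension directly for the 1 X 1 X 1 base with N = 2 from the walkthrough above:

```python
# The 1 X 1 X 1 grid gives positions 0 and 1 in each dimension;
# spots whose coordinates sum to N = 2 are filtered out.
coords = [[i, j, k]
          for i in range(2)
          for j in range(2)
          for k in range(2)
          if i + j + k != 2]

print(coords)
# → [[0, 0, 0], [0, 0, 1], [0, 1, 0], [1, 0, 0], [1, 1, 1]]
```

Five placements remain; (0, 1, 1), (1, 0, 1), and (1, 1, 0) are excluded because their coordinates sum to 2.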
Photo by <a href="https://unsplash.com/@xavi_cabrera?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Xavi Cabrera</a> on <a href="https://unsplash.com/photos/yellow-red-blue-and-green-lego-blocks-kn-UmDZQDjM?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
| sagordondev |
1,777,457 | To Infinity and Beyond | Today, let's talk a bit about the impossible that exists only in your head and the limits that you... | 0 | 2024-03-01T17:30:46 | https://dev.to/ujs74wiop6/ao-infinito-e-alem-2iof | career, pgrowth | Today, let's talk a bit about the impossible that exists only in your head and the limits you place on yourself.
Some time ago, I started taking care of my mental health at work and in my studies because, to achieve the career longevity I want, I need to pay attention to emotional aspects such as impostor syndrome, insecurity, self-evaluation, and so on.
We all have our own process for learning something or becoming good at something, right? And that takes a certain amount of time, which remains our own, without taking any other person on the face of the earth into account. Just ourselves!
Nowadays, I see that this individual process is not being respected. It is basically a cycle of frustration that applies not only to the ICT field but to many other areas. By the very nature of the 21st century, we spend most of our time watching how other people do things, or comparing ourselves all the time. This is very wrong. We don't really know what a person did to get where they are or to achieve what they achieved. This constant comparison is wearing down the new generation. This life cycle basically boils down to three phases:
1st - You decide to learn something or reach a goal.
2nd - You bury yourself in thoughts.
3rd - Simply from overthinking, you end up placing your primary goal too far from your reality and you simply give up. Yes... you give up on achieving your goal because you believe you are not capable enough! Often before even trying.
Your insecurity in thinking you can't do something, or holding the preconception that it is not for you, is an ignorant thought that limits your capacity and sets a ruler with an 'X' limit on your life.
Think with me: the point is not to believe you can't do something (or aren't capable) and torment yourself over it; the main point is simply not to bury yourself in thoughts and just do it. In that case, you will have two possible outcomes: in the attempt, you execute successfully and get it right, which brings the glorious feeling of satisfaction; or in the attempt, you fail and don't succeed, but you gain a unique experience that will make you try again in a different way (with the full certainty that you will have a slightly better chance of getting it right).
When you fail, you drift into a wave of negative thoughts, but that cannot shake you. Always try to think that, regardless of the outcome, you are always walking toward 'goal achieved successfully!', and if by chance you don't make it, try again. These countless attempts are not in vain... eventually you will succeed!
I want to end this article with the following argument: stay calm and don't give up, don't stop trying, truly look at your own path and not your neighbor's. | ujs74wiop6 |
1,777,611 | Importance of Data Structures in Computer Science. | Introduction: Data Structures refer to the organization and storage of data to facilitate efficient... | 0 | 2024-03-01T19:54:02 | https://dev.to/nsanju0413/importance-of-data-structures-in-computer-science-4055 | datastructures, dsa, programming, beginners | **Introduction:**
> Data Structures refer to the organization and storage of data to facilitate efficient access and modification. Algorithms, on the other hand, are step-by-step procedures or formulas for solving computational problems. Together, they form the bedrock of computer science, influencing the way programmers approach and solve problems.
**What are Data Structures?**
> At its core, a data structure is a way of organizing and storing data to perform operations efficiently. In the realm of data structures, various types serve different purposes. Arrays offer fast access but limited in size, linked lists provide dynamic storage, stacks and queues manage data access in specific ways, while trees and graphs enable hierarchical relationships.
The significance of data structures lies in their ability to optimize data access and modification, catering to the specific needs of different algorithms. For instance, a search algorithm may require a different data structure compared to a sorting algorithm.
**What are Algorithms?**
> Algorithms are the step-by-step procedures or formulas used for solving computational problems. They define a set of instructions that, when followed, lead to the desired outcome. Algorithms work hand in hand with data structures, leveraging their organization to achieve efficiency in processing.
Sorting algorithms, such as quicksort or mergesort, dictate how data should be arranged. Searching algorithms, like binary search, define how data should be located. The choice of algorithm can significantly impact the speed and efficiency of a program.
**The Significance of DSA in Programming:**
> Understanding Data Structures and Algorithms enhances a programmer's problem-solving skills. It allows them to choose the right tools for the task at hand, leading to more efficient and optimized solutions. In essence, DSA is the key to writing code that not only works but works well.
> Consider a scenario where a program needs to search for a specific element in a dataset. Utilizing an efficient search algorithm, such as binary search on a sorted array, significantly reduces the time complexity compared to a linear search. This efficiency becomes paramount as datasets grow in size.
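To make the binary-search comparison above concrete, here is a minimal Python sketch (the function name is my own choice):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each comparison halves the remaining search space, giving
    O(log n) time versus O(n) for a linear scan.
    """
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1   # target must be in the upper half
        else:
            high = mid - 1  # target must be in the lower half
    return -1
```

On a million-element sorted list, this needs at most about 20 comparisons, while a linear scan may need a million.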
**DSA in Software Development:**
> In the broader scope of software development, DSA plays a crucial role in designing scalable and optimized systems. Efficient data structures and algorithms are fundamental in crafting software that can handle large amounts of data, ensuring optimal performance.
> In system design and architecture, understanding the underlying principles of DSA aids in creating robust and responsive applications. Modern frameworks and libraries often incorporate well-designed data structures and algorithms to provide developers with powerful tools to solve complex problems.
**Job Interviews and Competitive Programming:**
> The importance of Data Structures and Algorithms in the realm of job interviews cannot be overstated. Many technical interviews for software engineering positions focus heavily on DSA. Being proficient in these concepts not only increases the chances of passing interviews but also signifies a strong foundation in computer science.
> Competitive programming, where participants solve algorithmic problems within a specified time frame, further highlights the significance of DSA. Those skilled in these concepts often excel in such competitions, showcasing their ability to solve problems efficiently and under time constraints.
**Resources for Learning DSA:**
> For those eager to delve into the world of Data Structures and Algorithms, numerous resources are available. Online courses, books, and tutorials provide a structured approach to learning. Coding platforms, such as LeetCode and HackerRank, offer a practical avenue for applying theoretical knowledge to solve real-world problems.
> Effective learning involves a combination of theoretical understanding and hands-on practice. Consistent practice on coding platforms and participation in coding challenges contribute significantly to mastering DSA.
1. [Geeks for Geeks](https://www.geeksforgeeks.org/learn-data-structures-and-algorithms-dsa-tutorial/)
2. [HackerRank](https://www.hackerrank.com/domains/data-structures)
3. [LeetCode](https://leetcode.com/explore/)
4. [CodeChef](https://www.codechef.com/learn/topic/data-structures-and-algorithms)
**Future Trends and Applications:**
> As technology advances, the role of Data Structures and Algorithms continues to evolve. Emerging technologies, such as artificial intelligence, machine learning, and blockchain, heavily rely on advanced DSA concepts for efficient processing and analysis.
> The future holds exciting prospects for DSA, with ongoing research and development pushing the boundaries of what is possible. As the need for faster and more efficient algorithms grows, so does the importance of understanding and innovating in the realm of DSA.
**Conclusion**
> In conclusion, Data Structures and Algorithms form the cornerstone of computer science. Their importance cannot be overstated, as they influence every facet of programming, from writing efficient code to designing scalable systems. For those venturing into the world of software development or preparing for technical interviews, a strong grasp of DSA is not just an advantage—it's a necessity. Continuous learning, practice, and adaptation to emerging trends will ensure that programmers remain at the forefront of the ever-evolving field of Data Structures and Algorithms.
| nsanju0413 |
1,777,619 | Journey from 82289ms to 975ms: Optimizing a Heavy Query in .NET Core | Introduction: The Quest for Speed Unveiling the Challenge: A 82289ms Monster Analyzing the Culprit:... | 0 | 2024-03-01T20:14:28 | https://dev.to/emadkhanqai/journey-from-82289ms-to-975ms-optimizing-a-heavy-query-in-net-core-4k8j | programming, refactoring | 1. Introduction: The Quest for Speed
2. Unveiling the Challenge: A 82289ms Monster
3. Analyzing the Culprit: Understanding the Code
4. The Road to Optimization: Strategies Employed
5. Refactored Elegance: Witnessing the Transformation
6. Lessons Learned: Insights and Reflections
7. Conclusion: From Struggle to Success
**Introduction: The Quest for Speed**
In the world of software development, optimization isn’t just a goal; it’s a necessity. Recently, I embarked on a journey to tame a beast of a query that was gobbling up precious milliseconds. What began as a daunting challenge ultimately turned into a triumph of efficiency and ingenuity.
**Unveiling the Challenge: A 82289ms Monster**
The journey began when I encountered a query that seemed to defy the laws of efficiency. Clocking in at a staggering 82289ms, it was clear that drastic measures were needed. The culprit? A tangled web of Include and ThenInclude calls fetching a colossal number of rows from our SQL Server database.
**Analyzing the Culprit: Understanding the Code**
Diving into the code, I meticulously dissected each line, searching for bottlenecks and inefficiencies. It became evident that the excessive data retrieval was the primary culprit behind the sluggish performance. With approximately 60k rows being fetched, it was no wonder the query was struggling to keep pace.
**The Road to Optimization: Strategies Employed**
Armed with insight and determination, I set out to optimize the query. Employing a combination of techniques including eager loading, selective data retrieval, and caching, I systematically tackled each bottleneck head-on. Through careful analysis and experimentation, I fine-tuned the code, inching closer to my goal with each optimization.
**Refactored Elegance: Witnessing the Transformation**
The moment of truth arrived as I executed the refactored code. Anticipation hung in the air as the query sprang to life, executing with lightning speed. In a remarkable transformation, the execution time plummeted from 82289ms to a mere 975ms. The satisfaction of witnessing such a dramatic improvement was immeasurable.

**Lessons Learned: Insights and Reflections**
The journey of optimizing a heavy query taught me valuable lessons that extend beyond mere technical proficiency. It underscored the importance of patience, perseverance, and a willingness to challenge conventional wisdom. It also highlighted the power of collaboration and knowledge sharing within the developer community.
**Conclusion: From Struggle to Success**
In conclusion, the story of optimizing a heavy query in .NET Core is a testament to the endless possibilities that lie within the realm of software development. While the road may be fraught with challenges, with the right tools, techniques, and mindset, any obstacle can be overcome. As we continue to push the boundaries of what’s possible, let us embrace each challenge as an opportunity for growth and innovation.
Have you encountered similar challenges in your development journey? Share your experiences and insights in the comments below. Together, let’s celebrate the triumphs and tribulations that unite us as developers on a quest for excellence. | emadkhanqai |
1,777,635 | Estimate the read time of an article without any library in JavaScript. | In this article, we'll embark on a journey to craft a JavaScript function to help us estimate the... | 0 | 2024-03-01T20:56:19 | https://dev.to/lennyaiko/estimate-the-read-time-of-an-article-without-any-library-in-javascript-2k4e | javascript, webdev, tutorial, beginners | In this article, we'll embark on a journey to craft a JavaScript function to help us estimate the read time of an article. You will dabble with a little bit of regex to help you strip your content clean for proper estimation. Keep in mind that since this is pure JavaScript, it works across the stack (front-end and back-end).
Let's get started.
## Strip HTML Tags
If there are HTML tags present in your content, you will need to strip the content to make the estimation more accurate.
To do that, we have to do some regex:
```javascript
const htmlTagRegExp = /<\/?[^>]+(>|$)/g
const textWithoutHtml = text.replace(htmlTagRegExp, '')
```
- **htmlTagRegExp**: The regex function catches any HTML tag syntax.
- **textWithoutHtml**: The `.replace` property replaces the HTML tag syntax caught by the regex with a blank space.
With that, we achieved the first phase of the estimation.
## Match all words
```javascript
const wordMatchRegExp = /[^\s]+/g
const words = textWithoutHtml.matchAll(wordMatchRegExp)
```
- **wordMatchRegExp**: This regex function is used to match all non-whitespace characters. It's designed to match individual words in a text.
- **words**: The matchAll method is used to find all matches of the regular expression in the given textWithoutHtml. It returns an iterator containing all the matches.
## Let's Estimate!
To estimate the read time of the content, unpack `words` and take its length as the word count, then divide by 200, because 200 words per minute is the assumed average reading speed.
```javascript
const wordCount = [...words].length
const readTime = Math.round(wordCount / 200)
```
With that, you have gotten the estimated read time of your content.
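Putting the three steps together, a minimal reusable sketch might look like this (the function name `estimateReadTime` is my own choice):

```javascript
// Estimate read time in minutes from HTML or plain-text content.
function estimateReadTime(text) {
  // Strip HTML tags so markup doesn't inflate the word count.
  const htmlTagRegExp = /<\/?[^>]+(>|$)/g;
  const textWithoutHtml = text.replace(htmlTagRegExp, '');

  // Match every run of non-whitespace characters (i.e., each word).
  const wordMatchRegExp = /[^\s]+/g;
  const words = textWithoutHtml.matchAll(wordMatchRegExp);

  // Assume an average reading speed of 200 words per minute.
  const wordCount = [...words].length;
  return Math.round(wordCount / 200);
}
```

Note that `Math.round` yields 0 for very short texts; swapping in `Math.max(1, Math.ceil(wordCount / 200))` is a common tweak if you always want at least one minute.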
## Conclusion
You can always set this up as a reusable function in your project and make use of it without installing any additional packages.
See you on the next one. | lennyaiko |
1,777,833 | Evolving Skills: What Developers Need to Succeed in 2024 and Beyond | Introduction: In the rapidly evolving landscape of technology, staying ahead of the curve is... | 0 | 2024-03-02T04:54:28 | https://dev.to/pdhavalm/evolving-skills-what-developers-need-to-succeed-in-2024-and-beyond-2ok4 | xrdev2024, microservices, blockchain, iot | **Introduction**:
In the rapidly evolving landscape of technology, staying ahead of the curve is essential for developers. As we stride into 2024, the demand for new skills and expertise continues to grow. In this blog, we'll explore the crucial skills that developers need to succeed in 2024 and beyond.
1. **Quantum Computing**: Quantum computing represents a paradigm shift in computing power. Developers must acquaint themselves with quantum algorithms, programming languages like Q#, and development frameworks to harness the potential of quantum computing for solving complex problems.
2. **Ethical AI and Responsible Development**: With the integration of AI into various facets of our lives, developers must prioritize ethical considerations. Understanding fairness, transparency, and accountability in AI systems is paramount to building responsible technology.
3. **Extended Reality (XR)**: The immersive experiences of augmented reality (AR), virtual reality (VR), and mixed reality (MR) are reshaping industries. Developers need skills in XR technologies to create captivating and transformative experiences.
4. **Blockchain Development**: Blockchain technology extends beyond cryptocurrencies, offering solutions in finance, supply chain, and more. Developers must grasp blockchain development, smart contract programming, and decentralized application (dApp) development to leverage its potential.
5. **Cybersecurity**: In an era of increasing cyber threats, developers play a crucial role in building secure systems. Understanding security best practices, secure coding techniques, and implementing robust security measures are imperative skills for developers.
6. **Edge Computing**: Edge computing brings computation closer to data sources, enabling real-time processing. Developers need skills in edge computing architectures and optimizing applications for edge devices to meet the demands of latency-sensitive applications.
7. **Internet of Things (IoT)**: IoT devices are ubiquitous, offering opportunities in various domains. Developers require skills in IoT development platforms, protocols, and data management to create innovative IoT solutions.
8. **Natural Language Processing (NLP)**: NLP powers conversational interfaces and text analysis applications. Developers must master NLP techniques such as sentiment analysis, language understanding, and chatbot development to create intuitive user experiences.
9. **Robotic Process Automation (RPA)**: RPA automates repetitive tasks, enhancing efficiency. Developers need skills in RPA tools and process automation to streamline workflows and drive productivity.
10. **Low-Code/No-Code Development**: Low-code and no-code platforms accelerate application development. Developers should embrace these platforms to rapidly build and deploy applications, freeing up time for innovation and problem-solving.
11. **Data Science and Machine Learning**: Data is the fuel powering modern applications. Developers need skills in data science, machine learning, and data visualization to extract insights from data and build predictive models.
12. **DevOps and Site Reliability Engineering (SRE)**: DevOps practices ensure the seamless delivery of software. Developers should adopt DevOps principles, CI/CD pipelines, and containerization to achieve reliable and scalable software deployments.
13. **Microservices Architecture**: Microservices offer flexibility and scalability in software development. Developers need skills in designing, developing, and deploying microservices-based architectures to build resilient and adaptable systems.
14. **Cross-Platform Development**: With diverse platforms, developers must embrace cross-platform development frameworks like Flutter and React Native. These frameworks enable the creation of applications that run seamlessly across multiple platforms, reaching a broader audience.
15. **Soft Skills**: Beyond technical expertise, developers need strong communication, collaboration, and problem-solving skills. Effective communication with stakeholders and the ability to work in diverse teams are essential for success in today's dynamic environment.
As we navigate the technological landscape of 2024 and beyond, developers must continually adapt and upskill to meet the evolving demands of the industry. By mastering these skills, developers can not only stay competitive but also drive innovation and positive change in the world of technology. | pdhavalm |
1,777,898 | Strengthening Salah through Nazra Quran | Salah is a crucial component of Islamic theology because it provides a direct channel of... | 0 | 2024-03-02T06:40:44 | https://dev.to/equranekareem/strengthening-salah-through-nazra-quran-55oj | qurancourse, onlinequran, onlinenazara | Salah is a crucial component of Islamic theology because it provides a direct channel of communication between believers and the Almighty. Salah can be made even more potent by using Nazra Quran, which is reciting the Quran during prayers. This promotes elevation and a strong spiritual bond. Using the rich legacy of the [Nazra Quran course](https://equranekareem.com/courses/quran-courses-online/nazra-quran/), we examine practical ways to enhance your Salah experience in this book.
## What is Nazra Quran?
"Nazra Quran" means reciting the Quran. In Islam, Nazra is when people read the holy verses of the Quran out loud. This is really important for Muslims because it lets them connect directly with the message from Allah that was revealed to Prophet Muhammad (peace be upon him).

## Pillars of Islam
The Pillars of Islam are five important practices that form the basis of Muslim faith and devotion.
- **Shahada** is the declaration of faith in the oneness of Allah and the prophethood of Muhammad.
- **Salah** involves performing ritual prayers five times a day to connect directly with Allah.
- **Zakat** is giving to those in need, ensuring fair distribution of wealth and showing compassion.
- **Sawm** is fasting during Ramadan, promoting self-discipline, reflection, and empathy.
- **Hajj** is the pilgrimage to Mecca, symbolizing unity and equality among believers.

Together, these pillars support the spiritual foundation of Islam, guiding Muslims in their journey of faith and devotion to Allah.
## Importance of Salah
Salah offers a direct line of connection between believers and the Almighty, making it an essential part of Islamic theology. Reciting the Quran aloud during prayers, or Nazra Quran, might increase the efficacy of salah. Elevation and a solid spiritual link are encouraged by this. In this book, we look at ten useful strategies to improve your salah experience using the rich legacy of the Nazra Quran.
## Benefits of Strengthening Salah through Nazra Quran
Adding Nazra Quran to Salah brings many spiritual benefits, making both the prayer and the connection with the Quran stronger.
Better Spiritual Connection: Nazra Quran in Salah helps people feel closer to God by connecting with the Quranic verses in a deeper way.
Improved Focus: When people recite Quranic verses rhythmically during Salah, they can concentrate better, focusing fully on their prayer and ignoring other distractions.
## Tips for Strengthening Salah through Nazra Quran
## Focus on Tajweed
Learning Tajweed ul Quran, which is the correct way to say Quranic verses, makes Nazra Quran during Salah more beautiful and effective.
## Consistent Recitation
Regularly reciting the Quran helps with memorization and fluency. Dedicate some time every day, even if it's just a few verses.
## Reflection and Contemplation
Think about the Quran's teachings and how they relate to daily life. Ponder over the deeper meanings of the verses to feel closer to Allah.
## Impact on Spiritual Growth
Combining Nazra Quran with Salah greatly boosts spiritual growth. It helps you understand Islam better, strengthens your faith, and brings a sense of peace to your heart.
## Application in Salah
Use what you've learned from the Quran in your Salah. Let the verses guide your prayers, making you more mindful and respectful.
## Understanding the Meaning
Take time to understand what the verses mean. This not only makes Salah better but also makes the spiritual experience richer.
## Integrate Short Surahs
Include short Surahs in your Salah, like Surah Al-Fatiha and Surah Al-Ikhlas, to make recitation during prayer smoother.
## Memorize Quranic Verses
Memorizing important Quranic verses allows worshippers to recite them easily during Salah, making their connection with the divine message stronger.
## Establish a Routine
Create a regular schedule for reciting Quranic verses during Salah, smoothly including Nazra Quran in your daily prayers.
## Conclusion
Improving Salah with Nazra Quran goes beyond just reciting verses mechanically; it's about building a meaningful connection with the Quran and strengthening one's bond with Allah. You can perfect your Salah by utilizing online resources like eQuranekareem that provide Quranic teaching and courses. By bringing the Quran's teachings into daily prayers, Muslims can grow spiritually and find fulfillment in their worship.
## FAQs
**How does Nazra Quran enhance the Salah experience?**
Nazra Quran enriches Salah by infusing prayer with divine guidance and spiritual resonance from Quranic verses, fostering a deeper connection with the Quran and Allah.
**Can beginners incorporate Nazra Quran into their Salah?**
Absolutely! Beginners can start by adding short Surahs to their Salah and gradually include more as they become better at reciting the Quran.
**Is Tajweed necessary for reciting the Quran during Salah?**
While Tajweed improves the beauty and accuracy of Quranic recitation, beginners can begin with basic pronunciation and improve their Tajweed over time.
**How can one maintain concentration during Salah with Nazra Quran?**
Creating a calm environment, minimizing distractions, and focusing on the meanings of Quranic verses can help maintain concentration during Salah with Nazra Quran.
**Can listening to audio recitations help in memorizing Quranic verses for Salah?**
Yes, listening to audio recitations can aid memorization by reinforcing auditory learning and helping individuals become familiar with correct pronunciation and melody of Quranic verses.
**What role does Khushu' play in Strengthening Salah through the Nazra Quran?**
Khushu', or reverent attentiveness, is crucial for maximizing the spiritual impact of Nazra Quran during Salah, fostering a deep sense of humility and connection with the Divine.
| equranekareem |
1,777,966 | Building a Container-Optimized VM Template with Ignition on Proxmox 8.x | Learn how to create a versatile VM template with Proxmox 8.x for your Kubernetes infrastructure using openSUSE MicroOS, customized with Ignition for seamless deployment and management. Follow step-by-step instructions to download the system image, configure the template, and ensure compatibility with Proxmox features. | 0 | 2024-03-02T10:22:21 | https://dev.to/sdeseille/building-a-container-optimized-vm-template-with-ignition-on-proxmox-8x-356p | proxmox, ignition, chatgpt, tutorial | ---
title: Building a Container-Optimized VM Template with Ignition on Proxmox 8.x
published: true
description: Learn how to create a versatile VM template with Proxmox 8.x for your Kubernetes infrastructure using openSUSE MicroOS, customized with Ignition for seamless deployment and management. Follow step-by-step instructions to download the system image, configure the template, and ensure compatibility with Proxmox features.
tags: proxmox, ignition, chatgpt, tutorial
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0cu29ao0o7a4da0psjj.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-03-02 09:48 +0000
---
In my recent endeavors, I set up a lab machine powered by a second-hand Dell Optiplex 7040 Mini, featuring an i7 6700t processor. Despite its compact size and silent operation, this machine offers sufficient performance to host multiple virtual machines concurrently. I utilize this setup to experiment with various prototypes, such as running podman containers in rootless mode on an openSUSE Leap VM hosted within it.
A few months ago, I embarked on a journey to learn about building and utilizing Kubernetes infrastructure. Following the course "Architecting with Google Kubernetes Engine" on [Coursera](https://www.coursera.org/) provided invaluable insights and knowledge.
Now, it's time to put that knowledge into practice and gain hands-on experience. To kickstart my journey into Kubernetes, I need to establish a foundation—a VM template that will serve as the cornerstone of my endeavors.
## Obtaining a Container-Optimized System
To create our VM template, we first need to acquire system images from a distribution of our choice. Given my previous experience with podman on openSUSE Leap, I opted to explore the container-optimized system from openSUSE, known as MicroOS. You can find more information about it [here](https://get.opensuse.org/microos/).
### Downloading the openSUSE-MicroOS.x86_64-ContainerHost Image
After selecting our distribution, we navigate to the Proxmox Web interface. Accessing the Shell interface of our standalone Cluster node, we encounter a system prompt like the one below.
```bash
Linux midgard 6.5.11-7-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-7 (2023-12-05T09:44Z) x86_64
...
```
Execute the following command to download the image:
```bash
wget https://download.opensuse.org/tumbleweed/appliances/openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2
```
This command fetches the desired image, which includes support for the qemu guest agent.
## Creating and Configuring the Initial VM for MicroOS
Following the guidance provided in the Proxmox documentation, I executed the command below to create a new VM with specific configurations:
```bash
qm create 9001 --memory 2048 --net0 virtio,bridge=vmbr1,tag=10 --scsihw virtio-scsi-pci
```
## Importing the MicroOS Image to Local-LVM Storage
To make use of the downloaded openSUSE MicroOS disk image, it needs to be imported into the local-lvm storage and attached as a SCSI drive. The following command accomplishes this task:
```bash
qm set 9001 --scsi0 local-lvm:0,import-from=/root/openSUSE-MicroOS.x86_64-ContainerHost-kvm-and-xen.qcow2
```
A warning is issued regarding the potentially large size of the disk.
## Customizing the Template
As the default Proxmox setup utilizes cloud-init for template customization, adjustments were made to accommodate Ignition instead.
### Set Display Device
It is possible to keep the default display (vga) or to use qxl (spice). I chose qxl.
```bash
qm set 9001 --vga qxl
```
### Activating qemu-guest-agent
To facilitate interaction between the host and guest machine, the qemu-guest-agent package needed activation within the VM template:
```bash
qm set 9001 --agent 1
```
### Adding a CD-ROM Device
To load Ignition settings from ignition.iso, a cdrom device was added to the VM template:
```bash
qm set 9001 --ide2 none,media=cdrom
```
### Setting Boot Order to scsi0
The boot order was configured to prioritize scsi0 to ensure the VM boots directly from it:
```bash
qm set 9001 --boot order=scsi0
```
### Finalizing the openSUSE MicroOS VM Template
To transform the VM into a template for future use, the VM was named and then converted using the following commands:
```bash
qm set 9001 --name template-opensuse-MicroOS-202403
qm template 9001
```
## Validating the VM Template
### First Boot Setting with Ignition
This section outlines the process of configuring the first boot settings for the VM using Ignition. Ignition is a tool used to configure the initial settings of a machine, such as users, passwords, and network configuration, in an automated and reproducible manner. Follow these steps to create and apply the Ignition configuration:
#### 1. Prepare the Ignition Configuration Template
Start by creating an Ignition configuration template file. This file will contain the initial settings you want to apply to the VM upon first boot. Here's how to create the template file:
```bash
mkdir -p iso/ignition
touch config.tpl
```
Edit the `config.tpl` file to define the desired configuration settings using JSON format. Include details such as user accounts, passwords, SSH keys, and any other system configurations required for your environment.
```json
{
"ignition": {
"version": "3.1.0"
},
"passwd": {
"users": [
{
"name": "root",
"passwordHash": "<password_hash>",
"sshAuthorizedKeys": [
"<ssh_public_key>"
]
}
]
},
"storage": {
"files": [{
"filesystem": "root",
"path": "/etc/hostname",
"mode": 420,
"overwrite": true,
"contents": {
"source": "data:,HOSTNAME_TO_REPLACE"
}
}]
}
}
```
Replace `<password_hash>` with the hashed password for the root user and `<ssh_public_key>` with the SSH public key you want to authorize for remote access. Additionally, replace `HOSTNAME_TO_REPLACE` with the desired hostname for the VM.
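As a rough sketch of how the `<password_hash>` value can be produced (this assumes `openssl` is available on the host; `mkpasswd` from the `whois` package works too), you can generate a SHA-512 crypt hash like this:

```shell
# Hypothetical example: generate a SHA-512 crypt hash for the Ignition
# "passwordHash" field. 'changeme' is a placeholder password.
HASH=$(openssl passwd -6 'changeme')
echo "$HASH"   # prints a string of the form $6$<salt>$<hash>
```

Paste the resulting `$6$...` string in place of `<password_hash>` in the template.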
#### 2. Generate the Ignition Configuration File
Once you've defined the configuration settings in the template file, use `sed` to replace placeholders in the template with actual values and generate the Ignition configuration file (`config.ign`).
```bash
sed 's/HOSTNAME_TO_REPLACE/<hostname_of_vm>/' config.tpl > iso/ignition/config.ign
```
Replace `<hostname_of_vm>` with the hostname you want to assign to the VM.
#### 3. Create the Ignition ISO
Next, create an ISO image containing the Ignition configuration file (`config.ign`). Use the `mkisofs` command to generate the ISO image.
```bash
mkisofs -o ignition.iso -V ignition iso
```
This command creates an ISO image named `ignition.iso` in the `iso` directory.
#### 4. Apply the Ignition Configuration to the VM
Now that you have the Ignition ISO image, you can apply the configuration to the VM. Follow these steps:
1. Copy the `ignition.iso` file to the directory used to store ISO images in Proxmox (`/var/lib/vz/template/iso/`).
```bash
cp ignition.iso /var/lib/vz/template/iso/
```
2. Create a VM from the template.
```bash
qm clone 9001 108 --name test-vm-auto
```
3. Set the CD-ROM drive of the VM to boot from the `ignition.iso` file.
```bash
qm set <id_of_vm> --ide2 local:iso/ignition.iso,media=cdrom
```
Replace `<id_of_vm>` with the ID of the VM.
#### 5. Start the VM
Start the VM to initiate the first boot process. Ignition will automatically apply the configuration settings defined in the `config.ign` file during the boot process.
```bash
qm start <id_of_vm>
```
Replace `<id_of_vm>` with the ID of the VM.
After completing these steps, the VM will boot with the specified configuration settings applied, allowing for automated and consistent provisioning of new VM instances.
Upon successful boot, the VM was accessible via SSH using the configured Public Key.
```bash
Using username "root".
Authenticating with public key "imported-openssh-key"
Last login: Mon Feb 5 14:45:43 UTC 2024 from 192.168.3.10 on ssh
test-vm-auto:~ # cat /etc/os-release
NAME="openSUSE MicroOS"
# VERSION="20240228"
ID="opensuse-microos"
ID_LIKE="suse opensuse opensuse-tumbleweed"
VERSION_ID="20240228"
PRETTY_NAME="openSUSE MicroOS"
ANSI_COLOR="0;32"
CPE_NAME="cpe:/o:opensuse:microos:20240228"
BUG_REPORT_URL="https://bugzilla.opensuse.org"
SUPPORT_URL="https://bugs.opensuse.org"
HOME_URL="https://www.opensuse.org/"
DOCUMENTATION_URL="https://en.opensuse.org/Portal:MicroOS"
LOGO="distributor-logo-MicroOS"
test-vm-auto:~ #
```
## Conclusion
Congratulations on reaching the end of this guide! By following the steps outlined here, you've successfully created your first VM template with Proxmox, paving the way for efficient VM deployment and management. Whether you're setting up a small lab environment or preparing for larger-scale deployments, understanding how to create and customize VM templates is a valuable skill.
As you've seen, Proxmox offers powerful tools and features for creating and managing VM templates, including support for Ignition configuration and automated provisioning. While this guide focused on using Ignition for initial VM configuration, Proxmox also supports other configuration methods such as Cloud-init, providing flexibility to meet different requirements.
I'd like to note that this article has been improved with the assistance of ChatGPT, an AI language model developed by OpenAI. By leveraging ChatGPT's capabilities, the content was refined to ensure clarity, accuracy, and reader engagement. This collaboration demonstrates the potential of AI-driven tools to enhance the quality and effectiveness of technical documentation.
Thank you for taking the time to explore this guide. I hope you found it informative and valuable for your virtualization projects. If you have any questions or feedback, feel free to reach out. Happy virtualizing!
All files referenced in this article are available on my [GitHub Account](https://github.com/sdeseille/proxmox-vm-template-with-ignition).
| sdeseille |
1,778,037 | Streamlining Your Next.js Projects with Supabase and Drizzle ORM | This guide showcases how to build an efficient application using Supabase for data handling,... | 0 | 2024-03-02T10:50:17 | https://dev.to/musebe/streamlining-your-nextjs-projects-with-supabase-and-drizzle-orm-4gam | ---
title: Streamlining Your Next.js Projects with Supabase and Drizzle ORM
published: true
description:
tags:
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-02-29 06:55 +0000
---
This guide showcases how to build an efficient application using Supabase for data handling, complemented by Drizzle ORM for advanced database schema management in TypeScript. Starting with database seeding, we aim to demonstrate effective data retrieval, leveraging Drizzle ORM's intuitive blend of SQL-like and relational data access. Drizzle ORM not only simplifies database interactions but also introduces a suite of tools to enhance development workflows, setting the foundation for a productive and streamlined development experience.
## **Step 1: Setting Up Your Next.js Environment**
To start your Next.js project, execute the command `npx create-next-app@latest syncleaf`. This quickly sets up a fresh Next.js application named `syncleaf`
```bash
npx create-next-app@latest syncleaf
```
## Configuring Drizzle ORM and Environment Variables
Proceed by installing Drizzle ORM, PostgreSQL, and dotenv with the command below in your project directory:
```bash
npm install drizzle-orm postgres dotenv
```
This step incorporates Drizzle ORM for handling database schemas, **`postgres`** for interacting with the PostgreSQL database, and **`dotenv`** for environment variable management, all crucial for a secure and efficient database connection.
Following this, enhance your development workflow by adding Drizzle Kit as a development dependency:
```bash
npm i -D drizzle-kit
```
Think of Drizzle Kit as a magic tool that helps you build and change your database, kind of like building with LEGOs. You tell it how you want your database to look using a special code, and it creates a set of instructions to make or change the database just like that. If you decide to change how your database should look, Drizzle Kit figures out what new instructions are needed and keeps everything organized and safe, so you can always go back and see what changes you made. Plus, you can work on different parts of your database in separate pieces or even work on many databases at once. And if you already have a database, Drizzle Kit can quickly understand how it's built and help you make changes to it super fast!
## **Setting Up Your Supabase Project**
Initiate your Supabase project setup by first logging in at [Supabase Login](https://app.supabase.io/). Once logged in, select "New Project" and name it "Syncleaf." It's essential to generate and save a secure database password for later use. Choose the server region that offers the best performance for your target audience. After filling in all the required fields, click "Create new project." Securely store the database password as you will need it for your `.env` file to establish a database connection.
For a visual guide, refer to the image provided below.

After creating your project, you'll be taken to a screen displaying all necessary API keys and configuration details, including your project URL, anon key for client interactions, and service role key for backend operations. These are crucial for connecting your Next.js app securely with Supabase, so make sure to accurately copy them into your `.env` file for future use.

## **Configure Environment Variables**
Create a **`.env`** file at the root of your project to securely store your database credentials. These values are provided by Supabase as highlighted above :
```bash
DATABASE_URL=
NEXT_PUBLIC_SUPABASE_URL=
NEXT_PUBLIC_SUPABASE_ANON_KEY=
SERVICE_ROLE_KEY=
PW=
```
## **Drizzle Configuration Setup**
Create a `drizzle.config.ts` file at the root of your project to configure Drizzle ORM's interaction with your database.
Here is the content for the configuration file:
```ts
import type { Config } from 'drizzle-kit';
import * as dotenv from 'dotenv';
dotenv.config({ path: '.env' });
if (!process.env.DATABASE_URL) {
console.log('🔴 Cannot find database url');
}
export default {
schema: './src/lib/supabase/schema.ts',
out: './migrations',
driver: 'pg',
dbCredentials: {
connectionString: process.env.DATABASE_URL || '',
},
} satisfies Config;
```
This configuration file serves as a map for Drizzle ORM, pointing it to the location of your database schema files, where to store migration files, and which database driver to use. It also securely pulls in the database connection string from your **`.env`** file. This setup is essential for enabling Drizzle ORM to manage your database schema and migrations effectively.
Following the structure outlined in our **`drizzle.config.ts`** configuration, let's proceed to create the files and directories:
## **Defining the Database Schema**
For schema definition, place a **`schema.ts`** file within the **`src/lib/supabase/`** directory. To set up this file and its required directory structure, execute the command:
```bash
mkdir -p src/lib/supabase && touch src/lib/supabase/schema.ts
```
The **`schema.ts`** file is used to define and export data models that closely represent the structure of your database tables. These models facilitate type-safe database operations, ensuring that the data types used in your application match those in your database. This approach significantly enhances development efficiency by enabling autocompletion, reducing runtime errors, and making the codebase easier to understand and maintain.
Add this to it:
```ts
import { pgTable, uuid, text, decimal, integer, timestamp } from "drizzle-orm/pg-core";
export const product = pgTable("product", {
id: uuid('id').defaultRandom().primaryKey().notNull(),
name: text("name"),
description: text("description"),
price: decimal("price", { precision: 10, scale: 2 }),
quantity: integer("quantity"),
image: text("image"),
created_at: timestamp("created_at").defaultNow(),
updated_at: timestamp("updated_at").defaultNow(),
});
```
This code snippet employs `drizzle-orm/pg-core` to create a `product` table model for PostgreSQL integration with Supabase, ensuring operations adhere to specified data types and schema constraints. This method enhances the reliability and scalability of your application's data layer without detailing individual fields.
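To make the type-safety claim concrete, rows selected from this model have roughly the following TypeScript shape. It is written out by hand here purely for illustration (in recent drizzle-orm versions an equivalent type can be inferred directly with `typeof product.$inferSelect`); note that PostgreSQL `decimal` columns come back as strings:

```typescript
// Hand-written illustration of a "product" row's shape; drizzle-orm can
// infer an equivalent type from the schema itself.
type ProductRow = {
  id: string;                 // uuid
  name: string | null;
  description: string | null;
  price: string | null;       // decimal values are returned as strings
  quantity: number | null;
  image: string | null;
  created_at: Date | null;
  updated_at: Date | null;
};

const sample: ProductRow = {
  id: '00000000-0000-0000-0000-000000000000',
  name: 'Sample Product',
  description: 'Illustrative row only',
  price: '19.99',
  quantity: 3,
  image: 'https://example.com/sample.png',
  created_at: new Date(),
  updated_at: new Date(),
};

console.log(sample.name);
```

Mismatched types (say, assigning a number to `price`) are caught at compile time rather than at runtime.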
## Setting Up Database Connection and Utility Functions
Continuing with our setup, we'll now add a **`db.ts`** file to the **`src/lib/supabase`** directory, crucial for our database connection and utility functions. This step simplifies database interactions, improving maintainability and scalability.
To create the **`db.ts`** file:
```bash
touch src/lib/supabase/db.ts
```
This prepares us to define our connection and utilities.
Add the following content to the **`db.ts`** file to set up your database connection and utilities:
```ts
import { drizzle } from 'drizzle-orm/postgres-js';
import postgres from 'postgres';
import dotenv from 'dotenv';
import * as schema from '../../../migrations/schema';
dotenv.config();
if (!process.env.DATABASE_URL) {
console.error('❌ Error: Database URL is not specified in the environment variables.');
process.exit(1);
}
const client = postgres(process.env.DATABASE_URL, { max: 1 });
const db = drizzle(client, { schema });
console.log('Database connection successfully established.');
export default db;
```
This script sets up the database connection using **`drizzle-ORM`** and **`postgres`**, with configurations managed via environment variables. It ensures the **`DATABASE_URL`** is available, initializes the connection, and indicates a successful setup. The **`drizzle`** client is then made available for application-wide usage.
## **Enhancing Database Management with Drizzle Scripts**
To efficiently manage and interact with your database using Drizzle, add the following scripts to your **`package.json`**. These scripts provide convenient commands for database operations such as schema synchronization, introspection, generation, migration, and seeding:
```json
"scripts": {
"push": "drizzle-kit push:pg",
"pull": "drizzle-kit introspect:pg",
"generate": "drizzle-kit generate:pg",
"drop": "drizzle-kit drop",
"check": "drizzle-kit check:pg",
"up": "drizzle-kit up:pg",
"migrate": "npx tsx scripts/migrations/migration.ts",
"studio": "drizzle-kit studio",
"seed": "npx tsx scripts/seed.ts"
}
```
These scripts simplify the process of keeping your database schema in sync with your codebase, managing migrations, and seeding data for development and testing.
## **Generating Migration Files for PostgreSQL with Drizzle**
Execute the **`npm run generate`** command to initiate migration file creation:
```bash
npm run generate
```
Running **`npm run generate`** triggers **`drizzle-kit generate:pg`**, analyzing your PostgreSQL schema and auto-generating a migration file for streamlined schema management. Following this command, a **`migrations`** folder will be created at the root of your project, as directed by the **`out: './migrations'`** setting in **`drizzle.config.ts`**, ensuring an organized approach to tracking database schema changes.
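For reference, the migration generated from the `product` schema above should contain SQL along these lines (illustrative only — the exact statements Drizzle Kit emits may differ slightly between versions):

```sql
CREATE TABLE IF NOT EXISTS "product" (
	"id" uuid PRIMARY KEY DEFAULT gen_random_uuid() NOT NULL,
	"name" text,
	"description" text,
	"price" numeric(10, 2),
	"quantity" integer,
	"image" text,
	"created_at" timestamp DEFAULT now(),
	"updated_at" timestamp DEFAULT now()
);
```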
## **Setting Up the Migration Script**
To organize your project's migration scripts, first create a `scripts` folder at the root of your project directory, then add a `migrations` folder within it, and finally create a `migration.ts` file inside this folder. Use the following command to set up this structure:
```bash
mkdir -p scripts/migrations && touch scripts/migrations/migration.ts
```
This command ensures the necessary directories and files are created in your project, ready for you to add your migration logic.
Add the following content to your `migration.ts` file to handle database migrations:
```tsx
import db from '../../src/lib/supabase/db';
import { migrate } from 'drizzle-orm/postgres-js/migrator';
import dotenv from 'dotenv';
dotenv.config();
const migrateDatabase = async (): Promise<void> => {
console.log('🚀 Starting database migration...');
try {
await migrate(db, { migrationsFolder: 'migrations' });
console.log('✅ Successfully completed the database migration.');
process.exit(0);
} catch (error) {
console.error('❌ Error during the database migration:', error);
process.exit(1);
}
};
migrateDatabase();
```
This script initializes the environment variables, then defines and executes a function to migrate the database using `drizzle-ORM`. It logs the start and successful completion of the migration process or catches and logs any errors encountered, ensuring a clear status update during the migration process.
## Executing the Migration Script
To execute the migration script and apply your database changes, run the following command:
```bash
npm run migrate
```
Upon successful execution, you'll notice new files within the **`migrations`** folder, indicating that the migration scripts have been generated and run. Additionally, by checking your Supabase database, you should find the **`products`** table created, complete with all the fields you've previously defined.
For a more interactive view of your database schema and to manage your data directly, use the command:
```bash
npm run studio
```
This will launch Drizzle-Kit Studio, utilizing your project's Drizzle configuration file to connect to your database. Drizzle Studio provides a user-friendly interface for browsing your database, as well as adding, deleting, and updating entries according to your defined Drizzle SQL schema.
## **Populating the Products Table with Seed Data**
With the products table in place, it's time to populate it with some sample data. To achieve this, we'll utilize the **`faker`** library to generate realistic product information seamlessly. This approach not only simplifies the process of creating diverse data sets but also enhances the testing and development experience by providing a rich dataset to work with.
Ensure **`faker`** is installed in your project by running:
```bash
npm install @faker-js/faker
```
Next, create the **`seed.ts`** file by executing the following command:
```bash
touch scripts/seed.ts
```
Now, add the following contents to your **`seed.ts`** file:
```ts
import { drizzle } from 'drizzle-orm/node-postgres';
import { Pool } from 'pg';
import { product } from '../src/lib/supabase/schema';
import { faker } from '@faker-js/faker';
import * as dotenv from 'dotenv';
dotenv.config({ path: './.env' });
if (!process.env.DATABASE_URL) {
console.error('DATABASE_URL not found in .env');
process.exit(1);
}
const main = async () => {
const pool = new Pool({
connectionString: process.env.DATABASE_URL,
});
const db = drizzle(pool);
const productsData = [];
for (let i = 0; i < 20; i++) {
productsData.push({
name: faker.commerce.productName(),
description: faker.commerce.productDescription(),
price: faker.commerce.price({ min: 100, max: 1000, dec: 2, symbol: '' }),
quantity: faker.number.int({ min: 1, max: 100 }),
image: faker.image.url(),
});
}
console.log('Seed start');
await db.insert(product).values(productsData).execute();
console.log('Seed done');
await pool.end();
};
main().catch((error) => {
console.error('Failed to seed products:', error);
process.exit(1);
});
```
To populate your database with 20 unique product entries, execute the command `npm run seed`. This command triggers a script that connects to your database, generates product entries using faker, and inserts them into the products table, creating a foundational dataset for development and testing.
After running `npm run seed`, review your Supabase database or drizzle-kit studio to confirm the successful population of product entries, as shown in the provided screenshot. This confirms the success of your migration and seeding efforts, setting the stage for application development.

## **Wrapping Up**
In conclusion, leveraging Drizzle ORM has empowered us to streamline database population, schema evolution, and data manipulation seamlessly. This efficiency has greatly expedited our development journey, furnishing us with a sturdy groundwork for constructing and expanding our application.
## Reference
- GitHub Repository - [https://github.com/musebe/Drizzle-supabase](https://github.com/musebe/Drizzle-supabase)
- Deployed Demo - [https://drizzle-supabase.vercel.app/](https://drizzle-supabase.vercel.app/)
- Supabase - [https://supabase.io](https://supabase.io/)
- Drizzle ORM Documentation - [https://orm.drizzle.team/](https://orm.drizzle.team/)
- Drizzle ORM Kit Documentation - [https://orm.drizzle.team/kit-docs/overview](https://orm.drizzle.team/kit-docs/overview) | musebe | |
1,778,038 | Memory Handling in Java | Before diving into memory management of java one must know java has primitive datatypes and more... | 0 | 2024-03-04T13:19:15 | https://dev.to/coderatul/memory-handling-in-java-jc6 | java, programming, computerscience, learning | > Before diving into memory management of java one must know java has primitive datatypes and more complex objects (reference types)
- Primitive types
- Reference types
> Java has no concept of pointers, and Java is strictly pass-by-value; there is nothing like pass-by-reference in Java
---
### Primitives
- Primitive types are the basic data types provided by a programming language.
- They are the simplest and most fundamental building blocks of data. In Java, the primitive types include:

- Integral Types:
  - byte: 8-bit signed integer
  - short: 16-bit signed integer
  - int: 32-bit signed integer
  - long: 64-bit signed integer
- Floating-Point Types:
  - float: 32-bit floating-point
  - double: 64-bit floating-point
- Characters:
  - char: 16-bit Unicode character
- Boolean:
  - boolean: Represents true or false.
---
### Reference type
- Reference types are more complex and are used to store references (memory addresses) to objects. Objects are instances of classes or arrays. In Java, reference types include:

- Objects:
  - Instances of classes created using the `new` keyword.
- Arrays:
  - Ordered collections of elements.
- Interfaces:
  - Types representing a contract for classes to implement.
---
## How primitives are stored?
- All data for primitive type variables is stored on the stack
- when setting a primitive type variable equal to another primitive type variable, a copy of value is made.
```java
int a = 10;
int b = 20;
int c = b;
c = 100;
```

> int a = 10; -> int 10 is stored on stack memory
int b = 20; -> int 20 is stored on stack memory
int c = b; -> the value of b (20) is copied to c
c = 100; -> the value of c is modified to 100
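A small runnable sketch of this copy-on-assignment behaviour (the class and method names are illustrative):

```java
public class PrimitiveCopy {
    // Returns {b, c} after c was copied from b and then reassigned.
    static int[] demo() {
        int b = 20;
        int c = b;   // the VALUE 20 is copied onto the stack
        c = 100;     // only c changes; b keeps its own independent copy
        return new int[] { b, c };
    }

    public static void main(String[] args) {
        int[] r = demo();
        System.out.println("b=" + r[0] + ", c=" + r[1]);  // b=20, c=100
    }
}
```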
---
## How reference types store values?
- For reference types, the stack holds a pointer to the object on the heap memory
- When setting a reference type variable equal to another reference type variable, a copy of only the pointer is made
- Certain object types can't be manipulated on the heap (immutables)

- int[ ] c = {1,2,3,4}; -> creates an array of integers in the heap memory, and the stack holds a reference to that object
- int[ ] d = c; -> the reference value (stored on the stack) is copied from variable c to variable d; no new object is created in the heap memory (the actual object is not copied)
> d[1] = 99; -> the value at index 1 is changed through variable d, which holds a reference to the object {1,2,3,4}; the change is therefore also visible through variable c, as both share the same reference
- d = new int[5]; -> a new array is created in the heap memory and d now references that new array
- int[ ] e = {5,6,7,8}; -> creates a new array in the heap memory
> int[ ] f = {5,6,7,8}; -> also creates a new array in the heap; although the contents of `e` and `f` are the same, they exist in separate memory locations
> f[1] = 99; -> this only changes the value at index 1 for array `f`, not for array `e`
- String g = "hello"; -> a new string with the value "hello" is created in the heap memory
- String h = g; -> the reference value stored on the stack is copied from `g` to `h` (not the actual object)
> h = "goodbye"; -> you might expect the value of `g` to change as well, since both `g` and `h` pointed at the same string, but Strings are `immutable`, meaning they can't be modified; instead a new string "goodbye" is created and its reference is assigned to `h`, while `g` still points to "hello"
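The points above can be condensed into a small runnable sketch:

```java
public class ReferenceDemo {
    public static void main(String[] args) {
        int[] c = {1, 2, 3, 4};
        int[] d = c;              // only the reference is copied
        d[1] = 99;
        System.out.println(c[1]); // 99 -- c and d point at the same array

        d = new int[5];           // d now references a fresh array
        System.out.println(c[1]); // still 99 -- c is untouched

        String g = "hello";
        String h = g;             // reference copied, same string object
        h = "goodbye";            // Strings are immutable: h is rebound
        System.out.println(g);    // hello
    }
}
```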
[for more reference and image credit](https://www.youtube.com/@BillBarnum)
| coderatul |
1,778,065 | Supercharge Your Website Search with Google's PSE! | Hey developers! Let's talk about enhancing your website's user experience with a powerful search bar.... | 0 | 2024-03-02T12:09:34 | https://dev.to/beginnerdeveloper/supercharge-your-website-search-with-googles-pse-2a0h | webdev, google, search, javascript | Hey developers! Let's talk about enhancing your website's user experience with a powerful search bar. Today, we're diving into Google's Programmable Search Engine (PSE).
## What is PSE?
PSE is a free service that empowers you to integrate a custom search engine directly into your website. It utilizes Google Search's technology, ensuring users receive fast and relevant results within the specific context of your website's content.
Key perks for developers:
**Targeted Search:** Define the search scope, directing users to specific websites or content you choose.
**Customization:** Craft the search bar and results page to match your website's design aesthetics.
**User-Friendly Features:** Implement features like search refinements, autocomplete, and promotions for an enhanced experience.
**Easy Integration:** Integrate the PSE code seamlessly into your website with minimal technical expertise.
Get started with PSE:
Head over to https://programmablesearchengine.google.com/about/ for comprehensive setup instructions and documentation.
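For context, the embed snippet that PSE generates for your site is typically just two lines; the `cx` value below is a placeholder for your own engine ID:

```html
<script async src="https://cse.google.com/cse.js?cx=YOUR_ENGINE_ID"></script>
<div class="gcse-search"></div>
```

Drop the `<div>` wherever you want the search box to appear; the script renders the widget into it.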

| beginnerdeveloper |
1,778,069 | Dynamic AWS IAM Policies | We maintain a CloudFormation custom resource provider for Amazon Connect. The provider has grown... | 0 | 2024-03-02T12:27:50 | https://bliskavka.com/2024/03/02/dynamic-iam-policies/ | aws, cdk, iam, security | We maintain a CloudFormation custom resource provider for Amazon Connect. The provider has grown organically, and as new features were added, the default role policy has become large.
The provider can do simple low-security tasks like `associateLambda`, or complex tasks like `createInstance`, which requires access to security-sensitive resources like `kms` and `iam`.
During a recent security review, we discovered that the same role policy was being used across all provider instances. This meant that if we used a low-security operation, such as `associateLambda`, the role would be granted access to high-security resources like `kms` and `iam`.
## Solution 1 - Inject a Pre-Built Role
For the current project, we resolved the issue by introducing an optional role prop. This allowed the developer to select specific IAM permissions.
```typescript
// PSEUDO-CODE
class ConnectProvider {
role: IRole;
constructor(props: {role?: IRole}){
if(!props.role){
// Configures the default (overly permissive) permissions
this.role = new Role(...);
} else {
// Uses the injected role
this.role = props.role;
}
// Custom resource handler
this.handler = new Function(... {role: this.role})
}
}
const role = new Role(...);
role.addToPrincipalPolicy(
new PolicyStatement({
effect: Effect.ALLOW,
actions: [
'connect:AssociateLambdaFunction',
'connect:DisassociateLambdaFunction'],
resources: [instanceArn]
})
);
const provider = new ConnectProvider({role});
provider.associateLambda(...)
```
### Pros
- We were able to quickly patch the current app
### Cons
- Each dependent app would have to be updated manually. We have A LOT!
- The app developer must know exactly which IAM permissions are required.
## Solution 2 - Dynamically Generate the Role
I updated the custom resource constructs to dynamically build up the policy based on which resources are used, so I could roll out the update in a backward-compatible way.
```typescript
// PSEUDO-CODE
class ConnectProvider {
role: IRole;
constructor(props: {role?: IRole}){
if(!props.role){
this.role = new Role(...);
} else if (props.role instanceof Role){
// Convert to IRole to avoid manipulating the role
this.role = Role.fromArn(props.role.roleArn)
} else {
this.role = props.role;
}
// Custom resource handler
this.handler = new Function(... {role: this.role})
}
// Users call helper functions to create the custom resource
associateLambda(id, instanceArn, lambda){
if(this.role instanceof Role){
// Dynamically update self-managed role
this.role.addToPrincipalPolicy(
new PolicyStatement({
effect: Effect.ALLOW,
actions: [
'connect:AssociateLambdaFunction',
'connect:DisassociateLambdaFunction'],
resources: [instanceArn]
})
);
}
return new CustomResource({
serviceToken: this.handler.functionArn,
properties: {
instanceArn,
functionArn: lambda.functionArn
}
});
}
}
```
### Pros
- No manual intervention is needed for dependent apps. Simply upgrade the NPM package and redeploy.
### Cons
- Resource deletion does not work properly.
- If you had a custom resource like `associateLambda`, everything works fine because the role policy is updated before the resource is created.
- But if you remove the custom resource in a future release, CloudFormation will update the role policy first (and remove the associated permission) before cleaning up the resource.
- As a result, you encounter a permission error when cleaning up the `associateLambda` resource
- Circular dependencies
- If you used the provider to `createInstance` and then used the instance ARN in another construct like `associateLambda` you will encounter a circular reference
- Details
- Invoke `createInstance` and get instance ARN
- Invoke `associateLambda` using instance ARN
- Instance ARN is used in the dynamic policy, resulting in a circular reference
## Solution 3 - Mix of both
In the end, I decided to use a combination of both solutions. I created a `ConnectProviderRoleBuilder` to make it easier for developers to build the role.
Additionally, I also updated the `ConnectProvider` to automatically use the builder if a role is not provided.
This means that we can update existing apps without any manual intervention. If the app encounters the issues described in Solution 2 during ongoing development, the team can use the `ConnectProviderRoleBuilder` to generate an appropriate role quickly.
```typescript
// PSEUDO-CODE
class ConnectProviderRoleBuilder {
role: IRole;
/**
* Tracks if the provider was used to create an instance.
* If so, we cannot limit role permissions to a specific instance
* due to circular dependency.
*/
private createdInstance: boolean = false;
constructor(props: {existingRole?: IRole}){
if(!props.existingRole){
this.role = new Role(...)
} else if(props.existingRole instanceof Role){
// Ensures role is not manipulated by the builder
this.role = Role.fromArn(props.existingRole.roleArn)
} else {
this.role = props.existingRole;
}
}
/**
* Create an instance ARN for permission filtering
* If the provider was used to create the instance the ARN will be
* `instance/*` to avoid circular dependency error
* Assumes this provider will operate on a single instance.
*/
instanceArn(instanceId: string): string {
if (this.createdInstance) {
// We can't reference the instanceId (circular ref)
return `arn:aws:connect:${region}:${account}:instance/*`;
} else {
return `arn:aws:connect:${region}:${account}:instance/${instanceId}`;
}
}
allow(actions, resources){
if(this.role instanceof Role){
// Only add permissions if the role is being managed by the construct.
this.role.addToPrincipalPolicy(
new PolicyStatement({
effect: Effect.ALLOW,
actions,
resources
})
);
}
}
// Helpers to add policy statements
allowAssociateLambda(instanceId, ...functionArns){
this.allow([
'connect:AssociateLambdaFunction',
'connect:DisassociateLambdaFunction'],
[this.instanceArn(instanceId)]
);
// Update lambda resource policy to allow connect invoke
// ...
}
allowCreateInstance(){
this.createdInstance = true;
this.allow(...)
// ...
}
// ...
}
class ConnectProvider {
builder: ConnectProviderRoleBuilder;
role: IRole;
constructor(props: {role?: IRole}){
this.builder = new ConnectProviderRoleBuilder({role: props.role})
this.role = this.builder.role;
this.handler = new Function(... {role: this.role})
}
associateLambda(instanceId, lambda){
this.builder.allowAssociateLambda(instanceId, lambda.functionArn)
return new CustomResource({
serviceToken: this.handler.functionArn,
properties: {
instanceId,
functionArn: lambda.functionArn
}
})
}
}
const myLambda: IFunction;
// Pre-build the role
const builder = new ConnectProviderRoleBuilder()
builder.allowAssociateLambda(instanceId, myLambda.functionArn)
const provider = new ConnectProvider({role: builder.role})
provider.associateLambda(instanceId, myLambda)
```
## Conclusion
The simplest solution would have been to force the developer to inject a role, but that would have created unnecessary developer friction:
- "My app used to deploy fine, but now I have to manually create a new role".
- "I have no idea what is happening under the hood and which permissions are required", resulting in even more friction.
This solution was certainly more work, but it solved the problem with the least effort from the downstream developers.
Now, go build secure and elegant tools!
| ibliskavka |
1,778,074 | 4 facets of API monitoring you should implement | Introduction Issues with APIs often have the potential to cause major disruptions to... | 0 | 2024-03-02T12:37:23 | https://apitally.io/blog/four-facets-of-api-monitoring-you-should-implement | api, monitoring, webdev | ## Introduction
Issues with APIs often have the potential to cause major disruptions to businesses. Proactive API monitoring is therefore essential for tech professionals who are responsible for maintaining the integrity and performance of business-critical APIs.
In this blog post we'll take an in-depth look at the four fundamental aspects of API monitoring every tech professional should consider to implement:
- API traffic monitoring
- API performance monitoring
- API error monitoring
- API uptime monitoring
Having these in place can empower teams to preemptively address potential issues, optimize API performance, make data-driven product and engineering decisions and ultimately deliver a seamless experience to end-users.
## API traffic monitoring
This aspect of API monitoring involves tracking the volume and type of requests an API receives. It allows developers and product owners to understand how their APIs are being utilized in real-world scenarios, enabling them to make informed decisions about product development and enhancements. If the APIs are being used for integration with other internal systems, analyzing API traffic sheds light onto the behaviors of these systems and can reveal issues or opportunities for optimization.
An equally important benefit of API traffic monitoring is the ability to detect anomalies in usage patterns. Sudden spikes or drops in traffic to certain endpoints can indicate underlying issues, including malicious activities aiming to compromise the API. By setting up alerts for such irregularities, teams can quickly investigate and address the underlying causes, minimizing the risk of downtime or security breaches.
In essence, API traffic monitoring is not just about keeping tabs on the volume of requests; it's about leveraging data to drive strategic decisions, enhance user experiences, and maintain the robustness and integrity of APIs.
## API performance monitoring
Performance is often a key differentiator for APIs. Fast response times not only enhance user experience but also increase the overall efficiency of applications that rely on your API. API performance monitoring involves measuring the time it takes for an API to respond to requests. This is done for the API as a whole, as well as for individual endpoints.
By tracking these latencies over time, you can identify trends and patterns, such as endpoints that consistently take longer to respond. This can help you pinpoint performance bottlenecks and optimize your API for better responsiveness.
Setting performance benchmarks based on these metrics is also crucial. They serve as a standard against which the API's ongoing performance can be measured. Any deviations from these benchmarks should trigger alerts to the API team to investigate and rectify potential issues. Thus, effective API performance monitoring leads to a faster, more reliable API and a smoother user experience.
## API error monitoring
When talking about errors in the context of APIs, it makes sense to distinguish two different categories: server errors (`5xx` responses) and client errors (`4xx` responses).
### Server errors
Server errors result from issues in your application or infrastructure. When API consumers hit server errors, all they can do is retry the request. Server errors can be temporary, for example, when there are intermittent networking issues within your stack. However, when a bug in your application consistently prevents certain requests from being handled successfully, there is nothing API consumers can do but to wait for you to fix the underlying issue. This is why it is vitally important that you have the right tools in place to alert you when these types of errors occur.
### Client errors
Client errors are typically part of the API's regular operation. Monitoring these can provide valuable insights and identify ways to enhance the API's usability for end users. A sudden increase in client errors could indicate problems with your API, such as a new validation rule being too restrictive, or reveal issues with consumer systems providing malformatted input to the API.
In essence, API error monitoring not only helps in pinpointing and fixing issues within the API but also aids in understanding the end-user's interaction with the system. By effectively tracking and analyzing both server and client errors, teams can create a more reliable and user-friendly API.
## API uptime monitoring
Uptime monitoring is another critical facet of API Monitoring. It refers to the act of ensuring that your API is available and functional at all times. Any downtime can lead to significant disruptions to connected systems, making uptime monitoring a crucial part of maintaining a high-quality user experience.
API uptime monitoring involves checking the availability of your API at regular intervals. This can be done by sending requests to various endpoints and verifying the responses. In addition to simple availability checks, uptime monitoring could also consider the 'health' of the response. This might involve checking that the response time is within acceptable limits, or that the data returned in the response is as expected.
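A minimal probe along these lines can be sketched in Python using only the standard library; the URL, timeout, and latency threshold below are placeholder assumptions you would tune for your own API:

```python
import time
import urllib.error
import urllib.request


def check_endpoint(url: str, timeout: float = 5.0, max_latency: float = 1.0) -> dict:
    """Probe one endpoint; report HTTP status, latency, and overall health."""
    start = time.monotonic()
    status = None
    try:
        with urllib.request.urlopen(url, timeout=timeout) as response:
            status = response.status
    except urllib.error.HTTPError as err:
        status = err.code  # 4xx/5xx: the API answered, just not successfully
    except OSError:
        status = None      # DNS failure, refused connection, timeout, ...
    latency = time.monotonic() - start
    # Healthy = reachable, 2xx status, and response time within limits
    healthy = status is not None and 200 <= status < 300 and latency <= max_latency
    return {"url": url, "status": status, "latency": latency, "healthy": healthy}
```

In practice you would run this on a schedule (cron, or a loop with `time.sleep`) against several endpoints and wire the unhealthy case into your alerting.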
Setting up robust alerting is another key aspect of uptime monitoring. Teams should be immediately notified when the API is down or experiencing problems. This allows them to quickly identify and rectify the issues, minimizing the impact on users.
In essence, API uptime monitoring ensures that your API is consistently ready and accessible, providing the high-quality, reliable service that your users expect.
## Conclusion
In conclusion, effective API monitoring involves a comprehensive approach that takes into account API traffic, performance, errors, and uptime.
Fortunately, the technological landscape is filled with tools to assist in these areas. We've gathered some of them in the list below as a starting point for your own research. They range from simple and easy to implement to comprehensive and complex, catering to different use cases.
### Recommended tools
- [Apitally](https://apitally.io/): Simple and easy-to-use API monitoring tool covering traffic, performance, errors, and uptime.
- [Sentry](https://sentry.io/): Error monitoring for applications, including APIs. Also offers application performance monitoring (APM).
- [Postman](https://www.postman.com/api-monitoring/): Uptime and performance monitoring for APIs.
- [Datadog](https://www.datadoghq.com/): Comprehensive monitoring platform.
- [New Relic](https://newrelic.com/): Another comprehensive monitoring platform.
- [Prometheus](https://prometheus.io/): Open-source monitoring system. Often used together with [Grafana](https://grafana.com/). | simongurcke |
1,778,082 | How to: Replace Rollup.js with Vite ⚡️ | For me, it was once again time to take care of a project that I haven't worked on for almost a year.... | 0 | 2024-03-02T12:49:23 | https://thr0n.github.io/how-to-replace-rollup-js-with-vite | webdev, frontend, svelte, vite | For me, it was once again time to take care of a project that I haven't worked on for almost a year. As we can see in the output below (the package.json was analyzed using [npm-check-updates](https://www.npmjs.com/package/npm-check-updates)), the project still uses rollup.js and many libraries have become outdated in the meantime:
## Current dependencies:
```bash
@rollup/plugin-commonjs ^21.1.0 → ^25.0.7
@rollup/plugin-node-resolve ^13.3.0 → ^15.2.3
@rollup/plugin-replace ^3.1.0 → ^5.0.5
@rollup/plugin-typescript ^8.5.0 → ^11.1.6
@tsconfig/svelte ^2.0.1 → ^5.0.2
contentful ^9.3.5 → ^10.6.21
prettier ^2.8.8 → ^3.2.5
prettier-plugin-svelte ^2.10.1 → ^3.2.1
rollup ^2.79.1 → ^4.12.0
rollup-plugin-css-only ^3.1.0 → ^4.5.2
rollup-plugin-scss ^3.0.0 → ^4.0.0
svelte ^3.59.2 → ^4.2.11
svelte-check ^2.10.3 → ^3.6.4
svelte-preprocess ^4.10.7 → ^5.1.3
typescript ^4.9.5 → ^5.3.3
```
So it's time to update!
## Update the application and dependencies
In fact, the introduction of Vite and updating the dependencies were much easier than anticipated. These are the steps I took:
### Setup the basics:
Run `npm create vite@latest`, enter a `<project-name>` and choose `svelte`. When the initial setup is done copy all newly generated files from `./<project-name>` to the actual project directory. Afterwards delete `package-lock.json` once and run `npm install`. You can also delete `rollup.config.js` now.
### Further tasks
The basic setup is now already done! All I have to do now is to install the dependencies I'm using for my project (like leaflet, contentful, sass, etc.) and replace the generated `App.svelte` file with my actual application files.
Since I'm using some environment variables I also have to prefix the variable names in `.env` with `VITE_` and replace all `process.env.VARIABLE`s with `import.meta.env.VITE_VARIABLE` in the source files.
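As an illustration, a lookup changes like this (the Contentful variable name here is hypothetical):

```js
// Before (Rollup, with @rollup/plugin-replace)
const spaceId = process.env.CONTENTFUL_SPACE_ID;

// After (Vite) -- note the required VITE_ prefix
const spaceId = import.meta.env.VITE_CONTENTFUL_SPACE_ID;
```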
### Bonus task: Run ncu once again and update!
I then checked the dependencies again with ncu:
```bash
➜ vgnmap git:(feat/replace-rollup) ncu
Checking /Users/hendrik/dev/vgnmap/package.json
[====================] 16/16 100%
@playwright/test 1.41.2 → 1.42.1
@types/node 20.11.19 → 20.11.24
contentful 10.6.21 → 10.6.22
prettier-plugin-svelte 3.2.1 → 3.2.2
svelte 4.2.11 → 4.2.12
svelte-check 3.6.4 → 3.6.6
typescript 5.2.2 → 5.3.3
```
This time there were only minor updates. But while I'm at it, I'll also install these updates! I simply run `ncu -u` followed by `npm install`!
## Comparison: build times
Let's take a look at the times required for the production build:
Using Rollup.js:
```bash
➜ vgnmap git:(main) ✗ npm run build
> svelte-app@1.0.0 build
> rollup -c
src/main.ts → public/build/bundle.js...
created public/build/bundle.js in 2.8s
```
Using Vite:
```bash
➜ vgnmap git:(feat/replace-rollup) npm run build
> vgnmap@1.1.0 build
> vite build
✓ 46 modules transformed.
dist/index.html 1.23 kB │ gzip: 0.66 kB
dist/assets/index-DfWY1ihM.css 19.01 kB │ gzip: 7.45 kB
dist/assets/index-CPtvyui6.js 265.34 kB │ gzip: 84.48 kB
✓ built in 857ms
```
As we can see, the build for the same application is two seconds faster with Vite than with Rollup. Furthermore: Vite also works noticeably faster in development mode! 💛
### Netlify deployment issues 🚨
Two notes if you also deploy your application to Netlify:
- Don't forget to update the names of your environment variables on netlify.app!
- Vite places the build output in the `dist` folder, so you have to change your Netlify deployment settings, otherwise you'll get a 404 error!
## References:
For reference you can take a look at these commits:
- [b489e8c - Introduce vite](https://github.com/thr0n/vgnmap/pull/3/commits/b489e8ce0af15103a7777fd7ada1fbed1cdb4683)
- [ab4e7b6 - Drop rollup, run prettier](https://github.com/thr0n/vgnmap/pull/3/commits/ab4e7b617aca7c498f4ef4e39e1952a66a379c2b)
For general information about Vite see: https://vitejs.dev/guide/ | thr0n |
1,778,149 | What is JavaScript? | JavaScript, often abbreviated JS, is a programming language that is one of the core technologies of... | 0 | 2024-03-02T14:42:14 | https://dev.to/lav-01/what-is-javascript-5glm | javascript, beginners | JavaScript, often abbreviated as JS, is a programming language that is one of the core technologies of the World Wide Web, alongside HTML and CSS. It lets us add interactivity to pages; for example, you might have seen sliders, alerts, click interactions, and popups on different websites, all of which are built using JavaScript. Apart from the browser, it is also used in non-browser environments, such as Node.js for writing server-side code, Electron for desktop applications, React Native for mobile applications, and so on. | lav-01 |
1,778,256 | Error monitoring and bug triage: Whose job is it? | The invisible and thankless work of determining the right things to fix | 0 | 2024-03-02T16:23:42 | https://jenchan.biz/blog/error-monitoring-and-bug-triage | bugs, agile, career, discuss | ---
title: "Error monitoring and bug triage: Whose job is it?"
published: true
description: The invisible and thankless work of determining the right things to fix
tags: bugs, agile, career, discuss
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4q6lgxl65gnm32cwjk3s.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-03-02 16:01 +0000
canonical_url: https://jenchan.biz/blog/error-monitoring-and-bug-triage
---
I first learned issue triage as a support drone working on a product that had 300 municipal customers. Several times a day, I'd hop on a call with a CPO to (hassle him to) make decisions on escalated customer issues and relay the handoff or feedback to and from developers. Later these skills translated to subsequent dev roles where discussing triage and related issues seemed a lightning rod depending on the environment. At that point, my role as not-real-dev and troubleshooter was to be a buffer between devs and PMs.
With the continued tightening of engineering budgets in 2024, most workplaces are likely expected to do more with less, so [triage and prioritization](https://www.jenchan.biz/blog/bug-triage) is a great skill to have in your back pocket that AI won't get right yet.
## Competing incentives
From a non-engineering perspective, any maintenance or tech debt can be thought of as a nuisance, taking your most expensive resource away from the activities that generate revenue: building and shipping new features.
I'm also not surprised if developers don't eagerly volunteer to bug sweep on Sentry or resolve customer support fires.
Maintenance is unsexy; it doesn't directly support value delivery. Shipping a hotfix for something found the day after release doesn't earn the same payoff as working all night to ship a feature.
I haven't seen anyone celebrated or thanked for reducing error logs, patching a bug or backfilling a feature a customer might be losing their mind on.
And yet, both log monitoring and bug triage are crucial to maintaining the ability to ship fast and resolve errors quickly.
## Someone's gotta do it
If someone is already doing triage on a somewhat-regular basis, it's probably a product manager, engineering manager, QA or lead dev, or a support person who's combing through like a lone wolf and making independent decisions or chasing down the information they need to make escalations.
If the manager isn't technical, often some unlucky dev gets assigned a ticket they have no idea how to fix or reproduce, and hopefully the original implementing devs are around to advise why they wrote the code the way they did.
If there's an on-call or rotating support role on a team this might be handled periodically with no shared context or clear decision maker. This is still better than nothing, but the impact ends at delegation and the loop never gets completed on how certain bugs could be prioritized or delegated based on domain or sprint and quarterly goals.
Ideally a core group composed of product, dev, QA leaders meet frequently to prioritize bugs surfaced by support and dev. Unless a company is highly collaborative and mature in agile processes, this doesn't happen as often as it should.
The best triage experiences I had involve a max of 4 people, with SMEs who have done the bug reproduction or understand the backlog item to evaluate effort. The SME likely needs to be fairly confident with their craft or know who worked on a particular domain of the codebase to offer estimates and severity, in addition to considering the impact of taking on the fix to their current and upcoming work.
Triage is only productive if the group agrees on a decision maker. Engineers who waffle on details too long need to be reminded that ultimately everyone is there to understand the viability and effort of a fix. Decision makers also need to be unafraid to hear out contrasting opinions and to make difficult decisions quickly given the tradeoffs and constrained time. Triage meetings go overtime when managers won't commit to decisions out of fear. In that situation, it might move things along to ask a question and follow it up with your "recommendation" to give them an easy choice, then move on to the next thing instead of keeping them around to hem and haw.
## The cycle of neglect
Some version of the following happens when bugs and regressions aren't triaged with input from different SMEs:
- Managers delegate bugs to fill capacity in sprints to the brim, assuming an ever-present "nice-to-have" train of bugs devs could grab from when all their sprint work is done.
<center>👇🏻</center>
- Developers are assigned bugs with little-to-no context or severity, and no one is assigned to reproduce or investigate the bug.
<center>👇🏻</center>
- Every bug from previous releases gets indiscriminately spread across current or upcoming sprints. Cosmetic, pixel-pushing changes are mislabelled "medium" priority, while bugs that are actually feature requests get labelled high priority and become death-march feature backfills.
<center>👇🏻</center>
- Developers get pulled off sprint work to debug and bandaid the latest production fire.
<center>👇🏻</center>
- The team wastes time on low-value fixes; the backlog keeps growing, which ends up frustrating much higher-level execs who wonder why velocity and burndown never match the issue intake, since they have no visibility into the cause of a bloated backlog in the first place.
<center>👇🏻</center>
- Everyone contentedly ignores error logs and focuses on the next urgent fires until some other leader escalates an issue to an exec, who is inevitably going to wonder, "Why didn't we catch this sooner?"
<center>👇🏻</center>
- No attempt is made to plan for the future because you're already off course from pre-existing sprint goals that should have been done last quarter, and developers are already overwhelmed as is. All energy for improvement is exhausted by too many incoming tasks.
<center>👇🏻</center>
- The cycle repeats (and some uptight dev blogs about it)
## Process introduction as lightning rod
Bug triage can be a touchy process to introduce, especially if there are unclear roles and many titles. If meetings are more often used to re-announce already-made decisions and work culture isn't particularly collaborative, introducing triage isn't going to bode well.
If no one has been doing it, it's highly likely every party believes someone else should be responsible for triage.
If you start doing it but no one knows, then you cheat yourself out of the recognition of technical leadership for helping your team become more effective.
If you start asking managers to participate you might get panicked faces or refusal to be a part of a new process someone else came up with.
If you start (god forbid) showing people how to do it, the people who should be doing it might feel like they're being told how to do their job!
In all likelihood, people already recognize gaps in planning and prioritizing, but are too overwhelmed to fit _just one more meeting_ in.
## Summary
On large cross functional teams with no clear decision maker, triage is often skipped or delegated away in the interest of keeping the greatest number of people working on sprint goals. This seems like a missed opportunity for preventing future fires and improving incident response.
Any tools or systems set up for it are only helpful for firefighting or customer support insofar as the right errors are handled and captured.
Without rostered or routine patrols of logs we don't have a picture of what's not working well, and how it fits within org goals.
If you set up logging or process and never check it, it's not being leveraged fully as part of quality or incident management.
Weigh out the fights you're unintentionally starting just by exercising your knowledge on process efficiency. Process improvement is of less concern if a company is prone to behave reactively. | jenc |
1,778,302 | React Strict DOM package | Hi Dev's After the React team announcement to all the improvements that React v19 will bring,... | 0 | 2024-03-02T18:49:44 | https://dev.to/ricardogesteves/react-strict-dom-package-1og1 | webdev, react, javascript, news | Hi Dev's
After the React team's announcement of all the improvements that React v19 will bring, including the exciting introduction of a compiler, the team is now working on a truly promising package that I believe is worth your time.
In the dynamic landscape of web and native application development, achieving cross-platform consistency while preserving performance and reliability remains a formidable challenge. Meta's recent release of React Strict DOM introduces a paradigm shift in this realm. In this expansive exploration, we embark on a journey to uncover the depths of React Strict DOM, elucidating its novel features, advantages, disadvantages, and the transformative potential it holds for universal React component development.
**The Genesis of React Strict DOM: A New Era in Universal Components**
React Strict DOM emerges as a revolutionary addition to the React ecosystem, poised to redefine the way developers create and deploy universal components for web and native platforms. By leveraging a novel integration of React DOM and StyleX, React Strict DOM empowers developers to seamlessly craft styled React components that transcend platform boundaries.
**Understanding the New React Strict DOM Package: Features and Purpose**
At its core, React Strict DOM serves as an experimental integration of React DOM and StyleX, offering a subset of imperative DOM and CSS functionalities tailored to support both web and native targets. The primary goal of React Strict DOM is to streamline and standardize the development of styled React components, enhancing development speed, efficiency, and code maintainability across diverse platforms.
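To get a feel for the API, here is a minimal sketch based on the examples in the project's README at the time of writing; since the package is experimental, treat the exact imports and component names as assumptions:

```jsx
import { css, html } from 'react-strict-dom';

// Styles are defined with StyleX-style atomic rules
const styles = css.create({
  card: {
    backgroundColor: 'white',
    padding: 16,
  },
  title: {
    fontSize: 20,
  },
});

// html.* elements render to DOM elements on web, and are intended
// to map to equivalent native primitives via React Native
export function Card({ children }) {
  return (
    <html.div style={styles.card}>
      <html.span style={styles.title}>{children}</html.span>
    </html.div>
  );
}
```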
**Advantages of React Strict DOM: Accelerating Development Efficiency**
1. **Speed and Efficiency**: React Strict DOM enhances the speed and efficiency of React development by providing a unified platform-agnostic API for creating styled components.
2. **Performance and Quality**: Despite its experimental nature, React Strict DOM does not compromise on performance, reliability, or code quality, ensuring optimal user experiences across platforms.
3. **Standardization**: By standardizing the development process, React Strict DOM promotes consistency and code maintainability, facilitating collaboration and code reuse across projects.
**Disadvantages of React Strict DOM: Addressing Limitations and Challenges**
1. **Experimental Nature**: As an experimental integration, React Strict DOM may entail potential issues or limitations that are yet to be fully explored or documented, necessitating careful consideration and testing.
2. **Compatibility Challenges**: While React Strict DOM aims to support both web and native platforms, compatibility with React Native remains a work in progress, presenting challenges for developers seeking seamless integration across diverse environments.
**Solving the Development Dilemma: Addressing Cross-Platform Challenges**
React Strict DOM addresses the perennial challenge of developing styled React components for both web and native platforms in a standardized, efficient, and performant manner. By leveraging the strengths of React DOM and StyleX, React Strict DOM bridges the gap between web and native development paradigms, empowering developers to create universal components with ease.
**The Concept Behind React Strict DOM: Polyfills and Web API Integration**
At its conceptual core, React Strict DOM builds upon the design goals of the "React DOM for Native proposal" by polyfilling a myriad of standard APIs and leveraging emerging web capabilities in React Native, such as DOM traversal and layout APIs. By integrating these capabilities with a well-defined event loop processing model, React Strict DOM paves the way for seamless cross-platform development experiences.
**Exploring React Strict DOM Through Code: A Technical Deep Dive**
Creating components with react-strict-dom
react-strict-dom is powered by StyleX, a new styling solution created by Meta that already powers facebook.com in production. It ships with the package under the `css` module, and all the building blocks for your app are available under `html`. This is what building a UI with RSD looks like:
```javascript
import React from "react";
import { css, html } from "react-strict-dom";
export default function App() {
return (
<html.div style={styles.div}>
<html.div data-testid="testid">div</html.div>
<html.span>span</html.span>
<html.p>paragraph</html.p>
<html.div />
<html.span>
<html.a href="https://google.com">anchor</html.a>,
<html.code>code</html.code>,<html.em>em</html.em>,
<html.strong>strong</html.strong>,
<html.span>
H<html.sub>2</html.sub>0
</html.span>
,<html.span>
E=mc<html.sup>2</html.sup>
</html.span>
</html.span>
</html.div>
);
}
const styles = css.create({
div: {
paddingBottom: 50,
paddingTop: 50,
backgroundColor: "white",
},
});
```
react-strict-dom is leveraging APIs that we know from the Web to build universal apps.
Is `<html.div>` a native component?
Yes, it is! The role of react-strict-dom is to translate one universal API into each platform's primitives.
Let's delve into a series of intricate code examples that showcase the power and versatility of React Strict DOM:
- 1. **Defining Styles with StyleX**
```javascript
import { css } from 'react-strict-dom';
const styles = css.create({
container: {
flexDirection: 'row',
justifyContent: 'center',
alignItems: 'center',
backgroundColor: 'lightblue',
padding: 20,
borderRadius: 8,
},
text: {
fontSize: 18,
fontWeight: 'bold',
color: 'white',
},
});
```
- 2. **Creating a Styled Component**
```javascript
import { html } from 'react-strict-dom';
const StyledComponent = () => {
return (
<html.div style={styles.container}>
<html.span style={styles.text}>Styled Component</html.span>
</html.div>
);
};
export default StyledComponent;
```
- 3. **Rendering Text Elements**
```javascript
const TextComponent = () => {
return (
<html.div>
<html.p>This is a paragraph.</html.p>
<html.span>This is a span.</html.span>
<html.h1>This is a heading.</html.h1>
</html.div>
);
};
```
- 4. **Working with Lists**
```javascript
const ListComponent = () => {
const items = ['Item 1', 'Item 2', 'Item 3'];
return (
<html.ul>
{items.map((item, index) => (
<html.li key={index}>{item}</html.li>
))}
</html.ul>
);
};
```
- 5. **Handling Events**
```javascript
const ButtonComponent = () => {
const handleClick = () => {
console.log('Button clicked');
};
return (
<html.button onClick={handleClick}>Click Me</html.button>
);
};
```
- 6. **Conditionally Rendering Components**
```javascript
const ConditionalComponent = ({ condition }) => {
return condition ? <html.div>Condition is true</html.div> : <html.div>Condition is false</html.div>;
};
```
- 7. **Passing Props to Components**
```javascript
const PropsComponent = ({ name }) => {
return <html.div>Hello, {name}!</html.div>;
};
```
- 8. **Using Fragments**
```javascript
const FragmentComponent = () => {
return (
<>
<html.div>Fragment Component</html.div>
<html.div>Another Fragment Component</html.div>
</>
);
};
```
- 9. **Styling Components Inline**
```javascript
const InlineStyleComponent = () => {
const inlineStyle = { color: 'red', fontSize: '18px' };
return <html.div style={inlineStyle}>Inline Style Component</html.div>;
};
```
- 10. **Implementing Component Lifecycle Methods**
```javascript
class LifecycleComponent extends React.Component {
componentDidMount() {
console.log('Component mounted');
}
componentWillUnmount() {
console.log('Component will unmount');
}
render() {
return <html.div>Lifecycle Component</html.div>;
}
}
```
- 11. **Using Hooks**
```javascript
import { useState } from 'react'; // hooks come from React itself, not react-strict-dom
const HooksComponent = () => {
const [count, setCount] = useState(0);
return (
<>
<html.div>Count: {count}</html.div>
<html.button onClick={() => setCount(count + 1)}>Increment</html.button>
</>
);
};
```
- 12. **Handling Form Inputs**
```javascript
const FormComponent = () => {
const [value, setValue] = useState('');
const handleChange = (e) => {
setValue(e.target.value);
};
return (
<html.div>
<html.input type="text" value={value} onChange={handleChange} />
<html.div>Typed Value: {value}</html.div>
</html.div>
);
};
```
- 13. **Using Context API**
```javascript
import { createContext, useContext } from 'react'; // context APIs come from React itself
const ThemeContext = createContext('light');
const ThemeComponent = () => {
const theme = useContext(ThemeContext);
return <html.div>Current Theme: {theme}</html.div>;
};
```
- 14. **Implementing Error Boundaries**
```javascript
class ErrorBoundary extends React.Component {
constructor(props) {
super(props);
this.state = { hasError: false };
}
componentDidCatch(error, errorInfo) {
console.error('Error caught:', error);
this.setState({ hasError: true });
}
render() {
if (this.state.hasError) {
return <html.div>Error Encountered!</html.div>;
}
return this.props.children;
}
}
```
- 15. **Implementing Higher-Order Components**
```javascript
const withLogger = (Component) => {
return class extends React.Component {
componentDidMount() {
console.log('Component mounted:', Component.name);
}
render() {
return <Component {...this.props} />;
}
};
};
const EnhancedComponent = withLogger(MyComponent); // MyComponent: any component defined elsewhere
```
**Conclusion: Embracing the Future of Cross-Platform Development**
In conclusion, React Strict DOM emerges as a game-changing innovation in the realm of universal React component development. By offering a standardized, efficient, and performant solution for crafting styled components across web and native platforms, React Strict DOM heralds a new era of cross-platform development. As developers embrace the transformative potential of React Strict DOM, they unlock new possibilities for creating immersive, platform-agnostic user experiences that transcend traditional boundaries. Welcome to the future of cross-platform development with React Strict DOM.
It's still in an experimental phase, but it looks really promising.
Follow me **@ricardogesteves**
[X(twitter)](https://twitter.com/ricardogesteves)
{% embed https://github.com/RicardoGEsteves %} | ricardogesteves |
1,778,426 | Uncovering Generative Artificial Intelligence and LLMs: A Brief Introduction | With the popularization of tools such as ChatGPT, Google Bard (currently Gemini), and other similar applications, which generate responses based on what the user asks, the machinery behind these innovations also came to light. | 0 | 2024-03-02T23:00:20 | https://dev.to/yuricosta/uncovering-generative-artificial-intelligence-and-llms-a-brief-introduction-4ge2 | gpt4, generativeai, llm, largelanguagemodels | ---
title: Uncovering Generative Artificial Intelligence and LLMs: A Brief Introduction
published: true
description: With the popularization of tools such as ChatGPT, Google Bard (currently Gemini), and other similar applications, which generate responses based on what the user asks, the machinery behind these innovations also came to light.
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v9htuti7qog7hr9mis01.png
tags: #gpt4 #generativeai #llm #largelanguagemodels
---
With the popularization of tools such as ChatGPT, Google Bard (currently Gemini), and other similar applications, which generate responses based on what the user asks, the machinery behind these innovations also came to light.

We call the set of these technologies **Generative Artificial Intelligence**. Under the hood, Generative AI consists of algorithms that generate content based on patterns recognized in previously learned data.
Different from other categories of Machine Learning, Generative AIs work by **synthesizing new data instead of just predicting and making decisions**.
Generative AI became more feasible thanks to several converging factors: the evolution of statistical models and linguistic pattern analysis since the 80s, advances in Machine Learning algorithms, the exponential growth of data and information on the internet, and greater storage capacity driven by technical improvements and cheaper hardware.
The **LLMs (Large Language Models)** emerged from this reality of abundant data. They are Machine Learning models focused on working with Natural Language Processing, that is, **processing, understanding, interpreting, and generating human language**.
With a vast amount of data learned, the LLMs can detect existing patterns and relationships between the words in a given sentence.
From there, they are able to make predictions of the possible next words, generating more meaningful answers.
The applications of LLMs are diverse, from developing dialogues to generating text, code, images, audio, video, and much more. You can also use video, audio, and image as data input.
Today, it's possible to utilize LLMs to create chatbots and virtual assistants, code generators and correctors, sentiment analysis, text classification and grouping, translation, summarization, you name it.
The future of Generative AI promises even more improvements and possibilities, evolving how we interact and utilize the technology in our daily lives.
Please share this post with your friends.
#### References:
- Large Language Models (LLM) - Databricks
- Generative AI with LangChain - Ben Auffart
- Desenvolvendo aplicativos com GPT-4 e ChatGPT - Olivier Caelen e Maria-Alice Blete
| yuricosta |
1,778,521 | A simple tip to find hidden gems in Shodan | Shodan is a well-known recon tool, but in larger scopes, it has so many results that it’s hard to... | 0 | 2024-03-03T03:55:28 | https://dev.to/menna/a-simple-tip-to-find-hidden-gems-in-shodan-2c92 | security, infosec, cybersecurity | Shodan is a well-known recon tool, but in larger scopes, it has so many results that it’s hard to find something useful without navigating through all the results pages.

In this image searching for hostnames from Microsoft we got +100k results. It would be a TON of work going through 20 pages of results trying to find something.
### That's when the 'facets' search comes into play
Facets are a set of filters that can help with your search. Some basic filters are ‘country’, ‘city’, ‘ssl cert’, and so on.

Personally, the filter that helps me the most to find some interesting stuff for pentests and bug bounties is the ‘http.title’. In many cases, there will be some repetitive titles with an error message or a default response for pages without content.
So instead of going through 20 pages of results, you get a list that shows each title only once, sorted by number of occurrences.
By doing that, we can focus on the titles that show up only once or twice in the whole search. That's where we can find misconfigured services, subdomains that shouldn't be public, internal dashboards, and much more.

Usually I don't bother looking at the most common titles; the focus is on the ones with only a few appearances.

In this image, we can see that we have some titles that get our attention.
Usually I try to look for titles that contain some keywords like "Dashboard", "Welcome", "Internal" and so on.
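As a quick sketch of this triage, the helper below (the function and variable names are mine, not part of any Shodan API) takes a list of facet results shaped like Shodan's `http.title` facet output, an array of `{ value, count }` objects, and keeps only the rare titles plus any that match interesting keywords:

```javascript
// Sketch: filter facet results down to rare or keyword-matching titles.
// The input shape mimics Shodan's facet response: [{ value, count }, ...].
function findInterestingTitles(facets, { maxCount = 2, keywords = [] } = {}) {
  const lowered = keywords.map((k) => k.toLowerCase());
  return facets.filter(
    ({ value, count }) =>
      count <= maxCount || // rare titles
      lowered.some((k) => value.toLowerCase().includes(k)) // keyword hits
  );
}

const sample = [
  { value: '404 Not Found', count: 9500 },
  { value: 'IIS Windows Server', count: 4200 },
  { value: 'Internal Admin Dashboard', count: 2 },
  { value: 'Welcome to Jenkins', count: 1 },
];

console.log(findInterestingTitles(sample, { keywords: ['dashboard', 'welcome'] }));
```

With the real Shodan REST API or Python library, you would feed in the `http.title` facet array from a count query; the filtering logic stays the same.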
From now on, you just gotta dig and look for more. | menna |
1,778,552 | The Magic of Reactivity and Data Binding in Native JavaScript | Reactivity: The Secret Sauce of the Web To put it simply, reactivity means that when you... | 0 | 2024-03-03T05:56:17 | https://oatta.codes/notes/the-magic-of-reactivity-and-data-binding-in-native-javascript | javascript, vue, react, svelte | ## **Reactivity: The Secret Sauce of the Web**
To put it simply, reactivity means that when you update a piece of data or a variable, every part of your webpage that uses or displays that data updates in real time. This automatic synchronization between the data and the UI elements ensures that your webpage always shows the most current information without needing manual refreshes or updates.
## How Major Frameworks Handle Reactivity
In the land of web development, each team — whether it’s Team React, Team Vue, or Team Angular — plays the game a bit differently. React, for instance, has this cool trick up its sleeve called the Virtual DOM. It's like having a shadow copy of your web page that updates every time you make a change. This way, React knows exactly what needs to change on the real page, making updates smooth and fast.
Vue plays the game with a twist, using some smart moves to only change parts of the page that really need it, which means it can be quicker than React when it comes to making updates. React gives you the tools to make things speedy, too, but it's a bit like doing your own magic tricks — you've got to learn the ropes.
Then there's the new kid, Svelte, which kind of reminds us of the old-school way of doing things directly with the web page, like jQuery used to. But don’t worry, Svelte has some neat tricks to make it easier and more efficient, so it's not like going back to the stone age.
While all these tools are awesome for making web pages that respond to our every command, they sort of keep the magic under wraps. So, why not try making our own little reactivity spell with plain old JavaScript? It's like cooking your favorite dish from scratch — you get to see exactly what goes into it and appreciate the flavors even more. Let’s dive in and stir up some fun!
## Diving Into Native JavaScript Reactivity
Ever had a friend who shares every tiny detail of their day? In JavaScript, that's what Observables are like. These digital chatterboxes keep your app informed about any data changes, acting as the behind-the-scenes heroes that ensure your application's data is as current as the latest news.
## Understanding the Observable Pattern
The Observable pattern is like a newsletter subscription for your code. Just like you subscribe to get updates on your favorite topics, your code can "subscribe" to data it's interested in. Here’s the deal: an Observable is a data source, and it can send out updates to anyone who's subscribed. These subscribers are just parts of your code waiting eagerly, like fans at a concert, to react whenever their favorite band (or in this case, data) hits a new note.
## Implementing an Observable Class
Roll up your sleeves; it's DIY time! First, you'll need a class that keeps track of who's subscribed to the newsletter (our Observable). Let's call it `Observable`. This class will have a list (or array) of subscribers and methods to add or remove subscribers, because, let’s face it, not everyone wants to hear about every single detail.
```jsx
class Observable {
#value;
#subscribers = [];
constructor(value) {
this.#value = value;
}
get value() {
return this.#value;
}
set value(newValue) {
this.#value = newValue;
this.notify();
}
subscribe(observer) {
this.#subscribers.push(observer);
}
unsubscribe(observer) {
this.#subscribers = this.#subscribers.filter(sub => sub !== observer);
}
notify() {
this.#subscribers.forEach(subscriber => subscriber(this.#value));
}
}
```
`#value` and `#subscribers` are private fields, indicated by the `#` prefix. This means they can't be accessed or modified directly from outside the class.
The `subscribe` method is like adding a new friend to your group chat, `unsubscribe` is, well, when someone leaves the chat (it happens), and `notify` is sending out a message to everyone still in the chat. Whenever something noteworthy happens (like updating data), `notify` loops through all the subscribers and updates them with the new piece of info. It’s like saying, "Hey, listen up! Here’s what’s new!"
The `value` property holds the current state. The `get` method allows you to access this state, while the `set` method lets you update it. When the state changes via the setter, the Observable class notifies all subscribers about the update, ensuring everyone is informed about the latest state.
In plain JavaScript this can be used like the following:
```jsx
const name = new Observable("Chanandler Bong");
name.subscribe((newName) => console.log(`Name changed to: ${newName}`));
name.value = "Chandler Bing!";
// Console Log: Name changed to: Chandler Bing!
```
## Computed Class: The Clever Cousin of Observables
A Computed class is essentially a special kind of observable that doesn't hold its own state in the traditional sense. Instead, its state is derived from other observables, the dependencies. Whenever any of these dependencies change, our Computed buddy goes, "Ah, something's new! Time to update my own value." It does this by re-running a function you give it, which computes its new value based on the latest states of its dependencies.
Let's code this out. Our Computed class will take a function and a list of observables (dependencies) in its constructor. It listens to these observables and updates its value by re-running the function whenever any dependency changes.
```jsx
class Computed extends Observable {
#dependencies;
constructor(computeFunc, dependencies) {
// Call the Observable constructor with the initial computed value
super(computeFunc());
this.#dependencies = dependencies;
// Define listener to run on dependencies change
const listener = () => {
this.value = computeFunc();
};
// Subscribe to each dependency and run the listener
this.#dependencies.forEach(dep => dep.subscribe(listener));
}
}
```
In this `Computed` class, we extend our `Observable` class because, well, a computed value is also observable! It can have subscribers that want to be notified when it changes, just like any regular observable.
The magic happens in the `listener` function, which recalculates the value whenever any dependency changes and updates `this.value` (inherited from `Observable`), triggering notifications to any subscribers.
Doesn’t seem too difficult, right? Let’s see an example of how the computed class will be used.
Imagine a day at Central Perk where Chandler's jokes lighten the mood, but Ross's **“WE WERE ON A BREAK”** reminders of his break with Rachel bring it down. We'll use `Observable` instances for both Chandler's jokes and Ross's reminders, and a `Computed` instance to represent the overall mood.
```jsx
const chandlerJokes = new Observable(0);
const rossBreakReminders = new Observable(0);
// A function to compute the Central Perk mood based on jokes and reminders
const computeMood = () => {
if (chandlerJokes.value > rossBreakReminders.value) {
return "Happy";
} else if (chandlerJokes.value < rossBreakReminders.value) {
return "Tense";
}
return "Neutral";
};
const centralPerkMood = new Computed(
computeMood,
[chandlerJokes, rossBreakReminders]
);
// Function to log the mood
function logMood(mood) {
console.log(`The mood in Central Perk is ${mood}.`);
}
// Subscribe to mood changes
centralPerkMood.subscribe(logMood);
// Simulate changes throughout the day
chandlerJokes.value += 3;
// Logs: The mood in Central Perk is Happy.
rossBreakReminders.value += 1;
// Logs: The mood in Central Perk is Happy.
rossBreakReminders.value += 3; // Ross brings up the break three more times
// Logs: The mood in Central Perk is Tense.
chandlerJokes.value += 1; // Chandler evens it out 4-4
// Logs: The mood in Central Perk is Neutral.
```
## **Making Reactivity Work For You: Beyond the Console**
We've mastered observables and computed values in the console, but the real magic unfolds on the screen where user interactions bring our code to life. Now, let's spotlight our Observable and Computed classes in a dynamic UI setting.
Imagine you're creating a shopping list app. Users can add items they need to buy, and the app displays the total number of items. As items are added or removed, the total updates in real-time. Sounds like a job for our observables!
**HTML Setup**
```html
<input type="text" id="newItem">
<button id="addItem">Add Item</button>
<div>Total items: <span id="itemCountEl">0</span></div>
<ul id="itemListEl"></ul>
```
First, we lay out our scene with a bit of HTML. We need an input field for new items, a button to add them, and a place to show the total count.
**Casting Our Observables**
```jsx
const itemList = new Observable([]);
const itemCount = new Computed(() => itemList.value.length, [itemList]);
```
Next, we introduce our data: `itemList`, an observable array to track the shopping items, and `itemCount`, a computed value that reflects the length of `itemList`.
**Directing the Interaction**
```jsx
// Element ids (addItem, newItem, etc.) are exposed as globals by the browser
addItem.addEventListener('click', () => {
  if (newItem.value) { // only add non-empty items
    itemList.value = [...itemList.value, newItem.value]; // Add the new item
    newItem.value = ''; // Reset input field
  }
});
// Subscribe to update the UI whenever itemCount changes
itemCount.subscribe(count => {
itemCountEl.textContent = count;
itemListEl.innerHTML = itemList.value.map(item => `<li>${item}</li>`).join('');
});
```
With our cast ready, it's time to direct the action. We need to update `itemList` when users add new items and ensure `itemCount` automatically reflects these changes on the screen.
Finally, as the curtain rises, our app springs to life. Users add items, and like a well-oiled machine, our observables and computed values ensure the UI stays perfectly in sync, updating the total item count in real-time. Here is the whole thing in action:
{% codepen https://codepen.io/omaratta/pen/RwOwqge %}
## The Power of Proxy Objects
### What's a Proxy?
A Proxy object wraps another object and intercepts the operations you perform on it, acting as a middleman. Think of it as having a personal assistant who filters your calls and messages, passing through only what you've asked to be notified about. This interception ability makes Proxy a perfect tool for creating reactive data structures.
Let's embark on a journey to implement reactivity using Proxy objects. Our goal? To create a reactive system where changes to our data automatically update the UI, without manually attaching listeners to every piece of data.
### The Mechanics of Proxy Objects
A Proxy object requires two things to come to life: the target object you want to wrap and a handler object that defines the behavior (the set of actions you want to take) when interacting with the target. The handler object can specify a number of "traps," which are methods that provide property access control. These traps include getters for retrieving property values, setters for updating values, and many others for different operations.
1. **Target**: This is the original object you want to wrap with a Proxy. It can be anything from an array to an object.
2. **Handler**: This object defines the behavior of the Proxy. It contains traps for operations like reading a property (get) or writing to a property (set).
Creating a Proxy looks something like this:
```jsx
const target = {}; // Your original object
const handler = {
get(target, prop, receiver) {
// Define behavior for reading a prop
},
set(target, prop, value, receiver) {
// Define behavior for setting a prop
},
// Other traps...
};
const proxy = new Proxy(target, handler);
```
With this setup, any action on `proxy` goes through the handler, allowing for controlled interaction with `target`.
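To make the traps concrete, here is a minimal runnable sketch (the variable values are mine) where the handler simply logs every read and write while forwarding the operation to the target via `Reflect`:

```javascript
// Minimal Proxy demo: log every property read and write on a plain object.
const target = { name: 'Chandler' };

const handler = {
  get(target, prop, receiver) {
    console.log(`Reading "${String(prop)}"`);
    return Reflect.get(target, prop, receiver); // default read behavior
  },
  set(target, prop, value, receiver) {
    console.log(`Writing "${String(prop)}" = ${value}`);
    return Reflect.set(target, prop, value, receiver); // default write behavior
  },
};

const proxy = new Proxy(target, handler);

proxy.name;               // Logs: Reading "name"
proxy.name = 'Bing';      // Logs: Writing "name" = Bing
console.log(target.name); // The underlying object was updated: Bing
```

Using `Reflect.get`/`Reflect.set` inside traps is the idiomatic way to preserve the default behavior while layering your own logic on top.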
### Going Long: Example on the Game of Reactivity in JavaScript
Let's create a reactive example inspired by the iconic "Friends" Thanksgiving football game, where we'll track the scores of the two teams: "Team Monica" and "Team Ross." Using a Proxy object, we'll ensure that every time a team scores, the scoreboard is automatically updated.
**UI prep**
```html
<div class="teams">
<div class="team">
<div class="name">Team Ross</div>
<div class="score" id="teamRossScore">0</div>
</div>
<div class="team">
<div class="name">Team Monica</div>
<div class="score" id="teamMonicaScore">0</div>
</div>
</div>
```
**Setting Up the Game Scoreboard**
```jsx
const thanksgivingGameScores = {
teamMonica: 0,
teamRoss: 0
};
```
Our handler will intercept score updates and ensure the scoreboard reflects these changes in real time.
```jsx
function updateScoreboard(scores) {
teamRossScore.textContent = scores.teamRoss;
teamMonicaScore.textContent = scores.teamMonica;
};
const scoreHandler = {
set(target, team, newScore) {
console.log(`${team} scores! New score: ${newScore}`);
target[team] = newScore; // Update the team's score
// (Trap) Call to update the scoreboard UI
updateScoreboard(target);
return true; // Indicate successful score update
}
};
```
**Wrapping the Scores with a Proxy**
Next, we encapsulate our game scores within a Proxy to monitor and react to changes.
```jsx
const reactiveScores = new Proxy(thanksgivingGameScores, scoreHandler);
```
### Playing **the Game**
Let’s simulate the game play:
```jsx
setInterval(() => {
// Decide randomly which team scores
if (Math.random() < 0.5) {
reactiveScores.teamMonica += 1;
} else {
reactiveScores.teamRoss += 1;
}
}, 1000); // Update scores every 1 second
```
As each team scores, our Proxy intercepts the updates, logs the score change, and calls `updateScoreboard` to refresh the displayed scores, keeping the audience engaged with the latest game developments. This example showcases the dynamic nature of Proxy objects in creating interactive and responsive web experiences. Here is the whole thing in action with some bad styling:
{% codepen https://codepen.io/omaratta/pen/yLrLGzp %}
## Key Takeaways on Reactivity: Proxy Objects and Observables
In wrapping up our exploration of reactivity in JavaScript, it's clear that both Proxy objects and Observables serve as foundational elements to grasp the broader concept of reactivity. Here's a succinct recap:
**Proxy Objects** offer a built-in way to intercept and manage interactions with objects, enabling automatic updates and notifications when data changes.
**Observables** provide a pattern for subscribing to data changes and broadcasting updates, crucial for keeping application state consistent.
These concepts, while not exhaustive in the realm of reactivity, lay the groundwork for understanding how dynamic updates can be achieved in web applications. They also offer insights into the mechanics behind some of the reactivity features in popular frameworks.
Understanding Proxy objects and Observables equips you with the basic tools to start seeing the underlying logic of reactivity in your favorite frameworks. This knowledge is not just theoretical; it's a stepping stone to implementing reactivity in your projects and possibly enhancing how you work with existing frameworks. Reactivity is a core principle in modern web development, and mastering these concepts is key to building responsive and intuitive applications.
## ⚠️ IMPORTANT ⚠️
Palestinian children are under attack. Call for [ceasefire](https://ceasefiretoday.com/) and consider donating to [UNRWA](https://donate.unrwa.org/one-time/~my-donation) or [PCRF](https://www.pcrf.net/).
{% twitter 1761149433847099834 %} | omaratta212 |
1,778,683 | What You See is What You Get - Building a Verifiable Enclave Image | Table of Contents 1. Obstacle of proofing TEE 1.1. Image digest is... | 0 | 2024-03-03T10:30:06 | https://blog.richardfan.xyz/2024/03/03/what-you-see-is-what-you-get-building-a-verifiable-enclave-image.html | aws, nitroenclaves, sigstore, supplychainsecurity | ## Table of Contents
1. [Obstacle of proofing TEE](#obstacle-of-proofing-tee)
1.1. [Image digest is meaningless](#image-digest-is-meaningless)
1.2. [Stable image digest is difficult](#stable-image-digest-is-difficult)
2. [Solution - Trusted build pipeline](#solution-trusted-build-pipeline)
2.1. [GitHub provides the service suite we need](#github-provides-the-service-suite-we-need)
2.2. [Use SigStore to sign and endorse the image](#use-sigstore-to-sign-and-endorse-the-image)
2.3. [Putting everything together](#putting-everything-together)
2.4. [How can service consumers verify the PCRs](#how-can-service-consumers-verify-the-pcrs)
3. [What's beyond](#whats-beyond)
3.1. [Build log retention](#build-log-retention)
3.2. [Build pipeline still needs to be simple](#build-pipeline-still-needs-to-be-simple)
4. [Wrap up](#wrap-up)
---
**Link to the GitHub Action discussed in this post**: [https://github.com/marketplace/actions/aws-nitro-enclaves-eif-build-action](https://github.com/marketplace/actions/aws-nitro-enclaves-eif-build-action)
---
AWS Nitro Enclaves is a Trusted Execution Environment (TEE) where service consumers can validate if the environment is running what it claims to be running.
I've posted previously on how to achieve it by using attestation documents and why should we care about the content of the attestation document:
* [How to Use AWS Nitro Enclaves Attestation Document](https://blog.richardfan.xyz/2020/11/22/how-to-use-aws-nitro-enclaves-attestation-documenta.html)
* [AWS Nitro Enclaves Ecosystem (1) - Chain of trust](https://blog.richardfan.xyz/2022/12/22/aws-nitro-enclaves-ecosystem-1-chain-of-trust.html)
In this blog post, I want to dive deep into achieving zero-trust between service providers and consumers on TEE, particularly AWS Nitro Enclaves.
## Obstacle of proofing TEE
### Image digest is meaningless
Platform configuration registers (PCRs) are just the application image digests; they are generated by a one-way hashing function against the image.
We cannot see what is inside the image by looking at the hash value, so **without knowing what generated the PCRs, they are meaningless**.
For service consumers who have no oversight of the application source code and build process, there is nothing they can do, even if they can validate the attestation document. They can only trust whoever says **"This PCR value 'abcdef' is generated by a secure and safe application"**
Service providers may ask a 3rd-party auditor to attest to the above statement. But it's no different from getting SOC2 or ISO 27001 certified.
**If we are satisfied with this level of trust model, we can stop talking about TEE already. Why don't we send the SOC2 certificate to the consumers instead of the attestation document?**
### Stable image digest is difficult
If service consumers can access the application source code and the build pipeline definition, they may build the enclave image and compare the digest with the one provided in the attestation document.
The problem is that generating a stable image digest is difficult; **even a small, trivial difference at build time can make the digest entirely different**.

Some common trivial changes in build time are:
1. **Timestamp**
Some build steps inject the current timestamp into the environment (e.g. [embedded timestamp in `.pyc` files when installing Python dependencies](https://github.com/pypa/pip/issues/5648#issuecomment-410446975)).
This makes the resulting image dependent on the time of build.
1. **External dependencies**
Even if we pin all dependencies to the exact version, using external sources may still cause image differences.
E.g., when running `apt update` on Ubuntu, the manifest pulled from the external source may differ from the one pulled previously.
1. **Other build time randomness**
There are more examples that can cause image differences.
E.g., Using random strings as temporary folder names.
By looking at an image digest difference, **we cannot tell whether it's caused by trivial differences or by the service provider changing their source code**.
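The avalanche effect behind this is easy to demonstrate. Below is a toy sketch (my own illustration, not part of the build action): hashing two inputs that differ by a single character with SHA-384 (the hash family Nitro Enclaves uses for its PCRs) yields completely unrelated digests.

```typescript
import { createHash } from "node:crypto";

// Two pretend "image contents" that differ only in an embedded build timestamp.
const imageA = "app layer contents, built at 2024-01-01T00:00:00Z";
const imageB = "app layer contents, built at 2024-01-01T00:00:01Z";

// One-way hash of each input, as a hex string.
const digestA = createHash("sha384").update(imageA).digest("hex");
const digestB = createHash("sha384").update(imageB).digest("hex");

console.log(digestA);
console.log(digestB);
// The two digests look completely unrelated, so a verifier comparing digests
// cannot tell a one-second timestamp drift from a malicious code change.
```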
## Solution - Trusted build pipeline
To avoid the hiccup of creating a reproducible build process, we can instead **create a trusted build pipeline that service consumers can see and trust**.
To make it work on AWS Nitro Enclaves images, I have created a GitHub action [AWS Nitro Enclaves EIF Build Action](https://github.com/marketplace/actions/aws-nitro-enclaves-eif-build-action)
[](https://blog.richardfan.xyz/assets/images/8734c91a-5130-4590-88b7-b93684affa4a.jpg)
### GitHub provides the service suite we need
To achieve an end-to-end chain of trust from source code, build process, to the resulting enclave image, we need a publicly accessible and trusted code repository, build environment, and artifact store.
Undoubtedly, GitHub is currently the most popular platform to host open-source code. GitHub also provides GitHub Actions as the build environment and GitHub Packages as the artifact store.
**By putting the entire build pipeline into GitHub, we can minimize the number of parties we build trust into.**
### Use SigStore to sign and endorse the image
The other main component of the solution is SigStore.
[SigStore](https://www.sigstore.dev/) is a set of open-source technologies to handle the digital signing of software.
Using SigStore, we can easily sign the enclave image and prove to the public that this image is built by a specific pipeline run, from a particular code repository commit.
### Putting everything together
In this [sample repository](https://github.com/richardfan1126/nitro-enclaves-cosign-sandbox), I use the **AWS Nitro Enclaves EIF Build Action** to build a Nitro Enclave image from the source code.
After the artifacts are built and pushed to the GitHub Container Registry (GHCR), there will be a `cosign` command to sign the artifact.

Several things are happening behind this command:
1. The OIDC token of the GitHub workflow run is used to request a signing certificate from Fulcio
1. The digest of the uploaded artifacts (In this scenario, the Nitro Enclave EIF and its information) is signed
1. The signature is pushed to the artifact store (i.e., GHCR)
1. The signing certificate and the artifact signature are recorded in the Rekor transparency log
### How can service consumers verify the PCRs
Service consumers can audit the code once the artifact is signed and pushed to the registry.
To verify the PCRs they get from the attestation document are **indeed the same as what was built by the said build pipeline**, they can do the following:
1. Use `cosign` to verify the artifact against the signature stored in Rekor
```bash
cosign verify ghcr.io/username/repo:tag \
--certificate-identity-regexp https://github.com/<username>/<repo>/ \
--certificate-oidc-issuer https://token.actions.githubusercontent.com
```

1. Validate the information on the signing certificate
Users can search for the signing entry on [Rekor Search](https://search.sigstore.dev/) by its log index


**We should look carefully at the following attributes**:
1. **OIDC Issuer**: The token must be issued by the trusted build environment.
(In this example, it must be the GitHub Actions OIDC issuer `https://token.actions.githubusercontent.com`)
1. **GitHub Workflow SHA**: This indicates which particular Git commit the build pipeline run is from.
This helps us identify from which commit we should look at when auditing the source code.
1. **Build Config URI**: This file defines the build workflow.
We should also check if the build configuration is safe, just like how we audit the application code.
1. **Runner Environment**: We should also ensure the build was run on GitHub-hosted runners instead of self-hosted ones that cannot be trusted.
1. Audit the code based on the information on the certificate
After learning how the artifact was built, we can go to the specific commit of the code repository to audit the code.
1. Pull the artifact and get the PCRs
After all the validation, we can use [ORAS](https://oras.land/) to pull the EIF and its information.
The PCR values are inside the signed text file; they can be compared with the ones given by the attestation document from the running service.
```bash
oras pull ghcr.io/username/repo:tag@sha256:<digest>
```

## What's beyond
### Build log retention
Runs of GitHub Actions on public repositories can be viewed by anyone; this gives service consumers **more confidence in the enclave application by looking into how exactly it was built**.
However, GitHub Actions logs can only be retained for up to 90 days.
If the service consumers want utmost scrutiny over the enclave application, service providers may need to rebuild the enclave image every 90 days so that **build logs can be audited at any point in time**.
### Build pipeline still needs to be simple
Although service consumers can audit the build process in this design, it doesn't mean service providers don't need to make their build process simple.
**The more complex a build pipeline is, the more difficult it can be to understand what's being done under the hood**.
E.g., if the build pipeline pulls source code from an external source instead of the source code repository, how can we see, from the build log, what that code contains?
## Wrap up
Three years after AWS announced Nitro Enclaves, the support from AWS is still minimal. _(Sidetrack: My [PR](https://github.com/aws/aws-nitro-enclaves-sdk-c/pull/132) on `kmstool` is still pending review)_
There is still little to no discussion on how to utilize Nitro Enclaves to achieve TEE in the real world. I hope the tools I build can at least offer some help to the community.
**Link to the GitHub Action**: [https://github.com/marketplace/actions/aws-nitro-enclaves-eif-build-action](https://github.com/marketplace/actions/aws-nitro-enclaves-eif-build-action)
| richardfan1126 |
1,791,030 | What is the relevance of software testing? | Testing is useful to identify errors in development and compare actual outcome with expected outcome... | 0 | 2024-03-15T06:31:41 | https://dev.to/david3dev/what-is-the-relevance-of-software-testing-3g4g | Testing is useful to identify errors in development and compare actual outcome with expected outcome to make sure product quality before deliver to client. | david3dev | |
1,778,752 | How Do I Get a Refund From Microsoft Store? | Microsoft continues to improve its user experience and customer satisfaction. According to user... | 0 | 2024-03-03T11:24:50 | https://dev.to/subrato525/how-do-i-get-a-refund-from-microsoft-store-13kl | Microsoft continues to improve its user experience and customer satisfaction. According to user feedback, they have improved cancellation procedures and refund policies. It is a sign that they are committed to providing great service. Microsoft Store refund requests are often confusing, even though it is easy to buy apps and game controllers. Here, we explain how to cancel subscriptions on Windows 11/ 10. We also explain how to [request refunds on the Microsoft Store ](https://www.howtogeeki.com/refund-microsoft-store/)and track refund requests. | subrato525 | |
1,778,827 | Learning web development is hard | console.log("Learning web development is hard") Yup, you read it right. Learning web... | 0 | 2024-03-03T14:12:55 | https://dev.to/pietrell/learning-web-development-is-hard-44cj | webdev, javascript, beginners, programming | ## `console.log("Learning web development is hard")`
Yup, you read it right. Learning web development is hard, and that's a fact. Not only web development; in fact, everything that's worth learning takes time. For example, cooking: if you want to be a good chef, you have to spend lots of time in the kitchen. The same goes for a web development career: good programmers spend lots of time in code editors. So, how do you get better? My answer might not surprise you... The real key to mastery is practice. That's it. Nothing more than a good amount of practice! Watching tutorials or courses won't make you even remotely as good as practicing by yourself instead of just watching someone else code. Will watching cooking tutorials make you a good chef? The answer is no. The same goes for web development. You will get frustrated, sad, and feel like giving up, but that's okay. As I said earlier, learning web development is hard. You need to take your time and trust the process, and eventually you'll get there.
## My story
When I started learning web development, the first things I started with were HTML and CSS. The former seemed easy, but CSS at first felt daunting… there was so much to learn that it took me a few months to grasp it well. And when I finally felt good about CSS, JavaScript came onto the stage. It was even harder to add interactions to the page than to just style it with CSS. JavaScript is where the real web programming comes into play, as HTML and CSS are not programming languages (hate me or not). Getting the fundamentals of JavaScript wasn't that hard, but the deeper you go into the concepts of programming, the harder it gets. I'm over a year into the field and am currently struggling with React. It is hard but it is fun, so I enjoy every minute in VS Code.
## Conclusions
Web development is hard, but don't let that discourage you; use it to fuel you instead. How? Embrace the process and the reward. It always was and always will be a long and hard process.
Let me know what you think about this article as it is my first one in the comments below! Thanks for reading and to the next one. | pietrell |
1,778,885 | Training LLMs Taking Too Much Time? Technique you need to know to train it faster | The Challenges of Training LLMs: Lots of Time and Resources Suppose you want to train a... | 0 | 2024-03-03T15:48:23 | https://dev.to/hexmos/training-llms-taking-too-much-time-technique-you-need-to-know-to-train-it-faster-3k8d | llms, ai, llama2, machinelearning | #### The Challenges of Training LLMs: Lots of Time and Resources
Suppose you want to train a **Large Language Model(LLM)**, which can understand and produce human-like text. You want to input questions related to your organization and get answers from it.
The problem is that the LLM doesn't know your organization; it only knows general things. That is where techniques like **finetuning**, **RAG**, and many others come in.
Training big LLMs requires **a lot of resources and time**, so it's a hefty task unless you have the proper machine for the job.
#### Story of How We Solved The Problem of Time and Resources
Suppose we want to train the [Llama 2](https://llama.meta.com/) LLM based on the information of our organization, and we are using **Google Colab** to train it. The free version of Colab provides a single [Nvidia T4](https://www.nvidia.com/en-in/data-center/tesla-t4/) GPU, which provides **16GB** of memory.
But training the Llama 2 7-billion-parameter model requires **28GB** of memory.
This is a problem: we can't train the model with only **16GB** of memory.
So to solve this, we researched some optimization techniques and found [LoRA](https://arxiv.org/abs/2106.09685), which stands for _Low-Rank Adaptation of Large Language Models_.
**LoRA** adds a layer of finetuning to the model, without modifying the existing model. This consumes less time and memory.
By using LoRA, I was able to finetune the Llama-2 model and get outputs from it on a single T4 GPU.

Refer to the above image. I asked the Llama2 model without finetuning a question, _How many servers does Hexmos Have?_ It gave the reply that it is unable to provide the information.
After finetuning I asked the same question, and it gave me this reply
_Hexmos has 2 servers in Azure and 4 servers in AWS_
Let's see how **LoRA** helped me achieve this.
#### How LoRA Helps with Finetuning More Efficiently
Let's have a deeper dive into how **LoRA** works.
Consider training a large model like **GPT-3**, which has 175 billion **parameters**. Parameters are numbers stored in matrices; they are like the knobs and dials that the model tweaks to get better at its task. Fully finetuning all of them to our needs is a daunting task and requires **a lot of computational resources**.
**LoRA** takes a different approach to this problem: instead of fine-tuning the entire model, it focuses on modifying a smaller set of parameters.

Consider the above 2 boxes. One represents the weights for the existing model, the second one represents our fine-tuned weights(Based on our custom dataset). These are added together to form our **fine-tuned model**.
So by this method, We don't need to change the existing weights in the model. Instead, we add our fine-tuned weights on top of the original weights, this makes it less computationally expensive.
So another question may arise: how are these fine-tuned weights calculated?
In matrices, we have a concept called rank.
**Rank**, in simple words, determines the precision of the model after finetuning. If the rank is low, there will be more optimization, but at the same time you will be sacrificing the accuracy of the model.
If the rank is high, the precision will be higher, but there will be less optimization.

The LoRA weight matrix is calculated by multiplying 2 smaller matrices.
For example, we can multiply a **5x1** and a **1x5** matrix together to form a **5x5** LoRA weight matrix.
We can set the rank of the smaller matrix to determine the balance between precision and optimization.
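This multiplication can be sketched in a few lines of code. The snippet below is my own toy illustration with made-up integer values (not real model weights): a 5x1 column matrix times a 1x5 row matrix yields a full 5x5 update, while only 5 + 5 = 10 numbers need to be trained instead of 25.

```typescript
// Toy LoRA-style low-rank update (illustrative values, not real weights).
const B: number[] = [1, 2, 3, 4, 5]; // 5x1 column matrix (rank r = 1)
const A: number[] = [10, 20, 30, 40, 50]; // 1x5 row matrix

// Outer product B x A -> the 5x5 LoRA weight update.
const deltaW: number[][] = B.map((b) => A.map((a) => b * a));

// The finetuned weights are the frozen original weights plus the update.
const W: number[][] = Array.from({ length: 5 }, () => Array(5).fill(1));
const finetunedW: number[][] = W.map((row, i) => row.map((w, j) => w + deltaW[i][j]));

console.log(deltaW[2][3]); // 3 * 40 = 120
console.log(finetunedW[2][3]); // 1 + 120 = 121
```

A higher rank r would make B a 5xr matrix and A an rx5 matrix, trading more trainable numbers for a more expressive update.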
#### Real Life Example: Training a Llama2 Model with Custom Dataset
Continue reading [the article](https://journal.hexmos.com/train-llm-faster/) | rijultp |
1,778,904 | What is the currying function | Currying is the process of taking a function with multiple arguments and turning it into a sequence... | 0 | 2024-03-03T16:12:37 | https://dev.to/lav-01/what-is-the-currying-function-3kpi | Currying is the process of taking a function with multiple arguments and turning it into a sequence of functions each with only a single argument. Currying is named after a mathematician Haskell Curry. By applying currying, an n-ary function turns into a unary function.
Let's take an example of an n-ary function and how it turns into a curried function:

```javascript
const multiArgFunction = (a, b, c) => a + b + c;
console.log(multiArgFunction(1, 2, 3)); // 6

const curryUnaryFunction = (a) => (b) => (c) => a + b + c;
curryUnaryFunction(1); // returns a function: b => c => 1 + b + c
curryUnaryFunction(1)(2); // returns a function: c => 3 + c
curryUnaryFunction(1)(2)(3); // returns the number 6
```
| lav-01 |
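The chain above is written out by hand for a fixed arity. As a follow-up sketch (the generic `curry` helper below is my own illustration, not part of the original snippet), the same unwrapping can be automated for any function by reading its declared arity:

```typescript
// Illustrative generic curry helper (hypothetical, not from the original example).
// It collects arguments call by call until the wrapped function's declared
// arity (fn.length) is reached, then invokes the function.
function curry(fn: (...args: number[]) => number) {
  return function collect(...collected: number[]): any {
    if (collected.length >= fn.length) {
      return fn(...collected); // enough arguments: call the original function
    }
    // Not enough arguments yet: return a function that keeps collecting.
    return (...next: number[]) => collect(...collected, ...next);
  };
}

const add3 = (a: number, b: number, c: number) => a + b + c;
const curriedAdd3 = curry(add3);

console.log(curriedAdd3(1)(2)(3)); // 6
console.log(curriedAdd3(1, 2)(3)); // 6 - partial application also works
```

Note that `fn.length` only counts parameters declared before any default or rest parameter, so this sketch works for plain positional signatures.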
1,778,969 | CSS tips to avoid bad UX | I believe CSS is a powerful tool to make perfect UX. I'm here to share my tips for unfortunate... | 0 | 2024-03-03T17:18:56 | https://dev.to/melnik909/css-tips-to-avoid-bad-ux-2b40 | css, webdev | I believe CSS is a powerful tool to make perfect UX. I'm here to share my tips for unfortunate mistakes.
If you like it you'll get more [by subscribing to my newsletter](https://cssisntmagic.substack.com).
## Please, stop using resize: none
We used to use `resize: none` to disable textarea resizing. We end up with textareas that are terrible for typing data 😒
The `vertical` value combined with height limits achieves the same result without the discomfort 💡
**A wrong code**
```css
.textarea {
resize: none;
}
```
**A correct code**
```css
.textarea {
resize: vertical;
min-height: 5rem;
max-height: 15rem;
}
```
## Leave the content property empty to avoid unexpected voicing
Pay attention: screen readers voice text that's defined with the `content` property, which might lead to unexpected voicing. That's why we shouldn't use CSS to add text to a web page 😉
**A wrong code**
```css
.parent::before {
content: "I am text";
}
```
**A correct code**
```css
.parent::before {
content: "";
}
```
## aspect-ratio is a page jump pill
Page jumps after pictures load are a pain. Fortunately, `aspect-ratio` helps avoid them. For example, if the picture has a 600x400px size, we should set `aspect-ratio: 1.5` (600px / 400px = 1.5) 😊
**A wrong code**
```css
img {
display: block;
max-width: 100%;
}
```
**A correct code**
```css
img {
display: block;
max-width: 100%;
aspect-ratio: 1.5;
}
```
## animation without prefers-reduced-motion might lead to dizziness or headache
Motion animation might cause users with vestibular disorders to experience dizziness 😩
So we should wrap it in a `prefers-reduced-motion` media query to avoid problems when users disable animations in their OS settings 👍
**A wrong code**
```css
.example {
animation: zoomIn 1s;
}
```
**A correct code**
```css
@media (prefers-reduced-motion: no-preference) {
.example {
animation: zoomIn 1s;
}
}
``` | melnik909 |
1,779,000 | ➡️What's next❓The First Step into the World of Zeros and Ones | The search for the right training company can be an exciting but also challenging time... | 0 | 2024-03-03T18:46:55 | https://dev.to/codingwerkstatt/whats-nextder-erste-schritt-in-die-welt-der-nullen-und-einsen-470i | fachinformatiker, ausbildung, beginners, ausbildungsbetrieb | The search for the right training company can be an exciting but also challenging time. When I set out to look for an apprenticeship back then, I was a bit overwhelmed at first. I applied relatively late to various open positions and hoped for a response. Fortunately, I actually received positive replies and was able to secure an apprenticeship at a top company.
In hindsight, however, my approach was quite naive and unprepared. But my passion for the subject and my interest in the material helped me grow into my role quickly. Looking back, I think it is important to be clear about what you expect from the apprenticeship and how to choose the right company.
In the coming week, I therefore want to take an in-depth look at the topic of "finding a training company". Here is an overview of the exciting articles awaiting you:
## 🧑🏾🎓 **Degree vs.** 🔨 **Apprenticeship: A brief comparison**
We compare the advantages and disadvantages of a degree and an apprenticeship and look at why university is not necessarily the best way into the industry.
## 🚀 **Before the apprenticeship: What you should be aware of**
Before you decide on a company, there are some important things to consider. I want to share some insights and topics to help you make sure this profession is really right for you.
## 🔍 **Tips and tricks: Choosing the right training company**
Which criteria are decisive? What did I learn from my own search, and which questions can help you find out whether a training company really delivers what it promises?
## 💰 **Why money should come second during the apprenticeship**
Salary always plays a role, of course. But in my view, it is well worth initially forgoing a higher apprenticeship salary if the company offers other advantages.
## 🧠 **Experience report: My time as an apprentice**
I share my personal experiences during my apprenticeship. What went well, and what could have been better?
Look forward to a week full of valuable information, practical advice, and personal insights. I am curious about what you have to say and which experiences you want to share! 🌟 | codingwerkstatt |
1,779,298 | All Time Best Figma Plugins | Figma is a browser-based interface and design application that can help you design and prototype and... | 0 | 2024-03-04T04:59:52 | https://dev.to/chandankumarpanigrahi/all-time-best-figma-plugins-2j2o | figma, ux, ui, plugins | **Figma** is a browser-based interface and design application that can help you design and prototype and can be used to generate code for your application. It is likely the **leading interface design tool on the market** right now and has features that support teams throughout every step of the design process.
If you compare Figma to Adobe XD or Sketch, Figma has many advantages, for example, **it works online and allows you to collaborate with others in real-time.**
It also has great functionality, a sleek UI, **and the relatively recent launch of Figma plugins.**
There are currently **numerous plugins for functions and processes** on Figma, which can make project design and launch as seamless as plug-and-play. In this article, I’ll go over the 18 plugins I think **you need to use as of today.**
## Unsplash:
Unsplash is a stock photography website that created a Figma plugin. It allows you to choose beautiful, royalty-free images submitted by the public community.

## Palette:
This plugin gives you palette colors that complement whatever design cue your app has. It even has an AI function to generate random color schemes and fine-tune the look of your theme to your satisfaction.

## Content Reel:
This plugin helps you pull content (text, icons, avatars) into your design. You can even use it to add randomized data to your design and avoid having to add dummy text anywhere.

## Color Contrast Checker:
This is a quick and easy tool to scan all your app layers at once and immediately identify any that do not meet Web Content Accessibility Guidelines (WCAG) requirements. The plugin allows you to click an individual color swatch to see the layer and adjust the lightness of any text on it and the background to get a WCAG passing grade.

## Iconify:
This plugin provides roughly 40,000 icons from which to choose. Third-party icon designers may soon find themselves obsolete because of this plugin!

## Figmotion:
You generally cannot create in-app animations in Figma, but Figmotion allows you to overcome this shortcoming without having to use a third-party app.

## Mockuuups Studio:
This plugin offers over 500 scenes to choose from and add to your design with just a few clicks. It can be used for social media, blogs, marketing campaigns, design mockups, and a lot more.

## Coda for Figma:
This plugin allows you to fill layouts with data from external services such as Wikipedia, Gmail, Dropbox, Jira, Github, and more.

## LilGrid:
This handy plugin will help you clean up your app’s interface. It takes all of the various elements on your dashboard or app and organizes them into a grid that you can then define yourself. It is great for organizing many buttons and/or icons that your design or system uses.

## Movie Posters:
Great for anyone who wants to create applications or websites for movies and TV shows. What it does is that it randomly fills vector objects with images or posters from movies or TV shows.

## GiffyCanvas:
This plugin allows you to create GIF images within Figma. Install the plugin, select the images that you want to create your GIF with, set the relevant parameters such as interval, width, and height, and download the ready GIF file.

## BeatFlyer Lite:
This wonderful tool allows you to animate and add creative effects to your designs using only a few clicks.

## Color Kit:
This handy plugin helps you generate shades of colors that meet your needs. It is especially useful for apps that want to have a tried and tested color grading scheme instead of something that looks like it looks good but does not meet established design aesthetics.

## Wire Box:
Use this plugin to create UI mockups. It can also be used to convert HD mockups to low fidelity wireframes for whenever you want to concentrate on the user experience part of your project.

## Vector Maps:
This plugin allows you to add vector maps of countries, regions, and cities to your Figma mockup.

## LottieFiles:
This plugin will bring your designs to life by adding wonderful animations that are a pleasure to look at. You can add thousands of free Lottie animations (in GIF format or as SVG animation frame files).

## Design Lint:
Use this plugin to ensure that your design files are all consistent. This plugin checks for discrepancies within your mockups (even small issues such as mismatched colors or fonts, different effects, and fills, strokes, or border-radii that do not match) and corrects those inconsistencies.

### Downloading Figma Plugins
_Find all plugins [here](https://www.figma.com/community/explore?tab=plugins) and explore projects and tools [here](https://www.figma.com/community/explore). There are plugins for virtually every need, from icons and processes used in design systems, wireframes, and illustrations, to icons, typography, mobile design, web design, UI kits, and more._
### How do I Install Figma Plugins?
To install a Figma plugin, you first need to find the plugin you want.
_You can use the links above or, using your Figma account, navigate to the Community page. From there, you can explore popular Community resources, or you can navigate to the Feed tab to view resources published by creators you follow. You can also browse featured plugins. You can even browse plugins by name, developer, or keywords._
_All plugins have their own resource pages. You can see details on the plugins that interest you using the plugin’s resource page. From there, you can simply click Install to add the plugin to your Figma account. Doing this will link the plugin to your Figma account and you will be able to see the plugin in files in your drafts and you can use it across any browser or device you use._
### Advantages of Using Plugins
Plugins give you a simple yet powerful way to enhance your Figma capabilities.
_They can help streamline and automate repetitive tasks, quickly create new features, and name and group layers, build advanced search and grouping, add special functions, add content to project mockups, and more. New plugins are added all the time, and they are developed by the vibrant Figma community. From cross-platform functionality to managing design handoff between teams and bringing feedback and automation to your design systems, Figma is an all-in-one solution that can enhance the efficiency and performance of your design teams._ | chandankumarpanigrahi |
1,779,352 | Embrace Luxury Living: Discover M3M Antalya Hills | Title: Embrace Luxury Living: Discover M3M Antalya Hills Situated in the bustling metropolis of... | 0 | 2024-03-04T06:25:24 | https://dev.to/comingkeysss/embrace-luxury-living-discover-m3m-antalya-hills-5ebl | m3m, gurugram, comingkeys, antalyahills | Title: Embrace Luxury Living: Discover M3M Antalya Hills

Situated in the bustling metropolis of Gurugram, where elegance and refinement collide, [M3M Antalya Hills](https://m3m-newlaunch.in/) towers as a testament to contemporary living. Located amidst abundant vegetation and expansive vistas, this esteemed residential development provides an unmatched standard of elegance and coziness.
Embracing Nature's Beauty
Envision awakening to the tranquil tones of the natural world and being welcomed by stunning vistas of undulating hills and lush terrain. Residents at M3M Antalya Hills may take advantage of the conveniences of city living while reestablishing a connection with nature. The project is painstakingly planned to blend in perfectly with the surrounding landscape, producing a tranquil haven in the middle of the bustle of the metropolis.
Exquisite Design and Architecture
M3M Antalya Hills is not just a residential complex; it is a masterpiece of design and architecture. The sleek and contemporary aesthetic is evident in every aspect of the project, from the meticulously landscaped gardens to the elegant interiors of the apartments. The attention to detail is impeccable, with every element carefully curated to exude luxury and sophistication.
Unmatched Amenities and Facilities
Residents of M3M Antalya Hills are treated to a wide range of world-class amenities and facilities designed to enhance their lifestyle. Whether you're looking to relax and unwind or stay active and fit, there's something for everyone here. From a state-of-the-art fitness center and swimming pool to lush green parks and jogging tracks, every amenity is thoughtfully designed to cater to the needs of the residents.
A Sanctuary of Serenity
In today's fast-paced world, finding moments of tranquility and peace is essential for overall well-being. M3M Antalya Hills offers just that—a sanctuary of serenity where residents can escape the chaos of everyday life and immerse themselves in a world of luxury and comfort. Whether you're enjoying a leisurely stroll in the landscaped gardens or unwinding with a book by the poolside, every moment spent here is a testament to the art of fine living.
Prime Location
Located in Sector 68, Gurugram, M3M Antalya Hills enjoys excellent connectivity to the major business hubs, educational institutions, healthcare facilities, and entertainment options of the city. With easy access to the Golf Course Extension Road and Sohna Road, residents can enjoy seamless connectivity to Delhi and other parts of the National Capital Region (NCR), making it an ideal choice for those seeking convenience and accessibility.
Conclusion
In conclusion, M3M Antalya Hills represents the pinnacle of luxury living in Gurugram. With its stunning natural surroundings, exquisite design, unmatched amenities, and prime location, it offers residents a lifestyle beyond compare. Whether you're looking for a peaceful retreat or a vibrant community to call home, M3M Antalya Hills is the perfect choice for those who appreciate the finer things in life. | comingkeysss |
1,779,401 | SOLID Principle in NextJS using Typescript | SOLID is an acronym for five key principles of object-oriented programming that aim to improve the... | 0 | 2024-03-04T11:46:20 | https://dev.to/fajarriv/solid-principle-in-nextjs-using-typescript-3l3k | webdev, typescript, nextjs, solidprinciples | SOLID is an acronym for five key principles of object-oriented programming that aim to improve the readability, maintainability, extensibility, and testability of code. However, SOLID principles are not limited to object-oriented programming that uses classes. They can also be applied to other paradigms, such as functional programming, that use functions, modules, or components as the main building blocks of software. With this idea, we can apply the SOLID principles when building a frontend app.
## Single Responsibility Principle
This principle states that a module/class/function should have only one responsibility and one reason to change. For example, we can also use custom hooks to encapsulate the logic for fetching data, managing state, or performing side effects.
For example, if we have a page/component to render list of schedules, any unrelated tasks like fetching data from server should be handled by other module.
```typescript
// src/hooks/useDataSchedule.ts
import useSWR from 'swr';
import type { TSchedule } from '@/types'; // assumed location of the TSchedule type

export const useDataSchedule = () => {
  const fetcher = (url: string) =>
    fetch(process.env.NEXT_PUBLIC_API_URL + url).then((res) => res.json());
  const { data, error, isLoading, mutate } = useSWR<TSchedule[], Error>(
    '/api/schedule/list/',
    fetcher
  );
  return { data, error, isLoading, mutate };
};
```
```typescript
// src/components/schedule/ScheduleList.tsx
import { useDataSchedule } from '@/hooks/useDataSchedule'; // assumed import alias
import { ScheduleDetail } from './ScheduleDetail'; // assumed named export

const ScheduleList = () => {
// We call the hook and retrieve the schedules
const { data, error, isLoading, mutate } = useDataSchedule();
if (error) return <div>Failed to load</div>
if (isLoading) return <div>Loading...</div>
return (
<div>
<h1>Schedule List:</h1>
<ul>
{data?.map((schedule) => (
<li key={schedule.id}>
<h2>{schedule.name}</h2>
<ScheduleDetail title={schedule.detail.title}
startTime={schedule.detail.start_time}
endTime={schedule.detail.end_time} />
</li>
))}
</ul>
</div>
);
};
```
The benefit of applying the single responsibility principle is that it makes the code more **modular**, **maintainable**, and **testable**. By separating the concerns of different modules, we can avoid coupling and dependency issues that may arise when changing or adding new features. We can also reuse the modules in different contexts, such as different components or pages, without duplicating the code.
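A concrete way to see the testability benefit: logic with a single responsibility can live in small pure helpers that need no React, SWR, or DOM to verify. The helper names below are illustrative, not part of the article's code:

```typescript
// A pure helper whose only job is producing the display text for a time range.
// It knows nothing about fetching or rendering, so it is trivial to unit-test.
export const formatTimeRange = (startTime: string, endTime: string): string =>
  `${startTime} - ${endTime}`;

// Another single-purpose helper: counting schedules, regardless of how the
// list was fetched (SWR, REST, or mock data in a test).
export const countSchedules = <T>(schedules: T[] | undefined): number =>
  schedules?.length ?? 0;
```

Components like `ScheduleDetail` could then call these helpers, while tests exercise them directly.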
## Open-closed principle
A class or module should be open for extension, but closed for modification. This means that we should be able to add new features or behaviors without changing the existing code. In a frontend app, this principle is fulfilled when we create reusable components for our project.
```typescript
// src/components/common/CustomButton.tsx
// Interface for the props of the button component
interface CustomButtonProps {
text: string;
className: string;
onClick: () => void;
};
// A button component that takes a string and applies a tailwind class to the button element
export const CustomButton: React.FC<CustomButtonProps> = ({ text, className, onClick }) => {
return (
<button
type="button"
className={`text-white font-bold py-2 px-4 rounded ${className}`}
onClick={onClick}
>
{text}
</button>
);
};
```
```typescript
//src/components/modules/LandingPageModule/buttons.tsx
import { CustomButton } from '@components/common'

export const LandingPageModule = () => {
  return (
    <div className="flex flex-col">
      {/* Placeholder onClick handlers; real behavior would be wired here */}
      <CustomButton text="Login" className="bg-green" onClick={() => {}} />
      <CustomButton text="Create account" className="bg-gray" onClick={() => {}} />
    </div>
  )
}
```
When we have a reusable component like the one above, we can simply add more styling to the button by passing it through the props, but we cannot change the default styling of `CustomButton` without touching its source.
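The same idea can be sketched without JSX. In this hedged, framework-free example (names are illustrative), the base styling is closed for modification while callers extend the output by supplying extra classes:

```typescript
// Fixed base styling: closed for modification.
const BASE_CLASSES = 'text-white font-bold py-2 px-4 rounded';

// Open for extension: new variants are added by passing classes in,
// never by editing this function's source.
export const buttonClasses = (extra = ''): string =>
  extra ? `${BASE_CLASSES} ${extra}` : BASE_CLASSES;
```

Adding a `bg-red` danger variant later requires no change to `buttonClasses` itself.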
## Liskov substitution principle
A subclass or subcomponent should be able to replace its superclass or supercomponent without breaking the functionality. This means that we should follow the contract or interface defined by the parent class or component. The explanation may sound a lot harder than the implementation.
We can modify the previous example to adhere to the Liskov substitution principle.
```typescript
// src/components/common/CustomButton.tsx
import { ButtonHTMLAttributes, ReactNode } from 'react';

// Interface for the custom button component; it inherits every native button attribute
interface ICustomButton extends ButtonHTMLAttributes<HTMLButtonElement> {
  children: ReactNode;
  className: string;
}

// A button component that forwards native button attributes and applies tailwind classes to the button element
export const CustomButton: React.FC<ICustomButton> = ({ children, className, ...props }) => {
  return (
    <button
      type="button"
      className={`text-white font-bold py-2 px-4 rounded ${className}`}
      {...props}
    >
      {children}
    </button>
  );
};
```
We give the new button all the attributes inherited from the native button element. This preserves program behavior and adheres to the Liskov substitution principle by allowing any instance of `CustomButton` to be used wherever a plain button element is expected.
Liskov Substitution in React essentially promotes the development of a unified and adaptable component structure for creating reliable and manageable user interfaces.
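Outside of components, the substitution rule is easiest to see with a plain interface and two implementations (an illustrative sketch, not the article's code): any caller written against the abstraction keeps working when one implementation replaces another.

```typescript
interface Notifier {
  notify(message: string): string;
}

// First implementation honoring the Notifier contract.
export class ConsoleNotifier implements Notifier {
  notify(message: string): string {
    return `console: ${message}`;
  }
}

// Second implementation; it neither strengthens preconditions nor weakens
// postconditions, so it can stand in anywhere a Notifier is expected.
export class PrefixedNotifier implements Notifier {
  constructor(private prefix: string) {}
  notify(message: string): string {
    return `${this.prefix}: ${message}`;
  }
}

// High-level caller depends only on the contract.
export const sendWelcome = (notifier: Notifier): string =>
  notifier.notify('welcome');
```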
## Interface Segregation Principle
In React, interface segregation means that components should have simple, specific prop interfaces that suit their purpose. We should avoid component interfaces with too many or irrelevant properties or methods. This keeps the code organized, easy to understand, and easy to change, because components stay small and focused. ISP improves the quality and maintainability of the code by keeping component interfaces clear and concise.
```typescript
// src/components/schedule/ScheduleDetail.tsx
interface IScheduleDetail {
title:string
startTime:string
endTime:string
}
const ScheduleDetail = ({ title, startTime, endTime}:IScheduleDetail) => {
return (
<div>
<h3>{title}</h3>
<h4>Time Duration</h4>
<p>{startTime} - {endTime}</p>
</div>
);
};
```
To implement this principle we simply limit the props to what the component actually needs.
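Thanks to TypeScript's structural typing, a full record still satisfies the narrow interface; the function simply never sees the fields it does not need. A framework-free sketch with illustrative names:

```typescript
// A full record may carry many fields...
export interface Schedule {
  id: number;
  name: string;
  title: string;
  startTime: string;
  endTime: string;
}

// ...but a duration formatter should only demand the fields it reads.
export interface TimeRange {
  startTime: string;
  endTime: string;
}

export const describeDuration = ({ startTime, endTime }: TimeRange): string =>
  `${startTime} - ${endTime}`;
```

A `Schedule` value can be passed to `describeDuration` directly, because it structurally contains a `TimeRange`.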
## Dependency inversion principle
This principle states that high-level modules should not depend on low-level modules; both should depend on abstractions, not on concretions.
For example, when we have a create-schedule feature and an edit-schedule feature that use the same form fields, we should abstract the form to adhere to this principle.
```typescript
export const ScheduleForm = ({ onSubmit }: { onSubmit: (e: React.FormEvent) => void }) => {
return (
<form onSubmit={onSubmit}>
<input name="title" />
<input name="startTime" />
<input name="endTime" />
</form>
);
};
```
```typescript
const CreateScheduleForm = () => {
const handleCreateSchedule = async () => {
try {
// Logic to handle create schedule
} catch (err) {
// Narrow the unknown catch variable before reading .message
if (err instanceof Error) console.error(err.message);
}
};
return <ScheduleForm onSubmit={handleCreateSchedule} />;
};
```
```typescript
const UpdateScheduleForm = () => {
const handleUpdateSchedule= async () => {
try {
// Logic to handle update schedule
} catch (err) {
// Narrow the unknown catch variable before reading .message
if (err instanceof Error) console.error(err.message);
}
};
return <ScheduleForm onSubmit={handleUpdateSchedule} />;
};
```
Implementing this principle will give us a better separation of concerns and more scalable code.
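The inversion can also be sketched without React (the `ScheduleRepository` abstraction below is hypothetical, not from the article): the high-level create logic depends on an interface, and any concrete storage is injected.

```typescript
// Abstraction that both the high-level policy and the low-level detail depend on.
interface ScheduleRepository {
  save(title: string): string;
}

// Low-level detail: an in-memory store (a real app might wrap fetch calls).
export class InMemoryRepository implements ScheduleRepository {
  public saved: string[] = [];
  save(title: string): string {
    this.saved.push(title);
    return `saved: ${title}`;
  }
}

// High-level policy: depends only on the abstraction, so storage can be
// swapped (REST, localStorage, a test double) without touching this code.
export const createSchedule = (repo: ScheduleRepository, title: string): string =>
  repo.save(title);
```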
| fajarriv |