| Column | Type | Min | Max |
| --- | --- | --- | --- |
| id | int64 | 5 | 1.93M |
| title | string (length) | 0 | 128 |
| description | string (length) | 0 | 25.5k |
| collection_id | int64 | 0 | 28.1k |
| published_timestamp | timestamp[s] | | |
| canonical_url | string (length) | 14 | 581 |
| tag_list | string (length) | 0 | 120 |
| body_markdown | string (length) | 0 | 716k |
| user_username | string (length) | 2 | 30 |
1,891,417
Shamir Secret Sharing in 256 characters or less
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
27,753
2024-06-17T15:35:52
https://dev.to/kalkwst/shamir-secret-sharing-in-256-characters-or-less-3a3a
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

You and your friends have a treasure map (secret). You cut it into pieces so no one can find the treasure alone. Each friend gets a piece (share). BUT, you cut it in such a way that only a certain number of you need to be present to reassemble the map (quorum).
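To make the analogy concrete, here is a toy sketch of the underlying math, not a production implementation: the secret is the constant term of a random polynomial over a prime field, each share is a point on that polynomial, and any quorum-sized subset recovers the constant term via Lagrange interpolation. The tiny prime and `Math.random()` are illustrative stand-ins for a large prime and a cryptographically secure random source.

```js
// Toy Shamir sketch. Real implementations use a large prime and
// secure randomness; this is only to illustrate the quorum idea.
const P = 2089n; // small prime modulus, chosen only for the demo

const mod = (a) => ((a % P) + P) % P;

// Modular inverse via Fermat's little theorem (valid since P is prime)
const modpow = (b, e) => {
  let r = 1n;
  for (b = mod(b); e > 0n; e >>= 1n, b = mod(b * b)) {
    if (e & 1n) r = mod(r * b);
  }
  return r;
};
const inv = (a) => modpow(a, P - 2n);

// Cut the map: n shares, any k of which recover the secret
function split(secret, n, k) {
  const coeffs = [secret]; // constant term is the secret
  for (let i = 1; i < k; i++) {
    coeffs.push(BigInt(Math.floor(Math.random() * Number(P))));
  }
  const evalAt = (x) => coeffs.reduceRight((acc, c) => mod(acc * x + c), 0n);
  return Array.from({ length: n }, (_, i) => [BigInt(i + 1), evalAt(BigInt(i + 1))]);
}

// Reassemble the quorum: Lagrange interpolation at x = 0
function combine(shares) {
  let secret = 0n;
  for (const [xi, yi] of shares) {
    let li = 1n;
    for (const [xj] of shares) {
      if (xj !== xi) li = mod(li * mod(-xj) * inv(xi - xj));
    }
    secret = mod(secret + yi * li);
  }
  return secret;
}

const shares = split(1234n, 5, 3);        // 5 friends, quorum of 3
console.log(combine(shares.slice(0, 3))); // 1234n
```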
kalkwst
1,891,416
JSON
JSON.stringify() The JSON.stringify() method converts a JavaScript object or value to a...
0
2024-06-17T15:34:35
https://dev.to/__khojiakbar__/json-130f
javascript, json
## JSON.stringify()

> The JSON.stringify() method converts a JavaScript object or value to a JSON string. This is useful for sending data over a network, storing data in text format, or logging.

```js
const obj = {
  name: "John",
  age: 30,
  city: "New York"
};

const jsonString = JSON.stringify(obj);
console.log(jsonString);
// Output: {"name":"John","age":30,"city":"New York"}
```

## JSON.parse()

The JSON.parse() method parses a JSON string and constructs the JavaScript value or object described by the string. This is useful for converting JSON data received from a web server into JavaScript objects.

```js
const jsonString = '{"name":"John","age":30,"city":"New York"}';
const obj = JSON.parse(jsonString);

console.log(obj.name); // Output: John
console.log(obj.age);  // Output: 30
console.log(obj.city); // Output: New York
```
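Two details worth adding alongside the above, shown as a small sketch of standard JavaScript behavior: `JSON.parse()` throws a `SyntaxError` on malformed input, and the third argument of `JSON.stringify()` pretty-prints the output.

```js
// JSON.parse() throws on malformed input, so wrap it when the
// string comes from a network response or user input.
function safeParse(jsonString, fallback = null) {
  try {
    return JSON.parse(jsonString);
  } catch (err) {
    console.error('Invalid JSON:', err.message);
    return fallback;
  }
}

const broken = safeParse('{"name":"John"'); // missing brace -> null

// The third argument of JSON.stringify() controls indentation.
console.log(JSON.stringify({ name: 'John', age: 30 }, null, 2));
// {
//   "name": "John",
//   "age": 30
// }
```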
__khojiakbar__
1,891,415
File Locking vs. Row-Level Locking
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-17T15:34:03
https://dev.to/dwivedialind/file-locking-vs-row-level-locking-2d93
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Have you ever wondered how multiple users can perform DML operations on the same table at the same time? It's because of row-level locking in the RDBMS. Instead of treating the whole table as one file, the RDBMS treats each row as its own unit of storage. Rows inside a table are scattered (fragmented) all over the DB server's disk to speed up ***INSERT*** statements.

## Additional Context

Consider a multi-user environment: if multiple users are inserting rows simultaneously into the same table and MySQL were to store the rows sequentially, it would be really slow. That's why, when you insert a row into the table, it stores the row wherever it finds free space.
dwivedialind
1,891,414
A Beginner's Guide to Choosing the Right Domain and Hosting for Your Website
Choosing the Perfect Domain and Hosting for Your Website When starting your website, two key...
0
2024-06-17T15:29:16
https://dev.to/ridoy_hasan/a-beginners-guide-to-choosing-the-right-domain-and-hosting-for-your-website-4ae7
webdev, career, learning, productivity
**Choosing the Perfect Domain and Hosting for Your Website** When starting your website, two key decisions are crucial: selecting a domain name and choosing a hosting provider. Here’s a simple guide to help you make the right choices: **1. Domain Name:** Your domain is your online identity. Follow these tips: - **Simplicity and Memorability:** Choose a name that’s easy to remember and spell. - **Reflect Your Brand:** Ideally, your domain should align with your brand or website’s purpose. - **Choose the Right Extension:** .com is standard, but consider others like .net or .org depending on your site’s focus. **2. Hosting Provider:** Your hosting provider stores your website’s files and makes it accessible online: - **Reliability and Uptime:** Opt for providers known for high uptime percentages. - **Speed and Performance:** Look for fast loading times; it’s crucial for user experience and SEO. - **Support:** 24/7 customer support ensures help when needed. - **Scalability:** Ensure your hosting plan can grow with your site’s needs. - **Security:** Features like SSL certificates and regular backups are essential for site security. **Putting It All Together:** - **Register Your Domain:** Use a registrar like GoDaddy or Namecheap to search and purchase your domain. - **Choose a Hosting Plan:** Sign up with a provider that fits your requirements and follow their instructions to link your domain. Take your time with these decisions as they impact your site’s performance and accessibility. If in doubt, most providers offer support to guide you through the process. Happy building!
ridoy_hasan
1,891,409
In Spring Boot, what is the use of @PostConstruct? Explain using an example.
Use of @PostConstruct in Spring Boot. In Spring Boot,...
0
2024-06-17T15:24:40
https://dev.to/codegreen/n-spring-boot-what-is-the-use-of-postconstruct-explain-using-example-30lk
springboot, java, interview
## Use of @PostConstruct in Spring Boot

In Spring Boot, `@PostConstruct` is used to annotate a method that should be executed after dependency injection is complete and before the bean is put into service.

## Example

Suppose we have a class `UserService` that requires some initialization logic after its dependencies are injected. We can use `@PostConstruct` to annotate a method for this purpose.

```java
import javax.annotation.PostConstruct; // Spring Boot 3+ uses jakarta.annotation.PostConstruct
import org.springframework.stereotype.Service;

@Service
public class UserService {

    @PostConstruct
    public void init() {
        // Initialization logic, e.g., loading configuration, setting up resources, etc.
        System.out.println("UserService initialized!");
    }

    // Other methods of UserService
}
```

In this example:

* The `UserService` class is annotated with `@Service` to mark it as a Spring-managed bean.
* The `init` method is annotated with `@PostConstruct`, ensuring it runs automatically after all dependencies of `UserService` are injected by Spring.
* Any initialization logic needed for `UserService` can be placed within the `init` method.

## Conclusion

`@PostConstruct` in Spring Boot provides a convenient way to perform initialization tasks for a bean after its dependencies are injected. It ensures that the initialization logic is executed exactly once, before the bean is used, contributing to the robustness and reliability of the application.
manishthakurani
1,891,402
Deploy React, Node projects for free on Vercel
Looking to deploy your projects without spending a dime? You've come to the right place. There are...
0
2024-06-17T15:20:38
https://www.parshipraneesh.me/blogs/deploy
webdev, beginners, deployment
Looking to deploy your projects without spending a dime? You've come to the right place. There are several platforms out there that offer free deployment with some resource limitations. Some of the options include:

- GitHub Pages
- Glitch
- Render
- Netlify
- Vercel
- DigitalOcean

Among these, I find Vercel to be the best for hobby projects. It's free, even for backend applications, and it boasts 99% uptime. I was pleasantly surprised when my Node and Flask applications worked seamlessly on Vercel. In this blog, I'll show you how to deploy a frontend (React.js), a Node.js backend, and even connect MongoDB to your backend.

---

### Deploying React.js with Vercel

Let's start with deploying a React.js application. I used Vite to create my React apps. If you haven't used it before, don't worry; it's straightforward. Here's a step-by-step guide:

1. Upload Your Project to GitHub: Make sure your project is uploaded to your GitHub account.
2. Create a Vercel Account: Sign up for a Vercel account using your GitHub credentials.
3. Add a New Project on Vercel:
4. Click on the "Add New" button (top right, white color).
5. Connect your GitHub account.
6. Choose the repository you want to deploy.
7. Type in your project name.
8. Select the root directory (where your React files are located, e.g., frontend/reactfile/).
9. Hit deploy.
10. And that's it! Your project will be deployed, and you'll get a URL ending with vercel.app.

---

### Deploying Node.js with Vercel

Deploying a Node.js application is a bit more involved, but don't worry, I've got you covered. Follow these steps carefully:

1. Create a directory named `api` in your project folder.
2. Place your main server logic in `index.js` inside the `/api` directory (a minimal sketch of this file follows at the end of this post).
3. Organize additional files.
4. If you have additional server files (e.g., `teacherAPI.js`), place them in the `/api` directory.
5. Ensure correct file placement: make sure `package.json`, `package-lock.json`, and `.gitignore` are outside the `/api` directory.
6. Create `vercel.json`: in your project folder, create a file named `vercel.json` and paste the following content:

```json
{
  "version": 2,
  "rewrites": [
    { "source": "/(.*)", "destination": "/api" }
  ]
}
```

7. Commit your changes and push your project to GitHub.
8. Deploy on Vercel: go to Vercel, click "Add New", and select your repository.
9. Set the framework preset to "Other."
10. Ensure your root directory contains `/api`, `.gitignore`, `package.json`, etc.
11. Copy your `.env` file content and paste it into the environment variables section on Vercel.
12. Hit Deploy and voila! Your Node.js application is now deployed.

File structure for the Node.js backend. Here's a visual representation:

```
root directory
|_ api
|  |_ index.js
|  |_ middlewares
|_ package.json
|_ .gitignore
```

---

### Connecting MongoDB to Your Node Server

If you're using MongoDB with your Node server, you'll need to configure Vercel a bit differently. However, I've encountered issues with MongoDB disconnecting after a while. I recommend using Firebase instead, which is as simple as MongoDB and more stable for this setup.

### Final Thoughts

I hope this guide helps you deploy your projects easily and efficiently. Experiment, learn, and happy coding! Feel free to reach out if you have any questions or need further assistance. Keep experimenting and keep coding!
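As promised in step 2 above, here is a minimal sketch of what `api/index.js` might contain. Express is an assumption for illustration (the guide doesn't mandate any framework); Vercel's Node runtime can serve any exported `(req, res)` handler, and an Express app happens to be exactly that.

```js
// api/index.js - minimal sketch of the main server logic
// (assumes express is declared in package.json, outside /api)
const express = require('express');
const app = express();

app.use(express.json());

app.get('/api/hello', (req, res) => {
  res.json({ message: 'Hello from Vercel!' });
});

// Export the app instead of calling app.listen(); the vercel.json
// rewrite above routes every incoming request to this handler.
module.exports = app;
```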
ppraneesh
1,891,401
Perplexity AI: A Beginner's Guide to Getting Started
In this article, I want to introduce you to something truly amazing—Perplexity AI. This AI tool has...
0
2024-06-17T15:20:37
https://dev.to/proflead/perplexity-ai-a-beginners-guide-to-getting-started-3mfj
ai, perplexity, tutorial, howto
In this article, I want to introduce you to something truly amazing—Perplexity AI. This AI tool has incredible potential, and I believe it can be a game-changer for many of you. I'll walk you through how to use Perplexity AI and highlight some of its great features. ## Introduction to Perplexity AI Perplexity AI is an advanced search engine that works with real-time data. Unlike traditional search engines, it pulls information from various sources and provides a comprehensive summary. Perplexity AI utilizes advanced Natural Language Processing (NLP) algorithms and machine learning models to understand and process user queries. It employs a combination of neural networks and data parsing techniques to generate accurate and relevant responses. In NLP, 'perplexity' measures how well a probability model predicts a sample. Lower perplexity indicates better predictive performance, which is a key metric for evaluating the effectiveness of language models used by Perplexity AI. ![Perplexity AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n4cdgvqfrdmrv5tmj82l.png) ## Getting Started with Perplexity AI To get started, simply open the Perplexity AI website (https://www.perplexity.ai/). You can immediately begin by asking your first prompt. It's incredibly user-friendly and works just like a search engine, but with more powerful capabilities. Perplexity AI can be used for answering questions ranging from basic facts to complex queries, creating code, and summarizing content, all sourced from up-to-date information. Additionally, it allows in-depth topic exploration with its Copilot feature, organizes your research with Collections, and facilitates data interaction and web searches within the platform. ![Getting Started with Perplexity AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wgu9ejs6cvp03j8mwcoa.png) ## Focus Functionality in Perplexity AI Focus functionality is a feature that lets users specify the context or domain for their search queries. By selecting a particular focus, users can narrow down the search results to be more relevant to their specific needs, whether it's academic research, content creation, or any other specialized task. ![Focus Functionality in Perplexity AI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h5faq6gvvfon4cus4yd8.png) - Academic Research: Users can focus their searches on academic papers, journals, and research articles. This is ideal for students, researchers, and academics who need in-depth information and credible sources for their studies or projects. - Content Creation: Writers, bloggers, and content creators can use the focus functionality to find relevant information, statistics, and references for their articles, blogs, or social media posts. This helps in creating well-informed and accurate content. - Video Descriptions: For YouTubers and video creators, this feature can help in generating detailed and accurate descriptions, scripts, or summaries for their videos. It ensures that the content is engaging and informative. - Business and Market Analysis: Professionals in business and finance can use the focus functionality to gather market data, financial reports, and industry trends. This aids in making informed business decisions and strategic planning. - Healthcare: Medical professionals can focus their searches on medical journals, clinical trials, and treatment guidelines. This is useful for staying updated with the latest medical research and improving patient care. 
## How to Write Good Prompts

To write good prompts for Perplexity AI:

- Be clear and specific.
- Use complete sentences.
- Include relevant details.
- Ask open-ended questions.
- Avoid ambiguity.
- Use contextual keywords.
- Break down complex queries.
- Specify the format.
- Use follow-up questions.
- Refine your prompts based on responses.

## The Difference Between Free and Pro Plans

![The Difference Between Free and Pro Plans](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cx5xgkpjf2qtmd2263h9.png)

### Free Plan (free)

- Basic Search Capabilities: Perform searches and receive comprehensive summaries from various sources.
- No Registration Required: Start using the service immediately without creating an account.
- Limited Data Handling: Attach files like PDFs or images with limited processing capabilities.
- Standard AI Models: Access basic AI models for general tasks.

### Pro Plan (20 USD/month)

- Advanced Search Capabilities: Get more detailed and specific information, including access to specialized data like medical research.
- Year-Specific Searches: Specify the exact year for more precise data retrieval.
- YouTube Integration: Search for information on YouTube and receive relevant video suggestions with summaries.
- Personal Data Handling: Upload personal data, such as images or documents, for the AI to analyze and explain.
- Creating Collections: Organize searches into collections for easier management and access.
- Selecting AI Models: Choose from a variety of AI models, including GPT-4, Claude, DALL-E Playground, and Stable Diffusion.
- Image Generation: Generate images based on user prompts, useful for visual projects or presentations.
- Enhanced Sharing Options: Share search results easily for collaborative projects or sharing findings with others.
- Pro Mode Toggle: Unlock additional functionalities and more powerful search capabilities.

## Video Tutorial about Perplexity AI

{% embed https://youtu.be/ZvvgfCbNWTs?si=oRQ9pHeBJ3cZyrTB %}

[Visit my YouTube Channel](https://www.youtube.com/@proflead/videos?sub_confirmation=1)

## Conclusion

Perplexity AI is a versatile and powerful tool that has helped me with a wide range of tasks, from academic research to content creation. I highly recommend giving it a try. If you decide to sign up for a Pro account, you can unlock additional functionalities and enhance your research and productivity.

**P.S. You can get a $10 discount on the Pro account by using the link in the video description. :)**
proflead
1,891,392
I've created an open source Spring Boot + Next.js starter kit
Seeing how these are quite popular currently (mainly in nextjs space) and how one still doesn't exist...
0
2024-06-17T15:17:38
https://dev.to/nermin_karapandzic/ive-created-an-open-source-spring-boot-nextjs-starter-kit-6fk
Seeing how these are quite popular currently (mainly in the Next.js space) and how one still doesn't exist with Spring Boot as a backend, I've decided to create one myself.

Here is the repository: https://github.com/NerminKarapandzic/spring-boot-nextjs-starter-kit

There's nothing special about the integration with Next.js here; you could easily swap it for any frontend framework.

The starter kit includes:

- Authentication with email and password
- Authentication with OAuth2 providers (Google, Facebook, GitHub, Okta)
- RBAC
- Password reset
- Email sending with SMTP, using Mailpit locally
- Basic S3 integration
- Pages for all of the basic user stuff on the frontend

and so on...

There is a YouTube video that I made while building this; if you're interested, you can check that out as well:

{% youtube EbIps-suESk %}

I plan to add smaller updates to this over time: things like user impersonation, maybe some basic analytics (perhaps a self-hosted solution), observability, newsletter subscriptions, scheduling, etc.

I'd love to get outside contributors as well, so feel free to make suggestions or straight-up PRs :)
nermin_karapandzic
1,891,391
Asymmetric Encryption in 256 characters or less
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
27,753
2024-06-17T15:15:48
https://dev.to/kalkwst/assymetric-encryption-in-256-characters-or-less-4lap
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Asymmetric encryption is like a locked box. Anyone can put a letter through the slot (public key), but only the owner can open it and get the letter (private key). This ensures that only the intended recipient can read the message.
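The locked-box analogy maps directly onto real APIs. As a minimal sketch using Node's built-in `crypto` module (demo-sized key, with key management and padding choices left out):

```js
// Anyone with the public key can "drop a letter through the slot";
// only the holder of the private key can open the box.
const crypto = require('crypto');

const { publicKey, privateKey } = crypto.generateKeyPairSync('rsa', {
  modulusLength: 2048, // demo-sized key
});

const letter = Buffer.from('meet at dawn');
const sealed = crypto.publicEncrypt(publicKey, letter);   // anyone can do this
const opened = crypto.privateDecrypt(privateKey, sealed); // only the owner can

console.log(opened.toString()); // "meet at dawn"
```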
kalkwst
1,891,389
Amazon DevOps Guru for Serverless Applications - Part 11: Anomaly Detection on SNS (kind of)
Introduction In the 1st part of the series we introduced the Amazon DevOps Guru service,...
24,936
2024-06-17T15:14:50
https://dev.to/aws-builders/amazon-devops-guru-for-the-serverless-applications-part-11-anomaly-detection-on-sns-kind-of-388
aws, serverless, devops, aiops
## Introduction

In the [1st part of the series](https://dev.to/aws-builders/amazon-devops-guru-for-the-serverless-applications-part-1-introduction-to-devops-guru-39i0) we introduced the Amazon DevOps Guru service, described its value proposition and the benefits of using it, and explained how to configure it. We also need to go through all the steps in the [2nd part of the series](https://dev.to/aws-builders/amazon-devops-guru-for-the-serverless-applications-part-2-setting-up-the-sample-application-for-the-anomaly-detection-167) to set everything up. In the subsequent parts we saw DevOps Guru in action, detecting anomalies on DynamoDB and Aurora Serverless v2, on API Gateway and Lambda alone, and also in conjunction with other AWS serverless services like SQS, Kinesis, Step Functions and Aurora Serverless v2. In this part of the series I'd like to explore whether DevOps Guru will recognize anomalies with Amazon Simple Notification Service (SNS).

## Detecting anomalies with SNS

Let's enhance our architecture so that, when a new product is created, we send a notification to an SNS topic, which then delivers it to an external HTTP(S) endpoint.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/25wqlv01b7min857e0e1.png)

Now let's imagine that this HTTP(S) endpoint was moved or answers with the 500 error code, so that SNS considers the notification as not delivered. I was able to reproduce this scenario on AWS by deploying a temporary API Gateway endpoint and configuring it as an SNS subscription. I needed to confirm the subscription, so I put a Lambda function behind my temporary API Gateway endpoint, triggered by the POST request (this is what SNS sends to the configured HTTP(S) endpoint as a confirmation request). Then I logged the whole HTTP body of the POST request in my Lambda function, copied the subscription URL (which is part of the HTTP body), and entered it in the browser. With the SNS subscription confirmed, I then deleted my temporary API Gateway endpoint, so that the SNS HTTP(S) notification was sent but could no longer be delivered to the endpoint. Then I sent several hundred create-product requests via the [hey tool](https://github.com/rakyll/hey), like:

```
hey -q 1 -z 15m -c 1 -m PUT -d '{"id": 1, "name": "Print 10x13", "price": 0.15}' -H "X-API-Key: XXXa6XXXX" https://XXX.execute-api.eu-central-1.amazonaws.com/prod/products
```

All of them failed to be delivered and were retried (without success) 3 times by default; see [Amazon SNS message delivery retries](https://docs.aws.amazon.com/sns/latest/dg/sns-message-delivery-retries.html). Despite the NumberOfNotificationsFailed metric showing up in CloudWatch (see the blue line), no DevOps Guru insight was created, even after retrying this experiment several times.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ha8dovpb7hyt0ewya2fq.png)

Directly after this experiment I immediately started another experiment, fetching a non-existent product from the database, which caused HTTP error 404 (Not Found) on API Gateway.
I was then surprised that the following insight was created by DevOps Guru right away:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2taj8adxum08hzgarre5.png)

with the following anomalous metrics, **NumberOfNotificationsFailed Average** (for the SNS anomaly) and **4XX Error Average** (for the API Gateway anomaly):

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9qnn0ljqx66gc74iysr.png)

and the following graphed anomalies:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bwnqxkmigudby09uatdz.png)

## Conclusion

In this article we explored whether DevOps Guru recognizes anomalies with Amazon Simple Notification Service (SNS), such as an HTTPS subscription whose endpoint no longer exists (no connection can be established) or answers with HTTP 500. We saw that DevOps Guru seemed not to react to the anomalous metric NumberOfNotificationsFailed Average alone, as it considers this not to be an anomaly (which is wrong, in my opinion). It only seems to create a DevOps Guru insight when at least one other anomalous metric is detected. I will approach the DevOps Guru team with my findings so that they can verify the experiment, look behind the scenes at what's happening, and hopefully improve the DevOps Guru service to correctly handle this SNS anomaly as well.
vkazulkin
1,891,390
Exclusive facilities and a deep spiritual experience are available in the Umroh Plus package.
2024 Budget Package with 9 days of travel. Departure calendar: 10 August. 14 September. 12...
0
2024-06-17T15:13:40
https://dev.to/umrohplus/fasilitas-eksklusif-dan-pengalaman-spiritual-yang-mendalam-tersedia-dalam-paket-umroh-plus-74n
2024 Budget Package with 9 days of travel.

Departure calendar: 10 August, 14 September, 12 October, 9 November, 7 December.

SUB-JED route with no transit, flying Lion Air/Citilink/equivalent.

Package price: Rp 28,300,000, all inclusive.

Selected accommodation in Mecca and Medina:
Mecca hotel: Fajr Badee or equivalent
Medina hotel: Abraj Tabah or equivalent

The full price includes: round-trip flight tickets, hotel & accommodation, visa, supplies, ziyarah, mutawwif (guide), bus, 3 meals a day, 5 L of Zamzam water.

Exclusive BONUSES included:
1. FREE Umrah every day, up to 4 Umrahs
2. FREE high-speed train Medina - Mecca
3. FREE extra city tour to Taif
4. FREE Umrah skincare

Grab this promo now; the quota is limited to 30 registrants. [Umroh Plus](https://umrohplus.biz.id)
umrohplus
1,891,387
The Art of Connecting Through Tastes and Hobbies
In everyday life, we find multiple opportunities to connect with others through...
0
2024-06-17T15:07:44
https://dev.to/angelique0908/el-arte-de-conectar-a-traves-de-los-gustos-y-las-aficiones-pgk
productivity, career, learning, google
In everyday life, we find multiple opportunities to connect with others through shared interests. The ability to talk about our tastes and hobbies not only enriches our conversations but also strengthens personal and professional bonds.

**Understanding Tastes and Hobbies.** One of the first steps toward establishing a meaningful connection is understanding our own tastes and hobbies. Reflecting on what we are passionate about allows us to be more authentic in our interactions. Moreover, when we talk about our interests with confidence and enthusiasm, we move beyond superficiality and achieve a deeper interaction. For example, if you are a music lover, talking about your favorite band can open the door to interesting discussions and deep connections.

**The Importance of Asking.** We cannot underestimate the power of asking questions. Taking a genuine interest in other people's passions is a powerful way to show respect and appreciation. Questions like "What got you interested in photography?" or "What is your favorite movie and why?" are excellent starting points. These questions also have the potential to reveal common ground that can become the foundation of future interactions.

**Key Phrases for Enriching Conversations.** This is where the need arises to enrich our vocabulary and learn appropriate phrases for talking about these topics. In this sense, finding resources that help us improve our conversation skills is essential. These special phrases can be a great help both for beginners and for those more experienced in the art of conversation.

**Active Listening and Appropriate Responses.** Listening actively is as important as speaking. This means paying attention to what the other person says and responding in a way that makes them feel understood. Sometimes a simple statement like "Wow, that sounds incredible" can make the difference between a superficial conversation and an authentic connection.

To conclude, mastering the art of conversing about tastes and hobbies is an invaluable tool that not only improves our personal relationships but can also be crucial in the professional sphere. So the next time you find yourself in a conversation, don't hesitate to ask and share about those little details that make us unique.
angelique0908
1,891,386
LeetCode Day 10 Stack & Queue Part 2
LeetCode No.150. Evaluate Reverse Polish Notation You are given an array of strings tokens...
0
2024-06-17T15:07:06
https://dev.to/flame_chan_llll/leetcode-day10-stackqueue-part-2-lma
leetcode, java, algorithms, datastructures
# LeetCode No. 150. Evaluate Reverse Polish Notation

You are given an array of strings `tokens` that represents an arithmetic expression in Reverse Polish Notation. Evaluate the expression. Return an integer that represents the value of the expression.

Note that:

- The valid operators are '+', '-', '*', and '/'.
- Each operand may be an integer or another expression.
- The division between two integers always truncates toward zero.
- There will not be any division by zero.
- The input represents a valid arithmetic expression in Reverse Polish Notation.
- The answer and all the intermediate calculations can be represented in a 32-bit integer.

Example 1:
Input: tokens = ["2","1","+","3","*"]
Output: 9
Explanation: ((2 + 1) * 3) = 9

Example 2:
Input: tokens = ["4","13","5","/","+"]
Output: 6
Explanation: (4 + (13 / 5)) = 6

Example 3:
Input: tokens = ["10","6","9","3","+","-11","*","/","*","17","+","5","+"]
Output: 22
Explanation: ((10 * (6 / ((9 + 3) * -11))) + 17) + 5 = ((10 * (6 / (12 * -11))) + 17) + 5 = ((10 * (6 / -132)) + 17) + 5 = ((10 * 0) + 17) + 5 = (0 + 17) + 5 = 17 + 5 = 22

Constraints:
- 1 <= tokens.length <= 10^4
- tokens[i] is either an operator: "+", "-", "*", or "/", or an integer in the range [-200, 200].

[Original Page](https://leetcode.com/problems/evaluate-reverse-polish-notation/description/)

```java
public int evalRPN(String[] tokens) {
    Deque<Integer> deque = new LinkedList<>();
    for (String s : tokens) {
        char ch = s.charAt(s.length() - 1);
        if (ch >= '0' && ch <= '9') {
            deque.push(Integer.valueOf(s));
        } else {
            int num2 = deque.pop();
            int num1 = deque.pop();
            deque.push(calculation(num1, num2, ch));
        }
    }
    return deque.pop();
}

public int calculation(int num1, int num2, char sign) {
    return switch (sign) {
        case '+' -> num1 + num2;
        case '-' -> num1 - num2;
        case '*' -> num1 * num2;
        case '/' -> num1 / num2;
        default -> Integer.MAX_VALUE;
    };
}
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u7oyievuwm1t1gfv9r9u.png)

Be careful:

- The string array's elements consist only of numbers or '+', '-', '*', and '/', so we divide them into numbers and non-numbers.
- In Java we cannot directly check whether a String is a number, so we have to convert it to a char and check whether the ***last character*** of the String is a digit (the last character, because negative numbers start with '-' as well).
- Don't forget to cast the String to int (actually it casts the String to Integer and auto-unboxes the Integer to int).

# LeetCode 347. Top K Frequent Elements

Given an integer array `nums` and an integer `k`, return the `k` most frequent elements. You may return the answer in any order.

Example 1:
Input: nums = [1,1,1,2,2,3], k = 2
Output: [1,2]

Example 2:
Input: nums = [1], k = 1
Output: [1]

Constraints:
- 1 <= nums.length <= 10^5
- -10^4 <= nums[i] <= 10^4
- k is in the range [1, the number of unique elements in the array].
- It is guaranteed that the answer is unique.

Follow up: Your algorithm's time complexity must be better than O(n log n), where n is the array's size.
[Original Page](https://leetcode.com/problems/top-k-frequent-elements/description/)

The solution needs three steps:

1. Count statistics for the elements
2. Sort by frequency
3. Return the top k values

From the three key points above, we need at least (1) a Map and (2) sorting. Here is code that implements this method directly:

```java
public int[] topKFrequent(int[] nums, int k) {
    Map<Integer, Integer> map = new HashMap<>();
    for (int i : nums) {
        map.put(i, map.getOrDefault(i, 0) + 1);
    }
    List<Map.Entry<Integer, Integer>> result = map.entrySet()
        .stream()
        .sorted(Map.Entry.<Integer, Integer>comparingByValue(Comparator.reverseOrder()))
        .collect(Collectors.toList());
    List<Integer> out = new ArrayList<>();
    for (int i = 0; i < k; i++) {
        Map.Entry<Integer, Integer> temp = result.get(i);
        out.add(temp.getKey());
    }
    return out.stream().mapToInt(Integer::valueOf).toArray();
}
```

### Refining the code above

```java
public int[] topKFrequent(int[] nums, int k) {
    Map<Integer, Integer> map = new HashMap<>();
    for (int i : nums) {
        map.put(i, map.getOrDefault(i, 0) + 1);
    }
    List<Integer> result = map.entrySet()
        .stream()
        .sorted(Map.Entry.<Integer, Integer>comparingByValue(Comparator.reverseOrder()))
        .limit(k)
        .map(Map.Entry::getKey)
        .collect(Collectors.toList());
    return result.stream().mapToInt(Integer::valueOf).toArray();
}
```

Time: O(n log n) + n + k in detail, O(n log n) overall.

### To Be Continued: PriorityQueue
flame_chan_llll
1,891,384
Navigating Decentralization: Building Applications with Amazon Managed Blockchain
Navigating Decentralization: Building Applications with Amazon Managed Blockchain The...
0
2024-06-17T15:05:30
https://dev.to/virajlakshitha/navigating-decentralization-building-applications-with-amazon-managed-blockchain-5c0
![usecase_content](https://cdn-images-1.medium.com/proxy/1*zqfBK-ivKOyE5TLv4mHkkA.png) # Navigating Decentralization: Building Applications with Amazon Managed Blockchain The emergence of blockchain technology has ushered in a new era of secure and transparent data management. As organizations explore the potential of decentralized applications (dApps), the need for robust and scalable blockchain infrastructure becomes paramount. Amazon Managed Blockchain (AMB) emerges as a powerful solution, simplifying the creation and management of scalable blockchain networks. This blog post delves into the capabilities of AMB, exploring its use cases, comparing it with other cloud offerings, and ultimately, envisioning an advanced use case that leverages the broader AWS ecosystem. ### Understanding Amazon Managed Blockchain Amazon Managed Blockchain is a fully managed blockchain service that allows you to create and join blockchain networks with just a few clicks. AMB eliminates the heavy lifting involved in setting up, operating, and scaling your own blockchain network. **Key Features:** * **Choice of Networks:** AMB supports two popular blockchain frameworks: Hyperledger Fabric and Ethereum. This flexibility enables you to select the framework that best aligns with your application requirements. * **Scalability and Performance:** AMB leverages the power and elasticity of AWS, allowing your blockchain network to seamlessly scale to meet growing demands. * **Security:** Security is deeply integrated into AMB. It offers network security through Amazon VPC, manages access control with AWS IAM, and ensures data protection with encryption at rest and in transit. * **Simplified Management:** As a fully managed service, AMB takes care of tasks like provisioning nodes, setting up network components, and managing software updates, allowing you to focus on application development. ### Use Cases: Unlocking the Potential of AMB The versatility of AMB makes it suitable for a wide array of use cases across different industries. Here are a few examples: #### 1. Supply Chain Transparency **Challenge:** Global supply chains are often complex and opaque, making it difficult to track goods and ensure authenticity. **Solution:** AMB can power a decentralized supply chain management system. Each transaction, from the origin of materials to the final product delivery, can be recorded on the blockchain. This immutable record provides real-time visibility into the supply chain, allowing businesses and consumers to track products, verify their origins, and combat counterfeiting. #### 2. Secure Data Sharing and Collaboration **Challenge:** Sharing sensitive data securely between multiple parties can be challenging. Traditional methods often involve centralized control, which can be vulnerable to data breaches. **Solution:** AMB enables the creation of a permissioned blockchain network where participants can share data securely and transparently. This is particularly useful in industries like healthcare, where patients can grant controlled access to their medical records to different healthcare providers. #### 3. Decentralized Identity Management **Challenge:** Identity theft and fraud are persistent problems in the digital age. Traditional identity verification systems are susceptible to breaches and centralized control. **Solution:** AMB can underpin a decentralized identity management system. 
Individuals can store their verified credentials on the blockchain, giving them greater control over their data and reducing the risk of identity theft. This approach streamlines KYC (Know Your Customer) processes and simplifies online identity verification. #### 4. Digital Asset Tracking and Management **Challenge:** Tracking the ownership and transfer of digital assets like digital art, in-game items, and virtual real estate can be complex. **Solution:** AMB empowers the creation of secure and transparent systems for managing digital assets. NFTs (Non-Fungible Tokens) can represent these assets on the blockchain, ensuring their uniqueness and enabling verifiable ownership and transfer. #### 5. Streamlining Voting Systems **Challenge:** Traditional voting systems can be vulnerable to manipulation and fraud. Ensuring transparency and trust in elections is crucial. **Solution:** AMB can form the foundation of a secure and transparent electronic voting system. Votes can be recorded as transactions on the blockchain, making them tamper-proof and auditable. This approach increases trust in the electoral process and mitigates the risk of fraud. ### Comparing Cloud Blockchain Offerings While AMB is a robust solution, it's important to be aware of other cloud providers that offer blockchain services: * **Microsoft Azure Blockchain Service:** Similar to AMB, Azure provides a managed blockchain service that supports Quorum, Ethereum, and Hyperledger Fabric. * **Google Cloud Blockchain Node Engine:** This fully managed service focuses on Hyperledger Fabric, enabling you to deploy nodes and manage your network. * **IBM Blockchain Platform:** IBM offers a comprehensive blockchain platform, supporting Hyperledger Fabric and allowing deployment on various environments, including multi-cloud and on-premise. Each platform has its strengths and weaknesses in terms of supported frameworks, pricing models, and features. Choosing the right platform depends on your specific project requirements. ### An Advanced Use Case: Building a Decentralized Supply Chain Finance Platform Imagine a decentralized supply chain finance platform built on AMB. This platform connects suppliers, buyers, and financiers in a transparent and secure ecosystem. Here's a breakdown of how it works and how it utilizes various AWS services: 1. **Smart Contracts on AMB:** The core logic of the platform is implemented using smart contracts deployed on an AMB Hyperledger Fabric network. Smart contracts automate processes like invoice generation, financing requests, and payment releases based on predefined conditions. 2. **Secure Data Storage with Amazon S3:** Sensitive documents like invoices, purchase orders, and bills of lading are encrypted and stored securely on Amazon S3. Decentralized access control ensures that only authorized parties can access these documents. 3. **Identity and Access Management with Amazon Cognito and IAM:** Amazon Cognito provides identity management for users of the platform, while AWS IAM ensures fine-grained access control to blockchain resources and S3 buckets. 4. **Data Analytics with Amazon Athena and QuickSight:** Supply chain data stored on the blockchain is queried and analyzed using Amazon Athena. Insights derived from this data, such as delivery times, financing rates, and inventory levels, are visualized using Amazon QuickSight for informed decision-making. 5. 
**Serverless Integration with AWS Lambda:** Events on the blockchain, like the creation of a new invoice or the approval of a financing request, can trigger serverless functions on AWS Lambda. These functions automate tasks like sending notifications, updating external systems, and integrating with other business applications (a rough sketch of such a handler follows at the end of this post).

This comprehensive solution showcases how AMB can be integrated with other AWS services to create a sophisticated, secure, and efficient decentralized application.

### Conclusion

Amazon Managed Blockchain provides a robust and scalable platform for building and deploying blockchain applications. Its support for multiple blockchain frameworks, integration with the broader AWS ecosystem, and simplified management make it an attractive option for businesses looking to leverage the power of decentralized technologies. As the blockchain landscape continues to evolve, platforms like AMB will play an increasingly important role in shaping the future of decentralized applications.
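As promised above, here is a rough sketch of the Lambda handler from step 5. The event shape, field names, and environment variable are hypothetical, made up purely to show the wiring; only the AWS SDK v3 SNS client calls are real APIs.

```js
// Hypothetical handler: reacts to a "new invoice" blockchain event
// and notifies subscribers via SNS. The event fields are assumed,
// not part of any real AMB event schema.
const { SNSClient, PublishCommand } = require('@aws-sdk/client-sns');

const sns = new SNSClient({});

exports.handler = async (event) => {
  const { invoiceId, supplier } = event.detail; // assumed shape

  await sns.send(new PublishCommand({
    TopicArn: process.env.NOTIFY_TOPIC_ARN, // hypothetical topic ARN
    Message: `Invoice ${invoiceId} created by ${supplier}`,
  }));

  return { status: 'notified' };
};
```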
virajlakshitha
1,891,378
Build a waitlist with Clerk user metadata
Fast feedback when building a software-as-a-service application is critical. This is especially true...
0
2024-06-17T15:00:55
https://clerk.com/blog/build-a-waitlist-with-clerk-user-metadata
clerk, security, rbac, javascript
Fast feedback when building a software-as-a-service application is critical. This is especially true in the early days of building. The quicker you can get a working version of your product in the hands of users, the faster you can collect input and make decisions based on that input. Doing so can make an incredible difference in the success of your online business.

One option is to use a platform to collect emails and notify those users when the application is ready to test, but wouldn't it be nice to have them sign up for the application directly first?

In this article, you'll learn how to configure Clerk to allow users to sign up for your application but restrict their access until you explicitly allow it. You'll also learn how to create a page to interact with the user's info in Clerk to grant them access to the application.

💡 To follow along with this article, you'll need a [free Clerk account](https://dashboard.clerk.com/sign-up), as well as Node.js installed on your computer.

## Follow along using the Cooking with Clerk repository

Cooking with Clerk is an open-source web application built with Clerk and Next.js that will be used to apply the techniques outlined in this article. The application is an AI-powered recipe generator that uses OpenAI's API as part of the generative process. During development, we don't want to allow just anyone to use it, since that can easily start increasing our cost to use the OpenAI API.

If you want to follow along, clone the repository to your computer and follow the steps outlined in the `readme.md` file to get your local environment set up. The source code can be found at [https://github.com/bmorrisondev/cooking-with-clerk](https://github.com/bmorrisondev/cooking-with-clerk).

The remainder of this article assumes you will be following along using the `waitlist-article` branch; however, this is entirely optional. It also assumes you've already signed in with your own account.

To build the waitlist functionality, we'll be performing the following actions:

- Configure session tokens and user metadata to flag users in and out of the waitlist.
- Set up the Clerk middleware to redirect users based on those flags.
- Design an admin dashboard that allows administrators to enable/disable users.

## Configure session tokens and user metadata

Users in Clerk can be configured with [various types of metadata](https://clerk.com/docs/users/metadata) that can store information about that user in JSON format. We can take advantage of this storage mechanism to assign the various flags to users of our application:

- `isBetaUser` can be used to determine if the user has access to test the application while in early development.
- `isAdmin` can be used to determine if the user has access to the admin dashboard that will be created to allow users into the beta.

Let's start by setting the `isAdmin` flag on our own account. Open the Clerk dashboard and navigate to **"Users"** from the left navigation.

![The Clerk Dashboard Users section](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5sxl71se0tahww9mel1j.png)

Select the user you want to allow admin access to, then scroll to the bottom and locate the **Metadata** section. Click the first **"Edit"** button to edit the user's public metadata.

![Editing the user's public metadata in the Clerk Dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pdtpuivbbb9qknjstu82.png)

Paste the following into the JSON editor and click Save.
```json
{
  "isBetaUser": true
}
```

Now even though public metadata is accessible from the front end, we'll be modifying the Clerk middleware to determine where to redirect the user once they've signed in. This means we'll need to add the public metadata to the claims so we have access to it before the user is fully loaded in the front end.

To do this, select **"Sessions"** from the left navigation, then click **"Edit"** in the **Customize session token** section.

![The sessions tab of the Clerk Dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2zaz1wza7slx80cbkpya.png)

Paste the following into the JSON editor and click Save.

```json
{
  "metadata": "{{user.public_metadata}}"
}
```

Every token minted from now on will contain the JSON that is saved to the user's public metadata within the claims of the token.

> ⚠️ It's worth noting that the total size of the authentication object (including custom session claims) cannot exceed 4kb.

## Route users using Clerk middleware

The Clerk middleware runs on every page load to determine if the user is authenticated and is allowed to access the requested resource using the `isProtectedRoute` helper. For example, the following middleware configuration will protect every page that starts with the `/app` route and the `/api` route:

```tsx
// src/middleware.ts
import { clerkMiddleware, createRouteMatcher } from "@clerk/nextjs/server";

const isProtectedRoute = createRouteMatcher([
  '/app(.*)',
  '/api(.*)'
]);

export default clerkMiddleware((auth, req) => {
  if (isProtectedRoute(req)) {
    auth().protect()
  }
})

export const config = {
  matcher: ['/((?!.*\\..*|_next).*)', '/', '/(api|trpc)(.*)'],
};
```

Clerk will automatically parse the session claims (where the public metadata is) within the `auth()` function, which means it's accessible to us during this process like so:

```tsx
const { sessionClaims } = auth()
```

Using this, we can determine if the session claims contain our `isBetaUser` flag. Update the `src/middleware.ts` file to match the following:

```tsx
// src/middleware.ts
// 👉 Update the imports
import { ClerkMiddlewareAuth, clerkMiddleware, createRouteMatcher } from "@clerk/nextjs/server";
import { NextResponse } from "next/server";

const isProtectedRoute = createRouteMatcher([
  '/app(.*)',
  '/api(.*)'
]);

// 👉 Create a type to define the metadata
type UserMetadata = {
  isBetaUser?: boolean
}

export default clerkMiddleware((auth, req) => {
  if (isProtectedRoute(req)) {
    auth().protect()

    // 👉 Use `auth()` to get the sessionClaims, which includes the public metadata
    const { sessionClaims } = auth()
    const { isBetaUser } = sessionClaims?.metadata as UserMetadata

    if (!isBetaUser) {
      // 👉 If the user is not a beta user, redirect them to the waitlist
      return NextResponse.redirect(new URL('/waitlist', req.url))
    }
  }
})

export const config = {
  matcher: ['/((?!.*\\..*|_next).*)', '/', '/(api|trpc)(.*)'],
};
```

From now on, any user that does not have `isBetaUser` defined in their public metadata will instead be redirected to a page that simply tells them that they are on the waitlist. It's also worth noting that since this check is performed after `auth().protect()`, this function will only run if the user is logged in with a Clerk account, preventing it from running when not needed.

To see this in action, start the project on your computer by running `npm run dev` in your terminal and navigate to the URL displayed in the terminal (the default is `http://localhost:3000`, but may differ if another process is using port 3000).
![Cooking with Clerk homepage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/crcsn4encim2szk5535n.png)

Click **"Sign In"** in the upper right and log in with the user account you used during setup. You should be able to access and test the app with no issues.

![Cooking with Clerk with recipes generated](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/65lc2hb5opun92dwxu67.png)

Now sign out using the user menu, and sign in again with a different account. You'll notice that instead of accessing the application, you are redirected to `/waitlist`. This is the middleware at work!

![Cooking with Clerk waitlist](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ko6rt6n8rb4527qmnab.png)

## Creating the admin area

Now that we've built the capability into the app to require the `isBetaUser` flag to be set, we need a way to set this for users interested in testing the app. Sure, it can be done from within the Clerk dashboard, but we can also take advantage of the Clerk SDK to create a page that allows us to perform this action within the app.

Start by creating the `src/app/admin/page.tsx` file and paste the following code into it. This will create a page at `/admin` that displays an empty table.

```tsx
// src/app/admin/page.tsx
import React from 'react'
import {
  Table,
  TableBody,
  TableHead,
  TableHeader,
  TableRow,
} from "@/components/ui/table"
import UserRow from './UserRow'
import { clerkClient } from "@clerk/nextjs/server"

export const fetchCache = 'force-no-store';

async function Admin() {
  // 👉 Gets the users from the Clerk application
  let res = await clerkClient.users.getUserList()
  let users = res.data

  return (
    <main>
      <h1 className='text-2xl font-bold my-2'>Admin</h1>
      <h2 className='text-xl my-2'>Users</h2>
      <Table className='border border-gray-200 rounded-lg'>
        <TableHeader>
          <TableRow>
            <TableHead className="w-[100px]">Name</TableHead>
            <TableHead>Email</TableHead>
            <TableHead className="text-right">Beta enabled?</TableHead>
          </TableRow>
        </TableHeader>
        <TableBody>
          {/* 👉 User records will be displayed here */}
        </TableBody>
      </Table>
    </main>
  )
}

export default Admin
```

Next, we're going to create a client component named `UserRow` that will display a row for each user within the table. Before we do that, however, we need a server action that the `UserRow` component can use to interact with the Clerk Backend SDK to toggle the `isBetaUser` flag within a user's public metadata. Create the `src/app/admin/actions.ts` file and populate it with the following:

```tsx
// src/app/admin/actions.ts
'use server'

import { clerkClient } from "@clerk/nextjs/server"

export async function setBetaStatus(userId: string, status: boolean) {
  await clerkClient.users.updateUserMetadata(userId, {
    publicMetadata: {
      isBetaUser: status
    }
  })
}
```

Now create the `src/app/admin/UserRow.tsx` file with the following contents. This will be used to render each user in a row on the admin page.
```tsx
// src/app/admin/UserRow.tsx
'use client'

import React, { useState } from 'react'
import {
  TableCell,
  TableRow,
} from "@/components/ui/table"
import { Switch } from "@/components/ui/switch"
import { setBetaStatus } from './actions'

// 👉 Define the necessary props we need to render the component
type Props = {
  name: string
  id: string
  emailAddress?: string
  metadata?: UserPublicMetadata
}

function UserRow({ name, id, metadata, emailAddress }: Props) {
  // 👉 Set the initial state of `isBetaUser` based on the metadata
  const [isBetaUser, setIsBetaUser] = useState(metadata?.isBetaUser || false)

  // 👉 Calls the server action defined earlier and sets the state on change
  async function onToggleBetaStatus() {
    try {
      await setBetaStatus(id, !isBetaUser)
      setIsBetaUser(!isBetaUser)
    } catch (err) {
      console.error(err)
    }
  }

  return (
    <TableRow>
      <TableCell className='flex flex-col'>
        <span>{name}</span>
        <span className='italic text-xs text-gray-600'>{id}</span>
      </TableCell>
      <TableCell>{emailAddress}</TableCell>
      <TableCell className="text-right">
        <Switch onCheckedChange={onToggleBetaStatus} checked={isBetaUser} aria-readonly />
      </TableCell>
    </TableRow>
  )
}

export default UserRow
```

Finally, update `src/app/admin/page.tsx` by importing the new component and adding it to the table:

```tsx
// src/app/admin/page.tsx
import React from 'react'
import {
  Table,
  TableBody,
  TableHead,
  TableHeader,
  TableRow,
} from "@/components/ui/table"
import { clerkClient } from "@clerk/nextjs/server"
import UserRow from './UserRow'

export const fetchCache = 'force-no-store';

async function Admin() {
  let res = await clerkClient.users.getUserList()
  let users = res.data

  return (
    <main>
      <h1 className='text-2xl font-bold my-2'>Admin</h1>
      <h2 className='text-xl my-2'>Users</h2>
      <Table className='border border-gray-200 rounded-lg'>
        <TableHeader>
          <TableRow>
            <TableHead className="w-[100px]">Name</TableHead>
            <TableHead>Email</TableHead>
            <TableHead className="text-right">Beta enabled?</TableHead>
          </TableRow>
        </TableHeader>
        <TableBody>
          {users?.map(u => (
            <UserRow
              key={u.id}
              id={u.id}
              name={`${u.firstName} ${u.lastName}`}
              metadata={u.publicMetadata}
              emailAddress={u.emailAddresses[0]?.emailAddress} />
          ))}
        </TableBody>
      </Table>
    </main>
  )
}

export default Admin
```

Open the app in your browser again and navigate to `/admin`; you should see a list of the users from your Clerk application displayed in a table. Notice how only the account you manually added `isBetaUser` to during the first part of this guide has the toggle enabled under the "Beta enabled?" column.

![Cooking with Clerk admin panel](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/otrdk19sr7v0aqf3nr7d.png)

Now, if you toggle another user on and log in again with that account, you should be redirected to `/app` instead of `/waitlist`! Furthermore, if you open the application in the Clerk dashboard and review the user's public metadata, you should see that `isBetaUser` has been enabled via the dashboard.

## Securing the admin page

At this point, we've effectively built the waitlist functionality, as well as created a polished experience for controlling the flags enabled on the user account. The problem is that the middleware is set up only to protect `/app` and not `/admin`, so anyone with the beta flag could technically access the admin panel.
With a few minor tweaks to `middleware.ts`, we can also prevent users from accessing the admin panel:

```tsx
// src/middleware.ts
import { ClerkMiddlewareAuth, clerkMiddleware, createRouteMatcher } from "@clerk/nextjs/server";
import { NextResponse } from "next/server";

const isProtectedRoute = createRouteMatcher([
  '/app(.*)',
  '/api(.*)',
  '/admin(.*)',
]);

type UserMetadata = {
  isBetaUser?: boolean
  isAdmin?: boolean
}

export default clerkMiddleware((auth, req) => {
  if (isProtectedRoute(req)) {
    auth().protect()

    const { sessionClaims } = auth()
    const { isAdmin, isBetaUser } = sessionClaims?.metadata as UserMetadata

    if (isAdmin) {
      // 👉 If the user is an admin, let them proceed to anything
      return
    }

    if (!isAdmin && req.nextUrl.pathname.startsWith('/admin')) {
      // 👉 If the user is not an admin and they try to access the admin panel, return an error
      return NextResponse.error()
    }

    if (!isBetaUser) {
      return NextResponse.redirect(new URL('/waitlist', req.url))
    }
  }
})

export const config = {
  matcher: ['/((?!.*\\..*|_next).*)', '/', '/(api|trpc)(.*)'],
};
```

Now whenever someone tries to access `/admin` without the `isAdmin` flag set in their Clerk user metadata, they'll get a 404 page instead of the admin panel.

## Conclusion

Clerk user metadata can be extremely useful for storing various information about the user. This is simply one example of how to use metadata. If you need some more inspiration, we also have a blog post showing how to build an onboarding flow using a similar approach that I recommend reading!

Do you have an interesting way you've used user metadata in your application? Share it on X and let us know by tagging [@clerkdev](https://x.com/clerkdev)!
brianmmdev
1,890,997
Resolve Content Security Policy (CSP) Issues in Your Discord Activity Using a Node.js Proxy
If you're building a Discord Activity, you may encounter issues with Content Security Policy (CSP)...
0
2024-06-17T15:00:00
https://dev.to/waveplay/resolve-content-security-policy-csp-issues-in-your-discord-activity-using-a-nodejs-proxy-2634
discord, programming, node, javascript
If you're building a **[Discord Activity](https://support.discord.com/hc/en-us/articles/4422142836759-Activities-on-Discord)**, you may encounter issues with **[Content Security Policy (CSP)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP)** restrictions. **CSP** is a security feature that helps prevent **[cross-site scripting attacks](https://owasp.org/www-community/attacks/xss)** by restricting the resources a web page can load. However, it can sometimes interfere with loading external resources like fonts or media in your activity.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h4rr7xajb7ggu14gnlpw.png)

The recommended fix is to use Discord's **[URL Mapping](https://discord.com/developers/applications)** feature to rewrite URLs. However, this method has limitations and may not work in all cases.

## Proxies to the Rescue

An alternative solution is to use a **[proxy server](https://developer.mozilla.org/en-US/docs/Web/HTTP/Proxy_servers_and_tunneling)** to fetch external resources and serve them from your server. This way, you can bypass **CSP** restrictions and load resources without any issues.

The way it works is simple:

1. Your activity makes a request to your server for an external resource.
2. Your server fetches the resource and streams it back as a response.
3. Your activity receives the resource from your server, bypassing **CSP** restrictions.

This method allows you to load any content from any source without having to map each domain individually in Discord's **URL Mapping**. It may sound complex, but it's **[literally one line of code](#create-your-own-proxy)** if you created your **Discord Activity** with **[Robo.js](https://roboplay.dev/docs)**.

## Create a Discord Activity Project

**[Create a Discord Activity with Robo.js](https://dev.to/waveplay/how-to-build-a-discord-activity-easily-with-robojs-5bng)** if you don't have one already:

```bash
npx create-robo my-activity -k activity
```

We'll be using **[React](https://react.dev)** for this example, but you can apply the same principles to any frontend framework or vanilla **[HTML](https://developer.mozilla.org/en-US/docs/Web/HTML)** and **[CSS](https://developer.mozilla.org/en-US/docs/Web/CSS)**.

We highly recommend using **[create-robo](https://docs.roboplay.dev/cli/create-robo)** to create a **Discord Activity** project. **[Robo.js](https://roboplay.dev/docs)** provides a lot of features and tools to help you build your activity faster and more efficiently, such as **[Multiplayer Support](https://dev.to/waveplay/how-to-add-multiplayer-to-your-discord-activity-lo1)**, **[Easy Hosting](https://docs.roboplay.dev/discord-activities/hosting)**, **[Streamlined Tunnels](https://docs.roboplay.dev/discord-activities/tunnels)**, **[Built-in Database](https://docs.roboplay.dev/robojs/flashcore)**, **[Plugin System](https://docs.roboplay.dev/plugins/overview)**, and so much more!

{% embed https://dev.to/waveplay/how-to-build-a-discord-activity-easily-with-robojs-5bng %}

## Making a Music Player

Let's say you want to add an audio player to your activity that plays music from an external source. You might run into **CSP** issues when trying to load the audio file due to restrictions on loading external resources.
Let's create a simple music player using the `<audio>` element in **React** by adding a new file named `Player.tsx` inside the `/src/app` folder: ```jsx import { useState, useRef } from 'react' export const Player = (props) => { const { url } = props const audioRef = useRef(null) const [isPlaying, setIsPlaying] = useState(false) const togglePlayPause = () => { const audio = audioRef.current if (audio && isPlaying) { audio.pause() } else if (audio && !isPlaying) { audio.play() } setIsPlaying(!isPlaying) } return ( <> <audio ref={audioRef} src={url} preload="auto" /> <button onClick={togglePlayPause}>{isPlaying ? 'Pause' : 'Play'}</button> </> ) } ``` Then update your `Activity.tsx` file to use it: ```jsx import { Player } from './Player' const ExternalUrl = 'https://media.waveplay.com/t/ckwldfuiq6608re6x8dzc5tyt.mp3' export const Activity = () => { return ( <div> <img src="/rocket.png" className="logo" alt="Discord" /> <h1>Hello, World</h1> <Player url={ExternalUrl} /> <p> <small> Powered by{' '} <a className="robojs" href="https://roboplay.dev/docs"> Robo.js </a> </small> </p> </div> ) } ``` In this example, we're loading an audio file from an external source and playing it using the `<audio>` element. However, this also triggers a **CSP** violation because `https://media.waveplay.com` is not whitelisted in **[Discord's Proxy](https://docs.roboplay.dev/discord-activities/proxy)**. Let's fix this by creating a proxy! ## Create Your Own Proxy Create a new file named `proxy.js` inside the `/src/api` folder: ```js export default async (request) => { return fetch(request.query.url) } ``` _That's it!_ Modify as you see fit to improve security. The **[@robojs/server](https://docs.roboplay.dev/plugins/server)** plugin used in this **Robo.js** project extends the standard web **[Request](https://developer.mozilla.org/docs/Web/API/Request)** and **[Response](https://developer.mozilla.org/docs/Web/API/Response)** interfaces which work seamlessly with **[Fetch API](https://developer.mozilla.org/docs/Web/API/Fetch_API)**, which is why we can return the result directly. > **Heads up!** You'll need at least **@robojs/server** version `0.5.3` or newer. Now, update your `Activity.tsx` file to use the proxy: ```jsx <Player url={'/api/proxy?url=' + ExternalUrl} /> ``` By adding the `/api/proxy?url=` prefix to the audio URL, we're telling our server to fetch the audio file and serve it back to the activity without triggering **CSP** violations. ![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u3obwnrg2yk3jkupu8jy.png) You can use this method to proxy simple external resources in your activity. For more complex scenarios, such as web pages, our upcoming **@robojs/browser** plugin will provide a more robust solution. ## Security Implications While this method is effective for bypassing **CSP** restrictions, it does introduce additional latency because your server has to fetch the resource before serving it back to the activity. This may impact the performance of your activity, especially for large files or high traffic, so make sure your **[Hosting Service](https://docs.roboplay.dev/discord-activities/hosting)** can handle it. Be aware of potential security risks when using a proxy server, such as **[URL Injections](https://cheatsheetseries.owasp.org/cheatsheets/Unvalidated_Redirects_and_Forwards_Cheat_Sheet.html)** and **[SSRF (Server-Side Request Forgery)](https://owasp.org/Top10/A10_2021-Server-Side_Request_Forgery_(SSRF))** attacks. 
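Since the one-line proxy above fetches whatever URL it's given, a common hardening step is a host allowlist. Here's a rough sketch using the same handler shape; the allowed host and the error responses are placeholder choices of mine, not part of the template:

```js
// Hypothetical hardened proxy: only fetch from hosts you explicitly trust
const ALLOWED_HOSTS = new Set(['media.waveplay.com']) // placeholder allowlist

export default async (request) => {
  let target
  try {
    target = new URL(request.query.url) // throws on malformed URLs
  } catch {
    return new Response('Invalid URL', { status: 400 })
  }

  if (!ALLOWED_HOSTS.has(target.hostname)) {
    return new Response('Forbidden', { status: 403 })
  }

  return fetch(target)
}
```

This keeps the convenience of a proxy while closing off arbitrary-destination requests.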
Always use Discord's **URL Mapping** whenever possible and only resort to proxies when necessary. ## Easy, Right? By creating a simple proxy server, you can bypass **CSP** restrictions and load external resources in your **Discord Activity** without any issues. This method is easy and effective, allowing you to focus on building your activity without worrying about security restrictions. You can find the complete source code for this example available in **TypeScript** as a **[Robo.js Template](https://docs.roboplay.dev/templates/overview)**. ➞ [🔗 **Template:** Music Player Proxy](https://github.com/Wave-Play/robo.js/tree/main/templates/discord-activities/music-player-proxy) Don't forget to **[join our Discord server](https://roboplay.dev/discord)** to chat with other developers, ask questions, and share your projects. We're here to help you build amazing apps with **Robo.js**! 🚀 {% embed https://roboplay.dev/discord %} Our very own Robo, **Sage**, is there to answer any questions about Robo.js, Discord.js, and more!
waveplay-staff
1,891,371
How to Network When You Don’t Have a Big Social Circle
Networking can feel like a daunting task, especially when your social circle is small. Trust me, I’ve...
0
2024-06-17T14:59:47
https://dev.to/tectrain_academy/how-to-network-when-you-dont-have-a-big-social-circle-44fp
beginners, learning, discuss, softwaredevelopment
Networking can feel like a daunting task, especially when your social circle is small. Trust me, I've been there! For more than four years, I've worked as a social media and content manager, and I've been on a mission to build a meaningful professional network. **By using tools like LinkedIn, community discussion platforms, and webinars, you can build meaningful connections that will benefit your professional journey.**

Today, I want to share some of the tips and strategies that have worked for me. Hopefully, you'll find them helpful as you work on building your own network.

## _1- Professional + Social + Network = LinkedIn_

I started by creating a LinkedIn profile that showcases my skills, experiences, and goals. First, I found my friends, then went on to connect with my professors and people I had worked with. I joined industry groups related to my field of interest. For this part, **don't be shy to engage in discussions or share your insights**. You can send connection requests to people you admire or want to learn from by personalizing your invitations with a brief note explaining why you'd like to connect. It's a friendly and professional way to grow your network.

[![LinkedIn account Aslihan Kilic](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0zzgiu7vr8zr4xx4aef8.png)](https://www.linkedin.com/in/aslihan-kilic/)

Think of LinkedIn as a virtual coffee shop where you can meet new people, exchange ideas, and build relationships that can lead to exciting opportunities. Just like in a real coffee shop, be respectful and genuine. This platform is an excellent starting point for anyone looking to broaden their professional horizons.

## _2- Community-Based Discussion Forums_

**So we already know Dev.to is a great choice:** I am writing to share my ideas, and you are reading these sentences. That's why we are here :)

[Reddit](https://www.reddit.com/) and **Dev.to** are for everyone who enjoys learning and sharing knowledge, not just computer geeks. They provide a forum for discussing trends, posing questions, and interacting with like-minded individuals. You can join forums on Dev.to, for instance, that discuss design, web development, or even career guidance. On the other hand, Reddit offers a plethora of subreddits (forums) where you can participate in conversations about almost any subject. These resources are priceless, whether you're looking for help with a code issue or want to learn more about a certain sector.

## _3- Online Webinars_

Webinars are another fantastic way to expand your network and gain valuable insights. Even while researching the topic you're interested in on Google, you can find many free or paid online options. There are two main sources that I follow and recommend, which offer IT-related options: [Gartner](https://www.gartner.com/en) and [Tectrain](https://tectrain.ch/en/academy). You can find online events on various topics almost every month on these sources. By following these sources on social media channels or their websites, I register for the webinars that interest me and add them to my calendar. Thanks to the webinars I attended, I learned to find the right answers to the questions I had on current topics.

Webinars usually include interactive areas. While the speaker presents the topic, I can ask questions through live chat or join the live Q&A sessions to discuss the topics I want to resolve.
**In this way, I can get answers to my questions from experts and also learn different perspectives from other participants.** I think it's the greatest value that today's technology brings to those who want to improve themselves.

Make it a habit to attend webinars that align with your interests and career goals. If you want to learn top trends impacting your IT talent in 2024, [you can join a Gartner webinar](https://www.gartner.com/en/webinar/609530/1356460), or if you are interested in [iSAQB-related topics](https://www.linkedin.com/events/tecstreamwebinarseries-building7204757862399504384/comments/) like Strategies for Software Architecture Success, you can join Tectrain's online webinar sessions. **These are all free.** (PS: If you want to earn a certificate in specific topics like [iSAQB](https://tectrain.ch/en/isaqb) or [SAFe](https://tectrain.ch/en/safe), there are paid options you can find.)

**Take notes, ask questions, and follow up with the speakers or other participants on LinkedIn afterward. It's a straightforward way to connect with like-minded individuals who share your passion for learning and professional growth.**

**If you have other channels you recommend, you can share them in the comments.** I can't wait to check them out. Learning new things and making connections along the way is great for both personal and professional development, in my opinion. Maybe the connections we make will bring us one step closer to our goals. The ideas we share can give birth to new ideas, and maybe one day, these connections will turn into products or projects that help many people. The mere possibility of this increases my enthusiasm for creating a community and networking with people. **If you think the same, you can start by taking a step today.**

## Find Your Path and Connect with Your Community

At the end of the day, the key to effective networking is to have a clear goal. **Are you looking for a job? Do you want to learn new skills? Or are you simply looking to meet people who share your interests?** Once you've defined your main objective, choose the best platform or method to reach your goal.

Remember, networking is not just about meeting people; it's about building relationships and being part of a community. Whether you choose LinkedIn, discussion platforms like Dev.to and Reddit, or online webinars, each method offers unique opportunities to connect and grow. **So, take the leap, start reaching out, and watch your social circle expand!**
tectrain_academy
1,891,383
SHA-256: The Heartbeat of Cryptographic Security in 256 Characters
SHA-256 (Secure Hash Algorithm 256-bit): "SHA-256 is a cryptographic hash function that converts...
0
2024-06-17T14:59:35
https://dev.to/harish_05/sha-256-the-heartbeat-of-cryptographic-security-in-256-characters-3i8d
devchallenge, cschallenge, computerscience, beginners
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mtdwp8gdmscaygnwz8nf.jpg)

## SHA-256 (Secure Hash Algorithm 256-bit)

"SHA-256 is a cryptographic hash function that converts input data into a fixed 256-bit hash. It's widely used for data integrity, digital signatures, and secure password storage. Each unique input produces a practically unique hash, making it crucial for verifying data authenticity and security."
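To make the idea concrete, here's a minimal sketch using Node.js's built-in `crypto` module (the input string is just an illustration):

```js
// Hash a string with SHA-256 using Node.js's built-in crypto module
const { createHash } = require('node:crypto');

const digest = createHash('sha256').update('hello').digest('hex');
console.log(digest);
// 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
// Always 64 hex characters (256 bits); change a single input byte and the whole digest changes.
```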
harish_05
1,891,380
CSS Pure body animation
Check out this Pen I made!
0
2024-06-17T14:58:37
https://dev.to/tidycoder/css-pure-body-animation-1kme
codepen
Check out this Pen I made! {% codepen https://codepen.io/TidyCoder/pen/PovEVgb %}
tidycoder
1,891,379
Backstage for AI's Decision-Making Process
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ...
0
2024-06-17T14:58:24
https://dev.to/vyshnavik18/backstage-for-ais-decision-making-process-5hm6
devchallenge, cschallenge, computerscience, beginners
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*

## Explainer

Grad-CAM, a deep learning technique for explaining CNN decisions, uses the gradients flowing into the final convolutional layer to show where the model "looks" when making a decision. It has been validated on localization benchmarks such as PASCAL VOC, MS COCO, and ImageNet.

## Additional Context

- Visualisation aspect: It conveys the idea that Grad-CAM shows "where the model looks," which is a vivid way to describe its visualisation capability.
- Benchmark inclusion: Mentioning specific datasets (PASCAL VOC, MS COCO, and ImageNet) demonstrates Grad-CAM's wide applicability and validation.
- Accessibility: The explanation balances technical terms with more approachable language, making it understandable to a wider audience.
vyshnavik18
1,891,366
Using ffmpeg to convert videos to GIF and many other things
ffmpeg is an efficient and easy-to-use command-line program. It is used for conversion,...
0
2024-06-17T14:57:18
https://dev.to/fernandovaller/usando-ffmpeg-para-converter-videos-em-gif-e-muitas-outras-coisas-369d
ffmpeg is an efficient and easy-to-use command-line program. It is used for converting, recording, streaming, and playing audio and video files. ffmpeg is a must-have for developers, video editors, and multimedia enthusiasts, as it supports a wide variety of formats.

**How to install (wsl2/Linux)**

```bash
sudo apt install ffmpeg
```

**How to use (wsl2/Linux)**

Converting video files with FFmpeg is simple and straightforward. Here are some basic examples:

**Convert an `*.mp4` video to `*.gif`**

```bash
ffmpeg -i input.mp4 output.gif
```

Note that you can use several other output types with this simple command.

**Extract the audio from a video**

```bash
ffmpeg -i input.mp4 -q:a 0 -map a output.mp3
```

- `-q:a 0` sets the audio quality to the best possible.
- `-map a` selects only the audio track.

**Resize a video**

```bash
ffmpeg -i input.mp4 -vf scale=1280:720 output.mp4
```

- `-vf scale=1280:720` resizes the video to 1280x720 pixels.

**Speed up a video**

A 10-minute video sped up to 5 minutes:

```bash
ffmpeg -i input.mp4 -filter:v "setpts=PTS/2" output.mp4
```

- `-filter:v` indicates that we are applying a filter to the video. The `v` specifies that the filter applies to the video (and not to the audio).
- `setpts` is a filter that adjusts the timestamps of the video frames (PTS - Presentation Time Stamp).

To speed up a video 4 times (reduce the duration to 1/4 of the original length), you can use:

```bash
ffmpeg -i input.mp4 -filter:v "setpts=PTS/4" output.mp4
```

**Convert all the files in a folder to *.mp3**

You can efficiently convert multiple `*.m4a` files to `*.mp3`:

```bash
ls *.m4a | xargs -I {} ffmpeg -i {} -codec:a libmp3lame -q:a 2 {}.mp3
```

If you wish, in addition to converting to `*.mp3`, you can also apply audio normalization:

```bash
ls *.m4a | xargs -I {} ffmpeg -i {} -codec:a libmp3lame -q:a 2 -af loudnorm=I=-16:LRA=11:TP=-1.5:print_format=summary {}.mp3
```

FFmpeg is an essential tool for anyone who works with multimedia. It is simple to install on both Windows and Linux, and it offers a wide range of possibilities for converting and manipulating audio and video files. Explore FFmpeg's options and commands to get the most out of this powerful tool!

These are just some of the main commands I use day to day; note that there are many other features you can explore and apply to your own tasks.
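One more everyday example: assuming you only need a clip rather than the whole file, you can cut a segment without re-encoding (stream copy keeps it fast and lossless, though cuts snap to keyframes):

```bash
# Cut 30 seconds starting at the 1-minute mark, copying streams without re-encoding
ffmpeg -ss 00:01:00 -i input.mp4 -t 30 -c copy output.mp4
```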
fernandovaller
1,891,358
New ChatGPT-4o: A Game-Changer That Could Replace Data Analysts
In this video, I explore the new ChatGPT-4o and its impact on data analysis. Can data analysts...
0
2024-06-17T14:54:04
https://dev.to/proflead/new-chatgpt-4o-a-game-changer-that-could-replace-data-analysts-3gni
In this video, I explore the new ChatGPT-4o and its impact on data analysis. Can data analysts survive in a world where AI is taking over? Join me as I dive into the features of ChatGPT-4o and discuss its potential to revolutionize the field of data analysis. I also share my thoughts on the future of data analysts and what skills will be essential to stay relevant. Don't forget to like, comment, and subscribe for more insights!
proflead
1,891,375
Nullish coalescing vs Logical || by aryan
The difference between these two operators are, nullish(??) operators consider 0 and "",[],{} as a...
0
2024-06-17T14:53:29
https://dev.to/aryan015/nullish-coalescing-vs-logical-by-aryan-j9e
javascript, react
The difference between these two operators: the nullish coalescing operator (`??`) treats `0`, `""`, `[]`, and `{}` as valid values and only falls back when the left-hand side is `null` or `undefined`, while logical OR (`||`) falls back on any falsy value.

```js
const obj = {
  name: 'aryan khandelwal',
  age: 0
}

obj.name ?? 'anonymous' // 'aryan khandelwal'
obj.age || 26           // 26: || treats 0 as falsy and falls back
obj.age ?? 26           // 0: ?? keeps 0, since it is not null or undefined
```

The `??` nullish operator helps you avoid accidentally discarding valid falsy values. __[recommended]__
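A closely related operator worth knowing is logical nullish assignment (`??=`), which assigns only when the current value is `null` or `undefined`. A small sketch:

```js
const settings = { retries: 0 }

settings.retries ??= 3    // stays 0: a valid value, not null/undefined
settings.timeout ??= 5000 // assigned: timeout was undefined

console.log(settings) // { retries: 0, timeout: 5000 }
```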
aryan015
1,891,374
LeetCode Meditations: Pacific Atlantic Water Flow
The description for this problem states: There is an m x n rectangular island that borders both the...
26,418
2024-06-17T14:49:07
https://rivea0.github.io/blog/leetcode-meditations-pacific-atlantic-water-flow
computerscience, algorithms, typescript, javascript
The description for [this problem](https://leetcode.com/problems/pacific-atlantic-water-flow) states:

> There is an `m x n` rectangular island that borders both the **Pacific Ocean** and **Atlantic Ocean**. The **Pacific Ocean** touches the island's left and top edges, and the **Atlantic Ocean** touches the island's right and bottom edges.
>
> The island is partitioned into a grid of square cells. You are given an `m x n` integer matrix `heights` where `heights[r][c]` represents the **height above sea level** of the cell at coordinate `(r, c)`.
>
> The island receives a lot of rain, and the rain water can flow to neighboring cells directly north, south, east, and west if the neighboring cell's height is **less than or equal to** the current cell's height. Water can flow from any cell adjacent to an ocean into the ocean.
>
> Return _a **2D list** of grid coordinates_ `result` _where_ `result[i] = [ri, ci]` _denotes that rain water can flow from cell_ `(ri, ci)` _to **both** the Pacific and Atlantic oceans_.

For example:

<img src="https://assets.leetcode.com/uploads/2021/06/08/waterflow-grid.jpg" alt="Example image" />

```
Input: heights = [
  [1, 2, 2, 3, 5],
  [3, 2, 3, 4, 4],
  [2, 4, 5, 3, 1],
  [6, 7, 1, 4, 5],
  [5, 1, 1, 2, 4]
]

Output: [[0, 4], [1, 3], [1, 4], [2, 2], [3, 0], [3, 1], [4, 0]]

Explanation: The following cells can flow to the Pacific and Atlantic oceans, as shown below:

[0, 4]:
[0, 4] -> Pacific Ocean
[0, 4] -> Atlantic Ocean

[1, 3]:
[1, 3] -> [0, 3] -> Pacific Ocean
[1, 3] -> [1, 4] -> Atlantic Ocean

[1, 4]:
[1, 4] -> [1, 3] -> [0, 3] -> Pacific Ocean
[1, 4] -> Atlantic Ocean

[2, 2]:
[2, 2] -> [1, 2] -> [0, 2] -> Pacific Ocean
[2, 2] -> [2, 3] -> [2, 4] -> Atlantic Ocean

[3, 0]:
[3, 0] -> Pacific Ocean
[3, 0] -> [4, 0] -> Atlantic Ocean

[3, 1]:
[3, 1] -> [3, 0] -> Pacific Ocean
[3, 1] -> [4, 1] -> Atlantic Ocean

[4, 0]:
[4, 0] -> Pacific Ocean
[4, 0] -> Atlantic Ocean

Note that there are other possible paths for these cells to flow to the Pacific and Atlantic oceans.
```

---

Although the description can be a challenge in itself to understand at first glance, what we need to do is essentially simple (at least, in theory). We want a cell whose neighbors are less than or equal to it, all the way to the north, south, east, and west until we reach both "oceans."

First, we can initialize two sets to store the cells that can reach "Pacific" and "Atlantic":

```ts
const reachableToPacific: Set<string> = new Set();
const reachableToAtlantic: Set<string> = new Set();
```

| Note |
| :-- |
| We're initializing them as sets of strings, just as we did in the [Number of Islands solution](https://rivea0.github.io/blog/leetcode-meditations-number-of-islands). We're going to represent the row and column pair like `` `${row},${col}` ``. |

Instead of going cell by cell and checking if it can reach the oceans, we can start with the cells that are adjacent to the oceans, and see which cells can reach _us_.

Since we're getting the cells that are reachable to oceans in different sets, we can return those that are in both sets (because we need to get those that are reachable to both oceans).

So, **at the end**, what we'll do is this:

```ts
for (const cell of reachableToPacific.values()) {
  if (reachableToAtlantic.has(cell)) {
    const [r, c] = cell.split(',');
    result.push([+r, +c]);
  }
}
```

| Note |
| :-- |
| We're converting string values to numbers with the handy [unary plus operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Unary_plus). |

We can use a [breadth-first search](https://rivea0.github.io/blog/leetcode-meditations-chapter-11-graphs#bfs) to visit and mark the cells. For the top and bottom edges of the grid, we'll mark the cells that can reach the Pacific and Atlantic:

```ts
for (let col = 0; col < colsLength; col++) {
  bfs(0, col, reachableToPacific);
  bfs(rowsLength - 1, col, reachableToAtlantic);
}
```

We can do the same thing for the left and right edges of the grid as well:

```ts
for (let row = 0; row < rowsLength; row++) {
  bfs(row, 0, reachableToPacific);
  bfs(row, colsLength - 1, reachableToAtlantic);
}
```

Now, on to the implementation of `bfs`. The purpose of our `bfs` function is to mark the cells that can reach an "ocean." So, it'll take three parameters: `r` for row, `c` for column, and `reachableToOcean` for the set that stores the cells that are reachable.

As usual with BFS, we'll keep a queue that has arrays consisting of a row, a column, and the corresponding value in the grid:

```ts
let queue = [[r, c, heights[r][c]]];
```

As we go over the elements of `queue`, we'll mark a row-column pair as reachable <mark>as long as that pair is not out of bounds, we haven't already added it as reachable, _and_ the value it has is **greater than or equal** to the previous "height" we've looked at.</mark>

| Note |
| :-- |
| <mark>Since we're starting from the edge cells, we're interested in values _greater than or equal_.</mark> In other words, we're not interested in "heights" that are shorter. |

While `queue` is not empty, we'll first take the current row, current column, and previous height from the front of `queue`:

```ts
const [currentRow, currentCol, prevHeight] = queue.shift() as number[];
```

| Note |
| :-- |
| It's "previous height," because as we'll see below, the new values we push to `queue` will be updated row and column values (like `currentRow + rowToGo` and `currentCol + colToGo`) while the cell will be the "previous" one (`heights[currentRow][currentCol]`). |

If any of those conditions fails, we want to continue with the next element in the queue. Otherwise, we'll add it to our set:

```ts
if (
  isOutOfBounds(currentRow, currentCol) ||
  reachableToOcean.has(`${currentRow},${currentCol}`) ||
  heights[currentRow][currentCol] < prevHeight
) {
  continue;
}

reachableToOcean.add(`${currentRow},${currentCol}`);
```

Then, we'll add the neighbors to the queue, as well as `heights[currentRow][currentCol]`, which is going to be the "previous height" for the next element:

```ts
// up, down, left, right
const coords = [[-1, 0], [1, 0], [0, -1], [0, 1]];
for (const [rowToGo, colToGo] of coords) {
  queue.push([
    currentRow + rowToGo,
    currentCol + colToGo,
    heights[currentRow][currentCol]
  ]);
}
```

And, that's it for the `bfs` function:

```ts
function bfs(r: number, c: number, reachableToOcean: Set<string>) {
  let queue = [[r, c, heights[r][c]]];

  while (queue.length > 0) {
    const [currentRow, currentCol, prevHeight] = queue.shift() as number[];
    if (
      isOutOfBounds(currentRow, currentCol) ||
      reachableToOcean.has(`${currentRow},${currentCol}`) ||
      heights[currentRow][currentCol] < prevHeight
    ) {
      continue;
    }

    reachableToOcean.add(`${currentRow},${currentCol}`);

    // up, down, left, right
    const coords = [[-1, 0], [1, 0], [0, -1], [0, 1]];
    for (const [rowToGo, colToGo] of coords) {
      queue.push([
        currentRow + rowToGo,
        currentCol + colToGo,
        heights[currentRow][currentCol]
      ]);
    }
  }
}
```

Putting everything together, here is what our final solution looks like in TypeScript:

```ts
function pacificAtlantic(heights: number[][]): number[][] {
  let result = [];
  const rowsLength = heights.length;
  const colsLength = heights[0].length;
  const reachableToPacific: Set<string> = new Set();
  const reachableToAtlantic: Set<string> = new Set();

  function isOutOfBounds(r: number, c: number) {
    return r < 0 || c < 0 || r >= rowsLength || c >= colsLength;
  }

  function bfs(r: number, c: number, reachableToOcean: Set<string>) {
    let queue = [[r, c, heights[r][c]]];

    while (queue.length > 0) {
      const [currentRow, currentCol, prevHeight] = queue.shift() as number[];
      if (
        isOutOfBounds(currentRow, currentCol) ||
        reachableToOcean.has(`${currentRow},${currentCol}`) ||
        heights[currentRow][currentCol] < prevHeight
      ) {
        continue;
      }

      reachableToOcean.add(`${currentRow},${currentCol}`);

      // up, down, left, right
      const coords = [[-1, 0], [1, 0], [0, -1], [0, 1]];
      for (const [rowToGo, colToGo] of coords) {
        queue.push([
          currentRow + rowToGo,
          currentCol + colToGo,
          heights[currentRow][currentCol]
        ]);
      }
    }
  }

  for (let col = 0; col < colsLength; col++) {
    bfs(0, col, reachableToPacific);
    bfs(rowsLength - 1, col, reachableToAtlantic);
  }

  for (let row = 0; row < rowsLength; row++) {
    bfs(row, 0, reachableToPacific);
    bfs(row, colsLength - 1, reachableToAtlantic);
  }

  for (const cell of reachableToPacific.values()) {
    if (reachableToAtlantic.has(cell)) {
      const [r, c] = cell.split(',');
      result.push([+r, +c]);
    }
  }

  return result;
}
```

#### Time and space complexity

The time complexity is {% katex inline %} O(n * m) {% endkatex %} — where {% katex inline %} n {% endkatex %} is the number of rows, and {% katex inline %} m {% endkatex %} is the number of columns, as we're traversing the whole grid but making use of the [Set data structure](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set) to avoid visiting the same cell.

The space complexity is—I think—also {% katex inline %} O(n * m) {% endkatex %}, again, where {% katex inline %} n {% endkatex %} is the number of rows, and {% katex inline %} m {% endkatex %} is the number of columns. The size of our queue will grow proportionately to the size of the grid, and we also keep two sets whose sizes can grow proportionately to the grid we're given.

---

The next and final problem we're going to look at in this chapter is [Course Schedule](https://leetcode.com/problems/course-schedule). Until then, happy coding.
rivea0
1,891,373
My GitHub Account
https://github.com/arzooshamim13
0
2024-06-17T14:47:28
https://dev.to/arzooshamim13/my-github-account-eml
webdev
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sc7ue4aawirt97qrnskg.png)

https://github.com/arzooshamim13
arzooshamim13
1,891,372
I'm in
A post by Jagan mohan Gade
0
2024-06-17T14:45:58
https://dev.to/jagan_gade/im-in-1cbg
jagan_gade
1,891,369
Top Low-Cost Cybersecurity Measures
According to statistics, SMBs spend an average of 12% on their IT infrastructure, which includes its...
0
2024-06-17T14:41:55
https://dev.to/whotarusharora/top-low-cost-cybersecurity-measures-37el
webdev, cybersecurity, security, beginners
According to statistics, SMBs spend an average of 12% of their budget on IT infrastructure, which includes its configuration, updates, maintenance, and security. It's obvious that small and medium-scale firms have to spend more on marketing, as they need to carve out their space in the market. To help such organizations maintain their cyber hygiene, here are some affordable cybersecurity solutions. These strategies should fit within your existing investment in data security. So, let's get started and have a look.

## How To Find Budget-Friendly Cybersecurity Strategies

Finding inexpensive cybersecurity solutions is a challenge when every company is marketing only its expensive tools. However, with the following approaches, you can efficiently find a lot of budget-friendly solutions, tips, tricks, and more.

* Seek recommendations from security professionals' blogs, articles, and even podcasts.
* Ask the sales representatives of the companies to provide you with discounts.
* Analyze the available tools for their price and functionality. This will help you find the right one within your cost constraints.
* You can ask for suggestions on digital platforms, such as LinkedIn.

## What Are the Top Affordable Cybersecurity Solutions?

Following are the top five budget-friendly cybersecurity strategies and solutions that an SME or any other organization can opt for to save costs and secure data.

### #1: Train the Stakeholders from Day Zero

Training is considered one of the most effective and budget-friendly cybersecurity strategies to prevent data breaches and unauthorized access. Organizations should provide relevant training to all their primary and secondary stakeholders on how to maintain the integrity of their personal and professional data. The key aspects to focus on for training are:

* Understanding phishing attacks and their mitigation
* Software and hardware to utilize with company devices
* File scanning for malware before opening it
* Checking digital signatures on emails
* Preventing social engineering attacks

Additionally, the training must be conducted quarterly or on your own schedule so that the latest measures can be taken to keep data confidential. You can find thousands of training modules available for free, helping to save quite a chunk of investment.

### #2: Utilize Authentic Tools

Authentic tools are the ones that are not cracked or downloaded from third-party websites. When legitimate software tools are utilized, they help you prevent large data breaches that can cost millions of dollars and damage your reputation. Investing in legitimate tools can seem expensive at the beginning. But for long-term cost savings, it's the best thing a company can do. It provides numerous security advantages, such as:

* The code is secured using a code signing certificate, meaning that hackers cannot modify it.
* The software development firm provides updates to keep it aligned with required security standards.
* The legitimate tools are tested and patched, minimizing the possibility of getting breached.

### #3: Use an Advanced AAA Mechanism

Whether it's a small, medium, large, or Fortune 500 company, every one of them must implement the AAA mechanism. It stands for Authentication, Authorization, and Access control. Authentication helps to validate the user's identity. Authorization helps with providing relevant access according to the role and responsibility of a person.
Access control aids in defining who can and cannot access a certain portal, control tool, or even a physical location on the premises. All three security mechanisms work best together, and to strengthen them, multi-factor authentication should be configured. AAA is inexpensive to implement and can be effortlessly managed through Active Directory or a centralized server.

### #4: SSL Is a Must

Nowadays, every organization has its own website, which works as the primary source of communication and digital presence. From telling people about the company to enabling professionals to join the firm for an open position, the website serves all such purposes. So, you should make it highly secure, and one of the relevant and affordable ways is an SSL/TLS certificate.

SSL/TLS certificates must be configured for your website, as they provide dual benefits: the first is assured data security, and the second is improved search engine indexing. Furthermore, SSL certificates come in different versions according to needs, such as:

* Domain Validation SSL Certificate for static websites
* Organization Validation SSL Certificate for dynamic websites
* Extended Validation SSL Certificate for eCommerce, medical, and financial transaction-associated websites

SSL certificates are also available for multiple domains. Thus, you should thoroughly research SSL certificates before configuring them (a quick way to sanity-check a live certificate is shown at the end of this post).

### #5: Firewalls and Updates

Firewalls are available in both software and hardware versions. While hardware firewalls are expensive, software firewalls are budget-friendly and easy to configure. If your organization's budget is low, then you should consider configuring a software firewall in your network. To strengthen its capabilities, an endpoint security mechanism, such as anti-malware software, can also be installed. You can choose from the following software firewall solutions:

* Comodo firewall
* Netgate pfSense
* Azure firewall
* Sophos firewall
* FortiGate NGFW (Next Generation Firewall)

Additionally, you should update the firewall policies according to dynamic business needs and modifications in the infrastructure. It'll help you block potential attackers and secure data in every possible use case.

## High-End Cost-Effective Cybersecurity Tools

Data is the most valuable asset of an organization, and it's recommended that the most avant-garde tools be used to protect it. However, it's not compulsory to configure only expensive solutions. The low-cost yet advanced solutions below can help you maintain data integrity and confidentiality.

* Coalition
* Curricula
* Duo Security
* Snort
* Wireshark
* Windows Defender
* JupiterOne
* Vulcan Cyber

Before implementing any of the tools in your infrastructure, you should check their features, compliance with standards, and ability to fulfill your requirements.

## Wrapping Up

Maintaining good cyber hygiene is a priority for every organization. However, due to a lack of budget, SMBs are not able to utilize expensive tools and solutions. In such cases, the above-listed low-cost cybersecurity solutions come into play. By configuring an SSL certificate, using a software firewall, training stakeholders, and configuring AAA, data integrity can be maintained. By following all the listed cybersecurity approaches, a firm can efficiently prevent attackers in a budget-friendly manner.
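As promised above, here's a quick, zero-cost way to sanity-check a live SSL/TLS certificate. This is a hedged example using standard OpenSSL commands; `example.com` is a placeholder for your own domain:

```bash
# Print the certificate's validity dates and issuer for a given host
openssl s_client -connect example.com:443 -servername example.com </dev/null 2>/dev/null \
  | openssl x509 -noout -dates -issuer
```

Running this periodically (even from a simple cron job) helps you catch expiring certificates before they take your website down.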
whotarusharora
1,891,368
Livekriyathmakakaravagenima
A post by Buddika Chathurnga
0
2024-06-17T14:37:02
https://dev.to/buddika_chathurnga_79c567/livekriyathmakakaravagenima-3gnl
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/12iavrmjxea3u5hlg7ly.jpg)
buddika_chathurnga_79c567
1,891,203
Interval Score Matching: Enhancing Fidelity in Text-to-3D Models with LucidDreamer
Author: Harpreet Sahota (Hacker in Residence at Voxel51) A CVPR Paper Review and Cliff’s...
0
2024-06-17T14:33:22
https://medium.com/voxel51/interval-score-matching-enhancing-fidelity-in-text-to-3d-models-with-luciddreamer-f18c022dd4ac
machinelearning, datascience, computervision, ai
_Author: [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) (Hacker in Residence at [Voxel51](https://voxel51.com/))_

## A CVPR Paper Review and Cliff's Notes

Traditional 3D modelling is time-consuming and requires specialized skills, creating a barrier to widespread use in various industries. Recent advancements in text-to-3D generation have shown promise yet often fail to produce models with fine details and realism. Addressing these challenges, the latest research introduces novel methodologies to bridge this gap. [This paper introduces LucidDreamer, a new system that can create detailed and realistic 3D models from text descriptions](https://arxiv.org/abs/2311.11284).

Imagine you could type "a red sports car" into a system and, within minutes, receive a highly detailed 3D model that captures the intricate curves, reflective surfaces, and precise proportions of a real sports car.

## No time to read the blog?

You can hear me talk about the paper:

{% embed https://www.youtube.com/watch?v=NJQXgoARLCY %}

LucidDreamer, with its Interval Score Matching (ISM) technique, achieves this level of high-fidelity text-to-3D generation. By addressing the limitations of previous methods like Score Distillation Sampling (SDS), LucidDreamer produces 3D models with unparalleled detail and realism, making it a groundbreaking tool for applications ranging from virtual reality to digital content creation.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/el6gfx39p3jge67psbui.gif)

_<center>[LucidDreamer](https://github.com/EnVision-Research/LucidDreamer?tab=readme-ov-file)</center>_

## The Problem

Creating 3D models is usually a time-consuming task that requires expertise. Several advancements have recently allowed us to generate 3D models from text descriptions, for example:

### Magic3D

A text-to-3D content creation tool developed by NVIDIA that generates high-quality 3D mesh models from textual descriptions. It utilizes image conditioning techniques and a prompt-based editing approach to provide users with novel ways to control 3D synthesis.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6if6lkb7m0k6pbc8wyzy.gif)

_<center>[NVIDIA Magic3D](https://research.nvidia.com/labs/dir/magic3d/)</center>_

### Fantasia3D

A text-to-3D content creation method that disentangles geometry and appearance modelling, enabling the generation of photorealistic 3D assets from text prompts. It uses a hybrid scene representation and encodes surface normals extracted from the representation as input to an image diffusion model for geometry learning.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b4sjfqkpypyif1k7mjyp.gif)

_<center>[Fantasia3D](https://fantasia3d.github.io/)</center>_

### ProlificDreamer

A text-to-3D generation method that uses variational score distillation to generate high-fidelity and diverse 3D content from text prompts. It improves upon the existing score distillation sampling (SDS) method by modelling the 3D parameter as a random variable instead of a constant, addressing issues like over-saturation, over-smoothing, and low diversity in generated 3D models.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/r820gvyhv4kpb6w2f7eq.gif)

_<center>[ProlificDreamer](https://ml.cs.tsinghua.edu.cn/prolificdreamer/)</center>_

### Still, these methods often produce models that are not very detailed or realistic.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l35zqgzot2p4x8a5dobb.png)

One popular method for this is called Score Distillation Sampling (SDS), but it has some issues:

- The models it creates can look "smooth" and lack detail.
- The updates it makes to improve the 3D model are often inconsistent.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vuij3d462h7487fphsvf.png)

To solve these problems, the authors propose a new approach called **Interval Score Matching (ISM).**

**Let's break down how this works:**

1. **Score Distillation Sampling (SDS):** First, it's essential to understand that SDS uses a pre-trained model that can convert text to images. It tries to use this model to guide the creation of a 3D model. However, the way it updates the 3D model tends to average out details, making the final result look smooth and not very detailed.
2. **ISM Improvements:**
   - **DDIM Inversion:** This is a fancy way of saying that ISM uses a method to create a consistent path for updating the 3D model, reducing randomness and improving detail.
   - **Interval-Based Matching:** Instead of making big jumps in updating the 3D model, ISM makes smaller, more controlled updates. This helps maintain the details and avoid errors.

## Why It's Better

With these improvements, **LucidDreamer** can create 3D models that are much more detailed and realistic compared to older methods. It also does this more efficiently, requiring less time and computing power.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aos8s54gg04qih1ouhb5.png)

## Key Contributions

- **Detailed Analysis:** The authors examined why SDS wasn't working well and identified its fundamental problems.
- **New Method (ISM):** They introduced ISM, which significantly improves the quality of 3D models.
- **Advanced Techniques:** By combining ISM with **3D Gaussian Splatting**, they enhanced 3D model quality while reducing the training time.

## Results

The new method (LucidDreamer using ISM) was tested and shown to produce better and more detailed 3D models compared to other state-of-the-art methods like **Magic3D, Fantasia3D,** and **ProlificDreamer.** Plus, it does this with less training, making it more efficient.

## Real-World Applications

This technology can be used in various fields, including:

- **Animation and Gaming:** Creating detailed characters and environments.
- **Virtual and Augmented Reality:** Building realistic 3D assets for VR and AR experiences.
- **Retail and Online Shopping:** Generating 3D models of products based on descriptions.

## Final Thoughts

The paper introduces significant improvements in generating 3D models from text, making the process faster and producing better-quality results. This makes it easier for people without 3D modelling skills to create high-quality 3D content.

## Learn More

The authors mentioned they will make their code available online, meaning others can use and build upon it. This is great for the research community and developers interested in this technology.

- [GitHub](https://github.com/EnVision-Research/LucidDreamer)

**If you're going to be at CVPR this year, be sure to come and say "Hi!"**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qm0f7mmdyne1o533h4jc.png)
jguerrero-voxel51
1,891,211
Fixing CLIP’s Blind Spots: How New Research Tackles AI’s Visual Misinterpretations
Author: Harpreet Sahota (Hacker in Residence at Voxel51) Overview The paper “Eyes Wide...
0
2024-06-17T14:32:15
https://medium.com/voxel51/fixing-clips-blind-spots-how-new-research-tackles-ai-s-visual-misinterpretations-8b8ef4b4c250
computervision, datascience, machinelearning, ai
_Author: [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) (Hacker in Residence at [Voxel51](https://voxel51.com/))_ ## Overview The paper [“Eyes Wide Shut? Exploring the Visual Shortcomings of Multimodal LLMs”](https://arxiv.org/abs/2401.06209) investigates the visual question-answering (VQA) capabilities of advanced multimodal large language models (MLLMs), particularly focusing on GPT-4V. It highlights systematic shortcomings in these models’ visual understanding and proposes a benchmark for evaluating their performance. The authors introduce the Multimodal Visual Patterns (MMVP) benchmark and propose a Mixture of Features (MoF) approach to improve visual grounding in MLLMs. ## No time to read the blog? No worries! Here’s a video of me covering what’s in this blog! {% embed https://www.youtube.com/watch?v=Iy9xClK65Bs %} ## Existing Challenge Despite their impressive capabilities, multimodal AI models like GPT-4V often fail to correctly answer basic questions about images. These failures are mostly due to how these models interpret visual information. ## Why Current Methods Fail The current methods rely heavily on a system called CLIP. CLIP pairs images with text descriptions to create a joint understanding of both. However, CLIP has a significant flaw: it can create what’s known as “CLIP-blind pairs.” **CLIP-Blind Pairs** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1umw6a27sutee4982lm3.png) When the researchers identify CLIP-blind pairs, they specifically address the issue by proposing a new method called the Mixture of Features (MoF). Here’s a detailed breakdown of what they do and how it helps: - **Definition:** CLIP-blind pairs are sets of images that CLIP sees as very similar, even though they are quite different. - **Example:** Imagine two images, one of a cat and one of a dog. If CLIP considers these images similar because they both have furry animals, it might treat them as nearly identical, even though cats and dogs are very different. - **Impact:** This confusion leads to poor visual representations. When the multimodal model tries to answer questions about these images, it might confuse details or provide incorrect answers because it doesn’t truly understand the visual differences. These issues with CLIP-blind pairs propagate to more advanced models that use CLIP as their visual backbone. As a result, these models: - **Give Incorrect Answers:** They might misidentify objects or misunderstand their positions in the image. - **Hallucinate Explanations:** They sometimes make up explanations for their incorrect answers, which can be misleading. ## The Solution: Mixture of Features (MoF) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w219mziazjp79reupnw5.png) This method aims to improve the visual understanding of multimodal models by integrating better visual representations from another model called DINOv2. ## Proposed Solution The researchers introduced the Mixture of Features (MoF) approach to tackle these visual shortcomings. MoF aims to improve the visual grounding capabilities of these models by integrating better visual representations. ### How the Solution Works **Current Method (CLIP):** - CLIP tries to understand images by comparing them to text descriptions, but it struggles with CLIP-blind pairs, leading to ambiguous or incorrect visual representations. **Improvements with MoF:** - **Additive-MoF (A-MoF):** This method combines features from CLIP with another system called DINOv2. 
The model's overall visual grounding improves by adding features from DINOv2, which capture visual details better. However, this can sometimes reduce the model's ability to follow text instructions precisely.
- **Interleaved-MoF (I-MoF):** This method spatially mixes visual tokens from CLIP and DINOv2. This more integrated approach ensures that the model benefits from the detailed visual understanding of DINOv2 while maintaining its capability to follow instructions from text.

## Why It's Better

The MoF approach offers several benefits:

- **Improved Visual Understanding:** By incorporating features from DINOv2, the models become better at distinguishing details in images, reducing errors from CLIP-blind pairs.
- **Balanced Capabilities:** The Interleaved-MoF method ensures that the models understand images and follow text instructions.
- **Systematic Error Reduction:** This approach directly addresses the visual confusion caused by CLIP-blind pairs, leading to more accurate answers.

## Key Contributions

The main contributions of the paper include:

- **Detailed Analysis:** An in-depth study of the visual shortcomings in current multimodal models, particularly those based on CLIP.
- **New Testing Tool:** The Multimodal Visual Patterns (MMVP) benchmark has been introduced to better evaluate how well these models understand images.
- **Improved Method:** The development of the Mixture of Features (MoF) approach, which integrates different types of visual understanding to enhance model performance.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0roe3wduxj7now55pg0w.png)

## Results

The researchers tested their new method and found:

- All the models, including GPT-4V, struggled with simple visual questions.
- GPT-4V performed better than random guessing but still had significant room for improvement compared to humans.
- The MoF approach significantly improved visual grounding, reducing errors caused by CLIP-blind pairs.

## Real-World Applications

Better visual understanding in AI models can be useful in many fields:

- **Animation and Gaming:** It can help create more realistic characters and interactions.
- **Virtual and Augmented Reality:** It can make VR/AR environments more accurate and immersive.
- **Retail and Online Shopping:** It can improve product searches and recommendations.

## Final Thoughts

The improvements suggested in the paper are important because they strengthen AI models' understanding of images, which is crucial for many applications. This research helps make high-quality visual understanding more accessible and reliable.

Learn more about the paper by visiting:

- [Project Page](https://tsb0601.github.io/mmvp_blog/)
- [GitHub](https://github.com/tsb0601/MMVP)

**If you're going to be at CVPR this year, be sure to come and say "Hi!"**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wfg54lbx1s3o9g8soukk.png)
jguerrero-voxel51
1,891,365
Improving CSS Loading in React Applications: Avoiding `@import` in `createGlobalStyle`
When working with React and styled-components, you might encounter performance and compatibility...
0
2024-06-17T14:31:20
https://dev.to/mochafreddo/improving-css-loading-in-react-applications-avoiding-import-in-createglobalstyle-4d9p
react, styledcomponents, webperf, cssom
When working with React and styled-components, you might encounter performance and compatibility issues if you use `@import` within `createGlobalStyle`. This blog post will explain why this happens and how to resolve it by embedding the stylesheet link directly in your `index.html` file. ## The Problem with `@import` in `createGlobalStyle` Using `@import` in `createGlobalStyle` can lead to several issues: 1. **CSSOM API Limitations**: The CSS Object Model (CSSOM) API, which allows for programmatic manipulation of CSS, has limitations when handling `@import` rules in dynamically created stylesheets. This can cause unexpected behavior or failures in applying styles. 2. **Performance Issues**: `@import` can cause stylesheets to load sequentially, which slows down the page load time. In contrast, using `<link>` tags allows stylesheets to load in parallel, improving performance. 3. **Predictable Load Order**: `<link>` tags ensure that stylesheets are loaded in the order they appear in the HTML, making the application of styles more predictable and easier to manage. ## Solution: Using `<link>` in `index.html` To avoid these issues, you can move the font import to the `index.html` file. Here’s how you can do it: ### Step 1: Update `index.html` Add the following `<link>` tag inside the `<head>` section of your `public/index.html` file: ```html <!doctype html> <html lang="en"> <head> <meta charset="utf-8" /> <link rel="icon" href="%PUBLIC_URL%/favicon.ico" /> <meta name="viewport" content="width=device-width, initial-scale=1" /> <meta name="theme-color" content="#000000" /> <meta name="description" content="Web site created using create-react-app" /> <link rel="apple-touch-icon" href="%PUBLIC_URL%/logo192.png" /> <link href="https://fonts.googleapis.com/css2?family=Noto+Sans+KR:wght@100..900&family=Roboto:ital,wght@0,100;0,300;0,400;0,500;0,700;0,900&display=swap" rel="stylesheet" /> <link rel="manifest" href="%PUBLIC_URL%/manifest.json" /> <title>React App</title> </head> <body> <noscript>You need to enable JavaScript to run this app.</noscript> <div id="root"></div> </body> </html> ``` ### Step 2: Update `GlobalStyles.ts` Remove the `@import` statement from your `GlobalStyles.ts` file: ```typescript import { createGlobalStyle } from 'styled-components'; const GlobalStyle = createGlobalStyle` html, body, div, span, applet, object, iframe, h1, h2, h3, h4, h5, h6, p, blockquote, pre, a, abbr, acronym, address, big, cite, code, del, dfn, em, img, ins, kbd, q, s, samp, small, strike, strong, sub, sup, tt, var, b, u, i, center, dl, dt, dd, ol, ul, li, fieldset, form, label, legend, table, caption, tbody, tfoot, thead, tr, th, td, article, aside, canvas, details, embed, figure, figcaption, footer, header, hgroup, menu, nav, output, ruby, section, summary, time, mark, audio, video { margin: 0; padding: 0; border: 0; font-size: 100%; font: inherit; vertical-align: baseline; } article, aside, details, figcaption, figure, footer, header, hgroup, menu, nav, section { display: block; } body { line-height: 1; } ol, ul { list-style: none; } blockquote, q { quotes: none; } blockquote:before, blockquote:after, q:before, q:after { content: ''; content: none; } table { border-collapse: collapse; border-spacing: 0; } * { box-sizing: border-box; } body { font-family: "Noto Sans KR", "Roboto", sans-serif; background-color: ${(props) => props.theme.bgColor}; color: ${(props) => props.theme.textColor}; line-height: 1.2; } a { text-decoration: none; color:inherit; } `; export default GlobalStyle; ``` ## Why This Works 
### CSSOM API The CSSOM API allows for programmatic manipulation of CSS stylesheets. However, it has limitations when dealing with `@import` rules in dynamically created stylesheets. By using `<link>` tags, you avoid these limitations and ensure that styles are applied correctly. ### Performance Using `<link>` tags allows the browser to load stylesheets in parallel, which can significantly improve page load times. This is especially important for web applications that rely on multiple stylesheets. ### Predictable Load Order With `<link>` tags, stylesheets are loaded in the order they appear in the HTML. This makes it easier to manage and predict the application of styles, reducing the risk of style conflicts and ensuring a more consistent user experience. ## Conclusion By moving the font import to the `index.html` file and using `<link>` tags instead of `@import` in `createGlobalStyle`, you can avoid compatibility issues with the CSSOM API, improve performance, and ensure a more predictable load order for your stylesheets. This simple change can lead to a more robust and performant React application.
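As a closing tip, when loading Google Fonts through `<link>` tags as shown above, adding preconnect hints usually shaves a little more off the first render by opening the font connections early. For example, with the same font URL used in this post:

```html
<link rel="preconnect" href="https://fonts.googleapis.com" />
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin />
<link
  href="https://fonts.googleapis.com/css2?family=Noto+Sans+KR:wght@100..900&family=Roboto:ital,wght@0,100;0,300;0,400;0,500;0,700;0,900&display=swap"
  rel="stylesheet"
/>
```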
mochafreddo
1,891,319
Lukas Höllein on the Challenges and Opportunities of Text-to-3D with “ViewDiff”
Author: Harpreet Sahota (Hacker in Residence at Voxel51) A Q&A with an author of a CVPR 2024...
0
2024-06-17T14:30:54
https://medium.com/voxel51/lukas-h%C3%B6llein-on-the-challenges-and-opportunities-of-text-to-3d-with-viewdiff-40203fb59c93
datascience, machinelearning, computervision, ai
_Author: [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) (Hacker in Residence at [Voxel51](https://voxel51.com/))_

A Q&A with an author of a CVPR 2024 paper discussing the implications of his work for 3D Modeling

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dqjjkqet57r0vo5zrb11.png)

I got a chance to have a (virtual) sit-down Q&A session with [Lukas Höllein](https://www.linkedin.com/in/lukas-hoellein/) about his paper [ViewDiff: 3D-Consistent Image Generation with Text-to-Image Models](https://arxiv.org/html/2403.01807v1), one of the accepted papers for CVPR 2024. His paper introduces ViewDiff, a method that leverages pretrained text-to-image models to generate high-quality, multi-view consistent images of 3D objects in realistic surroundings by integrating 3D volume-rendering and cross-frame-attention layers into a U-Net architecture.

Lukas discusses the challenges of training 3D models, the innovative integration of 3D components into a U-Net architecture, and the potential for democratizing 3D content creation. Hope you enjoy it!

**Harpreet: Could you briefly overview your paper's central hypothesis and the problem it addresses? How does this problem impact the broader field of deep learning?**

_**Lukas:** Pretrained text-to-image models are powerful because they are trained on billions of text-image pairs._

_In contrast, 3D deep learning is largely bottlenecked by much smaller datasets. Training models on 3D datasets will reach a different quality and diversity than we have nowadays in 2D. This paper shows how to bridge this gap: we take a model trained on 2D data and only finetune it on 3D data._

_This allows us to keep around the expressiveness of the existing model but translate it into 3D._

**Harpreet: Your paper introduces a method that leverages pretrained text-to-image models as a prior, integrating 3D volume-rendering and cross-frame-attention layers into each block of the existing U-Net network. What are the key innovations of this technique, and how does it improve upon existing methods?**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8pgg7566nj45hptd1urp.png)

_**Lukas:** The key innovation shows how we can utilize the text-to-image model and still produce multi-view consistent images._

_Earlier 3D generative methods created some 3D representations and rendered images from them._

_Integrating a text-to-image model into this pipeline is problematic because it operates on different modalities (images vs. 3D)._

_In contrast, we keep around the 2D U-Net architecture and only add 3D components. By design, this allows the creation of consistent 3D images. Our output is *not* a 3D representation but multi-view consistent images (that can be turned into such a representation later)._

**Harpreet: One of the significant findings in your research is the ability to generate multi-view consistent images that are photorealistic and diverse. Can you explain the implications of this result for real-world applications in deep learning?**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ud8losvowqm0kdealfys.gif)

_**Lukas:** Eventually, we want to be able to create entire 3D scenes with the help of pretrained deep learning models._

_This would significantly reduce the time and skills required (e.g. instead of hiring expert artists in 3D modelling)._

_Basically, it democratizes 3D content creation._

_One example I like is sending GIFs to friends through messengers.
How cool would it be to create your own just from text input? Our paper is one step in that direction._ _By specifying a text prompt, people can use such methods to create 3D assets and their corresponding surroundings._ **Harpreet: What challenges did you face during your research, particularly in implementing or validating the integration of 3D volume-rendering and cross-frame-attention layers into the U-Net architecture? How did you overcome them?** _**Lukas:**_ _**Issue 1:** Make images consistent → It turns out that both 3D volume rendering and cross-frame attention are necessary. The first gives accurate control over poses._ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2o734b8dp5bzbzb2ty6f.png) _Without it, the generated images do not closely follow the input poses. The second ensures a consistent object identity._ _**Issue 2:** Keep around 2D prior → We want text prompt control, but we finetuned on a smaller 3D dataset._ _We used the DreamBooth paper’s trick of finetuning with a prior preservation dataset._ **Harpreet: For practitioners looking to apply your findings, what practical steps or considerations should they keep in mind? Are there specific scenarios where your method shines the most?** _**Lukas:** Our method needs a lot of memory to be trained, but it can run at inference time on smaller GPUs._ _Remember the desired output domain: a single category of objects or generalized across a dataset of multiple categories. This influences the training time._ _Limitations: flickering due to lighting differences → can reduce it with better data._ **Harpreet: The quality and diversity of training data are crucial for the effectiveness of diffusion models. Can you discuss your approach to data collection, cleaning, and curation to ensure the data is well-prepared and representative? How do you address challenges in ensuring fairness and minimizing bias in your datasets?** _**Lukas:**_ _**1. Data Collection and Cleaning:**_ _- Real-world Video Capture: We capture real-world videos of diverse objects and scenes. This provides a rich source of data that reflects the complexity of the real world._ _- Image Extraction and Filtering: We extract individual frames from the videos and employ a filtering process to ensure high quality and remove blurry or otherwise unusable frames. This step is essential for creating a clean and reliable dataset._ _**2. Data Curation for Specific Control Mechanisms:**_ - _**3D Pose Control:** We aim to enable control over the 3D pose of generated objects. To achieve this, we align videos of different objects into a shared world space. This allows us to consistently manipulate objects’ pose within the model’s training data._ - _**Text-based Control:** We want to enable users to control the generated output through text prompts. To facilitate this, we label images with a pre-trained image captioning model. This provides a textual representation of the image content, which can be used for text-based control. To further ensure diversity in the output, we generate multiple captions per image and sample them randomly during training._ _**3. Mitigating Bias:**_ - _**Pose Control Fairness:** A key challenge is ensuring fairness in our pose control mechanism. We aim to avoid biases where certain poses are overrepresented in the training data. To address this, we implement a sampling strategy that ensures every pose direction is sampled equally often. 
This helps to prevent the model from learning biased representations of object poses._ ## Final Thoughts This Q&A with Lukas Höllein, author of the CVPR 2024 paper “ViewDiff,” highlights the potential of leveraging pretrained text-to-image models for 3D generation. ViewDiff’s approach, integrating 3D components into a U-Net architecture, addresses the challenges of training 3D models and demonstrates the feasibility of generating multi-view consistent images from text prompts. The method’s ability to generate realistic 3D scenes and assets has significant implications for democratizing 3D content creation. ViewDiff represents a significant advancement in the field, paving the way for further research and development in text-to-3D generation. You can learn more about ViewDiff here: - [Paper](https://arxiv.org/pdf/2403.01807v1) - [GitHub](https://github.com/facebookresearch/ViewDiff) - [Project Page](https://lukashoel.github.io/ViewDiff/) **If you’ll be at CVPR this year, come and say “Hi!”** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/89jh4l025hr5jib52kw4.png)
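As a small appendix: to make the cross-frame-attention idea Lukas describes a bit more concrete, here is a minimal PyTorch-style sketch of attention that spans all views at once. This is my own illustration of the general mechanism, not the authors' code; the module name, tensor layout, and hyperparameters are all assumptions.

```python
# Illustrative sketch only: cross-frame attention over N views.
import torch
import torch.nn as nn

class CrossFrameAttention(nn.Module):
    """Lets every spatial token attend to tokens from all views at once,
    which is what encourages a consistent object identity across frames."""
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, views, channels, height, width)
        b, v, c, h, w = x.shape
        tokens = x.permute(0, 1, 3, 4, 2).reshape(b, v * h * w, c)
        tokens = self.norm(tokens)
        out, _ = self.attn(tokens, tokens, tokens)  # joint attention across views
        out = out.reshape(b, v, h, w, c).permute(0, 1, 4, 2, 3)
        return x + out  # residual, so the pretrained 2D weights stay useful

x = torch.randn(2, 4, 64, 32, 32)  # 2 scenes, 4 views each
print(CrossFrameAttention(64)(x).shape)  # torch.Size([2, 4, 64, 32, 32])
```

The residual connection mirrors the common practice of adding new layers to a pretrained U-Net so that finetuning starts close to the behavior of the original 2D model.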
jguerrero-voxel51
1,891,331
Patch-wise Attention Enhances Fine-Grained Visual Recognition
Author: Harpreet Sahota (Hacker in Residence at Voxel51) A CVPR Paper Review and Cliff’s Notes You...
0
2024-06-17T14:30:15
https://medium.com/voxel51/patch-wise-attention-enhances-fine-grained-visual-recognition-6f87550b590e
computervision, datascience, machinelearning, ai
_Author: [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) (Hacker in Residence at [Voxel51](https://voxel51.com/))_ **A CVPR Paper Review and Cliff’s Notes** You don’t usually think of two things in the same sentence: creepy crawlies and cutting-edge AI. However, this combination will improve agriculture because if we can accurately identify insect species, we can protect our crops and ensure food security. The paper [“Insect-Foundation: A Foundation Model and Large-scale 1M Dataset for Visual Insect Understanding”](https://arxiv.org/abs/2311.15206) buzzes into the world of precision agriculture, tackling the need for accurate insect detection and classification. It hatches a novel dataset, “Insect-1M,” swarming with 1 million images of insects, each meticulously labelled with detailed taxonomic info. ## The Problem ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sg3g3wh29rq6ejjiovgg.png) In precision agriculture, accurately identifying and classifying insects is crucial for maintaining crop health and ensuring high-quality yields. Existing methods face several challenges: - Current insect datasets are significantly smaller and less diverse than needed. For instance, many datasets contain only tens of thousands of images and cover a limited number of species. Given the estimated 5.5 million insect species, this is inadequate, leading to poor generalization and coverage for practical applications. - Existing datasets often fail to provide the fine-grained details to distinguish similar insect species. Many datasets lack multiple images per species, diverse angles, or high-resolution images that capture subtle, distinguishing features. This makes it difficult for models to differentiate between species with minor but crucial variations. - Many datasets do not include comprehensive taxonomic hierarchy or detailed descriptions. They often provide basic labels without deeper taxonomic context, such as genus or family levels. This limits the models’ ability to learn effectively, as they miss out on the rich relational information within the insect taxonomy. ## The Solution ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4lzyvum70zvqzvnobxj.png) The authors propose two main contributions: the “Insect-1M” dataset and a new Insect Foundation Model. ### Insect-1M Dataset ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwljt7eyybbork56321j.png) - Contains 1 million images spanning 34,212 species, significantly larger than previous datasets. - Includes six hierarchical taxonomic levels (Subphylum, Class, Order, Family, Genus, Species) and auxiliary levels like Subclass, Suborder, and Subfamily. - Provides detailed descriptions for each insect, enhancing the model’s understanding and training. ### Insect Foundation Model ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o7dpxfygrmsfwo952vhr.png) The Insect Foundation Model is designed to overcome fine-grained insect classification and detection challenges. Here’s a detailed overview of its components: **Image Patching** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/12eezpkrpl78k7vw12di.png) - **Patch Extraction:** Input images are divided into smaller patches, allowing the model to focus on localized regions of the image. - **Patch Pool Creation:** These patches form a pool the model uses for further processing. 
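As a rough illustration of the patching step just described, here is one common way to split an image tensor into non-overlapping patches in PyTorch. The patch size and shapes are assumptions for illustration, not the paper's exact configuration.

```python
# Illustrative sketch: splitting images into a pool of non-overlapping patches.
import torch
import torch.nn.functional as F

def extract_patches(images: torch.Tensor, patch: int = 16) -> torch.Tensor:
    """(batch, C, H, W) -> (batch, num_patches, C * patch * patch)."""
    cols = F.unfold(images, kernel_size=patch, stride=patch)
    return cols.transpose(1, 2)  # one row of flattened pixels per patch

images = torch.randn(2, 3, 224, 224)
print(extract_patches(images).shape)  # torch.Size([2, 196, 768])
```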
**Patch-wise Relevant Attention** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1jg0aqyp0zi9ybav2idh.png) - **Relevance Scoring:** Each patch is assigned a relevance score based on its importance for classification. This is done by comparing patches to masked images, highlighting subtle differences. - **Attention Weights:** Patches with higher relevance scores are given more attention, guiding the model to focus on the most informative parts of the image. **Attention Pooling Module** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wuw3o30g1p0v04rflog1.png) - **Aggregation of Information:** The attention pooling module aggregates information from the patches, using the attention weights to prioritize the most relevant features. - **Feature Extraction:** This process helps extract detailed and accurate features to distinguish similar insect species. **Description Consistency Loss** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qc4xnwy0yhc6845cumdk.png) The model incorporates a description consistency loss, which aligns the visual features extracted from the patches with the textual descriptions of the insects. Text decoders contribute to the description consistency loss, which ensures that the visual and textual features are consistent and complementary. By minimizing this loss, the model enhances its understanding and classification accuracy. **Text Decoders** **1. Feature Extraction:** The text decoders extract semantic features from the textual descriptions. These features encapsulate the essential information conveyed in the descriptions. **2. Alignment with Visual Features:** The extracted textual features are aligned with the visual features obtained from the image patches. This alignment is facilitated through attention mechanisms, ensuring that the model learns to associate specific visual patterns with corresponding textual descriptions. **Multimodal Text Decoders** Multimodal text decoders extend standard text decoders’ capabilities by simultaneously processing visual and textual inputs. They are designed to handle the complexities of integrating information from multiple modalities. **Role in the Framework** 1. Multimodal text decoders create joint representations that combine visual and textual features. This holistic representation captures the intricate relationships between the two modalities. 2. These decoders utilize the attention mechanism to focus on the most relevant parts of the image and the text. This ensures that the model pays equal attention to critical visual details and essential textual information. 3. By integrating visual and textual data, multimodal text decoders enhance the model’s contextual understanding, allowing it to make more informed decisions during classification and detection tasks. **Model Training** - Self-Supervised Learning: The framework employs self-supervised learning techniques, where the model learns from the data without requiring manual annotations for every feature. - Fine-Tuning: The model is fine-tuned using labelled data to improve its accuracy and performance. ## Results ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ljo97fa17f5tiagext3.png) The new method was evaluated against standard benchmarks for insect-related tasks: - The proposed model achieved state-of-the-art performance, outperforming existing methods. - The model significantly improved in capturing fine details and accuracy. 
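Before wrapping up: for intuition about the description consistency loss described earlier, here is a hedged sketch of one standard way to align visual and textual embeddings, namely normalizing both and penalizing their cosine distance. The function name and embedding shapes are my own assumptions, not the paper's implementation.

```python
# Simplified sketch of an image-text consistency objective (not the paper's code).
import torch
import torch.nn.functional as F

def description_consistency_loss(image_emb: torch.Tensor,
                                 text_emb: torch.Tensor) -> torch.Tensor:
    """Pulls each pooled image feature toward the embedding of its
    insect description by minimizing cosine distance."""
    img = F.normalize(image_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    return (1.0 - (img * txt).sum(dim=-1)).mean()

image_emb = torch.randn(8, 256)  # e.g., attention-pooled patch features
text_emb = torch.randn(8, 256)   # encoded taxonomic descriptions
print(description_consistency_loss(image_emb, text_emb))
```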
## Final Thoughts This paper introduces the **Insect-1M** dataset and a novel **Insect Foundation Model**. The Insect-1M dataset, with 1 million images across 34,212 species, includes detailed hierarchical taxonomic labels and descriptions, addressing the limitations of existing datasets in size and diversity. The **Insect Foundation Model** utilizes **Patch-wise Relevant Attention** to focus on critical image regions and **Description Consistency Loss** to align visual features with textual descriptions. These techniques significantly improve fine-grained insect classification and detection. Overall, the contributions of the **Insect-1M** dataset and the **Insect Foundation Model** advance the state-of-the-art in visual recognition, enhancing accuracy and detail capture. You can learn more here: - [Paper](https://arxiv.org/abs/2311.15206) - [Project page](https://uark-cviu.github.io/projects/insect_foundation.html) **If you’re going to be at CVPR this year, be sure to come and say “Hi!”** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h8msgusdjp6ye6exyr3j.png)
jguerrero-voxel51
1,891,338
SelfEQ Enhances Visual Grounding with Self-Consistency
Author: Harpreet Sahota (Hacker in Residence at Voxel51) A CVPR Paper Review and Cliff’s...
0
2024-06-17T14:29:18
https://medium.com/voxel51/selfeq-enhances-visual-grounding-with-self-consistency-cdaba01e236c
computervision, machinelearning, ai, datascience
_Author: [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) (Hacker in Residence at [Voxel51](https://voxel51.com/))_ ## A CVPR Paper Review and Cliff’s Notes Precise visual grounding remains a challenging yet essential task, particularly when models encounter varied textual descriptions. The paper [“Improved Visual Grounding through Self-Consistent Explanations”](https://arxiv.org/abs/2312.04554) tackles this head-on by introducing a method that leverages paraphrases to enhance model consistency and localization accuracy without relying on extensive annotations. This approach promises significant improvements in aligning visual and textual data, making it a critical advancement for engineers focused on refining AI’s interpretative capabilities. The main contribution is introducing a weakly-supervised strategy called Self-Consistency Equivalence Tuning (SelfEQ), which leverages paraphrases to improve the model’s ability to consistently localize objects in images. ## The Problem ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/933ihgrohjyyubtxb2x4.png) ### Existing Challenge Vision-and-language models trained to match images with text often struggle with the precise localization of objects, especially when the textual descriptions vary slightly (e.g., “frisbee” vs. “disc”). The challenge is to improve these models’ grounding abilities without relying on extensive object location annotations. ### Current Methods and Their Insufficiencies ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n9i6iqt1h9j4jubmtjj9.png) Current methods often require additional finetuning with bounding box or segment annotations or depend on pretrained object detectors. These approaches are limited by their need for detailed annotations and can lack consistency when handling varied textual descriptions. #### Specific Issues - **Lack of Detail:** Existing models may not handle diverse vocabulary well, leading to inconsistent localization. - **Inconsistency:** Models may fail to provide consistent visual explanations for paraphrased textual inputs referring to the same object. ## The Solution ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lletrfmzl0lwja4icieh.png) The paper proposes SelfEQ, which encourages self-consistent visual explanations for paraphrased text inputs. This method involves generating paraphrases using a large language model and finetuning the vision-and-language model to ensure that the original and paraphrased texts map to the same image regions. **How It Works** - **Start with an Existing Method:** The ALBEF model, which aligns images and text using image-text pairs without object location annotations, serves as the foundation. **Improvements by SelfEQ** 1. **Paraphrase Generation:** A large language model (e.g., Vicuna-13B) is used to create paraphrases for text descriptions. 2. **Self-Consistency Tuning:** Finetunes the model using GradCAM to ensure consistent visual attention maps for original and paraphrased texts. **Why It’s Better** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/teqylmu56zcqphp7affv.png) **Benefits of the New Approach** - **Expanded Vocabulary:** The model can handle a broader range of textual descriptions. - **Improved Localization:** SelfEQ enhances the precision and consistency of object localization without requiring bounding box annotations. 
- **Efficiency:** The approach leverages weak supervision, reducing the need for detailed annotations and making the finetuning process more efficient. ## Key Contributions - **Novel Objective (SelfEQ):** Introduces a self-consistency equivalence tuning objective to improve visual grounding. - **Paraphrase Utilization:** Employs large language models to generate high-quality paraphrases, enhancing the model’s vocabulary handling. - **Performance Improvements:** Achieves significant improvements in standard benchmarks (Flickr30k, ReferIt, RefCOCO+). ## Results ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8a3iu8ejfrldzqb6bd0n.png) **Testing and Performance** The new method was tested on several benchmarks (Flickr30k, ReferIt, RefCOCO+), showing substantial improvements: - **Flickr30k:** 84.07% (4.69% absolute improvement) - **ReferIt:** 67.40% (7.68% absolute improvement) - **RefCOCO+:** 75.10% (test A), 55.49% (test B) (3.74% average improvement) **Comparison with State-of-the-Art** SelfEQ outperforms several prior methods, especially those that do not use box annotations, demonstrating better localization performance and vocabulary handling. ## Final Thoughts The improvements presented in this paper enhance the robustness and applicability of vision-and-language models in visual grounding tasks. By focusing on self-consistent explanations and leveraging weak supervision, the authors provide a pathway for models to handle a wider range of textual inputs more effectively. This work is essential for advancing research in visual grounding and making models more adaptable to real-world scenarios. **Learn more here:** - [Paper](https://arxiv.org/abs/2312.04554) - [Project page](https://catherine-r-he.github.io/SelfEQ/) - [GitHub](https://github.com/uvavision/SelfEQ?tab=readme-ov-file) **If you’ll be at CVPR this year, be sure to come and say “Hi!”** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6gokug9awfv5760hdpeh.png)
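As a small appendix, here is a simplified sketch of a self-consistency objective between the attention maps produced for a phrase and its paraphrase. In the paper the maps come from GradCAM and the objective is more involved; treat this cosine-distance version as my own minimal reading of the idea, with made-up names and shapes.

```python
# Hedged sketch: encourage similar attention maps for a phrase and its paraphrase.
import torch
import torch.nn.functional as F

def self_consistency_loss(cam_original: torch.Tensor,
                          cam_paraphrase: torch.Tensor) -> torch.Tensor:
    """Penalizes disagreement between the two GradCAM-style maps."""
    a = F.normalize(cam_original.flatten(1), dim=1)
    b = F.normalize(cam_paraphrase.flatten(1), dim=1)
    return (1.0 - (a * b).sum(dim=1)).mean()  # mean cosine distance

cam_frisbee = torch.rand(4, 1, 14, 14)  # maps for "frisbee"
cam_disc = torch.rand(4, 1, 14, 14)     # maps for "disc"
print(self_consistency_loss(cam_frisbee, cam_disc))
```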
jguerrero-voxel51
1,891,346
Secure Your Digital Frontier with CQR Cybersecurity LLC
Choosing CQR Cybersecurity means opting for a future where your digital assets are guarded by the...
0
2024-06-17T14:01:30
https://dev.to/cqr/secure-your-digital-frontier-with-cqr-cybersecurity-llc-5bbn
cybersecurity, web3, it, security
Choosing CQR Cybersecurity means opting for a future where your digital assets are guarded by the best in the business. Let’s discuss how we can tailor our protection strategies to suit your unique needs. Your security journey starts with CQR Cybersecurity LLC — where reliability meets professionalism. CQR Cybersecurity LLC 🔒 [CQR Cybersecurity](https://cqr.company/) is your reliable partner in the field of cybersecurity, providing comprehensive services for over 15 years! We offer a full range of services: 🔎 [Professional Penetration Testing](https://cqr.company/service/penetration-testing/) 🛡 [Red Teaming for Comprehensive Protection](https://cqr.company/service/red-teaming/) 🖥 [SOC is your security operations center](https://cqr.company/service/soc-service/) ☁️ [Security audit of cloud services and Active Directory](https://cqr.company/service/active-directory-audit-2/) 👨💼 vCISO — Your virtual information security specialist for quality consulting and assistance in developing security strategies 👁 CRYEYE — Our cybersecurity automation product: Find out more about CRYEYE, our solution that includes about 25 security controls for comprehensive protection of your network from A to Z, at [cryeye.net](https://cryeye.net/). All Required Certifications: Our team holds all the key certifications required by the industry, including OSCP, OSWE, CEH, EWPTXv2, CISCCP, SSCP, and others, confirming our high level of expertise and qualifications. Why choose us? 🚀 Highly professional and high-quality performance of each service. 🔗 Wide range of services for comprehensive protection of your business. 💼 Expert team with many years of experience in the field of cybersecurity. 📈 Increasing the level of information security and reducing risks. 🔒 We are ready for personal meetings in the office and at locations in San Francisco, the Bay Area, San Ramon, Sacramento, and now Chicago to discuss how we can protect your business. 📞 Contact us: ☎️ 415–606–9587 📱 WhatsApp: [+1 415–606–9587](https://api.whatsapp.com/send?phone=14156069587&text=Request%20for%20Communication%20from%20CQR%20company) 📨 salesteam@cqr.company 📱 Telegram: [@CQR_Cybersecurity](https://t.me/cqrcompany) By choosing [CQR Cybersecurity](https://cqr.company/), you choose reliability and professionalism!
cqr
1,891,345
3D Scene Understanding: Open3DSG’s Open-Vocabulary Approach to Point Clouds
Author: Harpreet Sahota (Hacker in Residence at Voxel51) A CVPR Paper Review and Cliff’s...
0
2024-06-17T14:26:02
https://medium.com/voxel51/3d-scene-understanding-open3dsgs-open-vocabulary-approach-to-point-clouds-69d443d29cb2
machinelearning, datascience, ai, computervision
_Author: [Harpreet Sahota](https://www.linkedin.com/in/harpreetsahota204/) (Hacker in Residence at [Voxel51](https://voxel51.com/))_ ## A CVPR Paper Review and Cliff’s Notes Understanding 3D environments is a critical challenge in computer vision, particularly for robotics and indoor applications. The paper, [Open3DSG: Open-Vocabulary 3D Scene Graphs from Point Clouds with Queryable Objects and Open-Set Relationships](https://arxiv.org/abs/2402.12259), introduces a novel approach for predicting 3D scene graphs from point clouds in an open-world setting. The paper’s main contribution is a method that leverages features from powerful 2D vision language models (VLMs) and large language models (LLMs) to predict 3D scene graphs in a zero-shot manner. This allows for querying object classes from an open vocabulary and predicting inter-object relationships beyond a predefined label set. This research moves beyond traditional, predefined class limitations by leveraging vision-language models to identify and describe arbitrary objects and their relationships, setting a new standard for machine perception and interaction in complex environments. ## The Problem Current 3D scene graph prediction methods depend heavily on labeled datasets, restricting them to a fixed set of object classes and relationship categories. This limitation reduces their effectiveness in real-world applications where a broader and more flexible vocabulary is necessary. **Insufficiencies of Current Methods** - **Fixed Label Set:** Traditional methods are confined to a narrow scope of training data, hindering their ability to generalize to unseen object classes and relationships. - **Lack of Compositional Understanding:** Existing 2D VLMs struggle with modeling complex relationships between objects, which is crucial for accurate 3D scene graph predictions. - **Inflexibility:** Supervised training with fixed labels cannot adapt to new or rare object classes and relationships, limiting the practical utility of the models. ## The Solution ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qh7abqn20vmyomsjyfij.png) The paper proposes Open3DSG, an approach to learning 3D scene graph prediction without relying on labelled scene graph data. The method co-embeds the features from a 3D scene graph prediction backbone with the feature space of open-world 2D VLMs. ### How the Solution Works ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uvsklnb9a3oylslzd7wn.png) **1. Initial Graph Construction:** The method begins by constructing an initial graph representation from a 3D point cloud using class-agnostic instance segmentation. **2. Feature Extraction and Alignment:** Features are extracted from the 3D scene using a Graph Neural Network (GNN) and aligned with 2D vision-language features. **3. Object Class Prediction:** At inference time, object classes are predicted by computing the cosine similarity between the distilled 3D features and open-vocabulary queries encoded by CLIP. **4. Relationship Prediction:** Inter-object relationships are predicted using a feature vector and the inferred object classes, providing context to a large language model. ### Improvements Introduced ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gbiwckgz28pt7oipy8a8.png) - **Open-Vocabulary Predictions:** The method can predict arbitrary object classes and relationships, not limited to a predefined set. - **Zero-Shot Learning:** This approach allows for zero-shot predictions. 
It can generalize to new objects and relationships without additional training data. - **Compositional Understanding:** The method enhances the ability to model complex relationships between objects by combining VLMs with LLMs. ### Why It’s Better - **Detail and Realism:** The method provides fine-grained semantic descriptions of objects and relationships, capturing the complexity of real-world scenes. - **Efficiency:** By aligning 3D features with 2D VLMs, the method achieves effective scene graph predictions without requiring extensive labeled datasets. - **Computational Power:** The approach leverages powerful existing models (like CLIP and large language models), enhancing its ability to generalize and perform complex reasoning tasks. ## Key Contributions ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/26y0twn2jkvuzzb0h63q.png) **1. First Open-Vocabulary 3D Scene Graph Prediction:** This paper presents the first method for predicting 3D scene graphs with an open vocabulary for objects and relationships. **2. Integration of VLMs and LLMs:** This approach combines the strengths of vision-language models and large language models to improve compositional understanding. **3. Interactive Graph Representation:** The method allows for querying objects and relationships in a scene during inference time. ## Results ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6phk7rgvw8odggkg3lyf.png) - **Experimental Validation:** The method was tested on the closed-set benchmark 3DSSG, showing promising results in modelling compositional concepts. - **Comparison with State-of-the-Art Methods:** Open3DSG demonstrated the ability to handle arbitrary object classes and complex inter-object relationships more effectively than existing methods. ## Final Thoughts As a forward-thinking system, Open3DSG’s benefits are twofold: 1. It enhances the expressiveness and adaptability of 3D scene graphs, paving the way for a more intuitive machine understanding of complex environments. 2. With applications ranging from robotics to indoor scene analysis, the potential is vast. The improvements introduced by Open3DSG are significant as they enable a more flexible and detailed understanding of 3D scenes. This can be particularly important for computer vision and robotics applications, where understanding complex scenes is crucial. **Will you be at CVPR 2024? Come by the Voxel51 booth and say “Hi!”** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tjmvsuuo0li4f8cllxb5.png)
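As a small appendix: here is a minimal sketch of the open-vocabulary querying step described above, classification by cosine similarity. The shapes and names are made up for the example, and in practice the text features would come from CLIP's text encoder rather than random tensors.

```python
# Hedged sketch: open-vocabulary object labeling via cosine similarity.
import torch
import torch.nn.functional as F

def classify_objects(object_feats: torch.Tensor, text_feats: torch.Tensor,
                     labels: list) -> list:
    """Assigns each distilled 3D object feature the label of its most
    similar text embedding."""
    obj = F.normalize(object_feats, dim=-1)
    txt = F.normalize(text_feats, dim=-1)
    sims = obj @ txt.T  # (num_objects, num_queries)
    return [labels[int(i)] for i in sims.argmax(dim=-1)]

labels = ["a chair", "a sofa", "a potted plant"]
object_feats = torch.randn(5, 512)  # stand-in for distilled 3D object features
text_feats = torch.randn(3, 512)    # stand-in for CLIP text embeddings
print(classify_objects(object_feats, text_feats, labels))
```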
jguerrero-voxel51
1,891,349
5 Papers on My CVPR 2024 Must-See List!
Author: Jacob Marks (Machine Learning Engineer at Voxel51) I’m excited to attend CVPR 2024! There is...
0
2024-06-17T14:25:30
https://voxel51.com/blog/5-papers-on-my-cvpr-2024-must-see-list/
computervision, machinelearning, datascience, ai
_Author: [Jacob Marks](https://www.linkedin.com/in/jacob-marks/) (Machine Learning Engineer at [Voxel51](https://voxel51.com/))_ I’m excited to attend CVPR 2024! There is A LOT of awesome research again this year! Gearing up for the event, I made a short list of papers I find interesting and would like to explore more, especially as it relates to my work on open source FiftyOne. 📄 Here’s a summary of my LinkedIn posts from this week — a paper per day — in reverse order. 🙃 Also, visit the Voxel51 booth #1519 at CVPR and chat with me and the rest of the team about visual AI, data-centric ML, or whatever excites you! 👋 ## 🔥 CVPR 2024 Paper Spotlight: CoDeF 🔥 Recent progress in video editing/translation has been driven by techniques like Tune-A-Video and FateZero, which utilize text-to-image generative models. Because a generative model (with inherent randomness) is applied to each frame in input videos, these methods are susceptible to breaks in temporal consistency. Content Deformation Fields (CoDeF) overcome this challenge by representing any video with a flattened canonical image, which captures the textures in the video, and a deformation field, which describes how each frame in the video is deformed relative to the canonical image. This allows for image algorithms like image translation to be “lifted” to the video domain, applying the algorithm to the canonical image and propagating the effect to each frame using the deformation field. Through lifting image translation algorithms, CoDeF achieves unprecedented cross-frame consistency in video-to-video translation. CoDeF can also be applied for point-based tracking (even with non-rigid entities like water), segmentation-based tracking, and video super-resolution! - Arxiv: https://arxiv.org/abs/2308.07926 - Project page: https://qiuyu96.github.io/CoDeF/ - GitHub: https://github.com/qiuyu96/CoDeF - My post on LinkedIn: https://www.linkedin.com/posts/jacob-marks_cvpr2024-computervision-ml-activity-7207366220457598977-wKBh/ ## 🔥 CVPR 2024 Paper Spotlight: Depth Anything 🔥 How do you estimate depth using just a single image? Technically, calculating 3D characteristics of objects like depth requires comparing images from multiple perspectives — humans, for instance, perceive depth by merging images from two eyes. Computer vision applications, however, are often constrained to a single camera. In these scenarios, deep learning models are used to estimate depth from one vantage point. Convolutional neural networks (CNNs) and, more recently, transformers and diffusion models employed for this task typically need to be trained on highly specific data. Depth Anything revolutionizes relative and absolute depth estimation. Like Meta AI’s Segment Anything, Depth Anything is trained on an enormous quantity and diversity of data — 62 million images, giving the model unparalleled generality and robustness for zero-shot depth estimation, as well as state-of-the-art fine-tuned performance on datasets like NYUv2 and KITTI. (the video shows raw footage, MiDaS — previous best, and Depth Anything) The model uses a Dense Prediction Transformer (DPT) architecture and is already integrated into [Hugging Face](https://www.linkedin.com/company/huggingface/)‘s Transformers library and FiftyOne! 
- Arxiv: https://arxiv.org/abs/2401.10891 - Project page: https://depth-anything.github.io/ - GitHub: https://github.com/LiheYoung/Depth-Anything - Depth Anything Transformers Docs: https://huggingface.co/docs/transformers/model_doc/depth_anything - Monocular Depth Estimation Tutorial: https://medium.com/towards-data-science/how-to-estimate-depth-from-a-single-image-7f421d86b22d - Depth Anything FiftyOne Integration: https://docs.voxel51.com/tutorials/monocular_depth_estimation.html#Hugging-Face-Transformers-Integration - My post on LinkedIn: https://www.linkedin.com/posts/jacob-marks_cvpr2024-computervision-ml-activity-7207003799486357504-o6e1/ ## 🔥 CVPR 2024 Paper Spotlight: YOLO-World 🔥 Over the past few years, object detection has been cleanly divided into two camps. 1️⃣ Real-time closed-vocabulary detection: Single-stage detection models like those from the You-Only-Look-Once (YOLO) family made it possible to detect objects from a pre-set list of classes in mere milliseconds on GPUs. 2️⃣ Open-vocabulary object detection: Transformer-based models like Grounding DINO and Owl-ViT brought open-world knowledge to detection tasks, giving you the power to detect objects from arbitrary text prompts, at the expense of speed. YOLO-World bridges this gap! YOLO-World uses a YOLO backbone for rapid detection and introduces semantic information via a CLIP text encoder. The two are connected through a new lightweight module called a Re-parameterizable Vision-Language Path Aggregation Network. What you get is a family of strong zero-shot detection models that can process up to 74 images per second! YOLO-World is already integrated into [Ultralytics](https://www.linkedin.com/company/ultralytics/) (along with YOLOv5, YOLOv8, and YOLOv9), and FiftyOne! - Arxiv: https://arxiv.org/abs/2401.17270 - Project page: https://www.yoloworld.cc/ - GitHub: https://github.com/AILab-CVC/YOLO-World?tab=readme-ov-file - YOLO-World Ultralytics Docs: https://docs.ultralytics.com/models/yolo-world/ - YOLO-World FiftyOne Docs: https://docs.voxel51.com/integrations/ultralytics.html#open-vocabulary-detection - My post on LinkedIn: https://www.linkedin.com/feed/update/urn:li:activity:7206641438845992960/ ## 🔥 CVPR 2024 Paper Spotlight: DeepCache 🔥 Diffusion models dominate the discourse regarding visual genAI these days — Stable Diffusion, Midjourney, DALL-E3, and Sora are just a few of the diffusion-based models that produce breathtakingly stunning visuals. If you’ve ever tried to run a diffusion model locally, you’ve probably seen for yourself how these models can be pretty slow. This is because diffusion models iteratively try to denoise an image (or other state), meaning that many sequential forward passes through the model must be made. DeepCache accelerates diffusion model inference by up to 10x with minimal quality drop-off. The technique is training-free and works by leveraging the fact that high-level features are fairly consistent throughout the diffusion denoising process. By caching these once, this computation can be saved in subsequent steps. 
- Arxiv: https://arxiv.org/abs/2312.00858 - Project page: https://horseee.github.io/Diffusion_DeepCache/ - GitHub: https://github.com/horseee/DeepCache?tab=readme-ov-file - DeepCache Diffusers Docs: https://huggingface.co/docs/diffusers/main/en/optimization/deepcache - My post on LinkedIn: https://www.linkedin.com/posts/jacob-marks_cvpr2024-computervision-ml-activity-7206279082433478656-E5QC/ ## 🔥 CVPR 2024 Paper Spotlight: PhysGaussian 🔥 I’m a sucker for some physics-based machine learning, and this new approach from researchers at UCLA, Zhejiang University, and the University of Utah is pretty insane. 3D Gaussian splatting is a rasterization technique that generates realistic new views of a scene from a set of photos or an input video. It has rapidly risen to prominence because it is simple, trains relatively quickly, and can synthesize novel views in real time. However, to simulate dynamics (which involves motion synthesis), views generated by Gaussian splatting had to be converted into meshes before physical simulation and final rendering could be performed. PhysGaussian cuts through these intermediate steps by embedding physical concepts like stress, plasticity, and elasticity into the model itself. At a high level, the model leverages the deep relationships between physical behavior and visual appearance, following Nvidia’s “what you see is what you simulate” (WS2) approach. Very excited to see where this line of work goes! - Arxiv: https://arxiv.org/abs/2311.12198 - Project page: https://xpandora.github.io/PhysGaussian/ - My post on LinkedIn: https://www.linkedin.com/posts/jacob-marks_cvpr2024-computervision-ml-activity-7205916642499780608-sxti/ **If you’ll be at CVPR this year, be sure to come and say “Hi!”** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6aeqqj3f6xwzaadjac8w.png)
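A quick aside for anyone who wants to try the Depth Anything integration mentioned above: the Transformers pipeline API gets you a depth map in a few lines. This is a hedged sketch: the task name and model id follow the linked docs, but treat them as assumptions to verify, and the image path is just a placeholder.

```python
# Hedged usage sketch: monocular depth estimation with Depth Anything.
from transformers import pipeline
from PIL import Image

depth = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")
image = Image.open("room.jpg")          # placeholder path to any RGB image
result = depth(image)
result["depth"].save("room_depth.png")  # grayscale relative depth map
```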
jguerrero-voxel51
1,890,868
Hardware Acceleration
This is a submission for DEV Computer Science Challenge v24.06.12: One Byte...
0
2024-06-17T14:23:23
https://dev.to/harishprabhu02/hardware-acceleration-2lkn
cschallenge
*This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).* **Explainer** Hardware acceleration is a feature that uses specialized hardware to enhance the performance of certain tasks like graphics, videos, and sounds. It frees up the CPU, which results in much smoother and faster computer operations for a far superior experience. **Additional Context** With the help of Gemini, I have been able to write about hardware acceleration. This still involved my hand in structuring the words and not making it a complete AI-generated submission.
harishprabhu02
1,891,359
The Developers' Missing Piece for Boosting Productivity
As a front-end engineer and technical writer, my work life revolves around crafting interactive,...
0
2024-06-17T14:22:32
https://smoothtech.hashnode.dev/the-developers-missing-piece-for-boosting-productivity
productivity, ai, webdev, frontend
As a front-end engineer and technical writer, my work life revolves around crafting interactive, responsive, and dynamic User Interfaces and writing about tech. I am always eager to learn new things and share my views about them through writing. About two weeks ago, I came across two different posts talking about Pieces while scrolling on X; my curiosity kicked in, and I decided to take a look at what it was all about. Since that day, I can say that my development productivity levels have skyrocketed 🚀. Productivity is crucial for every developer as we want to complete tasks within a set timeframe to either beat a deadline, complete a project or have time to do some other things. Helping my fellow developers out is the reason why I decided to write this piece 😉 In this article, I will be reviewing the Pieces for Developers tool and how it can boost your output levels just like it did mine. Walk with me! **What is Pieces** Pieces is an AI-powered productivity tool built to streamline developers' workflow activities. I like to call it a Hub of streamlined workflow, seamlessly integrating essential aspects of a programmer's development process. Before I started using Pieces, my code snippets, screenshots, and general workflow tracking were all over the place. I used Notion, sometimes Notepad, and I always left many tabs open in my browser so I could revisit the specific webpages housing code snippets when I needed them. You can imagine how long it takes to trace a particular code snippet and how hectic it is. ## Awesome Pieces functionalities One important thing to note as we dive into the incredible world of Pieces is that it is a developer-focused tool - tailored specifically to assist developers. Pieces has many cool and useful functionalities. However, in this section, I will focus on the aspects that blow my mind and I use quite often. **Robust Snippet Management** Pieces offers a snippet management functionality that enables you to save a bunch of code snippets from both your browser and IDE to the Desktop App. ![Full snippet view](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ud823ntcdklio4rul2f.PNG) The Desktop app stores all saved snippets which can be accessed at any time and also keeps track of workflow activities. **Scenario:** After saving some code snippets to Pieces, I edited one by adding a comment to it and, later on, deleted the snippet. When you navigate to Workflow Activity, you will be able to view all actions performed on your snippets. This enables you to keep track of your snippets. Imagine a snippet was deleted by mistake; all you need to do is view your Workflow Activity and get an idea of what happened. ![Workflow Activity view](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k5dfe1ddr11o2y0btuur.PNG) **Shareable Snippet Link** One cool thing about how Pieces manages snippets is that you can generate a link that can be shared with colleagues or anyone on communication channels like GChat, Teams, Slack, and email to access your code. To generate a shareable link on Pieces, first click on the Generate Shareable Link icon. ![Image showing shareable link icon](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ib66qxnz277g6vlckr71.PNG) Then, copy your personal link after generation. ![Personal shareable link generation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d9ftd1u1xlwtzg2mq18h.PNG) You can now share this link with whoever you wish to. **NB:** You only generate a snippet link once. 
The next time you want to have access to the link, it will be attached to the Context Preview of your saved snippet. **Live Context** This is a powerful and versatile copilot feature that intrigues me the most, and I know you would find it interesting too. Pieces Live Context can scan across your system and be aware of what is going on. It helps you remember anything and interact with everything. It captures all context gathered from your system and processes and stores it entirely on-device. **Scenario:** I want to get my work day started, but I have lost track of where I stopped the previous day; that is where Live Context comes into play. To use Live Context, you need to switch it on before you proceed to prompt the copilot. ![Live contexting](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8namsptn30k6dy4zvo9g.PNG) Already, you get some Suggested Prompt options, including a prompt for where we left off. Just a click and it lists the last actions you performed! ![Suggested Live contexting prompt](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wgn4kkbdwgg0c9lpiszh.PNG) **Context** You can also manually set Context for the copilot to read files, folders, websites, snippets, or messages. **Scenario:** I was given an assignment to revamp the web pages of a particular project, so I cloned the GitHub repository to my local machine. It was a huge Laravel project with many folders, and I did not have a clue where the user view was located. Aha! I do have a tool that can help me out in that regard. Up steps Pieces. All I needed to do was set the Context to Folders and add the folder. ![Image showing both ways of adding folder to context](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ro0inhpgfem6tx39mkfd.PNG) Then, I proceeded to ask the copilot for the user view in the project folder. It took some time, but it came back with an answer correctly pointing to the location of the file, which is **resources > views > user**. ![Context showing location of user in the folder](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nsnbrklrxnbyyvmtd0y1.PNG) **Synchronized Workflow** My complete Pieces suite consists of the Desktop app, the Visual Studio Code extension, and a Chromium browser extension (Google Chrome). When I save snippets in my Chrome browser, they get added to my Saved Materials in the Desktop app in real time, which keeps my work flowing nicely and smoothly. **On-Device Large Language Models (LLMs)** On the 4th of June, the famous ChatGPT was down and the whole world was talking about it. People who heavily relied on the Generative AI tool were left stranded, but with an LLM locally deployed on your machine, you will not have to worry about such occurrences. Pieces offers LLMs that can reside on your computer system, which enables you to go about your development activities without having to be online. ![List of on-device LLMs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uaxviwv62vy1tyh00npw.PNG) Also, I love the fact that the Pieces copilot is readily available in my Visual Studio Code, where I do all my coding, and in my Chrome browser, where I carry out my research. This makes it possible for me to carry the same copilot conversation across all these tools. **Where Pieces Can Improve** The first area is the response time of the copilot when using Live Context. When using Live Context for huge folders, the delay is understandable, as it is going through a lot of files/folders. 
But for more straightforward Live Context prompting, the response should be delivered much faster. The second: the related links attached to saved snippets ironically link to some unrelated web pages. Overall, I am impressed by the idea behind the development of this software. The Pieces team took into consideration the complexity of a programmer's development process and built software that streamlines the workflow. And can I also say that the team at Pieces is amazing? They are always willing to answer questions and help you resolve any issues you encounter. Kudos to the Pieces Team! Having had early access to the stable and soon-to-be-released features, I can say that Pieces is here to stay for a long time and will grow as developers seek workflow organization. Want to try it out? You can download Pieces for free [here](https://pieces.app/). If you have any questions about Pieces, leave them in the comment section. You can also connect with me on [LinkedIn](https://www.linkedin.com/in/timothy-olanrewaju750/) for software-related posts and articles.
timothyolanrewaju
1,891,333
Mastering React: An indepth guide to building React applications
An In-Depth Guide to React.js Introduction React.js, commonly known as React,...
0
2024-06-17T13:45:36
https://dev.to/nmaduemmmanuel/mastering-react-an-indepth-guide-to-building-react-applications-39ol
# An In-Depth Guide to React.js

## Introduction

React.js, commonly known as React, is an open-source JavaScript library used for building user interfaces, particularly for single-page applications. It allows developers to create reusable UI components and manage the state of their applications efficiently. React was developed by Facebook and has gained immense popularity due to its simplicity, flexibility, and performance.

## Brief History of React.js

React was first developed by Jordan Walke, a software engineer at Facebook, in 2011. It was initially used in Facebook's newsfeed and later in Instagram. In 2013, React was released as an open-source project, and since then, it has become one of the most widely used libraries for front-end development.

## Key Features of React.js

### Component-Based Architecture

React follows a component-based architecture, which means the UI is divided into small, reusable components. Each component represents a part of the user interface and can be nested, managed, and handled independently. This modular approach makes it easier to develop and maintain complex applications.

**Example: Creating a Simple Component**

```javascript
import React from 'react';

function Greeting() {
  return <h1>Hello, World!</h1>;
}

export default Greeting;
```

### Virtual DOM

React uses a virtual DOM to improve performance. The virtual DOM is a lightweight copy of the actual DOM. When the state of a component changes, React updates the virtual DOM first and then compares it with the actual DOM. This process, known as reconciliation, ensures that only the necessary parts of the DOM are updated, resulting in faster rendering.

**Example: Updating the Virtual DOM**

```javascript
import React, { useState } from 'react';

function Counter() {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>You clicked {count} times</p>
      <button onClick={() => setCount(count + 1)}>Click me</button>
    </div>
  );
}

export default Counter;
```

### Declarative Syntax

React uses a declarative syntax, which makes the code more predictable and easier to debug. Developers describe what the UI should look like, and React takes care of updating the DOM to match that description. This approach simplifies the development process and reduces the chances of errors.

**Example: Declarative Rendering**

```javascript
import React from 'react';

function App() {
  const isLoggedIn = true;

  return (
    <div>
      {isLoggedIn ? <h1>Welcome back!</h1> : <h1>Please sign in.</h1>}
    </div>
  );
}

export default App;
```

### JSX (JavaScript Syntax Extension)

JSX is a syntax extension for JavaScript that allows developers to write HTML-like code within JavaScript. It makes the code more readable and easier to understand. JSX is compiled to JavaScript before being executed in the browser.

**Example: Using JSX**

```javascript
import React from 'react';

function UserProfile() {
  return (
    <div>
      <img src="profile.jpg" alt="Profile" />
      <h2>John Doe</h2>
      <p>Software Engineer</p>
    </div>
  );
}

export default UserProfile;
```

### One-Way Data Binding

React follows a one-way data binding approach, which means data flows in a single direction from parent to child components. This unidirectional data flow makes it easier to understand how data changes affect the application and helps in debugging.
**Example: One-Way Data Binding**

```javascript
import React from 'react';

function ChildComponent({ message }) {
  return <p>{message}</p>;
}

function ParentComponent() {
  const message = "Hello from parent!";

  return (
    <div>
      <ChildComponent message={message} />
    </div>
  );
}

export default ParentComponent;
```

## Benefits of Using React.js

### Reusability

React components are reusable, which means they can be used in different parts of the application or even in different projects. This reusability reduces development time and effort.

### Performance

React's virtual DOM and efficient update mechanism ensure high performance, even in complex applications with frequent state changes.

### Flexibility

React can be used with other libraries and frameworks, such as Redux for state management or Next.js for server-side rendering. This flexibility allows developers to choose the best tools for their specific needs.

### Strong Community Support

React has a large and active community of developers who contribute to its development and provide support through forums, tutorials, and documentation. This strong community support makes it easier for developers to find solutions to their problems and stay updated with the latest trends.

## Getting Started with React.js

### Setting Up the Development Environment

To start using React, you need to set up your development environment. You can use tools like Create React App, which is a command-line tool that sets up a new React project with a single command. It includes all the necessary configurations and dependencies, allowing you to focus on writing code.

**Example: Setting Up a New React Project**

```bash
npx create-react-app my-app
cd my-app
npm start
```

### Creating a React Component

A React component can be created using either a function or a class. Here is an example of a simple functional component:

**Example: Functional Component**

```javascript
import React from 'react';

function HelloWorld() {
  return <h1>Hello, World!</h1>;
}

export default HelloWorld;
```

### Managing State

State is an essential concept in React that allows components to manage and respond to changes. You can use the `useState` hook to add state to a functional component:

**Example: Using useState Hook**

```javascript
import React, { useState } from 'react';

function Counter() {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>You clicked {count} times</p>
      <button onClick={() => setCount(count + 1)}>Click me</button>
    </div>
  );
}

export default Counter;
```

## Conclusion

React.js is a powerful and flexible library for building user interfaces. Its component-based architecture, virtual DOM, and declarative syntax make it an excellent choice for developing modern web applications. By leveraging React's features and best practices, developers can create high-performance, maintainable, and scalable applications.

I hope you find this article helpful and informative! If you have any specific questions or need further details, feel free to ask in the comment section.
nmaduemmmanuel
1,891,364
How to Start a Software Development Company?
I've been observing the software industry for quite some time now. It's fascinating how companies...
0
2024-06-17T14:21:56
https://dev.to/igor_ag_aaa2341e64b1f4cb4/how-to-start-a-software-developvment-company-4fna
softwaredevelopment
I've been observing the software industry for quite some time now. It's fascinating how companies like Apple and Microsoft have led the charge in technological advancements, setting a high standard for success. Many of us dream of launching our own software ventures, but do you know how to go about it? New software firms are popping up all the time, eager to leverage the latest in tech innovation. Establishing a successful software company, one that can stand the test of time, is even more daunting. To thrive, I've learned that software companies must first pinpoint the problems they intend to solve and then hone in on providing solutions that genuinely address their customers' needs. Moreover, standing out from the competition is paramount for gaining that crucial edge. In this blog, I'm gonna talk about how to launch a software company. We'll chat about key considerations for leaders and startup founders, delve into strategies for ensuring the company's success, and explore ways to safeguard business and software products. ## Steps to Begin a Software Company In my opinion, before launching a software company, it's important to assess your skill set and identify the problem your software business will solve. Many software startup founders have a background in the technology industry or in software. Some gain programming and business skills through formal education, such as a degree in computer science or software engineering. Taking additional business courses in finance, accounting, or marketing can also be helpful. While formal education is helpful, it is not always necessary. Some successful founders gain skills through hands-on experience, such as working in management positions at software companies. Training programs offered by software companies can also help develop important communication and leadership skills. I think that in addition to skills and education, you should identify a problem that needs to be solved in the software industry. Think about what is missing or needed based on your experience. Assess the viability of your proposed product and determine if there is a market for it. Once you identify the problem and prove that there is a market for your solution, you will have a strong case for starting a software company. ### Software Business Structure & Legal Compliance When starting a software company, one must choose the appropriate legal structure that will impact tax filing and financial responsibilities. There are various options available depending on the company's objectives. - **Sole Proprietorship**: This is a basic business structure where one person owns and runs the business without any legal distinction between personal and business assets. The owner is personally liable for all aspects of the business. - **Partnership**: In a partnership, two or more individuals share ownership and responsibilities. Partners agree on profit sharing, losses, and liabilities within the business. - **Limited Liability Company (LLC)**: An LLC provides protection to owners from personal liability related to business decisions. Owners pay taxes based on individual income and self-employment tax. - **Corporation**: A corporation is a separate legal entity owned by shareholders. It files taxes independently, potentially leading to double taxation. Corporations are suitable for fast-growing software businesses but may not be ideal for smaller companies. 
Selecting the appropriate legal framework for your software company is essential to comply with all the necessary legal obligations when establishing and operating a business. There are various other legal considerations that must be taken into account. - It is important to understand and adhere to both national and state regulations for commencing and managing a software enterprise. This may involve obtaining permits and licenses as required. Failure to obtain the correct licenses and permits could result in significant fines and the closure of your business; - Identify the taxes that your business must pay, such as sales and income taxes. Depending on your business structure, you may have different tax payment options available; - Obtaining a tax identification number may be necessary for your software business. Corporations and partnerships are typically required to have a tax ID number if they file tax returns or provide business information to the IRS annually; - Registering a business name with the state government is also crucial. A "Doing Business As" (DBA) registration is mandatory if you operate under a name different from your legal name. The DBA can be registered through the state government or county clerk's office. ![Making a Business Plan](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0wda7sxyy7yml3mha9t1.png) ## Making a Business Plan Once you have determined the legal structure for your software company, the next step is to create a business plan. A business plan is a formal document that outlines your business goals, strategies, and financial projections. It serves as a roadmap for your company and helps you stay focused and organized while navigating the challenges of starting a new business. Below are the key components that should be included in your business plan: ### Executive Summary The executive summary is a brief overview of your business that should grab the reader's attention and preview what they can expect in the rest of the plan. This section should include a mission statement, a general description of your company, and a summary of your products or services. ### Company Description In this section, you should provide a detailed description of your company, including its legal structure, location, and history. You should also mention your target market, your unique selling proposition (USP), and your competitive advantage. ### Market Analysis A market analysis is a critical component of your business plan as it helps you understand your industry, market trends, and customer needs. This section should include research on your target market, your competitors, and any potential challenges or opportunities in the market. ### Organization and Management Here, you should provide an overview of your company's organizational structure, including the roles and responsibilities of key team members. This section should also outline your management team's qualifications and experience. ### Products or Services In this section, you should provide a detailed description of your software products or services. This includes how they work, their features and benefits, and pricing information. Additionally, if you have any intellectual property, such as patents or trademarks, this is where you would mention them. ### Marketing and Sales Strategy Your marketing and sales strategy is crucial for the success of your software company as it outlines how you will reach and attract your target audience. 
This section should include your target market, your marketing channels, and your sales tactics.

### Financial Projections

The financial projections section of your business plan should include your budget, income statement, cash flow projections, and balance sheet. This section should also include your break-even analysis and any funding or investment requirements.

![Raising the Necessary Funds](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ewex4hs2a1vw9pneyjb4.png)

## Raising the Necessary Funds

Unless you are one of the few founders who have been successful enough to self-fund your next business, you will likely need outside funding to launch your software company. Raising the necessary capital can be time-consuming. Many entrepreneurs turn to venture capital funds for investment. My advice is to research venture capital funds that support software companies like yours and contact them to explore partnership opportunities. When you accept funding from venture capital funds, you will have to hand over some of your company's equity to the investors.

Grants and loans are alternative funding options. You may be eligible for a loan backed by the Small Business Administration (SBA), so it's worth checking with your local SBA office for information. I believe that government agencies and educational institutions such as universities may also offer research grants that could benefit your software company.

In addition to using your own savings, another option is to seek investment from trusted friends, relatives, and business associates. Combining personal and professional relationships can be challenging, but I think that using your connections can be helpful in growing your software company.

## Assessing the Key Costs and Expenses

In addition to raising funds, it's crucial to assess the key costs and expenses involved in starting a software company. This includes both one-time and ongoing costs that you will need to budget for to ensure the smooth operation of your business.

### One-Time Costs

One-time costs are expenses that are incurred only once when starting your software company. These may include:

- Research and development costs for creating your software product;
- Legal fees for registering your business, obtaining necessary licenses, etc.;
- Office space and equipment;
- Marketing and advertising costs;
- Website development.

It's essential to carefully budget for these one-time costs to avoid any financial setbacks in the early stages of your business.

### Ongoing Expenses

Ongoing expenses are costs that must be paid regularly to keep your business running. These may include:

- Employee salaries and benefits;
- Rent or mortgage payments for office space;
- Marketing and advertising costs;
- Utilities, internet, and other monthly expenses;
- Software and hardware maintenance and updates.

It's crucial to carefully assess these ongoing expenses and factor them into your operating budget to ensure that your business remains profitable in the long run.

## Hiring the Right People

Like any other business, the success or failure of a software company depends on its employees. I believe that hiring the right people is critical to ensuring a strong start to your business. Therefore, founders should be actively involved in the hiring process. While every employee plays a role in a company's success, for software companies, having a skilled software development team is essential.
When recruiting developers, it is important to look for candidates with the necessary skills and a passion for working in startup environments. Offering stock options can be a way to attract and retain valuable talent. When advertising job openings, be clear about the required skills and target candidates with experience in new product development and startups.

Outsourcing work to contractors, freelancers, or overseas workers can be a good option, especially in the early stages of your business. However, make sure to have measures in place to protect your software assets, such as your source code, and only hire from reputable organizations.

## Testing, Promoting, and Marketing Your Software Products

In my opinion, software testing is critical to the success of your company at all stages, including the distribution of SaaS (Software as a Service) products. After the development phase, thorough testing should be done to identify and eliminate any bugs before launching the product. A high-quality SaaS product will enhance your company's reputation and increase customer loyalty, especially in the early stages of company development.

It is essential for software companies to have a comprehensive quality control plan in place for testing their SaaS products. A dedicated team of developers should test each feature to ensure it works correctly within the SaaS environment. External testers may also be involved to evaluate the product's quality across different platforms and devices. Testing procedures should be followed rigorously, and a selected group of end-users should provide feedback on the product's usability and functionality before finalizing it for distribution.

Once testing is complete, the SaaS product needs to be distributed effectively. Unlike traditional software distribution methods, SaaS software is typically distributed over the internet through cloud-based platforms. Companies can utilize subscription models or free trials to attract customers and encourage them to use the product. Additionally, establishing partnerships with other SaaS providers or integrating the product into existing platforms can help expand its reach.

Promotion and marketing play a crucial role in the distribution of SaaS software. Establishing a strong online presence with an attractive website, active social media accounts, and promotional campaigns is essential. This may include previews or teasers of the new SaaS product to attract potential clients and generate interest in the offering.

## Protecting Your Software From Intellectual Property Theft

Intellectual property (IP) is a crucial asset for any software company, and protecting it is essential to the success of your business. IP includes patents, trademarks, copyrights, and trade secrets, and without proper protection, it can be vulnerable to theft or infringement. Below are some key steps you can take to protect your software from intellectual property theft:

### Register Your Copyrights and Trademarks

Copyrights protect creative works, such as software code, while trademarks protect logos, slogans, and brand names. Registering your copyrights and trademarks with the appropriate government agencies can provide legal protection against infringement and allow you to take legal action in case of theft.

### Use Trade Secrets

Trade secrets are confidential information that gives your software company a competitive advantage.
It's essential to have non-disclosure agreements (NDAs) in place when sharing trade secrets with employees or contractors to prevent them from being shared with competitors.

### Monitor Your Intellectual Property

Regularly monitoring your intellectual property can help you identify any potential infringements and take prompt legal action. This includes conducting regular searches on the internet and trademark databases to ensure that no one is using your copyrighted material or trademarks without permission.

![Monitor Your Intellectual Property](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s48987cp98x4r1qvyupo.png)

## Protecting Your Software Business with the Right Insurance Coverage

Learning how to establish a software company involves evaluating the various risks that you and your business will encounter from the beginning. Just like any other business, a software company cannot operate legally and securely without having the appropriate insurance coverage in place. This is why having a solid risk management strategy is crucial.

Cyber liability insurance is vital for all software companies to safeguard against cyber threats such as ransomware attacks and data breaches.

Technology errors and omissions insurance (Tech E&O) is equally important as it provides protection against liability risks specific to individuals working in the tech industry, including those at software companies. For software development firms, this should be a primary insurance consideration.

Most business owners, including startup founders, typically need a business owner's policy (BOP). A BOP combines multiple policies like general liability insurance, commercial property insurance, and business interruption insurance.

Directors and officers insurance (D&O) is essential for the leadership of a software company. D&O insurance shields directors and officers from legal actions alleging a breach of their fiduciary responsibilities. While having D&O insurance is generally recommended, it is particularly crucial for any software company with a board of directors.

All software companies with employees must have workers' compensation insurance. Although there are exceptions, most states mandate workers' comp coverage for businesses. Additionally, employment practices liability insurance (EPLI) offers protection for software companies against claims brought by employees against their employer.

## Conclusion

Starting a software company requires dedication, research, and patience. You must fully commit to your vision and be prepared to take risks to make it a reality. I think it is crucial for startup founders to secure their business from the beginning by developing a solid business plan, choosing the right legal structure, fulfilling all legal obligations, securing funding, hiring the right team, and ensuring the quality of the product through testing. Protecting your intellectual property and having the correct insurance coverage are essential steps in responsibly starting a software company.
igor_ag_aaa2341e64b1f4cb4
1,891,363
Some Books to Upskill Communication For Software Engineers
(Full disclosure: This post includes affiliate links which, if you click through and purchase the...
0
2024-06-17T14:21:39
https://www.stephenhara.com/posts/2024-06-17-communication-upskill-books
books, communication
*(Full disclosure: This post includes affiliate links which, if you click through and purchase the product, will earn me a small commission. This helps support my writing so I can keep writing helpful posts like this!)*

When we write code and build software, we're not just communicating to the computer the instructions we want it to perform. We're also communicating various intents to our future selves and our team members:

- What the system should do now, as the code available to be executed
- What the system should do in the future, as code hidden behind feature flags and other forms of prevented execution
- How we want the code to grow with the system, as architectures and design patterns

Ultimately, we are doing an awful lot of communicating in our work as developers. And "work on communication" is pretty common advice, but it's hard to find specifics. So I'd like to share 3 books I found incredibly helpful for my growth and improving my communication skills!

## The Science of Effective Communication

[The Science of Effective Communication by Ian Tuhovsky](https://amzn.to/4crQ0Zb) is a general book on communication and how it's used to build and maintain connections of all kinds. It goes over virtually all aspects of communication, from the first chapter about listening to the last chapter about interview skills. In between, it addresses how to keep conversations alive; how to handle discussions with high emotions; persuasion and why you shouldn't feel bad about trying to be a little persuasive; complaining; communicating with non-native speakers and hearing-impaired speakers; and a whole lot more.

Software developers spend a lot of time talking to other people: our team, our manager, their manager, the product manager, HR, recruiters, clients, and on and on. This book is full of actionable tips and ways to think about thoughtful conversations so you can improve your career, improve your relationships, be they business or personal, and carry yourself with more confidence.

## Writing for Busy Readers

[Writing for Busy Readers by Todd Rogers and Jessica Lasky-Fink](https://amzn.to/3VLhLGg) is specifically about writing and communicating through text and accompanying media. The advice is based on studies and real successful work by the authors in increasing outcomes for clients like schools, political parties, and enterprises.

The main conceit of the book is that readers in today's world are *really busy*. What that means is you need to reduce the barriers to understanding your writing as much as possible, with strategies like using simple language, using layout and formatting effectively, and making your intended outcomes easy for the reader.

The easiest way for us software developers to increase our [impact and visibility](https://www.pathtostaff.com/blog/20231214-visibility) is by writing. It leaves a permanent mark on the organization that says "I contributed to our body of knowledge or systems of value". Whether it be design docs, proposals, code documents, or just a Slack message asking for thoughts, writing is a critical skill for developers to develop.

The authors also offer several [free resources](https://writingforbusyreaders.com/resources/) on the book's website, so you can get a lot of benefit before buying the book as well.
## Docs for Developers [Docs for Developers by Jared Bhatti, Sarah Corleissen, Jen Lambourne, David Nunez, Heidi Waterhouse](https://amzn.to/4c4n4Xb) is more specifically about writing documentation, but documentation is an incredible exercise in effective communication. You can't know what the right documentation to write is without knowing what your reader needs, and that's a foundation of communication. Much like software products, documentation requires planning, drafting and revision, and evolution and maintenance. The book goes through these steps from the perspective of a fictional cuddly dog-to-human translation service called Corg.ly. It covers the pre-work, the work itself, and the continuous aspects of documentation, as well as how to include code and visual content, publishing, organization, and more. One of the more important takeaways from this book for me was the idea that there are different kinds of documentation: tutorials, API references, glossaries, etc. It's more helpful to decide on one for each piece of documentation than to try and do multiple things with one piece. Similar to the idea of [separation of concerns](https://en.wikipedia.org/wiki/Separation_of_concerns). There's a [great talk by Daniele Procida](https://www.writethedocs.org/videos/eu/2017/the-four-kinds-of-documentation-and-why-you-need-to-understand-what-they-are-daniele-procida/) that has the same idea, but presented differently. ## Summary Communication is an integral part of our work as software developers, and we should strive to improve our skills in communication as much as we should strive to improve our technical skills. I thought these books were all tremendously helpful, and I think if you haven't read them, you'd be doing yourself a great favor if you did!
tarsir
1,891,362
JavaScript Modules: Love 'Em or Hate 'Em?
JavaScript modules are a way to organize and structure code in a modular fashion, making it more...
27,558
2024-06-17T14:21:02
https://dev.to/imabhinavdev/javascript-modules-love-em-or-hate-em-40kp
webdev, javascript, beginners, tutorial
JavaScript modules are a way to organize and structure code in a modular fashion, making it more manageable and reusable. Modules allow developers to break down complex applications into smaller, self-contained units that can be easily maintained, tested, and reused across projects. In JavaScript, there are two primary module systems: CommonJS and ES Modules (ECMAScript Modules). This blog will provide an in-depth look at both, highlighting their features, differences, and usage with detailed examples. ## Introduction to JavaScript Modules Modules are essential in JavaScript for creating well-structured and maintainable code. Before modules, developers often faced issues with global scope pollution and dependency management. With the advent of modules, these problems are significantly reduced. JavaScript modules help in: - Encapsulation: Keeping code and variables private within modules. - Reusability: Allowing code to be reused across different parts of the application. - Maintainability: Making it easier to manage and update code. ## CommonJS Modules CommonJS is a module system used primarily in Node.js. It was created to allow JavaScript to be used for server-side scripting and provide a mechanism for including and exporting modules. ### Features of CommonJS 1. **Synchronous Loading**: Modules are loaded synchronously, which means the code execution is blocked until the module is fully loaded. 2. **Exports Object**: Modules are exported using the `module.exports` or `exports` object. 3. **require() Function**: Modules are imported using the `require()` function. 4. **Single Export**: Each module can export a single object, which can be an object, function, or primitive value. ### Examples of CommonJS #### Exporting Modules Create a file named `math.js`: ```js // math.js function add(a, b) { return a + b; } function subtract(a, b) { return a - b; } module.exports = { add, subtract }; ``` #### Importing Modules Create a file named `app.js`: ```js // app.js const math = require('./math'); console.log(math.add(5, 3)); // Output: 8 console.log(math.subtract(5, 3)); // Output: 2 ``` In this example, the `math.js` file exports an object containing the `add` and `subtract` functions, which are then imported and used in the `app.js` file using the `require()` function. ## ES Modules ES Modules (ECMAScript Modules) is the official standardized module system introduced in ES6 (ECMAScript 2015). It is designed to work in both browser and server environments. ### Features of ES Modules 1. **Asynchronous Loading**: Modules can be loaded asynchronously, which improves performance, especially in browser environments. 2. **Static Analysis**: The module structure can be statically analyzed, enabling advanced optimizations like tree shaking. 3. **import and export Keywords**: Modules are imported and exported using the `import` and `export` keywords. 4. **Named and Default Exports**: Modules can have multiple named exports and a single default export. 
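To make the asynchronous loading mentioned in feature 1 concrete, here is a minimal sketch of the dynamic `import()` form. It assumes a module like the `math.mjs` defined in the next section, with named exports and a default export:

```js
// Dynamic import() loads a module on demand and returns a promise
// that resolves to a module namespace object.
async function loadMath() {
  const math = await import('./math.mjs');
  console.log(math.add(5, 3));     // named export: 8
  console.log(math.default(5, 3)); // default export (multiply): 15
}

loadMath();
```

Unlike static `import` declarations, dynamic `import()` can be called anywhere in your code, which is what makes lazy loading possible in the browser.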
### Examples of ES Modules

#### Exporting Modules

Create a file named `math.mjs`:

```js
// math.mjs
export function add(a, b) {
  return a + b;
}

export function subtract(a, b) {
  return a - b;
}

export default function multiply(a, b) {
  return a * b;
}
```

#### Importing Modules

Create a file named `app.mjs`:

```js
// app.mjs
import multiply, { add, subtract } from './math.mjs';

console.log(add(5, 3)); // Output: 8
console.log(subtract(5, 3)); // Output: 2
console.log(multiply(5, 3)); // Output: 15
```

In this example, the `math.mjs` file exports the `add` and `subtract` functions as named exports and the `multiply` function as the default export. The `app.mjs` file then imports and uses these functions.

## Comparison: CommonJS vs. ES Modules

### Differences in Syntax

| Feature | CommonJS | ES Modules |
|----------------------|---------------------------------------|----------------------------------|
| Export | `module.exports` or `exports` | `export` keyword |
| Import | `require()` | `import` keyword |
| Default Export | `module.exports = value` | `export default value` |
| Named Export | `exports.name = value` | `export const name = value` |
| Importing Named | `const { name } = require('module')` | `import { name } from 'module'` |
| Asynchronous Loading | No | Yes |

### Differences in Features

1. **Loading Mechanism**: CommonJS modules are loaded synchronously, while ES Modules support asynchronous loading, which is beneficial for performance, especially in browsers.
2. **Scope**: CommonJS modules are wrapped in a function before execution, providing module-level scope. ES Modules use block-level scope, which is more in line with modern JavaScript practices.
3. **Syntax**: ES Modules use a more modern and declarative syntax (`import`/`export`), while CommonJS uses a function-based approach (`require`/`module.exports`).

### Performance Considerations

1. **Browser Compatibility**: ES Modules are natively supported in modern browsers, whereas CommonJS modules require a bundler or transpiler (e.g., Webpack, Babel) for use in browsers.
2. **Tree Shaking**: ES Modules support tree shaking, a process where unused code is eliminated during the build process, reducing the final bundle size and improving performance.
3. **Dependency Resolution**: ES Modules use static analysis, allowing for better optimization and error checking at compile time compared to CommonJS.

## When to Use Which Module System

- **CommonJS**: Use CommonJS if you are working with Node.js. It is the default module system for Node.js and is widely used in server-side development.
- **ES Modules**: Use ES Modules if you are working with modern JavaScript (ES6+) and need compatibility with both browser and server environments. ES Modules are the future of JavaScript modularization and offer better performance and optimization capabilities.

## Conclusion

Both CommonJS and ES Modules provide robust solutions for modularizing JavaScript code, each with its own set of features and use cases. Understanding their differences and when to use each can help you write more efficient, maintainable, and scalable code.

### Summary

- CommonJS is primarily used in Node.js and uses synchronous loading and the `require()` function.
- ES Modules are standardized in ES6, support asynchronous loading, and use the `import` and `export` keywords.
- ES Modules offer better performance optimizations like tree shaking and are natively supported in modern browsers.
By mastering both module systems, you can leverage the strengths of each in your JavaScript projects and stay ahead in the ever-evolving world of web development. Happy coding!
imabhinavdev
1,891,354
7 Open Source Projects for Web Development that you didn't know
Exploring lesser-known Open Source projects can unveil tools that are not only powerful but also...
0
2024-06-17T14:19:23
https://dev.to/buildwebcrumbs/7-open-source-projects-for-web-development-that-you-didnt-know-1c8c
opensource, webdev, productivity, discuss
Exploring lesser-known Open Source projects can unveil tools that are not only powerful but also tailored to specific development needs that might not be covered by the most popular projects. Here are seven lesser-known Open Source projects that can help you bring innovation and efficiency to your web development projects.

---

### 1. **[Alpine.js](https://alpinejs.dev/)**

- **Description**: A rugged, minimal framework for composing JavaScript behavior in your markup. It offers the reactive and declarative nature of big frameworks like Vue or React at a much lower cost.

### 2. **[Stencil](https://stenciljs.com/)**

- **Description**: A compiler that generates Web Components (more robust and interoperable custom elements) which work across modern frameworks. Stencil combines the best concepts of the most popular frameworks into a simple build-time tool.

### 3. **[Hugo](https://gohugo.io/)**

- **Description**: Hugo is a fast and modern static site generator written in Go, designed to make website creation simple yet powerful. It's ideal for projects needing lightning-fast build times and supports a wide range of content management scenarios.

### 4. **[RedwoodJS](https://redwoodjs.com/)**

- **Description**: A full-stack JavaScript framework that brings together the best parts of React, GraphQL, Prisma for ORM, and Jest for testing. It's designed to be highly scalable and offers a unique 'cells' feature for declaratively loading data.

### 5. **[Astro](https://astro.build/)**

- **Description**: A fresh but promising framework for building faster websites and web applications. Astro allows you to write your UI components using React, Vue, Svelte, or even plain HTML/CSS, while managing to deliver lightning-fast performance by shipping less JavaScript.

### 6. **[KeystoneJS](https://keystonejs.com/)**

- **Description**: A powerful CMS and web app framework built on Express and MongoDB. KeystoneJS makes it easy to create complex databases and backend logic while providing an intuitive admin UI and API layer for your apps.

---

### 7. **[Webcrumbs](https://www.webcrumbs.org/)**

- **Description**: Webcrumbs is poised to be a game-changer in web development. We are creating a Plugin Ecosystem for the JavaScript community with the JavaScript community's help. As we build, [we invite the community to join us](https://discord.gg/4PWXpPd8HQ) on this exciting journey—participate in our development process, provide feedback, and be part of something groundbreaking.

Stay tuned and join our community to get early access and updates!

⭐ Star our GitHub repo: https://buff.ly/3xktEK5

💬 Join our Discord community: https://buff.ly/4e9INP5

📬 Sign up for our newsletter: https://buff.ly/3VxadXE
opensourcee
1,891,356
How to transform Component Development with Storybook and Symfony UX ?
Hey everyone! I am so excited about this article because what I'm going to show you here has been my...
0
2024-06-17T14:17:39
https://dev.to/sensiolabs/how-to-transform-component-development-with-storybook-and-symfony-ux--c86
Hey everyone! I am so excited about this article because what I'm going to show you here has been my dream since I first heard about Symfony UX! I will demonstrate a setup that makes me incredibly productive, but most importantly, brings me a lot of joy. As you know, I love working with components (TwigComponent/LiveComponent), and I also adore Storybook! In my last article, I showed you how to use Storybook to share your components with your team. Today, we're going to dive even deeper. Storybook has been a game changer for me primarily because it provides the best environment to work with components. Components are visual and interactive, and Storybook offers a fantastic playground to view, interact with, and test your components. It helps you create beautiful, interactive, fun, and robust components. So, let's see how this works!

## Working in isolation

If you remember from my first article, we discussed the four main rules of component architecture. One of these rules is independence. Your components should not depend on the context of the page; you should be able to move your component from one page to another without any issues. The great thing about using Storybook is that it enforces this rule. Storybook works by isolating each component, allowing you to test them one by one. This ensures that if your component works in Storybook, it will work seamlessly on all your pages.

### Hot Reload

So when I am working on a new component, the first thing I do is create a story. A really basic one that just gives me the environment to start building.

```js
import Alert from '../templates/components/Alert.html.twig';
import { twig } from '@sensiolabs/storybook-symfony-webpack5';

export default {
  component: (args) => ({
    components: {Alert},
    template: twig`
      <twig:Alert>
        {{ message }}
      </twig:Alert>
    `
  }),
}

export const Default = {}
```

Then I can just focus on creating my component. With hot reload, I get quick feedback, which makes things really comfortable for a visual component like an alert.

![Show storybook hot reload](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a0xdg6chddff7wdaaa8l.gif)

No need to press F5 anymore or try your component on random pages. Just write a story for your Storybook, and you will have a nice development environment.

### Interactions

When working on a new feature, one of the most frustrating aspects can be having to interact with your component to access the part you want to test. For example, I have a small form here, and I want to see how it looks when the entered email is invalid.

![Form in storybook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kdsfybf9hb9smv11jhcq.gif)

Doing this manually can be a huge waste of time, especially for larger components. That's why I don't do that anymore! With Storybook, you can automate your interactions to reach the exact state you want to test.

**How can I do that?**

First, I set up my Default story:

```js
import Email from '../templates/components/Email.html.twig';
import { twig } from '@sensiolabs/storybook-symfony-webpack5';

export default {
  component: (args) => ({
    components: {Email},
    template: twig`
      <twig:Email/>
    `,
  })
}

export const Default = {}
```

Then, I add a new story called 'WrongEmail':

```js
import Email from '../templates/components/Email.html.twig';
import { twig } from '@sensiolabs/storybook-symfony-webpack5';
import {userEvent, waitFor, within, expect, fn} from "@storybook/test";

...

export const Default = {}

export const WrongEmail = {
  play: async () => {
    ...
} } ``` In this story, I define a play function. This function contains snippets of code executed after the story renders, allowing you to interact with your components and test scenarios that would otherwise require manual intervention. The play function looks like this: ```js import {userEvent, waitFor, within, expect, fn} from "@storybook/test"; play: async ({canvasElement}) => { const canvas = within(canvasElement); await userEvent.type(canvas.getByLabelText('Email'), 'wrongemail'); await userEvent.type(canvas.getByLabelText('FirstName'), 'Kobe'); await userEvent.type(canvas.getByLabelText('LastName'), 'Bryant'); await userEvent.click(canvas.getByRole('button')); } ``` Storybook provides a wrapper around https://testing-library.com/. If you are not familiar with this library, it is widely used in the JS community, very robust, and has a strong community around it, making it easy to find good resources. So, if we get back to our play function, we see that we define an argument **canvasElement**. This **canvasElement** represents the canvas where our story is rendered. Then we do the following: ```js const canvas = within(canvasElement); ``` Here, we wrap our canvas in an object to enable better assertions later. ```js await userEvent.type(canvas.getByLabelText('Email'), 'wrongemail'); ``` This line simulates the user typing ‘wrongemail’ in the input with the label Email. We do the same thing for the first name and last name: ```js await userEvent.type(canvas.getByLabelText('FirstName'), 'Kobe'); await userEvent.type(canvas.getByLabelText('LastName'), 'Bryant'); ``` Then we click on the submit button: ```js await userEvent.click(canvas.getByRole('button')); ``` And just like this, Storybook will perform the interactions for me, so I no longer need to do all these steps by hand. ![interactions storybook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tgsqr3n2zra72aodt60i.gif) You can also easily debug what happens using the interaction panel and go back to previous steps. ![step by step in storybook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l4j05b8g8u9o6w8djcf8.gif) I just love this feature—it saves me so much time. Components are visual and interactive, and having an environment that lets me see and interact with my components easily is a real game changer. And you know what? We can do even more! We can use Storybook to test our components! ### Testing I have a component, `RadioList`, that displays a list of radios and a search bar. When the user types into the search bar, the component updates the list of radios accordingly. ![radio component in storybook](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zgxj5nxdylnu12jo8l1t.gif) I know the content of my database, so I want to ensure that when I type "90," four radios are displayed. We already know how to do this by writing a play function and adding an assertion at the end. ```js import {twig} from "@sensiolabs/storybook-symfony-webpack5"; import {userEvent, waitFor, within, expect} from "@storybook/test"; export default { component: (args) => ({ template: twig` <twig:RadioList /> ` }), } export const Default = { }; export const Play = { play: async ({ canvasElement }) => { const canvas = within(canvasElement); await userEvent.type(canvas.getByRole('searchbox'), '90'); await waitFor(() => { expect(canvas.queryAllByRole('listbox')).toHaveLength(4) }); }, } ``` And just like that, I have a real test that fully tests my LiveComponent from PHP to JavaScript! 
I can see that my test is working by checking the interaction panel. I now have a green "Pass" indicator.

![Tests pass](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pn5aci42d4z0o8ywt242.png)

We can go even further and run all our tests at once by running:

```bash
npm run test-storybook
```

What's interesting is that even simple stories with no interaction are tests. Storybook checks that all your components are rendered correctly. This is just a test runner, so you can run your entire test suite in your CI. By leveraging Storybook for testing, you ensure a seamless and efficient development process, catching issues early and maintaining high-quality components.

Storybook has transformed the way I develop and test components. From working in isolation to hot reloading, automating interactions, and writing comprehensive tests, Storybook provides a development environment that boosts both productivity and fun. Although we still rely on Node.js, it's not a big deal since Storybook is only used in your development environment and not deployed to production. Components are often visual elements of your application, and maintaining a visual development approach greatly enhances the process. Storybook ensures that your components are robust, interactive, and beautifully designed.

I hope you enjoyed this article and found it helpful. See you soon!
webmamba
1,891,355
Why is it dangerous to use the HTTP protocol on public Wi-Fi.
What is HTTP HTTP is a hypertext transfer protocol that underlies the Internet. It`s located at the...
0
2024-06-17T14:12:45
https://dev.to/marko_k/why-is-it-dangerous-to-use-the-http-protocol-on-public-wi-fi-1gh
http, https
**What is HTTP**

HTTP is a hypertext transfer protocol that underlies the Internet. It's located at the topmost layer of the OSI and TCP/IP models (the application layer). HTTP is implemented in two programs: a client program and a server program. The client and server programs run on different end systems and communicate with each other by exchanging HTTP messages. HTTP defines the structure of these messages and how the client and server exchange them. The problem with this protocol is that the data is transmitted in clear text, and anyone can intercept your traffic.

**What is HTTPS**

The problem with the HTTP protocol is solved by its extended version, HTTPS. HTTPS stands for HyperText Transfer Protocol Secure. This protocol adds to the regular version the ability to encrypt data using the TLS cryptographic protocol. The HTTPS protocol provides that when a connection is established, the client and server agree to use a temporary key with which they will encrypt and decrypt messages. This key is called a "session" key and is valid only for the current session. Each new session will generate a new key. To transfer a website to HTTPS, the owner must obtain a special certificate, information from which is used to verify the authenticity of the web resource. Accordingly, the organization that issued such a certificate becomes a trusted third party whose participation allows users not to fear that their data will be stolen.

**The main danger of HTTP**

What's so scary about the fact that traffic with the HTTP protocol is quite easy to intercept? By using sites with the HTTP protocol on an open network, you are exposed to anyone with bad intentions. He or she can easily intercept your traffic, access your cookies and server software versions, and see everything that you enter into various forms, including bank card details, logins, passwords, and other metadata that can be used to test the system or identify potential vulnerabilities.

**Capturing an HTTP packet using Wireshark**

As I wrote earlier, the danger of the HTTP protocol is that its packets are quite easy to intercept, and now I will clearly demonstrate this. I created my own simple web server on Apache HTTP Server with an authorization window. How I created it will be described in the next article. Thanks to it, I can show how unprotected you will be with the HTTP protocol on an open network.

![Authorization window](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sa4g0wivktcl0xrmkgsd.png)

You see a typical authorization window with the ability to enter a login and password (in my case it's email, but this doesn't change the essence). I enter hypothetical data. My web server has a database with existing users, and if the data entered in the form does not correspond to the data in the database, the tab with the basic information will not be displayed. Unfortunately, even without gaining access to the treasured information, your data can be stolen.

![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7u24wcbn2rr3300k1bqw.png)

Wireshark has a convenient filtering feature. I will use it to quickly find the necessary HTTP packets. We need the POST method because it is the one that carries our form data, and that data could end up in the hands of an attacker.
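As a minimal example, the display filter for this step looks like the line below (assuming Wireshark's standard display-filter syntax); typing it into the filter bar hides every packet except HTTP POST requests:

```
http.request.method == "POST"
```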
POST is a method of sending data to the server, for example, after filling out a registration or authorization form on a website.

![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fmu11yodhnl2n6fcz2rf.png)

Click "Follow TCP Stream" and you get a window in which the entire exchange between the two nodes is clearly displayed.

![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ghp5e9duxscz4bl45bor.png)

And look! Without any encryption, we are presented with a login and email (there could just as easily have been a password). I hope my article clearly showed why you CANNOT use sites with an unsecured HTTP protocol, especially on open, unsecured networks where all your network traffic is in full view. I also advise you not to enter confidential information through an unsecured network. Again, your traffic can be intercepted. If you want to protect yourself, use a VPN.
marko_k
1,891,352
Handling React Warnings: Filtering Props in Styled Components
When working with React and styled-components, you might encounter warnings about unrecognized props...
0
2024-06-17T14:10:44
https://dev.to/mochafreddo/handling-react-warnings-filtering-props-in-styled-components-3233
react, styledcomponents, performance, codemaintainability
When working with React and styled-components, you might encounter warnings about unrecognized props being passed to DOM elements. This blog post will walk you through understanding why this happens and how to resolve it effectively. #### The Problem You might see a warning like this in your console: ``` Warning: React does not recognize the `isActive` prop on a DOM element. If you intentionally want it to appear in the DOM as a custom attribute, spell it as lowercase `isactive` instead. If you accidentally passed it from a parent component, remove it from the DOM element. ``` This warning occurs because React is trying to pass a prop (`isActive` in this case) to a DOM element, but `isActive` is not a standard HTML attribute. React warns you to prevent potential issues with rendering and performance. #### Why This Happens 1. **HTML Standard Attributes**: DOM elements should only receive standard HTML attributes. Non-standard attributes can cause unexpected behavior or be ignored by the browser. 2. **Performance**: Passing unnecessary props to DOM elements can lead to performance overhead. 3. **Maintainability**: Ensuring only relevant props are passed to DOM elements makes the code cleaner and easier to maintain. #### The Solution To prevent non-standard props from being passed to DOM elements, you can use the `shouldForwardProp` utility provided by `styled-components`. This utility allows you to filter out props that should not be forwarded to the underlying DOM element. Here’s how you can update your styled component to filter out the `isActive` prop: ```typescript import styled from 'styled-components'; const Tab = styled('span').withConfig({ shouldForwardProp: (prop) => prop !== 'isActive', })<{ isActive: boolean }>` text-align: center; text-transform: uppercase; font-size: 12px; font-weight: 400; background-color: rgba(0, 0, 0, 0.5); border-radius: 10px; color: ${(props) => props.isActive ? props.theme.accentColor : props.theme.textColor}; a { padding: 7px 0px; display: block; } `; ``` #### Explanation 1. **`styled('span')`**: This creates a styled `span` element. 2. **`.withConfig`**: This method allows you to configure the styled component. 3. **`shouldForwardProp`**: This function filters out the `isActive` prop, preventing it from being passed to the DOM element. 4. **Dynamic Styling**: The `isActive` prop is used to conditionally apply styles without being passed to the DOM. #### Benefits - **Avoid React Warnings**: Prevents React from issuing warnings about unrecognized props. - **Improved Performance**: Reduces the overhead of passing unnecessary props to DOM elements. - **Cleaner Code**: Ensures that only relevant props are passed, making the codebase easier to understand and maintain. #### Conclusion By using `shouldForwardProp` in `styled-components`, you can effectively manage which props are passed to DOM elements, avoiding React warnings and improving the overall quality of your code. This approach ensures that your components are both performant and maintainable. Feel free to refer to this solution whenever you encounter similar issues with prop forwarding in React and styled-components.
mochafreddo
1,891,343
How I Became a Top-Rated Freelancer: 20 Proven Strategies You Need!
Hey, my name is Can İz, but people know me as “Kris” in the freelancer space. I’m a full-stack web...
0
2024-06-17T14:08:35
https://mrkriswaters.medium.com/how-i-became-a-top-rated-freelancer-20-proven-strategies-you-need-e58f58615205
freelance, fiverr, tradingview, pinescript
Hey, my name is Can İz, but people know me as “Kris” in the freelancer space. I’m a full-stack web developer with a 12-year professional career. In 2017, I got tired of working in the same field, so I started looking for other ways to enjoy my life while working on projects that needed professional help. I got involved in a couple of startups and felt the energy and freedom of creating things I loved. At the end of 2020, I decided to leave my corporate job and continue my career as a freelance programmer.

Between 2017 and 2020, I developed an interest in finance. I learned technical analysis, which helped me find my new niche in the freelance world. On platforms like Fiverr and Upwork, there’s a lot of competition. That’s why picking your field is the most important decision you’ll make when entering the freelancer world.

I’m here to share my story of a 3-year freelance career with over 1200 completed projects as a “Pine Script” programmer. I will provide tips and tricks about freelancing, and I hope you’ll find them useful for your own journey.

![KrisWaters Fiverr Profile Screen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kp3hsruhcgcbbtx3ezeg.png)

## 1. Communication is Key — Create Your Own Communication Style and Template Words

In my early days as a freelancer, I had no idea how to convert connections into paid tasks. After taking on many projects, I understood which communication styles worked best. Once I realized this, I started creating “template sentences.”

**Here are some examples that work well for me:**

- Hi Joe, thank you for reaching out to me. If you can share all the details, I will check and keep you posted soon.
- Thank you for reaching out to me. I will check the details and keep you posted shortly.
- Thanks for the detailed explanation. I understand the idea. I can code you a strategy script based on your instructions in “X days.” My service fee would be “X US$.” Please let me know if you have any questions.

## 2. Create Articles and Share Them with Your Customers When Needed

At the end of a project, you may need to share information with your client on how they can view or execute the delivery. This content generally remains consistent from client to client. For such scenarios, I recommend preparing guide content. This way, you can save time and leave a more professional impression on the other party.

For example, I save the code I write in a text document and share it with my clients. They need to transfer this file to the platform where they will view the result and perform the compilation process. I have a guide that explains all the necessary steps for my clients to view the result smoothly and accurately. Instead of writing the same thing with each delivery, it is sufficient to share the link to the existing guide.

[If you don’t know how to add source code into your TradingView account, please read the article below.](https://medium.com/pinescriptcoding/how-to-add-pinescript-code-on-your-tradingview-chart-a1c64f886c8e)

You can create guide content covering similar frequently asked questions from different clients.
[You can learn how to create an alert on the TradingView strategy by reading the article below.](https://mrkriswaters.medium.com/how-to-create-an-alert-for-strategies-on-tradingview-efa504725601)

[If you wish to create a TradingView alert that works with 3Commas auto trade service, please follow the instructions below.](https://medium.com/pinescriptcoding/how-to-create-an-strategy-alert-for-tradingview-that-works-on-3commas-d8f51e78708c)

If the feedback you receive from your clients is often not detailed enough, I recommend preparing guide content explaining your expectations to them.

[How to report a bug for a custom coded TradingView script?](https://mrkriswaters.medium.com/how-to-report-a-bug-for-custom-coded-tradingview-script-21fe392cd49)

## 3. Create a Repository — Use It for Similar Tasks

Organizing your completed work systematically can be incredibly beneficial for future projects, as you can reuse it for similar client requests. This can save you time and effort. For instance, as a programmer, while client requests may be unique, they often ask for features that you’ve implemented in previous work. With good archiving, you can easily find past work and use existing components for new clients instead of starting from scratch.

I keep all my work in a single folder. When I remember that I’ve written similar code for a previous request, I can easily find that work by searching through the code with a text editor.

For a designer, organizing their archive in nested folders would be more meaningful. Instead of consolidating all their work in one area, dividing it by industry (clothing, education, sports), type of work (logo, letterhead, website), and even color scheme into different folders would make it easier to find the desired work when needed.

## 4. Check Your Messages Frequently — Speed is Key

Especially at the beginning of your freelance career, it’s beneficial to check your inbox frequently. Due to the low number of completed jobs and feedback received, one of the few aspects where you can stand out is your response time. Downloading the mobile application of the platform you use and enabling notifications can make it easier to access incoming messages and provide faster responses. Even if you’re busy at the moment, by indicating that you’ve read the message and will respond in detail as soon as possible, you can secure a place of priority in your customer’s mind.

## 5. Follow the House Rules — Do Not Share Any Contact Info Over Freelancer Platforms

The revenue source for freelancer platforms is the commission they take from completed jobs. This commission is deducted from both the seller and the buyer, so your clients may ask you to share your email or phone information to pay less. If you aim for a long-term freelance career, never share your contact information with your clients. Your communication history is scanned by bots to detect such violations. If such a violation is detected, your account may be closed, your balance frozen, and you may not be allowed to work on the same platform again. Therefore, if your client makes such a request, you can inform them that you cannot share your contact information due to platform restrictions. In this case, your client might share their own information with you, which would not cause any problems for you as it would be a violation on their part.

## 6. How to Avoid Toxic Clients?
It is every freelancer’s dream to work with clients who know what they want, can describe their request well, respect your work and effort, and provide meaningful feedback. Unfortunately, all these criteria are found in very few people. Therefore, when taking on a new job, your priority should be to work with clients who have good communication skills rather than focusing solely on the money you will earn. Poor communication can result in lost time, lost money, and stress.

Think twice before starting to work with people who cannot clearly describe their request, send each sentence as a new message instead of sharing their request in a single message, and engage in tight price negotiations! In such cases, the time you planned for the job can double or triple. Since you have made an agreement on the fee at the beginning of the job, the most reasonable method you can apply is to deliver the work as soon as possible and not to work with the same client again.

Usually, clients like these also come up with different requests in addition to the criteria you agreed upon at the beginning of the job. In this case, you should hold onto the initial agreement tightly and inform them that new requests are not included in the initial offer and that you will need to charge an extra service fee for them. Otherwise, they may try to take advantage of your good intentions and expect to get three units of work for the price of one unit.

## 7. Cancellation Issues — How to Avoid Them?

Usually, the cancellation process occurs because either you did not understand your client’s request well or your client could not express themselves clearly. There may be a difference between the output expected by the client and the output obtained. When the problem cannot be resolved because the feedback is insufficient, an agreement to cancel the project is unfortunately reached as a last resort, and the refund process is initiated.

Freelancer platforms also have a dispute resolution process, but they are generally focused on protecting the client rather than you. If you are confident in your delivery and cannot convince your client of it, you can defend yourself and protect the fee you received by applying to the platform’s dispute resolution process. Although this situation has happened to me a few times, instead of getting into an argument, I always chose the refund route. Out of a total of 1200 projects completed on the Fiverr platform, my project cancellation count is 15. This corresponds to a rate of around 1% and is acceptable.

![Fiverr - Total Order Screen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l07wrgo8r4tihruh40zs.png)

If you have doubts before starting a job, you can offer to provide your client with a sample output, and if you are on the same page, you can start the project. Consider the time you will spend on this offer, and if it would require long hours of work, you can inform them that you do not want to take the project.

## 8. Don’t Be Afraid to Ask About Unclear Parts

As a freelancer, there’s nothing more natural than asking questions. It leaves a positive impression on the other party, showing that you’re trying to better understand the job. The main reason for asking many questions is that the person describing the job hasn’t shared the details clearly. So there’s no need to feel bad or hesitant. Clarify all the details by asking questions before taking the job; otherwise, you’ll end up having to ask and learn as you go after you’ve already accepted the job.

## 9. How to Set My Price in the Early Stages and After Gaining Some Experience?
Before creating a profile on a freelance platform, review the services, prices, and delivery times offered by your competitors. Initially, offering similar content at one-third of the price and, if possible, delivering it in a slightly shorter time will increase the chances of someone with few or no reviews getting the job. As you receive more jobs over time thanks to your competitive pricing policy and quick response time, you will reach a certain maturity, and then you can update your prices. You can recognize this maturity when you no longer have much time for yourself during the day and have a busy work routine. For me, this period was around 6 months.

Over three years, I raised my prices three times in total, and for about the past year I have been offering services to my clients at the same price. Initially, my package fees were $45, $90, and $135, ranging from easy to difficult. Later, I updated them to $90, $140, and $240, and with my latest update, the package fees for the services I offer became $140, $240, and $340. A low pricing policy can also attract toxic clients. Raising prices not only brings financial gain but also keeps troublesome clients away from you.

## 10. Be Aware of Your Value — Do Not Make Huge Discounts

If I were to give an approximate ratio, about 20% of the time when you mention your fee, they ask for a discount. Additionally, they may mention that they have more work and that if they work with you on this project, they will choose you for the other projects as well. In the maturity period of my freelance career, I generally do not offer discounts to people I am working with for the first time. If such a request comes up and if we have a good understanding and maintain strong communication, I inform them that I could offer a small discount on the next projects. This usually means around a 10% discount.

In the early stages of my career, I used to feel disappointed when I missed out on jobs because I had quoted a price and refused to bargain. However, now I am much more comfortable. My communication with clients who insist on discounts usually does not go well. In many of the projects I took by compromising, I ended up spending more time than I expected, getting more tired, and losing motivation. If you don’t want to experience such situations, insist on the fee you request and know your worth.

For clients you have been working with for long periods on different projects, you can make small gestures without them asking. This will strengthen your relationship and enable you to have longer-term collaborations.

## 11. Work Smart, Not Hard — Avoid Complex Projects, Try to Pick Simple Tasks and Multiply Them

Try to take on more projects that are relatively simple and can be completed quickly rather than challenging ones that require extensive research due to your lack of experience.

In a different scenario, a client might invite you to a video call because they are too lazy to write their requests in a document. Do not accept every meeting request. Instead, ask for a job description first. If the document you receive catches your interest and you are eager to take on the job, then proceed with the meeting. These unscheduled video calls can disrupt your work pace, so schedule them as late in the day as possible or for the next day so you can plan your day around the meeting.
During the video call, ask your client to also send their requests in writing, or write them down yourself and share them with the client to ensure you are on the same page before starting the work.

## 12. Check Your Rivals from Time to Time, Track Their Price Changes and Workload

Visit the profiles of individuals who do similar work to you at regular intervals, check their pricing, and if less experienced individuals with fewer projects are charging more than you, consider raising your rates to at least similar levels. At certain times of the year, your workload may increase or decrease. If you experience an unusual change, check the workload of other individuals. Freelancer platforms often display the number of active projects a person is working on right on their profile page.

## 13. Keep Track of Your Stats — Know Your Average Workload, Plan Your Day/Month/Year Accordingly

Freelance platforms have statistics pages where you can view your past performance, but this data is often irregular and doesn’t provide a clear overview at a glance. To address this, I’ve created an Excel document where I can track all aspects of my projects. I input specific data each time I take on a new project and complete it. This includes the client’s name, project start date, profits, and net profits. Net profits data is used to determine the amount we earn after deducting the freelance platform’s commission. With this data, I’ve created “Net Monthly Profits,” “Monthly Completed Orders,” and “Net Profits per Order” charts in Excel. This allows me to see past period statistics on a single page in a more organized manner.

This data can show you which periods tend to bring a high workload. If you have taken fewer jobs than usual in a month, you may need to push harder; if you are performing above average, you can relax a bit. It can also serve as a guide for estimating your potential annual earnings.

![Excel Performance Tracking Sheet for Freelancers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qcwcn6lw9nxyt5w6x455.png)

## 14. Write a Cool Explanation Text, Create Eye-Catching Project Pictures, and Don’t Forget to Put Your Portfolio

In your description, briefly describe the work you do, talk about your experience, and highlight your strengths. In your project image, avoid using too much text; instead, try to convey your work with graphic elements. If your competitors have opted for colorful images, consider choosing simpler ones, or if they have chosen simple images, consider adding some color to differentiate yourself and attract attention. If you don’t have much experience in graphic design, I recommend using the Canva platform to prepare your project image.

Make sure to add completed projects to your profile as references. The work you’ve done is a guarantee of the work you’ll do. Your client may not see your references while browsing your profile and may request them from you via private message. Therefore, you should prepare a PDF document for your portfolio. When your client requests your portfolio, you can quickly share the document you prepared earlier.

Below, I am sharing the sample description text and the current project image I use as an example:

![About my gig page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/76ysjhbmfinis6inkrc2.png)

## 15. Title Writing and SEO

In your title, you should use the keywords you want to stand out with in search results.
Instead of writing "I do graphic design jobs," you can increase your visibility in searches by writing "I design custom characters for your game using the Maya program."

When your profile reaches a certain level, you can receive mentorship from the freelance platform itself. On Fiverr, this program is called "Seller Plus." By applying the advice your mentor gives you over a few meetings, you can further increase your visibility.

![Fiverr Mentor Page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e29ftv2wbrzqk56erk6g.png)

Your title doesn't have to consist of a single clean sentence. For example, I noticed that searches for both "pinescript" and "pine script" are quite common, and I couldn't decide whether to use the space in my project title. On my mentor's advice, I wrote both one after the other, and this gave me positive results in search visibility. Here is the title I currently use for my profile:

![My Gig Main Title](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mbgzi6umbzbtdqu7cae5.png)

## 16. Frozen Account Experience on Payoneer — Do Not Make Transactions into Payoneer Other Than Through Freelancer Platforms

Many freelance platforms work with a payment provider called "Payoneer." By opening a Payoneer account, you can link it to your freelance platform and transfer your payments there instantly. When you create your Payoneer account, you are assigned a bank account in the United States.

Now I want to share a bad experience I had. A client I worked with outside of Fiverr sent their payment to my Payoneer account. After seeing that the payment went through smoothly, I used the same method with another client. Months later, I received an email informing me that my account had been frozen and that I needed to provide information about these transactions. Thinking it was a standard procedure, I shared all the requested information.

![Payoneer Email Response](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h9le85ev50bxm93k2egj.png)

About 20 days after sending the documents, I received a negative response requesting the documents again. It might be one of the worst customer service experiences I've ever had: I couldn't get a clear answer as to why my document was not accepted. I prepared an invoice with my client and shared it with Payoneer again. After another 20 days, I received another rejection email, even though I was certain I had provided the requested document correctly and completely. In addition, they requested another document regarding my business relationship with a second client. I contacted that client and prepared a similar invoice. After contacting customer service again and sending the requested documents the same day, they finally accepted the documents at the end of the standard 20-day review period and reactivated my account.

My account was frozen for exactly 3 months. During this time, I continued to work as a freelancer but could not withdraw the money I earned to my bank account, which put a lot of strain on my cash flow and increased my work stress. After this experience, I never again accepted a direct payment from a client to my Payoneer account. My advice: use Payoneer only to withdraw your earnings from freelance platforms, and do not accept an external client's payment into your Payoneer account, or you risk it being frozen!
## 17. Keep Track of Your Work Due Dates — Use Trello for Prioritization, Do Easy Tasks First, and Handle Complex Tasks in the Morning with Your Best Brain Capacity

A freelancer should always be organized and systematic. Failing to deliver on time will have a negative impact on your profile, so prioritize the jobs whose deadlines are approaching. You can use Trello to keep track of your active jobs and prioritize them.

Do the relatively difficult, attention-demanding tasks at the beginning of your work session and the simpler tasks during the rest of the day. When you have multiple tasks, knocking out the easy ones first boosts your motivation and clears your head for the task that requires more time.

## 18. Do Not Check Messages While Working

If you are working on a task, isolate yourself from all distractions as much as possible. Stop checking messages from the freelance platform! By separating your working hours from the time you spend communicating with clients, you can shorten your total daily working time.

## 19. Do Not Deliver Too Early

Even if you told your client you would deliver the project in three days and you finish early, never deliver it at the end of the first day! Keep the finished product aside and share it with your client as the delivery time approaches. Otherwise, the client may think you did a rushed job, and you risk putting them in the mindset of "since he/she said three days and finished early, he/she can also add these extra features."

## 20. How to Reduce Stress — Take Payment After the Task is Done (For Well-Known Customers)

On freelance platforms, when a buyer and seller agree on a job, the buyer must pay before the work begins. The amount is held in the platform's escrow, and once you deliver your work and the buyer approves it, your payment is transferred to your account after the platform's commission is deducted. The platform follows this method to protect both you and the buyer. If you instead choose to be paid after the work is completed, there is no person or institution you can turn to if the payment never arrives.

Despite all this, in some special cases you may still choose to be paid after the work is completed. Complex and difficult projects are often turned down by many freelancers, so the buyer may be willing to pay you an above-average fee. You can accept the job no one else dares to take, accepting the risk of not being paid. If you then find you cannot handle the task, you can explain your reasons and ask the customer to seek help from someone else. This way, you avoid the cancellation process and the negative marks it would otherwise leave on your profile.

To reduce delivery stress, I usually collect payment after the work is completed, especially from customers I have worked with on two or more projects. For first-time customers, it is best to receive payment before starting the work.

## Bonus - What I'm Doing Right Now

Over the last three years, I've been working with traders and analyzing their requests, which inspired me to create the [GetPineScript](https://getpinescript.com/) project.
GetPineScript is a Pine Script code generator designed for traders who need custom indicators or strategy scripts for the TradingView platform. This platform automates the code generation process, making it easy to create scripts with the most requested and commonly used functionalities, saving you time and effort.
kriswaters
1,891,348
AIM Weekly 17 June 2024
17-June-2024 Tim Spann @PaaSDev Milvus - Towhee - Attu - Feder - GPTCache - VectorDB...
0
2024-06-17T14:06:00
https://dev.to/tspannhw/aim-weekly-17-june-2024-3cmp
milvus, vectordatabase, genai, opensource
## 17-June-2024 Tim Spann @PaaSDev

Milvus - Towhee - Attu - Feder - GPTCache - VectorDB Bench

Happy Father's Day To All! Also Happy Flag Day to Those in the United States.

![image](https://github.com/tspannhw/FLiPStackWeekly/assets/18673814/d8fe8770-710a-483e-984c-a70547b48ce0)

### AIM Weekly
### Towhee - Attu - Milvus (Tim-Tam)
### FLaNK - FLiPN

With a name like that I am not sure how I don't add that to my group.

SPANN: Highly-efficient Billion-scale Approximate Nearest Neighborhood Search
https://proceedings.neurips.cc/paper/2021/hash/299dc35e747eb77177d9cea10a802da2-Abstract.html

Congrats to Milvus
https://www.dbta.com/Editorial/Trends-and-Applications/DBTA-100-2024-The-Companies-That-Matter-Most-in-Data-164289.aspx

https://github.com/milvus-io/milvus
https://pebble.is/PaaSDev
https://vimeo.com/flankstack
https://www.youtube.com/@FLaNK-Stack
https://www.threads.net/@tspannhw
https://medium.com/@tspann/subscribe
https://ossinsight.io/analyze/tspannhw

### CODE + COMMUNITY

Please join my meetup group NJ/NYC/Philly/Virtual.
[https://www.meetup.com/unstructured-data-meetup-new-york/](https://www.meetup.com/unstructured-data-meetup-new-york/)

This is Issue #142

#### New Releases

Milvus Release 2.4.4
https://milvus.io/docs/release_notes.md
Release date: May 31, 2024

It includes a critical bug fix, so if you use bulk insert definitely upgrade now. Also some compilation updates for other platforms.

https://github.com/milvus-io/milvus/releases/tag/v2.4.4
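The 2.4 line also pairs nicely with the new Milvus Lite covered in the articles below: the same `MilvusClient` API can run against a plain local file, which makes trying a release painless. A minimal sketch in Python, assuming `pymilvus` 2.4+ with Milvus Lite support installed; the collection and field values are made up for the demo:

```python
from pymilvus import MilvusClient

# Milvus Lite: pass a local file path instead of a server URI.
client = MilvusClient("./milvus_demo.db")

# 8-dim vectors keep the demo readable; real embeddings are much wider.
client.create_collection(collection_name="demo_docs", dimension=8)

client.insert(
    collection_name="demo_docs",
    data=[{"id": i, "vector": [float(i)] * 8, "text": f"doc {i}"} for i in range(3)],
)

# Nearest neighbors to a query vector, returning the stored text field.
hits = client.search(
    collection_name="demo_docs",
    data=[[1.0] * 8],
    limit=2,
    output_fields=["text"],
)
print(hits)
```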
#### Hardware

https://www.seeedstudio.com/BeagleYr-AI-beagleboard-orgr-4-TOPS-AI-Acceleration-powered-by-TI-AM67A.html

#### Upcoming

Summary of the Last Awesome Meetup
https://www.linkedin.com/feed/update/urn:li:activity:7202803256891248640/

#### Cool Stuff

TIMM (Pytorch Image Models)
https://timm.fast.ai/

Ben has gone deep in this article on the Future of Vector Search, linking to some of Milvus & Zilliz's deep descriptions of scalability and highlighting a lot of interesting things going on.
https://gradientflow.substack.com/p/the-future-of-vector-search
See: https://zilliz.com/learn/scaling-vector-databases-to-meet-enterprise-demands?utm_source=tim

#### Articles

There's a lot of cool stuff happening with Milvus and new models, techniques, libraries, and use cases.

https://medium.com/@tspann/not-every-field-is-just-text-numbers-or-vectors-976231e90e4d
https://medium.com/@tspann/unstructured-street-data-in-new-york-8d3cde0a1e5b
https://medium.com/@tspann/tech-week-soft-meetup-debut-june-2024-fc4cdf79342d
https://medium.com/@tspann/shining-some-light-on-the-new-milvus-lite-5a0565eb5dd9
https://medium.com/@zilliz_learn/using-vector-search-to-better-understand-computer-vision-data-08e137df9c6c
https://www.infoq.com/presentations/ai-monopoly/
https://python.plainenglish.io/claude-3-the-king-of-data-extraction-f06ad161aabf
https://towardsdatascience.com/rag-vs-finetuning-which-is-the-best-tool-to-boost-your-llm-application-94654b1eaba7
https://zilliz.com/blog/exploring-multimodal-embeddings-with-fiftyone-and-milvus
https://medium.com/aiguys/yolov9-new-object-detection-king-6fc97b93dc9a
https://www.linkedin.com/pulse/nifi-retrieval-augmented-generation-chris-gambino-gfsec/?trackingId=OliHLynzQEKic3SgAznO5w%3D%3D
https://ml.dssconf.pl/
https://medium.com/@zilliz_learn/milvus-reference-architectures-e30a27c9f3c2
https://docs.openlit.io/latest/integrations/milvus
https://builtin.com/articles/real-time-data-ai
https://medium.com/@zilliz_learn/image-embeddings-for-enhanced-image-search-an-in-depth-explainer-6831859bedf0
https://medium.com/@zilliz_learn/semantic-search-with-milvus-and-openai-32573de80307
https://medium.com/@zilliz_learn/how-to-detect-and-correct-logical-fallacies-from-genai-models-3e4a9852d2ef
https://gradientflow.substack.com/p/the-future-of-vector-search
https://medium.com/vector-database/introducing-pymilvus-integration-with-embedding-models-a82f10d516ea
https://medium.com/@zilliz_learn/local-agentic-rag-with-langgraph-and-llama-3-6c962979821f
https://blogs.nvidia.com/blog/nemotron-4-synthetic-data-generation-llm-training/?utm_source=tim
https://medium.com/walmartglobaltech/reliably-processing-trillions-of-kafka-messages-per-day-23494f553ef9
https://jack-vanlightly.com/blog/2024/6/10/a-cost-analysis-of-replication-vs-s3-express-one-zone-in-transactional-data-systems?utm_source=tim

DSPy - Getting interesting
https://medium.com/@sandyeep70/the-decline-of-traditional-prompt-engineering-and-the-rise-of-dspy-b27b9a5adc45

Why How + Milvus Lite
https://medium.com/enterprise-rag/kickstart-your-genai-applications-with-milvus-lite-and-whyhow-ais-open-source-rule-based-retrieval-70873c7576f1

#### Videos

Using JSON Fields with Milvus
https://www.youtube.com/watch?v=HP5L3Hr6Mt8

Street Cams + Milvus
https://medium.com/@tspann/unstructured-street-data-in-new-york-8d3cde0a1e5b

Conf42: ML: Emerging GenAI
https://youtu.be/ktVVdJB306U?feature=shared

Generative AI with Milvus
https://www.youtube.com/watch?v=IfWIzKsoHnA

SF Unstructured Meetup - 03 June 2024
https://www.youtube.com/watch?v=UobR3czXqSo&ab_channel=Zilliz

Fueling AI with Airbyte
https://zilliz.com/event/fueling-ai-with-great-data-airbyte?utm_campaign=2024-06-13_webinar_Airbyte-fueling-ai-with-great-data_zilliz&utm_medium=tim

Milvus Webinar
https://www.youtube.com/watch?v=IowBdkeKi_M

AI Generated Videos
https://youtu.be/5tJDBSDrKLQ
https://www.youtube.com/watch?v=YNh-WNFLe98

Voyage AI Embeddings and Rerankers for Search and RAG
https://medium.com/@zilliz_learn/voyage-ai-embeddings-and-rerankers-for-search-and-rag-587d9bfff877

Evaluate RAG Apps
https://medium.com/@zilliz_learn/how-to-evaluate-rag-applications-e2936c1275f9

#### Slides
https://www.slideshare.net/slideshow/generative-ai-on-enterprise-cloud-with-nifi-and-milvus/267678399
https://www.slideshare.net/slideshow/06-04-2024-nyc-tech-week-discussion-on-vector-databases-unstructured-data-and-ai/269523214
https://ml.dssconf.pl/user.html#!/lecture/DSSML24-041a/rate
https://www.slideshare.net/slideshow/dssml24_tspann_codelessgenerativeaipipelines/269634571
https://www.slideshare.net/slideshow/06-12-2024-budapestdataforum-buildingreal-timepipelineswithflank-aim/269645846

#### Events

June 18, 2024: Princeton Meetup
https://www.meetup.com/applied-generative-artificial-intelligence-applications/events/301336510/
https://www.startupgrind.com/events/details/startup-grind-princeton-presents-genai-gathering/

June 20, 2024: AI Camp Meetup. NYC.
https://www.meetup.com/unstructured-data-meetup-new-york/events/301383476/

Nov 5-7, 10-12, 2024: CloudX. Online/Santa Clara.
https://www.developerweek.com/cloudx/

Nov 19, 2024: XtremePython. Online.
https://xtremepython.dev/2024/

#### Code

* https://github.com/tspannhw/FLaNK-python-processors
* https://github.com/tspannhw/AIM-MilvusLite
* https://github.com/tspannhw/AIM-NYCStreetCams
* https://github.com/stephen37/Milvus_demo/tree/main/llama_index_demos
* https://github.com/tspannhw/AIM-MotorVehicleCollisions

#### Models

* https://huggingface.co/mistralai/Codestral-22B-v0.1
* https://huggingface.co/mixedbread-ai/mxbai-embed-large-v1
* https://www.mixedbread.ai/blog/mxbai-embed-large-v1
* https://huggingface.co/nvidia/Nemotron-4-340B-Instruct
* https://huggingface.co/nvidia/Nemotron-4-340B-Reward
* https://huggingface.co/spaces/allenai/reward-bench

#### Tools

* https://github.com/verazuo/jailbreak_llms
* https://github.com/open-webui/open-webui
* https://datafusion.apache.org/user-guide/example-usage.html
* https://github.com/timeseries/qstudio
* https://github.com/redotvideo/revideo
* https://github.com/coqui-ai/TTS
* https://flameshot.org/
* https://github.com/dflib/dflib
* https://valentinaalto.medium.com/rag-with-phi-3-medium-as-a-model-as-a-service-from-azure-model-catalog-62e1411948f3
* https://github.com/iyaja/llama-fs
* https://github.com/airbnb/chronon
* https://huggingface.co/stabilityai/stable-audio-open-1.0
* https://github.com/keylase/nvidia-patch
* https://github.com/livekit/livekit
* https://github.com/1943time/bluestone
* https://litellm.vercel.app/docs/simple_proxy
* https://github.com/cohere-ai/cohere-toolkit
* https://github.com/lavague-ai/LaVague
* https://contrib.rocks/preview?repo=milvus-io%2Fmilvus
* https://github.com/mrHeavenli/akkufetch
* https://github.com/visual-layer/fastdup
* https://github.com/fastfetch-cli/fastfetch
* https://github.com/allenai/PlaSma
* https://github.com/letmutex/htmd
* https://github.com/rossant/collatepdf
* https://github.com/milvus-io/milvus-haystack

#### Cool

Let's do the Time Sync Again
https://github.com/milvus-io/milvus/blob/master/docs/design_docs/20211215-milvus_timesync.md

&copy; 2020-2024 Tim Spann
https://www.youtube.com/@FLaNK-Stack

FLaNK-AIM with LLAMA 3

~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~

🎥 Playlist: Unstructured Data Meetup
[https://www.meetup.com/unstructured-data-bay-area/events/](https://www.meetup.com/unstructured-data-bay-area/events/)

🖥️ Website: [https://www.youtube.com/@MilvusVectorDatabase/videos](https://www.youtube.com/@MilvusVectorDatabase/videos)

X Twitter: [https://x.com/milvusio](https://x.com/milvusio)

🔗 LinkedIn: [https://www.linkedin.com/company/zilliz/](https://www.linkedin.com/company/zilliz/)

😺 GitHub: [https://github.com/milvus-io/milvus](https://github.com/milvus-io/milvus)

🦾 Discord invite: [https://discord.com/invite/FjCMmaJng6](https://discord.com/invite/FjCMmaJng6)
tspannhw
1,891,347
Best New Online Casinos in Portugal [2024]
Ease of use and accessibility are fundamental characteristics of the new online casinos in...
0
2024-06-17T14:02:39
https://dev.to/miguel_costa_be10627d781c/melhores-novos-casinos-online-em-portugal-2024-26gb
Ease of use and accessibility are fundamental characteristics of the new online casinos in Portugal. These sites are designed to be intuitive, allowing players of all levels to navigate easily and find their preferred games and services without difficulty. Visit [https://casino-portugal.com.pt/casinos/novos/](https://casino-portugal.com.pt/casinos/novos/) to discover casinos that combine attractive design with exceptional functionality, offering a friendly interface that makes the gaming experience simple and enjoyable. In addition, many of these casinos offer tutorials and guides to help new players get used to the gaming environment, ensuring that everyone can enjoy the experience from the first click.
miguel_costa_be10627d781c
1,871,888
Tech radar: Keep an eye on the technology landscape
If you've been working in technology for a while, you've probably noticed that there are so many...
23,317
2024-06-17T14:00:00
https://dev.to/jdxlabs/tech-radar-keep-an-eye-on-the-technology-landscape-2pnd
methodology, cloud, learning, community
If you've been working in technology for a while, you've probably noticed that there are many technologies out there and that things are constantly evolving. My goal here is to give you some keys so you don't miss essential information and can find your way through the technological jungle, while keeping the ability to completely disconnect once your working time is over.

# Shared tech radars

Some companies share the tech radar they created, and you can consult them whenever you want:

- [ThoughtWorks](https://www.thoughtworks.com/radar) - A global technology company
- [Zalando Engineering](https://opensource.zalando.com/tech-radar/) - Company specializing in the sale of shoes and clothing
- [Padok](https://www.padok.fr/en/tech-radar) - DevSecOps experts
- [Devoteam](https://techradar.devoteam.com/) - Technology consulting agency
- [Ippon](https://blog.ippon.fr/tech-radar/) - Another technology consulting agency

![Tech Radar](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kuu6aaxjekm06ytp2ilt.png)

The main categories for a tech radar are:

- **Adopt**: Approaches in which we have great confidence
- **Trial**: Approaches we've seen work successfully
- **Assess**: Approaches that are promising and have clear potential added value
- **Hold**: Approaches that are not recommended for use on new projects

We can also mention Digital.ai, which maintains a [Periodic table of DevSecOps tools](https://digital.ai/learn/devsecops-periodic-table/), letting you explore technologies around DevOps in an original and practical way.

![Periodic Table](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fkn8dy731tyy4535yhz9.png)

# Build your own tech radar

For targeted, one-off needs, you or your company can create an Excel sheet to help you make choices, or adopt the tech radar approach. Some companies share their method and tools for building your own tech radar, like [ThoughtWorks](https://www.thoughtworks.com/radar/byor) and [Zalando](https://github.com/zalando/tech-radar); the sketch below shows the kind of data such a radar is built from. Be careful though: if you build your own knowledge base, keeping it up to date will be a real effort, given the speed at which technologies evolve today.
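Concretely, a radar is just a list of entries, each placed in a quadrant (a category) and a ring (an adoption level). A minimal Python sketch, with an illustrative schema and sample placements rather than Zalando's exact format:

```python
from collections import defaultdict

# Each entry: (name, quadrant, ring). Schema and sample data are illustrative only.
ENTRIES = [
    ("Kubernetes", "platforms", "adopt"),
    ("Terraform", "tools", "adopt"),
    ("Rust", "languages", "trial"),
    ("Dagger", "tools", "assess"),
    ("Jenkins", "tools", "hold"),
]

def by_ring(entries):
    """Group radar entries by adoption ring for a quick textual view."""
    rings = defaultdict(list)
    for name, quadrant, ring in entries:
        rings[ring].append(f"{name} ({quadrant})")
    return rings

grouped = by_ring(ENTRIES)
for ring in ("adopt", "trial", "assess", "hold"):
    print(f"{ring.upper():>6}: {', '.join(grouped[ring])}")
```

From there, a rendering tool like Zalando's takes the same kind of entry list and draws the familiar four-quadrant chart.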
# Software dictionaries and comparators

The common use case: you hear about a new technology and very quickly need to understand what it does, what the alternatives are, and where it fits into the big picture. Luckily, there are community platforms that reference software and help you find your way around. Here is a selection:

- [Alternativeto](https://alternativeto.net/) - References all types of software on all OSes: Windows, Mac, Linux, Android, etc.
- [Stackshare](https://stackshare.io/) - Aimed at companies building their technical stack
- [Trust Radius](https://www.trustradius.com/) - Software database with comparisons and reviews
- [Product Hunt](https://www.producthunt.com/) - To discover and share new tech products
- [Cloud Native Landscape](https://landscape.cncf.io/) - Tools referenced by the CNCF, concerning Kubernetes and containers
- [Future Tools](https://www.futuretools.io/) - A curated list of new AI tools

Obviously your favorite search engine or AI assistant is also a tool of choice for discovering the technologies you want to explore. One tip for finding alternatives to a product is to type `<product> vs ` into Google: the autocomplete feature will do the rest.

# Trends

To help you make decisions, [Google Trends](https://trends.google.com) shows you interest in a product over time, compared to others. It is very useful to have in your toolbox.

![Google Trends](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dl0okojmx6kcnx4usb8j.png)

We can also cite [Gartner](https://www.gartner.com), a business decision support organization that publishes studies on the most recent hypes; part of their material is available for free.

# Obsolescence management

Once you have adopted the tools in your stack, you must monitor the life of the projects in order to always keep them up to date, with regularly scheduled updates. The [End-of-life](https://endoflife.date/) website is made for this:

![End of life](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oaovnrgsqefi7pld33x1.png)
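endoflife.date also exposes its data as JSON, so you can script your checks instead of visiting the site. A small sketch using only Python's standard library, assuming the public endpoint `https://endoflife.date/api/<product>.json` (check the site's API docs before relying on the exact fields):

```python
import json
from urllib.request import urlopen

def release_cycles(product: str):
    """Fetch end-of-life data for a product, e.g. 'python' or 'ubuntu'."""
    url = f"https://endoflife.date/api/{product}.json"
    with urlopen(url) as resp:
        return json.load(resp)

for cycle in release_cycles("python")[:3]:
    # Each cycle reports its version line and its end-of-life date (or False).
    print(cycle["cycle"], "EOL:", cycle["eol"])
```

Wired into a scheduled CI job, a check like this can warn you before a runtime in your stack goes out of support.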
# Learning roadmaps

If you want to master all the technologies that make up your profession, there is the [roadmap.sh](http://roadmap.sh/) site, which gives you learning paths for particular areas (DevOps, Kubernetes, Python, etc.).

![Roadmap](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7dp2zju96c7ry17ehvvp.png)

# Tech blogs & feed aggregation

Some people enjoy sharing their knowledge of technical areas, and blogs remain an accessible and effective medium. [Feedly](https://feedly.com) or [Inoreader](https://www.inoreader.com/) will let you follow these websites' RSS feeds and consult them at your leisure, so you can track the specific areas that interest you. Also think about the [Dev.to](http://dev.to/) platform, which makes it easy for developers to share on their topics.

Other popular media at the moment are [Podcasts](https://www.apple.com/apple-podcasts/) and [YouTube](https://www.youtube.com/), which have a good base of quality content and the advantage of being listenable on your commute or during a running session.

# Newsletters

Newsletters remain a good option for keeping up to date without searching for information yourself. The [TLDR Newsletter](https://tldr.tech/newsletters) is an interesting option if you don't have a lot of time: it summarizes new developments in general, or in particular areas (DevOps, AI, etc.), so you get the essentials without getting lost.

# Latest trends and interactions

If you have a little more time to spare, you can connect on social networks. [X (Twitter)](https://x.com) remains the platform where you get information first, with a significant user base and its own particular brand of fun. It is also a good medium for following tech events and conferences. You can prefer other alternatives for ideological reasons, like [Mastodon](https://joinmastodon.org/) or [Bluesky](https://bsky.app/), but you should know that there is much less exposure there at the moment, even though they attract important people in tech. [LinkedIn](https://www.linkedin.com) is also used to share tech news these days. [Reddit](https://www.reddit.com/) is a platform with a strong user base that provides exclusive news and technical knowledge. To discuss specific technologies, [Slack](https://slack.com/) or [Discord](https://discord.com/) channels are common and effective ways to interact with the community.

# To conclude

We've seen tools and methodologies for keeping tabs on tech news and maintaining an overview of the technology landscape so you know how to navigate it. It's now up to you to take advantage of them to stay current while maintaining a good work-life balance. Enjoy your technological exploration: it is an exciting journey and an endless source of inspiration.
jdxlabs
1,891,344
SSL Networking
What is SSL? SSL (Secure Sockets Layer) is a standard security protocol for establishing encrypted...
0
2024-06-17T13:59:32
https://dev.to/dariusc16/ssl-networking-15eh
**What is SSL?**

SSL (Secure Sockets Layer) is a standard security protocol for establishing encrypted links between a web server and a browser in online communication. (In modern practice SSL has been superseded by its successor, TLS, but the term "SSL" is still widely used for both.) It ensures that all data transmitted between the web server and browser remains encrypted and secure. SSL uses encryption algorithms to scramble data in transit, preventing unauthorized access and tampering. This encryption process combines symmetric and asymmetric encryption methods to secure data integrity and privacy.

Sources for more information:
- SSL.com - What is SSL/TLS?
- GlobalSign - Understanding SSL/TLS

**What is SSL used for?**

SSL is primarily used to secure sensitive data transmission over the internet, including login credentials, credit card information, and personal data. It ensures that data exchanged between users and websites/services cannot be intercepted by malicious entities. SSL is essential for e-commerce websites, online banking, email servers, and any application where secure data transfer is critical.

Sources for more information:
- Symantec - What is SSL and what are SSL Certificates?

**Process of SSL**

The SSL handshake process is crucial for establishing a secure and encrypted connection between a client (such as a web browser) and a server. It involves several key steps that together ensure data confidentiality, integrity, and authentication.

![](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4och0x9ynnnksysw6day.png)

First, the handshake begins with the **Client Hello** phase. The client initiates the connection by sending a message to the server specifying the SSL/TLS versions it supports, a list of cipher suites (encryption algorithms), and a random number.

Upon receiving the Client Hello message, the **Server Hello** phase follows. The server selects the highest SSL/TLS protocol version and cipher suite that both sides support, then sends back its own message, including its digital certificate, which contains its public key and other information the client needs to authenticate the server's identity.

Once the client receives the server's certificate, it verifies its authenticity during the **Certificate Validation** phase. This involves checking that the certificate was issued by a trusted certificate authority (CA), has not expired, and has not been revoked. If validation succeeds, the client continues the handshake.

Next comes the **Key Exchange** phase. The client generates a random pre-master secret and encrypts it with the server's public key from the certificate; only the server can decrypt it with its private key. Both sides then independently derive session keys from the pre-master secret, to be used for symmetric encryption of the data transmitted during the session.

Finally, with the session keys established, the **Secure Data Exchange** phase begins. All subsequent data between the client and server is encrypted symmetrically with the session keys, keeping the data exchanged during the session confidential and intact.

**Sources for more information:**
- SSL2BUY - History of SSL
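In most languages, the whole handshake described above is one library call away. A brief sketch using Python's standard library, which performs the handshake (including certificate validation against the system's trusted CAs) and reports what was negotiated; `example.com` is just a placeholder host:

```python
import socket
import ssl

# A default context enables certificate verification and hostname checking.
ctx = ssl.create_default_context()

with socket.create_connection(("example.com", 443)) as sock:
    # wrap_socket runs the full handshake: hello messages, certificate
    # validation, key exchange, and session-key derivation.
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version())       # negotiated protocol, e.g. 'TLSv1.3'
        print(tls.cipher())        # negotiated cipher suite
        cert = tls.getpeercert()   # the validated server certificate
        print(cert["subject"])
```

If the certificate is invalid, expired, or issued for another hostname, the handshake raises an `ssl.SSLError` before any application data is sent, which is exactly the protection the Certificate Validation phase provides.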
**Advantages and Disadvantages of SSL**

**Advantages:**

- _Data Encryption:_ Ensures data privacy and integrity.
- _Trust and Authentication:_ Verifies the identity of websites and servers.
- _Protection from Attacks:_ Mitigates the risks of data interception and tampering.
- _SEO Benefits:_ Google considers SSL/TLS encryption a ranking factor.

**Disadvantages:**

- _Performance Overhead:_ SSL/TLS encryption can slightly slow down data transfer speeds.
- _Cost:_ SSL certificates can incur costs for purchase and renewal.
- _Configuration Complexity:_ Implementing and maintaining SSL/TLS configurations can be complex.

Sources for more information:
- Digicert - Advantages and Disadvantages of SSL

**Additional Resources**

For further reading and an in-depth understanding of SSL/TLS and its implementation, consider these resources:

- YouTube - SSL Explained by Thycotic (video explanation of SSL/TLS)
- Mozilla - SSL Configuration Generator (tool for generating SSL configurations)
- OWASP - Transport Layer Protection Cheat Sheet (guidelines for secure SSL/TLS implementation)
dariusc16
1,891,342
Exploring the World of Ada and SPARK
In the realm of software development, particularly in safety-critical systems, reliability and...
0
2024-06-17T13:58:20
https://dev.to/ajgamer/exploring-the-world-of-ada-and-spark-33o3
beginners, programming, security
In the realm of software development, particularly in safety-critical systems, reliability and security are paramount. Languages like Ada and its subset SPARK have been designed with these goals in mind, offering robust tools for developers aiming to build dependable and error-free applications. This post delves into the key features, benefits, and real-world applications of Ada and SPARK, highlighting their significance in the software industry.

## The History of Ada

Ada is a statically typed, high-level programming language developed in the early 1980s by the U.S. Department of Defense (DoD). Named after Ada Lovelace, who is often regarded as the first computer programmer, Ada was created in response to the DoD's need for a reliable and standardized programming language to be used in mission-critical systems.

**Origins and Development**

In the 1970s, the DoD faced challenges with software maintenance and interoperability due to the use of over 450 different programming languages and dialects. To address these issues, the DoD formed the "High Order Language Working Group" (HOLWG), which set out to develop a single, unified programming language. The initiative, known as the Ada project, aimed to create a language that would improve software reliability, maintainability, and portability.

After a rigorous selection process, the design proposal from Jean Ichbiah's team at CII Honeywell Bull was chosen. This proposal eventually evolved into the Ada programming language, named in honor of Ada Lovelace. The first version, Ada 83, was standardized by the American National Standards Institute (ANSI) in 1983 and later by the International Organization for Standardization (ISO).

## Evolution of Ada

Ada has undergone several significant revisions since its inception:

- **Ada 83**: The initial version focused on strong typing, modularity, concurrency, and exception handling, making it suitable for real-time and embedded systems.
- **Ada 95**: This major update introduced support for object-oriented programming, hierarchical libraries, and protected objects for synchronization, enhancing the language's flexibility and robustness.
- **Ada 2005**: Further enhancements included support for real-time systems, improved interoperability with other languages, and more powerful tasking features.
- **Ada 2012**: This version introduced contract-based programming, adding preconditions, postconditions, and invariants to improve software correctness and reliability. It also enhanced support for multicore programming.
- **Ada 202X**: The ongoing development of Ada includes further refinements to support modern programming paradigms and keep the language relevant in contemporary software development.
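The contract-based programming that arrived in Ada 2012 amounts to machine-checked pre- and postconditions attached to subprograms: Ada checks them at run time (when assertions are enabled), and SPARK can discharge them statically with a prover. As a rough cross-language illustration only, here is what such a contract checks, sketched in Python with a hypothetical `contract` helper rather than real Ada syntax:

```python
import functools

def contract(pre=None, post=None):
    """Hypothetical helper mimicking an Ada 2012 Pre/Post aspect pair."""
    def wrap(fn):
        @functools.wraps(fn)
        def checked(*args, **kwargs):
            if pre is not None:
                assert pre(*args, **kwargs), f"precondition of {fn.__name__} violated"
            result = fn(*args, **kwargs)
            if post is not None:
                assert post(result, *args, **kwargs), f"postcondition of {fn.__name__} violated"
            return result
        return checked
    return wrap

@contract(pre=lambda x: x >= 0,
          post=lambda r, x: r * r <= x < (r + 1) * (r + 1))
def isqrt(x: int) -> int:
    # Integer square root; the contract states exactly what "correct" means.
    r = 0
    while (r + 1) * (r + 1) <= x:
        r += 1
    return r

print(isqrt(10))  # 3; violating the contract would raise an AssertionError
```

In real Ada these conditions would be written as `Pre` and `Post` aspects on the subprogram declaration, and in SPARK the prover attempts to show they can never fail at run time.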
**Key Features of Ada**

- **Strong Typing**: Ada's strong typing system prevents type errors, ensuring that variables are used consistently according to their defined types. This reduces runtime errors and enhances code reliability.
- **Modularity**: Ada supports modular programming through packages, allowing developers to encapsulate data and procedures. This promotes code reuse and maintainability.
- **Concurrency**: Ada provides built-in support for concurrent programming through tasks, protected objects, and the real-time systems annex, making it suitable for real-time and parallel applications.
- **Exception Handling**: Ada's robust exception handling mechanisms allow developers to manage unexpected conditions gracefully, improving the robustness of applications.
- **Safety and Security**: Ada's design includes features for preventing common programming errors, such as buffer overflows and invalid memory accesses, making it a secure choice for critical systems.

**What is SPARK?**

SPARK is a formally defined subset of Ada, designed specifically for high-assurance systems where correctness and security are critical. SPARK eliminates ambiguities and non-determinism, enabling formal verification of software properties. It is used in domains where failure is not an option, such as aerospace, defense, and medical devices.

**Key Features of SPARK**

- **Formal Verification**: SPARK enables formal proof of code correctness, allowing developers to mathematically verify that their programs meet specified requirements and are free from certain classes of errors.
- **Absence of Run-Time Errors**: SPARK ensures the absence of run-time errors such as division by zero, array bounds violations, and null pointer dereferences through static analysis and proof techniques.
- **Information Flow Analysis**: SPARK provides tools for analyzing information flow within a program, ensuring that data is used appropriately and securely, which is crucial for maintaining confidentiality and integrity.
- **Deterministic Execution**: SPARK enforces deterministic execution, which is essential for systems requiring predictable behavior, such as avionics and control systems.

**Benefits of Using Ada and SPARK**

- **Increased Reliability**: The strong typing, modularity, and concurrency features of Ada, combined with SPARK's formal verification, significantly enhance the reliability of software systems.
- **Enhanced Security**: Ada and SPARK's emphasis on preventing common programming errors and ensuring correct information flow makes them ideal for developing secure applications.
- **Reduced Development Costs**: By catching errors early in the development process through static analysis and formal verification, Ada and SPARK help reduce the cost and effort associated with debugging and testing.
- **Compliance with Standards**: Ada and SPARK are often used in industries with stringent safety and security standards, such as DO-178C for avionics software and IEC 61508 for industrial control systems, aiding compliance efforts.

**Real-World Applications**

- **Aerospace and Defense**: Ada and SPARK are extensively used in aerospace and defense applications, including flight control systems, missile guidance systems, and satellite software, where reliability and safety are critical.
- **Railway Systems**: Ada is employed in railway signaling and control systems to ensure the safe and efficient operation of trains.
- **Medical Devices**: The medical industry leverages SPARK for developing life-critical devices, such as infusion pumps and pacemakers, where software correctness is crucial.
- **Automotive Industry**: Ada is used in the development of automotive control systems, including engine control units and advanced driver-assistance systems (ADAS), to ensure safety and reliability.

**Conclusion**

Ada and SPARK represent powerful tools in the arsenal of software developers working on safety-critical and high-assurance systems. Their strong emphasis on reliability, security, and formal verification makes them indispensable in industries where software failures can have catastrophic consequences. By embracing Ada and SPARK, developers can build robust, error-free applications that stand the test of time and meet the highest standards of safety and security.
Whether you're working in aerospace, defense, medical devices, or any other domain requiring dependable software, Ada and SPARK offer the features and assurance needed to deliver high-quality solutions.

For more in-depth research into Ada and SPARK, here are the resources I used:

1. [Official Ada Documentation](https://www.adacore.com/documentation)
2. [About Ada](https://www.adacore.com/about-ada)
3. [Introduction to Ada](https://learn.adacore.com/courses/intro-to-ada/index.html)
4. [SPARK Overview](https://learn.adacore.com/courses/intro-to-spark/chapters/01_Overview.html)
5. [Introduction to SPARK](https://learn.adacore.com/courses/intro-to-spark/index.html)
6. [Memory Safety in Ada and SPARK](https://blog.adacore.com/memory-safety-in-ada-and-spark-through-language-features-and-tool-support)
7. [Avionics | AdaCore](https://www.adacore.com/industries/avionics)
ajgamer
1,891,341
I Found a way to Automate 2FA and TOTP.
Hello readers! After a long time, I am back with a very interesting topic for automation...
0
2024-06-17T13:56:05
https://dev.to/gokul2172001/i-found-a-way-to-automate-2fa-and-totp-38pe
beeceptors, automation, testing, 2fa
Hello readers! After a long time, I am back with a very interesting topic for automation testers.

I recently saw a post on LinkedIn about Beeceptor. The post described a very useful way to use the Beeceptor API to overcome the difficulty of automating 2FA (two-factor authentication) and TOTP (time-based one-time passwords).

Most big applications, such as GitHub, Zerodha, Instagram, even Google, and many more, use 2FA and TOTP at login for security. For automation testers, 2FA and TOTP are a nightmare. To get around the problem, some companies skip the authentication with a constant code, or disable the authentication step on the login page.

Recently, through that LinkedIn post, I learned about a tool called Beeceptor. After reading it, along with some other blogs and sources, I found that we can mock two-factor authentication by using a mock API to simulate the various authentication APIs. We can create our own endpoints with different responses and get the results needed to write automation scripts covering all the possible scenarios and use cases.

Using the website https://beeceptor.com/, we can create our own mock servers and endpoints with specific rules, secrets, and authentication. Time-based one-time passwords (TOTP) are often used in 2FA, and mock APIs can generate mock TOTP codes to test the verification flow within an application. Using mock APIs also helps us isolate 2FA and TOTP testing from production environments, reducing the risks associated with live testing. Services like Beeceptor that provide mock-API capabilities are very helpful for making an application's tests more robust and reliable before deployment.

These were the things I learned over the weekend to improve my automation testing skills. In the near future I'll write another blog with more detailed information and a practical implementation. Thank you for your time reading my blog. Until next time, best wishes to all.
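For anyone who wants a head start before that next post: a TOTP code is just an HMAC over a 30-second time counter, so a test that knows the shared secret can compute the expected code itself and compare it with what the application (or a mock endpoint) returns. A minimal RFC 6238 sketch in Python using only the standard library; the base32 secret below is a made-up demo value:

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32.upper())
    counter = int(time.time()) // interval          # which 30-second window we are in
    msg = struct.pack(">Q", counter)                # counter as 8 big-endian bytes
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # prints the current 6-digit code for the demo secret
```

Libraries such as pyotp wrap the same logic if you prefer not to hand-roll it, and a mock endpoint seeded with the same secret will produce codes this function can verify.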
gokul2172001
1,891,340
Where Do I Find WPS Pin on HP Printer
Finding the WPS PIN on Your HP Printer Connecting your HP printer to a wireless network...
0
2024-06-17T13:53:38
https://dev.to/printerhelp/where-do-i-find-wps-pin-on-hp-printer-l76
beginners, printer, wifi, hp
## Finding the WPS PIN on Your HP Printer

Connecting your HP printer to a wireless network using the Wi-Fi Protected Setup (WPS) PIN method is a secure and convenient way to ensure a seamless printing experience. The WPS PIN is a unique code generated by your printer, allowing you to connect to your Wi-Fi network without manually entering a password. Here's a quick guide on how to find the WPS PIN on your HP printer.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/03d8jm9650vbx8yabqt2.jpg)

## What is the WPS PIN?

The WPS PIN (Wi-Fi Protected Setup Personal Identification Number) is an eight-digit code used to establish a secure connection between your HP printer and your wireless network. This method is particularly useful for simplifying the setup process and enhancing security.

## How to Find the WPS PIN on Your HP Printer

**Using the Printer Control Panel**

1. Power On the Printer: Ensure your printer is powered on and ready.
2. Access Network Settings: Navigate to the "Network" or "Wireless Settings" menu on the printer's control panel. This is typically found under "Setup" or "Settings."
3. Select WPS Option: Look for the "Wi-Fi Protected Setup" option and select it.
4. Generate WPS PIN: Choose the "WPS PIN" option. The printer will display the WPS PIN on the screen.

**Using the HP Smart App**

1. Download and Install HP Smart: Ensure you have the HP Smart app on your smartphone or tablet.
2. Connect to the Printer: Open the app and connect to your HP printer.
3. Access Printer Settings: Navigate to the printer settings within the app.
4. Find the WPS PIN: Look for the network settings or Wi-Fi settings section to locate the WPS PIN option.

## Using the WPS PIN to Connect

1. Access Router Settings: Go to your Wi-Fi router's settings.
2. Find WPS Setup: Locate the WPS setup section.
3. Enter the WPS PIN: Input the WPS PIN generated by your printer.
4. Complete the Connection: Follow any additional prompts to finalize the connection.

## Conclusion

The WPS PIN method offers a quick and secure way to connect your HP printer to a wireless network. By following the steps outlined above, you can easily locate the WPS PIN and establish a reliable connection, ensuring that your printing tasks proceed smoothly and efficiently.
printerhelp
1,891,339
What Is Staging in Software Development? | Guide for Beginners
I realized how important the staging environment is. It is essentially a copy of the live site and...
0
2024-06-17T13:52:58
https://dev.to/igor_ag_aaa2341e64b1f4cb4/what-is-staging-in-software-development-3hfo
softwaredevelopment, beginners, development
I realized how important the staging environment is. It is essentially a copy of the live site and serves as the last checkpoint before making changes to the live site. By mirroring the live environment, it allows us to test new changes meticulously before they reach the public. This step isn't mandatory, but for large or intricate projects, it's a game-changer.

## What is Staging in Software Development?

Staging in software development refers to a testing environment that closely replicates the production environment where the final product will be deployed. It is a crucial step in the deployment pipeline, allowing developers and testers to validate changes, identify issues, and ensure that new code works as expected before it goes live.

## The Benefits of a Staging Environment

Testing new updates in a staging environment significantly lowers the risk of errors or issues impacting users, leading to happier users and more consistent uptime for the website. Here are some key benefits:

- **Risk Mitigation**: By using a staging environment, you can test new features and updates without affecting the live site. This helps identify and fix issues before they reach end users.
- **Improved User Experience**: Ensuring that updates are bug-free and functioning correctly before deployment results in a smoother and more reliable user experience.
- **Increased Confidence in Deployments**: Developers and stakeholders can have greater confidence that updates will perform as expected, reducing the anxiety and uncertainty associated with deploying changes.
- **Performance Testing**: A staging environment allows for thorough performance testing, ensuring that new features do not degrade the website's performance.
- **Security Validation**: Security vulnerabilities can be identified and addressed in a staging environment before they are exposed to the public, enhancing the overall security of the website.
- **Realistic Testing Scenarios**: Staging environments can closely mimic the production environment, providing a more accurate testing ground for updates and changes.
- **User Feedback**: Early testing in a staging environment allows for feedback from a select group of users, helping to refine features and improve overall quality before a full release.

## Different Environments in Web Development

In web development, having multiple environments is often essential. It allows developers and testers to work independently and simultaneously without affecting the live site. Typically, changes are tested in various stages before being deployed live.

## Local Environment

I start with a local environment, which is offline and runs on my local machine or server. This setup is cost-effective and convenient, as it doesn't rely on internet connectivity, allowing me to work from anywhere. Working in a local environment also means I can experiment freely without the risk of affecting other environments or the live site. This is particularly useful for trying out new tools, libraries, or frameworks. It also provides a safe space to debug and resolve issues at an early stage, ensuring a smoother development process. Version control systems like Git are often integrated here to keep track of changes.
## Development Environment

Next is the development environment, my sandbox for testing new features and changes. Here, anything goes, and it can sometimes overlap with the local environment, depending on the workflow. The development environment is shared among the development team, allowing collaboration and code review. It typically includes a more comprehensive setup, with access to databases and other resources that mimic the live environment. Continuous integration (CI) tools are often used here to automate testing and deployment processes. This environment is crucial for catching errors and bugs early in the development cycle, providing a platform for extensive testing and refinement.

## Staging Environment

The staging environment is the penultimate step, ensuring all changes are functioning correctly before they go live. This environment mirrors the live site, with the not-yet-released updates applied on top, providing a near-real scenario for final testing and quality assurance (QA). For agencies, this is often where the client gets to see the final project and give approval before launch. The staging environment is designed to simulate the production environment as closely as possible, allowing for thorough testing of performance, security, and usability. It's also an opportunity to conduct user acceptance testing (UAT) and gather feedback from stakeholders. This stage helps ensure that all potential issues are identified and resolved before the final deployment.

## Live Environment

Finally, the live environment, or production environment, is what users interact with. If everything goes smoothly through the previous stages, this version should be bug-free, offering users an optimal experience. The live environment is highly monitored and maintained to ensure high availability and performance. Regular backups and disaster recovery plans are essential to safeguard against data loss and downtime. Performance monitoring tools and analytics are used to track user behavior and site performance, allowing for continuous improvement. Security measures are also critical here to protect against threats and vulnerabilities. The live environment is the culmination of all the hard work done in the previous stages, delivering a polished and reliable product to end users.
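A common way to keep these four environments separate in code is to key the configuration off a single environment variable. A minimal Python sketch; the variable name `APP_ENV` and the connection strings are illustrative, and real secrets would come from a secrets manager rather than source code:

```python
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    debug: bool
    database_url: str

# One entry per environment; values here are placeholders.
CONFIGS = {
    "local":       Config(debug=True,  database_url="sqlite:///local.db"),
    "development": Config(debug=True,  database_url="postgres://dev-db/app"),
    "staging":     Config(debug=False, database_url="postgres://staging-db/app"),
    "production":  Config(debug=False, database_url="postgres://prod-db/app"),
}

def load_config() -> Config:
    env = os.getenv("APP_ENV", "local")  # default to the local environment
    return CONFIGS[env]                  # unknown values fail fast with a KeyError

print(load_config())
```

Because staging loads its own configuration, it can point at a production-like database and settings without ever touching the live site's data.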
## Staging Environment vs. Testing Environment

The key difference between a staging environment and a testing environment lies in their fidelity to the live site. The staging environment mirrors the live site closely, ensuring that new changes won't cause unexpected issues upon deployment. A testing environment, on the other hand, may not replicate the live site as precisely, focusing instead on testing specific code changes quickly.

## When to Use a Staging Environment?

Ideally, every website should have a staging environment. It offers a valuable opportunity to catch and fix bugs before they affect users, enhancing the overall user experience. Here are some specific scenarios in which a staging environment is crucial:

- **Before Major Updates**: When rolling out significant updates or new features, testing in a staging environment ensures that these changes do not disrupt the live site.
- **After Code Refactoring**: Major code changes or refactoring can introduce unexpected issues. A staging environment allows for thorough testing to ensure stability.
- **Integration Testing**: When integrating third-party services or APIs, it's essential to validate these integrations in a staging environment to avoid disruptions.
- **Performance Enhancements**: Testing performance improvements or optimizations in a staging environment helps ensure that they achieve the desired results without unintended side effects.
- **Security Patches**: Applying and testing security patches in a staging environment ensures that they do not inadvertently introduce new vulnerabilities or issues.
- **User Acceptance Testing (UAT)**: Before a full rollout, conducting UAT in a staging environment allows selected users to test and provide feedback, ensuring the update meets user expectations.
- **Disaster Recovery Drills**: Simulating disaster recovery scenarios in a staging environment prepares the team for real-world incidents, ensuring a smoother response and recovery.

## Why Do We Need a Staging Environment?

Whether you need a staging site depends on the complexity and size of your website, as well as how often you make major changes. For smaller sites with infrequent updates, the preview function in your CMS might suffice. If you do want to add additional environments, consider the costs involved in setting them up. While local environments can save on hosting costs, for larger projects the benefits of a staging environment far outweigh the expenses.

## Sum up

Incorporating a staging environment into your deployment process can greatly enhance the reliability and user satisfaction of your website, making it a crucial element of software development.
igor_ag_aaa2341e64b1f4cb4
1,891,337
Writing Good Commit Messages
If you are a developer, you are probably already familiar with version control systems, such as...
0
2024-06-17T13:48:42
https://dev.to/darlangui/pt-br-escrevendo-boas-mensagens-de-commit-3pka
braziliandevs, git, tutorial, learning
If you are a developer, you are probably already familiar with version control systems, such as the widely used Git. This summary will guide you through the importance of the commit message and what should go into it, in a very practical and direct way.

## Starting with git init

Let's start from the beginning and initialize a Git repository:

```bash
$ git init
Initialized empty Git repository in C:/Users/User/Projects/exemplo/.git/
```

Now that the repository is ready, let's move on to adding files. Depending on your project (let's use Vue.js as an example), you will probably see something like this when you check the status:

```bash
$ git status
On branch master

No commits yet

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        .gitignore
        README.md
        babel.config.js
        jsconfig.json
        package.json
        public/
        src/
        vue.config.js
        yarn.lock

nothing added to commit but untracked files present (use "git add" to track)
```

## Managing your files

Here, we need to add files to Git's working tree before making a commit. This is where `.gitignore` comes in, crucial for avoiding committing sensitive things like API keys or local configuration files. An example of a generic `.gitignore`:

```.gitignore
# Node modules
node_modules/

# Logs
logs/
*.log
npm-debug.log*
yarn-debug.log*
yarn-error.log*

# Dependency directories
jspm_packages/

# Compiled output
dist/
build/

# Build directory
build/

# Coverage directory used by tools like istanbul
coverage/

# IDEs and editors
.vscode/
.idea/
*.sublime-project
*.sublime-workspace
*.code-workspace

# User-specific configuration files
*.env.local
*.env.*.local
.env
.env.*
!.env.example

# Local env files
.env.local
.env.*.local

# Temporary files and folders
temp/
tmp/
*.swp
*.swo
*.swn

# OS-specific files
.DS_Store
Thumbs.db

# Vuepress build output
.vuepress/dist

# Next.js build output
.next/

# Nuxt.js build output
.nuxt/
node_modules/

# Expo
.expo/
.expo-shared/

# SvelteKit
.svelte-kit/

# Styleguidist
.styleguidist/

# Custom output directories
out/
public/

# MacOS
*.DS_Store

# Windows
Thumbs.db
ehthumbs.db
```

## Preparing for the commit

Now let's add the modified files to Git's staging area. Avoid `git add .` so you don't stage unwanted files. Use `git add -i` to interactively choose what to include in the commit, then run `git status` to check that everything went well:

```bash
$ git status
On branch master

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
        new file:   .gitignore
        new file:   README.md
        new file:   babel.config.js
        new file:   jsconfig.json
        new file:   package.json
        new file:   public/favicon.ico
        new file:   public/index.html
        new file:   src/App.vue
        new file:   src/assets/logo.png
        new file:   src/components/HelloWorld.vue
        new file:   src/main.js
        new file:   vue.config.js
        new file:   yarn.lock
```

## Writing a good commit message

The commit message is crucial. It should be clear about what was done and why. A good pattern might be:

```
fix: Fix the login page bug

Because the form was not correctly validating empty fields.

Changed such-and-such detail of the login page.
```

This part is actually fairly easy; you need to be a bit creative, but you get the hang of it with time.
Here the message structure is divided into three parts:

- Title: a prefix plus the title itself. There are several prefix types; I recommend studying Conventional Commits to learn more about them. The title itself should be direct and simple; I like to follow something like "if I commit this, it will {Title}".
- Why: basically the reason for, or the intent behind, making this commit, whether it is a change or the addition of a new feature, for example.
- Description: here you basically describe what was changed or added in this commit.

Setting your default editor for `git commit` makes this part easier:

```bash
git config --global core.editor "vim"
```

In `vim` the screen will look like this, depending on how your editor is configured:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s5xtd7odzbo2vryr8ej8.png)

## Finishing the commit

Then we write the commit message following that pattern:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qryas04adh3fsd3xgv6j.png)

Simple, isn't it? Now just finish the commit. Let's take a look at the result with `git log`:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4hge4w383pbtjp2ewxhp.png)

Perfect! Now your application's history is organized and well written. These steps are just the extremely simple basics; I still recommend studying and reading the Git documentation itself to understand how to keep your Git history clean and well organized.

# Conclusion

Remember, these are general guidelines. Git is about control and history. With well-structured commits and clear messages, you can navigate your project's history efficiently and understand past changes. Experiment, adjust to your needs, and keep your workflow organized!

And as I always say here: if you've read this far, I recommend you start applying these principles in your projects, because to learn how to program, there is nothing better than: PROGRAMMING.

Thanks for reading! S2

## References

- https://www.youtube.com/watch?v=6OokP-NE49k&t=2174s&pp=ygUVZ2l0IGEgbWVsaG9yIG1lbnNhZ2Vt
- https://www.youtube.com/watch?v=sStBPj7JJpM&t=191s&pp=ygUhbyBqZWl0byBjZXJ0byBkZSBlc2NyZXZlciBjb21taXRz
- https://www.git-scm.com/docs
- https://www.youtube.com/watch?v=azw4kmyaWyM&pp=ygUhbyBqZWl0byBjZXJ0byBkZSBlc2NyZXZlciBjb21taXRz
- https://www.conventionalcommits.org/en/v1.0.0/
darlangui
1,891,336
Online Crackers Sivakasi for Vel Traders Crackers
Introduction: A Boom in the Online Crackers Market Remember the excitement of buying...
0
2024-06-17T13:48:36
https://dev.to/vel_traderscrackers_10df/online-crackers-sivakasi-for-vel-traders-crackers-2oob
### Introduction: A Boom in the Online Crackers Market

Remember the excitement of [buying crackers for festivals?](https://www.velcrackerssivakasi.com/shop/) The colorful sparks, the loud bangs, and the joyous celebrations. Now, imagine all of that excitement at your fingertips. That's right! Buying crackers online has become the new trend, making it easier and more convenient for everyone. And if we talk about online crackers, one name that stands out is Vel Traders Crackers from Sivakasi. Let's dive into the world of online crackers and explore why Vel Traders Crackers is the go-to choice for festive celebrations.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l5gvaemr3opjoxklh3vv.png)

## The Evolution of the Cracker Industry

### A Brief History of Crackers

Crackers have been a part of celebrations for centuries. Originating in China, they made their way to India and became an integral part of our festivals, especially Diwali. Over the years, the manufacturing of crackers has evolved, with Sivakasi emerging as the hub of the cracker industry in India.

### The Rise of Sivakasi: The Cracker Capital of India

Sivakasi, a small town in Tamil Nadu, is synonymous with crackers. Known as the "Cracker Capital of India," Sivakasi produces around 90% of the country's fireworks. The town is home to numerous cracker manufacturers, including Vel Traders, who have been in the business for decades.

## Why Buy Crackers Online?

### Convenience at Your Fingertips

Buying crackers online offers unparalleled convenience. No more standing in long queues or braving the scorching sun. With just a few clicks, you can order your [favorite crackers](https://www.velcrackerssivakasi.com/shop/) from the comfort of your home and have them delivered to your doorstep.

### Wide Variety and Choice

Online stores offer a vast range of crackers, from traditional sparklers to fancy aerial fireworks. You can browse through different categories, compare prices, read reviews, and make an informed decision.

### Safe and Secure

Safety is a top priority when it comes to crackers. Reputable online stores like Vel Traders Crackers ensure that all their products meet safety standards. They also provide detailed instructions on how to use the crackers safely.

### Attractive Discounts and Offers

Who doesn't love a good discount? Online stores often offer attractive discounts and deals, making it more economical to buy crackers online. Vel Traders Crackers frequently has special offers during the festive season.

## [Vel Traders Crackers: The Best Choice for Online Crackers](https://www.velcrackerssivakasi.com/shop/)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kiza6i3u95a0vrc1i0zc.jpeg)

### A Trusted Name in the Industry

Vel Traders Crackers has been a trusted name in the cracker industry for years. Their commitment to quality, safety, and customer satisfaction has earned them a loyal customer base.

### High-Quality Products

Quality is non-negotiable when it comes to crackers. Vel Traders Crackers ensures that all their products are made using high-quality materials and are thoroughly tested for safety.

### Eco-Friendly Options

In today's environmentally conscious world, Vel Traders Crackers offers eco-friendly cracker options. These crackers produce less smoke and noise, making them a better choice for the environment.

### User-Friendly Website

Their website is designed to provide a seamless shopping experience.
With easy navigation, detailed product descriptions, and secure payment options, buying crackers online has never been easier.

### Customer Support

Vel Traders Crackers prides itself on excellent customer support. Whether you have a query about a product or need help with your order, their dedicated support team is always ready to assist.

## Types of Crackers Available at Vel Traders Crackers

### Sparklers

Sparklers are a must-have for any celebration. They are safe, easy to use, and loved by kids and adults alike.

### Flower Pots

Flower pots are known for their beautiful display of colorful sparks. They are a popular choice for Diwali and other festive occasions.

### Chakras (Ground Spinners)

Chakras spin on the ground, creating a mesmerizing effect of colorful sparks. They are fun to watch and add excitement to any celebration.

### Rockets

Rockets shoot up into the sky and burst into a spectacular display of lights and colors. They are perfect for creating a grand finale for your celebrations.

### Bombs and Sound Crackers

For those who love loud bangs, bombs and sound crackers are the way to go. They come in various sizes and intensities, adding a thrilling element to the festivities.

### Fancy Fireworks

Fancy fireworks are designed to create stunning visual effects. They include items like sky shots, multi-color fountains, and more, making your celebrations truly memorable.

## Safety Tips for Using Crackers

### Read the Instructions

Always read the instructions on the cracker package. They provide important information on how to use the cracker safely.

### Keep a Safe Distance

Maintain a safe distance while lighting crackers. Make sure children are supervised at all times.

### Use a Candle or Agarbatti

Use a candle or agarbatti (incense stick) to light crackers. It keeps you at a safe distance from the cracker.

### Keep Water Handy

Keep a bucket of water or a fire extinguisher nearby in case of any mishaps.

### Avoid Loose Clothing

Avoid wearing loose clothing while bursting crackers. It reduces the risk of catching fire.

## Customer Reviews

**John D.** "I have been buying crackers from Vel Traders for years. Their quality is unmatched, and the variety they offer is amazing. Buying online is so convenient, and their customer service is top-notch."

**Priya S.** "I was a bit skeptical about buying crackers online, but Vel Traders Crackers changed my perception. The entire process was smooth, and the crackers were delivered on time. The quality was excellent, and the discounts were a bonus."

**Ravi K.** "Vel Traders Crackers offers the best eco-friendly crackers. They are safe, produce less smoke, and are perfect for environment-conscious people like me. Highly recommend!"

## FAQs

1. **Are the crackers from Vel Traders Crackers safe to use?** Yes, all crackers from Vel Traders Crackers meet safety standards and come with detailed usage instructions.
2. **How long does it take to deliver the crackers?** Delivery times vary depending on your location, but Vel Traders Crackers usually delivers within 3-5 business days.
3. **Do they offer discounts on bulk purchases?** Yes, Vel Traders Crackers offers attractive discounts on bulk purchases, especially during the festive season.
4. **Can I track my order?** Yes, once your order is dispatched, you will receive a tracking number to monitor the delivery status.
5. **What payment methods are accepted?** Vel Traders Crackers accepts various payment methods, including credit/debit cards, net banking, and digital wallets.
## Conclusion: Light Up Your Celebrations with Vel Traders Crackers

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1d4p6jksg78xi0b31qe2.jpeg)

Buying crackers online has never been easier or more convenient. With Vel Traders Crackers, you get high-quality, safe, and eco-friendly crackers delivered right to your doorstep. So, what are you waiting for? Make your celebrations brighter and more memorable with Vel Traders Crackers. Visit their website today and place your order to enjoy exciting discounts and offers.
vel_traderscrackers_10df
1,891,335
If I Were to Start Over as a Developer I'd...
If I Were to Start My Web Developer Career Over I'd... Looking back on my journey from a...
0
2024-06-17T13:46:07
https://dev.to/qyrusai/if-i-were-to-start-over-as-a-developer-id-2kam
webdev, developer, worklife, beginners
### If I Were to Start My Web Developer Career Over I'd...

Looking back on my journey from a wide-eyed novice to a seasoned senior web developer, I often think about what I would do differently if I could start over. Hindsight, as they say, is 20/20. With the wisdom and experience I've gained, here are some things I would tell my younger self, or anyone just beginning their web development career.

#### Master the Basics

When I first started, I was so eager to dive into frameworks and libraries that I didn't spend enough time mastering the fundamentals: HTML, CSS, and JavaScript. These core technologies are the building blocks of web development. If I were to begin again, I'd dedicate more time to really understanding these basics. They make everything else easier to learn and use.

#### Embrace Version Control Early

I used to think version control systems like Git were just an unnecessary hassle. How wrong I was! Mastering Git has been one of the most valuable skills I've acquired. It not only makes collaboration easier but also allows you to track changes, experiment without fear, and manage different versions of your projects. If I could turn back time, I'd start using Git from day one.

#### Develop Problem-Solving Skills

Web development is fundamentally about solving problems. Early on, I often got caught up in learning the latest languages and frameworks. While that's important, focusing on developing strong problem-solving skills would have saved me countless hours of frustration. These skills transcend any specific technology and are invaluable throughout your career.

#### Build a Portfolio Early

I underestimated the power of a good portfolio in the beginning. While a resume can tell potential employers what you know, a portfolio shows them. If I could start over, I'd begin building real projects as soon as possible. Contributions to open source, freelance work, or personal projects not only showcase your skills but also provide practical experience and a sense of accomplishment.

#### Join a Community

For a long time, I tried to go it alone. Joining a community, whether through local meetups, online forums, or open source projects, has been one of the most enriching experiences of my career. The support, feedback, and opportunities you gain from being part of a community are invaluable. If I were starting over, I'd get involved in the community much sooner.

#### Hone Soft Skills

Technical skills are crucial, but soft skills like communication, teamwork, and time management are just as important. Early in my career, I focused solely on coding, but as I advanced, I realized that being able to communicate effectively with team members and stakeholders is critical. If I could start over, I'd work on these skills from the beginning.

#### Stay Curious and Keep Learning

The tech industry is constantly evolving. The best developers I know are always curious, always learning, and always experimenting with new ideas. Learning doesn't stop when you leave the classroom or get your first job. It's a lifelong journey. If I were starting over, I'd remind myself to stay curious and keep learning.

#### Balance Work and Life

Burnout is real, and it's something I didn't take seriously enough at the start. Working long hours might seem like the way to fast-track your career, but it's important to find a balance. Taking regular breaks, having hobbies, and spending time with loved ones can keep you motivated and creative.
If I could redo my career, I’d place a greater emphasis on maintaining a healthy work-life balance. Starting a career in web development is both exciting and challenging, and I would absolutely do it all again if I could. While everyone's journey is unique, the wisdom gained from years of experience can provide a valuable roadmap. Master the basics, embrace essential tools, develop your problem-solving skills, and never stop learning. Join a community, build a portfolio, and don’t forget the importance of soft skills and work-life balance. Wishing you all the best in your journey, whether you're just starting now, or you've been working in it for 15+ years.
qyrusai
1,891,334
Why Set Doesn't Allow Duplicates in Java
In Java, a Set is a collection that does not allow duplicate elements. This behavior is enforced by...
0
2024-06-17T13:45:54
https://dev.to/codegreen/why-set-doesnt-allow-duplicates-in-java-3ad8
streams, java
In Java, a Set is a collection that does not allow duplicate elements. This behavior is enforced by the underlying implementations of the Set interface, such as HashSet, TreeSet, and LinkedHashSet. The main reason Sets do not allow duplicates is due to the contract defined by the Set interface:

* **Uniqueness:** Sets are designed to store unique elements only. Each element must be unique as determined by its equals() method (and, for hash-based sets, a consistent hashCode()). If an attempt is made to add an element that already exists in the Set, the add operation returns false and the Set is left unchanged; this holds for HashSet, TreeSet, and LinkedHashSet alike.
* **Efficiency:** By disallowing duplicates, Sets can optimize operations like searching and adding elements. HashSet, for example, uses a hash table for storage, making lookups and insertions constant time on average.
* **Behavior consistency:** Sets are often used in scenarios where uniqueness of elements is crucial, such as storing keys in maps or managing collections of distinct entities.

Here's a simple example demonstrating the behavior of a HashSet, which is a common implementation of the Set interface:

```java
import java.util.HashSet;
import java.util.Set;

public class SetExample {
    public static void main(String[] args) {
        Set<Integer> set = new HashSet<>();
        set.add(1);
        set.add(2);
        set.add(3);
        set.add(2); // This duplicate element will not be added

        System.out.println(set); // Output: [1, 2, 3]
    }
}
```

This example illustrates how duplicates are automatically prevented when using a Set in Java.
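Since uniqueness is decided by `equals()` (and, for hash-based sets, `hashCode()`), here is a hedged sketch of how a custom class behaves inside a `HashSet`. The `User` class is hypothetical, purely for illustration:

```java
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

public class CustomEqualityExample {

    // Hypothetical class: two Users count as "the same" if their emails match
    static class User {
        final String email;

        User(String email) {
            this.email = email;
        }

        @Override
        public boolean equals(Object o) {
            return o instanceof User && ((User) o).email.equals(email);
        }

        @Override
        public int hashCode() {
            return Objects.hash(email); // must be consistent with equals
        }
    }

    public static void main(String[] args) {
        Set<User> users = new HashSet<>();
        System.out.println(users.add(new User("a@example.com"))); // true
        System.out.println(users.add(new User("a@example.com"))); // false: duplicate by email
    }
}
```

If `equals`/`hashCode` were left at their default identity-based implementations, both `add` calls would return true, because each `new User(...)` would be treated as a distinct element.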
manishthakurani
1,891,330
In Java, how to create a custom ArrayList that doesn't allow duplicates? #Interview Question
Creating a Custom ArrayList in Java that Doesn't Allow...
0
2024-06-17T13:39:30
https://dev.to/codegreen/in-java-how-to-create-a-custom-arraylist-that-doesnt-allow-duplicate-dbj
streams, java
## Creating a Custom ArrayList in Java that Doesn't Allow Duplicates

In Java, you can create a custom ArrayList that ensures no duplicate elements are added **by extending the ArrayList class** and **overriding the add method**.

```java
import java.util.ArrayList;

public class CustomArrayList<E> extends ArrayList<E> {

    // Override add method to prevent duplicates
    @Override
    public boolean add(E e) {
        if (!contains(e)) {
            return super.add(e);
        }
        return false;
    }

    public static void main(String[] args) {
        CustomArrayList<Integer> list = new CustomArrayList<>();
        list.add(1);
        list.add(2);
        list.add(3);
        list.add(2); // This duplicate will not be added

        System.out.println(list); // Output: [1, 2, 3]
    }
}
```

This custom ArrayList implementation checks if an element already exists before adding it. If the element is not found, it adds the element using the superclass's add method. Otherwise, it returns false, indicating that the element was not added due to duplication.
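One caveat worth noting: `contains` on an `ArrayList` is a linear scan, so each `add` above costs O(n). A hedged variant that trades a little memory for O(1) duplicate checks by keeping a companion `HashSet` (a sketch; for brevity it only overrides `add`, so a production version would also keep `remove`, `add(int, E)`, and friends in sync):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.Set;

public class FastUniqueArrayList<E> extends ArrayList<E> {

    // Mirrors the list's contents for constant-time duplicate checks
    private final Set<E> seen = new HashSet<>();

    @Override
    public boolean add(E e) {
        // Set.add returns false if the element was already present
        return seen.add(e) && super.add(e);
    }
}
```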
manishthakurani
1,891,329
Error Handling and Testing in Go
Error Handling Error handling is a crucial aspect of robust software development, ensuring...
0
2024-06-17T13:39:09
https://dev.to/gophers_kisumu/error-handling-and-testing-in-go-4c9n
### Error Handling

Error handling is a crucial aspect of robust software development, ensuring that applications can gracefully handle unexpected conditions and provide meaningful feedback to users and developers. In Go, error handling is done explicitly, promoting clarity and simplicity.

#### Error Types

Go uses the built-in `error` interface to represent errors. The `error` interface is defined as:

```go
type error interface {
    Error() string
}
```

Any type that implements the `Error` method satisfies this interface. Common error types include:

- **Basic Errors**: Created using the `errors.New` function from the `errors` package.

```go
import "errors"

err := errors.New("an error occurred")
```

- **Formatted Errors**: Created using the `fmt.Errorf` function, which allows for formatted error messages.

```go
import "fmt"

err := fmt.Errorf("an error occurred: %v", someValue)
```

- **Custom Errors**: Custom error types can be defined by implementing the `Error` method on a struct.

```go
type MyError struct {
    Code    int
    Message string
}

func (e *MyError) Error() string {
    return fmt.Sprintf("Error %d: %s", e.Code, e.Message)
}

err := &MyError{Code: 123, Message: "something went wrong"}
```

#### Custom Error Handling

Custom error handling involves creating specific error types that convey additional context about the error. This is particularly useful for distinguishing between different error conditions and handling them appropriately.

```go
type NotFoundError struct {
    Resource string
}

func (e *NotFoundError) Error() string {
    return fmt.Sprintf("%s not found", e.Resource)
}

func findResource(name string) error {
    if name != "expected" {
        return &NotFoundError{Resource: name}
    }
    return nil
}

err := findResource("unexpected")
if err != nil {
    if _, ok := err.(*NotFoundError); ok {
        fmt.Println("Resource not found error:", err)
    } else {
        fmt.Println("An error occurred:", err)
    }
}
```

#### Best Practices

- **Return Errors, Don't Panic**: Use error returns instead of panics for expected errors. Panics should be reserved for truly exceptional conditions.
- **Wrap Errors**: Use `fmt.Errorf` with the `%w` verb to wrap errors with additional context, so the original error can still be unwrapped later.
- **Check Errors**: Always check and handle errors returned from functions.
- **Sentinel Errors**: Define and use sentinel errors for common error cases.

```go
var ErrNotFound = errors.New("not found")
```

- **Use `errors.Is` and `errors.As`**: For error comparisons and type assertions in Go 1.13 and later.

```go
if errors.Is(err, ErrNotFound) {
    // handle not found error
}

var nfErr *NotFoundError
if errors.As(err, &nfErr) {
    // handle custom not found error
}
```

### Testing

Testing is essential to ensure the correctness, performance, and reliability of your Go code. The Go testing framework is simple and built into the language, making it easy to write and run tests.

#### Writing Test Cases

Test cases in Go are written using the `testing` package. A test function must:

- Be named with the prefix `Test`
- Take a single argument of type `*testing.T`

```go
import "testing"

func TestAdd(t *testing.T) {
    result := Add(2, 3)
    if result != 5 {
        t.Errorf("expected 5, got %d", result)
    }
}
```

#### Using the `testing` Package

The `testing` package provides various functions and methods for writing tests:

- **t.Error / t.Errorf**: Report a test failure but continue execution.
- **t.Fatal / t.Fatalf**: Report a test failure and stop execution.
- **t.Run**: Run sub-tests for better test organization.
```go func TestMathOperations(t *testing.T) { t.Run("Add", func(t *testing.T) { result := Add(1, 2) if result != 3 { t.Fatalf("expected 3, got %d", result) } }) t.Run("Subtract", func(t *testing.T) { result := Subtract(2, 1) if result != 1 { t.Errorf("expected 1, got %d", result) } }) } ``` #### Benchmarking and Profiling Benchmarking tests the performance of your code, while profiling helps identify bottlenecks and optimize performance. Benchmarks are also written using the `testing` package and are named with the prefix `Benchmark`. ```go func BenchmarkAdd(b *testing.B) { for i := 0; i < b.N; i++ { Add(1, 2) } } ``` To run benchmarks, use `go test` with the `-bench` flag: ```sh go test -bench=. ``` Profiling can be done using the `-cpuprofile` and `-memprofile` flags: ```sh go test -cpuprofile=cpu.prof -memprofile=mem.prof ``` Use the `go tool pprof` command to analyze the profile data: ```sh go tool pprof cpu.prof ``` ### Conclusion Proper error handling and comprehensive testing are fundamental practices in Go programming. By defining clear error types, following best practices for error handling, and writing thorough test cases, you can build robust and reliable applications. Additionally, leveraging benchmarking and profiling tools will help ensure your code performs optimally.
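As a closing sketch, here is the table-driven style that many Go codebases combine with `t.Run`: a slice of cases driving one assertion loop. It reuses the `Add` function assumed in the earlier examples:

```go
func TestAddTableDriven(t *testing.T) {
    cases := []struct {
        name string
        a, b int
        want int
    }{
        {"positive", 1, 2, 3},
        {"zero", 0, 0, 0},
        {"negative", -1, -2, -3},
    }

    for _, tc := range cases {
        tc := tc // capture the range variable (needed before Go 1.22)
        t.Run(tc.name, func(t *testing.T) {
            if got := Add(tc.a, tc.b); got != tc.want {
                t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
            }
        })
    }
}
```

Adding a case is now a one-line change, and each case reports under its own name when it fails.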
gophers_kisumu
1,891,083
Swift 101: Collections part III - Tuples and Dictionaries
Hola Mundo! Welcome to a new article in a series of Swift 101 notes 📝 I created these...
27,019
2024-06-17T13:38:39
https://dev.to/silviaespanagil/swift-101-collections-part-iii-tuples-and-dictionaries-42p0
swift, beginners, learning, mobile
# Hola Mundo!

Welcome to a new article in a series of [Swift 101 notes](https://dev.to/silviaespanagil/swift-101-getting-into-ios-development-gji) 📝 I created these notes while learning the language and decided to share them because why not? If you're new to Swift or interested in learning more about this language, I invite you to follow my series! 🙊

Last week I shared a post with [the second part of collections, which was sets](https://dev.to/silviaespanagil/swift-101-collections-part-ii-sets-3i41). Next week, I'll share the last part of collections with Enums! So, let's get to it 💪!

___

As a quick refresher, "Collections" are types that store multiple values. They arrange all those values in certain ways so we can access them in the future. Think of it like a fruit basket with many fruits in it. Similarly, arrays in Swift store multiple values of the same type in an ordered list, like a line of apples, oranges, and bananas neatly arranged in your basket 🍎🍊🍌

There are many types of Collections, like Arrays and Sets which we already discussed. Today, let's talk about ✨**Tuples and Dictionaries**✨.

___

### Tuples

Tuples are a collection type that allows us to group multiple values into a single compound value. These values can be of different types, making tuples a flexible way to group related values together.

Here are some important characteristics and rules about tuples in Swift:

* **Fixed Size:** The number of elements in a tuple is fixed when you create it. You cannot add or remove elements after the tuple is created.
* **Immutable Types:** While you can change the values within a tuple, you cannot change the types of the elements. The types of the elements are defined when the tuple is created and must remain the same.

#### How to declare a Tuple

Declaring a tuple in Swift is really easy. You simply enclose the elements in parentheses and separate them by commas. You can also name the elements for easier access. Here's an example of how to declare and use a tuple:

```swift
var character = (name: "Aragorn", nickname: "Strider", ageOfDeath: 219)

print(character) // Output: (name: "Aragorn", nickname: "Strider", ageOfDeath: 219)
```

❌ You can't declare an empty tuple, because of the characteristics previously explained. Doing so will show an error in Xcode ❌

___

### Managing Tuples: Accessing Values

#### Modifying Tuples

As with any variable, we can change the values of our tuple. However, one of the main characteristics of tuples is that we cannot change the number of elements or their types. To update a tuple, we can do:

```swift
✅ character = (name: "Arwen", nickname: "Undomiel", ageOfDeath: 2901)
```

Attempting to assign values differently will result in an error:

```swift
❌ character = (name: "Frodo") // Error: cannot assign value of type '(name: String)' to type '(name: String, nickname: String, ageOfDeath: Int)'
```

##### Accessing Tuples

To access a tuple value, we simply take the variable name and reference the element we want to access.

```swift
var character = (name: "Aragorn", nickname: "Strider", ageOfDeath: 219)

print(character.name) // Output: Aragorn
```

We can also access one element of our tuple and then change its value, while the rest of the elements keep their old values.
```swift
var character = (name: "Aragorn", nickname: "Strider", ageOfDeath: 219)

character.name = "Aragorn, son of Arathorn"

print(character.name) // Output: Aragorn, son of Arathorn
print(character) // Output: (name: "Aragorn, son of Arathorn", nickname: "Strider", ageOfDeath: 219)
```

___

#### Tuple methods and properties

Tuples in Swift do not have any properties or methods. They are a value type that only groups values. However, there are a couple of operations possible with tuples:

#### Access to elements

We already showed that we can access an element using its name, but we can also do so using the index (note that indices start at 0):

```swift
var character = (name: "Aragorn", nickname: "Strider", ageOfDeath: 219)

print(character.name) // Output: Aragorn
print(character.0) // Output: Aragorn
```

#### Decomposition

You can decompose the tuple into separate variables:

```swift
let (name, nickname, ageOfDeath) = character

print(name) // Output: Aragorn
print(nickname) // Output: Strider
print(ageOfDeath) // Output: 219
```

#### Comparison

You can compare tuples if the elements are comparable.

```swift
var character = (name: "Aragorn", status: "Married")
let anotherCharacter = (name: "Arwen Undomiel", status: "Married")

print(character == anotherCharacter) // Output: false
print(character.status == anotherCharacter.status) // Output: true
```

___

### Dictionaries

Dictionaries are another collection type available in Swift. Dictionary values are stored and retrieved not by a numeric position (index) but by a key.

#### How to declare a Dictionary

To **declare** a dictionary, we specify the key and value types inside brackets [ ], separated by a colon.

```swift
var emptyDictionary: [String: String] = [:]
emptyDictionary["keyName"] = "Value"

print(emptyDictionary) // Output: ["keyName": "Value"]

var characterDetails = [
    "Aragorn": "Son of Arathorn, also known as Strider",
    "Frodo": "Bearer of the One Ring",
    "Gandalf": "The Grey, later known as Gandalf the White"
]

print(characterDetails["Aragorn"]) // Output: Optional("Son of Arathorn, also known as Strider")
```

💢 Note that subscripting a dictionary returns an Optional, because the key might not exist; that's why the output above is wrapped in `Optional(...)`. 💢

___

### Managing Dictionaries: Accessing Values

#### Accessing values with iterations

We can access dictionary values by iterating over them. When iterating through a dictionary, both keys and values can be accessed if needed.

```swift
var characterDetails = [
    "Aragorn": "Son of Arathorn, also known as Strider",
    "Frodo": "Bearer of the One Ring"
]

for (key, value) in characterDetails {
    print("\(key) also known as \(value)")
}

/* Possible output:
Aragorn also known as Son of Arathorn, also known as Strider
Frodo also known as Bearer of the One Ring */
```

💢 Note that when iterating over Dictionaries in Swift, the elements may be returned in an unordered manner for efficiency reasons. 💢

#### Accessing values using the key

We may access a dictionary value using the key name if we know it in advance.

```swift
print(characterDetails["Aragorn"]) // Output: Optional("Son of Arathorn, also known as Strider")
```

We may also store all the keys or all the values in a new collection to use.
```swift
let characterNames = characterDetails.keys
print(characterNames) // Output: ["Frodo", "Aragorn"]

let characterDescription = characterDetails.values
print(characterDescription) // Output: ["Son of Arathorn, also known as Strider", "Bearer of the One Ring"]
```

#### Adding new values to an existing dictionary

To add a new value to an existing dictionary, we follow the syntax `dictionaryName[NewKey] = NewValue`:

```swift
var characterDetails = [
    "Aragorn": "Son of Arathorn, also known as Strider",
    "Frodo": "Bearer of the One Ring"
]

// New value
characterDetails["Galadriel"] = "Lady of Lothlórien, Nenya bearer"

print(characterDetails) // Output: ["Frodo": "Bearer of the One Ring", "Galadriel": "Lady of Lothlórien, Nenya bearer", "Aragorn": "Son of Arathorn, also known as Strider"]
```

___

### Dictionary methods and properties

#### **`.updateValue(_:forKey:)`:**

Allows us to update the value for a given key (if the key doesn't exist yet, the pair is inserted).

```swift
characterDetails.updateValue("Just a hobbit", forKey: "Frodo")

print(characterDetails["Frodo"]) // Output: Optional("Just a hobbit")
```

#### **`.removeValue(forKey:)`:**

Allows us to remove a value completely from our dictionary.

```swift
characterDetails.removeValue(forKey: "Frodo")

print(characterDetails) // Output: ["Aragorn": "Son of Arathorn, also known as Strider", "Galadriel": "Lady of Lothlórien, Nenya bearer"]
```

#### **`.removeAll()`:**

Allows us to remove all the values in the dictionary.

```swift
characterDetails.removeAll()

print(characterDetails) // Output: [:]
```

#### **`.isEmpty`:**

A property (not a method) that returns a Boolean reflecting whether the dictionary is empty.

```swift
print(characterDetails.isEmpty) // Output: true
```

#### **`.count`:**

A property that counts all the elements inside the dictionary.

```swift
var characterDetails = [
    "Aragorn": "Son of Arathorn, also known as Strider",
    "Frodo": "Bearer of the One Ring"
]

print(characterDetails.count) // Output: 2
```

___

### Want to keep learning about Swift?

This is a full series on [Swift 101](https://dev.to/silviaespanagil/swift-101-getting-into-ios-development-gji); the next chapter will be the final part of Collections, where I will share about Enums, so I hope to see you there!

Remember that you can always go further into this information by checking the official documentation and courses [here](https://developer.apple.com/learn/)

If you enjoyed this, please share, like, and comment. I hope this can be useful to someone and that it will inspire more people to learn and code with Swift.
silviaespanagil
1,891,328
Bubble game CSS only
Check out this Pen I made! It shows beautiful bubbles that float around an area, and you should try clicking...
0
2024-06-17T13:38:09
https://dev.to/tidycoder/bubble-game-css-only-2occ
codepen
Check out this Pen I made! It shows beautiful bubbles that float around an area, and you should try clicking on them.

{% codepen https://codepen.io/TidyCoder/pen/KKLZoYL %}
tidycoder
1,891,327
Helm 3: Advanced Package Management for Kubernetes Applications
Helm 3 is a significant upgrade to the Helm package manager for Kubernetes applications. This version...
0
2024-06-17T13:37:25
https://dev.to/platform_engineers/helm-3-advanced-package-management-for-kubernetes-applications-2o6b
Helm 3 is a significant upgrade to the Helm package manager for Kubernetes applications. This version introduces several key features that enhance the management and deployment of applications on Kubernetes clusters. In this blog, we will delve into the technical details of Helm 3 and explore its capabilities.

### Package Management

Helm 3 ships with the `helm` command-line tool for managing and installing applications on Kubernetes clusters. It provides a simple and intuitive interface and is responsible for installing, upgrading, and uninstalling applications on the cluster.

### Charts

Helm 3 uses a concept called charts to define and manage applications. Charts are collections of YAML files that describe the application, its dependencies, and the resources required to deploy it. Charts can be stored in a chart repository, which is a centralized location for managing and sharing charts.

Here is an example of a simple chart (Helm 3 charts declare `apiVersion: v2`):

```yaml
# Chart.yaml
apiVersion: v2
name: my-app
version: 1.0.0
description: A simple web application
```

### Repositories

Helm 3 supports multiple chart repositories, which can be added and managed using the `helm repo` command. Repositories can be local or remote, allowing users to share and manage charts across different environments.

To add a repository:

```bash
helm repo add my-repo https://my-repo.com/charts
```

### Dependencies

Helm 3 introduces a new concept called dependencies, which allows charts to depend on other charts. This feature enables the creation of complex applications by combining multiple charts. (In Helm 3, dependencies live directly in `Chart.yaml` under `apiVersion: v2`, instead of the old `requirements.yaml`.)

Here is an example of a chart with dependencies:

```yaml
# Chart.yaml
apiVersion: v2
name: my-app
version: 1.0.0
description: A simple web application
dependencies:
  - name: mysql
    version: 1.0.0
    repository: https://my-repo.com/charts
```

### Releases

Helm 3 introduces a new concept called releases, which represents a specific deployment of an application on a Kubernetes cluster. Releases are inspected and managed with commands such as `helm list` and `helm status`.

To create a new release (in Helm 3 you name the release and point at a chart; here, the local `./my-app` chart directory):

```bash
helm install my-app ./my-app --set mysql.password=mysecretpassword
```

### Upgrades and Rollbacks

Helm 3 provides features for upgrading and rolling back releases. Upgrades can be performed using the `helm upgrade` command, and rollbacks can be performed using the `helm rollback` command.

To upgrade a release:

```bash
helm upgrade my-app ./my-app --set mysql.password=newpassword
```

To rollback a release:

```bash
helm rollback my-app 1
```

### Security

Helm 3 introduces several security features, including support for SSL/TLS certificates and encryption of sensitive data.

To pass an SSL/TLS certificate to a chart:

```bash
helm install my-app ./my-app --set-file tls.crt=cert.pem --set-file tls.key=key.pem
```

Helm 3 is designed to integrate with [platform engineering](https://www.platformengineers.io) tools and workflows, providing a robust and scalable solution for managing Kubernetes applications.

### Conclusion

Helm 3 is a powerful tool for managing and deploying Kubernetes applications. Its advanced package management features, including charts, repositories, dependencies, releases, upgrades, and rollbacks, make it an ideal choice for complex application deployments. With its [robust security](https://platformengineers.io/blog/securing-kubernetes-beyond-rbac-and-pod-security-policies-psp/) features and integration with platform engineering tools, Helm 3 is a valuable addition to any Kubernetes environment.
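One more pattern worth knowing, as a hedged sketch (the chart path is an assumption carried over from the examples above): `helm upgrade --install` installs the release if it does not exist yet and upgrades it otherwise, which makes deployments idempotent and convenient for CI pipelines:

```bash
# Install on the first run, upgrade on every subsequent run
helm upgrade --install my-app ./my-app \
  --set mysql.password=mysecretpassword \
  --wait --timeout 5m
```

The `--wait` flag blocks until the release's resources are ready, so a failed rollout fails the pipeline step instead of silently succeeding.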
shahangita
1,891,326
🐚🦀 Shell commands rewritten in Rust
Introduction For every software engineer, developer, or programmer, command-line tools are...
0
2024-06-17T13:36:54
https://dev.to/girordo/shell-commands-rewritten-in-rust-23id
rust, shell, terminal, beginners
## Introduction

For every software engineer, developer, or programmer, command-line tools are essential for making life easier, especially when working with code. In recent years, with the rise in popularity of the Rust language, we have seen a growing number of these tools being rewritten in Rust, providing better performance and reliability.

But you might be wondering: why should I use a command rewritten in Rust? The answer is simple: because it's amazing! In this article, we will present a list of some of these tools that we use in our daily lives.

🚨🚨🚨 **WARNING: If you like this article, please leave a reaction and save it, and follow me here and on my [GitHub](https://github.com/girordo)!**

## Essential Tools

### [**bat**](https://github.com/sharkdp/bat)

"bat" is a clone of `cat` with syntax highlighting and Git integration. It works on Windows, MacOS, and Linux, offering syntax highlighting for a wide variety of file extensions.

### [**bottom**](https://github.com/ClementTsang/bottom)

"bottom" is a process and system monitoring tool. It provides graphical widgets to monitor CPU, RAM, network, disk, and other resources, making it a versatile and highly customizable choice for Linux, MacOS, and Windows.

### [**lsd**](https://github.com/lsd-rs/lsd)

"lsd" is an enhanced version of the `ls` command. It adds colors, icons, tree view, and more formatting options, making directory listing more informative and visually pleasing.

### [**fd**](https://github.com/sharkdp/fd)

"fd" is a fast and user-friendly alternative to the `find` command for locating entries in your file system. While it does not aim to replicate all the advanced functionalities of `find`, it offers opinionated defaults for most use cases.

### [**sd**](https://github.com/chmln/sd)

"sd" is an intuitive command-line tool for finding and replacing text. It simplifies the search and replace process with an easy-to-understand syntax, eliminating the need to deal with backslashes and other special characters.

## Bonus

Here are some additional tools that we plan to start using:

### [**xh**](https://github.com/ducaale/xh)

"xh" is a friendly and fast tool for making HTTP requests. It shares many of the great design features of HTTPie but focuses on improving performance. While we still use HTTPie out of habit, we plan to switch to "xh" soon.

### [**joshuto**](https://github.com/kamiyaa/joshuto)

A terminal file manager inspired by Ranger. Currently, we use Ranger, but we are considering switching to "joshuto."

### [**jql**](https://github.com/yamafaktory/jql)

"jql" is a JSON Query Language tool built in Rust. It provides a convenient way to query JSON data effectively.

### [**xcp**](https://github.com/tarka/xcp)

"xcp" is a partial clone of the Unix "cp" command. It is not intended to be a complete replacement but rather a complementary utility that provides more user-friendly feedback and specific optimizations for certain tasks.

## Custom Configurations

Here are some useful aliases that we have configured for "bat" and "lsd":

```bash
# Aliases
alias bat='batcat'
alias ll='lsd -la'
alias lld='lsd -la --date=relative --group-dirs=first'
alias lle='lsd -la --date=relative --sort=extension --group-dirs=first'
alias la='lsd -a'
alias l='lsd -l'
alias lt='lsd --tree'
```

## Conclusion

These are some of the tools we love to use in our daily workflow, and we hope you find them useful as well. Additionally, we introduced some options that we plan to adopt in the future.
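As a quick taste before the links, a few illustrative invocations (double-check each tool's `--help`, since flags can differ between versions):

```bash
# bat: view a file with syntax highlighting and Git gutter marks
bat src/main.rs

# fd: find all Markdown files (respects .gitignore by default)
fd -e md

# sd: replace "foo" with "bar" in a file, no escaping gymnastics
sd 'foo' 'bar' config.txt

# lsd: tree view, two levels deep
lsd --tree --depth 2
```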
For more information and to explore more commands and applications rewritten in Rust, check out the following links: - [List of commands](https://gist.github.com/sts10/daadbc2f403bdffad1b6d33aff016c0a) - [Original article](https://zaiste.net/posts/shell-commands-rust/) Try out these tools and see how they can enhance your development and terminal experience!
girordo
1,891,230
How I've implemented the Medallion architecture using Apache Spark and Apache Hadoop
Table of contents What is this project all about System architecture The Retail...
0
2024-06-17T13:35:05
https://dev.to/fadygrab/how-ive-implemented-the-medallion-architecture-using-apache-spark-and-apache-hdoop-3lmb
dataengineering, apachespark, data, opensource
## Table of contents

- [What is this project all about](#what-is-this-project-all-about)
- [System architecture](#system-architecture)
- [The Retail app](#the-retail-app)
- [Apache Spark Cluster](#apache-spark-cluster)
- [The Development Jupyter server](#the-development-jupyter-server)
- [The Orchestrator](#the-orchestrator)
- [Spark incremental database loading](#spark-incremental-database-loading)
- [Hadoop cluster](#hadoop-cluster)
- [The dashboard app](#the-dashboard-app)
- [Spark Operations](#spark-operations)
- [Bronze stage](#bronze-stage)
- [Silver stage](#silver-stage)
- [Gold stage](#gold-stage)
- [Exposed ports and services:](#exposed-ports-and-services)
- [Instructions](#instructions)
- [Stopping the project](#stopping-the-project)
- [Things to consider if you want to make this into real-world project](#things-to-consider-if-you-want-to-make-this-into-real-world-project)
- [Key takeaways](#key-takeaways)
- [References](#references)

## What is this project all about

In this project, I'm exploring the *[Medallion Architecture](https://learn.microsoft.com/en-us/azure/databricks/lakehouse/medallion)*, which is a data design pattern that organizes data into different layers based on structure and/or quality.

I'm creating a fictional scenario: a large enterprise with several branches across the country. Each branch receives purchase orders from an app and delivers the goods to its customers. The enterprise wants to identify the branch that receives the most purchase requests and the branch that has the minimum average delivery time.

To achieve that, I've used [Apache Spark](https://spark.apache.org/) as a distributed compute engine and [Apache Hadoop](https://hadoop.apache.org/), in particular HDFS, as my data storage layer. Apache Spark ingests, processes, and stores the app's data on HDFS to be served to a custom dashboard app.

You can find out all about it in [this GitHub repo](https://github.com/FadyGrAb/data-engineering-mini-projects/tree/main/mini-porjects/03-medallion-architecture-with-spark-and-hadoop)

![project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wzv7eb6ynbl1xau94s5b.jpg)

## System architecture

### The Retail app

I've simulated the retail app with a Python script that randomly generates purchases and handles their lifecycle as NEW -> INTRANSIT -> DELIVERED. The app stores them in a [PostgreSQL](https://www.postgresql.org/) database named *retail* that has two tables, *orders* and *ordershistory*. Separate users with appropriate privileges are created: one for the app to write data to the database, and one for Spark to read it. The database is initiated with the `postgresql/init.sql` script.

### Apache Spark Cluster

A [standalone](https://spark.apache.org/docs/latest/spark-standalone.html) cluster with one master node (spark-master) and two worker nodes (spark-worker-1 and spark-worker-2). The REST API endpoint and history server are enabled. I built its Docker image myself rather than using an image from Docker Hub.

### The Development Jupyter server

A Jupyter server on the master node of the Spark cluster, running on port 8888 and exposed to localhost in the Docker Compose file. This is how I developed the pipeline's PySpark scripts; it allowed rapid and convenient development of the Spark jobs. The server is configured with the password `pass`, which can be changed in `spark/jupyter/jupyter_notebook_config.py` or via `JUPYTER_NOTEBOOK_PASSWORD` if passed in the Docker Compose file.
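To give a feel for that workflow, here is the kind of exploratory cell I would run in such a notebook. This is a minimal sketch: the master URL, service names, database name, and credentials are assumptions based on the setup described here, not code copied from the repo, and it presumes the PostgreSQL JDBC driver is available on the cluster's classpath.

```python
from pyspark.sql import SparkSession

# Attach to the standalone cluster from the notebook
spark = (
    SparkSession.builder
    .appName("exploration")
    .master("spark://spark-master:7077")
    .getOrCreate()
)

# Peek at the app's orders table over JDBC (connection details assumed)
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://postgres:5432/retail")
    .option("dbtable", "orders")
    .option("user", "spark_reader")
    .option("password", "change-me")
    .load()
)

orders.show(5)
```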
### The Orchestrator

A Python script that communicates with Apache Spark via its REST API endpoint to submit *PySpark* scripts stored on HDFS in `hdfs:<namenode>/jobs/*` and "orchestrates" the pipeline (*Ingestion* -> *Transformation* -> *Serving*) via Spark's driver status API. All the PySpark files are in `spark/pipeline/`.

### Spark incremental database loading

The `ingestion.py` script implements a *bookmarking* mechanism, which lets Spark check for and set bookmarks during database ingestion so it only loads new data and doesn't strain the app's database with long-running queries each time the pipeline is executed. The bookmark is stored in HDFS in `hdfs:<namenode>/data/bronze/bookmark` as JSON.

### Hadoop cluster

The official image from Docker Hub, with minimal configuration changes in `hadoop/config`, where WebHDFS is enabled.

### The dashboard app

A Flask app that reads the CSV file stored in the *Gold* folder in HDFS and renders the resulting data as graphs. It is meant to be the data consumer for this pipeline, where business users can get their metrics.

## Spark Operations

### Bronze stage

Spark incrementally ingests the *orders* and *ordershistory* tables, partitioned by date, in *Parquet* format into the *Bronze* directory.

### Silver stage

Spark joins the two tables from the *Bronze* stage, filters out "INTRANSIT" records, and stores the result in the *Silver* directory in *Parquet* format.

### Gold stage

Spark pivots the joined table to get one record per order containing the order date and delivery date, then subtracts them to get the delivery time in hours. It then aggregates the result to get the order count and average delivery time per branch. Finally, it stores them as CSV files in the *Gold* directory.

## Exposed ports and services:

I've configured the majority of the services in the Docker Compose file to be accessible from localhost:

- **Dashboard App**: http://localhost:5000
- **Spark master node UI**: http://localhost:8080
- **Spark worker node 1 UI**: http://localhost:8081
- **Spark worker node 2 UI**: http://localhost:8082
- **Spark history server**: http://localhost:18080
- **Spark running driver UI**: http://localhost:4040
- **Jupyter server running on Spark cluster**: http://localhost:8888
- **Hadoop namenode UI**: http://localhost:9870

## Instructions

I assume that you have Python and Docker installed on your system.

- Go to the project's root directory.
- Create a Python virtual env and install the requirements in `./requirements.txt`
- Build the custom Spark Docker image (this could take a while):

```bash
docker build -t spark:homemade ./spark/
```

- Build the dashboard app Docker image:

```bash
docker build -t dashboard:homemade ./dashboard/
```

- Spin up the system:

```bash
docker compose up -d
```

- Activate the virtual env and run the retail app. You will see simulated transactions in your terminal.

```bash
python ./retailapp/retailapp.py
```

- Open a new terminal, initialize the HDFS directories, and copy the pipeline PySpark scripts to HDFS:

```bash
docker compose exec hadoop-namenode bash -c "/opt/hadoop/init.sh"
```

- Go to the dashboard app's URL (http://localhost:5000) and verify that there is no data yet.
- Wait a while for the retail app's database to populate, then run the Spark pipeline with the orchestrator:

```bash
python ./orchestrator/scheduler.py
```

- (Optional) You can check the progress from Spark's master node UI.
- After the pipeline finishes successfully (all stages end with FINISHED), visit the dashboard app's URL again and refresh.
You will now see the processed data.

- (Optional) You can rerun the pipeline and watch the dashboard's data change.

## Stopping the project

I didn't configure any named volumes in Docker, so once you stop the project, all the data is lost. To stop the project, just execute `docker compose down`.

## Things to consider if you want to make this into real-world project

- The custom *orchestrator* I used should be replaced by a proper orchestration tool such as [Apache Airflow](https://airflow.apache.org/), [Dagster](https://dagster.io/), etc.
- Likewise, instead of the custom dashboard app, a proper BI tool like [Power BI](https://app.powerbi.com/home), [Tableau](https://www.tableau.com/), or [Apache Superset](https://superset.apache.org/) would be more powerful and flexible.
- Bookmarking should be in the *Silver* stage as well.
- In a multi-tenant environment, controlling access to the Spark and Hadoop clusters is a must. Tools like [Apache Ranger](https://ranger.apache.org/) are usually used for this purpose.
- This architecture isn't limited to ingesting from one source; multiple sources can be ingested too, and data can be enriched from other sources in the *Silver* stage.
- The Spark operations in this project are rather simple. Much more complex operations can be done by leveraging Spark's powerful analytics engine.

## Key takeaways

- You can control almost everything in a Spark cluster from the `spark-defaults.conf` file.
- Spark has a REST endpoint to submit jobs and manage the cluster, but it isn't enabled by default.
- The Spark history server isn't enabled by default.
- Don't use the latest Java version for Spark even if the docs say it's supported. (I had compatibility issues that appeared down the road because I was using Java 17.)
- Developing Spark driver apps in a Jupyter notebook will make your life way easier.
- Hadoop also has an HTTP endpoint called WebHDFS, which isn't enabled by default either.
- `docker-entrypoint-initdb.d` is very useful for PostgreSQL initialization.
- Never use plaintext passwords in your code; use env vars or cloud secret services instead.

## References

The official docs of all the tools used.
fadygrab
1,891,324
Writing the best custom header UX for Zoraxy
Recently, I am working on improving the Zoraxy (my open source reverse proxy server written in...
0
2024-06-17T13:33:49
https://dev.to/tobychui/writing-the-best-custom-header-ux-for-zoraxy-367n
ux, ui, html, design
Recently, I have been working on improving [Zoraxy](https://github.com/tobychui/zoraxy) (my open source reverse proxy server written in Golang). As this is not a post about reverse proxies but about UX, I am going to sum up this project in one sentence: Zoraxy is a reverse proxy server built with a UX-first design.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7wwdn1oeg066snvep1o4.png)

One of the most challenging parts I recently encountered was designing a UI for the header rewrite rule sets.

## What is a header rewrite rule?

A header rewrite rule set is a few header key-value pairs that get injected into the HTTP request headers while the proxy is processing the request. The rewrite can happen in two directions: one is when the request is proxied from downstream to upstream (i.e., from a client, like a user's browser, to an origin server, like a web server), and the other is from upstream to downstream. On top of that, each direction supports two operations: you can either add a new header to the request, or delete a header from it.

## Design for beginners vs professionals

For those with a networking background, this is a really simple thing to understand. However, Zoraxy is aimed at users with little to no networking background, and expressing the correct meaning to such users gave me quite a headache while designing the UI for this function.

### UI for adding custom headers

Eventually, I came up with a solution. First, I created a web form that avoids the technical terms "upstream" and "downstream" in favor of a more easily understandable diagram. Next, I added the next step of the operation as a new set of buttons below the direction-choosing buttons.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/skyqzd02v95wg9opea0n.png)

Now we have a very clear and easy-to-operate web form for creating and editing custom headers. Next, how can we list the existing custom headers?

### UI for Listing Current Custom Headers

To list the custom headers with a direction indication, we can reuse the cognitive awareness the user built while creating the custom header. So, in the table of custom headers, I use the same colors and direction arrows to indicate the header add / remove directions. I also picked the same add and remove icons as the operation buttons. Now we can easily get a sense of how and when a header is being rewritten.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5c3vy708fbzucrfzjeyj.png)

And well, sometimes users forget their experience of creating custom headers. They might have set this up last year and, revisiting the table now, have forgotten what those arrows mean. So I added two lines of instructions at the bottom of the table. As for the "add" and "remove" icons, I think they are common sense enough that I don't need to explain them, so I didn't leave remarks for them.

## Permissions-Policy

Permissions-Policy is an HTTP header that defines which features / sensors your website can access.
It is a terribly long header that looks something like this:

```
Permissions-Policy: accelerometer=*, ambient-light-sensor=(self), autoplay=*, battery=(), camera=*, cross-origin-isolated=(), display-capture=*, document-domain=(self), encrypted-media=*, execution-while-not-rendered=*, fullscreen=(self), geolocation=*, gyroscope=(), keyboard-map=(self), magnetometer=*, microphone=*, midi=*, navigation-override=*, payment=*, picture-in-picture=(), publickey-credentials-get=*, screen-wake-lock=*, sync-xhr=(self), usb=*, web-share=*, xr-spatial-tracking=*, clipboard-read=*, clipboard-write=(self), gamepad=(), speaker-selection=(self), conversion-measurement=*, focus-without-user-activation=*, hid=(), idle-detection=*, interest-cohort=*, serial=(), sync-script=(self), trust-token-redemption=*, unload=(self), window-placement=*, vertical-scroll=*
```

If I were developing for professionals, I could have just left a text area and called it a day. However, Zoraxy is designed for beginners, and if I left a text area in place, people would fill in some weird things and open bug issues claiming Zoraxy is not working and buggy. That is why I designed a web form for this as well.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ikoofxaadfl78wz4gqnr.png)

At the bottom of the list, there is a separately colored section for experimental policies. This helps notify the user that these might not be supported by all browsers.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/njvk0e9qftrp7w90dd2r.png)

And this is probably the longest web form I have developed for this project. But at least this attention to detail makes sure no one fills invalid values into the Permissions-Policy header field (I hope).

Anyway, this project really challenged my UX design skills. I guess nowadays most devs just use a framework and make the UI look good, but a UI looking good is not the same as good UX. Sometimes, attention to details like this helps a lot in getting users to stick with your software and be willing to contribute.

Lastly, here is the Git repo if you are looking for an easy-to-use reverse proxy server and are interested in giving it a try.

{% embed https://github.com/tobychui/zoraxy %}

That is all for today. Have a nice day :)
tobychui
1,891,323
Short Overview: A Guide to Auth in JS 🛡️
Let's Take It Quick! After you start your project, make sure to install all...
0
2024-06-17T13:33:32
https://dev.to/buildwebcrumbs/creating-a-secure-login-system-in-react-a-quick-guide-633
webdev, javascript, react, beginners
## Let's Take It Quick!

After you start your project, make sure to install all dependencies:

```bash
npm install axios react-router-dom
```

This way, you already have all the dependencies to create authentication for your project! (Note: the snippets below use the react-router-dom v5 API, `Switch` and `useHistory`; in v6 these were replaced by `Routes` and `useNavigate`.)

## Login Component

The second step is creating your login component, so you can input the **login** information. You need to import `useState` to manage form input state and `useHistory` to redirect users after successful login.

```js
import React, { useState } from 'react';
import axios from 'axios';
import { useHistory } from 'react-router-dom';

const Login = () => {
  const [email, setEmail] = useState('');
  const [password, setPassword] = useState('');
  const history = useHistory();

  const handleSubmit = async (e) => {
    e.preventDefault();
    try {
      const response = await axios.post('/api/login', { email, password });
      if (response.data.success) {
        localStorage.setItem('token', response.data.token);
        history.push('/dashboard');
      } else {
        alert('Invalid credentials');
      }
    } catch (error) {
      console.error('Login error:', error);
    }
  };

  return (
    <form onSubmit={handleSubmit}>
      <h2>Login</h2>
      <input
        type="email"
        placeholder="Email"
        value={email}
        onChange={(e) => setEmail(e.target.value)}
        required
      />
      <input
        type="password"
        placeholder="Password"
        value={password}
        onChange={(e) => setPassword(e.target.value)}
        required
      />
      <button type="submit">Login</button>
    </form>
  );
};

export default Login;
```

## Setting Your Routes

In `src/App.js`, set up the routes. Import `BrowserRouter`, `Route`, and `Switch` from `react-router-dom` to define the routes. This sets up two routes: one for the **login** page and one for the **dashboard**.

```js
import React from 'react';
import { BrowserRouter as Router, Route, Switch } from 'react-router-dom';
import Login from './Login';
import Dashboard from './Dashboard';

function App() {
  return (
    <Router>
      <Switch>
        <Route path="/" exact component={Login} />
        <Route path="/dashboard" component={Dashboard} />
      </Switch>
    </Router>
  );
}

export default App;
```

## Creating Private Routes

Create the `PrivateRoute.js` component to protect your routes. This component will check if the user is authenticated before granting access.

```js
import React from 'react';
import { Route, Redirect } from 'react-router-dom';

const PrivateRoute = ({ component: Component, ...rest }) => {
  const isAuthenticated = !!localStorage.getItem('token');

  return (
    <Route
      {...rest}
      render={(props) =>
        isAuthenticated ? <Component {...props} /> : <Redirect to="/" />
      }
    />
  );
};

export default PrivateRoute;
```

Update your routing in `App.js` to use the `PrivateRoute` for the **dashboard**.

```js
<PrivateRoute path="/dashboard" component={Dashboard} />
```

## Backend API

This code validates a user's email and password and returns a JSON Web Token (JWT).

```js
const express = require('express');
const jwt = require('jsonwebtoken');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json());

app.post('/api/login', (req, res) => {
  const { email, password } = req.body;

  if (email === 'test@example.com' && password === 'password123') {
    const token = jwt.sign({ email }, 'your_jwt_secret', { expiresIn: '1h' });
    return res.json({ success: true, token });
  }

  res.json({ success: false, message: 'Invalid credentials' });
});

app.listen(5000, () => {
  console.log('Server is running on port 5000');
});
```

## Tips

After a successful **login**, store the `JWT token` in `localStorage`. To keep it secure:

- Do not store sensitive information in the token.
- Use HTTPS to encrypt data in transit.
- Implement token expiration and refresh mechanisms (a sketch for attaching the stored token to requests follows below).

Thank you. Please Follow: [Frontend Ai](https://tools.webcrumbs.org)
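### Bonus: Attaching the Token to Requests

As promised above, a small sketch (it assumes the same `localStorage` key used earlier) showing how an axios request interceptor saves you from repeating the `Authorization` header on every call:

```js
import axios from 'axios';

const api = axios.create({ baseURL: '/api' });

// Attach the stored JWT to every outgoing request, if one exists
api.interceptors.request.use((config) => {
  const token = localStorage.getItem('token');
  if (token) {
    config.headers.Authorization = `Bearer ${token}`;
  }
  return config;
});

export default api;
```

Import this `api` instance instead of the bare `axios` wherever you call protected endpoints.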
m4rcxs
1,891,322
Terraform Drift Detection and Remediation
Organizations using Terraform to manage their infrastructure as code (IaC) need a reliable solution...
0
2024-06-17T13:32:56
https://spacelift.io/blog/terraform-drift-detection
terraform, devops, infrastructureascode
Organizations using Terraform to manage their infrastructure as code (IaC) need a reliable solution to ensure their infrastructure's actual state aligns with its intended state. Terraform stores information about the infrastructure it manages in [state files](https://spacelift.io/blog/terraform-state), and any change to the infrastructure Terraform manages that Terraform has not triggered is called "drift."

In this post, we will explore the reasons why drift happens, its associated risks, and the options available to remediate it.

## What is Terraform drift?

Infrastructure drift refers to changes made outside of your Infrastructure as Code (IaC) tool to resources that are managed by it. In this context, Terraform drift refers to resources that are managed by Terraform and changed outside the Terraform workflow. This can happen for several reasons, such as:

- Manual changes - As a DevOps engineer, when you have severity-one issues, you may make manual changes just to get the systems up and running, but this also means that you have to make these changes in the code afterward. Sometimes, you forget that you've made these changes, and your configuration will drift.
- External processes - You may have automated processes outside Terraform's control, such as autoscaling actions triggered by cloud providers or external scripts that make changes to your infrastructure.
- Resource eviction - Due to cost-saving measures and policy violations, resources can be evicted or deleted, which can cause drift.

Drift is a significant concern and can lead to inconsistencies that complicate infrastructure management.

## Common sources of infrastructure drift

Consistency is a key goal when managing infrastructure using Terraform. With IaC, you can keep multiple environments consistent, irrespective of how many times they are recreated. Infrastructure drift undermines that consistency. Here are some of its common sources:

### Manual changes

Manual changes are a primary cause of infrastructure drift. These can be made either deliberately or unintentionally. If a deployed system's configuration needs to be changed to address a critical production incident, doing it manually can be the fastest way to fix it. Similarly, certain network configurations are tweaked for testing purposes to address a certain network security vulnerability. These are examples of intentional manual changes to the infrastructure.

However, sometimes users are not even aware they have made a manual change to the infrastructure. Identifying the components managed by Terraform is not always intuitive. When users log into the web console, they may perform specific tasks on resources without any knowledge of Terraform's state file. Executing scripts that make API calls to the cloud platform is another possible source of unintentional change.

Irrespective of whether the change is deliberate or unintentional, if the changes are not ported back into the Terraform configurations, the result is drift.

### Automation tools

Organizations and large teams implement multiple [automation tools](https://spacelift.io/blog/devops-automation-tools) to streamline operations. These tools all have specific workflows and lifecycle management capabilities, and responsibilities can overlap if the boundaries of influence for these tools are unclear or wrongly implemented. For example, using Terraform for infrastructure management alongside a configuration management tool like Ansible creates a high possibility of infrastructure drift.
Although Ansible is responsible for managing the application layer of a business service, [it also has infrastructure provisioning capabilities](https://spacelift.io/blog/ansible-vs-terraform). Ironically, the more automation tools you implement, the more manual effort is required to reconcile the changes they create in Terraform state files.

### User scripts

Cloud platforms facilitate the triggering of certain event-driven, user-defined scripts. These scripts allow users to perform actions on a resource or execute API calls to modify another resource.

For example, when creating Linux-based EC2 instances in AWS, it is possible to execute bash/shell scripts when the instance boots. These scripts are provided in the `user_data` field when creating an instance from the web console. Similarly, Terraform provides a way to supply the same using IaC.

Although providing `user_data` is not mandatory, it enables various automation capabilities to manage virtual machines. `user_data` scripts are used to run upgrades, install security patches, install dependencies, invoke various system processes, etc., as soon as the system boots.

Bash and shell scripts are powerful because they can change any network configuration of the system and execute API calls to modify other resources. This has the potential to introduce drift in infrastructure.

💡 You might also like:

- [Terraform Security Best Practices](https://spacelift.io/blog/terraform-security)
- [How to Manage On-premise Infrastructure with Terraform](https://spacelift.io/blog/terraform-on-premise)
- [Automating Infrastructure Deployments Using Terraform](https://spacelift.io/blog/terraform-automation)

## Risks and impact of infrastructure drift

[Terraform IaC](https://spacelift.io/blog/terraform-infrastructure-as-code) manages the infrastructure's end-to-end lifecycle. It is responsible for creating and recreating cloud resources and consistently introducing changes. To do this successfully, up-to-date information is saved in the state files.

Essentially, infrastructure drift is untracked changes. These untracked changes pose risks of varying severity and could have a drastic impact on the system. Conversely, some changes may be beneficial for the system, improving attributes like reliability, security, and performance.

Given the nature of infrastructure drift in the context of Terraform IaC, if it's not addressed, it can create blind spots while managing the infrastructure in scope. Infrastructure changes that fall out of the scope of Terraform management go unnoticed.

### Security vulnerabilities

Infrastructure drift exposes the system's security vulnerabilities to attackers. This has the potential to cause serious damage not just to the system but to businesses in general. For example, when security group rules are manually modified to test a certain use case for public access, this can have multiple impacts, ranging from data breaches to the entire system being compromised.

### Compliance violations

Automated policy execution or manual configuration changes can lead to breaches of regulatory requirements --- for example, drift that results in the personal data of users being exposed to the public or actions that enable unauthorized access to data and resources.

### Performance and operational difficulties

Infrastructure drift can impair system performance because of latency or reduced network throughput, underprovisioning of resources, disabling of auto-scaling configurations, etc.
Drift also makes it challenging to identify, analyze, and investigate the root cause of issues. Unknown and untracked changes introduce challenges that increase downtime and also impact the mean time to resolution.

### Higher costs

Changes caused by infrastructure drift can have wide-ranging financial implications. Provisioning of unutilized cloud resources generates unnecessary cloud platform costs, and, because the changes are not tracked, the cost of remediation and maintenance also increases.

You can learn more about drift in this article: [Infrastructure Drift Detection and How to Fix It With IaC Tools](https://spacelift.io/blog/drift-detection).

## Terraform drift example

A simple example: suppose your Terraform configuration creates three EC2 instances. After these changes are deployed, someone goes into the AWS console and deletes one of them manually. This causes drift because the actual state of your infrastructure no longer matches your Terraform state or your Terraform configuration.

To resolve the drift, you have two options:

- Reapply the Terraform code to recreate the missing instance.
- Change the Terraform configuration to reflect the current state of your infrastructure.

## How to detect Terraform drift?

When infrastructure drift occurs, the first challenge is to identify it. As we have seen, drift has multiple sources, so it is not possible to track where and when the drift happens without a monitoring mechanism.

You can identify the existence of drift by running a couple of Terraform commands. The [`terraform refresh`](https://spacelift.io/blog/terraform-refresh) command refreshes the state file, and the [`terraform plan`](https://spacelift.io/blog/terraform-plan) command provides a plan of action by analyzing the state file and the current configuration. The output of the plan command helps us identify drift.

Without changing the Terraform config, if the execution of a plan command suggests either modifying or recreating a certain resource, this indicates that something else has modified the infrastructure. Detection, however, depends on when exactly the commands are run; drift is often discovered only when we prepare and check the status before implementing other intended changes. The sketch below shows one way to automate this check.
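Teams often script this check. The following is a minimal Node.js sketch wrapping `terraform plan -detailed-exitcode`, which exits with code 0 when there are no changes, 1 on error, and 2 when the plan contains changes. The script name, structure, and output format here are purely illustrative, not official tooling, and bear in mind that a non-empty plan can also reflect intended configuration changes that simply have not been applied yet, so run it from a fully applied state.

```js
// drift-check.js - run `terraform plan` and interpret its exit code.
// With -detailed-exitcode, terraform exits 0 (no changes),
// 1 (error), or 2 (changes present, i.e., possible drift).
const { spawnSync } = require('child_process');

function checkDrift(workingDir) {
  const result = spawnSync(
    'terraform',
    ['plan', '-detailed-exitcode', '-no-color', '-input=false'],
    { cwd: workingDir, encoding: 'utf8' }
  );

  switch (result.status) {
    case 0:
      return { drifted: false, message: 'No drift detected.' };
    case 2:
      return { drifted: true, message: 'Drift detected:\n' + result.stdout };
    default:
      throw new Error('terraform plan failed:\n' + result.stderr);
  }
}

// Check the current directory and exit non-zero on drift, which makes
// the script easy to wire into a cron job or CI pipeline.
const { drifted, message } = checkDrift(process.cwd());
console.log(message);
process.exit(drifted ? 2 : 0);
```

Run on a schedule, the non-zero exit code makes drifted workspaces easy to flag in CI dashboards or alerting.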
## Monitoring drift with Spacelift

Periodic monitoring of IaC-managed infrastructure to proactively check for drift is challenging. [Drift detection provided by Spacelift](https://spacelift.io/drift-detection) helps to identify and highlight infrastructure drift promptly. Configuring a drift monitor is as simple as configuring a cron job. To start, select the stack you wish to configure drift detection for and navigate to Settings > Scheduling. A couple of notable control options are provided here:

1. Reconcile: When this is enabled, Spacelift automatically remediates the drift. When infrastructure drift is identified, Spacelift triggers the "terraform apply" workflow to restore the original state of the infrastructure as per the Terraform configuration.
2. Schedule: This is a simple cron job notation that determines how often Spacelift scans and compares the state of the deployment. In the example below, the drift detection happens every 15 minutes.

![scheduling drift detection](https://spacelift.io/_next/image?url=https%3A%2F%2Fspaceliftio.wpcomstaging.com%2Fwp-content%2Fuploads%2F2023%2F04%2Fscheluding-drift-detection.png&w=3840&q=75)

When drift is detected, it is represented in a very intuitive way, making it easy to interpret its impact. The screenshot below shows that one of the network components has drifted.

![drift overview](https://spacelift.io/_next/image?url=https%3A%2F%2Fspaceliftio.wpcomstaging.com%2Fwp-content%2Fuploads%2F2023%2F04%2Fdrift-overview.png&w=3840&q=75)

Clicking on the drifted block quickly reveals details of the drift.

![details of the drift](https://spacelift.io/_next/image?url=https%3A%2F%2Fspaceliftio.wpcomstaging.com%2Fwp-content%2Fuploads%2F2023%2F04%2Fdetails-of-the-drift.png&w=3840&q=75)

Read more in the [documentation](https://docs.spacelift.io/concepts/stack/drift-detection).

## Understanding and reconciling drift with Spacelift

Whenever drift is detected, it is important to identify the factors that caused it. As discussed previously, the changes introduced outside the scope of Terraform could be either desirable or unwanted. Here are some examples:

1. Changes caused by the scopes of automation tools overlapping are usually desirable. However, the responsibility for them is not clearly defined.
2. Changes introduced manually to troubleshoot a related issue elsewhere but overlooked and not reverted are unwanted because they expose the system to various vulnerabilities and could have cost implications.
3. Hotfixes implemented to address issues in critical services can be either permanent fixes or temporary workarounds, which makes it difficult to classify them as either desirable or unwanted. Further investigation is needed to decide.

As seen from the examples above, understanding drift needs some analysis. The course of remediation action usually boils down to the following:

1. If the changes are desirable, import the configuration under Terraform management scope.
2. If the changes are not desired, reinstate the original state by running "terraform apply".
3. If a resource is not supposed to be managed by Terraform, disassociate it from the Terraform state and configuration.

When drift detection is enabled, Spacelift highlights the drift in the very next run; how quickly depends on how frequently the drift detection runs are configured. When the "Reconcile" option in the drift detection schedule is enabled, Spacelift automatically triggers Terraform runs to reinstate the original configuration. This is appropriate when the scope is clearly defined and resource management policies are in place.

However, if the boundaries are not clearly defined, you should turn off the "Reconcile" option. This is because there may be a need to either import the drift or disassociate infrastructure from the current Terraform configuration. The drift detection schedule again plays an important role in confirming mitigation actions post-import/disassociation.

## Terraform drift detection tools

Several tools can help you identify drift, and some of them can even remediate the drift for you. Below is a list of these tools:

- Terraform drift detection documentation
- Brainboard
- Terratest
- Driftctl
- TestInfra
- Kitchen-Terraform

### Terraform drift detection documentation

The Terraform drift detection documentation offers a comprehensive guide to identifying and managing drift within your Terraform configurations. It outlines how to use some of Terraform's native features, such as plan and apply, to detect changes not reflected in your Terraform configuration.

### Brainboard

Brainboard is a cloud architecture design tool that offers several features to manage and visualize your cloud infrastructure effectively.
It can help you identify discrepancies between your deployed resources and your Terraform configurations, thus making it easier for engineers to address drift and enforce compliance in their IaC definitions.

### Terratest

While [Terratest](https://spacelift.io/blog/what-is-terratest) is a Go library for testing infrastructure code, it can be used to automate testing of infrastructure states, indirectly helping to identify drift by validating the deployed resources against the Terraform configuration.

### Driftctl

Driftctl is a dedicated drift detection tool for Terraform that scans your Terraform state and compares it with the actual state of your resources. This approach helps to quickly identify and address drift, ensuring your infrastructure aligns with the IaC definition.

### TestInfra

TestInfra is another testing framework for your infrastructure, and even though it is not dedicated to Terraform, it can be used to test the state of the infrastructure managed by Terraform. It helps in identifying configuration drift by asserting the actual state of your infrastructure against expected configurations.

### Kitchen-Terraform

Kitchen-Terraform integrates the Test Kitchen automation tool with Terraform, allowing you to define tests for your Terraform configurations. Similar to Terratest and TestInfra, it can verify your configurations against the actual state of your infrastructure, thus detecting drift.

## Key points

Managing infrastructure drift is challenging because it may originate from any source, and it is difficult to establish with absolute certainty who changed what and when. The risk potential of such drift can range from low to critical, and the impact can affect the system's security, cost, and reliability. Terraform IaC and state files are the only reliable and predictable sources of information about the managed infrastructure.

Spacelift's drift detection encompasses monitoring and an intuitive UI to highlight the drift (and optionally automate the reconciliation). This makes it easy to identify what has changed and how to proceed with investigating it.

_Written by Sumeet Ninawe and Flavius Dinu_
spacelift_team
1,891,321
Osama : not a Web Developer
Osama Bin Laden America 1993- world trade center - 6 died 1998 - U.S....
0
2024-06-17T13:32:35
https://dev.to/keshavgbpecdel/osama-not-a-web-developer-5dbb
![Osama not a web dev](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j6sub9j5ov0uog6p6ghd.jpg)

## Osama Bin Laden

## America

- 1993 - World Trade Center - 6 died
- 1998 - U.S. Embassies - 224 died
- 2001 - 9/11 - 3K died

`Note: Operation Infinite Reach had failed earlier`

## Operation Neptune Spear

- US Navy SEALs (United States Navy Sea, Air, Land)
- 1 May 2011: Jalalabad (US Navy SEAL base) to Abbottabad (where Osama was hidden)
- Helicopters - Chinook (backup), Black Hawks (Chalk 1 & 2 - stealth technology, fabric layer)
- Chalk 2 landed fine - Chalk 1 got caught in an air vortex

### Firing sequence

- Abu Ahmad, Abrar, Khalid (Osama's son)
- Team had night-vision goggles, camera
- Obama, Joe Biden, Hillary Clinton watching

```
Geronimo EKIA - Osama enemy killed in action
```

## Osama story:

- Saudi Arabia - Riyadh - born 10 March 1957
- King Abdul Aziz University - civil engineering
- Afghanistan - AL QAEDA = "the base" - vs USSR
- Saudi-America relations
- U.S. Naval Station Guantanamo Bay, Cuba - jail torture
- Thrown in the Arabian Sea - with Muslim rituals
keshavgbpecdel
1,891,318
Made an online HTML editor
Okay, so this is the latest addition to my side project. I made an HTML editor. To be honest, it...
0
2024-06-17T13:28:07
https://dev.to/anjandutta/made-an-online-html-editor-mji
html, webdev, javascript, beginners
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/13mdf4n45vdhzescst0i.png)

Okay, so this is the latest addition to my side project: I made an HTML editor. To be honest, it was not that hard, but it was special! I enjoyed the magical moment when I first saw my HTML code rendering in the preview window, in real time, no less. I know it's not a big deal for many, but for me it feels like the completion of a long-awaited thing: the last one (at least for now) that the project needed.

Along with that, I can save my code snippets and share them with others. I'm very excited because I can showcase and share excellent web snippets from now on. And now I feel like I have added support for a fairly good number of programming languages.

People often ask what I offer that's different from jsFiddle or Pastebin. My answer is "simplicity". See, most of my traffic is organic: I am not doing any paid promotion so far, I don't run ads, and until now I have been spending my own money to run this [side project](https://www.freecodecompiler.com). The biggest payback for me is seeing people use my product. They are saving code snippets and registering themselves, maybe for a school project or to practice code; who knows... What matters is that all of this happens with no major differentiator, and I'm loving it this way.

There are plans for more developer-friendly front-end and backend interview-related features. I'm always looking for more ways to simplify the dev journey.
anjandutta
1,891,317
Why Hire a Dedicated Vue.js Developer?
Streamlining Web Development with Efficiency &amp; Flexibility Gentle Learning Curve: Vue.js...
0
2024-06-17T13:24:02
https://dev.to/coderowersoftware/why-hire-a-dedicated-vuejs-developer-5fk5
webdev, vue, javascript, developers
**Streamlining Web Development with Efficiency & Flexibility**

**Gentle Learning Curve:** Vue.js features a concise syntax and user-friendly design, enabling rapid adoption even for beginners in JavaScript & HTML. This accessibility reduces barriers and speeds up project timelines.

**Component-Based Architecture & Reactivity:** Vue.js applications are assembled from reusable, self-contained components, and Vue's reactivity means data changes reflect in the UI automatically, simplifying state management and reducing manual DOM manipulation. This streamlines development and maintains clean code.

**Performance Optimization:** Vue.js prioritizes performance, employing techniques such as virtual DOM and efficient rendering. This ensures a smooth user experience, particularly in dynamic, data-driven applications.

**Versatility & Gradual Adoption:** Vue.js is incredibly versatile, spanning from small interactive elements to full single-page applications (SPAs). It also supports incremental integration into existing projects for a gradual adoption path.

**Rich Ecosystem:** Though not as expansive as some counterparts, Vue.js provides a curated ecosystem of third-party libraries, tools, and UI components. This enhances Vue's capabilities and simplifies development.

**Active Community & Support:** The Vue.js community is passionate, supportive, and rapidly growing. You'll find a wealth of resources, tutorials, and assistance online when needed.

**Expertise & Efficiency:** Experienced Vue.js developers efficiently build top-notch apps, mastering structure, data management, and optimization.

**Deep Understanding:** Their deep grasp of core Vue.js concepts like components, reactivity, and the build process ensures efficient and maintainable development.

**Ecosystem Navigation:** Familiarity with popular Vue.js tools and libraries saves time and effort during development. They'll know the right tools for the job.

**UI/UX Focus:** A dedicated developer focuses on crafting a user-centric, visually appealing experience, ensuring your application is both functional and delightful to use.

**Long-Term Support:** They can provide ongoing maintenance and support for your Vue.js application, keeping it stable and up-to-date as your project evolves.

**[Hire a dedicated Vue.js Developer for Your Dream Project.](http://coderower.com/)**
coderower
1,891,232
The Advantages of .Net: Build Robust Scalable with .Net and Its Dedicated Developers
Cross-platform Compatibility: .NET Core, a core part of the .NET framework, allows you to build...
0
2024-06-17T13:19:56
https://dev.to/coderowersoftware/the-advantages-of-net-build-robust-scalable-with-net-and-its-dedicated-developers-bm9
dotnet, developer, webdev, developers
**Cross-platform Compatibility:** .NET Core, the cross-platform implementation of .NET, allows you to build applications that run seamlessly on Windows, Linux, and macOS. This flexibility caters to diverse deployment needs.

**Object-oriented Approach:** .NET's object-oriented foundation promotes code reusability, maintainability, and modular development. Imagine building with well-organized code blocks!

**Rich Ecosystem of Tools and Libraries:** The .NET ecosystem offers a vast collection of prebuilt tools and libraries that streamline development and extend functionality. This saves time and effort compared to building everything from scratch.

**Focus on Security:** Microsoft prioritizes security in the .NET framework. Built-in features and best security practices help create applications resistant to vulnerabilities.

**Strong Community and Support:** The .NET community is large and active, providing many resources, tutorials, and troubleshooting assistance. You'll never be without help if you need it.

**Expertise and Efficiency:** A seasoned .NET developer can leverage the framework's strengths to build high-quality, secure applications efficiently. They'll be well-versed in best practices for cross-platform development.

**Deep Understanding of Framework:** Their in-depth knowledge of .NET libraries, tools, and the build process ensures efficient and maintainable development.

**Performance and Scalability:** They can optimize your application for performance and scalability, ensuring it can handle growing user bases and complex functionalities.

**Long-term Maintenance and Support:** A dedicated developer can provide ongoing maintenance and support for your .NET application, keeping it stable, secure, and up-to-date with the latest advancements in the framework.

**[Hire a Dedicated .Net Developer for Your Dream Project.](http://coderower.com/)**
coderower
1,891,231
If you are considering a car purchase
If you are considering a car purchase and value high quality service, I highly recommend looking into...
0
2024-06-17T13:19:18
https://dev.to/lexzub/if-you-are-considering-a-car-purchase-4fc7
If you are considering a car purchase and value high-quality service, I highly recommend looking into Ford Vidi, especially if you are interested in the Ford Kuga. I was personally thrilled with the level of service I received. The professionalism and attention to detail make the process of choosing a vehicle easy and enjoyable. Visit their website at [.../avtomobili/kuga/](https://ford-vidi.com.ua/ua/avtomobili/kuga/) and see for yourself how much Ford Vidi cares about its customers.
lexzub
1,891,473
100 Salesforce Project Manager Interview Questions and Answers
A Salesforce Project Manager is responsible for overseeing the planning, execution, and delivery of...
0
2024-06-18T15:09:21
https://www.sfapps.info/100-salesforce-project-manager-interview-questions-and-answers/
blog, interviewquestions
--- title: 100 Salesforce Project Manager Interview Questions and Answers published: true date: 2024-06-17 13:15:00 UTC tags: Blog,InterviewQuestions canonical_url: https://www.sfapps.info/100-salesforce-project-manager-interview-questions-and-answers/ --- A Salesforce Project Manager is responsible for overseeing the planning, execution, and delivery of Salesforce projects within an organization. This role involves coordinating with cross-functional teams, managing timelines and budgets, and ensuring that the project aligns with the organization’s strategic goals. The Salesforce Project Manager ensures that the implementation of Salesforce solutions meets the specified requirements and delivers the expected business benefits. This position demands a strong understanding of Salesforce capabilities, project management methodologies, and excellent communication and leadership skills. ### Common Requirements for a Salesforce Project Manager - Bachelor’s degree in Computer Science, Information Technology, Business Administration, or a related field - Project Management Certification (PMP, PRINCE2, or similar) - Strong understanding of Salesforce products and services, including Sales Cloud, Service Cloud, Marketing Cloud, and others - Proven experience managing Salesforce projects from inception to completion - Proficiency in project management software (e.g., MS Project, JIRA, Asana) - Excellent communication and stakeholder management skills - Ability to lead and coordinate cross-functional teams - Experience with Agile and Waterfall project management methodologies - Strong problem-solving skills and attention to detail - Ability to manage multiple projects simultaneously - Experience with Salesforce integration and customization - Understanding of data migration and management within Salesforce - Ability to create and manage project documentation, including scope, schedule, and budget - Knowledge of change management principles - Salesforce certifications (e.g., Salesforce Certified Administrator, Salesforce Certified Platform App Builder) are a plus ## Common Salesforce PM Questions and Answers 1. **What is Salesforce?** Salesforce is a cloud-based software company known for its customer relationship management (CRM) product, which includes sales, service, marketing automation, analytics, and application development. 1. **How does Salesforce help businesses?** Salesforce provides businesses with an integrated platform that facilitates marketing, sales, commerce, and service through a single interface, improving efficiency and enhancing the customer experience. 1. **What is CRM? Why is it important?** Customer Relationship Management (CRM) is a strategy and technology for managing all your company’s relationships and interactions with potential and current customers. It helps improve business relationships, streamline processes, and improve profitability. 1. **What are some key features of Salesforce?** Some key features include Sales Cloud, Service Cloud, Marketing Cloud, Salesforce Analytics, Salesforce App Development, and more. These tools help organizations manage customer information, track sales leads, conduct and monitor marketing campaigns, and provide service post-sale. 1. **Can you explain the concept of a ‘Salesforce record’?** A Salesforce record is an entry stored in Salesforce that contains information required to track something specific, such as a customer, interaction, transaction, or case. 1. 
**What is the role of a Salesforce PM?**
A Salesforce Product Manager oversees the development and enhancement of Salesforce products, ensuring they meet user needs and continue to innovate within the marketplace.

1. **What is the Salesforce AppExchange? How is it useful?**
The Salesforce AppExchange is a marketplace where third-party businesses can develop and sell their Salesforce-compatible applications. It's useful for extending Salesforce functionalities beyond the standard offerings.

1. **How would you prioritize feature development in Salesforce?**
Feature development should be prioritized based on user needs, market trends, strategic business objectives, and the potential return on investment (ROI).

1. **Describe a challenging Salesforce integration project and how you managed it.**
Answers should outline a specific project, the challenges faced such as technical issues or resistance to change, and the strategies used to overcome these challenges, including stakeholder communication and technical solutions.

1. **What is Salesforce Lightning?**
Salesforce Lightning is the latest Salesforce interface, enhancing speed, usability, and customization capabilities compared to its predecessor, Salesforce Classic.

1. **How do you handle feedback from multiple stakeholders?**
I prioritize open communication, regular updates, and feedback sessions to ensure that all stakeholder inputs are considered and integrated into the product development process in a balanced manner.

1. **What methodologies do you prefer in product management?**
Agile methodologies are preferable due to their flexibility, focus on collaboration, and ability to adapt to change rapidly, which is crucial in a dynamic environment like Salesforce.

1. **Explain a time when you had to make a decision without all the necessary information.**
Answers should detail a specific instance, the limited information available, the decision-making process used, and the outcome.

1. **How do you ensure product scalability in Salesforce?**
By designing features and integrations that can easily scale with minimal adjustments, ensuring that the infrastructure can handle increased loads, and regularly reviewing performance metrics.

1. **What do you think sets Salesforce apart from its competitors?**
Salesforce's comprehensive suite of services, its ecosystem of apps via AppExchange, constant innovation, and strong community engagement set it apart from competitors.

1. **Can you describe Salesforce's typical implementation process?**
The implementation process generally involves planning, system design, customization, integrations, testing, training, and deployment phases.

1. **What tools do you use to manage Salesforce projects?**
Tools like JIRA, Asana, Trello, and Salesforce's own project management tools such as Salesforce Scheduler or Salesforce Project.

1. **How do you measure the success of a new feature or product?**
Success can be measured through metrics such as user adoption rates, performance enhancements, revenue impact, and customer satisfaction feedback.

1. **What are some common challenges faced by Salesforce PMs?**
Challenges may include keeping up with Salesforce's rapid pace of innovation, managing complex integrations, and addressing the diverse needs of a broad user base.

1. **How do you stay updated with Salesforce developments?**
Regularly attending Salesforce events, participating in training sessions, following Salesforce blogs, and engaging with the Salesforce community.
Each answer should be adapted based on personal experience and specific job requirements to ensure authenticity during interviews.

**You might be interested:** [Salesforce Technical Interview Questions](https://www.sfapps.info/salesforce-developer-interview-questions/)

### Insight:

When interviewing candidates for a Salesforce Project Manager role, it's crucial to focus on their ability to blend technical expertise with strategic project management skills. Common Salesforce PM interview questions often delve into their experience with Salesforce tools, the methodologies they employ, and how they handle project challenges. Look for answers that highlight their problem-solving capabilities, understanding of Salesforce's ecosystem, and ability to manage cross-functional teams. Candidates should demonstrate a strong grasp of both the technical aspects and the business impacts of their projects. Effective communication, leadership qualities, and a track record of successful project deliveries are key indicators of a promising candidate. Use these Salesforce project manager interview questions as a guide to assess whether the candidate can not only manage but also drive Salesforce projects towards achieving organizational goals.

## Salesforce Program Manager Interview Questions and Answers

1. **What is the role of a Salesforce Program Manager?**
A Salesforce Program Manager coordinates and oversees multiple Salesforce projects within an organization, ensuring they align with the business objectives, are delivered on time, and stay within budget.

1. **How do you manage risks in Salesforce projects?**
I identify potential risks early through stakeholder meetings and regular reviews, prioritize them based on impact and likelihood, and implement mitigation strategies, including contingency plans.

1. **Describe your experience with Salesforce implementations.**
A candidate should provide a summary of your experience, focusing on specific projects, the scale of implementations, the challenges faced, and the outcomes.

1. **How do you ensure project alignment with strategic goals?**
Regular communication with stakeholders to reaffirm strategic objectives, align project milestones and KPIs with these goals, and adjust projects as needed to ensure alignment.

1. **What methodologies do you use for managing Salesforce programs?**
I primarily use Agile for its flexibility and responsiveness, but I adapt the methodology to fit the specific needs of the program, which can sometimes include blending in elements of Waterfall for certain deliverables.

1. **How do you handle changes in project scope?**
I evaluate the impact of scope changes on resources, timelines, and outcomes, discuss them with stakeholders for approval, and update project documentation and plans accordingly.

1. **Can you explain how you manage and prioritize multiple projects?**
I prioritize projects based on strategic importance, urgency, and resource availability, using tools like Gantt charts and project management software to keep track of progress and dependencies.

1. **How do you measure the success of a Salesforce program?**
Success is measured through specific KPIs such as project delivery times, budget adherence, alignment with strategic outcomes, user adoption rates, and overall business impact.

1. **Describe a challenging Salesforce project you managed. What was the outcome?**
A candidate must provide a detailed example, including the nature of the challenge, how you addressed it, and the project's result.

1.
**What is your experience with Salesforce integrations?** I have managed several projects involving integrations with ERP systems, marketing automation, and other third-party applications, focusing on ensuring seamless data flow and functionality. 1. **How do you manage project communication?** I establish clear communication channels and schedules, use project management tools to share updates and documentation, and hold regular meetings with all stakeholders. 1. **What tools do you prefer for Salesforce project management?** I utilize tools like Asana, JIRA, and Salesforce’s own Project Management tool to track progress, manage tasks, and facilitate collaboration. 1. **How do you handle team conflicts?** I address conflicts by listening to all parties involved, understanding the issues, and facilitating discussions to reach a consensus or fair resolution. 1. **What strategies do you use to ensure team productivity?** I set clear goals, provide necessary resources, maintain open lines of communication, and implement regular check-ins to keep the team motivated and on track. 1. **How do you stay updated on Salesforce features and updates?** I regularly attend Salesforce webinars and training, participate in Salesforce user groups, and follow relevant blogs and forums to stay current. 1. **Describe how you would onboard a new team member into a Salesforce project.** I would provide comprehensive training on Salesforce functionalities relevant to the project, introduce them to team members, review project documentation with them, and align them with project goals and timelines. 1. **What is your approach to stakeholder management?** I engage with stakeholders early and often, ensuring their needs and expectations are clearly understood and considered throughout the project lifecycle. 1. **How do you ensure data security in Salesforce projects?** I ensure adherence to best practices such as using role-based access controls, regular audits, and compliance checks, and I collaborate with IT security teams to implement robust security measures. 1. **Can you describe a time when you had to escalate an issue? What was the situation and outcome?** A candidate should provide a detailed scenario where escalation was necessary, why it was escalated, how you handled the communication, and what was the result. 1. **What do you see as the biggest challenge facing Salesforce programs today?** Keeping up with the rapid evolution of Salesforce technologies and ensuring that the skills and knowledge within the team are up-to-date to leverage these advancements effectively. These Salesforce program manager interview questions and answers are designed to demonstrate your experience and skills in managing complex Salesforce programs. Customize your responses based on your personal experiences to provide authenticity and depth during your interview. ### Insight: Interviewing candidates for a Salesforce Program Manager role requires a focus on their ability to oversee multiple projects and align them with strategic business objectives. Common questions should assess their experience with large-scale Salesforce implementations, their understanding of program management principles, and their ability to handle complex stakeholder relationships. Look for candidates who can demonstrate a proven track record in managing program-level initiatives, optimizing resource allocation, and delivering consistent results across various projects. ## Salesforce Release Manager Interview Questions and Answers 1. 
**What is the role of a Salesforce Release Manager?** A Salesforce Release Manager oversees the release process, ensuring smooth deployment of changes and updates to the Salesforce platform. This includes managing release schedules, coordinating with development and operations teams, and ensuring that all changes are properly tested and documented. 1. **Can you describe your experience with Salesforce release management?** A candidate should provide a summary of your experience, focusing on specific projects, the release management processes you have implemented, and the outcomes of those projects. 1. **What tools do you use for managing Salesforce releases?** I use tools such as Jenkins, Git, Salesforce DX, and change management tools like Jira and Asana to manage and automate the release process, ensuring consistent and error-free deployments. 1. **How do you ensure the quality of releases?** I ensure quality by implementing rigorous testing processes, including unit tests, integration tests, and user acceptance testing (UAT). Additionally, I use continuous integration/continuous deployment (CI/CD) pipelines to automate testing and deployment. 1. **How do you handle a failed deployment?** In the event of a failed deployment, I follow a rollback plan to revert to the last stable version, analyze the root cause of the failure, and implement fixes before attempting the deployment again. 1. **Can you explain the concept of a sandbox in Salesforce?** A sandbox is a copy of the production environment used for development, testing, and training without affecting live data. Sandboxes allow teams to work on changes in isolation before deploying them to production. 1. **What is Salesforce DX, and how do you use it in release management?** Salesforce DX is a set of tools that improve the development and release management process in Salesforce. It includes features like scratch orgs, source-driven development, and continuous integration. I use Salesforce DX to manage source code and automate deployments effectively. 1. **How do you manage version control in Salesforce?** I use version control systems like Git to track changes in the codebase, manage different branches for development, testing, and production, and ensure that all changes are properly documented and reviewed. 1. **Describe your process for coordinating releases with multiple teams.** I coordinate releases by scheduling regular meetings, maintaining clear communication channels, using project management tools to track progress, and ensuring all teams are aligned on release schedules and dependencies. 1. **How do you handle release notes and documentation?** I create detailed release notes that include information on new features, bug fixes, and any changes that impact users. These notes are shared with all stakeholders and are also documented in a centralized repository for future reference. 1. **What strategies do you use to minimize downtime during releases?** I use strategies like zero-downtime deployment techniques, blue-green deployments, and thorough pre-release testing to minimize downtime and ensure a seamless user experience. 1. **Can you describe a challenging release you managed and how you handled it?** A candidate should provide a detailed example, including the nature of the challenge, the steps you took to address it, and the final outcome. 1. **What is a release pipeline, and how do you manage it in Salesforce?** A release pipeline is a series of automated steps that take code from development to production. 
In Salesforce, I manage it using CI/CD tools, ensuring that each step, from code commit to deployment, is automated and tested. 1. **How do you ensure compliance and security during releases?** I ensure compliance and security by following best practices, such as conducting security reviews, ensuring adherence to regulatory requirements, and implementing role-based access controls. 1. **What are some common risks in Salesforce releases, and how do you mitigate them?** Common risks include deployment failures, data loss, and performance issues. I mitigate these risks by thorough testing, implementing rollback plans, and conducting performance testing before releases. 1. **How do you handle feedback from users after a release?** I collect feedback through surveys, support tickets, and direct communication. I prioritize and address any issues or improvement suggestions in subsequent releases. 1. **What is your experience with automated testing in Salesforce?** I have experience setting up and managing automated testing frameworks, including unit tests, integration tests, and end-to-end tests, using tools like Selenium and Apex test classes. 1. **How do you manage hotfixes and emergency releases?** I manage hotfixes by prioritizing the issue, isolating the fix, thoroughly testing it, and deploying it as quickly as possible while ensuring minimal disruption to the users. 1. **What is your approach to continuous improvement in release management?** I regularly review the release process, gather feedback from the team, and implement changes to improve efficiency, reduce errors, and streamline the overall process. 1. **How do you stay updated with Salesforce best practices and new features?** I stay updated by attending Salesforce events, participating in Trailhead modules, following Salesforce blogs and forums, and engaging with the Salesforce community. These questions and answers are designed to help you prepare for an interview as a Salesforce Release Manager. Tailor your responses to reflect your personal experiences and the specific requirements of the job you are applying for. ### Insight: When evaluating candidates for a Salesforce Release Manager position, focus on their expertise in managing and coordinating release cycles within a complex Salesforce environment. Key Salesforce release manager interview questions should explore their experience with release management tools, their approach to handling deployment challenges, and their strategies for ensuring quality and minimizing downtime. Look for candidates who can clearly articulate their process for planning and executing releases, including their methods for testing, rollback planning, and stakeholder communication. Ideal candidates will demonstrate strong organizational skills, attention to detail, and a proactive approach to problem-solving. Their answers should reflect a deep understanding of Salesforce’s ecosystem, continuous integration and continuous deployment (CI/CD) practices, and the ability to collaborate effectively with development and operations teams. Identifying these traits will help ensure the candidate can maintain system stability and support seamless release cycles. **You might be interested:** [Salesforce Technical Lead Interview Questions](https://www.sfapps.info/salesforce-architect-interview-questions-and-answers/) ## Salesforce Technical Manager Interview Questions and Answers 1. 
**What is the role of a Salesforce Technical Manager?** A Salesforce Technical Manager oversees the technical aspects of Salesforce projects, leading development teams, ensuring alignment with business goals, managing system architecture, and ensuring the quality and performance of Salesforce implementations. 1. **Can you describe your experience with Salesforce development and management?** A candidate should provide a summary of your experience, focusing on specific projects, your role in those projects, technologies used, and outcomes achieved. 1. **How do you manage and prioritize technical debt in Salesforce projects?** I assess the impact of technical debt on current and future projects, prioritize based on risk and business impact, and schedule regular refactoring and optimization efforts to address it. 1. **What tools and technologies do you use for Salesforce development?** I use tools like Salesforce DX, Visual Studio Code, Git for version control, Jenkins for CI/CD, and various Salesforce-specific tools like Apex, Lightning Components, and Visualforce. 1. **How do you ensure the quality of Salesforce implementations?** By implementing rigorous testing processes, including unit tests, integration tests, and user acceptance testing (UAT), along with code reviews and continuous integration/continuous deployment (CI/CD) pipelines. 1. **Can you explain a complex Salesforce architecture you have worked on?** A candidate should provide details about a specific project, the architecture design, challenges faced, and how you addressed them. 1. **How do you handle integration with third-party systems in Salesforce?** I use APIs, middleware, and integration tools like MuleSoft or Informatica to facilitate seamless data exchange and ensure robust, scalable, and secure integrations. 1. **What is your experience with Salesforce Lightning?** I have extensive experience developing with Salesforce Lightning, including building Lightning Components, using the Lightning App Builder, and migrating from Salesforce Classic to Lightning. 1. **How do you manage a Salesforce development team?** I manage teams by setting clear goals, providing necessary resources, conducting regular check-ins, facilitating collaboration, and ensuring continuous skill development through training and mentoring. 1. **What is your approach to code reviews and ensuring coding standards?** I establish coding standards and best practices, conduct regular code reviews to ensure adherence, and use tools like SonarQube to automate code quality checks. 1. **How do you handle performance tuning in Salesforce?** I optimize code, use efficient queries, leverage caching strategies, and regularly monitor system performance to identify and address bottlenecks. 1. **What strategies do you use to ensure data security in Salesforce?** I implement role-based access controls, encrypt sensitive data, regularly audit data access, and follow Salesforce’s security best practices and compliance guidelines. 1. **Can you describe a challenging Salesforce project and how you managed it?** A candidate should provide a detailed example, including the nature of the challenge, the steps you took to address it, and the project’s outcome. 1. **How do you stay current with Salesforce updates and best practices?** I regularly attend Salesforce webinars and conferences, complete Trailhead modules, participate in Salesforce user groups, and follow industry blogs and forums. 1. 
**What is your experience with Salesforce CPQ (Configure, Price, Quote)?** I have implemented and customized Salesforce CPQ solutions to streamline sales processes, ensuring accurate and efficient configuration, pricing, and quoting for complex products. 1. **How do you handle conflicts within your team?** I address conflicts by listening to all parties involved, understanding the underlying issues, and facilitating discussions to reach a fair and constructive resolution. 1. **What is your approach to project management in Salesforce?** I use Agile methodologies to manage projects, ensuring flexibility, regular communication, iterative progress, and continuous feedback from stakeholders. 1. **How do you ensure successful user adoption of Salesforce solutions?** I ensure successful user adoption through thorough training programs, providing comprehensive documentation, and offering ongoing support and resources to users. 1. **What is your experience with Salesforce Marketing Cloud?** I have experience implementing and managing Salesforce Marketing Cloud, using its tools for email marketing, social media marketing, and customer journey management to drive engagement and growth. 1. **How do you manage resource allocation for multiple Salesforce projects?** I prioritize projects based on business impact and urgency, allocate resources accordingly, and use project management tools to track progress and adjust resource allocation as needed. These Salesforce technical project manager interview questions and answers are designed to help you prepare for an interview as a Salesforce Technical Manager. Customize your responses based on your personal experiences and the specific requirements of the job you are applying for. ### Insight: Interviewing for a Salesforce Technical Manager position requires assessing both technical proficiency and leadership capabilities. Focus on Salesforce technical manager interview questions that delve into their experience with Salesforce architecture, development, and integration. Effective candidates should demonstrate a solid grasp of Salesforce tools and technologies, as well as the ability to lead technical teams and manage complex projects. Key attributes to look for include their approach to problem-solving, handling performance issues, and ensuring data security. ## Salesforce Production Support Interview Questions and Answers 1. **What is the role of a Salesforce Production Support Analyst?** A Salesforce Production Support Analyst is responsible for maintaining and supporting the Salesforce environment, troubleshooting issues, ensuring system stability, and providing end-user support to maximize the efficiency and effectiveness of the Salesforce platform. 1. **Can you describe your experience with Salesforce production support?** A candidate should provide a summary of your experience, focusing on specific issues you’ve resolved, the tools you use, and any relevant metrics demonstrating your impact. 1. **How do you prioritize and manage support tickets in Salesforce?** I prioritize tickets based on their impact on business operations and user needs, using a tiered support system to address critical issues first and ensure timely resolution for all support requests. 1. **What tools do you use for Salesforce production support?** I use tools like ServiceNow or Jira for ticket management, Salesforce’s own support tools, and monitoring tools like New Relic or Splunk to track system performance and identify issues. 1. 
**How do you handle a high-severity incident in Salesforce?** I immediately gather all relevant information, assemble a response team, communicate with stakeholders, and follow a predefined incident management process to resolve the issue as quickly as possible while minimizing impact. 1. **Can you explain how you troubleshoot a Salesforce issue?** I start by gathering detailed information about the issue, replicating it if possible, checking logs and error messages, and using Salesforce diagnostic tools to identify the root cause before implementing a solution. 1. **What is your experience with Salesforce change management?** I manage changes by following a structured change management process, including proper documentation, testing in a sandbox environment, and scheduling deployments during low-impact windows to minimize disruption. 1. **How do you ensure data integrity in Salesforce?** I implement validation rules, use data cleaning tools, conduct regular data audits, and ensure proper data governance practices to maintain data integrity. 1. **Describe a time when you resolved a critical issue in Salesforce.** A candidate should provide a specific example, detailing the nature of the issue, the steps you took to resolve it, and the outcome. 1. **What is your approach to user training and support?** I provide comprehensive training sessions, create detailed user guides and FAQs, and offer ongoing support through various channels, including email, chat, and in-person sessions. 1. **How do you handle system updates and upgrades in Salesforce?** I plan updates and upgrades by thoroughly testing them in a sandbox environment, coordinating with stakeholders, scheduling deployments during off-peak hours, and monitoring the system post-update to ensure stability. 1. **What strategies do you use to prevent downtime in Salesforce?** I use proactive monitoring, regular maintenance, load balancing, and redundancy strategies to ensure high availability and minimize downtime. 1. **How do you handle data migrations in Salesforce?** I carefully plan and execute data migrations by mapping data fields, using tools like Data Loader or Salesforce Data Import Wizard, and performing thorough testing to ensure data accuracy and integrity. 1. **What is your experience with Salesforce security management?** I manage security by implementing role-based access controls, conducting regular security audits, ensuring compliance with best practices, and educating users on security protocols. 1. **How do you stay current with Salesforce updates and best practices?** I regularly attend Salesforce webinars and events, complete Trailhead modules, participate in user groups, and follow industry blogs and forums. 1. **What is your experience with Salesforce Service Cloud?** I have supported Salesforce Service Cloud by configuring and customizing service processes, managing case workflows, and ensuring efficient handling of customer service requests. 1. **How do you handle performance issues in Salesforce?** I identify performance bottlenecks through monitoring and diagnostic tools, optimize code and queries, and implement best practices to improve system performance. 1. **Describe your approach to documentation in Salesforce support.** I maintain thorough and up-to-date documentation for all processes, issues, and solutions, ensuring that the support team and end-users have access to clear and helpful information. 1. 
**How do you manage integrations with other systems in Salesforce?** I manage integrations by using APIs, middleware solutions, and ensuring proper data mapping and synchronization, while regularly monitoring for any integration issues. 1. **Can you provide an example of how you improved a Salesforce process?** A candidate should provide a specific example, detailing the process improvement, the steps you took, the tools or methods used, and the impact on the organization. These Salesforce production support interview questions and answers are designed to help you prepare for an interview as a Salesforce Production Support Analyst. Customize your responses based on your personal experiences and the specific requirements of the job you are applying for. ## Conclusion These Salesforce technical project manager interview questions and answers provide a solid foundation for preparing for a Salesforce-related interview, whether it’s for a Production Support Analyst, Project Manager, Technical Manager, or any other role. While they cover a broad range of topics and scenarios that you may encounter, remember that they are only samples. Tailoring your responses to reflect your unique experiences and the specific requirements of the job will make your preparation even more effective. Use these examples as a basis to build your confidence and refine your understanding of what it takes to excel in a Salesforce role. The post [100 Salesforce Project Manager Interview Questions and Answers](https://www.sfapps.info/100-salesforce-project-manager-interview-questions-and-answers/) first appeared on [Salesforce Apps](https://www.sfapps.info).
doriansabitov
1,891,228
Hire Dedicated DevOps Engineers: Unleash Your Development Velocity
Demolishing Silos: DevOps engineers promote teamwork across departments, dismantling barriers and...
0
2024-06-17T13:14:33
https://dev.to/coderowersoftware/hire-dedicated-devops-engineers-unleash-your-development-velocity-1mbp
webdev, devops, developer, development
**Demolishing Silos:** DevOps engineers promote teamwork across departments, dismantling barriers and ensuring unified objectives. **Building Rock-Solid Software:** DevOps engineers ensure smooth CI/CD practices, leading to fewer bugs, reliable software, and lower production risks. **Scaling with Ease:** DevOps engineers automate infrastructure, enabling seamless scalability and cost-effective performance as your business expands. **Optimizing the Flow:** Automation enhances DevOps pipelines, boosting efficiency, cutting costs, and fostering seamless collaboration for swift development. **Fort Knox for Your Code:** DevOps engineers prioritize security in the development lifecycle, integrating measures into the CI/CD pipeline for a safer environment. **Innovation Unleashed:** DevOps engineers manage infrastructure and deployments, freeing up development teams to innovate and build valuable features. **Proactive Problem-Solver:** DevOps engineers use monitoring tools and proactive methods to prevent user-impacting issues, ensuring minimal downtime and a smooth experience. **Disaster Recovery Ready:** DevOps engineers create resilient disaster recovery plans to minimize downtime and ensure quick recovery from outages. **Clear Vision, Informed Decisions:** DevOps engineers use advanced monitoring tools for better visibility, enabling proactive management and informed decisions. **[Hire a skilled Dedicated DevOps engineer for your next project and get started today!](http://coderower.com/)**
coderower
1,891,224
InnoDB Performance Tuning – 11 Critical InnoDB Variables to Optimize Your MySQL Database
InnoDB is the core storage engine for MySQL, celebrated for its reliability and performance in even...
0
2024-06-17T13:14:25
https://releem.com/blog/innodb-performance-tuning
webdev, mysql, php, programming
InnoDB is the core storage engine for MySQL, celebrated for its reliability and performance in even the most challenging of production settings. To truly optimize InnoDB, you need a deep understanding of various system variables and how they interact with your unique server setup and the specific demands of your workload. If you properly configure these settings, you can drastically cut down on latency, boost throughput, and maintain stability even under heavy loads. Whether you're running busy web applications, large data warehouses, or agile enterprise applications, the insights and guidelines shared here will help you optimize your database to run smoothly and efficiently! ## 1. [innodb_buffer_pool_size](https://releem.com/docs/mysql-performance-tuning/innodb_buffer_pool_size) This is perhaps the most critical setting for InnoDB performance tuning. It specifies the total amount of memory allocated to InnoDB for caching data and indexes from the database. By caching data in memory, innodb_buffer_pool_size significantly reduces your disk I/O. **Recommended Value** Set to 50 to 80% of your total RAM if InnoDB is the primary service running on the server. For servers running multiple services, this value may need to be adjusted to avoid starving other processes of memory. **Static** Server restart required to change value. **Insights** Setting this variable too high can lead to swapping if the OS runs out of physical memory, which would counteract the performance benefits. Monitor the server's overall memory usage when tuning this variable. ## 2. [innodb_buffer_pool_chunk_size](https://releem.com/docs/mysql-performance-tuning/innodb_buffer_pool_chunk_size) Defines the size of each chunk within the buffer pool. When you need to increase the size of the buffer pool, this is done by adding more chunks of a predefined size. This modular approach simplifies the scaling of memory allocation in response to changes in database demand. It is particularly important for systems with large buffer pools running on multiple instances, as it helps in managing memory more efficiently. **Recommended Value** This should be set based on the total size of the buffer pool and the number of instances. A common practice is to set the chunk size so that the buffer pool is evenly divided among the instances. **Static** Server restart required to change value. **Insights** The chunk size needs to be a divisor of the total buffer pool size to ensure an even distribution across all buffer pool instances. It's also important to ensure that the chunk size aligns with the system's page size to optimize memory allocation. ## 3. [innodb_buffer_pool_instances](https://releem.com/docs/mysql-performance-tuning/innodb_buffer_pool_instances) Determines the number of instances (or parts) that the buffer pool is divided into. Splitting the buffer pool into multiple instances can help reduce contention as different threads read and write to cached pages in different instances. **Recommended Value** For systems with a buffer pool size of over 1GB, it's recommended to have one instance for every 1GB of buffer pool size, with a typical upper limit of around 16 instances, depending on the workload. A typical starting point is 8 instances. **Static** Server restart required to change value; the instance count is fixed at startup, even though the buffer pool size itself can be resized online. **Insights** Increasing the number of buffer pool instances can improve concurrency by reducing contention among threads. 
However, having too many instances can lead to overhead and diminished returns, so it's important to test changes to find the optimal setting for your specific workload. ## 4. [innodb_log_file_size](https://releem.com/docs/mysql-performance-tuning/innodb_log_file_size) Specifies the size of each log file in the InnoDB redo log. The redo log is a vital component for data recovery and performance, as it stores a record of all changes to InnoDB data. The size of these log files can greatly affect the efficiency of your database recovery process and overall system performance. **Recommended Value** Your ideal log file size can vary based on your workload and the total volume of writes. For a database with a high volume of transactions, larger log files might be necessary. Ideally, redo log files should be large enough to hold one hour's worth of write activities. This recommendation is about finding a balance between performance and recovery time. **Static** Server restart required to change value, as it involves resizing the physical files on the disk. **Insights** Increasing the log file size can reduce the frequency of log flushes to disk, which improves write performance. On the other hand, larger log files can lead to longer recovery times after a crash. It's essential to balance the size based on transaction volume and recovery performance requirements. ## 5. [innodb_log_buffer_size](https://releem.com/docs/mysql-performance-tuning/innodb_log_buffer_size) Sets the size of the buffer that InnoDB uses to write to the log files on disk. It temporarily holds data before it's written to the log file during a transaction. A larger log buffer allows transactions to run without having to write to the log file on disk until the buffer is full, which can improve performance by reducing disk I/O. **Recommended Value** Typically set between 16MB and 64MB, depending on the volume and frequency of transactions. **Dynamic** You can change the value without restarting the server. **Insights** Setting this too low might cause a bottleneck, especially in systems with high transaction rates, as the log buffer would need to be written to disk more frequently. ## 6. [innodb_write_io_threads](https://releem.com/docs/mysql-performance-tuning/innodb_write_io_threads) Controls the number of I/O threads for writing data and log files in InnoDB. Increasing the number of write I/O threads can improve the throughput of write operations, especially on systems with multiple CPUs or disks. **Recommended Value** Default is typically 4, but can be increased to 8 or 16 on systems with high I/O capacity and multiple disks. **Dynamic** You can change the value without restarting the server, as workload demands change. **Insights** While increasing the number of threads can improve performance, it may also increase CPU usage and contention. It's important to balance these factors based on system resources. ## 7. [innodb_read_io_threads](https://releem.com/docs/mysql-performance-tuning/innodb_read_io_threads) Similar to write I/O threads, this setting controls the number of threads used for reading data from the disk. Adjusting this can speed up data access operations, particularly under high read load scenarios. **Recommended Value** Typically starts at 4, similar to write threads, and can be increased based on system configuration and performance needs. **Dynamic** You can change the value without restarting the server. 
**Insights** As with write threads, increasing read threads can potentially lead to higher CPU usage, so adjustments should be tested for net performance gains. ## 8. [innodb_flush_log_at_trx_commit](https://releem.com/docs/mysql-performance-tuning/innodb_flush_log_at_trx_commit) Determines the balance between performance and reliability in transaction logging. A setting of 1 flushes logs to disk at the end of each transaction, offering the highest durability. A setting of 2 writes the log at each commit but flushes it to disk only about once per second, which can improve performance but at a slight risk of data loss. A setting of 0 writes and flushes the log only about once per second, offering the best performance but the greatest exposure to data loss on a crash. **Recommended Value** Set to 1 for maximum data safety (ideal for financial or critical systems) and 2 or 0 for systems where performance is prioritized over transaction safety. **Dynamic** You can change the value without restarting the server, but changes will be applied immediately, so this should be done with an understanding of the risk to data integrity. **Insights** The choice heavily depends on your need for data integrity versus performance. It's a critical setting for databases handling sensitive data. ## 9. [innodb_thread_concurrency](https://releem.com/docs/mysql-performance-tuning/innodb_thread_concurrency) Limits the number of threads that can be active inside InnoDB at the same time. It is used to prevent thread thrashing that can occur when too many threads are competing for resources. **Recommended Value** The default setting is 0, which allows an unlimited number of threads. This may need to be adjusted based on system load and hardware capabilities, typically set to (2 x [number of CPUs] + number of disks), but heavily dependent on specific workloads. **Dynamic** You can change the value without restarting the server. **Insights** Properly setting this variable can greatly enhance performance by optimizing thread utilization without overloading system resources. ## 10. [innodb_purge_threads](https://releem.com/docs/mysql-performance-tuning/innodb_purge_threads) This setting controls the number of threads dedicated to purging old versions of rows that have been updated or deleted. It helps manage the undo tablespace by cleaning up old transactions, thus preventing it from growing unnecessarily large. **Recommended Value** The default setting is 1, which typically suffices for most systems. However, for high transaction systems, increasing this value can improve the performance of purge operations, often set to match the number of available CPU cores, but should not exceed the number of innodb_undo_log instances. **Dynamic** You can change the value without restarting the server. **Insights** Increasing the number of purge threads can reduce the time taken for purge operations. This minimizes the size of the undo logs and improves overall system efficiency. It's key, however, not to oversubscribe system resources, which can lead to diminished returns. ## 11. [innodb_flush_method](https://releem.com/docs/mysql-performance-tuning/innodb_flush_method) Specifies the method used to flush data to the InnoDB data files and log files, which can impact I/O throughput and overall database performance. **Recommended Value** Common settings are O_DIRECT to avoid double buffering by the operating system or fsync() to guarantee data integrity by flushing the buffers to disk. The optimal setting may depend on the underlying hardware and filesystem characteristics. **Static** Server restart required to change value. 
**Insights** O_DIRECT minimizes operating system overhead by bypassing its cache, suitable for servers with a dedicated I/O system for the database. Conversely, fsync() can be beneficial when system stability and data integrity are prioritized over raw performance. ## Manage All 11 InnoDB Variables Automatically Tuning InnoDB involves a delicate balance between numerous system variables that can affect different aspects of database performance – from managing memory and I/O operations to ensuring data consistency and minimizing latency. While the variables discussed here provide a starting point, effective tuning requires ongoing monitoring and adjustments based on actual system performance and workloads. [Releem](https://releem.com) offers a streamlined solution to this complexity by automatically managing these critical variables. The platform continually monitors your server's performance and evaluates the effectiveness of various parameters and settings in real-time. Releem then makes precise configuration recommendations, including any necessary changes to the InnoDB variables. Using Releem's smart platform means achieving optimal performance, enhanced data integrity, and stable system operations with minimal effort on your part. This ease of management frees you up to concentrate on more strategic initiatives while Releem handles the technical optimizations behind the scenes.
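For readers who want to check these settings by hand before (or alongside) automated tuning, here is a minimal sketch in Python. It is not Releem itself: the host, user, and password are placeholders, and it assumes the third-party mysql-connector-python package; adapt the variable list and values to your own workload.

```python
# A minimal sketch: inspect the InnoDB variables discussed above and adjust
# one dynamic setting. Credentials below are placeholders.
import mysql.connector

VARIABLES = [
    "innodb_buffer_pool_size",
    "innodb_buffer_pool_instances",
    "innodb_log_buffer_size",
    "innodb_flush_log_at_trx_commit",
    "innodb_flush_method",
]

conn = mysql.connector.connect(host="localhost", user="admin", password="secret")
cur = conn.cursor()

# Print current values so any change can be compared against a baseline.
for name in VARIABLES:
    cur.execute("SHOW GLOBAL VARIABLES LIKE %s", (name,))
    for variable, value in cur.fetchall():
        print(f"{variable} = {value}")

# innodb_flush_log_at_trx_commit is dynamic, so it can be set at runtime
# (the account needs the SYSTEM_VARIABLES_ADMIN or SUPER privilege).
cur.execute("SET GLOBAL innodb_flush_log_at_trx_commit = 1")

cur.close()
conn.close()
```

Static variables such as innodb_flush_method still have to be set in the MySQL configuration file (my.cnf) followed by a server restart.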
drupaladmin
1,891,227
What is intent prompting?
When you need to build more complex applications using LLMs, you often need to determine a course of...
0
2024-06-17T13:13:39
https://dev.to/kwnaidoo/what-is-intent-prompting-26jb
ai, machinelearning, python, productivity
When you need to build more complex applications using LLMs, you often need to determine a course of action based on various scenarios. For example: if you are building an insurance application, the user may be interested in one of these insurance types: car, household, pets, etc. Based on their request, you need to route your AI to the correct workflow, since each of these types will require a different set of questions. How do you achieve this? ## Intent prompting Intent prompting allows you to instruct the LLM to return a structured response so that you can better understand the user's request and route your workflow accordingly. Here's an example: ``` Given the following question from the human, please determine which action the human wants to take: - The human is interested in car insurance. Respond with only the following tag: <car insurance />. - The human is interested in pet insurance. Respond with only the following tag: <pet insurance />. - The human is asking a general question. Respond with a relevant answer from the context data below wrapped in an <answer> tag. <context> SOME CONTEXT DATA HERE. </context> ``` You can then use some Python logic to check which tag was returned and take the next steps in your process flow (a minimal sketch of this follows below). > Try Ragable: https://github.com/plexcorp-pty-ltd/ragable . Ragable is a RAG toolkit that will simplify your agent workflow and help you build complex AI-powered apps with ease.
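To make the routing step concrete, here is a minimal sketch of that Python logic. The tag names mirror the prompt above, and `llm_response` is a stand-in for whatever string your LLM client returns.

```python
# Hypothetical routing helper: map the tag returned by the LLM to a workflow.
def route_intent(llm_response: str) -> str:
    if "<car insurance />" in llm_response:
        return "car_insurance_flow"
    if "<pet insurance />" in llm_response:
        return "pet_insurance_flow"
    if "<answer>" in llm_response:
        return "general_answer_flow"
    # Unrecognized output: fall back to asking the user to clarify.
    return "fallback_flow"

llm_response = "<car insurance />"  # e.g. the model's reply to the prompt
print(route_intent(llm_response))   # -> car_insurance_flow
```

Each returned name can then map to its own handler, e.g. the set of questions for car insurance.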
kwnaidoo
1,891,226
Learn a new tool, or just use what you know?
Diving into Flask. I'm building a website as a way to give myself a 'reason' to learn and a...
0
2024-06-17T13:12:43
https://dev.to/oakla/learn-a-new-tool-or-just-use-what-you-know-obh
webdev, python, wtforms
Diving into Flask. I'm building a website as a way to give myself a 'reason' to learn and a realistic problem to solve. As I go along with the project, I keep discovering new nuances to the challenge, along with a world of frameworks designed to help make things easier. Today I discovered WTForms. Will learning yet another framework be worth it compared to just solving the problem manually? Time will tell XD https://wtforms.readthedocs.io/en/3.1.x/
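For a taste of what WTForms buys you over hand-rolled validation, here is a minimal sketch; the form and its rules are invented for illustration (install with `pip install wtforms`):

```python
# A tiny WTForms sketch: declare a form once, get validation for free.
from wtforms import Form, StringField, validators

class SignupForm(Form):
    # Hypothetical field: require a username between 3 and 25 characters.
    username = StringField("Username", [validators.Length(min=3, max=25)])

form = SignupForm(data={"username": "al"})
print(form.validate())        # False: "al" is shorter than 3 characters
print(form.username.errors)   # the human-readable validation message(s)
```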
oakla
1,891,225
Improving Your SQL Indexing: How to Effectively Order Columns
A new blog post with the theory and an example of the art of indexing: finding the best index for one...
0
2024-06-17T13:12:04
https://dev.to/yugabyte/improving-your-sql-indexing-how-to-effectively-order-columns-24dj
yugabytedb, distributed, sql, database
A new blog post with the theory and an example of the art of indexing: finding the best index for one query, and also understanding that you don't always need the very best index: {% embed https://www.yugabyte.com/blog/improving-sql-indexing-how-to-order-columns/ %}
franckpachot
1,891,219
SFU vs MCU vs P2P: WebRTC Architectures Explained
There are different WebRTC architectures available today: SFU, MCU, and P2P. Selecting one...
0
2024-06-17T13:09:21
https://www.metered.ca/blog/sfu-vs-mcu-vs-p2p-webrtc-architectures-explained/
webdev, javascript, devops, webrtc
There are different WebRTC architectures available today: SFU, MCU, and P2P. Selecting one depends on many factors, including: ### **Network Conditions** ### **Bandwidth availability** * High Bandwidth: If participants have good-quality bandwidth, then SFU and peer-to-peer calling will work for them * Variable Bandwidth: If the participants do not have good-quality bandwidth and there are a lot of participants, then going with an MCU would be a good idea ### **Latency Sensitivity** * Low Latency Required: If the situation requires low latency, then going with an SFU and/or peer-to-peer is a good idea * Latency Tolerance: If there is a tolerance for latency and latency is not that important, then an MCU can also be considered. An MCU processes the streams, which creates latency. But in return, the MCU reduces the number of streams that must be delivered to each meeting user and thus reduces the demand on client resources ### **Server Capacity** ### **Server Resources** * Limited Resources: In peer-to-peer connections and SFUs there is less CPU demand on the server and more on the client; in an MCU there is a large CPU demand on the server side and less on the client side. If you have limited resources on the client side, then it is advisable to go with an MCU. Large meetings with multiple streams on IoT devices could be a good use case for an MCU * Moderate Resources: If you have moderate resources on the client side, like mobile phones and smart devices, and the meetings are not huge (like thousands of people), then going with an SFU is a good idea. It provides good-quality streams that you don't get with an MCU and is moderately resource intensive on the client as well as on the server * High Resources: If you have devices that can handle high resource requirements, such as smartphones on 4G connections, then you can consider peer-to-peer as well. Peer-to-peer places minimal load on the server but delivers all the streams to the client devices, and each client device itself has to send its stream to all the other devices that are connected to the meeting ### **Use Cases** * One-to-One Calls: Peer-to-peer is a good fit for one-to-one calls, as it does not require a lot of resources and the two devices can communicate with each other directly, relayed through a TURN server when needed * Small Group Meetings: Here SFU, MCU, and peer-to-peer all work. It depends on the number of users in the meeting and also the client devices' capabilities, including the bandwidth and CPU capacity to deal with incoming and outgoing streams * Large Webinars/Conferences: SFUs are perfect for this, as the SFU takes the incoming streams and forwards each stream only to the specific devices that need it * Interactive Live Events: For this, both SFU and MCU can be considered. An MCU is going to cost a lot more compared to an SFU because it processes the incoming streams and creates a single stream that is then broadcast to all the users. 
## **Quick Comparison Table between SFU, MCU and P2P**

| **Feature** | **SFU (Selective Forwarding Unit)** | **MCU (Multipoint Control Unit)** | **P2P (Peer-to-Peer)** |
| --- | --- | --- | --- |
| Scalability | Highly Scalable | Moderately Scalable | Low Scalability, suitable for small groups |
| Latency | Low to Moderate | High, because of mixing and processing of streams | Low to Moderate |
| Bandwidth Usage | Efficient, streams are selectively forwarded | High on the server side, since every stream is sent to the MCU for mixing; low per client, which receives a single mixed stream | Variable, it depends on how many users are connected |
| Stream Quality | Excellent quality | Excellent quality | Excellent quality |
| Implementation Cost | Moderate | High | Low |
| Server Load | Moderate load | High load | Minimal load |

## **SFU (Selective Forwarding Unit) in Detail** An SFU, or Selective Forwarding Unit, is a server component in the WebRTC ecosystem. The SFU receives multiple media streams from different participants but selectively forwards the streams to other participants without mixing them. One thing to note is that the SFU does not process the streams; it just routes them to participants based on need ### **How does an SFU work** * **Stream Reception** Each participant in the SFU model sends their stream to the SFU * **Stream Selection** The SFU analyses all the incoming streams and decides which streams need to be sent to which participant * **Stream Forwarding** Based on its decision, the SFU forwards the streams to each participant, thus optimizing bandwidth and CPU load * **Adaptive Bitrate Streaming** SFUs have the ability to implement adaptive bitrate streaming; this means that the SFU can adjust the quality of the stream based on the bandwidth and CPU power of the receiving participant. This makes sure that participants that have lower bandwidth or CPU power can also take part in the meeting. * **Simulcast** Simulcast is another innovative technology implemented by SFUs. Here a participant's client device sends multiple streams of different quality, and the SFU forwards the stream that is most appropriate for each participant based on the receiving participant's bandwidth and CPU availability. ### **Advantages** 1. Scalability: An SFU can handle a large number of users efficiently by forwarding streams, and can work with a lot of streams on a fairly small server. This makes it a good solution for serving multiple simultaneous users in webinars and meetings 1. Lower Latency: The SFU does not mix or process the streams, hence there is not a lot of latency when working with SFUs 1. Efficient Resource Use: SFUs require comparatively fewer server resources and lower server costs when handling a large number of users ## **Exploring MCU (Multipoint Control Unit)** The MCU is a server component in WebRTC. It receives media streams from multiple participants, mixes and combines these streams into a single stream, and then streams that back to the participants ### **How does an MCU work** * Stream Reception: Each participant sends their stream to the MCU. 
* Stream Processing: The MCU gets all the streams from all the participants in a meeting, processes them, and creates a single audio and video stream * Stream Distribution: The MCU then sends this single stream to all the participants connected to the meeting ### **Advantages** * **Reduced bandwidth and CPU requirements for the client in large meetings:** In traditional WebRTC architectures like peer-to-peer or SFU, all the streams are delivered to all the participants. In a large meeting, each participant therefore receives a lot of streams from other participants, which drains resources like bandwidth and CPU on the client devices. With an MCU this is avoided, because each client gets a single stream from the MCU, thus conserving precious client bandwidth and CPU time * **Consistent streaming quality:** With an MCU, all the participants get the same quality stream from the MCU. This is not the case with P2P, because the peers provide streams of different quality and these streams are then delivered to all the peers. ### **Disadvantages** * **Server load considerations:** All the streams are directed towards the MCU, which then processes all the streams and combines them into a new stream. This requires a lot of CPU processing power on the server, but it reduces the load on the client devices * **Scalability challenges:** Because the MCU offloads all the processing onto the server, there are scalability challenges. MCUs are perfect for video calling on client devices with limited CPU and bandwidth, like mobile devices or IoT devices. But scaling the video to a large number of participants creates rapidly growing compute and bandwidth requirements on the server. ## **Exploring P2P (Peer-to-Peer)** Peer-to-peer is a WebRTC communication model where media streams (that is, audio, video, and data) are sent from client to client directly, though often relayed through a TURN server so as to traverse NAT. This model provides direct communication between clients, facilitating efficient, low-latency communication ![Metered TURN servers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aelyie288vyt4oav7mqq.png) ## [**Metered TURN servers**](https://www.metered.ca/stun-turn) 1. **API:** TURN server management with a powerful API. You can do things like add/remove credentials via the API, retrieve per-user / per-credential and user metrics via the API, enable/disable credentials via the API, and retrieve usage data by date via the API. 2. **Global Geo-Location Targeting:** Automatically directs traffic to the nearest servers, for the lowest possible latency and highest quality performance. Less than 50 ms latency anywhere around the world 3. **Servers in all the Regions of the world:** Toronto, Miami, San Francisco, Amsterdam, London, Frankfurt, Bangalore, Singapore, Sydney, Seoul, Dallas, New York 4. **Low Latency:** Less than 50 ms latency, anywhere across the world. 5. **Cost-Effective:** Pay-as-you-go pricing with bandwidth and volume discounts available. 6. **Easy Administration:** Get usage logs, emails when accounts reach threshold limits, billing records, and email and phone support. 7. **Standards Compliant:** Conforms to RFCs 5389, 5769, 5780, 5766, 6062, 6156, 5245, 5768, 6336, 6544, 5928 over UDP, TCP, TLS, and DTLS. 8. **Multi‑Tenancy:** Create multiple credentials and separate the usage by customer or different apps. Get usage logs, billing records, and threshold alerts. 9. 
**Enterprise Reliability:** 99.999% uptime with SLA. 10. **Enterprise Scale:** With no limit on concurrent traffic or total traffic, Metered TURN servers provide enterprise scalability 11. **5 GB/mo Free:** Get 5 GB of free TURN server usage every month with the Free Plan 12. Runs on ports 80 and 443 13. Supports TURNS + SSL to allow connections through deep-packet-inspection firewalls. 14. Supports both TCP and UDP 15. Free unlimited STUN ### **How does Peer-to-Peer work** * Connection Establishment with Signalling: In a peer-to-peer connection, a signalling server is required to exchange information such as the peers' IP addresses and port numbers, so that peers can identify each other and start the connection process * ICE and TURN Servers for Connection Establishment: WebRTC uses the ICE framework to find the best path to connect the peers directly. The ICE framework first tries to connect the peers using a STUN server; if that fails, it tries to connect the peers using a TURN server. * Stream Transmission: When the connection is established, the media streams are transmitted between peers either directly or, as in most cases, through a TURN server. Data passing through a TURN server is end-to-end encrypted, so no one, not even the TURN server, knows what data is passing through it ### **Advantages** * Minimal Latency: The main advantage of a peer-to-peer connection is minimal latency, because the connection is established directly * Direct Communication: In a peer-to-peer connection, the connection runs directly from one peer to another; sometimes a TURN server is used, which also does not process the streams but just forwards them to the other participant ### **Technical Deep Dive** * TURN: A TURN (Traversal Using Relays around NAT) server relays the traffic from one peer to another. Here is an article on TURN servers that explains in detail what TURN servers are and how they work * STUN: A STUN (Session Traversal Utilities for NAT) server helps client devices discover their own external IP address and port number. Client devices do not know their external IP address and port number because the NAT obfuscates them. The client device sends a request to the STUN server, which replies with the external IP address and port number of the incoming request; thus the device learns its external IP address and port number. If you are interested in learning more about how STUN servers work, you can refer to the article: [**Stun Server: What is Session Traversal Utilities for NAT?**](https://www.metered.ca/tools/openrelay/stun-servers-and-friends/) * ICE Framework: The ICE framework is a WebRTC framework that finds the best path to connect two devices directly. In order to create a WebRTC connection, the ICE framework first tries to create a direct connection using a STUN server, and if that fails, it tries to create a connection using TURN servers. There are many ICE servers available in the market today. If you would like to learn more about [**ICE servers**](https://www.metered.ca/blog/interactive-connectivity-establishment-ice-server-the-complete-guide/), here is a guide. 
* NAT Issues: NAT does not let devices connect directly with each other: it blocks incoming connections (some types of NAT allow connections, others do not), does not let the devices know what their external IP address and port number are, and creates other issues. There are different types of NAT; some allow external connections and others do not. There are also firewall rules that block incoming connections from external devices. For these cases you need a TURN server
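To make the ICE pieces above concrete, here is a minimal Python sketch of wiring STUN and TURN servers into a peer connection using the third-party aiortc library (`pip install aiortc`). The STUN/TURN URLs and credentials are placeholders to replace with your provider's values.

```python
# A sketch of ICE configuration, assuming the aiortc library.
from aiortc import RTCConfiguration, RTCIceServer, RTCPeerConnection

config = RTCConfiguration(iceServers=[
    # STUN: lets each peer discover its public IP address and port.
    RTCIceServer(urls="stun:stun.example.com:3478"),
    # TURN: relays media when NAT or firewalls block a direct path.
    RTCIceServer(
        urls="turn:turn.example.com:443?transport=tcp",
        username="user",
        credential="secret",
    ),
])

pc = RTCPeerConnection(configuration=config)
print(pc.connectionState)  # "new" until signaling and ICE negotiation run
```

The configuration only determines which ICE candidates can be gathered; you still exchange the offer and answer over your own signalling channel.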
alakkadshaw
1,891,153
CRUD Operations: The Basics of Data Management
First, we learn what CRUD operations are, then we will understand them step by step with backend code...
0
2024-06-17T13:09:16
https://dev.to/praneshcodecraft/crud-operations-the-basics-of-data-management-4c0a
backend, javascript, express, database
First, we learn what CRUD operations are, then we will understand them step by step with backend code (Express.js). ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xrulqobj0w95sflahjja.jpg) CRUD stands for Create, Read, Update, and Delete. These are the four basic things you can do with data in a database: ### Create: Add new data to the database. > Example: Imagine you're adding a new coffee to your database. You enter the coffee name and price and save it. ### Read: Get data from the database. > Example: When you open your client site to look up coffee items, you read data from the database. ### Update: Change existing data in the database. > Example: If you want to change the coffee price, you can edit the coffee price to update the price. ### Delete: Remove data from the database. > Example: If you no longer need a coffee item, you can delete it from your database. Yeeessss!!! Now we learned the basic operations, ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/giqoskwm293czhyo7s60.jpg) ### Now we will understand CRUD operations by applying them in Express.js. ## Setup and Middleware ```js const express = require('express'); const cors = require('cors'); const { MongoClient, ServerApiVersion, ObjectId } = require('mongodb'); const app = express(); const port = process.env.PORT || 5000; require('dotenv').config(); // Middleware app.use(cors()); app.use(express.json()); ``` Express Setup: Sets up the Express application. cors: Middleware to enable Cross-Origin Resource Sharing, allowing the frontend to make requests to this server from different origins. express.json(): Middleware to parse JSON request bodies. ## MongoDB Connection Setup ```js const uri = `mongodb+srv://${process.env.DB_USER}:${process.env.DB_PASS}@cluster0.2pzzeio.mongodb.net/?retryWrites=true&w=majority`; const client = new MongoClient(uri, { serverApi: { version: ServerApiVersion.v1, strict: true, deprecationErrors: true, } }); ``` MongoDB Connection: Uses the MongoClient from the MongoDB driver to connect to the MongoDB Atlas cluster using the connection string stored in the .env file. ## CRUD Operations ```js app.get('/coffee', async (req, res) => { const cursor = coffeeCollection.find(); const result = await cursor.toArray(); res.send(result); }); ``` GET /coffee: Retrieves all documents (coffees) from the coffee collection in the coffeeDB database. ## GET Single Coffee ```js app.get('/coffee/:id', async (req, res) => { const id = req.params.id; const query = { _id: new ObjectId(id) }; const result = await coffeeCollection.findOne(query); res.send(result); }); ``` GET /coffee/:id: Retrieves a single coffee by its _id. Uses ObjectId from MongoDB to create a query object. ## PUT (Update) Coffee ```js app.put('/coffee/:id', async (req, res) => { const id = req.params.id; const filter = { _id: new ObjectId(id) }; const options = { upsert: true }; const updatedCoffee = req.body; const coffee = { $set: { name: updatedCoffee.name, quantity: updatedCoffee.quantity, supplier: updatedCoffee.supplier, taste: updatedCoffee.taste, category: updatedCoffee.category, details: updatedCoffee.details, photo: updatedCoffee.photo, } }; const result = await coffeeCollection.updateOne(filter, coffee, options); res.send(result); }); ``` PUT /coffee/:id: Updates an existing coffee document by its _id using updateOne(); with upsert: true, a new document is inserted if no match exists. (The Create step would use a POST /coffee route with insertOne(), which is not shown here.) 
## DELETE Coffee ```js app.delete('/coffee/:id', async (req, res) => { const id = req.params.id; const query = { _id: new ObjectId(id) }; const result = await coffeeCollection.deleteOne(query); res.send(result); }); ``` DELETE /coffee/:id: Deletes a coffee by its _id using deleteOne(). ## Server Startup and Ping ```js app.get('/', (req, res) => { res.send("Coffee making server is running..."); }); app.listen(port, () => { console.log(`Coffee server is running on port: ${port}`); }); ``` GET /: Simple route to verify that the server is running. Server Start: Starts the Express server on the specified port (5000 by default). --- ## MongoDB Connection Management The run() function is an asynchronous function that connects to MongoDB using `await client.connect()`. It also sets up the `coffeeCollection` variable to interact with the coffee collection in the coffeeDB database. It uses MongoDB's `findOne()`, `find()`, `updateOne()`, `insertOne()`, and `deleteOne()` methods to perform CRUD operations on the coffeeCollection. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hfj5nohm00mfrdz9ftmw.jpg) ### Code Link: https://github.com/Praneshchow/Coffee-express-server These operations are the basic actions that can be performed on any data set, whether it is contacts in your phone, products in an online store, or posts on a social media site. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/905p3nde41vejgdef1cx.gif)
praneshchow
1,891,222
Understanding Cybersecurity: A Beginner’s Guide
Imagine you live in a bustling city where every house is a computer, and every street represents the...
0
2024-06-17T13:07:30
https://dev.to/clom/understanding-cybersecurity-a-beginners-guide-10ji
cybersecurity, beginners
Imagine you live in a bustling city where every house is a computer, and every street represents the internet. Just as in any city, securing your house is crucial to ensure your safety and privacy. In the digital world, cybersecurity plays this critical role. Let's explore the basics of cybersecurity through this analogy, which I learnt as an easier way to explain difficult concepts in layman's terms. Public IP Addresses: Your Home Address Every house in a city has a unique address, known as a public IP address, that identifies its location. This public IP address is like the unique string of numbers that distinguishes your home on the global network, much like a postal worker needs your address to deliver a letter. Private IP Addresses: The People in Your Home Within your house, each person (or device) might have their own room, represented by a private IP address. These private IPs allow the devices inside your network to communicate with each other and the broader internet securely. Ports: Your Doors A house typically has several doors to control who can enter or exit. Doors in your house, like the front door, garage door, or back door, are akin to ports on a computer network. Ports facilitate communication between your computer and the outside world. Each port is a different entry point through which data can enter or leave your system. Firewalls: Your Locks Just as you wouldn't leave all your doors unlocked, you shouldn't leave all ports open. Firewalls manage these ports, allowing communication through necessary ones while keeping others closed to prevent unauthorized access. A firewall acts as a barrier between your computer (or network) and potential threats from the internet, much like a lock on your front door keeps out intruders. It monitors incoming and outgoing traffic, deciding what should be allowed in or out based on a set of security rules. Antivirus Software: Your Security System Many homes have security systems with alarms to detect and deter intruders. Similarly, antivirus software protects your computer by detecting, quarantining, and removing malicious software (malware). Malware can include viruses, spyware, ransomware, and more. Regular updates to your antivirus software ensure it can recognize and combat the latest threats. Encryption: Your Secret Code Imagine you want to send a valuable package, but you don't want anyone to tamper with it during transit. You might use a secure lock that only the recipient can open. In the digital world, encryption serves this purpose. It converts your data into a code that only authorized parties can decipher, ensuring that even if the data is intercepted, it cannot be read without the decryption key. VPNs: Your Private Tunnel When you need to travel through the city unnoticed, you might use a private tunnel that hides your movements. A Virtual Private Network (VPN) works similarly by creating a secure, encrypted connection between your device and the internet. This makes it difficult for anyone to track your online activities or steal your data, especially when using public Wi-Fi networks. Two-Factor Authentication: Your Double-Lock System Sometimes, a single lock isn't enough to secure your home. You might use a second lock for added security. Two-factor authentication (2FA) adds an extra layer of protection to your online accounts. Even if someone cracks your password, they would still need the second piece of information, like a code sent to your phone, to gain access. 
Regular Updates: Your Maintenance Routine Keeping your home in good repair prevents vulnerabilities like broken windows or doors. Similarly, regularly updating your software and systems is crucial in cybersecurity. Updates often include patches for security flaws that could be exploited by hackers. By keeping your systems updated, you ensure that your digital defenses remain strong. Conclusion Just as maintaining the security of your home requires vigilance and good practices, so does protecting your digital life. Understanding the basics of IP addresses, firewalls, ports, passwords, antivirus software, encryption, VPNs, two-factor authentication, and regular updates will help you create a robust cybersecurity strategy. Remember, in the digital city, staying secure is an ongoing effort that keeps your data and personal information safe from cyber threats.
clom
1,891,221
The Importance of Avoiding Premature Generalization in Complex Flows
Disclaimer This text was generated by Generative AI from the transcript of an episode of our...
0
2024-06-17T13:07:12
https://dev.to/dev-mais-eficiente/a-importancia-de-evitar-a-generalizacao-precoce-em-fluxos-complexos-4l0e
**Disclaimer** This text was generated by Generative AI from the transcript of an episode of our channel, Dev Eficiente. **Introduction** In software development, it is common to be tempted to generalize behaviors and create interfaces from the very start. However, this approach can be harmful, especially in more complex flows where many variables need to be considered before making a decision. In this post, we will explore the importance of avoiding premature generalization and how it can impact the quality and efficiency of your code. **Understanding the Context** When we are developing a feature, it is crucial to understand all the variables involved before trying to generalize. For example, when implementing a payment flow, you may encounter different payment methods, such as credit card, PIX, and boleto. Each of these methods has its own particularities and specific data that need to be handled in a unique way. **The Danger of Premature Generalization** If you try to generalize from the start, without fully understanding the specific needs of each payment method, you may end up creating an interface that does not meet the real needs of your system. This can result in more complex, harder-to-maintain code, as well as increased implementation time. **The Recommended Approach** The recommended approach is to start in a specific way, implementing each feature in isolation. For example, when implementing credit card payment, you should focus only on the data and processes specific to that method. Later, when implementing boleto payment, you can copy part of the credit card code and adapt it for boleto. This process allows you to identify patterns and generalization opportunities in a more natural and efficient way. **A Practical Example** Let's consider a practical example of a payment flow. Initially, you might implement credit card payment, handling all the data specific to that method, such as the card number, expiry date, and CVV. Then, when implementing boleto payment, you will notice that some data, such as the customer's name and email, is common to both methods. At that point, you can start to generalize that common data, creating an interface that encapsulates it. **Benefits of the Specific-First Approach** By adopting this approach, you gain several benefits: 1. Clarity and Simplicity: Specific code is easier to understand and maintain. 2. Flexibility: You can adapt each implementation to its specific needs without worrying about a generic interface that may not fit every case. 3. Efficiency: Specific implementations let you deliver features faster, without the overhead of trying to generalize from the start. **Conclusion** Avoiding premature generalization is an essential practice for developing high-quality, efficient code. By focusing on the specific needs of each feature and identifying patterns along the way, you can create more useful and flexible interfaces. Remember, generalization should be a natural process that emerges from a complete understanding of the variables involved, not something imposed from the start. If you enjoyed this content and want to learn more about how to improve the quality and efficiency of your code, check out the Jornada Dev Eficiente. If you have any questions, leave a comment below. See you next time! 
I hope this post was useful to you. If you have any feedback or want to discuss the topic further, feel free to comment. See you next time! PS: [Check out the full episode on the channel](https://youtu.be/ChpZzXdeE_0)
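To illustrate the idea in code, here is a hypothetical Python sketch: two concrete payment types first, and then a small generalization extracted only from the fields they turned out to share.

```python
# Start specific: each payment method models exactly what it needs.
from dataclasses import dataclass

@dataclass
class CreditCardPayment:
    customer_name: str
    customer_email: str
    card_number: str
    expiry: str
    cvv: str

@dataclass
class BoletoPayment:
    customer_name: str
    customer_email: str
    due_date: str
    barcode: str

# Only after both exist does the shared shape become obvious, so the
# generalization stays small and grounded in real usage.
@dataclass
class CustomerInfo:
    name: str
    email: str
```

Note that the common interface emerged from the duplication, rather than being guessed up front.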
asouza
1,880,625
Sentiment Analysis using CPU-Friendly Small Language Models 😡 😃
What are Small Language Models? 🤔 You have probably heard of Large Language Models, but...
0
2024-06-17T13:06:59
https://dev.to/llmware/sentiment-analysis-using-cpu-friendly-small-language-models-1idd
ai, python, programming, tutorial
## What are Small Language Models? 🤔 You have probably heard of Large Language Models, but what about Small Language Models (SLMs)? These compact models excel at specialized tasks, are ideal to use with limited resources, and have faster processing times. Let's explore LLMWare's SLIM models! *** ## Harnessing SLIM Models 💡 In many real-world automated workflows, accomplishing a task requires multiple steps. LLMWare's SLIM models are specifically designed to streamline these processes. These models integrate seamlessly into your programmatic environment, acting as decision points within complex, multistep workflows. They achieve this by producing structured outputs like JSON, SQL, or Python dictionaries, ensuring smooth and efficient workflow automation. ![slim gif](https://media.giphy.com/media/j13anwZ64gafOwF69H/giphy.gif) Additionally, these models are designed to be CPU-friendly, making them accessible to many machines. Let's explore how LLMWare's slim-sentiment-tool, equipped with 1.1 billion parameters, can enhance our workflow in sentiment analysis with two practical examples! *** ## For the Visual Learners... 📺 Here is a video discussing the same topic as this article. A good idea would be to watch the video, and then work through the steps in this article. {% youtube ERCHP21oAN8 %} *** ## Framework 🖼️ **LLMWare** For our new readers, LLMWare is a comprehensive, open-source framework that provides a unified platform for application patterns based on LLMs, including Retrieval Augmented Generation (RAG). Please run `pip3 install llmware` in the command line to download the package. *** ## Importing Libraries and Creating Transcripts 📚 ```python from llmware.agents import LLMfx ``` **LLMfx**: A class within `llmware` that provides an interface to models for interacting with text, e.g. to first perform named entity recognition (NER) and then answer a question you want to have answered. It orchestrates multi-model, multi-step processes using SLIM classifier models with centralized journaling, structured work management, and information aggregation. ```python earnings_transcripts = [ "This is one of the best quarters we can remember for the industrial sector with significant growth across the " "board in new order volume, as well as price increases in excess of inflation. We continue to see very strong " "demand, especially in Asia and Europe. Accordingly, we remain bullish on the tier 1 suppliers and would be " "accumulating more stock on any dips. ", "Not the worst results, but overall we view as negative signals on the direction of the economy, and the likely " "short-term trajectory for the telecom sector, and especially larger market leaders, including AT&T, Comcast, and" "Deutsche Telekom.", "This quarter was a disaster for Tesla, with falling order volume, increased costs and supply, and negative " "guidance for future growth forecasts in 2024 and beyond.", "On balance, this was an average result, with earnings in line with expectations and no big surprises to either " "the positive or the negative." ] ``` Here, we instantiate the transcripts we will be using for sentiment analysis. Feel free to replace them with the data you would like to analyze. 
*** # Example 1: Simple Programmatic Use of Sentiment Classification ![sentiment gif](https://media.giphy.com/media/CwkYTf0u1xshFP9Keu/giphy.gif) ## Load the Agent 🤖 ```python agent = LLMfx(verbose=True) agent.load_tool("sentiment") sentiment = agent.sentiment(text) ``` This code will create the agent using LLMfx, load the sentiment analysis tool, and use it to make an inference on the text's sentiment. **Verbose**: This parameter determines if the events written in the journal are also written to stdout. It defaults to True. *** ## Print the Output 🖨️ ```python print("sentiment: ", sentiment) for keys, values in sentiment.items(): print(f"{keys}-{values}") ``` This will print the inferred sentiment, the confidence in that sentiment, and the probability of all three of the emotions. *** ## Make a Decision Based on Sentiment ⚖️ ```python sentiment_value = sentiment["llm_response"]["sentiment"] confidence_level = sentiment["confidence_score"] if "positive" in sentiment_value: print("sentiment is positive .... will take 'positive' analysis path ...", sentiment_value) if "positive" in sentiment_value and confidence_level > 0.8: print("sentiment is positive with high confidence ... ", sentiment_value, confidence_level) return sentiment ``` This code extracts and stores the sentiment in the variable `sentiment_value`. Next, it retrieves the sentiment's confidence score and stores it in the variable `confidence_level`. The process then goes through two decision points: one if the sentiment is positive, and another if the sentiment is positive with a confidence level greater than 0.8. *** ## Complete Code for First Example ✅ ```python def get_one_sentiment_classification(text): agent = LLMfx(verbose=True) agent.load_tool("sentiment") sentiment = agent.sentiment(text) print("sentiment: ", sentiment) for keys, values in sentiment.items(): print(f"{keys}-{values}") sentiment_value = sentiment["llm_response"]["sentiment"] confidence_level = sentiment["confidence_score"] if "positive" in sentiment_value: print("sentiment is positive .... will take 'positive' analysis path ...", sentiment_value) if "positive" in sentiment_value and confidence_level > 0.8: print("sentiment is positive with high confidence ... ", sentiment_value, confidence_level) return sentiment ``` *** # Example 2: Batch Sentiment Analysis with Iteration ![simpsons gif](https://media.giphy.com/media/3orifaUuOHEqbRHqHm/giphy.gif) ## Load the Agent and Work 🤖 ```python agent = LLMfx() agent.load_tool("sentiment") agent.load_work(earnings_transcripts) ``` The first two lines are the same as in the first example: load the agent and the sentiment analysis tool. In this example, we will also use the LLMfx load_work method. The load_work method is a flexible input mechanism: pass a string, list, dictionary, or combination, and it will 'package' it as iterable units of processing work for the agent. *** ## Iterate Through Transcripts and Print Results 🔁 ```python while True: output = agent.sentiment() # print("update: test - output - ", output) if not agent.increment_work_iteration(): break ``` This code iterates through the transcripts and performs sentiment analysis on each one. For each iteration, it prints the sentiment, confidence score, and probability distribution of all three emotions. The loop terminates when the increment_work_iteration method of the agent returns False. 
*** ## Output Response and Clear Work and State 📋 ```python response_output = agent.response_list agent.clear_work() agent.clear_state() return response_output ``` The `response_output` variable holds a list containing all the sentiment analysis responses to the transcripts. After gathering these responses, we clear the agent's work queue and key state variables. *** ## Complete Code for Second Example ✅ ```python def review_batch_earning_transcripts(): agent = LLMfx() agent.load_tool("sentiment") agent.load_work(earnings_transcripts) while True: output = agent.sentiment() # print("update: test - output - ", output) if not agent.increment_work_iteration(): break response_output = agent.response_list agent.clear_work() agent.clear_state() return response_output ``` *** ## Main Block 🔌 ```python if __name__ == "__main__": sentiment = get_one_sentiment_classification(earnings_transcripts[0]) response_output = review_batch_earning_transcripts() ``` In our main block, we will run both of these examples. In the first run, we will only analyze the sentiment of the first element of `earnings_transcripts`. *** ## Fully Integrated Code with Example 1 and 2 🧩 ```python from llmware.agents import LLMfx earnings_transcripts = [ "This is one of the best quarters we can remember for the industrial sector with significant growth across the " "board in new order volume, as well as price increases in excess of inflation. We continue to see very strong " "demand, especially in Asia and Europe. Accordingly, we remain bullish on the tier 1 suppliers and would be " "accumulating more stock on any dips. ", "Not the worst results, but overall we view as negative signals on the direction of the economy, and the likely " "short-term trajectory for the telecom sector, and especially larger market leaders, including AT&T, Comcast, and" "Deutsche Telekom.", "This quarter was a disaster for Tesla, with falling order volume, increased costs and supply, and negative " "guidance for future growth forecasts in 2024 and beyond.", "On balance, this was an average result, with earnings in line with expectations and no big surprises to either " "the positive or the negative." ] def get_one_sentiment_classification(text): agent = LLMfx(verbose=True) agent.load_tool("sentiment") sentiment = agent.sentiment(text) print("sentiment: ", sentiment) for keys, values in sentiment.items(): print(f"{keys}-{values}") sentiment_value = sentiment["llm_response"]["sentiment"] confidence_level = sentiment["confidence_score"] if "positive" in sentiment_value: print("sentiment is positive .... will take 'positive' analysis path ...", sentiment_value) if "positive" in sentiment_value and confidence_level > 0.8: print("sentiment is positive with high confidence ... ", sentiment_value, confidence_level) return sentiment def review_batch_earning_transcripts(): agent = LLMfx() agent.load_tool("sentiment") agent.load_work(earnings_transcripts) while True: output = agent.sentiment() # print("update: test - output - ", output) if not agent.increment_work_iteration(): break response_output = agent.response_list agent.clear_work() agent.clear_state() return response_output if __name__ == "__main__": sentiment = get_one_sentiment_classification(earnings_transcripts[0]) response_output = review_batch_earning_transcripts() ``` You may also find the fully integrated code on our Github repo [here](https://github.com/llmware-ai/llmware/blob/main/examples/SLIM-Agents/sentiment-analysis.py) Additionally, the notebook version (ipynb) is available [here](https://github.com/llmware-ai/llmware/blob/main/examples/Notebooks/NoteBook_Examples/sentiment_analysis.ipynb) *** ## Conclusion 🏁 ![computer gif](https://media.giphy.com/media/MKorKFj0Muz4P0CI7D/giphy.gif) We've shown how LLMWare's SLIM models can be effectively utilized for sentiment analysis. The model's simplicity and powerful capabilities make it an excellent tool for analyzing sentiment and simplifying the multistep process of sentiment analysis. Thank you for exploring this topic with us. We trust you now have the understanding needed to implement LLMWare's SLIM models for sentiment analysis and leverage its full range of benefits. Please check out our Github and leave a star! https://github.com/llmware-ai/llmware Follow us on Discord here: https://discord.gg/MgRaZz2VAB
will_taner
1,891,218
MENA Generic Oncology Drug Market: Recent Developments and Analysis
MENA Generic Oncology Drug Market Size & Forecast The MENA Generic Oncology Drug Market was...
0
2024-06-17T13:05:09
https://dev.to/sri_harikrishnabalaji_2/mena-generic-oncology-drug-market-recent-developments-and-analysis-ho
MENA Generic Oncology Drug Market Size & Forecast The MENA Generic Oncology Drug Market was valued at USD 4 billion in 2023 and is expected to grow at a CAGR of 3% during the forecast period (2024-2032). MENA Generic Oncology Drug Market Analysis The MENA region’s generic oncology drug market is experiencing robust growth, fueled by several key drivers. Firstly, the rising prevalence of cancer across the region is propelling the demand for cost-effective treatment options, leading patients, and healthcare providers to turn to generic oncology drugs. Additionally, government initiatives aimed at improving access to healthcare and reducing treatment costs are further stimulating market expansion. Moreover, the increasing adoption of generic drugs by healthcare professionals due to their proven efficacy, safety, and affordability is bolstering market growth. Furthermore, favorable regulatory policies and streamlined approval processes for generic medications are encouraging pharmaceutical companies to invest in the development and production of oncology generics, enhancing market competitiveness. Overall, the MENA generic oncology drug market is poised for continued advancement, driven by a combination of rising cancer incidence, government support, healthcare professional acceptance, and regulatory facilitation. Request To Download Sample of This Strategic Report — https://univdatos.com/report/mena-generic-oncology-drug-market/get-a-free-sample-form.php?product_id=59037 The MENA Generic Oncology Drug Market Analysis The Middle East and North Africa (MENA) region is witnessing significant growth in the generic oncology drug market, driven by increasing cancer prevalence, rising healthcare expenditure, and a growing focus on cost-effective treatment solutions. This article delves into the latest developments and provides an in-depth analysis of the MENA generic oncology drug market. Growing Cancer Burden in the MENA Region Cancer is a major health concern in the MENA region, with an increasing number of cases each year. According to the World Health Organization (WHO), cancer rates in the region are expected to double by 2040. This alarming rise is attributed to factors such as aging populations, urbanization, lifestyle changes, and improved diagnostic capabilities. The high cost of patented oncology drugs poses a significant financial burden on healthcare systems and patients. As a result, there is a growing demand for affordable and accessible treatment options, leading to the rise of generic oncology drugs. These drugs are bioequivalent to their branded counterparts but are offered at a fraction of the cost, making them a viable solution for managing the cancer burden in the region. Key Market Drivers 1. Cost-Effectiveness and Affordability: Generic oncology drugs are significantly cheaper than branded drugs, making cancer treatment more affordable for patients and healthcare providers. This cost-effectiveness is a crucial factor driving the adoption of generics in the MENA region. 2. Patent Expirations: The expiration of patents for several key oncology drugs has opened the market for generic manufacturers. Companies are capitalizing on this opportunity to introduce generic versions of high-demand cancer treatments. 3. Government Initiatives and Policies: Many MENA countries are implementing policies to promote the use of generic drugs. Governments are encouraging local production and ensuring that generics meet international quality standards, thus fostering market growth. 4. 
Improving Healthcare Infrastructure: Investments in healthcare infrastructure and increased access to medical facilities are enhancing the availability of cancer treatments. This improvement is facilitating the distribution and acceptance of generic oncology drugs. 𝐃𝐨𝐰𝐧𝐥𝐨𝐚𝐝 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐢𝐜 𝐒𝐚𝐦𝐩𝐥𝐞 𝐏𝐃𝐅 𝐇𝐞𝐫𝐞- https://univdatos.com/report/mena-generic-oncology-drug-market/get-a-free-sample-form.php?product_id=59037 Recent Developments Regulatory Approvals and Collaborations In recent years, there has been a surge in regulatory approvals for generic oncology drugs in the MENA region. For instance, the Saudi Food and Drug Authority (SFDA) has been actively approving generic versions of essential cancer drugs. These approvals ensure that patients have access to safe and effective treatments. Collaborations between local and international pharmaceutical companies are also on the rise. These partnerships aim to enhance local manufacturing capabilities and ensure a steady supply of generic oncology drugs. For example, Jordan's Hikma Pharmaceuticals has established numerous partnerships to expand its oncology portfolio and market reach. Market Expansion and Product Launches Several pharmaceutical companies are expanding their presence in the MENA generic oncology drug market. In 2023, UAE-based Neopharma announced the launch of multiple generic oncology drugs, targeting various cancer types. These launches are expected to cater to the growing demand for affordable cancer treatments in the region. Additionally, multinational companies are investing in the MENA market to capitalize on the growth opportunities. For instance, India's Dr. Reddy's Laboratories has been expanding its product portfolio and distribution network in the region, aiming to become a key player in the generic oncology segment. Technological Advancements Technological advancements in drug development and manufacturing are playing a crucial role in the growth of the MENA generic oncology drug market. Companies are adopting advanced technologies to enhance the efficacy and safety of generic drugs. For instance, the use of nanotechnology and biosimilars is gaining traction, offering improved treatment options for cancer patients. Market Challenges Despite the positive growth trajectory, the MENA generic oncology drug market faces several challenges: 1. Regulatory Hurdles: Stringent regulatory requirements and lengthy approval processes can delay the introduction of generic drugs. Harmonizing regulations across the region could streamline approvals and enhance market accessibility. 2. Quality Concerns: Ensuring the quality and bioequivalence of generic drugs is critical. Regulatory bodies need to enforce strict quality control measures to maintain patient trust and safety. 3. Market Competition: The influx of generic manufacturers has intensified competition in the market. Companies need to differentiate themselves through innovation, quality, and strategic partnerships to gain a competitive edge. 𝐓𝐨 𝐆𝐞𝐭 𝐈𝐧𝐬𝐢𝐠𝐡𝐭𝐟𝐮𝐥 𝐑𝐞𝐬𝐞𝐚𝐫𝐜𝐡, 𝐑𝐞𝐪𝐮𝐞𝐬𝐭 𝐏𝐃𝐅 𝐂𝐨𝐩𝐲 — https://univdatos.com/report/mena-generic-oncology-drug-market/get-a-free-sample-form.php?product_id=59037 Future Outlook The future of the MENA generic oncology drug market looks promising, with several growth opportunities on the horizon. Increasing investments in healthcare infrastructure, favorable government policies, and rising awareness about cancer treatment options will continue to drive market expansion. 
Moreover, the focus on personalized medicine and targeted therapies is expected to shape the future of oncology treatment. Generic drug manufacturers are likely to explore these avenues to offer more precise and effective cancer treatments. In conclusion, the MENA generic oncology drug market is poised for significant growth, driven by the need for cost-effective cancer treatments and supportive government initiatives. Continuous advancements in technology and strategic collaborations will further propel the market, ensuring better access to life-saving cancer therapies for patients across the region. Contact Us: UnivDatos Market Insights Email - contact@univdatos.com Contact Number - +1 9782263411 Website - www.univdatos.com
sri_harikrishnabalaji_2
1,891,217
Latest Technologies in React Native
1. Overview of React Native Technology React Native is an open-source cross-platform...
0
2024-06-17T13:04:04
https://dev.to/happyer/latest-technologies-in-react-native-cmm
react, reactnative, mobile, development
## 1. Overview of React Native Technology React Native is an open-source cross-platform mobile application development framework developed by Facebook. It allows developers to write native applications using JavaScript and React. With React Native, developers can leverage React's declarative programming style and component-based architecture to achieve efficient and maintainable code structures. Additionally, React Native provides a rich set of UI components and native modules, enabling developers to create user experiences comparable to native applications with ease. ### 1.1. Cross-Platform Advantages The core advantage of React Native lies in its cross-platform capabilities. Developers only need to write code once, and the same application can run on both iOS and Android platforms. This significantly improves development efficiency and reduces maintenance costs. Code Example: ```javascript // Creating a simple cross-platform application using React Native import React from 'react'; import { View, Text } from 'react-native'; const App = () => { return ( <View style={{ flex: 1, justifyContent: 'center', alignItems: 'center' }}> <Text>Hello, World!</Text> </View> ); }; export default App; ``` The above code example demonstrates how to create a simple cross-platform application using React Native. The same code can run on both iOS and Android platforms. ### 1.2. Component-Based Development React Native adopts a component-based development approach, where each component can be independently encapsulated and reused. This development model makes the code structure clear, easy to understand, and maintain. Code Example: ```javascript // Creating a reusable component import React from 'react'; import { View, Text } from 'react-native'; const CustomButton = ({ title, onPress }) => { return ( <View style={{ backgroundColor: 'blue', padding: 10 }}> <Text style={{ color: 'white' }}>{title}</Text> </View> ); }; export default CustomButton; ``` The above code example demonstrates a reusable `CustomButton` component that accepts `title` and `onPress` properties. This component-based development approach makes the code structure clear, easy to understand, and maintain. ### 1.3. Hot Reloading and Debugging React Native supports hot reloading, allowing developers to see the effects of code changes in real-time without restarting the application. The framework also provides powerful debugging tools to help developers quickly locate and resolve issues. Developers can trigger the hot reload feature by pressing shortcut keys (usually Cmd+D or Cmd+R). Additionally, developers can use the Chrome browser for remote debugging to view console output, network requests, and more. ## 2. Latest Technological Advancements The React Native team continuously pushes the framework's iteration and upgrades, bringing more new features and performance optimizations to developers. Recently, the release of React Native 0.74 has become a focal point in the industry. ### 2.1. Yoga 3.0 Layout Engine The introduction of the Yoga 3.0 layout engine in React Native 0.74 brings a series of performance improvements aimed at enhancing the rendering performance and stability of applications. Here are the main performance improvements of Yoga 3.0 in React Native 0.74: **1. More Efficient Layout Calculation** Yoga 3.0 optimizes the layout calculation process, reducing calculation time and memory usage. This is achieved through: - **Algorithm Optimization**: The layout calculation algorithm has been optimized to improve calculation efficiency. 
- **Reducing Redundant Calculations**: By caching intermediate calculation results, redundant calculations are avoided, thus reducing calculation time and memory usage. - **Better Memory Management**: Memory allocation and recycling strategies have been optimized to reduce memory leaks and unnecessary memory usage. **2. Better CSS Layout Support** Yoga 3.0 supports more CSS layout properties, allowing developers to control element layouts more flexibly. This includes: - **Support for More Layout Properties**: Improved support for properties like `align-items`, `justify-content`, etc. - **Better Compatibility**: Enhanced compatibility with web standards, allowing React Native applications to better adapt to web platform layout requirements. **3. Web-Based Component Rendering Optimization** Yoga 3.0 introduces web-based component rendering capabilities, which is a significant improvement for applications that need to run on both web and native platforms. Specifically: - **Shared Layout Calculation**: Yoga 3.0 can share layout calculation results between web and native platforms, reducing redundant calculation overhead. - **Optimized Rendering Process**: By optimizing the rendering process, the transition time from layout calculation to final rendering is reduced, improving rendering performance. **4. Enhanced Stability** In addition to performance improvements, Yoga 3.0 also enhances stability, reducing errors and exceptions during the layout calculation process. This includes: - **Fixing Known Issues**: Addressing layout calculation errors and exceptions present in previous versions. - **Improved Error Handling**: Enhanced error handling mechanisms to more accurately locate and resolve issues during the layout calculation process. In summary, the introduction of the Yoga 3.0 layout engine in React Native 0.74 brings a series of performance improvements, including more efficient layout calculations, better CSS layout support, web-based component rendering optimization, and enhanced stability. These improvements will help improve the rendering performance and stability of React Native applications, providing developers with a better development experience. ### 2.2. Bridgeless Mode React Native 0.74 introduces Bridgeless Mode, an optimized way of communication between JavaScript and native platforms. In Bridgeless Mode, communication between JavaScript and native platforms no longer relies on a bridge but instead uses shared memory directly. This significantly reduces communication latency and improves application runtime efficiency. To enable Bridgeless Mode, add the following configuration to your project's `metro.config.js` file: ```javascript module.exports = { resolver: { sourceExts: ['js', 'jsx', 'ts', 'tsx', 'json'], extraNodeModules: new Proxy( {}, { get: (target, name) => path.join(process.cwd(), `node_modules/${name}`), } ), }, transformer: { getTransformOptions: async () => ({ transform: { experimentalImportSupport: false, inlineRequires: false, }, }), }, serializer: { getPolyfills: () => [], }, server: { port: 8081, projectRoot: path.resolve(__dirname, '..'), watchFolders: [path.resolve(__dirname, '..')], enhanceMiddleware: (middleware) => middleware, }, platforms: ['ios', 'android'], maxWorkers: 2, resetCache: false, sourceMaps: false, sourceMapUrl: '', sourceMapSourceUri: '', enableBabelRCLookup: false, enableHermes: false, // Set this value according to your project's needs hermesCommand: '../node_modules/hermes-engine/%OS-BIN%/hermesc', }; ``` ### 2.3. 
TypeScript Integration TypeScript, a strongly-typed, object-oriented programming language, helps developers write more robust and maintainable code. The React Native community has provided comprehensive TypeScript support, including type definition files and compilation toolchains. This allows developers to seamlessly integrate TypeScript into React Native projects, enjoying the benefits of type checking, code completion, and other advanced features. React Native 0.74 optimizes TypeScript support, making it easier for developers to use TypeScript in their projects. Specifically, this version fixes some TypeScript-related bugs and improves type inference and type checking performance. ### 2.4. Performance Optimization React Native 0.74 includes several performance optimizations, such as: - Optimized the communication mechanism between JavaScript and native platforms, reducing communication latency. - Improved the performance of the animation system, making animations smoother. React Native has made significant progress in animation performance. The new animation API provides richer animation effects and finer control. Additionally, the framework has optimized the rendering process of animations, reducing unnecessary redraws and layout calculations, thereby improving the smoothness and naturalness of animations. - Optimized memory management, reducing memory usage. ### 2.5. Other Improvements React Native 0.74 also includes the following improvements: - Improved startup performance, making applications start faster. - Enhanced error handling and debugging tools, making it easier for developers to locate and resolve issues. - Added support for new platforms, such as Windows and macOS. ## 3. Summary React Native is an open-source cross-platform mobile application development framework developed by Facebook. It allows developers to write native applications using JavaScript and React. Its core advantage lies in its cross-platform capabilities, enabling developers to write code once and run the same application on both iOS and Android platforms. React Native adopts a component-based development approach and supports hot reloading and debugging features, significantly improving development efficiency and code maintainability. In the latest React Native 0.74 version, the framework introduces the Yoga 3.0 layout engine, bringing more efficient layout calculations, better CSS layout support, web-based component rendering optimization, and enhanced stability. Additionally, the introduction of Bridgeless Mode optimizes communication between JavaScript and native platforms, significantly improving application runtime efficiency. The optimization of TypeScript integration makes it easier for developers to use TypeScript in their projects, enjoying the benefits of type checking and code completion. React Native 0.74 also includes several performance optimizations, such as improvements in communication mechanisms, animation systems, and memory management. Other improvements include enhanced startup performance, error handling, debugging tools, and support for new platforms like Windows and macOS. Overall, the release of React Native 0.74 brings more new features and performance optimizations to developers, further solidifying its position in the field of cross-platform mobile application development. ### 4. Codia AI's products Codia AI has rich experience in multimodal, image processing, development, and AI. 
1.[**Codia AI Figma to code:HTML, CSS, React, Vue, iOS, Android, Flutter, Tailwind, Web, Native,...**](https://codia.ai/s/YBF9) ![Codia AI Figma to code](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xml2pgydfe3bre1qea32.png) 2.[**Codia AI DesignGen: Prompt to UI for Website, Landing Page, Blog**](https://codia.ai/t/pNFx) ![Codia AI DesignGen](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/55kyd4xj93iwmv487w14.jpeg) 3.[**Codia AI Design: Screenshot to Editable Figma Design**](https://codia.ai/d/5ZFb) ![Codia AI Design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qrl2lyk3m4zfma43asa0.png) 4.[**Codia AI VectorMagic: Image to Full-Color Vector/PNG to SVG**](https://codia.ai/v/bqFJ) ![Codia AI VectorMagic](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hylrdcdj9n62ces1s5jd.jpeg)
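One thing section 2.3 describes but does not show is what TypeScript actually looks like inside a React Native component. A minimal sketch (the component name and props here are illustrative, not from the 0.74 release notes):

```tsx
import React from 'react';
import { Text, View } from 'react-native';

// Props are checked at compile time; passing a non-string `name`
// or omitting it becomes a type error instead of a runtime bug.
type GreetingProps = {
  name: string;
};

const Greeting = ({ name }: GreetingProps) => (
  <View>
    <Text>Hello, {name}!</Text>
  </View>
);

export default Greeting;
```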
happyer
1,891,216
Image Optimization and Lazy Loading
In the era of modern frontends, we often raise the question, "How do I make my application more...
0
2024-06-17T13:03:18
https://dev.to/joaopaulo_avila/image-optimization-and-lazy-loading-30gj
angular, tutorial
In the era of modern frontends, we often raise the question, "How do I make my application more performant?" In this article, I will discuss image optimization and lazy loading to help improve performance and make our application increasingly faster. **_Images are always a problem in the life of frontenders._** Very large and heavy images can hinder our application in various aspects, from super slow loading when you have a not-so-fast internet connection, especially when on 3G or 4G, to the initial bootstrap on a desktop, failing to provide elegant feedback to the user. There are many other issues we encounter along the way as well, hehe. Here, I present some good practice ideas to assist in application development and perhaps provide some tips for those facing this issue. We will use the `NgOptimizedImage` directive, which helps improve image loading speed. We will create the project using the Angular CLI. I will assume you already know what we are talking about and import the `NgOptimizedImage` directive in the AppModule file. **Remember that this directive is available from version 14.2.0 onwards.** ```ts import { NgOptimizedImage } from '@angular/common'; @NgModule({ imports: [ NgOptimizedImage ] }) export class AppModule { } // If you are using standalone, it can be done as below. @Component({ standalone: true, imports: [NgOptimizedImage], }) class StandaloneComponent {} ``` Change the `src` property to `ngSrc` in your HTML template and it will also be necessary to set a width and height. ```html <img ngSrc="../assets/img/image.jpeg" width="4000" height="3000" /> ``` For responsive images, provide various sizes so the browser can choose the best one to render, thus avoiding performance loss. ```html <img ngSrc="../assets/img/image.jpeg" width="4000" height="3000" ngSrcset="200w, 400w, 600w, 800w, 1000w, 1200w, 1600w, 2000w, 3000w" /> ``` The Angular `NgOptimizedImage` directive already loads images by default with lazy loading, but non-visible images should also be loaded this way. For example, images in a carousel or those lower down the page do not need to be "loaded" when the page is first called. For this, we use another attribute to assist, which is `priority`. ```html <img ngSrc="../assets/img/image.jpeg" width="4000" height="3000" ngSrcset="200w, 400w, 600w, 800w, 1000w, 1200w, 1600w, 2000w, 3000w" priority /> ``` With this, all images shown on the page to the user will be marked with priority and will load first. If you have a CDN, from this version of Angular onwards, you can add the provider `provideImageKitLoader`, making it easier to import images. ```ts import { NgOptimizedImage, provideImageKitLoader } from '@angular/common'; @NgModule({ imports: [ NgOptimizedImage ], providers: [ provideImageKitLoader("cdn_url") ], }) export class AppModule { } ``` By doing this, you only need to use the image name in the `src`, without always typing the full image path, as it will already be called in the provider. ```html <img ngSrc="image.jpeg" width="4000" height="3000" ngSrcset="200w, 400w, 600w, 800w, 1000w, 1200w, 1600w, 2000w, 3000w" priority /> ``` Below are some data showing the differences between using lazy loading and not. 
Without Lazy Loading ![Without Lazy Loading](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/js5k9rl14u7d7sa631z5.png) With Lazy Loading ![With Lazy Loading](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqqxfj27dcnmm3etk4e7.png) These were some tips on how we can use image lazy loading to our advantage, bringing faster loading speeds to our pages and providing good usability for the user. twitter @joaopaulo_avila
joaopaulo_avila
1,891,215
Concurrency in Go: Goroutines, Channels, and Concurrency Patterns
Introduction to Goroutines Concurrency in Go is a core feature that allows programs to...
0
2024-06-17T13:02:50
https://dev.to/gophers_kisumu/concurrency-in-go-goroutines-channels-and-concurrency-patterns-4hlc
## Introduction to Goroutines Concurrency in Go is a core feature that allows programs to perform multiple tasks simultaneously. One of the key tools for achieving concurrency in Go is the goroutine. Goroutines are lightweight threads managed by the Go runtime, enabling efficient execution of concurrent tasks. ## Creating and Managing Goroutines To create a goroutine, you simply use the `go` keyword followed by a function call. Here’s a basic example: ```go package main import ( "fmt" "time" ) func sayHello() { fmt.Println("Hello, Goroutine!") } func main() { go sayHello() time.Sleep(1 * time.Second) // Give the goroutine time to complete } ``` In this example, `sayHello` is executed as a goroutine, allowing the `main` function to continue running concurrently. The `time.Sleep` call ensures the program doesn’t terminate before the goroutine finishes executing. ## Channels Channels are a powerful feature in Go that allow goroutines to communicate with each other and synchronize their execution. Channels can be used to send and receive values between goroutines. ### Understanding Channels A channel is created using the `make` function: ```go ch := make(chan int) ``` This creates a channel that can send and receive `int` values. ### Sending and Receiving Data To send data to a channel, you use the `<-` operator: ```go ch <- 42 ``` To receive data from a channel, you also use the `<-` operator: ```go value := <-ch ``` Here’s a complete example demonstrating sending and receiving data: ```go package main import ( "fmt" ) func sendData(ch chan int) { ch <- 42 } func main() { ch := make(chan int) go sendData(ch) value := <-ch fmt.Println(value) } ``` ### Buffered vs Unbuffered Channels Channels can be either buffered or unbuffered. An unbuffered channel requires both send and receive operations to be ready before any communication can occur. A buffered channel allows sending and receiving to be decoupled. ```go // Buffered channel with capacity of 2 ch := make(chan int, 2) ch <- 1 ch <- 2 fmt.Println(<-ch) fmt.Println(<-ch) ``` Buffered channels are useful when you want to allow some amount of data to be sent without requiring immediate receiving. ## Concurrency Patterns ### Select Statement The `select` statement allows a goroutine to wait on multiple communication operations. ```go package main import ( "fmt" "time" ) func main() { ch1 := make(chan string) ch2 := make(chan string) go func() { time.Sleep(1 * time.Second) ch1 <- "one" }() go func() { time.Sleep(2 * time.Second) ch2 <- "two" }() select { case msg1 := <-ch1: fmt.Println("Received", msg1) case msg2 := <-ch2: fmt.Println("Received", msg2) } } ``` In this example, the `select` statement waits for either channel to send a message and then proceeds accordingly. ### Worker Pools A worker pool is a pattern used to manage a pool of goroutines that perform tasks from a shared queue. 
```go package main import ( "fmt" "sync" ) func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) { defer wg.Done() for job := range jobs { fmt.Printf("Worker %d processing job %d\n", id, job) results <- job * 2 } } func main() { const numJobs = 5 jobs := make(chan int, numJobs) results := make(chan int, numJobs) var wg sync.WaitGroup for w := 1; w <= 3; w++ { wg.Add(1) go worker(w, jobs, results, &wg) } for j := 1; j <= numJobs; j++ { jobs <- j } close(jobs) wg.Wait() close(results) for result := range results { fmt.Println("Result:", result) } } ``` In this example, a set of workers process jobs concurrently, distributing the workload among multiple goroutines. ### Pipeline Pattern The pipeline pattern connects multiple stages of processing, where the output of one stage is the input of the next. ```go package main import ( "fmt" ) func gen(nums ...int) <-chan int { out := make(chan int) go func() { for _, n := range nums { out <- n } close(out) }() return out } func sq(in <-chan int) <-chan int { out := make(chan int) go func() { for n := range in { out <- n * n } close(out) }() return out } func main() { nums := gen(2, 3, 4) squares := sq(nums) for n := range squares { fmt.Println(n) } } ``` In this example, the `gen` function generates numbers and sends them to the `sq` function, which squares them, forming a processing pipeline. ## Conclusion Concurrency in Go, facilitated by goroutines and channels, provides a powerful yet simple way to handle multiple tasks simultaneously. Understanding and utilizing concurrency patterns such as the `select` statement, worker pools, and pipelines can significantly enhance the efficiency and responsiveness of Go applications.
gophers_kisumu
1,542,229
Are you well-paid?
Checking out wages and compensation can give us some cool insights into how in-demand different...
0
2023-07-19T14:32:50
https://dev.to/developernationsurvey/are-you-well-paid-1069
salaries, news, discuss
Checking out wages and compensation can give us some cool insights into how in-demand different skills are in the job market. Even with layoffs happening in tech companies, the number of professional developers worldwide is still on the rise. One reason for this is that modern businesses have realized that **tech is becoming essential for all sorts of industries**. So, whether it's an online shop, or a car manufacturer, they all need to compete for developers to keep up with the tech game. Compensation plays a major role in attracting and retaining talented developers. Companies need to understand the salary, bonuses, equity, and perks they should offer to get those skilled folks on board. On the flip side, developers can use compensation data to make decisions about their own careers, like negotiating salaries and benefits. Let's dive into our latest Developer Nation survey, where we explore how developers get paid. We'll see how compensation varies in different regions, and how companies and developers need to consider living costs when negotiating. If you are interested to share your experiences to shape the trends you can participate in our NEW Developer Nation survey [here](https://survey.developernation.net/name/dn25/branch/main?utm_medium=some&utm_source=dev_to&utm_campaign=dev_to_well_paid). As a thank you, we will send you a virtual, goody bag with learning opportunities such as access to courses, ebooks, and certifications. Some cool findings: about 9% of professional developers make less than $1,000 a year, which includes part-timers, interns, and those on commission. As devs gain experience, they can command higher pay, with around $4,000 more each year for every extra year of experience. At the other end of the spectrum, roughly 6% of devs earn more than $200,000 per year, which puts them in the top 1% of earners worldwide. This shows that **developers generally earn more than folks in many other sectors**. ## Regional differences North American professional developers report the highest average annual compensation – more than $100,000. The median compensation in the region, however, is closer to $75,000. Meanwhile, on the opposite end of the spectrum, **developers working in South Asia report the lowest average compensation of just under $27,000** and the median compensation is around $5,500 per year. As is frequently the case with compensation, those with higher earnings greatly inflate the average, as is evident when we compare the median vs the average annual compensation. Anyone who has travelled outside their hometown recognises that the costs of goods and services can vary depending on where you are in the world. Compensation very often reflects these differences in the cost of living. As an example, we examine two countries with large developer populations: the United States of America and the People’s Republic of China. The median compensation of developers in the USA is around $75,000 per year. This is five times greater than the median developer compensation in China of $15,000 per year. However, when we account for differences in costs of living using the purchasing power parity index, we see that the average developer in China earning $15,000 per year can afford similar goods and services as a developer in the USA earning $25,000 a year. 
In practical terms, this means that developers in the USA still generally enjoy a higher wage compared to Chinese developers, but by a lesser margin (3 times more vs 5 times more) than is apparent when we directly compare compensation. ## Perceptions surrounding compensation While about half of developers think they're being paid fairly, **around 39% feel underpaid**, and 11% believe they're overpaid. Factors like gender, experience, company size, and education level can affect these sentiments. We find that men are significantly more likely to report feeling underpaid in their current role. More specifically, 16% of men report feeling underpaid compared to 11% of women and 14% of developers who identified as non-binary. Conversely, **7% of women feel overpaid compared to 4% of men and 1% of non-binary individuals**. We additionally see that developers with more experience and those working for larger companies are more likely to report feeling underpaid. For each additional year that a developer gains in experience, we estimate that there is approximately a 7% increase in the odds that the developer will report feeling underpaid compared to fairly compensated. This suggests that companies do not financially value experience to the same degree as developers do amongst themselves. Finally, if a developer has an undergraduate degree in software engineering, they are more likely to report feeling underpaid. The odds of a developer with an undergraduate degree in software engineering feeling underpaid vs paid fairly, are 9% greater when compared to all other developers. This effect disappears, however, once developers have a postgraduate degree; as having a postgraduate degree increases the odds of feeling overpaid by 50% compared to not having a postgraduate degree. Visit our Developer Nation [Blog](https://developernation.net/blog) for more insights.
developernationsurvey
1,891,214
I discovered Rust's zero-cost abstraction
Today I discovered Rust's zero-cost abstraction and learned how it optimizes software efficiency...
0
2024-06-17T13:02:25
https://dev.to/ashsajal/i-discovered-rusts-zero-cost-abstraction-akk
rust, coding, programming, tutorial
**Today I discovered Rust's zero-cost abstraction and learned how it optimizes software efficiency without sacrificing expressive code design.** Rust, celebrated for its emphasis on performance and safety, introduces a concept pivotal to its design philosophy: zero-cost abstraction. This principle allows developers to use high-level constructs like traits and generics without incurring runtime overhead, ensuring that the resulting binaries are as efficient as if they were handcrafted with lower-level approaches. ### Understanding Zero-Cost Abstraction in Rust At the heart of Rust's zero-cost abstraction lies its powerful type system and ownership model. These features enable the compiler to perform rigorous static analysis, ensuring memory safety and eliminating pitfalls such as null pointer dereferencing or data races. By enforcing these rules at compile-time rather than runtime, Rust achieves both safety and performance concurrently. Let's explore this through an example: ```rust // Define a trait `Shape` with an abstract method `area` trait Shape { fn area(&self) -> f64; } // Implement the `Shape` trait for a Rectangle struct struct Rectangle { width: f64, height: f64, } impl Shape for Rectangle { fn area(&self) -> f64 { self.width * self.height } } // Implement the `Shape` trait for a Circle struct struct Circle { radius: f64, } impl Shape for Circle { fn area(&self) -> f64 { std::f64::consts::PI * self.radius * self.radius } } // Function that calculates and prints the area of any type implementing `Shape` fn print_area(shape: &impl Shape) { println!("Area: {}", shape.area()); } fn main() { let rect = Rectangle { width: 3.0, height: 4.0 }; let circle = Circle { radius: 2.5 }; // Calling the function `print_area` with different shapes print_area(&rect); // Output: Area: 12 print_area(&circle); // Output: Area: 19.634954084936208 } ``` ### Exploring Zero-Cost Abstraction in Action 1. **Trait Definition (`Shape`)**: - The `Shape` trait defines a common interface for shapes, requiring them to implement the `area` method that returns a floating-point number (`f64`). 2. **Structs and Trait Implementation**: - Two structs, `Rectangle` and `Circle`, implement the `Shape` trait with their respective implementations of the `area` method. - The `Rectangle` calculates its area by multiplying its width and height, while the `Circle` computes its area using the formula π * radius^2. 3. **Static Dispatch**: - In the `print_area` function, the parameter `shape` is of type `&impl Shape`, meaning it accepts a reference to any type that implements the `Shape` trait. - Rust's compiler utilizes static dispatch (monomorphization) to generate specific versions of `print_area` for each concrete type (`Rectangle` and `Circle`) at compile-time. This optimization ensures that there is no runtime overhead associated with dynamic dispatch. 4. **Efficiency and Performance**: - By leveraging Rust's zero-cost abstraction, developers can write clear and expressive code using traits and generics without worrying about performance penalties. - The compiler optimizes method calls, inlining them directly into the generated machine code, thereby minimizing any overhead typically associated with polymorphic behavior in other languages. ### Learning Experience Exploring Rust's zero-cost abstraction has deepened my appreciation for how language design choices can harmonize safety with efficiency. 
By empowering developers to employ high-level abstractions without compromising on performance, Rust ensures that applications built with it are not only robust and maintainable but also performant in demanding environments. This capability makes Rust a compelling choice for systems programming, where both safety and speed are paramount. As I continue to explore Rust's ecosystem and apply its principles in practice, I am excited about its potential to redefine how we approach software development across diverse domains.
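For contrast with the static dispatch used in `print_area`, here is a minimal sketch of the dynamically dispatched alternative, assuming the `Shape`, `Rectangle`, and `Circle` definitions from the example above are in scope:

```rust
// `dyn Shape` opts into dynamic dispatch: each `area` call goes through
// a vtable at runtime instead of being monomorphized at compile time.
fn total_area(shapes: &[Box<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Box<dyn Shape>> = vec![
        Box::new(Rectangle { width: 3.0, height: 4.0 }),
        Box::new(Circle { radius: 2.5 }),
    ];
    println!("Total area: {}", total_area(&shapes)); // 12 + ~19.63
}
```

Trait objects pay a small vtable cost per call, but they allow heterogeneous collections that `&impl Shape` cannot express. This is the other half of zero-cost abstraction: Rust makes the trade-off explicit, so you only pay for dynamic dispatch when you ask for it.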
ashsajal
1,891,213
terminal app
I'm using robotjs to build an app. The problem is I don't know how to let the user perform more than...
0
2024-06-17T13:01:14
https://dev.to/uriahn/terminal-app-6p
node, terminal, app, robotjs
I'm using robotjs to build an app. The problem is that I don't know how to let the user perform more than one action. I mean that when they run the code, they should be able to do more than one thing in the app, not just a single click/type cycle. The `bot` function:

```js
// Assumes: const robot = require('robotjs'); a synchronous prompt such as
// require('prompt-sync')(); and a blocking sleep(ms) helper.
function bot() {
  console.log("checkStartBOT");
  sleep(2000);
  console.clear();

  // reset values to 0
  let xValue = 0;
  let yValue = 0;

  console.log("choose where you want to click");
  console.log("you have 5 seconds");
  console.log("just put your mouse there");
  sleep(5000);

  let pos_click = robot.getMousePos();
  console.log("Current mouse position:", pos_click);

  // confirm position
  function confirmPOS() {
    console.clear();
    let checkPos = prompt("Are these the coordinates Y/N? ");
    if (checkPos.toUpperCase() === "Y") {
      console.log("good!");
      xValue = pos_click.x;
      yValue = pos_click.y;
      stringASK();
    } else if (checkPos.toUpperCase() === "N") {
      console.log("Okay, let's try again.");
      bot();
    } else {
      console.log("type Y/N");
      confirmPOS();
    }
  }

  function stringASK() {
    console.log("Do you want to type a string? N for no");
    let ans = prompt("what do you want? ");
    let times = parseInt(prompt("how many times do you want this to run? "));
    if (ans.toUpperCase() === "N") {
      console.log("4 seconds to start");
      sleep(4000);
      robot.moveMouse(xValue, yValue);
      robot.mouseClick();
    } else {
      console.log("4 seconds to start");
      sleep(4000); // pause execution for 4 seconds
      for (let i = 0; i < times; i++) {
        sleep(200);
        robot.moveMouse(xValue, yValue);
        robot.mouseClick();
        robot.typeString(ans);
        robot.keyTap('enter');
      }
    }
  }

  confirmPOS();
  console.log("finishCheckBOT");
  sleep(2000);
}
```
uriahn
1,891,212
From Noise to Art: Building Your First Generative Adversarial Network
I was introduced to this splendid machine learning idea known as Generative Adversarial Networks...
0
2024-06-17T13:00:06
https://dev.to/yuval728/from-noise-to-art-building-your-first-generative-adversarial-network-472o
python, ai, machinelearning, programming
I was recently introduced to Generative Adversarial Networks (GANs), a splendid machine learning idea that shines especially in image generation. GANs were introduced by Ian Goodfellow in 2014, and their underlying architecture is built around a competition between two neural networks. In this blog, I will first introduce what a GAN is, and then walk through TensorFlow code for training a simple GAN.

![Architecture](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i0yu4jhpe12gnmby1hst.png)

**What are GANs?**

At its core, a GAN consists of two neural networks: a generator that produces fake data, and a discriminator that learns to distinguish the fake data from the real thing.

- Generator: Takes random noise as input and transforms it into output data that resembles the patterns of the training data set.
- Discriminator: Takes an input sample and tries to guess whether it was drawn from the training data or synthesized by the generator.

These two networks are trained simultaneously in a zero-sum game: the generator tries to fool the discriminator into believing its output is real, while the discriminator tries to tell real data from fake data.

![Example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m0v4kkhn0e8r2l53dszq.png)

**Step-by-Step Guide to Building a Simple GAN**

Step 1: Setting Up the Environment

`pip install tensorflow`

Step 2: Import Necessary Libraries

```
import tensorflow as tf
from tensorflow.keras import layers
import numpy as np
import matplotlib.pyplot as plt
```

Step 3: Define the Generator

The generator network takes a randomly sampled noise vector and maps it to a data point that looks like the actual training data.
``` def build_generator(): model = tf.keras.Sequential() model.add(layers.Dense(8*8*128, use_bias=False, input_shape=(100,))) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) model.add(layers.Reshape((8, 8, 128))) assert model.output_shape == (None, 8, 8, 128) # Note: None is the batch size model.add(layers.Conv2DTranspose(128, (5, 5), strides=(1, 1), padding='same', use_bias=False)) assert model.output_shape == (None, 8, 8, 128) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) model.add(layers.Conv2DTranspose(128, (5, 5), strides=(2, 2), padding='same', use_bias=False)) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) assert model.output_shape == (None, 16, 16, 128) model.add(layers.Conv2DTranspose(128, (5, 5), strides=(2, 2), padding='same', use_bias=False)) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) assert model.output_shape == (None, 32, 32, 128) model.add(layers.Conv2DTranspose(128, (5, 5), strides=(2, 2), padding='same', use_bias=False)) model.add(layers.BatchNormalization()) model.add(layers.LeakyReLU()) assert model.output_shape == (None, 64, 64, 128) model.add(layers.Conv2DTranspose(3, (5, 5), strides=(2, 2), padding='same', use_bias=False, activation='tanh')) print(model.output_shape) return model generator = build_generator() generator.summary() ``` Step 4: Define the Discriminator The discriminator network will take an input sample and classify it as real ``` def build_discriminator(): model = tf.keras.Sequential() model.add(layers.Conv2D(64, (5, 5), strides=(2, 2), padding='same', input_shape=[128, 128, 3])) model.add(layers.LeakyReLU()) model.add(layers.Dropout(0.3)) model.add(layers.Conv2D(128, (5, 5), strides=(2, 2), padding='same')) model.add(layers.LeakyReLU()) model.add(layers.Dropout(0.3)) model.add(layers.Conv2D(256, (5, 5), strides=(2, 2), padding='same')) model.add(layers.LeakyReLU()) model.add(layers.Dropout(0.3)) model.add(layers.Flatten()) model.add(layers.Dense(1)) return model discriminator = build_discriminator() discriminator.summary() ``` Step 5: Test the models ``` noise = tf.random.normal([1,100]) generated_image = generator(noise,training=False) print(discriminator(generated_image)) plt.imshow(generated_image[0]*127.5+127.5) ``` Step 6: Setup loss function and optimizer ``` cross_entropy=BinaryCrossentropy(from_logits=True) ``` ``` def discriminator_loss(real_output,fake_output): real_loss = cross_entropy(tf.ones_like(real_output),real_output) fake_loss = cross_entropy(tf.zeros_like(fake_output),fake_output) total_loss = real_loss + fake_loss return total_loss def generator_loss(fake_output): return cross_entropy(tf.ones_like(fake_output),fake_output) ``` ``` generator_optimizer = tf.keras.optimizers.Adam(1e-4) discriminator_optimizer = tf.keras.optimizers.Adam(1e-4) ``` Step 7: Setup checkpoint ``` checkpoint_dir = 'training_checkpoints' checkpoint_prefix = os.path.join(checkpoint_dir,'ckpt') checkpoint = tf.train.Checkpoint(generator_optimizer=generator_optimizer, discriminator_optimizer=discriminator_optimizer, generator=generator, discriminator=discriminator) ``` Step 8: Defining train step ``` @tf.function def train_step(images): noise=tf.random.normal([batch_size,noise_dims]) with tf.GradientTape() as gen_tape, tf.GradientTape() as dis_tape: generated_images=generator(noise,training=True) real_output=discriminator(images,training=True) fake_output=discriminator(generated_images,training=True) gen_loss=generator_loss(fake_output) 
        disc_loss=discriminator_loss(real_output,fake_output)

    gen_gradients=gen_tape.gradient(gen_loss,generator.trainable_variables)
    dis_gradients=dis_tape.gradient(disc_loss,discriminator.trainable_variables)

    generator_optimizer.apply_gradients(zip(gen_gradients,generator.trainable_variables))
    discriminator_optimizer.apply_gradients(zip(dis_gradients,discriminator.trainable_variables))

    return gen_loss,disc_loss
```

Step 9: Setting up the training loop and saving generated images

```
from IPython import display
import time

total_gloss=[]
total_dloss=[]

def train(dataset,epochs):
    for epoch in range(epochs):
        disc_loss=gen_loss=0
        start=time.time()
        count=0
        for batch in dataset:
            losses=train_step(batch)
            count+=1
            disc_loss+=losses[1]
            gen_loss+=losses[0]
        total_gloss.append(gen_loss.numpy())
        total_dloss.append(disc_loss.numpy())
        if (epoch+1)%50==0:
            checkpoint.save(file_prefix=checkpoint_prefix)
        display.clear_output(wait=True)
        generate_and_save_output(generator,epoch+1,seed)
        print(f'Time for epoch {epoch + 1} is {time.time()-start}')
        print(f'Gloss: {gen_loss.numpy()/count} , Dloss: {disc_loss.numpy()/count}',end='\n\n')
    display.clear_output(wait=True)
    generate_and_save_output(generator,epochs,seed)
```

```
def generate_and_save_output(model,epoch,test_input):
    predictions = model(test_input,training=False)
    fig = plt.figure(figsize=(4,4))
    for i in range(predictions.shape[0]):
        plt.subplot(4,4,i+1)
        plt.imshow((predictions[i]*127.5+127.5).numpy().astype(np.uint8),cmap='gray')
        plt.axis('off')
    plt.savefig(f'image_at_epoch_{epoch}.png')
    plt.show()
```

Step 10: Train the GAN

Let's train our GAN. I have used a dog image dataset, which is available on Kaggle: [Stanford dog dataset](https://www.kaggle.com/datasets/jessicali9530/stanford-dogs-dataset)

```
EPOCHS = 500
noise_dims = 100
num_egs_to_generate = 16
seed = tf.random.normal([num_egs_to_generate,noise_dims])

train(train_images,EPOCHS)
```

_Note: To generate good-quality images, the model would require a large number of epochs._

![Image at epoch 500](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wctawbgiawgclxx1ud3d.png)

Trying our model:

```
new_image = generator(tf.random.normal([1,100]),training=False)
plt.imshow((new_image[0]*127.5+127.5).numpy().astype(np.uint8))
```

![Generated image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tbith61tukcjnrcf2g44.png)

**Conclusion**

GANs are useful for producing realistic data because they learn the distribution of the training data (no labels required) and then create new samples from it. Building a sensible GAN is feasible with the steps above, and training makes the push-and-pull between the generator and the discriminator very visible. This guide aims only to introduce the subject and offer a first taste of what is possible in this burgeoning research area.

**Resources:**

[Ian Goodfellow's Original Paper](https://arxiv.org/abs/1406.2661)
[TensorFlow Documentation](https://www.tensorflow.org/tutorials/generative/dcgan)
[My Github Repo](https://github.com/yuval728/Dog-image-generation)

Feel free to ask questions or share your GAN projects in the comments below!
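One loose end from Step 7: the checkpoints saved during training are never loaded back. A minimal sketch of restoring the latest one, reusing the `checkpoint` and `checkpoint_dir` objects defined above:

```
# Restore the most recent checkpoint saved during training.
checkpoint.restore(tf.train.latest_checkpoint(checkpoint_dir))

# The restored generator can now produce images without retraining.
new_image = generator(tf.random.normal([1, 100]), training=False)
plt.imshow((new_image[0] * 127.5 + 127.5).numpy().astype(np.uint8))
plt.show()
```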
yuval728
1,891,208
Boosting React app performance
Boosting performance of react application
0
2024-06-17T12:58:58
https://dev.to/pradeep3/boosting-react-app-performance-13k4
react, javascript, beginners, performance
--- title: Boosting React app performance published: true description: Boosting performance of react application tags: react, javascript, beginners, performance cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7vtfbjtql15a0n08llxs.png # Use a ratio of 100:42 for best results. # published_at: 2024-06-17 12:38 +0000 --- React is a leading library for building dynamic and interactive web applications. As your React application scales, maintaining performance becomes increasingly crucial. Here are some proven techniques to enhance the performance of your React applications. ##### Leverage React’s Native Optimizations #### Memoization with `React.memo` `React.memo` is a higher-order component that optimizes functional components by preventing unnecessary re-renders. It achieves this by performing a shallow comparison of props. ``` import React from 'react'; const MyComponent = React.memo((props) => { return <div>{props.value}</div>; }); ``` --- #### Optimize with `useMemo` and `useCallback` * **useMemo**: Cache expensive calculations to avoid recalculating on each render. * **useCallback**: Cache function references to prevent unnecessary re-creations. ``` import React, { useMemo, useCallback } from 'react'; const MyComponent = ({ items }) => { const total = useMemo(() => items.reduce((sum, item) => sum + item, 0), [items]); const handleClick = useCallback(() => { // Event handler logic }, []); return ( <div onClick={handleClick}> {total} </div> ); }; ``` --- #### Implement Code Splitting Reduce initial load times by splitting your code into manageable chunks that are loaded on demand. #### Dynamic Imports ``` import React, { Suspense } from 'react'; const LazyComponent = React.lazy(() => import('./LazyComponent')); const MyComponent = () => ( <Suspense fallback={<div>Loading...</div>}> <LazyComponent /> </Suspense> ); ``` #### Using React Loadable ``` import Loadable from 'react-loadable'; const LoadableComponent = Loadable({ loader: () => import('./LazyComponent'), loading: () => <div>Loading...</div>, }); const MyComponent = () => <LoadableComponent />; ``` --- #### Optimize Rendering #### Avoid Inline Functions Inline functions can cause unwanted re-renders due to new references being created on each render. ``` // Inefficient <div onClick={() => handleClick()}>Click me</div> // Efficient const handleClick = () => { // Logic here }; <div onClick={handleClick}>Click me</div>; ``` #### Utilize `PureComponent` and `shouldComponentUpdate` In class components, use `PureComponent` or `shouldComponentUpdate` to prevent unnecessary updates. ``` import React, { PureComponent } from 'react'; class MyComponent extends PureComponent { render() { return <div>{this.props.value}</div>; } } ``` --- #### Effective State Management #### Lift State Up Consolidate state to the closest common ancestor to minimize redundant prop drilling and re-renders. #### Use Context API Wisely While React's Context API is powerful, it can introduce performance issues if misused. Avoid frequent updates to context values and consider memoizing context values. --- #### Optimizing Lists and Tables #### Virtualization For large lists or tables, use libraries like `react-window` or `react-virtualized` to render only the visible items. 
``` import { FixedSizeList as List } from 'react-window'; const MyList = ({ items }) => ( <List height={500} itemCount={items.length} itemSize={35} width={300} > {({ index, style }) => ( <div style={style}> {items[index]} </div> )} </List> ); ``` #### Use Stable Keys Ensure each list item has a unique, stable key to help React track items and minimize re-renders. ``` const items = ['apple', 'banana', 'cherry']; items.map((item) => <div key={item}>{item}</div>); ``` --- #### Optimize Asset Loading #### Lazy Load Images Use libraries like `react-lazyload` to defer loading of images until they are needed. ``` import LazyLoad from 'react-lazyload'; const MyComponent = () => ( <LazyLoad height={200}> <img src="path/to/image.jpg" alt="description" /> </LazyLoad> ); ``` #### Compress and Optimize Images Reduce image sizes using tools like `ImageOptim`, `TinyPNG`, or using WebP format for faster loading. #### Use Production Builds Ensure your application runs in production mode, which enables optimizations and minification for better performance. ``` # Create a production build npm run build # Serve the build serve -s build ``` #### Conclusion Optimizing React applications for performance involves leveraging React’s built-in tools and following best practices. By applying these strategies, you can significantly enhance your app’s responsiveness and efficiency, ensuring a smooth experience for your users.
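To make the earlier advice on memoizing context values concrete, here is a minimal sketch; `ThemeContext` and its value shape are illustrative, not from any specific library:

```
import React, { createContext, useContext, useMemo, useState } from 'react';

// Illustrative context for the sketch.
const ThemeContext = createContext(null);

const ThemeProvider = ({ children }) => {
  const [theme, setTheme] = useState('light');

  // Memoize the context value so consumers re-render only when
  // `theme` changes, not on every render of the provider.
  const value = useMemo(() => ({ theme, setTheme }), [theme]);

  return (
    <ThemeContext.Provider value={value}>
      {children}
    </ThemeContext.Provider>
  );
};

const ThemedLabel = () => {
  const { theme } = useContext(ThemeContext);
  return <span>Current theme: {theme}</span>;
};
```

Without the `useMemo`, a fresh object would be created on every provider render, forcing every consumer to re-render even when `theme` has not changed.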
pradeep3
1,891,210
How Artificial Intelligence (AI) is Reshaping the World of Programming
As we navigate through the digital age, it's undeniable that Artificial Intelligence (AI) is...
0
2024-06-17T12:58:33
https://dev.to/lelycomfort/how-artificial-intelligence-ai-is-reshaping-the-world-of-programming-243b
ai, coding, programming
As we navigate through the digital age, it's undeniable that Artificial Intelligence (AI) is dramatically altering the fabric of programming. In my experience, I see that AI is not just a tool but a revolutionary force, optimizing how we code and enhancing the capabilities of programmers worldwide. For professionals in the field, recognizing and adapting to AI's influence is crucial for staying relevant and competitive. AI in programming goes beyond simple automation; it's about creating smarter environments that can learn and adapt. I feel it's essential for programmers to understand AI's role in their daily work to harness its potential effectively. From my job perspective, embracing AI's capabilities can transform challenges into opportunities, paving the way for innovative solutions that were once deemed impossible. Engaging with AI is no longer a choice but a necessity for those looking to lead in the tech-driven future. ## Advancements in Programming Languages Driven by AI Programming languages have undergone significant transformations to accommodate the evolving demands of technology and developers. Initially, we dealt with basic machine and assembly languages, which required intricate understanding and were less forgiving. As technology advanced, high-level languages like C++, Java, and Python emerged, bringing more abstraction and simplicity, which made programming more accessible and efficient. AI is now pushing the envelope further by influencing the development of new programming languages that are even more intuitive and user-friendly. For instance, with AI's integration, natural language processing (NLP) algorithms are being developed to allow programming in plain English or other natural languages. I see that this shift is particularly beneficial, as it reduces the barrier to entry for new programmers and simplifies complex tasks. By transforming how we approach the creation of programming languages, AI is making coding a more inclusive skill accessible to a broader audience. ## AI-Powered Programming Tools and Platforms The integration of AI into programming tools and platforms is revolutionizing how developers approach their work, making tedious tasks more manageable and enhancing productivity. AI-powered tools have become integral in automating routine tasks, offering intelligent suggestions, and boosting efficiency. In my job, I've observed how these tools can significantly reduce the time spent on manual coding, allowing programmers to focus on more strategic and creative aspects of development. For example, code completion tools utilize machine learning algorithms to predict and suggest the next line of code based on the existing code context. This not only speeds up the coding process but also minimizes errors. Another example is bug detection tools, which leverage AI to analyze code for potential bugs or vulnerabilities. With AI, it's easy to maintain high code quality and security standards, fundamentally changing how we develop software. These tools exemplify how AI can [writer essay for you](https://papertyper.net/) or handle complex programming tasks, demonstrating AI's potential to enhance human capabilities rather than replace them. ## The Rise of AI-Driven Code Generation Automated code generation, powered by AI, is rapidly becoming a game-changer in the programming world. This technology allows for the generation of code from high-level specifications provided by developers, which can drastically reduce the time and effort required to build software. 
In my own work, I've seen how this can streamline the development process, making it more efficient and less prone to human error. AI algorithms in automated code generation can produce code that is often more accurate and reliable than manually written code. This can lead to enhanced software quality and stability, which are critical in high-stakes environments. However, this technology is not without its challenges. One significant concern is the potential loss of control over the coding process. Developers might find it difficult to understand or modify the automatically generated code, which can complicate debugging and maintenance tasks. Additionally, for projects that require highly customized solutions, automated code generation might not always provide the necessary flexibility. Despite these drawbacks, the benefits of AI in code generation are compelling, particularly for standard applications where speed and efficiency are paramount. ## The Role of AI in Software Testing and Debugging Artificial Intelligence is significantly transforming the landscape of software testing and debugging, making these processes more efficient and less labor-intensive. AI-powered tools in this area automate routine testing tasks and optimize the debugging process, which in turn enhances the overall quality and reliability of software products. From my experience, these advancements are crucial as they allow developers to allocate more time to more complex and impactful development tasks. For instance, AI can automatically generate test cases based on the application's requirements and past test data. This capability not only speeds up the testing cycle but also ensures thorough coverage that might be overlooked by human testers. Moreover, AI-powered debugging tools can intelligently analyze code to pinpoint potential issues, offering solutions or even fixing bugs automatically. This level of automation in testing and debugging can dramatically reduce the time developers spend on these tasks, enabling a focus on innovation and improvement. The question of "AI replace humans?" in this context transforms into how AI complements human efforts by handling repetitive and precise tasks, allowing programmers to engage more deeply with the creative and strategic aspects of their projects. ## AI-Enabled Predictive Analytics in Programming This technology uses historical data and machine learning algorithms to predict future outcomes, allowing developers to preemptively address potential issues before they become problematic. When i work, I've utilized [AI-driven predictive analytics](https://geniusee.com/single-blog/ai-and-predictive-analytics) to streamline project timelines and enhance code quality by identifying likely errors or inefficiencies early in the development process. AI algorithms can analyze vast amounts of data from code repositories and bug databases to detect patterns that might indicate future problems. For example, by recognizing frequent coding errors or anti-patterns, these systems can suggest modifications or flag areas of concern before the code goes into production. Additionally, predictive analytics can help estimate the necessary resources for a project, predict the likelihood of meeting deadlines, and identify potential bottlenecks. This capability not only improves operational efficiency but also supports better project management and decision-making. 
With AI, it's easy to harness the power of data to make informed predictions that keep software development agile and responsive to changing needs.
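As a purely illustrative sketch of what "defect-risk prediction" can look like at its simplest: the features and weights below are invented for demonstration, whereas a real predictive-analytics system would learn them from repository and bug-tracker history.

```cs
using System;

// Toy defect-risk scorer over file-history metrics (churn, past bugs).
// The weights are hypothetical placeholders, not a trained model.
public static class DefectRiskDemo
{
    // Logistic function squashes a weighted sum into a 0..1 "risk".
    static double Sigmoid(double x) => 1.0 / (1.0 + Math.Exp(-x));

    public static double RiskScore(int recentCommits, int pastBugs, int linesChanged)
    {
        // Imagine these coefficients were fit on historical project data.
        double z = -3.0 + 0.15 * recentCommits + 0.6 * pastBugs + 0.002 * linesChanged;
        return Sigmoid(z);
    }

    public static void Main()
    {
        // A frequently edited file with several past bugs scores as high risk.
        Console.WriteLine($"Risk: {RiskScore(recentCommits: 12, pastBugs: 4, linesChanged: 900):P0}");
    }
}
```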
lelycomfort
1,891,209
The Evolution and Impact of Omegle: A Digital Frontier for Random Social Interactions
Imagine a portal that whisks you away from the curated feeds and familiar faces of social media,...
0
2024-06-17T12:58:08
https://dev.to/kohensteve/the-evolution-and-impact-of-omegle-a-digital-frontier-for-random-social-interactions-13n1
Imagine a portal that whisks you away from the curated feeds and familiar faces of social media, depositing you face-to-face (or screen-to-screen) with a complete stranger. A stranger who could be a seasoned traveler with tales from exotic lands, a passionate artist eager to share their work, or simply someone looking for a laugh. This is the thrilling uncertainty that lies at the heart of [bazoocam](https://bazo-cam.online/), a platform that revolutionized online connection by offering a gateway to spontaneous social interactions with random strangers from across the globe. Launched in 2009, Omegle's anonymity and unpredictability fueled a wave of curiosity, fostering a unique space for fleeting conversations, unexpected friendships, and glimpses into a world beyond our own social circles.

### The Appeal of Anonymity

One of the primary reasons for Omegle's success is its emphasis on anonymity:

- **Unveiling Personalities:** Explore different facets of your personality and engage in conversations you might otherwise avoid in real life.
- **Open Communication:** Discuss sensitive topics, share personal stories, and seek advice without fear of judgment.

However, anonymity also had its drawbacks. The lack of accountability sometimes resulted in inappropriate behavior, harassment, and the sharing of explicit content. To address these issues, Omegle implemented various moderation tools and features.
kohensteve
1,891,207
Beginner's Guide to Test Coverage with NUnit, Coverlet, and ReportGenerator
Hi guys, today I am going to show a quick and simple way to get started with Test Coverage in your...
0
2024-06-17T12:57:08
https://dev.to/bigboybamo/beginners-guide-to-test-coverage-with-nunit-coverlet-and-reportgenerator-3fde
csharp, dotnet, beginners, webdev
Hi guys, today I am going to show a quick and simple way to get started with test coverage in your code.

## What is Test Coverage?

Test coverage is the percentage of code that is exercised by automated tests. That is, it measures the extent to which a codebase is **covered** by the tests we write.

Prerequisites for this tutorial: basic knowledge of C# and unit testing.

Requirements:

1. Visual Studio - Powerful IDE for .NET and C++ development on Windows.
2. NUnit - Unit testing framework for .NET. More [here](https://github.com/nunit/nunit)
3. Coverlet - Cross-platform code coverage framework for .NET. More [here](https://github.com/coverlet-coverage/coverlet)
4. ReportGenerator - Powerful code coverage visualization tool. More [here](https://github.com/danielpalme/ReportGenerator)

Let's set up our test application. Create a new console application using .NET Framework.

![Create console app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2guglcp6prddpuqlul7c.png)

Give your project the name **TestCoverageDemo**. Then select .NET Framework 4.7.2 as the target framework and click **Create**.

Great. Now, in your application, add a simple class called `Calculator` with the following code:

```cs
public class Calculator
{
    public static int Add(int number)
    {
        int sum = 20;
        return sum + number;
    }
}
```

This class contains a simple implementation that takes a number, adds 20 to it, and returns the result. Simple enough; let's write a unit test for the implementation.

Create a new project by right-clicking on your solution file and selecting Add -> New project, then search for NUnit.

![Configure Nunit project](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jtr7wyza2q72ab152spb.png)

You can leave the default NUnit project name of **TestProject1**. Click Next, select .NET Framework 4.7.2 as the target framework, and click Create. After creating it, your solution should look like this:

![Project Structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5l1ukvsx2rlfcdwy21ws.png)

Next, add a reference to the main `TestCoverageDemo` project from the test project.

Now, add the following code to the class `UnitTest1`:

```cs
public class Tests
{
    [SetUp]
    public void Setup()
    {
    }

    [Test]
    public void Test1()
    {
        // Arrange
        int number = 2;
        int expected = 22;

        // Act
        int actual = Calculator.Add(number);

        // Assert
        Assert.That(actual, Is.EqualTo(expected));
    }
}
```

Your class should look something like this:

![Class structure](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s3pi7z3bh12j9yhw2f7e.png)

Now, right-click on the test name and select **Run Tests**. After running the test, we see that it passes:

![Passing test](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n62czokd5gxbtlwiztie.png)

Next, install **Coverlet** by opening a command line in your test project directory and running this command:

```
dotnet add package coverlet.collector
```

Next, create a file in the test project directory called `coverlet.runsettings`, and add this configuration:

```xml
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="XPlat Code Coverage">
        <Configuration>
          <ExcludeByFile>**/Program.cs</ExcludeByFile>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```

We're simply adding this so that Program.cs does not get in the way of our demo (it would otherwise show up as uncovered code).
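As an aside, the runsettings file accepts more coverlet options than the one we use here. The sketch below shows a few of the configuration keys documented for coverlet's VSTest integration (verify against the coverlet docs for your version before relying on them):

```xml
<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="XPlat Code Coverage">
        <Configuration>
          <!-- Output format of the raw coverage file -->
          <Format>cobertura</Format>
          <!-- Exclude files by path glob, as in our demo -->
          <ExcludeByFile>**/Program.cs</ExcludeByFile>
          <!-- Exclude members decorated with these attributes -->
          <ExcludeByAttribute>Obsolete,GeneratedCodeAttribute,CompilerGeneratedAttribute</ExcludeByAttribute>
          <!-- Exclude assemblies/types using [Assembly]Type filter expressions -->
          <Exclude>[*.Tests]*</Exclude>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>
```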
Then, while still in the test project directory, run this command:

```
dotnet test --collect:"XPlat Code Coverage" --settings coverlet.runsettings
```

This will run your tests and generate a `coverage.cobertura.xml` file in the **TestResults** folder. This file is hard to read, so we need the ReportGenerator tool to visualize it.

To install ReportGenerator, run this command:

```
dotnet tool install -g dotnet-reportgenerator-globaltool
```

Next, run this command (each test run creates a new GUID-named folder under **TestResults**, so substitute the path from your own latest run):

```
reportgenerator -reports:"TestCoverageDemo\TestProject1\TestResults\e36a31dc-6f0e-410a-a860-cc32118ec3a8\coverage.cobertura.xml" -targetdir:"coverageresults" -reporttypes:Html
```

This command will go to the location of the **coverage.cobertura** file, parse it, and present the result as a viewable HTML site in the target directory we specified: `coverageresults`.

Navigating to the test project folder, we see that the folder `coverageresults` has been generated:

![Coverage folder generated](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ruz81vviwg5nbdm4wj5.png)

Navigate into the `coverageresults` folder and click on **index**. The coverage results are presented in a readable way, and we have 100% coverage:

![Coverage results](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30ma6icol3ukxwcp6lw2.png)

Scrolling down, we see a better breakdown of what is covered:

![Coverage Breakdown](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cle3v26psi9q2w6anrzz.png)

Clicking on TestCoverageDemo.Calculator, we see that we have 100% coverage and that all the lines of code we've written are covered by the unit test.

![Line Coverage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rwd1akttmdbdrl2uy1vk.png)

Great. Let's make some modifications to the Add method in the Calculator class by replacing it with this code:

```cs
public static int Add(int number)
{
    int sum;
    if (number % 2 == 0)
    {
        sum = 20;
    }
    else
    {
        sum = 30;
    }
    return sum + number;
}
```

Now, in your Test Explorer, run the test `Test1` again. We see that the test still passes as expected:

![Test passes after changes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v4vdxpttk7kes5ky2hg8.png)

Now, let's recalculate the test coverage. Run these two commands in the test project directory again (remember to update the GUID folder in the report path to the newest one):

```
dotnet test --collect:"XPlat Code Coverage" --settings coverlet.runsettings
```

```
reportgenerator -reports:"TestCoverageDemo\TestProject1\TestResults\e36a31dc-6f0e-410a-a860-cc32118ec3a8\coverage.cobertura.xml" -targetdir:"coverageresults" -reporttypes:Html
```

Now, navigate to the coverageresults folder once more and click the index:

![Coverage has reduced](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/diqkwkhxs572vox7zm07.png)

We see that the coverage has dropped from 100% to 70%. If we investigate further by checking the TestCoverageDemo.Calculator class, we can see why:

![Breakdown of coverage after changes](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8q0mwvp2pvcj4o6rjbzf.png)

The `else` block is not covered because we have not created any scenario where that part of the code is exercised. Let's fix that. Add this test to the UnitTest1 class (note the `[Test]` attribute, which NUnit needs in order to discover the method):

```cs
[Test]
public void Test2()
{
    // Arrange
    int number = 3;
    int expected = 33;

    // Act
    int actual = Calculator.Add(number);

    // Assert
    Assert.That(actual, Is.EqualTo(expected));
}
```

Run the test and see that it passes. Now run the two commands to generate the test coverage once more, then navigate to coverageresults and click **index** once more.
![Coverage after changing tests](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0baq6viljdo5rcfvts6u.png)

We see that the test coverage is back to 100%. Checking TestCoverageDemo.Calculator reveals a further breakdown:

![Breakdown of test coverage by line](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xuoelwa1t9jgh334uebp.png)

And we see that the new code we introduced is now covered by the tests as well.

This was a very brief introduction to test coverage in C# applications. You can check out the tools we used in this tutorial and apply them in your own projects as well. Happy coding!
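One last optional refinement before you go: instead of duplicating near-identical test methods, NUnit's `[TestCase]` attribute lets a single parameterized test cover both branches. A minimal sketch, equivalent to Test1 and Test2 above (the method name is just an illustrative choice):

```cs
[TestCase(2, 22)]  // even input: hits the if branch (sum = 20)
[TestCase(3, 33)]  // odd input: hits the else branch (sum = 30)
public void Add_ReturnsExpectedSum(int number, int expected)
{
    int actual = Calculator.Add(number);

    Assert.That(actual, Is.EqualTo(expected));
}
```

Either style yields the same 100% coverage; the parameterized form simply scales better as you add more input scenarios.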
bigboybamo
1,891,096
Information Classification in Microsoft Purview: A Step-by-Step Guide
Microsoft Purview offers a robust framework for information classification and protection, enabling...
0
2024-06-17T12:54:14
https://dev.to/borisgigovic/information-classification-in-microsoft-purview-a-step-by-step-guide-167k
informationgovernance, informationsecurity, datasecurity, dataprotection
Microsoft Purview offers a robust framework for information classification and protection, enabling organizations to categorize and secure data across various services and platforms. This guide delves into the process of creating labels and classification policies within Microsoft Purview, ensuring your data is managed and protected efficiently.

## Understanding Information Classification in Microsoft Purview

Information classification in Microsoft Purview involves categorizing data based on its sensitivity and the level of protection it requires. This process is facilitated by creating labels that can be applied to documents and emails, which help in identifying, classifying, and protecting data throughout its lifecycle.

## The configuration steps

Note: For this guide, we will use the new Purview console, which consolidates compliance globally at the Microsoft 365 tenant level, covering Microsoft 365 and Azure, and offering the possibility to connect external sources for compliance.

Let's begin!

1 - Browse to [https://purview.microsoft.com/](https://purview.microsoft.com/), and log in with your credentials.

2 - If the new Microsoft Purview screen shows, accept the terms, then click on **Try now** located at the bottom of the screen (it will become clickable):

![Step2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hkhgeoxw3bvqmvdzp7kb.png)

3 - Click on **Information Protection**:

![Step3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/af8wdpj4tiabr7r22aqg.png)

4 - Click on **Sensitivity labels** located on the left menu:

![Step4](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mmzk1vpflm8cklquwszq.png)

5 - This is the main page for configuring sensitivity labels. Labels are applied to individual items to help with the classification process. Click on **+ Create a label**:

![Step5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dikn4mod6b60vpcenolc.png)

6 - At this stage, you must provide basic information about your sensitivity label: its name, the assignment priority, the description for users (a mandatory field that informs users, through their applications, what the label is about), as well as a separate description for administrators. Once complete, click on **Next**:

![Step6](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vkzxpaziwt1au2urqbvd.png)

7 - The next step allows us to choose the items to which the sensitivity label can be applied (files, emails, meetings with Teams and Outlook, and more). If you want to cover all data, you can select everything. The following screens will adapt to what is selected here, so that we can specify the settings for each individually. Once your selection is done, click on **Next**:

![Step7](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fpxnxhvbklkm7nqpv4o1.png)

8 - Now that we have selected items, we need to decide what kind of security will apply to them. As you can see, you can further control access to the information, use watermarks, and more. Click **Next** once your choice is made:

![Step8](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z93thtak35xbvez1udbx.png)

9 - This access control step involves selecting the users and groups that will be authorized to use the content to which the sensitivity label is applied. By clicking on **Assign permissions now**, you can make your selection. Click on **Save** (it will become clickable). Note that you can also set other values such as content expiration, offline access, and more.
Click on **Next** to continue (it will become clickable):

![Step9](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/83cr9yaupynf08s0ayp2.png)

10 - Since we selected watermarking earlier, we can now set up watermark text to be displayed on parts of your files. Enable the content marking, provide the values, click on **Save**, and then **Next** to continue:

![Step10](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cnz4d8yjdy6ojctoaqjw.png)

11 - Here, we can enable auto-labeling, a feature that automatically applies a label based on the content of the file or data itself. For example, it can inspect the format of the data to determine whether the label must be applied, without user intervention. We have set up a filter to match credit card numbers. Click on **Add** within the _Content contains_ filter, then _Sensitive info types_, and search for a data format (or leave the search blank to see them all) and make your selection:

![Step11](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xa364cnim2q36eyv6lqs.png)

12 - Provide a display message for users (recommended) and click on **Next** to continue:

![Step12](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pg6otaf7a2p72tuiqzg0.png)

13 - Here, you have the option of protecting Microsoft 365 groups, as well as SharePoint sites. Some prerequisites are required to make these items clickable, so in our case we will not make any changes here. Click **Next** to continue:

![Step13](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m982xl5hscj8pqfn1o3s.png)

14 - This section introduces new ways to secure information that lives outside of the Microsoft 365 SaaS, for example in Azure, AWS, and more. Click on **Next** at this step:

![Step14](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o3wlze2t5c4wiftd8nkp.png)

15 - Now we are ready to create the label. Review the settings configured in the wizard, and click on **Create label** to proceed:

![Step15](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4jih4pk3iss57wdmg38d.png)

16 - A success message will show once the label creation is completed. With it, you will find the option to create a policy that distributes the label to users automatically and applies it to existing content. This is accomplished by selecting the **Automatically apply label to sensitive content** radio button. Click on **Done** once your choice has been made:

![Step16](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6ux4odljg20wfdlvw7e3.png)

17 - Click on **Create policy** after you review the settings it will apply:

![Step17](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1hkf9gr28tc2y2nnlrkw.png)

18 - Click on **Close** to confirm that the creation of the policy has been completed:

![Step18](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7o8v5mruiklznim5gkro.png)

19 - Now, in the Sensitivity labels menu, you will find the label you created earlier, active and ready to use:

![Step19](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mmz6kv06i24xksfq82h.png)

20 - There will also be a policy that auto-applies it, located in the **Auto-labeling policies** menu on the left (under **Policies**):

![Step20](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4kqku19m32de9kl96s6u.png)

21 - By clicking on the policy, you will notice it is not yet ready to use, as it first scans for matching items.
This is a search across all matching content:

![Step21](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ifapq1f4540hanbvdob4.png)

22 - Once it has done its work, it will be in simulation mode, a sort of audit and read-only mode that lets you see the matching items, as well as enable the policy if you want:

![Step22](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jmga7mozpr4qht97mu5m.png)

Additionally, once the detection is completed, an email is sent to administrators informing them that the content is ready to be viewed:

![Step23](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3sstxe1ek8z1rlo05bld.png)

## Conclusion

Information classification in Microsoft Purview empowers organizations to manage and protect their data effectively, aligning with compliance requirements and security best practices. By creating labels and establishing classification policies, organizations can ensure that sensitive information is adequately protected while remaining accessible to authorized users. Embracing information classification in Microsoft Purview is a critical step toward enhancing your organization's data security and compliance posture. With [Eccentrix's expert training](https://www.eccentrix.ca/en/courses/microsoft/security/microsoft-certified-information-protection-and-compliance-administrator) on Purview, you are well prepared to navigate the complexities of data protection in the digital age, ensuring your organization's information assets are secure and well managed.
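As background on the credit-card filter used in step 11: sensitive info type detectors generally combine a digit pattern with a checksum test. The sketch below is a generic illustration of that idea using the well-known Luhn check; it is not Purview's actual detection logic, and the class and method names are invented for the example.

```cs
using System;
using System.Linq;
using System.Text.RegularExpressions;

// Generic illustration of how a "credit card number" detector can work:
// a digit pattern plus the Luhn checksum. Not Purview's implementation.
public static class CardNumberDemo
{
    // 13-19 digits, optionally separated by spaces or hyphens.
    static readonly Regex CandidatePattern = new Regex(@"\b(?:\d[ -]?){13,19}\b");

    // Standard Luhn check: double every second digit from the right,
    // subtract 9 when the result exceeds 9, and require a sum divisible by 10.
    static bool PassesLuhn(string digits)
    {
        int sum = 0;
        bool doubleIt = false;
        for (int i = digits.Length - 1; i >= 0; i--)
        {
            int d = digits[i] - '0';
            if (doubleIt) { d *= 2; if (d > 9) d -= 9; }
            sum += d;
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0;
    }

    public static bool LooksLikeCardNumber(string text)
    {
        foreach (Match m in CandidatePattern.Matches(text))
        {
            string digits = new string(m.Value.Where(char.IsDigit).ToArray());
            if (PassesLuhn(digits)) return true;
        }
        return false;
    }

    public static void Main()
    {
        // 4111 1111 1111 1111 is a well-known Luhn-valid test number.
        Console.WriteLine(LooksLikeCardNumber("card: 4111 1111 1111 1111"));
    }
}
```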
borisgigovic
1,891,206
Unity Tutor
Hi guys I am looking to run some small online group classes in Minecraft to children and young...
0
2024-06-17T12:52:53
https://dev.to/summer24/unity-tutor-58gn
Hi guys, I am looking to run some small online group classes in Minecraft for children and young people (beginner classes) over the UK summer holidays, and I wondered if there is anyone on here who might be interested in running some of these. Drop a comment if this is something you would like to find out more about. Thanks, Nikki
summer24