id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,893,076 | 7 Captivating C++ Programming Challenges from LabEx 🧠 | The article is about a curated collection of seven captivating C++ programming challenges from the LabEx platform. It covers a diverse range of topics, including private inheritance, arithmetic operations, book borrowing permutations, digit sum calculations, character-to-integer conversion, prime number finding, and using lambda functions. The article provides a concise overview of each challenge, highlighting the key concepts and skills involved, and includes direct links to the LabEx labs for readers to dive deeper into the exercises. Whether you're a beginner looking to hone your C++ skills or an experienced programmer seeking new coding challenges, this article offers a compelling and interactive learning experience. | 27,769 | 2024-06-19T04:20:27 | https://dev.to/labex/7-captivating-c-programming-challenges-from-labex-oip | coding, programming, tutorial |
Dive into the world of C++ programming with this curated collection of seven engaging challenges from the LabEx platform. These hands-on exercises cover a wide range of topics, from private inheritance and arithmetic operations to permutations and prime number calculations. 💻 Whether you're a beginner looking to hone your skills or an experienced programmer seeking new challenges, this article has something for everyone.
## Implementing Private Inheritance 🔒
In this lab, you'll learn the intricacies of private inheritance by creating a `Person` class with name and address variables, and then deriving an `Employee` class from `Person` in private mode. You'll need to initialize the `Employee` class attributes and create getter functions to retrieve the values. [Check out the challenge here.](https://labex.io/labs/114063)
## Arithmetic Operations in C++ 🧮
Unleash your calculator skills in this lab, where you'll create a program that uses a lambda function to perform basic arithmetic operations (addition, subtraction, multiplication, and division) with two integer inputs and a character input for the operator. The result will be neatly printed for you. [Dive into the challenge.](https://labex.io/labs/114069)
## All Possible Permutations for Borrowing Books 📚
In this challenge, you'll put your C programming prowess to the test by writing a program that displays all possible lending methods for 5 new books to 3 friends (A, B, and C), with each friend borrowing only one book at a time. The program should also count the total number of unique lending configurations. [Explore the challenge.](https://labex.io/labs/298166)
## Calculating Sum of Digits 🔢
Sharpen your problem-solving skills with this lab, where you'll create a program that takes an integer input and finds the sum of its digits using a while loop and division operations. 🧠 [Tackle the challenge.](https://labex.io/labs/114137)
## Converting Character to Integer 🔢
In this lab, you'll use the `static_cast` function to convert a character input to an integer and then print the result. 🔢 [Check out the challenge.](https://labex.io/labs/113958)
## Finding Prime Numbers Between Intervals 🔢
Put your number-crunching abilities to the test in this lab, where you'll create a function that finds all the prime numbers between two intervals and prints them. You'll need to check each number within the specified range for primality using a flag value, and then call this function for each number within the given range. 🔢 [Dive into the challenge.](https://labex.io/labs/114110)
## Using Lambda to Print Hello World 👋
In this lab, you'll write a lambda expression that prints "Hello World!" and store it in the `print_message` variable. Then, you'll call the `print_message()` function to see the magic happen. 🤩 [Explore the challenge.](https://labex.io/labs/114057)
Dive into these captivating C++ programming challenges and unlock your full potential as a coding enthusiast. Happy coding! 💻
---
## Want to learn more?
- 🌳 Learn the latest [C++ Skill Trees](https://labex.io/skilltrees/cpp)
- 📖 Read More [C++ Tutorials](https://labex.io/tutorials/category/cpp)
- 🚀 Practice thousands of programming labs on [LabEx](https://labex.io)
Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄 | labby |
1,893,075 | WHAT IS THE CONCEPT OF FIREWALL? | A FIREWALL is a network security device or software that monitors and controls incoming and outgoing... | 0 | 2024-06-19T04:18:57 | https://dev.to/aritra-iss/what-is-the-concept-of-firewall-4jjb | cschallenge, devchallenge, computerscience, beginners | A **FIREWALL** is a network security device or software that monitors and controls incoming and outgoing network traffic based on predetermined security rules, acting as a barrier between a trusted internal network and untrusted external networks, like the internet. | aritra-iss |
1,893,059 | One Byte Explainer: Recursion | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-19T03:53:17 | https://dev.to/david001/one-byte-explainer-recursion-240o | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Recursion is a technique where a function calls itself, allowing large complicated problems to be broken down into smaller, simpler problems. Since a recursive function can run indefinitely, a base case is used to terminate it.
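For illustration only (a minimal JavaScript sketch, not part of the 256-character explainer):

```javascript
// Recursive factorial: the function calls itself on a smaller input.
function factorial(n) {
  if (n <= 1) return 1;          // base case terminates the recursion
  return n * factorial(n - 1);   // recursive case: a smaller, simpler problem
}

console.log(factorial(5)); // 120
```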
| david001 |
1,893,065 | Custom Medical Bottles: Enhancing Brand Recognition and Patient Experience | Custom Medical Bottles: Enhancing Brand Recognition and Patient Experience You have in all... | 0 | 2024-06-19T03:49:27 | https://dev.to/jennifer_lewisg_4f56caf5f/custom-medical-bottles-enhancing-brand-recognition-and-patient-experience-5ha1 | design | Custom Medical Bottles: Enhancing Brand Recognition and Patient Experience
You have probably seen a number of medicine containers and vials if you have ever been to a hospital or a doctor's office. These containers not only keep medicine safe but also provide a great branding platform. Custom medical bottles can help enhance brand recognition, improve the patient experience, and offer many other benefits. Let's explore this subject in more detail.
Advantages of Choosing Custom Medical Bottles
The main benefit of custom medical bottles is that they provide an effective way to establish brand recognition. With the help of customized labeling and styling, medical providers and pharmaceutical companies can improve their brand image and build consumer loyalty. Custom medical bottles also help communicate crucial information such as the dosage, expiration date, and composition of the treatment.
Innovation in Custom Medical Containers
Custom medical containers have undergone significant innovation over the years. The use of advanced materials such as tempered glass, vinyl, and polymers makes them safe, durable, and long-lasting. The integration of smart technology such as sensors and RFID tags allows healthcare professionals to monitor the temperature, storage conditions, and stock status of the medicine.
Ensuring Safety with Custom Medical Containers
Safety is one of the critical aspects of medicine, and custom medical bottles can play a crucial role in ensuring it. With child-resistant caps and tamper-proof seals, customized medical bottles provide additional security for their contents, keeping them safe and secure. Clear labeling, warnings, and dosage instructions also ensure that patients and caregivers are fully aware of the medication's potential risks and benefits.
Utilizing Custom Medical Bottles
Custom medical containers are easy to use and require no special instructions. Healthcare providers can fill them with the required medicine and send them home with the patient. Patients can then conveniently store and manage their medicines with the help of the clear dosage and labeling directions printed on the bottles.
Service and Quality of Custom Medical Bottles
The quality of custom medical containers is very important, and companies that specialize in customizing medical bottles must ensure that product standards are met. Several factors such as the manufacturing process, the materials used, and design expertise need to be taken into account to ensure the final product meets the highest guidelines. Good customer support and timely delivery are also essential aspects of custom medical containers.
Applications of Custom Medical Bottles
Custom medical containers can be used by a variety of medical services, such as hospitals, clinics, pharmacies, and home healthcare providers. They can also be used for special needs such as pediatric treatments, veterinary medication, or rare and expensive medicines that require special storage conditions.
| jennifer_lewisg_4f56caf5f |
1,893,049 | Advanced CI/CD Pipeline Configuration Strategies | _Welcome Aboard Week 3 of DevSecOps in 5: Your Ticket to Secure Development Superpowers! Hey there,... | 27,560 | 2024-06-19T03:48:00 | https://dev.to/gauri1504/advanced-cicd-pipeline-configuration-strategies-4mjh | devops, devsecops, cloud, security |
_Welcome Aboard Week 3 of DevSecOps in 5: Your Ticket to Secure Development Superpowers!
Hey there, security champions and coding warriors!
Are you itching to level up your DevSecOps game and become an architect of rock-solid software? Well, you've landed in the right place! This 5-week blog series is your fast track to mastering secure development and deployment._
---
In today's fast-paced development landscape, continuous integration and continuous delivery (CI/CD) pipelines have become the cornerstone of efficient software delivery. They automate repetitive tasks like building, testing, and deploying code, enabling teams to deliver features and bug fixes faster and more reliably. But beyond the basic functionalities lies a world of advanced configurations that can unlock even greater efficiency and control. This blog delves deep into advanced CI/CD pipeline strategies, equipping you with the knowledge to build robust and scalable pipelines tailored to your specific needs.
## Deployment Strategies: Beyond Blue/Green
While blue/green deployments are a popular choice for minimizing downtime during updates, they're not the only option. Let's explore some advanced deployment strategies:
#### Blue/Green Deployments (In-Depth):
In a blue/green deployment, you maintain two identical production environments (blue and green). New code is deployed to the green environment first, undergoing rigorous testing. Once deemed stable, traffic is gradually shifted from the blue environment to the green environment, effectively replacing the old version. This approach minimizes downtime and allows for quick rollbacks if issues arise.

#### Canary Releases (Expanded):
Canary releases involve deploying a new version of the application to a small subset of users (the canary) first. This allows for real-world testing and monitoring before a full rollout. You can use advanced techniques like staged rollouts with percentage-based traffic shifting. Start by deploying the new version to a small percentage of users (e.g., 1%), gradually increase traffic as performance and stability are confirmed, and finally roll out to the entire user base. A/B testing can be integrated with canary releases to compare different application versions and gather user feedback before a full rollout.
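To make percentage-based traffic shifting concrete, here is a minimal JavaScript sketch of a weighted router (a toy illustration, not a production load balancer; the rollout percentages are assumptions):

```javascript
// Route a configurable fraction of requests to the canary version.
// In a staged rollout you would raise canaryPercent over time, e.g. 1 -> 10 -> 50 -> 100.
let canaryPercent = 1;

function routeRequest(requestId) {
  const toCanary = Math.random() * 100 < canaryPercent;
  return { requestId, target: toCanary ? "canary" : "stable" };
}

// Simulate ten requests at the current rollout stage.
for (let i = 0; i < 10; i++) {
  console.log(routeRequest(i));
}
```

In practice this weighting usually lives in the load balancer or service mesh rather than in application code, but the logic is the same.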

#### Red Hat Deployment Stack (OpenShift):
OpenShift is a container orchestration platform that provides built-in deployment functionalities. It can be integrated with CI/CD pipelines to leverage advanced deployment strategies like blue/green deployments and canary releases. OpenShift manages the scaling and health of containerized applications, simplifying deployment workflows.

## Infrastructure Provisioning in CI/CD Pipelines:
Automating infrastructure provisioning alongside deployments is a powerful practice. Here's how to achieve it:
#### Infrastructure as Code (IaC) Tools:
Popular IaC tools like Terraform, Ansible, or CloudFormation allow you to define infrastructure configurations as code. These configurations can be integrated with CI/CD pipelines, enabling automated provisioning and management of infrastructure resources (e.g., virtual machines, storage) during deployments.

#### Multi-Cloud Infrastructure Management:
Managing infrastructure across different cloud providers (multi-cloud) can be complex. IaC tools can help by defining cloud-agnostic configurations that can be adapted to different cloud providers with minimal changes. CI/CD pipelines integrated with multi-cloud IaC tools can automate infrastructure provisioning and deployments across various cloud environments.

## Security Considerations for IaC in Pipelines:
When using IaC, security is paramount. Secure practices include:
Using secrets management tools like HashiCorp Vault to securely store sensitive information (API keys, passwords) within IaC configurations.
Implementing access controls to restrict who can modify IaC configurations and provision resources.
Regularly scanning IaC configurations for vulnerabilities to prevent security breaches.

#### Feature Flags and Branch Toggling:
Feature flags are mechanisms that allow you to enable or disable specific features in your application at runtime. They can be integrated with CI/CD pipelines and Git branching strategies. For instance, you can deploy code for a new feature to a specific branch and use a feature flag to control its visibility to different environments or user groups through the CI/CD pipeline.
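As a minimal sketch of the idea, assuming the flag is fed by an environment variable (the flag name and checkout logic are hypothetical):

```javascript
// Feature flags decouple deployment from release: the code ships dark
// and is switched on at runtime, per environment or per user group.
const flags = {
  newCheckout: process.env.FEATURE_NEW_CHECKOUT === "true", // hypothetical flag
};

function checkout(cart) {
  if (flags.newCheckout) {
    return `new checkout flow for ${cart.items.length} items`; // branch-toggled feature
  }
  return `legacy checkout flow for ${cart.items.length} items`;
}

console.log(checkout({ items: [1, 2, 3] }));
```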

#### Continuous Delivery vs. Continuous Deployment (Deep Dive):
While often used interchangeably, continuous delivery and continuous deployment have subtle differences. Continuous delivery focuses on automating the entire build, test, and package pipeline up to a deployment-ready state; human intervention is typically required to approve and trigger deployments. Continuous deployment, on the other hand, automates the entire process, including deployments to production environments. This requires robust testing and validation within the pipeline to ensure only stable code reaches production. Choose continuous delivery for deployments requiring manual approval or higher-risk environments, and consider continuous deployment for frequent, low-risk deployments.
#### CI/CD for Serverless Applications:
Serverless functions are event-driven code snippets that execute on-demand in the cloud. Integrating CI/CD pipelines with serverless functions allows for automated deployment of these functions upon code changes. Consider using serverless frameworks like AWS Serverless Application Model (SAM) or Google Cloud Functions to simplify CI/CD workflows for serverless deployments.

## Monitoring and Performance Optimization
Here's how to ensure optimal performance and health of your CI/CD pipelines:
#### Monitoring CI/CD Pipelines:
Continuously monitor your CI/CD pipelines to identify bottlenecks and potential issues. Monitor metrics like:
#### Build time:
Track the average time it takes for builds to complete. Identify and address slow-running builds to improve overall pipeline efficiency.
#### Deployment duration:
Monitor the time it takes to deploy new code to production. Investigate and optimize deployments that take excessively long.
#### Error rates:
Track the frequency of errors occurring within the pipeline stages (build failures, test failures). Analyze errors to identify root causes and implement solutions to prevent them.
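To make these metrics concrete, here is a minimal sketch that times pipeline stages and counts failures (stage names and durations are invented for illustration; a real pipeline would export these to a metrics backend):

```javascript
// Wrap each pipeline stage to record its duration and any failures.
async function runStage(name, fn, metrics) {
  const start = Date.now();
  try {
    await fn();
  } catch (err) {
    metrics.errors[name] = (metrics.errors[name] || 0) + 1; // error-rate counter
    throw err;
  } finally {
    metrics.durations[name] = Date.now() - start; // build/deploy duration
  }
}

async function main() {
  const metrics = { durations: {}, errors: {} };
  await runStage("build", () => new Promise(r => setTimeout(r, 120)), metrics);
  await runStage("test", () => new Promise(r => setTimeout(r, 80)), metrics);
  console.log(metrics); // e.g. { durations: { build: 121, test: 81 }, errors: {} }
}

main();
```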
#### Metrics and Dashboards for CI/CD:
Utilize dashboards to visualize key metrics from your CI/CD pipeline. This allows for quick identification of trends and potential issues. Popular tools for CI/CD monitoring include Prometheus, Grafana, and Datadog.

#### Performance Optimization Techniques:
Implement strategies to optimize your CI/CD pipelines:
#### Caching:
Cache frequently used dependencies, build artifacts, and test results to reduce redundant downloads and improve build times.
#### Parallelization:
Break down pipeline stages into smaller tasks that can be executed concurrently to speed up builds and deployments.
#### Containerized builds:
Leverage containerization technologies like Docker to create isolated build environments, ensuring consistency and faster builds across different environments.
### CI/CD for Machine Learning (ML) Projects:
Integrating ML models and data pipelines with CI/CD workflows requires specific considerations. These include:
Automating training data versioning and management within the pipeline.
Integrating unit and integration tests for ML models to ensure their accuracy and functionality.
Automating model deployment and rollback procedures.
### CI/CD Security Best Practices:
Enforce security throughout your CI/CD pipeline:
Implement code signing to validate the integrity of code deployed through the pipeline.
Integrate vulnerability scanning tools to identify security flaws within code dependencies.
Enforce strict access controls to restrict who can trigger deployments and access sensitive resources within the pipeline.
### The Future of CI/CD:
Emerging trends in CI/CD include:
AI/ML integration for automated decision-making within the pipeline, such as optimizing resource allocation or predicting potential issues.
Self-healing pipelines that can automatically detect and recover from failures.
Integration with GitOps for declarative infrastructure management, leveraging Git as the source of truth for both code and infrastructure configurations.
## CI/CD Pipeline Configuration for Different Considerations
Beyond the core functionalities, CI/CD pipelines can be tailored to various development methodologies and project requirements:
### CI/CD for Microservices Architecture:
Microservices architectures involve breaking down applications into small, independent services. CI/CD pipelines for microservices need to support independent deployments and testing of these services. This might involve using techniques like containerization and service discovery to manage deployments and dependencies effectively.

#### CI/CD for Agile Development:
Agile development methodologies emphasize frequent code changes and iterations. CI/CD pipelines can be configured to support this by enabling rapid builds, automated testing, and quick deployments on every code commit.
#### CI/CD for Legacy Applications:
Integrating CI/CD practices with legacy applications can be challenging. It might involve a phased approach, gradually introducing automation for specific parts of the development lifecycle (e.g., unit testing) before transitioning to full CI/CD integration.
## Advanced Security Considerations:
#### Software Composition Analysis (SCA):
SCA tools integrate with CI/CD pipelines to scan code dependencies for known vulnerabilities. This allows you to identify and address potential security risks before deployments.

#### Secret Management and Vault Integration:
Securely manage secrets (API keys, passwords) used within the CI/CD pipeline by leveraging tools like HashiCorp Vault or cloud-based secrets managers. These tools provide secure storage and access control mechanisms for sensitive information.
#### Compliance and Regulatory Requirements:
CI/CD pipelines can be configured to meet specific compliance and regulatory requirements for your industry or security standards. This might involve implementing audit logging, enforcing access controls, and integrating with compliance scanning tools.
## CI/CD Pipeline Optimization for Scalability
As your project and deployments grow, so should your CI/CD pipeline's ability to handle increased workloads:
#### Horizontal Scaling with Container Orchestrators:
Container orchestration platforms like Kubernetes can be used to horizontally scale CI/CD pipelines by running multiple instances of pipeline agents across a cluster. This allows for parallel execution of tasks and improved performance under heavy workloads.

#### Caching Strategies for Improved Performance:
Implement caching throughout the pipeline to reduce redundant operations:
Cache build artifacts (compiled code) to avoid rebuilding them on every subsequent build if the source code hasn't changed.
Cache dependency downloads to avoid re-downloading them for each build.
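Building on the bullets above, here is a minimal sketch of content-hash keyed artifact caching (the file paths, cache layout, and build step are assumptions):

```javascript
const crypto = require("crypto");
const fs = require("fs");
const path = require("path");

// Key the cache by a hash of the source files: unchanged sources mean a cache hit.
function sourceHash(files) {
  const h = crypto.createHash("sha256");
  for (const f of files) h.update(fs.readFileSync(f));
  return h.digest("hex");
}

function buildWithCache(files, cacheDir, build) {
  const key = path.join(cacheDir, sourceHash(files));
  if (fs.existsSync(key)) return fs.readFileSync(key, "utf8"); // cache hit: skip the rebuild
  const artifact = build(files); // cache miss: run the real build
  fs.mkdirSync(cacheDir, { recursive: true });
  fs.writeFileSync(key, artifact); // store the artifact for the next run
  return artifact;
}

// Hypothetical usage: the "build" here just returns a fake bundle.
// const bundle = buildWithCache(["src/a.js"], ".build-cache", files => "bundle contents");
```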
#### Monitoring and Alerting for Pipeline Health:
Set up comprehensive monitoring and alerting systems to identify issues within the CI/CD pipeline. This might involve:
Monitoring resource utilization of the CI/CD infrastructure to identify potential bottlenecks.
Setting alerts for pipeline failures, slow builds, or errors to ensure timely intervention and troubleshooting.
## Emerging Trends in CI/CD
Stay ahead of the curve by exploring these emerging trends in CI/CD:
#### CI/CD for GitLab and GitHub Actions:
Both GitLab and GitHub offer built-in CI/CD functionalities. Utilize these features for automated deployments and code testing directly within your Git repositories.
#### Infrastructure as Code for Testing Environments:
Leverage IaC to provision and manage temporary testing environments within the CI/CD pipeline. This allows for efficient creation and destruction of testing environments as needed, reducing infrastructure overhead.
#### CI/CD for Data Pipelines:
Integrate data pipelines with CI/CD workflows to automate data testing, version control, and deployment alongside your application code. This ensures data pipelines are kept in sync with application changes and data quality is maintained.
### CI/CD for Disaster Recovery:
CI/CD pipelines can be used to automate disaster recovery workflows. By scripting infrastructure provisioning, application deployment, and data restoration procedures within the pipeline, you can expedite recovery times in case of outages or incidents.
### A/B Testing Integration with CI/CD:
Integrate A/B testing tools with CI/CD pipelines to facilitate controlled deployments and feature experimentation. This allows you to deploy different versions of features to a subset of users and gather data on their performance before rolling them out to the entire user base.

### CI/CD Cost Optimization Strategies:
Optimize costs associated with CI/CD pipelines:
Utilize on-demand resources (cloud instances, container instances) for CI/CD infrastructure to pay only for what you use.
Optimize pipeline configurations to minimize resource consumption during builds and deployments.
Consider using spot instances or preemptible VMs in the cloud for cost-effective CI/CD infrastructure.
## Conclusion
CI/CD pipelines are powerful tools that can significantly improve the speed, reliability, and efficiency of your software delivery process. By leveraging the advanced strategies and considerations explored in this blog, you can unlock the full potential of CI/CD and streamline your development workflows. Remember to tailor your CI/CD pipeline configuration to your specific project needs and development environment. As CI/CD continues to evolve, stay updated on emerging trends and best practices to ensure your pipelines remain robust and efficient in the ever-changing world of software development.
---
I'm grateful for the opportunity to delve into Advanced CI/CD Pipeline Configuration Strategies with you today. It's a fascinating area with so much potential to improve the security landscape.
Thanks for joining me on this exploration of Advanced CI/CD Pipeline Configuration Strategies. Your continued interest and engagement fuel this journey!
If you found this discussion on Advanced CI/CD Pipeline Configuration Strategies helpful, consider sharing it with your network! Knowledge is power, especially when it comes to security.
Let's keep the conversation going! Share your thoughts, questions, or experiences with Advanced CI/CD Pipeline Configuration Strategies in the comments below.
Eager to learn more about DevSecOps best practices? Stay tuned for the next post!
By working together and adopting secure development practices, we can build a more resilient and trustworthy software ecosystem.
Remember, the journey to secure development is a continuous learning process. Here's to continuous improvement!🥂
| gauri1504 |
1,893,064 | Disable BitLocker in windows drive | Disable BitLocker in Windows drive | 0 | 2024-06-19T03:47:35 | https://dev.to/ktrajasekar/disable-bitlocker-in-windows-drive-1efg | windowsbitlocker | ---
title: Disable BitLocker in windows drive
published: true
description: Disable BitLocker in Windows drive
tags: windowsbitlocker
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-09 01:03 +0000
---
```powershell
# Run from an elevated (Administrator) PowerShell session.
Disable-BitLocker -MountPoint "C:"   # start decrypting the C: drive
manage-bde -status                   # check decryption progress
```
| ktrajasekar |
1,893,061 | Comprehensive Guide to Workshop Manuals PDF | When it comes to vehicle maintenance, repair, and service, having access to accurate and detailed... | 0 | 2024-06-19T03:39:17 | https://dev.to/rog14mn/comprehensive-guide-to-workshop-manuals-pdf-4ddm | When it comes to vehicle maintenance, repair, and service, having access to accurate and detailed information is crucial. Workshop manuals, often available in PDF format, are indispensable tools for anyone who works on vehicles, from professional mechanics to DIY enthusiasts. These manuals provide comprehensive guidance on various aspects of vehicle maintenance and repair, ensuring that every task is performed correctly and efficiently. In this article, we will explore the significance of **[workshop manuals in PDF format](https://workshopmanuals.org/)**, their benefits, and how to use them effectively.
**What Are Workshop Manuals?**
Workshop manuals, also known as service manuals or repair manuals, are detailed guides that cover every aspect of maintaining and repairing a vehicle. These manuals are typically produced by the vehicle's manufacturer and include step-by-step instructions, diagrams, and specifications for every part of the vehicle. They are essential resources for diagnosing issues, performing repairs, and conducting routine maintenance.
**The Benefits of Workshop Manuals in PDF Format**
1. Accessibility
One of the primary advantages of having workshop manuals in PDF format is accessibility. PDFs can be easily downloaded and viewed on various devices, including computers, tablets, and smartphones. This means you can have the manual with you in the garage or workshop, making it convenient to reference instructions while working on a vehicle.
2. Searchability
PDFs are searchable, allowing you to quickly find the information you need. Instead of flipping through hundreds of pages to locate a specific section, you can use the search function to jump directly to the relevant content. This feature significantly reduces the time spent looking for information and increases efficiency.
3. Portability
With a PDF workshop manual, you can carry an entire library of manuals on a single device. Whether you're a mobile mechanic or simply prefer to work on your vehicle in different locations, having all the manuals you need in digital format ensures you always have the necessary information at your fingertips.
4. Cost-Effectiveness
Purchasing physical copies of workshop manuals can be expensive, especially if you need manuals for multiple vehicles. PDF versions are often more affordable, and in some cases, they may even be available for free. This cost-effectiveness makes it easier to obtain the necessary manuals without breaking the bank.
5. Environmental Impact
Using digital manuals reduces the demand for printed copies, which in turn decreases paper consumption and the associated environmental impact. By opting for PDF workshop manuals, you are making a more sustainable choice that helps conserve natural resources.
**How to Use Workshop Manuals Effectively**
1. Familiarize Yourself with the Manual
Before starting any repair or maintenance task, take some time to familiarize yourself with the workshop manual. Understand the layout, the type of information provided, and how to navigate the document. This will make it easier to find the information you need when you need it.
2. Use the Table of Contents and Index
Most workshop manuals include a table of contents and an index. These sections are invaluable for quickly locating specific topics or procedures. Make a habit of referring to these sections to streamline your workflow.
3. Follow Instructions Carefully
Workshop manuals provide detailed instructions for a reason. To ensure the best results and avoid causing damage, follow the instructions precisely. Pay attention to torque specifications, sequence of operations, and safety precautions.
4. Utilize Diagrams and Illustrations
Diagrams and illustrations are included in workshop manuals to provide visual guidance. These visuals can help you understand complex procedures and identify parts more easily. Make sure to refer to these diagrams whenever you are unsure about a particular step.
5. Keep Your Manual Up to Date
Vehicle manufacturers often release updates and revisions to workshop manuals. Ensure you have the latest version of the manual for your vehicle to benefit from the most accurate and up-to-date information.
**Common Sections Found in Workshop Manuals**
1. General Information
This section provides an overview of the vehicle, including its specifications, identification numbers, and general maintenance guidelines.
2. Engine
Detailed information on the engine, including disassembly, inspection, and reassembly procedures. This section covers everything from routine maintenance to major repairs.
3. Transmission
Guidance on servicing and repairing the transmission system, including both automatic and manual transmissions.
4. Electrical System
Instructions for diagnosing and repairing electrical issues, including wiring diagrams, circuit descriptions, and troubleshooting tips.
5. Suspension and Steering
Procedures for maintaining and repairing the suspension and steering systems, ensuring optimal handling and ride quality.
6. Brakes
Comprehensive information on the braking system, including inspection, maintenance, and repair of brake components.
7. Body and Exterior
Guidance on repairing and maintaining the vehicle's body and exterior components, including doors, windows, and trim.
8. Heating and Air Conditioning
Instructions for servicing the heating and air conditioning systems, ensuring comfort and climate control.
**Tips for Finding Workshop Manuals PDF**
1. Manufacturer Websites
Many vehicle manufacturers offer digital versions of their workshop manuals on their official websites. Check the manufacturer's website for downloadable PDFs.
2. Online Forums and Communities
Automotive forums and communities often share links to workshop manuals. These resources can be invaluable for finding specific manuals that may be difficult to locate elsewhere.
3. Specialized Websites
Several websites specialize in providing workshop manuals in PDF format. These sites often offer a wide range of manuals for different makes and models.
4. Libraries and Educational Institutions
Some libraries and educational institutions have digital collections of workshop manuals. These resources can be accessed for free or through a membership.
**Conclusion**
Workshop manuals in PDF format are essential tools for anyone involved in vehicle maintenance and repair. Their accessibility, searchability, portability, cost-effectiveness, and environmental benefits make them the preferred choice for both professionals and DIY enthusiasts. By understanding how to use these manuals effectively and knowing where to find them, you can ensure that you have the necessary information to perform any task on your vehicle with confidence and precision. Whether you're tackling a simple oil change or a complex engine rebuild, a workshop manual PDF is your trusted guide to getting the job done right.
| rog14mn | |
1,893,058 | Nail Manufacturing Made Easy: Exploring the Nail Production Line | Nail Manufacturing Made Easy: Exploring the Nail... | 0 | 2024-06-19T03:38:01 | https://dev.to/jennifer_lewisg_4f56caf5f/nail-manufacturing-made-easy-exploring-the-nail-production-line-14nf | design |
Nail Manufacturing Made Easy: Exploring the Nail Production Line
Would you like to learn how nail manufacturing works and how it can help you? Read on to discover its advantages, innovation, safety, usage, quality, applications, and services: nail manufacturing made easy.
Advantages of Nail Manufacturing
Automated nail manufacturing is good for those who need large quantities of nails for manufacturing or construction. Machine-made nails are produced faster and at lower cost than handmade nails, and they can be customized to specific requirements such as size, color, and material.
Innovation in Nail Manufacturing
Technology has revolutionized nail manufacturing with machines that produce nails in bulk, accurately and precisely. These machines are designed to make many types of nails, including common nails, finishing nails, and roofing nails.
Safety in Nail Manufacturing
Safety is an essential aspect of manufacturing. Nail-making machines are built with safety features that protect workers from potential hazards such as flying debris, sharp objects, and loud noise.
Uses of Nails
Nails are used in several industries, including construction, manufacturing, and woodworking. In the construction industry, nails are used to fasten elements such as lumber and metal together. In the manufacturing industry, nails are used in the production of furniture, doors, and windows. In woodworking, nails are used to join pieces of timber together.
Using Nails
Using nails is easy. First, decide on the type of nail needed for the task. Next, choose a nail size that suits the materials being fastened. Finally, drive the nail into the items being fastened using a hammer or nail gun.
Quality of Nails
Quality is essential in nail manufacturing. High-quality nails are durable, resistant to corrosion, and have a smooth finish. The equipment used in nail manufacturing is designed to produce high-quality nails that meet industry standards.
Application of Nails
Nails are applied in different ways depending on the industry. In the construction industry, nails can be driven manually with a hammer. In the manufacturing industry, nail guns can be used, which makes the process faster and more efficient.
Services Provided by Nail Suppliers
Nail suppliers provide services such as customization of nails, packaging, and distribution. Customization allows a customer to specify the type, size, and material of the nails required. Packaging ensures the nails are protected during transportation.
In short, nail manufacturing made easy offers faster production, lower costs, customization, and innovation. Safety is guaranteed by the protective features built into nail-making machines. Nails are used across many industries, and their quality is crucial to their application. Services provided by nail suppliers include customization, packaging, and distribution. | jennifer_lewisg_4f56caf5f |
1,893,057 | Use trading terminal plug-in to facilitate manual trading | Introduction FMZ.COM, as a quantitative trading platform, is mainly to serve programmatic... | 0 | 2024-06-19T03:27:37 | https://dev.to/fmzquant/use-trading-terminal-plug-in-to-facilitate-manual-trading-o3m | tarding, termianl, plugin, fmzquant | ## Introduction
FMZ.COM, as a quantitative trading platform, is mainly to serve programmatic traders. But it also provides a basic trading terminal. Although the function is simple, sometimes it can be useful. For example, if the exchange is busy and cannot be operated, but the API still works. At this time, you can withdraw orders, place orders, and view them through the terminal. In order to improve the experience of the trading terminal, plug-ins are now added. Sometimes, we need a small function to assist the transaction, such as ladder pending orders, iceberg orders, one-click hedging, one-click closing positions and other operations. It is not necessary to look at the execution log. It is a bit cumbersome to create a new robot. Just click the plugin in the terminal , The corresponding functions can be realized immediately, which can greatly facilitate manual transactions. The plug-in location is as follows:

## Plug-in principle
There are two modes of plug-in operation: immediate operation and background operation. Running in the background is equivalent to creating a robot (normal charges apply). The principle of immediate operation is the same as the debugging tool: a piece of code is sent to the docker selected on the trading terminal page for execution, with support for returning charts and tables (the debugging tool has also been upgraded to support this). As with the debugging tool, execution is limited to 5 minutes, with no fees and no language restrictions. Plug-ins with a short execution time can use the immediate run mode, while complex, long-running strategies still need to run as robots.
When writing a strategy, you need to select the strategy type as a plug-in. The return value of the plug-in's main function will pop up in the terminal after execution ends, supporting strings, charts, and tables. Because plug-in execution does not display a log, you can return the execution result from the plug-in instead.
## How to use
- Add strategy
Search directly in the search box as shown in the figure. Note that only trading plugin type strategies can be run, and then click Add. The public plug-ins can be found in Strategy Square: https://www.fmz.com/square/21/1


- Run the plugin
Click a strategy to enter the parameter setting interface; if there are no parameters, it runs directly. The docker, trading pair, and K-line period selected in the trading terminal are used as the default corresponding parameters. Click Execute Strategy to start execution and select the "Execute Now" mode (the default run mode can be remembered). The plugin will not display a log.

- Stop plugin
Click the icon shown to stop the plug-in. Since all plug-ins execute in a single debugging-tool process, stopping it stops all plug-ins.

## Examples of plug-in uses
Plug-ins can execute code for a period of time and perform simple operations. In many cases, repetitive manual operations can be implemented as plug-ins to facilitate trading. Specific examples follow; the source code given can be used as a reference for customizing your own strategy.
## Assist manual futures intertemporal hedging trading
Futures intertemporal (calendar) hedging is a very common strategy. Because the trading frequency is not very high, many people operate it manually. You need to go long one contract and short another, so it helps to analyze the spread trend first. Using plug-ins in the trading terminal will save you effort.
The first introduction is to draw the inter-period price difference plugin:
```
var chart = {
__isStock: true,
title : { text : 'Spread analysis chart'},
xAxis: { type: 'datetime'},
yAxis : {
title: {text: 'Spread'},
opposite: false,
},
series : [
{name : "diff", data : []},
]
}
function main() {
exchange.SetContractType('quarter')
var recordsA = exchange.GetRecords(PERIOD_M5) //Cycle can be customized
exchange.SetContractType('this_week')
var recordsB = exchange.GetRecords(PERIOD_M5)
for(var i=0;i<Math.min(recordsA.length,recordsB.length);i++){
var diff = recordsA[recordsA.length-Math.min(recordsA.length,recordsB.length)+i].Close - recordsB[recordsB.length-Math.min(recordsA.length,recordsB.length)+i].Close
chart.series[0].data.push([recordsA[recordsA.length-Math.min(recordsA.length,recordsB.length)+i].Time, diff])
}
return chart
}
```
Click once and the recent inter-period spread is clear at a glance. Plug-in source code copy address: https://www.fmz.com/strategy/187755

Suppose the spread analysis shows that the spread is converging: it is an opportunity to short the quarterly contract and go long the current-week contract. This is where the one-click hedging plug-in comes in: one click automatically shorts the quarterly and goes long the weekly, which is faster than manual operation. The strategy works by opening the same number of positions on both legs with a price slip. You can run it several times to slowly reach your desired position and avoid impacting the market, and you can change the default parameters to place orders faster. Strategy copy address: https://www.fmz.com/strategy/191348
```
function main(){
exchange.SetContractType(Reverse ? Contract_B : Contract_A)
var ticker_A = exchange.GetTicker()
if(!ticker_A){return 'Unable to get quotes'}
exchange.SetDirection('buy')
var id_A = exchange.Buy(ticker_A.Sell+Slip, Amount)
exchange.SetContractType(Reverse ? Contract_A : Contract_B) // switch to the other leg of the hedge
var ticker_B = exchange.GetTicker()
if(!ticker_B){return 'Unable to get quotes'}
exchange.SetDirection('sell')
var id_B = exchange.Sell(ticker_B.Buy-Slip, Amount)
if(id_A){
exchange.SetContractType(Reverse ? Contract_B : Contract_A)
exchange.CancelOrder(id_A)
}
if(id_B){
exchange.SetContractType(Reverse ? Contract_A : Contract_B) // id_B was placed on the other leg
exchange.CancelOrder(id_B)
}
return 'Position: ' + JSON.stringify(exchange.GetPosition())
}
```
When the spread has converged and you need to close the position, you can run the one-click closing plugin to close it as quickly as possible.
```
function main(){
while(true){
var pos = exchange.GetPosition()
var ticker = exchange.GetTicker()
if(!ticker){return 'Unable to get ticker'}
if(!pos || pos.length == 0 ){return 'No holding position'}
for(var i=0;i<pos.length;i++){
if(pos[i].Type == PD_LONG){
exchange.SetContractType(pos[i].ContractType)
exchange.SetDirection('closebuy')
exchange.Sell(ticker.Buy, pos[i].Amount - pos[i].FrozenAmount)
}
if(pos[i].Type == PD_SHORT){
exchange.SetContractType(pos[i].ContractType)
exchange.SetDirection('closesell')
exchange.Buy(ticker.Sell, pos[i].Amount - pos[i].FrozenAmount)
}
}
var orders = exchange.GetOrders()
Sleep(500)
for(var j=0;j<orders.length;j++){
if(orders[j].Status == ORDER_STATE_PENDING){
exchange.CancelOrder(orders[j].Id)
}
}
}
}
```
## Plug-in to assist spot trading
The most common is the iceberg order, which splits a large order into small orders. Although it can be run as a robot, a 5-minute plug-in is actually sufficient. There are two types of iceberg orders: one takes existing orders (taker) and the other places pending orders (maker). If you have a maker fee discount, you can choose pending orders, which means a longer execution time.
The following code is the source of the iceberg taker plug-in for buying: https://www.fmz.com/strategy/191771. For selling: https://www.fmz.com/strategy/191772
```
function main(){
var initAccount = _C(exchange.GetAccount)
while(true){
var account = _C(exchange.GetAccount)
var dealAmount = account.Stocks - initAccount.Stocks
var ticker = _C(exchange.GetTicker)
if(BUYAMOUNT - dealAmount >= BUYSIZE){
var id = exchange.Buy(ticker.Sell, BUYSIZE)
Sleep(INTERVAL*1000)
if(id){
exchange.CancelOrder(id) // May cause error log when the order is completed, which is all right.
}else{
throw 'buy error'
}
}else{
account = _C(exchange.GetAccount)
var avgCost = (initAccount.Balance - account.Balance)/(account.Stocks - initAccount.Stocks)
return 'Iceberg order to buy is done, avg cost is '+avgCost
}
}
}
```
Constantly occupying the best bid (buy 1) or best ask (sell 1) price level is another way to slowly work out of a position, and its impact on the market is relatively small. There is room to improve this strategy: you can manually change the minimum transaction volume or the precision.
Buy: https://www.fmz.com/strategy/191582
Sell: https://www.fmz.com/strategy/191730
```
function GetPrecision(){
var precision = {price:0, amount:0}
var depth = exchange.GetDepth()
for(var i=0;i<depth.Asks.length;i++){ // use the depth snapshot fetched above
var amountPrecision = depth.Asks[i].Amount.toString().indexOf('.') > -1 ? depth.Asks[i].Amount.toString().split('.')[1].length : 0
precision.amount = Math.max(precision.amount,amountPrecision)
var pricePrecision = depth.Asks[i].Price.toString().indexOf('.') > -1 ? depth.Asks[i].Price.toString().split('.')[1].length : 0
precision.price = Math.max(precision.price,pricePrecision)
}
return precision
}
function main(){
var initAccount = exchange.GetAccount()
if(!initAccount){return 'Unable to get account information'}
var precision = GetPrecision()
var buyPrice = 0
var lastId = 0
var done = false
while(true){
var account = _C(exchange.GetAccount)
var dealAmount = account.Stocks - initAccount.Stocks
var ticker = _C(exchange.GetTicker)
if(BuyAmount - dealAmount > 1/Math.pow(10,precision.amount) && ticker.Buy > buyPrice){
if(lastId){exchange.CancelOrder(lastId)}
var id = exchange.Buy(ticker.Buy, _N(BuyAmount - dealAmount,precision.amount))
if(id){
lastId = id
buyPrice = ticker.Buy // track our pending price so we only re-peg when the bid rises above it
}else{
done = true
}
}
if(BuyAmount - dealAmount <= 1/Math.pow(10,precision.amount)){done = true}
if(done){
var avgCost = (initAccount.Balance - account.Balance)/dealAmount
return 'order is done, avg cost is ' + avgCost // including fee cost
}
Sleep(Intervel*1000)
}
}
```
Sometimes, in order to get a better fill price or to catch an outlier price with a resting order, multiple orders can be placed at fixed price intervals. This plugin can also be used for futures pending orders. Source copy address: https://www.fmz.com/strategy/190017
```
function main() {
var ticker = exchange.GetTicker()
if(!ticker){
return 'Unable to get price'
}
for(var i=0;i<N;i++){
if(Type == 0){
if(exchange.GetName().startsWith('Futures')){
exchange.SetDirection('buy')
}
exchange.Buy(Start_Price-i*Spread,Amount+i*Amount_Step)
}else if(Type == 1){
if(exchange.GetName().startsWith('Futures')){
exchange.SetDirection('sell')
}
exchange.Sell(Start_Price+i*Spread,Amount+i*Amount_Step)
}else if(Type == 2){
exchange.SetDirection('closesell')
exchange.Buy(Start_Price-i*Spread,Amount+i*Amount_Step)
}
else if(Type == 3){
exchange.SetDirection('closebuy')
exchange.Sell(Start_Price+i*Spread,Amount+i*Amount_Step)
}
Sleep(500)
}
return 'order complete'
}
```
## Plug-in to assist commodity futures trading
Commonly used futures trading software often has many advanced pending-order functions, such as stop-loss orders and conditional orders, which can easily be written as plug-ins. Here is a plugin that places a closing order immediately after an opening pending order is filled. Copy address: https://www.fmz.com/strategy/187736
```
var buy = false
var trade_amount = 0
function main(){
while(true){
if(exchange.IO("status")){
exchange.SetContractType(Contract)
if(!buy){
buy = true
if(Direction == 0){
exchange.SetDirection('buy')
exchange.Buy(Open_Price, Amount)
}else{
exchange.SetDirection('sell')
exchange.Sell(Open_Price, Amount)
}
}
var pos = exchange.GetPosition()
if(pos && pos.length > 0){
for(var i=0;i<pos.length;i++){
if(pos[i].ContractType == Contract && pos[i].Type == Direction && pos[i].Amount-pos[i].FrozenAmount>0){
var cover_amount = Math.min(Amount-trade_amount, pos[i].Amount-pos[i].FrozenAmount)
if(cover_amount >= 1){
trade_amount += cover_amount
if(Direction == 0){
exchange.SetDirection('closebuy_today')
exchange.Sell(Close_Price, cover_amount)
}else{
exchange.SetDirection('closesell_today')
exchange.Buy(Close_Price, cover_amount)
}
}
}
}
}
} else {
LogStatus(_D(), "CTP is not connected!")
Sleep(10000)
}
if(trade_amount >= Amount){
Log('mission completed')
return
}
Sleep(1000)
}
}
```
## To sum up
After reading so many small functions, you should also have your own ideas. You may wish to write a plug-in to facilitate your manual trading.
From: https://www.fmz.com/digest-topic/5957 | fmzquant |
1,893,056 | The Next-Gen Power Couple Reinventing Sports Broadcasting | Listen up, sports fans! The broadcasting world you knew is getting a major overhaul, courtesy of this... | 0 | 2024-06-19T03:26:31 | https://dev.to/kevintse756/the-next-gen-power-couple-reinventing-sports-broadcasting-5ahk |
Listen up, sports fans! The broadcasting world you knew is getting a major overhaul, courtesy of this thing called the Remote Integration Model (REMI). But the real stars of this show are 5G and cloud technologies – they're not just supporting acts, but the leading duo driving a complete transformation in how we deliver live sports content. We're talking game-changing connectivity and flexibility that'll blow your mind!
Let's start with 5G deployment and its beastly high-bandwidth, low-latency capabilities. This bad boy has been an absolute game-changer for sports broadcasting. Real-time, crystal-clear ultra-high-definition video streaming? Check! Smooth, seamless capture of every heart-pounding moment on the field? Double-check! With 5G, multiple UHD camera feeds can be transmitted simultaneously to production hubs, cutting down on all that bulky on-site gear. This gives the crew more creative freedom without breaking the bank on logistics.
But wait, there's more! Cloud computing is also stepping up to the plate, offering performance, scalability, and data management efficiency that's off the charts. Remote editing and mixing of live feeds? No sweat. Storing massive video files like it's nothing? Easy-peasy. Instant replays and content distribution at lightning speed? You got it! Thanks to cloud tech, broadcasters can now scale their operations to match any sporting event, big or small, ensuring a consistently epic viewing experience for fans.

Now, when you combine the powers of 5G and cloud tech with REMI, it's like a match made in broadcasting heaven. We're talking better quality, more immersive broadcasts delivered with maximum efficiency. And let's not forget the potential for real-time data analytics and AI-driven content customization. It's a whole new ball game, folks!
Some companies are already leading the charge in implementing these cutting-edge technologies. [TVU Networks](https://www.tvunetworks.com/) is using 5G and cloud solutions for real-time mobile network coverage and lightning-fast content distribution. [Grass Valley](https://www.grassvalley.com/) is leveraging these technologies to revolutionize live production, making operations more flexible and efficient than ever before. [And Amazon Web Services (AWS)](https://aws.amazon.com/) is offering a whole suite of cloud services that empower broadcasters to manage and scale their global sports production capabilities like a boss.
Of course, integrating these technologies ain't a walk in the park. There are challenges to tackle, like cybersecurity, data privacy, and the hefty infrastructure investment required. But the opportunities for enhancing viewer engagement, production flexibility, and scalability far outweigh the risks. Personalized content and dynamic production adjustments? Sign us up!
Combining 5G and cloud tech is redefining the sports broadcast landscape, setting new standards for innovation and agility. As these technologies continue to evolve, they'll shape the industry in ways we can't even imagine yet, offering ever more sophisticated features and creative solutions. If you're in the sports broadcasting game, it's crucial to stay informed about these advancements, participate in industry discussions, and explore new technologies. Staying ahead of the curve is the only way to maintain that competitive edge in this fast-paced, ever-evolving field. | kevintse756 | |
1,893,050 | API Testing: A Journey into Reconnaissance and Vulnerability Identification using BurpSuite | Think of API testing as embarking on a thrilling adventure, where you explore uncharted territories... | 0 | 2024-06-19T03:19:13 | https://dev.to/adebiyiitunuayo/api-testing-a-journey-into-reconnaissance-and-vulnerability-identification-using-burpsuite-50o | cybersecurity, webdev, vulnerabilities, api | Think of API testing as embarking on a thrilling adventure, where you explore uncharted territories to ensure the safety and reliability of your digital assets. This guide will take you through the exciting process of API testing, focusing on reconnaissance and vulnerability identification.
### API Reconnaissance
Before diving into the depths of API testing, you need a solid map like this {% embed https://static.wikia.nocookie.net/callofduty/images/9/94/Isolated_Map_Overview_Season_4_Fool%27s_Gold_CODM.png/revision/latest?cb=20240414070316 %} :) Just kidding, not that [COD](https://callofduty.fandom.com/wiki/Call_of_Duty_(series)) map, but the elements are somewhat similar: points, locations, etc. This involves understanding the lay of the land, or in technical terms, the API's attack surface. Here’s your guide:
1. **Identify API Endpoints**:
- Endpoints are like landmarks on your journey. They are specific locations where APIs receive requests. For instance, imagine you’re visiting a library (the server) and you want to see a list of books. You’d head to the `/api/books` section.
- Another landmark could be `/api/books/mystery`, guiding you to a collection of mystery novels. Knowing these endpoints is like having a treasure map—it’s crucial for successful exploration.
2. **Determine Interaction Methods**:
- To interact with these landmarks, you need the right tools and instructions. Gather details on:
- **Input Data**: Think of these as the keys to different doors, including both mandatory and optional parameters.
- **Supported Request Types**: This is akin to knowing if a door requires a push, pull, or a keycard—whether it’s HTTP methods or media formats.
- **Rate Limits and Authentication Mechanisms**: These are the rules of the land, ensuring you don’t overstay your welcome or enter without permission.
### API Documentation
Imagine having a guidebook that explains everything about your journey. API documentation is that guidebook, and it comes in two forms:
1. **Human-Readable**: Like a travel guide with detailed explanations, examples, and usage scenarios.
2. **Machine-Readable**: Think of it as a GPS system that uses structured formats like JSON or XML for software automation.
**Finding API Documentation**:
- **Browsing Applications**: Use tools like Burp Scanner to explore the API, similar to using a metal detector to find hidden treasures. Look for endpoints such as `/api`, `/swagger/index.html`, or `/openapi.json`.
- **Using Intruder**: Deploy wordlists based on common API conventions, much like using a dictionary to decode an ancient language.
**Using Machine-Readable Documentation**:
- Tools like Burp Scanner can audit OpenAPI documentation, while Postman and SoapUI can help test the documented endpoints, much like testing the integrity of discovered artifacts.
### Identifying and Interacting with Endpoints
**Manual Exploration**:
- Use Burp Repeater and Burp Intruder to interact with identified endpoints. This is like probing different areas of a cave to uncover additional chambers or passageways.
**HTTP Methods**:
- Test various HTTP methods to uncover different functionalities:
- `GET /api/tasks`: Retrieves tasks, akin to finding a list of quests.
- `POST /api/tasks`: Creates a new task, like adding a new quest to your log.
- `DELETE /api/tasks/1`: Deletes a task, removing a quest from your log.
- Use Burp Intruder to cycle through these methods, focusing on low-priority areas to avoid unintended consequences, like a careful explorer avoiding dangerous traps.
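To try the same method cycling outside of Burp, here is a minimal Node.js sketch (assumes Node 18+ for the built-in fetch; the URL is a placeholder, and you should only probe systems you are authorized to test):

```javascript
// Probe one endpoint with different HTTP methods and compare status codes.
const methods = ["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"];
const url = "https://example.com/api/tasks"; // placeholder target

async function probe() {
  for (const method of methods) {
    const res = await fetch(url, { method });
    console.log(method, res.status); // differing statuses hint at hidden functionality
  }
}

probe().catch(console.error);
```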
**Content Types**:
- APIs often expect data in a specific format. Changing the Content-Type header can:
- Trigger errors revealing useful information, much like pressing the wrong button in an ancient temple might reveal hidden secrets.
- Bypass defenses or exploit logic differences, similar to finding a hidden passageway.
- Use the Content type converter BApp to switch between formats like XML and JSON, akin to translating ancient texts.
### Discovering Hidden Endpoints and Parameters
**Using Intruder for Hidden Endpoints**:
- Test structures like `/api/user/update` with Burp Intruder, adding payloads at common positions. Utilize wordlists, much like a treasure hunter uses maps marked with potential dig sites.
**Finding Hidden Parameters**:
- Use Burp Intruder and Param Miner BApp to uncover hidden parameters, and the Content Discovery Tool to find parameters not linked from visible content, like finding hidden switches in a secret room.
### Testing for Vulnerabilities
**Mass Assignment Vulnerabilities**:
- These occur when frameworks automatically bind request parameters to internal object fields. It’s like finding out that a hidden lever not only opens a secret door but also triggers a trap.
**Testing Mass Assignment**:
- Send PATCH requests with parameters like `{"username": "wiener", "email": "wiener@example.com", "isAdmin": false}`. Check responses for unusual behavior, much like testing each step for hidden pressure plates (see the sketch below).
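As a rough illustration outside of Burp, the probe might look like the Go sketch below. The `/api/users/me` path, the bearer token header, and the exact field handling are assumptions for the example, not a real API:

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical endpoint; adjust for the API under test.
	const endpoint = "https://example.com/api/users/me"

	// Send the expected value first, then the escalated one, and diff the
	// responses for signs that the server bound the extra field.
	for _, body := range []string{
		`{"username": "wiener", "email": "wiener@example.com", "isAdmin": false}`,
		`{"username": "wiener", "email": "wiener@example.com", "isAdmin": true}`,
	} {
		req, err := http.NewRequest(http.MethodPatch, endpoint, bytes.NewBufferString(body))
		if err != nil {
			panic(err)
		}
		req.Header.Set("Content-Type", "application/json")
		req.Header.Set("Authorization", "Bearer <session-token>") // placeholder token

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Println("request failed:", err)
			continue
		}
		fmt.Println("status:", resp.StatusCode)
		resp.Body.Close()
	}
}
```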
**Server-Side Parameter Pollution**:
- This vulnerability lets attackers manipulate server-side requests by injecting parameters into user inputs. It’s like inserting a false clue into a treasure map to mislead others.
- **Query String Pollution**: Test by inserting characters like `#`, `&`, and `=`, similar to testing different keys in a lock.
- **Path Traversal**: Test by adding path sequences like `peter/../admin`, akin to finding a hidden path through a maze (both probes are sketched below).
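To make both probes concrete, here is a small hedged Go sketch that simply prints what the polluted requests would look like. The `/userSearch` endpoint and `name` parameter are invented for illustration:

```go
package main

import (
	"fmt"
	"net/url"
)

func main() {
	probes := []string{
		"peter#",           // '#': can we truncate the server-side query string?
		"peter&admin=true", // '&' and '=': can we inject an extra server-side parameter?
		"peter/../admin",   // path sequence: does the value end up in a URL path?
	}
	for _, p := range probes {
		// URL-encode so the special characters reach the back end intact.
		fmt.Printf("GET /userSearch?name=%s\n", url.QueryEscape(p))
	}
}
```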
### Testing with Automated Tools
**Burp Scanner and Backslash Powered Scanner**:
- Use Burp Scanner to detect suspicious input transformations, like using a metal detector to find hidden metal objects. The Backslash Powered Scanner BApp helps identify server-side injection vulnerabilities, much like using a spyglass to spot distant dangers.
### Preventing Vulnerabilities
**Design Considerations**:
- **Protect Documentation**: Keep it updated and secure, like safeguarding your treasure map.
- **Validate Inputs**: Use allowlists and blocklists to control input parameters, akin to setting up a security perimeter around your treasure.
- **Error Handling**: Use generic error messages to avoid revealing sensitive information, much like using decoy maps to mislead potential thieves.
- **Versioning**: Apply security measures across all API versions, ensuring that every iteration of your treasure map is secure.
**Preventing Mass Assignment**:
- Allowlist user-updatable properties and blocklist sensitive properties, preventing unintended parameter binding, like ensuring only trusted allies have access to the secret vault.
### Real-World Examples of API Testing Vulnerabilities
1. **Facebook Graph API - Access Token Exposure (2018)**
- **Incident**: A vulnerability exposed access tokens of 50 million users.
- **Consequences**: Account takeover, access to private messages, posting updates.
- **Lesson**: Implement robust token management and regular audits.
2. **Uber API - Personal Data Exposure (2016)**
- **Incident**: Insufficient access controls exposed personal data of riders and drivers.
- **Consequences**: Risk of privacy breaches and data misuse.
- **Lesson**: Ensure strict access controls and regularly update security policies.
3. **Twitter API - Direct Message Leak (2017)**
- **Incident**: A vulnerability allowed unauthorized access to DMs.
- **Consequences**: Exposure of private conversations.
- **Lesson**: Thoroughly test permission systems and authorization checks.
4. **Tesla API - Remote Control of Vehicles (2016)**
- **Incident**: Researchers accessed and controlled vehicle features remotely.
- **Consequences**: Significant security risk, potential vehicle theft or manipulation.
- **Lesson**: Rigorous security testing for critical systems and secure authentication.
5. **GitHub API - Repository Data Exposure (2020)**
- **Incident**: Unauthorized access to private repository data due to improper token handling.
- **Consequences**: Exposure of sensitive code and intellectual property.
- **Lesson**: Use secure token handling methods and regularly rotate tokens.
6. **Slack API - Bypassing Rate Limits (2019)**
- **Incident**: A vulnerability allowed attackers to bypass rate limits and spam messages.
- **Consequences**: Denial-of-service attacks, disrupted services.
- **Lesson**: Implement and enforce rate limiting and monitor API usage patterns.
7. **Google Cloud API - Sensitive Data Leak (2020)**
- **Incident**: Misconfiguration exposed sensitive data of enterprise customers.
- **Consequences**: Exposure of sensitive business information.
- **Lesson**: Ensure proper configuration management and regular security audits.
### Conclusion
Effective API testing is like embarking on an epic quest, requiring a thorough understanding of the API’s structure and behavior. By following these steps, you can identify and mitigate various vulnerabilities, ensuring the security and reliability of your digital treasure. Remember, continuous testing and updating security measures are key to safeguarding against emerging threats.
_Mischief Managed._ | adebiyiitunuayo |
1,893,034 | (Part 8)Golang Framework Hands-on - Cache/Params Data Caching and Data Parameters | Github: https://github.com/aceld/kis-flow Document:... | 0 | 2024-06-19T03:07:00 | https://dev.to/aceld/part-8golang-framework-hands-on-cacheparams-data-caching-and-data-parameters-5df5 | go | <img width="150px" src="https://github.com/aceld/kis-flow/assets/7778936/8729d750-897c-4ba3-98b4-c346188d034e" />
Github: https://github.com/aceld/kis-flow
Document: https://github.com/aceld/kis-flow/wiki
---
[Part1-OverView](https://dev.to/aceld/part-1-golang-framework-hands-on-kisflow-streaming-computing-framework-overview-8fh)
[Part2.1-Project Construction / Basic Modules](https://dev.to/aceld/part-2-golang-framework-hands-on-kisflow-streaming-computing-framework-project-construction-basic-modules-cia)
[Part2.2-Project Construction / Basic Modules](https://dev.to/aceld/part-3golang-framework-hands-on-kisflow-stream-computing-framework-project-construction-basic-modules-1epb)
[Part3-Data Stream](https://dev.to/aceld/part-4golang-framework-hands-on-kisflow-stream-computing-framework-data-stream-1mbd)
[Part4-Function Scheduling](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-function-scheduling-4p0h)
[Part5-Connector](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-connector-hcd)
[Part6-Configuration Import and Export](https://dev.to/aceld/part-6golang-framework-hands-on-kisflow-stream-computing-framework-configuration-import-and-export-47o1)
[Part7-KisFlow Action](https://dev.to/aceld/part-7golang-framework-hands-on-kisflow-stream-computing-framework-kisflow-action-3n05)
[Part8-Cache/Params Data Caching and Data Parameters](https://dev.to/aceld/part-8golang-framework-hands-on-cacheparams-data-caching-and-data-parameters-5df5)
[Part9-Multiple Copies of Flow](https://dev.to/aceld/part-8golang-framework-hands-on-multiple-copies-of-flow-c4k)
[Part10-Prometheus Metrics Statistics](https://dev.to/aceld/part-10golang-framework-hands-on-prometheus-metrics-statistics-22f0)
[Part11-Adaptive Registration of FaaS Parameter Types Based on Reflection](https://dev.to/aceld/part-11golang-framework-hands-on-adaptive-registration-of-faas-parameter-types-based-on-reflection-15i9)
---
[Case1-Quick Start](https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-quick-start-guide-f51)
---
## 8.1 Flow Cache - Data Stream Caching
KisFlow also provides shared caching for stream computing, backed by a simple local cache that developers can use as needed. The third-party local cache it depends on is go-cache: https://github.com/patrickmn/go-cache.
### 8.1.1 go-cache
#### (1) Installation
```bash
go get github.com/patrickmn/go-cache
```
#### (2) Usage
```go
import (
"fmt"
"github.com/patrickmn/go-cache"
"time"
)
func main() {
// Create a cache with a default expiration time of 5 minutes, and which
// purges expired items every 10 minutes
c := cache.New(5*time.Minute, 10*time.Minute)
// Set the value of the key "foo" to "bar", with the default expiration time
c.Set("foo", "bar", cache.DefaultExpiration)
// Set the value of the key "baz" to 42, with no expiration time
// (the item won't be removed until it is re-set, or removed using
// c.Delete("baz")
c.Set("baz", 42, cache.NoExpiration)
// Get the string associated with the key "foo" from the cache
foo, found := c.Get("foo")
if found {
fmt.Println(foo)
}
// Since Go is statically typed, and cache values can be anything, type
// assertion is needed when values are being passed to functions that don't
// take arbitrary types, (i.e. interface{}). The simplest way to do this for
// values which will only be used once--e.g. for passing to another
// function--is:
foo, found := c.Get("foo")
if found {
MyFunction(foo.(string))
}
// This gets tedious if the value is used several times in the same function.
// You might do either of the following instead:
if x, found := c.Get("foo"); found {
foo := x.(string)
// ...
}
// or
var foo string
if x, found := c.Get("foo"); found {
foo = x.(string)
}
// ...
// foo can then be passed around freely as a string
// Want performance? Store pointers!
c.Set("foo", &MyStruct, cache.DefaultExpiration)
if x, found := c.Get("foo"); found {
foo := x.(*MyStruct)
// ...
}
}
```
For detailed reference: https://github.com/patrickmn/go-cache
### 8.1.2 KisFlow Integration with go-cache
#### (1) Flow Provides Abstract Interface
Flow provides interfaces for cache operations as follows:
> kis-flow/kis/flow.go
```go
type Flow interface {
// Run schedules the Flow, sequentially scheduling and executing Functions within the Flow
Run(ctx context.Context) error
// Link connects Functions within the Flow according to the configuration file
Link(fConf *config.KisFuncConfig, fParams config.FParam) error
// CommitRow submits Flow data to the Function layer about to be executed
CommitRow(row interface{}) error
// Input gets the input source data for the currently executing Function in the Flow
Input() common.KisRowArr
// GetName gets the name of the Flow
GetName() string
// GetThisFunction gets the currently executing Function
GetThisFunction() Function
// GetThisFuncConf gets the configuration of the currently executing Function
GetThisFuncConf() *config.KisFuncConfig
// GetConnector gets the Connector of the currently executing Function
GetConnector() (Connector, error)
// GetConnConf gets the configuration of the Connector for the currently executing Function
GetConnConf() (*config.KisConnConfig, error)
// GetConfig gets the configuration of the current Flow
GetConfig() *config.KisFlowConfig
// GetFuncConfigByName gets the configuration of the Function by its name
GetFuncConfigByName(funcName string) *config.KisFuncConfig
// Next advances the currently executing Function to the next Function with specified Action
Next(acts ...ActionFunc) error
// ++++++++++++++++++++++++++++++++++++++++
// GetCacheData gets the cache data of the current Flow
GetCacheData(key string) interface{}
// SetCacheData sets the cache data of the current Flow
SetCacheData(key string, value interface{}, Exp time.Duration)
}
```
`SetCacheData()` writes to the local cache, with `Exp` as the expiration time; if `Exp` is 0, the entry is kept permanently.
`GetCacheData()` reads from the local cache.
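As a rough usage sketch (the handler name, key, and value are made up for illustration, and the imports match the other handler examples), a FaaS handler could share data across runs of the same Flow instance like this:

```go
func FuncCacheDemoHandler(ctx context.Context, flow kis.Flow) error {
	// Cache an intermediate result for 10 minutes.
	flow.SetCacheData("user_count", 42, 10*time.Minute)

	// Read it back later, in this run or a subsequent one on the same Flow.
	if v := flow.GetCacheData("user_count"); v != nil {
		count := v.(int) // values are interface{}, so assert the concrete type
		fmt.Println("cached user_count =", count)
	}
	return nil
}
```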
#### (2) Providing Constants
Provide some constants related to cache expiration time.
> kis-flow/common/const.go
```go
// cache
const (
// DeFaultFlowCacheCleanUp is the default cache cleanup interval for Flow objects in KisFlow, in minutes
DeFaultFlowCacheCleanUp = 5 // in minutes
// DefaultExpiration is the default GoCache time, permanently saved
DefaultExpiration time.Duration = 0
)
```
#### (3) Adding and Initializing Members in KisFlow
> kis-flow/flow/kis_flow.go
```go
// KisFlow represents the context environment for stream computing
type KisFlow struct {
// ... ...
// ... ...
// Local cache for the flow
cache *cache.Cache // Temporary cache context environment for Flow
}
// NewKisFlow creates a new KisFlow
func NewKisFlow(conf *config.KisFlowConfig) kis.Flow {
flow := new(KisFlow)
// ... ...
// ... ...
// Initialize local cache
flow.cache = cache.New(cache.NoExpiration, common.DeFaultFlowCacheCleanUp*time.Minute)
return flow
}
```
#### (4) Implementing the Interface
Finally, implement the two interfaces for cache read and write operations as follows:
> kis-flow/flow/kis_flow_data.go
```go
func (flow *KisFlow) GetCacheData(key string) interface{} {
if data, found := flow.cache.Get(key); found {
return data
}
return nil
}
func (flow *KisFlow) SetCacheData(key string, value interface{}, Exp time.Duration) {
if Exp == common.DefaultExpiration {
flow.cache.Set(key, value, cache.DefaultExpiration)
} else {
flow.cache.Set(key, value, Exp)
}
}
```
## 8.2 MetaData Temporary Cache Parameters
MetaData is defined as a `map[string]interface{}` structure available at each level of Flow, Function, and Connector to store temporary data. The lifespan of this data is consistent with the lifespan of each instance.
### 8.2.1 Adding MetaData to Flow
First, add the `metaData map[string]interface{}` member and corresponding read-write lock to KisFlow.
> kis-flow/flow/kis_flow.go
```go
// KisFlow represents the context environment throughout the entire stream computing
type KisFlow struct {
// ... ...
// ... ...
// +++++++++++++++++++++++++++++++++++++++++++
// metaData for the flow
metaData map[string]interface{} // Custom temporary data for Flow
mLock sync.RWMutex // Read-write lock to manage metaData
}
```
Also, initialize the `metaData` member in the KisFlow constructor as follows:
> kis-flow/flow/kis_flow.go
```go
// NewKisFlow creates a KisFlow
func NewKisFlow(conf *config.KisFlowConfig) kis.Flow {
flow := new(KisFlow)
// ... ...
// ... ...
// ++++++++++++++++++++++++++++++++++++++
// Initialize temporary data
flow.metaData = make(map[string]interface{})
return flow
}
```
Next, add the read and write interfaces for MetaData to the Flow as follows:
> kis-flow/kis/flow.go
```go
type Flow interface {
// Run schedules the Flow, sequentially scheduling and executing Functions within the Flow
Run(ctx context.Context) error
// Link connects Functions within the Flow according to the configuration file
Link(fConf *config.KisFuncConfig, fParams config.FParam) error
// CommitRow submits Flow data to the Function layer about to be executed
CommitRow(row interface{}) error
// Input gets the input source data for the currently executing Function in the Flow
Input() common.KisRowArr
// GetName gets the name of the Flow
GetName() string
// GetThisFunction gets the currently executing Function
GetThisFunction() Function
// GetThisFuncConf gets the configuration of the currently executing Function
GetThisFuncConf() *config.KisFuncConfig
// GetConnector gets the Connector of the currently executing Function
GetConnector() (Connector, error)
// GetConnConf gets the configuration of the Connector for the currently executing Function
GetConnConf() (*config.KisConnConfig, error)
// GetConfig gets the configuration of the current Flow
GetConfig() *config.KisFlowConfig
// GetFuncConfigByName gets the configuration of the Function by its name
GetFuncConfigByName(funcName string) *config.KisFuncConfig
// Next advances the currently executing Function to the next Function with specified Action
Next(acts ...ActionFunc) error
// GetCacheData gets the cache data of the current Flow
GetCacheData(key string) interface{}
// SetCacheData sets the cache data of the current Flow
SetCacheData(key string, value interface{}, Exp time.Duration)
// ++++++++++++++++++++++++++++
// GetMetaData gets the temporary data of the current Flow
GetMetaData(key string) interface{}
// SetMetaData sets the temporary data of the current Flow
SetMetaData(key string, value interface{})
}
```
Define the `GetMetaData()` and `SetMetaData()` interfaces for reading and writing respectively. Finally, implement these interfaces as follows:
> kis-flow/flow/kis_flow_data.go
```go
// GetMetaData retrieves the temporary data of the current Flow object
func (flow *KisFlow) GetMetaData(key string) interface{} {
flow.mLock.RLock()
defer flow.mLock.RUnlock()
data, ok := flow.metaData[key]
if !ok {
return nil
}
return data
}
// SetMetaData sets the temporary data of the current Flow object
func (flow *KisFlow) SetMetaData(key string, value interface{}) {
flow.mLock.Lock()
defer flow.mLock.Unlock()
flow.metaData[key] = value
}
```
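Here is a quick hedged sketch of how a handler might use these methods (the key and value are invented for illustration):

```go
func FuncMetaDemoHandler(ctx context.Context, flow kis.Flow) error {
	// Stash a value that lives exactly as long as this Flow instance.
	flow.SetMetaData("batch_offset", 100)

	if v := flow.GetMetaData("batch_offset"); v != nil {
		offset := v.(int) // metaData values are interface{}, so assert the type
		fmt.Println("current batch offset =", offset)
	}
	return nil
}
```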
### 8.2.2 Adding MetaData to Function
First, add the `metaData` member to `BaseFunction` as follows:
> kis-flow/function/kis_base_function.go
```go
type BaseFunction struct {
// Id, KisFunction instance ID, used to distinguish different instance objects within KisFlow
Id string
Config *config.KisFuncConfig
// flow
flow kis.Flow // Context environment KisFlow
// connector
connector kis.Connector
// ++++++++++++++++++++++++
// Custom temporary data for Function
metaData map[string]interface{}
// Read-write lock to manage metaData
mLock sync.RWMutex
// link
N kis.Function // Next stream computing Function
P kis.Function // Previous stream computing Function
}
```
Each concrete Function type needs its constructor to initialize the `metaData` member. The changes are as follows:
> kis-flow/function/kis_base_function.go
```go
func NewKisFunction(flow kis.Flow, config *config.KisFuncConfig) kis.Function {
var f kis.Function
// Factory produces generalized objects
// ++++++++++++++
switch common.KisMode(config.FMode) {
case common.V:
f = NewKisFunctionV() // +++
case common.S:
f = NewKisFunctionS() // +++
case common.L:
f = NewKisFunctionL() // +++
case common.C:
f = NewKisFunctionC() // +++
case common.E:
f = NewKisFunctionE() // +++
default:
// LOG ERROR
return nil
}
// Generate random unique instance ID
f.CreateId()
// Set basic information properties
if err := f.SetConfig(config); err != nil {
panic(err)
}
// Set Flow
if err := f.SetFlow(flow); err != nil {
panic(err)
}
return f
}
```
Each constructor is as follows:
> kis-flow/function/kis_function_c.go
```go
func NewKisFunctionC() kis.Function {
f := new(KisFunctionC)
// Initialize metaData
f.metaData = make(map[string]interface{})
return f
}
```
> kis-flow/function/kis_function_v.go
```go
func NewKisFunctionV() kis.Function {
f := new(KisFunctionV)
// Initialize metaData
f.metaData = make(map[string]interface{})
return f
}
```
> kis-flow/function/kis_function_e.go
```go
func NewKisFunctionE() kis.Function {
f := new(KisFunctionE)
// Initialize metaData
f.metaData = make(map[string]interface{})
return f
}
```
> kis-flow/function/kis_function_s.go
```go
func NewKisFunctionS() kis.Function {
f := new(KisFunctionS)
// Initialize metaData
f.metaData = make(map[string]interface{})
return f
}
```
> kis-flow/function/kis_function_l.go
```go
func NewKisFunctionL() kis.Function {
f := new(KisFunctionL)
// Initialize metaData
f.metaData = make(map[string]interface{})
return f
}
```
Next, add interfaces to access the metaData member in the Function abstraction layer as follows:
```go
type Function interface {
// Call executes the stream computing logic
Call(ctx context.Context, flow Flow) error
// SetConfig configures the strategy for the current Function instance
SetConfig(s *config.KisFuncConfig) error
// GetConfig retrieves the configuration strategy of the current Function instance
GetConfig() *config.KisFuncConfig
// SetFlow sets the Flow instance that the current Function instance depends on
SetFlow(f Flow) error
// GetFlow retrieves the Flow instance that the current Function instance depends on
GetFlow() Flow
// AddConnector adds a Connector to the current Function instance
AddConnector(conn Connector) error
// GetConnector retrieves the Connector associated with the current Function instance
GetConnector() Connector
// CreateId generates a random instance KisID for the current Function instance
CreateId()
// GetId retrieves the FID of the current Function
GetId() string
// GetPrevId retrieves the FID of the previous Function node of the current Function
GetPrevId() string
// GetNextId retrieves the FID of the next Function node of the current Function
GetNextId() string
// Next returns the next layer computing stream Function, or nil if it is the last layer
Next() Function
// Prev returns the previous layer computing stream Function, or nil if it is the last layer
Prev() Function
// SetN sets the next Function instance
SetN(f Function)
// SetP sets the previous Function instance
SetP(f Function)
// ++++++++++++++++++++++++++++++++++
// GetMetaData retrieves the temporary data of the current Function
GetMetaData(key string) interface{}
// SetMetaData sets the temporary data of the current Function
SetMetaData(key string, value interface{})
}
```
Implement the above two interfaces in the BaseFunction.
> kis-flow/function/kis_base_function.go
```go
// GetMetaData retrieves the temporary data of the current Function
func (base *BaseFunction) GetMetaData(key string) interface{} {
base.mLock.RLock()
defer base.mLock.RUnlock()
data, ok := base.metaData[key]
if !ok {
return nil
}
return data
}
// SetMetaData sets the temporary data of the current Function
func (base *BaseFunction) SetMetaData(key string, value interface{}) {
base.mLock.Lock()
defer base.mLock.Unlock()
base.metaData[key] = value
}
```
### 8.2.3 Adding MetaData to Connector
First, add the `metaData` member to `KisConnector` as follows:
> kis-flow/conn/kis_connector.go
```go
type KisConnector struct {
// Connector ID
CId string
// Connector Name
CName string
// Connector Config
Conf *config.KisConnConfig
// Connector Init
onceInit sync.Once
// ++++++++++++++
// Custom temporary data for KisConnector
metaData map[string]interface{}
// Read-write lock to manage metaData
mLock sync.RWMutex
}
// NewKisConnector creates a KisConnector based on the configuration strategy
func NewKisConnector(config *config.KisConnConfig) *KisConnector {
conn := new(KisConnector)
conn.CId = id.KisID(common.KisIdTypeConnector)
conn.CName = config.CName
conn.Conf = config
// +++++++++++++++++++++++++++++++++++
conn.metaData = make(map[string]interface{})
return conn
}
```
Initialize `metaData` in the constructor.
Next, add interfaces to access and set MetaData in the Connector abstraction layer as follows:
> kis-flow/kis/connector.go
```go
type Connector interface {
// Init initializes the links of the storage engine associated with the Connector
Init() error
// Call invokes the read and write operations of the external storage logic of the Connector
Call(ctx context.Context, flow Flow, args interface{}) error
// GetId retrieves the ID of the Connector
GetId() string
// GetName retrieves the name of the Connector
GetName() string
// GetConfig retrieves the configuration information of the Connector
GetConfig() *config.KisConnConfig
// GetMetaData retrieves the temporary data of the current Connector
// +++++++++++++++++++++++++++++++
GetMetaData(key string) interface{}
// SetMetaData sets the temporary data of the current Connector
SetMetaData(key string, value interface{})
}
```
Finally, implement the above two interfaces in `KisConnector` as follows:
> kis-flow/conn/kis_connector.go
```go
// GetMetaData retrieves the temporary data of the current Connector
func (conn *KisConnector) GetMetaData(key string) interface{} {
conn.mLock.RLock()
defer conn.mLock.RUnlock()
data, ok := conn.metaData[key]
if !ok {
return nil
}
return data
}
// SetMetaData sets the temporary data of the current Connector
func (conn *KisConnector) SetMetaData(key string, value interface{}) {
conn.mLock.Lock()
defer conn.mLock.Unlock()
conn.metaData[key] = value
}
```
## 8.3 Configuration File Parameters
KisFlow allows developers to define default parameters (Params) for configuring Flow, Function, Connector, etc., in the configuration file. Here are some examples:
Function:
```yaml
kistype: func
fname: funcName1
fmode: Verify
source:
name: Official Account Douyin Mall Order Data
must:
- order_id
- user_id
option:
default_params:
default1: funcName1_param1
default2: funcName1_param2
```
Flow:
```yaml
kistype: flow
status: 1
flow_name: flowName1
flows:
- fname: funcName1
params:
myKey1: flowValue1-1
myKey2: flowValue1-2
- fname: funcName2
params:
myKey1: flowValue2-1
myKey2: flowValue2-2
- fname: funcName3
params:
myKey1: flowValue3-1
myKey2: flowValue3-2
```
Connector:
```yaml
kistype: conn
cname: ConnName1
addrs: '0.0.0.0:9988,0.0.0.0:9999,0.0.0.0:9990'
type: redis
key: redis-key
params:
args1: value1
args2: value2
load: null
save:
- funcName2
```
Developers can provide Params for each defined module. Params provided in Flow will also be added to the Functions.
In the previous steps, we already read these parameters into each module's memory, but we did not expose an interface for developers.
### 8.3.1 Adding Param Retrieval Interface to Flow
First, we provide an interface for Flow to query Params:
> kis-flow/kis/flow.go
```go
type Flow interface {
// ... ...
// ... ...
// GetFuncParam retrieves a key-value pair of the default parameters for the currently executing Function in the Flow
GetFuncParam(key string) string
// GetFuncParamAll retrieves all key-value pairs of the default parameters for the currently executing Function in the Flow
GetFuncParamAll() config.FParam
}
```
Implementation:
> kis-flow/flow/kis_flow_data.go
```go
// GetFuncParam retrieves a key-value pair of the default parameters for the currently executing Function in the Flow
func (flow *KisFlow) GetFuncParam(key string) string {
flow.fplock.RLock()
defer flow.fplock.RUnlock()
if param, ok := flow.funcParams[flow.ThisFunctionId]; ok {
if value, vok := param[key]; vok {
return value
}
}
return ""
}
// GetFuncParamAll retrieves all key-value pairs of the default parameters for the currently executing Function in the Flow
func (flow *KisFlow) GetFuncParamAll() config.FParam {
flow.fplock.RLock()
defer flow.fplock.RUnlock()
param, ok := flow.funcParams[flow.ThisFunctionId]
if !ok {
return nil
}
return param
}
```
`GetFuncParam()` and `GetFuncParamAll()` retrieve a single key or all parameters respectively, but both fetch the Params for the currently executing Function.
### 8.3.2 Unit Testing
We add some parameters to each Function in `flowName1`.
> kis-flow/test/load_conf/flow-FlowName1.yml
```yaml
kistype: flow
status: 1
flow_name: flowName1
flows:
- fname: funcName1
params:
myKey1: flowValue1-1
myKey2: flowValue1-2
- fname: funcName2
params:
myKey1: flowValue2-1
myKey2: flowValue2-2
- fname: funcName3
params:
myKey1: flowValue3-1
myKey2: flowValue3-2
```
Then configure some default custom parameters for each associated Function:
> kis-flow/test/load_conf/func/func-FuncName1.yml
```yaml
kistype: func
fname: funcName1
fmode: Verify
source:
name: Official Account Douyin Mall Order Data
must:
- order_id
- user_id
option:
default_params:
default1: funcName1_param1
default2: funcName1_param2
```
> kis-flow/test/load_conf/func/func-FuncName2.yml
```yaml
kistype: func
fname: funcName2
fmode: Save
source:
name: User Order Error Rate
must:
- order_id
- user_id
option:
cname: ConnName1
default_params:
default1: funcName2_param1
default2: funcName2_param2
```
> kis-flow/test/load_conf/func/func-FuncName3.yml
```yaml
kistype: func
fname: funcName3
fmode: Calculate
source:
name: User Order Error Rate
must:
- order_id
- user_id
option:
default_params:
default1: funcName3_param1
default2: funcName3_param2
```
We also configure some Param parameters for the Connector associated with `FuncName2`:
> kis-flow/test/load_conf/conn/conn-ConnName1.yml
```yaml
kistype: conn
cname: ConnName1
addrs: '0.0.0.0:9988,0.0.0.0:9999,0.0.0.0:9990'
type: redis
key: redis-key
params:
args1: value1
args2: value2
load: null
save:
- funcName2
```
To verify that our configuration parameters can be correctly retrieved during the execution of Functions, we modified each Function and Connector business function to print their Params:
> kis-flow/test/faas/faas_demo1.go
```go
func FuncDemo1Handler(ctx context.Context, flow kis.Flow) error {
fmt.Println("---> Call funcName1Handler ----")
// ++++++++++++++++
fmt.Printf("Params = %+v\n", flow.GetFuncParamAll())
for index, row := range flow.Input() {
// Print data
str := fmt.Sprintf("In FuncName = %s, FuncId = %s, row = %s", flow.GetThisFuncConf().FName, flow.GetThisFunction().GetId(), row)
fmt.Println(str)
// Calculate result data
resultStr := fmt.Sprintf("data from funcName[%s], index = %d", flow.GetThisFuncConf().FName, index)
// Commit result data
_ = flow.CommitRow(resultStr)
}
return nil
}
```
> kis-flow/test/faas/faas_demo2.go
```go
func FuncDemo2Handler(ctx context.Context, flow kis.Flow) error {
fmt.Println("---> Call funcName2Handler ----")
// ++++++++++++++++
fmt.Printf("Params = %+v\n", flow.GetFuncParamAll())
for index, row := range flow.Input() {
str := fmt.Sprintf("In FuncName = %s, FuncId = %s, row = %s", flow.GetThisFuncConf().FName, flow.GetThisFunction().GetId(), row)
fmt.Println(str)
conn, err := flow.GetConnector()
if err != nil {
log.Logger().ErrorFX(ctx, "FuncDemo2Handler(): GetConnector err = %s\n", err.Error())
return err
}
if err = conn.Call(ctx, flow, row); err != nil {
log.Logger().ErrorFX(ctx, "FuncDemo2Handler(): Call err = %s\n", err.Error())
return err
}
// Calculate result data
resultStr := fmt.Sprintf("data from funcName[%s], index = %d", flow.GetThisFuncConf().FName, index)
// Commit result data
_ = flow.CommitRow(resultStr)
}
return nil
}
```
> kis-flow/test/faas/faas_demo3.go
```go
func FuncDemo3Handler(ctx context.Context, flow kis.Flow) error {
fmt.Println("---> Call funcName3Handler ----")
// ++++++++++++++++
fmt.Printf("Params = %+v\n", flow.GetFuncParamAll())
for _, row := range flow.Input() {
str := fmt.Sprintf("In FuncName = %s, FuncId = %s, row = %s", flow.GetThisFuncConf().FName, flow.GetThisFunction().GetId(), row)
fmt.Println(str)
}
return nil
}
```
> kis-flow/test/caas/caas_demo1.go
```go
func CaasDemoHanler1(ctx context.Context, conn kis.Connector, fn kis.Function, flow kis.Flow, args interface{}) error {
fmt.Printf("===> In CaasDemoHanler1: flowName: %s, cName:%s, fnName:%s, mode:%s\n",
flow.GetName(), conn.GetName(), fn.GetConfig().FName, fn.GetConfig().FMode)
// +++++++++++
fmt.Printf("Params = %+v\n", conn.GetConfig().Params)
fmt.Printf("===> Call Connector CaasDemoHanler1, args from funciton: %s\n", args)
return nil
}
```
Finally, we write the unit test cases:
> kis-flow/test/kis_params_test.go
```go
package test
import (
"context"
"kis-flow/common"
"kis-flow/file"
"kis-flow/kis"
"kis-flow/test/caas"
"kis-flow/test/faas"
"testing"
)
func TestParams(t *testing.T) {
ctx := context.Background()
// 0. Register Function callback businesses
kis.Pool().FaaS("funcName1", faas.FuncDemo1Handler)
kis.Pool().FaaS("funcName2", faas.FuncDemo2Handler)
kis.Pool().FaaS("funcName3", faas.FuncDemo3Handler)
// 0. Register ConnectorInit and Connector callback businesses
kis.Pool().CaaSInit("ConnName1", caas.InitConnDemo1)
kis.Pool().CaaS("ConnName1", "funcName2", common.S, caas.CaasDemoHanler1)
// 1. Load configuration files and build Flow
if err := file.ConfigImportYaml("/Users/tal/gopath/src/kis-flow/test/load_conf/"); err != nil {
panic(err)
}
// 2. Get Flow
flow1 := kis.Pool().GetFlow("flowName1")
// 3. Submit original data
_ = flow1.CommitRow("This is Data1 from Test")
_ = flow1.CommitRow("This is Data2 from Test")
_ = flow1.CommitRow("This is Data3 from Test")
// 4. Execute flow1
if err := flow1.Run(ctx); err != nil {
panic(err)
}
}
```
Navigate to the `kis-flow/test/` directory and execute:
```bash
go test -test.v -test.paniconexit0 -test.run TestParams
```
```bash
=== RUN TestParams
....
....
---> Call funcName1Handler ----
Params = map[default1:funcName1_param1 default2:funcName1_param2 myKey1:flowValue1-1 myKey2:flowValue1-2]
...
...
---> Call funcName2Handler ----
Params = map[default1:funcName2_param1 default2:funcName2_param2 myKey1:flowValue2-1 myKey2:flowValue2-2]
...
...
===> In CaasDemoHanler1: flowName: flowName1, cName:ConnName1, fnName:funcName2, mode:Save
Params = map[args1:value1 args2:value2]
...
...
===> In CaasDemoHanler1: flowName: flowName1, cName:ConnName1, fnName:funcName2, mode:Save
Params = map[args1:value1 args2:value2]
...
...
===> In CaasDemoHanler1: flowName: flowName1, cName:ConnName1, fnName:funcName2, mode:Save
Params = map[args1:value1 args2:value2]
...
...
---> Call funcName3Handler ----
Params = map[default1:funcName3_param1 default2:funcName3_param2 myKey1:flowValue3-1 myKey2:flowValue3-2]
...
...
--- PASS: TestParams (0.01s)
PASS
ok kis-flow/test 0.433s
```
As we can see, we can now correctly retrieve the Params configuration parameters at each level.
## 8.4 [V0.7] Source Code
https://github.com/aceld/kis-flow/releases/tag/v0.7
---
Author: Aceld
GitHub: https://github.com/aceld
KisFlow Open Source Project Address: https://github.com/aceld/kis-flow
Document: https://github.com/aceld/kis-flow/wiki
---
[Part1-OverView](https://dev.to/aceld/part-1-golang-framework-hands-on-kisflow-streaming-computing-framework-overview-8fh)
[Part2.1-Project Construction / Basic Modules](https://dev.to/aceld/part-2-golang-framework-hands-on-kisflow-streaming-computing-framework-project-construction-basic-modules-cia)
[Part2.2-Project Construction / Basic Modules](https://dev.to/aceld/part-3golang-framework-hands-on-kisflow-stream-computing-framework-project-construction-basic-modules-1epb)
[Part3-Data Stream](https://dev.to/aceld/part-4golang-framework-hands-on-kisflow-stream-computing-framework-data-stream-1mbd)
[Part4-Function Scheduling](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-function-scheduling-4p0h)
[Part5-Connector](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-connector-hcd)
[Part6-Configuration Import and Export](https://dev.to/aceld/part-6golang-framework-hands-on-kisflow-stream-computing-framework-configuration-import-and-export-47o1)
[Part7-KisFlow Action](https://dev.to/aceld/part-7golang-framework-hands-on-kisflow-stream-computing-framework-kisflow-action-3n05)
[Part8-Cache/Params Data Caching and Data Parameters](https://dev.to/aceld/part-8golang-framework-hands-on-cacheparams-data-caching-and-data-parameters-5df5)
[Part9-Multiple Copies of Flow](https://dev.to/aceld/part-8golang-framework-hands-on-multiple-copies-of-flow-c4k)
[Part10-Prometheus Metrics Statistics](https://dev.to/aceld/part-10golang-framework-hands-on-prometheus-metrics-statistics-22f0)
[Part11-Adaptive Registration of FaaS Parameter Types Based on Reflection](https://dev.to/aceld/part-11golang-framework-hands-on-adaptive-registration-of-faas-parameter-types-based-on-reflection-15i9)
---
[Case1-Quick Start](https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-quick-start-guide-f51)
---
| aceld |
1,893,047 | Set (One Byte Explainer) | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. Set:... | 0 | 2024-06-19T02:59:37 | https://dev.to/leaft/set-one-byte-explainer-1pi | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Set: A group of things, or nothing.
## Where you came from, where you will go.
| leaft |
1,893,042 | Day 2 of My Devops Journey: Automating Dockerized Application Deployment with GitHub Actions | Introduction: Welcome back to Day 2 of my 90-day DevOps journey! Inspired by my experience in the... | 0 | 2024-06-19T02:58:51 | https://dev.to/arbythecoder/day-2-of-my-sre-and-cloud-security-journey-automating-dockerized-application-deployment-with-github-actions-94c | githubactions, devops, docker, beginners | **Introduction:**
Welcome back to Day 2 of my 90-day DevOps journey! Inspired by my experience in the @SheCodeAfrica mentorship program, today we'll dive into automating the deployment of a Dockerized web application using GitHub Actions. This guide aims to simplify CI/CD processes, making application deployments more efficient and reliable.
**What You'll Learn:**
- Building a Dockerized Node.js application.
- Setting up GitHub Actions for automating Docker image builds and deployments.
- Overcoming challenges encountered during setup.
**Prerequisites:**
Before we begin, ensure you have:
- Basic understanding of Docker and GitHub (refer to my previous articles if needed).
- A GitHub account for repository hosting.
- Docker installed on your local machine for testing.
**Step-by-Step Guide: Automating Dockerized Application Deployment with GitHub Actions**
**Step 1: Prepare Your Dockerized Application**
1. **Create Your Application**:
- Start by creating a simple Node.js application. Navigate to your terminal and execute:
```bash
mkdir my-docker-app
cd my-docker-app
```
2. **Initialize Git Repository**:
- Initialize Git for version control:
```bash
git init
```
3. **Write Your Application Code**:
   - Initialize the project with `npm init -y` so a `package.json` exists (the Dockerfile below copies `package*.json`, and the image build fails without it), then create a basic Node.js application in an `app.js` file:
**app.js**:
```javascript
const http = require('http');
const hostname = '0.0.0.0';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello, World!\n');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
```
4. **Create a Dockerfile**:
- Define a `Dockerfile` in the project root:
**Dockerfile**:
```dockerfile
FROM node:14
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD ["node", "app.js"]
```
5. **Test Locally**:
- Build and run your Docker container locally:
```bash
docker build -t my-docker-app .
docker run -p 3000:3000 my-docker-app
```
- Open `http://localhost:3000` in your browser to verify the application works.
**Step 2: Set Up GitHub Repository**
1. **Create a GitHub Repository**:
- Visit [GitHub](https://github.com/) and create a new repository (`my-docker-app`).
2. **Push Your Code**:
- Push your local Git repository to GitHub:
```bash
git remote add origin <repository-url>
git add .
git commit -m "Initial commit"
git branch -M main
git push -u origin main
```
**Step 3: Configure GitHub Actions Workflow**
1. **Create a Workflow File**:
- Create the `.github/workflows` directory if it doesn't exist, then add `docker-build-deploy.yml` inside it:
**docker-build-deploy.yml**:
```yaml
name: Docker Build & Deploy
on:
push:
branches:
- main
jobs:
build-and-deploy:
runs-on: ubuntu-latest
steps:
- name: Checkout Repository
uses: actions/checkout@v2
- name: Login to Docker Hub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKER_USERNAME }}
password: ${{ secrets.DOCKER_PASSWORD }}
- name: Build and Push Docker Image
run: |
docker build -t ${{ secrets.DOCKER_USERNAME }}/my-docker-app:latest .
docker push ${{ secrets.DOCKER_USERNAME }}/my-docker-app:latest
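      # Note: the step below runs on the ephemeral GitHub-hosted runner, so it
      # only smoke-tests the image; for a real deployment, use a self-hosted
      # runner or deploy the pushed image to your own server.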
- name: Deploy Docker Container
run: |
docker run -d -p 80:3000 --name my-docker-app ${{ secrets.DOCKER_USERNAME }}/my-docker-app:latest
```
2. **Add GitHub Secrets**:
- Go to your GitHub repository > Settings > Secrets.
- Add Docker Hub credentials (`DOCKER_USERNAME` and `DOCKER_PASSWORD`).
**Challenges Faced and Solutions:**
- **Port Already Allocated:** During testing, I hit a "port already allocated" error because a previous Docker container was still using the same port. This was resolved by stopping the previous container (`docker stop <container-id>`) or specifying a different port when starting the new one.
**Recommended Resources for Beginners:**
- **Docker Basics:** [Get Started with Docker](https://www.docker.com/get-started)
- **GitHub Actions:** Explore [GitHub Actions Documentation](https://docs.github.com/en/actions)
**Conclusion:**
Congratulations on completing Day 2 of our DevOps journey! You've automated Dockerized application deployment with GitHub Actions, improving your CI/CD pipeline skills. Stay tuned for more articles as the journey continues.
See you on Day 3!
| arbythecoder |
1,893,046 | How a Technical Content Marketing Agency Works | This article was originally published as a technical content marketing resource at SyntaxPen. In... | 27,180 | 2024-06-19T02:58:09 | https://syntaxpen.com/resources/how-a-technical-content-marketing-agency-works | contentwriting, marketing, seo, devtools | This article was originally published as [a technical content marketing resource at SyntaxPen.](https://syntaxpen.com/resources/how-a-technical-content-marketing-agency-works)
In today's competitive content landscape, technical content marketing agencies play a pivotal role in helping businesses reach and engage their target audiences. These specialized agencies blend deep technical knowledge with marketing expertise to craft compelling content that resonates with developers, engineers, and other technical professionals.
By focusing on creating high-quality, accurate, and valuable content, these agencies build trust and authority within developer communities. This article explores the unique challenges of technical content marketing and how partnering with an agency can help your business.
## How Content Marketing Works
Content marketing revolves around creating content to bring attention to your business. In this article, we'll mostly discuss written articles, but content marketing extends into videos, podcasts, and more!
**When you're doing content marketing well, your ideal customer asks a question and arrives at your site for an answer.**
Typically they'll ask their question in the form of a search query to a search engine like Google, but as you grow your audience you will attract traffic through other mediums like word of mouth or email lists.
When readers arrive at your site as a result of a search, they should read educational articles that quickly solve their problems. Giving readers this free, up-front value is a great way to build a reputation as a trusted source in your niche.
After you attract customers, your content educates them, then you have a chance to convert some subset of them into paying customers.
Not everyone who arrives at a particular article on your site will find that it solves their problem, and that's okay! Not everyone who has a problem solved by a piece of your content will convert into a customer, and that's okay too.
Content marketing is about sustainably building a reputation over the long term with traffic that has a heavy concentration of potential customers. You'll find that as your posts get more and more readers, you'll naturally have more and more inbound leads.
## What Makes Technical Content Hard
Technical content marketing is more than just a niche in content marketing. Much of the internet is full of content marketing written with executives in mind, as many B2B product purchasing decisions end up being made mostly by executives. Technical products like developer tools are unique in this way, as developers have a huge voice in product purchasing decisions that affect them. [Marketing technical products effectively](https://syntaxpen.com/resources/how-to-market-developer-tools) requires gaining the trust of developers.
Here's the hard part. **Traditional content marketing isn't effective for developers**. Content written for software developers, software engineers, or technical leadership has to be technically accurate. Technical audiences will scrutinize technical articles, and they'll quickly lose interest if they feel the content lacks value or accuracy.
When planning content strategy for technical audiences, you'll quickly find that you have to write about what software engineers are actually interested in. Most teams will arrive at the conclusion that **writing about real, specific, and novel problems in software is the best way to give real value to technical audiences**.
### Developers Trust Content Written by Developers
Marketing to developers requires building a brand with developers themselves, which is hard for professional writers or traditional content agencies.
**The best content for software developers is written by software developers**.
Who better to explain a technical concept or walk a reader through a programming tutorial than someone who is actually in the intended audience?
This presents another problem - **great programmers are not all great writers**. In fact, this is a pretty narrow set of people. [Writing great articles for programmers](https://syntaxpen.com/resources/how-to-make-great-content-for-developers) is not the same skill as programming. In the next section, we'll compare finding this talent in your business, leaning on freelancers, and working with a specialized agency.
## Doing Technical Content In-House vs Freelancers vs Agencies
Now that we've discussed what technical content marketing is and what makes it hard, you're probably wondering - **How do I get technical content on my blog?** Do I need to hire someone for that?
When it comes to producing technical content for your business, you have a few options:
- You do it in-house with talent on your payroll
- You hire a freelancer
- You work with an agency
If your business is in building products that developers are interested in, you might have someone on your team who would be able to help! Most teams start producing technical content this way but find it challenging to scale. Every hour you ask a developer to write for your blog is an hour they're not spending working on the rest of their job. Still, leaning on your existing team to write technical content is a great strategy for producing authentic, valuable, and high-quality articles without directly increasing your costs.
You could also expand your in-house team to include someone dedicated to technical content marketing. Sometimes developer advocates fill this role, and some companies even have dedicated technical writers.
Freelancers are a great option when you want to scale your technical content operation beyond your own team. There are tons of incredible freelance technical content writers out there writing great stuff - [I started as a freelance technical writer myself](https://jeffmorhous.com/technical-writing/)! It's hard to find a great freelancer with consistent availability in any industry, and marketplaces like Upwork can be unpredictable.
Technical Content Marketing Agencies help you produce this sort of content without so much involvement from your team. Typically, these businesses have processes and a team in place that make them more efficient. Many agencies will do everything from planning to writing to editing and even publishing. Having a full-service team working with you lets you treat technical content marketing as a product your business is purchasing instead of a task for your team.
## Different Technical Content Services
The most obvious service you can expect from a technical writing business is **article writing**. Typically this is company announcements, think pieces, articles that explore technical concepts, or even programming tutorials. This is packaged differently depending on who you're working with - Here at SyntaxPen our [full-service technical content marketing](https://syntaxpen.com/technical-content-marketing) is typically done in engagements of at least one quarter and includes a collaborative content planning process.
You might need more than just article writing for your business. Here is a more comprehensive list of services that technical content teams offer:
- Article writing
- Content strategy/planning
- Technical Editing ([SyntaxPen offers this as a standalone service!](https://syntaxpen.com/pricing))
- Website sales copy
- Ad copy
- Product documentation
- Example projects
- Technical Reviewing ([SyntaxPen offers this as a standalone service!](https://syntaxpen.com/pricing))
## Conclusion
By leveraging a technical content team's expertise in creating accurate, valuable, and compelling articles, you can build trust and authority with developers. Whether you decide to produce content in-house, hire freelancers, or work with a specialized agency, the key is to provide real value to your readers.
If you're looking for professional assistance with editing, reviewing, or full-service writing, SyntaxPen is here to help. Our team is the perfect intersection of software engineering expertise and writing ability, and we'd love to partner with you. | jeffmorhous |
1,893,044 | from Issue in the editor | Peace be upon you everyone , I am trying to start a sample project... | 0 | 2024-06-19T02:54:26 | https://dev.to/nouralhudamali/from-issue-40jc | help | Peace be upon you, everyone. I am trying to start a sample project for learning purposes before moving on to the real project, which I have to deliver within at most 24 hours.
The issue is in the form. I thought it was related to the path, but finding the solution still needs a lot of time, and I have already wasted a lot of time solving other issues on my own. I am a self-learner of Python 3; it has been only a few days since I started coding in it. I am also using this editor for the first time, so I am not surprised to be facing issues and errors, but I am running out of time. Thank you for reading. | nouralhudamali |
340,256 | SyncIn-Bring music closer to you | My Final Project My final year project is SyncIn- Brings music closer to you. This is a pr... | 0 | 2020-05-20T17:49:30 | https://dev.to/mmudit30/syncin-bring-music-closer-to-you-5ffp | octograd2020 |
## My Final Project
My final year project is SyncIn- Brings music closer to you.
This is a project which is an emotion or a mood-based music player. In this project, my team and I have created an extension that scans or reads the mood of the user or client and accordingly displays the music playlist to the user.
This application not only uses inbuilt face recognition technology of the camera but also diminishes the need for switching in between music apps to get good quality music.
The name of the project SyncIn itself suggests that our moods are incorporated or synced with the music which will be played.
This project tends to be effective in its performance and also tends to be visually interactive and appealing to the user.
## Link to Code
{% github mmudit30/SyncIn %}
## How I built it
This project is built by using techniques of face recognition, emotion detection, and music incorporation.
These techniques especially face recognition and emotion detection uses technologies like Open CV and TensorFlow.
The whole project is built on Python.
The project is divided into three phases:
* 1st phase- Emotion detection
The initial phase mainly deals with face recognition and emotion detection. For this, we collected a sizable dataset of faces by requesting data from various universities, then trained a face recognition model on it using MobileNet v2.
Then we collected sample data for each emotion. For now, three emotions are evaluated: Angry, Sad/Neutral, and Happy.
Each emotion's dataset consisted of labeled pictures of that emotion, which we used to train the classifier.
The detected emotion is chosen by confidence score: the emotion with the highest confidence is taken as the user's final emotion.
* 2nd phase- Music Incorporation
In the second phase, we created playlists of songs for each of the prescribed emotions. Each playlist was stored as a CSV file listing '.mp3' tracks.
Once a face is detected, the recognized emotion is mapped to its emotion-based playlist, and a random song from that playlist is played.
We used the pygame library for Python to build the music player; the user can also pause, resume, stop, and exit the extension.
## Additional Thoughts
For the final phase, we plan on enhancing the visual interaction and appeal of the project and make an extension for the browser. For now, we are running the application in a command terminal. We plan to move to a GUI application and add color to every song being played so as to contrast with the user's mood.
This project is a revolutionary phase as it diminishes the distance between music and technology hence enhancing and taking a step forward in the world of automation and human comfort. So, in any case, searching for a music playlist for yourself maybe a thing in the past.
| mmudit30 |
1,893,043 | Navigating Software Resiliency: A Comprehensive Classification | Introduction In today’s digital era, software systems must be robust and resilient to meet... | 0 | 2024-06-19T02:53:47 | https://dev.to/vipra_tech_solutions/navigating-software-resiliency-a-comprehensive-classification-3m1k | resiliency, reliability, webdev, beginners | ## Introduction
In today’s digital era, software systems must be robust and resilient to meet the demands of users and withstand various challenges. Software resiliency ensures that a system can handle and recover from failures gracefully, maintaining functionality even under adverse conditions. This comprehensive guide will introduce you to the key concepts and categories of software resiliency, setting the stage for deeper exploration in subsequent articles.
## What is Software Resiliency?
Software resiliency refers to the ability of a system to recover quickly from failures and continue to function effectively. This involves not just avoiding failures, but also being prepared to handle them when they occur. A resilient system can maintain service continuity, often in a degraded state, without significant impact on the end-users.
## The Importance of Software Resiliency
- **Business Continuity**: Ensures that critical services remain available even during failures.
- **Customer Satisfaction**: Minimizes downtime and maintains a seamless user experience.
- **Operational Efficiency**: Reduces the time and effort required to recover from failures.
- **Cost Savings**: Prevents revenue loss and reduces recovery costs associated with system outages.
## High-Level Classification of Software Resiliency Patterns and Practices
To build resilient systems, it's essential to understand various patterns and practices. These can be broadly classified into several categories:
### Fault Detection and Handling
Detecting and handling faults promptly is essential to minimize the impact of failures.
- **Health Checks**: Continuously checks the health of system components.
- **Timeout**: Sets limits on how long to wait for operations to complete (see the sketch after this list).
- **Circuit Breaker**: Prevents calls to a failing service to avoid cascading failures.
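As a minimal illustration of the timeout pattern (the health-check URL is a placeholder), here is a hedged Go sketch that gives up on a slow dependency instead of hanging forever:

```go
package main

import (
	"context"
	"errors"
	"fmt"
	"net/http"
	"time"
)

// callWithTimeout fails fast instead of letting a slow dependency hang us.
func callWithTimeout(url string) error {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		if errors.Is(err, context.DeadlineExceeded) {
			return fmt.Errorf("dependency timed out: %w", err)
		}
		return err
	}
	defer resp.Body.Close()
	return nil
}

func main() {
	// Placeholder health-check endpoint.
	fmt.Println(callWithTimeout("https://example.com/health"))
}
```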
### Fault Recovery
Strategies for recovering from faults ensure that systems can maintain service continuity.
- **Retry**: Implements retry logic for transient failures (a small sketch follows this list).
- **Fallback**: Provides alternative mechanisms when primary methods fail.
- **Autoscaling**: Adjusts the number of running instances based on load.
- **Graceful Degradation**: Allows a system to continue operating in a reduced capacity.
- **Self-Healing**: Automatically detects and recovers from faults.
- **Warmup**: Gradually increases load on new instances to prevent sudden failures.
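For example, here is a hedged sketch of the retry pattern with exponential backoff and jitter; it is a simplified illustration, not a production-grade library:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs op up to attempts times, doubling the delay after each failure.
func retry(attempts int, baseDelay time.Duration, op func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = op(); err == nil {
			return nil
		}
		// Exponential backoff with jitter to avoid synchronized retries.
		backoff := time.Duration(1<<i) * baseDelay
		jitter := time.Duration(rand.Int63n(int64(baseDelay)))
		time.Sleep(backoff + jitter)
	}
	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
}

func main() {
	calls := 0
	err := retry(4, 50*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("transient failure") // fails twice, then succeeds
		}
		return nil
	})
	fmt.Println("err:", err, "calls:", calls)
}
```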
### Fault Prevention
Preventing faults before they occur is key to maintaining system stability.
- **Multiple Instances**: Ensures redundancy by running multiple instances.
- **Service Level Objective (SLO)**: Defines acceptable levels of service reliability and performance.
- **Static Stability**: Ensures the system remains stable under expected load conditions.
- **Rate Limiting**: Controls the rate of requests to prevent system overload (see the sketch below).
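As an illustration, here is a hedged sketch of a tiny token-bucket rate limiter. Real services would usually reach for a hardened library, but the mechanics are the same:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const burst = 5
	tokens := make(chan struct{}, burst)
	for i := 0; i < burst; i++ {
		tokens <- struct{}{} // start with a full bucket
	}

	// Refill one token every 200ms (about 5 requests/second sustained).
	go func() {
		for range time.Tick(200 * time.Millisecond) {
			select {
			case tokens <- struct{}{}:
			default: // bucket already full, drop the token
			}
		}
	}()

	// Eight back-to-back requests: the first five spend the burst, the rest are rejected.
	for i := 1; i <= 8; i++ {
		select {
		case <-tokens:
			fmt.Println("request", i, "allowed")
		default:
			fmt.Println("request", i, "rejected (rate limited)")
		}
	}
}
```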
### Fault Isolation and Containment
Fault isolation and containment are crucial to prevent a failure in one part of the system from affecting the entire system.
- **Bulkhead**: Isolates different parts of a system to prevent cascading failures.
- **Multi-AZ (Availability Zone)**: Distributes applications across multiple availability zones within a region.
- **Multi-Region**: Distributes applications across different geographic regions for enhanced fault tolerance.
### Resiliency Testing
Testing is essential to ensure that systems can handle and recover from failures.
- **Chaos Engineering**: Intentionally introduces failures to test system resiliency.
- **Load Testing**: Simulates high load to ensure the system can handle peak traffic.
- **Stress Testing**: Tests the system's ability to cope with extreme conditions.
- **Failover Testing**: Simulates failures to ensure failover mechanisms work correctly.
### Architectural Patterns for Resiliency
Designing systems with resiliency in mind from the ground up is critical.
- **Microservices Architecture**: Designs systems as a collection of loosely coupled services.
- **Event-Driven Architecture**: Uses events to communicate between components.
- **CQRS (Command Query Responsibility Segregation)**: Separates read and write operations to optimize performance.
### Operational Practices
Operational practices play a vital role in maintaining resilient systems.
- **Continuous Monitoring**: Keeps track of system performance and health in real-time.
- **Incident Response Plans**: Prepares procedures to quickly address and recover from failures.
- **Disaster Recovery Plans**: Defines strategies for recovering from catastrophic failures.
- **Regular Maintenance**: Ensures the system is regularly updated and maintained.
## Conclusion
Building resilient software systems is not just about preventing failures but also about being prepared to handle them gracefully when they occur. By understanding and implementing these patterns and practices, you can ensure your systems are robust, reliable, and ready to meet the demands of today’s digital landscape.
In the upcoming articles, we will dive deeper into each of these classifications, exploring specific patterns, real-world examples, and practical implementation tips. Stay tuned to master the art of building resilient software systems! | vipra_tech_solutions |
1,891,960 | Why Coding Is Scary And How To Learn Better | Learning to program is hard. I am still learning to program. I have been at this for 9 years. I am... | 0 | 2024-06-19T02:51:00 | https://dev.to/thekarlesi/why-coding-is-scary-and-how-to-learn-better-6e0 | webdev, beginners, programming, html | Learning to program is hard.
I am still learning to program.
I have been at this for 9 years. I am still learning but I feel like an idiot all the time.
When you first start programming you will realize that the language is scary, the environments are scary and the people are scary.
In online spaces, especially Twitter (X), you will find people throwing around crazy jargon.
## Bad beginner advice
There is a lot of advice online.
Great programmers will tell you to start programming by writing a simple game.
They will say things like, "You want to program? Just start by writing a simple game like Tetris or Tic-Tac-Toe."
If you have ever tried to write Tetris, it is not simple.
Another person will tell you, "Start with C++. That is what they use in the industry. Especially in the gaming community, that is what they use to write gaming engines."
I have no problem with C++, but beginning with it is madness.
Without a community that guides you along the way, you might get lost easily. That is why I came up with the [Uplifters - Web Development Community.](https://karlgusta.gumroad.com/l/nwbym)
## How to learn better
Things you need to know when you are starting your coding journey is that:
- Coding has only about eight main concepts.
You get them and you are done.
The concepts are universal across languages.
For those who know multiple languages, they know this is true.
Your first language will be hard, the second and the third, you will start to see patterns.
By the fourth or fifth language, you will be given a project and you can even do it in a weekend.
- Write out the concepts first, then convert to code later.
If you are lost in coding, it is almost always because you shouldn't be coding yet.
Write out the concepts first, then convert to code later.
It is the same concept as an architect constructing a building.
They come up with the designs first, and from the designs they construct the building.
- Most beginners think they don't understand what code to write.
The real problem is they don't really understand the problem they are trying to solve.
They try to figure out how to do it instead of what to do.
Try to understand the problem first.
That's it.
Happy Coding!
P.S. [The 2 Hour Web Developer](https://karlgusta.gumroad.com/l/eofdr) is a complete guide from zero to becoming a web expert. | thekarlesi |
1,890,571 | The key of OOP Principles | As time passes, the way we develop software changes. It comes with new programming languages,... | 0 | 2024-06-19T02:47:56 | https://dev.to/terrerox/the-key-of-oop-principles-12bm | As time passes, the way we develop software changes. It comes with new programming languages, frameworks, libraries, and now paradigms. Object-oriented programming came to change how we build our apps, providing a more maintainable, reusable, and scalable way to build. But in order to do that, you must know all the basics. That's why I am going to teach the four principles of OOP.
### Abstraction
Abstraction essentially hides complex implementations and only shows the necessary features of an object.
Within this principle, there are two key topics we must know about:
- **Abstract Class**: Can't be instantiated and can have abstract methods that must be overridden in the implementing class.
```csharp
public abstract class AnimeCharacter {
public abstract string SpecialAttack();
public virtual string Goal() {
return "Save the planet";
}
}
public class Goku : AnimeCharacter {
public override string SpecialAttack() {
return "Kamehameha!!";
}
}
public class Gojo : AnimeCharacter {
public override string SpecialAttack() {
return "Domain expansion";
}
public override string Goal() {
return "Defeat Sukuna";
}
}
```
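To see the default versus overridden `Goal()` in action, here is a short usage sketch:

```csharp
AnimeCharacter goku = new Goku();
AnimeCharacter gojo = new Gojo();

Console.WriteLine(goku.SpecialAttack()); // Outputs "Kamehameha!!"
Console.WriteLine(goku.Goal());          // Outputs "Save the planet" (inherited default)
Console.WriteLine(gojo.Goal());          // Outputs "Defeat Sukuna" (overridden)
```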
So, what's this `virtual` keyword 🤔? It allows setting default implementations for methods that can be overridden.
- **Interfaces**: A contract that the implementing class must define.
```csharp
public interface IMovable {
string Move();
}
public class Goku : IMovable {
public string Move() {
return "Fly";
}
}
public class Luffy : IMovable {
public string Move() {
return "Run";
}
}
```
### Encapsulation
Encapsulation is a shield that prevents a piece of code from being accessed outside its scope.
With access modifiers, you can restrict access to your code:
- **Public**: The code has no restrictions.
- **Private**: The code can only be accessed within the class itself.
- **Protected**: Only the superclass and subclass can access the code.
- **Internal**: The code has restrictions within the same assembly/project.
```csharp
public class AnimeCharacter {
// Private field
private string secretTechnique;
// Public property
public string Name { get; set; }
// Protected property
protected string Rank { get; set; }
// Public method to set the private field
public void SetSecretTechnique(string technique) {
secretTechnique = technique;
}
// Public method to get the private field
public string GetSecretTechnique() {
return secretTechnique;
}
}
public class Ninja : AnimeCharacter {
// Public method to access the protected property
public void SetRank(string rank) {
Rank = rank;
}
// Public method to get the protected property
public string GetRank() {
return Rank;
}
}
var naruto = new Ninja();
naruto.Name = "Naruto Uzumaki";
naruto.SetSecretTechnique("Shadow Clone Jutsu");
naruto.SetRank("Genin");
Console.WriteLine(naruto.Name); // Outputs "Naruto Uzumaki"
Console.WriteLine(naruto.GetSecretTechnique()); // Outputs "Shadow Clone Jutsu"
Console.WriteLine(naruto.GetRank()); // Outputs "Genin"
```
### Inheritance
Inheritance allows methods and properties to be inherited from another class.
- **Superclass or Parent Class**
```csharp
public class Minato {
public string HairColor = "yellow";
public string Attack() {
return "Rasengan!!";
}
}
```
- **Subclass or Child Class**
```csharp
public class Naruto : Minato {
    public string FavoriteFood = "ramen";
    // Attack() and HairColor are inherited from Minato, so there is no need to redefine them
}

var naruto = new Naruto();
Console.WriteLine(naruto.HairColor); // Outputs "yellow" (inherited field)
Console.WriteLine(naruto.Attack());  // Outputs "Rasengan!!" (inherited method)
```
### Polymorphism
Polymorphism allows classes to have different implementations of methods that are called by the same name.
It comes in two forms:
- **Compile-time (method overload)**
```csharp
public class Zoro {
// Method overload for attack
public string Attack(int swords) {
return $"Attacks with {swords} swords.";
}
// Method overload for a named attack
public string Attack(string technique) {
return $"Uses the technique {technique}.";
}
}
var zoro = new Zoro();
Console.WriteLine(zoro.Attack(3)); // Outputs "Attacks with 3 swords."
Console.WriteLine(zoro.Attack("Santoryu Ogi")); // Outputs "Uses the technique Santoryu Ogi."
```
- **Run-time (method override)**
```csharp
public class AnimeCharacter {
public virtual string SpecialMove() {
return "Basic Attack";
}
}
public class Ichigo : AnimeCharacter {
public override string SpecialMove() {
return "Getsuga Tensho";
}
}
public class Sasuke : AnimeCharacter {
public override string SpecialMove() {
return "Chidori";
}
}
AnimeCharacter ichigo = new Ichigo();
AnimeCharacter sasuke = new Sasuke();
Console.WriteLine(ichigo.SpecialMove()); // Outputs "Getsuga Tensho"
Console.WriteLine(sasuke.SpecialMove()); // Outputs "Chidori"
```
### Conclusion
Now you know the four principles of Object-Oriented Programming: Encapsulation, which allows you to restrict access by scope; Abstraction, which shows valuable features while hiding complex implementations; Inheritance, which enables you to reuse fields and methods; and Polymorphism, which allows you to change behavior at compile time and run time. Happy coding! | terrerox | |
1,893,040 | Mastering Git Rebase: Streamlining Your Commit History | As a developer, you often work on feature branches that need to integrate changes from the main... | 0 | 2024-06-19T02:47:56 | https://dev.to/vyan/mastering-git-rebase-streamlining-your-commit-history-3ce4 | webdev, beginners, git, react | As a developer, you often work on feature branches that need to integrate changes from the main branch. A common approach is to merge the main branch into your feature branch, but this can clutter the commit history with numerous merge commits. Git rebase offers a cleaner alternative, allowing you to maintain a linear commit history. In this blog, we'll delve into how to effectively use git rebase and the scenarios where it shines.
## Why Use Git Rebase?
Git rebase is particularly useful for maintaining a clean, linear commit history. When working on a feature branch, you may need to incorporate changes from the main branch to continue your work. Merging can solve this but often leads to a cluttered history filled with merge commits. Rebase, on the other hand, applies your commits on top of the latest changes from the main branch, resulting in a tidy, linear history.
### Benefits of Git Rebase:
1.**Clean History**: Avoids unnecessary merge commits.
2.**Simplified Navigation**: Easier to trace the history of a feature or bug.
3.**Collaborative Efficiency**: Makes it easier for team members to understand the project history.
## How to Use Git Rebase
### Scenario: Rebasing a Feature Branch
Let's consider a scenario where you're working on a feature branch and your teammates have merged new changes into the main branch. You need these changes to continue working on your feature.
#### Step-by-Step Guide:
1.**Switch to Your Feature Branch**:
```bash
git checkout feature-branch
```
2.**Rebase onto the Main Branch**:
```bash
git rebase main
```
This command temporarily sets aside your commits, integrates the latest commits from the main branch, and then reapplies your commits one by one on top of the main branch.
#### Example Workflow:
1.Your feature branch `feature-branch` starts from a certain commit on the main branch.
2.The main branch receives new commits from your teammates.
3.Instead of merging the main branch into your feature branch, you rebase your feature branch onto the main branch:
```bash
git rebase main
```
Rebasing modifies the commit history by reapplying your commits on top of the latest main branch commit, effectively updating your feature branch with the new changes without creating merge commits.
### Handling Merge Conflicts
During a rebase, conflicts may arise if the changes in the main branch conflict with your feature branch commits. Here's how to handle them:
1.**Identify Conflicts**:
Git will pause the rebase and indicate the files with conflicts.
2.**Resolve Conflicts**:
Open the conflicted files in your text editor, resolve the conflicts, and then stage the resolved files:
```bash
git add conflicted-file
```
3.**Continue the Rebase**:
```bash
git rebase --continue
```
If more conflicts occur, repeat the process until the rebase is complete.
### Best Practices and Considerations
1.**Avoid Rebasing Pushed Commits**:
Never rebase commits that have already been pushed to a shared repository. This can lead to mismatched commit hashes and potential data loss.
2.**Check Branch Status**:
Use the following command to ensure your branch has not been pushed:
```bash
git branch -r
```
3.**Interactive Rebase**:
For more control, use interactive rebase to edit, reorder, or squash commits:
```bash
git rebase -i main
```
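Running this opens a todo list in your editor. A hypothetical todo list (the hashes and messages here are made up) might look like the following, where changing `pick` to `squash` or `reword` rewrites those commits:
```
pick a1b2c3d Add login form
squash d4e5f6a Fix typo in login form
reword 9f8e7d6 Add login validation
```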
### Example: Real-Life Rebase with Conflicts
Imagine you're on a branch `feature-login` with three commits. The main branch has two new commits that you need. Here's how a typical rebase with conflicts might look:
1.**Start the Rebase**:
```bash
git rebase main
```
2.**Resolve Conflicts**:
If a conflict arises during the first commit, fix it, stage the changes, and continue:
```bash
git add conflicted-file
git rebase --continue
```
3.**Repeat for Subsequent Conflicts**:
Continue resolving and staging any conflicts until the rebase is complete.
4.**Final Check**:
Verify that the rebase is complete:
```bash
git status
```
## Conclusion
Git rebase is a powerful tool for maintaining a clean and manageable commit history. By rebasing your feature branches onto the main branch, you avoid cluttering the history with merge commits, making it easier to navigate and understand. However, it's crucial to handle rebasing carefully, especially with pushed commits, to avoid potential issues.
For more advanced rebasing techniques, including interactive rebase, stay tuned for future posts.
| vyan |
1,893,039 | Measure and optimize your Flutter app size | When creating mobile apps, a competent mobile engineer must consider the applications' quality as... | 0 | 2024-06-19T02:43:21 | https://dev.to/tentanganak/measure-and-optimize-your-flutter-app-size-1nde | flutter, tutorial, android, ios |
When creating mobile apps, a competent mobile engineer must consider the application's quality as well as its functionality to ensure that users have a positive experience. This goes beyond simply producing a product that functions as intended.
Some of the things we need to pay attention to are performance, application size, stability, security, compatibility, scalability, and maintainability. In this post, we'll talk about one component of quality: application size. Specifically, we'll go over how to measure the size of our app and optimize it for the end user.
---
## 🛠 Measuring Flutter app size
To begin analyzing and measuring the size of our Flutter applications, we can use Flutter's built-in analysis by passing the `--analyze-size` flag when building. In this example, I'm going to analyze an Android App Bundle, so the command is:
```
flutter build appbundle --analyze-size --target-platform=android-arm64
```
Let it run until it's done, and we'll get the app size breakdown for our Android App Bundle.

The tool displays a high-level summary of the size breakdown in the terminal and leaves behind a `*-code-size-analysis_*.json` file. For a deeper analysis, we can load that generated JSON file into Dart DevTools. To do that, first activate and run Flutter DevTools with the following command:
```
flutter pub global activate devtools; flutter pub global run devtools
```
Once DevTools is running, we can open it on localhost at http://127.0.0.1:9100/.

After that, go to the "App Size Tooling" section and import the JSON file we generated previously.

Here we can see the size distribution of our application in detail, which makes it easy to determine which parts to focus on when optimizing app size.

---
## 📊 Optimizing Flutter app size
Once we've measured our application, we can proceed to optimization. With the details from the measurement, we can focus on the parts that contribute significant size.

In this example application, we can see that the largest contributor is lib, followed by assets, and finally dex.
**Optimizing Dex File**
A DEX file is a compiled file format for the Dalvik Virtual Machine, the virtual machine Android uses to run applications. DEX stands for Dalvik Executable, and these files have a .dex extension. They contain compiled code written in the Java programming language that can be executed on the Android platform. In the context of Flutter, while your primary code is in Dart, any integration with Android's native functionality will involve DEX files for the Java/Kotlin components.
On applications that aren't too complex and don't use a lot of third-party libraries, dex files should not be that big. From there, we can optimize dex files in our application. There are several ways to reduce the size of our DEX files, such as:
**1. ProGuard/R8 Obfuscation and Shrinking**
ProGuard and R8 are tools provided by Android to help shrink, obfuscate, and optimize your code. R8 is the default compiler for Android that includes ProGuard’s capabilities.
In your `gradle.properties` file, ensure ProGuard or R8 is enabled:
```properties
android.useProguard=true
```
Enable code shrinking and obfuscation in `build.gradle`:
```gradle
buildTypes {
release {
minifyEnabled true
proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.pro'
}
}
```
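For reference, a `proguard-rules.pro` along these lines was commonly suggested for Flutter projects in older Flutter documentation; newer toolchains bundle sensible defaults, so treat this as a starting point to verify against your Flutter version rather than a requirement:
```
# Flutter wrapper keep rules (from older Flutter docs; verify against your Flutter version)
-keep class io.flutter.app.** { *; }
-keep class io.flutter.plugin.** { *; }
-keep class io.flutter.util.** { *; }
-keep class io.flutter.view.** { *; }
-keep class io.flutter.** { *; }
-keep class io.flutter.plugins.** { *; }
```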
**2. Enable Multidex**
MultiDex facilitates the separation of code into smaller modules. This can be combined with App Bundles or Split APKs to optimize which parts of the app are downloaded and installed on the user’s device. For instance, if parts of your app are only needed by certain users, these can be separated into different DEX files, potentially reducing the download size for users who don’t need all features.
In your `build.gradle` file, ensure multiDexEnabled is enabled:
```gradle
android {
defaultConfig {
multiDexEnabled true
}
}
dependencies {
implementation 'androidx.multidex:multidex:2.0.1'
}
```
**3. Split APKs**
In your `build.gradle` file, ensure you include different ABIs (Application Binary Interfaces).
```gradle
android {
splits {
abi {
enable true
reset()
include 'armeabi-v7a', 'arm64-v8a', 'x86', 'x86_64'
universalApk false
}
}
}
```
After applying these optimizations, we can rerun the build analysis on our application to see the difference.

Here you can see that after implementation, our dex files experienced a significant decline of about 59.57% from 9.4 MB to 3.8 MB.
**Optimizing Assets File**
Next are assets. Assets are often the cause of application bloat. A common culprit is inappropriate image assets, such as images at a far larger resolution than the application actually needs.
In this example, we assume all the images are already at the dimensions the application needs. So what can we do to optimize our asset files? On closer inspection, the image assets used in this project are jpg and png files. We can switch to a more recent format that compresses better: WebP (a sample conversion command follows the list below).
WebP is designed to provide better image compression with an image quality equivalent to or better than other image formats.
WebP Advantages:
- Better Compression Quality:
WebP typically provides smaller file sizes with equal or better image quality compared to other image formats such as JPEG and PNG.
- Transparency:
WebP supports alpha transparency, which allows it to be a good alternative to the PNG image format.
- Animation
WebP supports animation, making it a good alternative to the GIF image format.
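For the conversion itself, Google's `cwebp` command-line tool can be used (assuming it is installed; the quality value and paths below are just examples):
```
cwebp -q 80 assets/images/logo.png -o assets/images/logo.webp
```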
After changing the image format in our application, we can rebuild and analyze the size again to see how much our assets shrink.
Before:

After:

After we checked the analysis tools again, it turned out the reported asset size increased rather than decreased. Don't worry: WebP images can appear larger when measured with the `--analyze-size` tooling. This might seem counterintuitive, since one of the main advantages of WebP is its superior compression compared to image formats like PNG or JPEG.
The size-analysis tooling may interpret WebP images differently than other image formats. It might consider the uncompressed or maximum size of the image when calculating the size, leading to the appearance of a larger size.
So how do we know whether our change actually reduced the size users download? Here we can take advantage of Google Play's app size analysis, since Google Play provides app download size breakdowns.



Here you can see that after implementation, our assets files experienced a significant decline of about 48.67% from 12 MB to 6.16 MB.
---
## 🌟 Conclusion
By measuring our app's size, we can figure out its size distribution, which makes it easy to focus our optimization efforts on the parts identified in the breakdown.
---
## 📄 References
- [Measuring your app's size](https://docs.flutter.dev/perf/app-size)
| vincwestley |
1,893,038 | World Wide Web -Simplified | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-19T02:36:07 | https://dev.to/sbk888_sbk/world-wide-web-simplified-57f4 | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
The web is a giant cabinet of knowledge. Websites are folders in a filing cabinet, and webpages are the documents inside. Your browser is the drawer you open to access them.
## Additional Context
The World Wide Web is often simply called the Web.
| sbk888_sbk |
1,893,037 | In Excel, Find the Maximum Value and the Neighboring N Members Before and After | Problem description & analysis: The column below contains numeric values only: A 1 13 2... | 0 | 2024-06-19T02:34:58 | https://dev.to/judith677/in-excel-find-the-maximum-value-and-the-neighboring-n-members-before-and-after-16l9 | beginners, programming, tutorial, productivity | **Problem description & analysis**:
The column below contains numeric values only:
```
A
1 13
2 21
3 46
4 21
5 49
6 9
7 34
8 23
9 6
10 1
11 37
12 49
13 42
14 40
15 15
16 31
17 17
18 1147
19 18
20 30
21 22
22 4
23 25
24 19
25 13
26 27
27 38
28 30
29 16
30 12
31 23
32 3
33 23
34 19
35 14
36 46
37 23
38 37
39 38
40 28
```
We need to find the maximum value and the 10 neighboring members both before and after it. Remember to perform an out-of-bounds check, since the actual number of eligible values on either side may be less than 10.
```
A
1 23
2 6
3 1
4 37
5 49
6 42
7 40
8 15
9 31
10 17
11 1147
12 18
13 30
14 22
15 4
16 25
17 19
18 13
19 27
20 38
21 30
```
**Solution**:
Use **SPL XLL** to enter the formula below:
```
=spl("=p=(d=?).pmax(),d.calc(p,~[-10:10])",A1:A40)
```
As shown in the picture below:

**Explanation**:
The pmax() function gets the position of the maximum value. The calc() function performs the computation at the specified positions; ~ represents the current member, and [] gets the members in the interval given by relative positions, which automatically prevents the array index from going out of bounds. | judith677 |
1,893,035 | The Future of Programming: Classical vs. Assisted Coding with Mentat and Aider | I was trying to catch a glimpse of the future of programming alongside LLMs. I think I ended up... | 0 | 2024-06-19T02:29:27 | https://dev.to/ykgoon/the-future-of-programming-classical-vs-assisted-coding-with-mentat-and-aider-2d9f | ai, llm | I was trying to catch a glimpse of the future of programming alongside LLMs. I think I ended up discovering a whole new art.
When you bring up AI and coding assistants, most people think of GitHub Copilot and similar alternatives. By now I can confidently say this: code completion is *not* the future. At best, it's just cute.
The discussion around this among seasoned coders is complicated. It's not that we're unwilling to use Copilot. But when we know our turf well, having a code-completion as assistant gets in the way half of the time. So much of it requires learning a different workflow to complement our existing one. By the time I've explained enough to the machine, I could've coded the solution myself. It's unclear if adopting a different workflow is worthwhile.
On the other hand, there's a sense that a lot of the reluctance comes from the ignorance of what these tools can achieve. Similar to learning vim-keybindings. I hesitated for many years. But once I've suffered through the learning curve, I swear by it.
So I put in some time to explore something entirely different. Instead of code-completion tools, I looked at *coding assistants* that live up to its true meaning. I narrowed the field down to two: Mentat and Aider.
## Mentat
I tried [Mentat](https://www.mentat.ai/) first, a seemingly smaller project of the two. The [demo](https://www.youtube.com/watch?v=lODjaWclwpY) looks promising, you should take a look first.
It's a terminal-based application. Installation via `pip` is easy. It's made with Textual TUI so that's a nice touch.
The UX had me at hello. It doesn't try to code with me in Emacs. Instead, I tell it what I want and it will try to deliver in the right places across the project.
To get it to work, I hooked up Mentat to use a coding LLM by Phind hosted by Together AI.
Next I have to pick a problem domain. This is my first mistake: I tried using it to solve a bug in my day job. It's made to work on a code base that is 9 year-old by now.
That broke any available context window limit from the get-go.
See, when working with Mentat we get to specify the relevant files to work on. Code changes by the machine would happen on those files. These files get submitted to the LLM as context (possibly on top of git logs too).
A single Python test file of mine run up to 3,000 lines, easy. No LLM would want to entertain that.
This obstacle got me thinking about fine-tuning a model with my entire code base; or some solution involving RAGs. This can get quite involved; it feels premature. But before I get there, I might as well try Aider first. I shall circle back to Mentat in the future.
## Aider
Watch the [demo](https://aider.chat/) first.
The UX and concepts involved here are similar to Mentat. The difference though is Aider supports Google's Gemini, which has the largest context window out there. If it can't handle my code base, nobody can.
And indeed it could not. I did the setup (similarly with `pip`), worked on the same files from my large code base and Gemini refused to return anything at all.
By now I think I'm making it do things it's not designed to. Most demos like this start idealistically, without the burden of a 9-year-old code base. So I pulled something out of my idea bank (things I wanted to code but never got to it) and made Aider code it from scratch. Now Aider worked as advertised.
This project is a web browser extension that's meant render web pages from within a 3D scene, made to be used within a VR device. The details of this application are immaterial. What matters is it make use of Three.js and various pieces of Javascript stack, something I'm not invested in and therefore out of my depth.
From the get-go Aider created the entire set of boilerplate files, enough for it to work as an empty browser extension. I subsequently spent the whole day working with Aider to get the project to a point where it successfully integrated Three.js.
Now I can start reflecting on the experience.
## How it's really like
Without Aider, a substantial amount of my time would've been spent shaving yak. That include setting manifest files by hand, configuring, doing it wrong and Googling back and forth. All these are low value work, make sense to be done by machines. I wouldn't have taken the project this far in one day coding it myself.
Real action takes place after the first hour. I made a point of telling it what I want like I would to a junior coder, sparing it from making assumptions. That worked out well.
When it gets things wrong, it needs help correcting its own mistakes. Chances are it's because I was not specific about what I was asking for.
When Aider did something unknowingly wrong, I didn't know enough to correct it and assumed it was correct. Further work was built on top of that mistake and cascaded into larger mistakes.
There are two facets to mistakes. When Aider makes mistakes on its own, it needs a human's help in pointing them out. Doing so involves being specific about the solution. Just saying the outcome is wrong is not helpful.
Secondly, the reason I was not specific enough about my request was that I didn't know enough about the intended solution to ask for it. Therefore Aider does *not* free you from knowing your stack and its technical intricacies.
About testing: this is highly domain specific. Had I been doing backend work, I would've had Aider write my test cases for me. However, mine is a VR project, so it was still down to me to test by clicking around in the browser. I think in most projects, Aider will end up encouraging a test-driven approach by making test cases easy to create.
With coding assistants, it's not the case that you ask for the result and it delivers the solution. For any non-trivial problem, you have to iterate with it to arrive at the right solution. So until machines can reason on their own, the human is the reasoning component in this loop.
Like most new skills, learning to get good at working with coding assistants will make you slower before it makes you faster.
Which leads me to declare this: AI-assisted coding is an entirely different art. It's not better than *classical coding* (I have to coin that here); it's not worse either. It's different like Judo and Muay Thai; comparison is unfair without context.
## Classical vs Assisted
Now that I've established two different approaches to coding, I can now engage in some speculation.
Here's an easy one: assisted coding works well on popular programming languages (simply because LLMs are well-trained on them). Projects in *artisanal* languages (let me introduce you to [Hoon](https://developers.urbit.org/guides/core/hoon-school/A-intro)) have no choice but to be handcrafted the classical way.
Classical coders are about *how*; assisted-coders are about *what*. Consequently, assisted projects achieve objective faster but classical projects maintain better.
Should any given software project in the future mix the assisted and classical approaches? I suspect **no**: if a code base is assisted code to begin with, there should be minimal classical intervention.
Conversely a classical code base should not be tainted by assisted code commits. Even if this has no quality implication, I think it will be socially demanded by team members.
I can't qualify this point beyond falling back to my intuition, but this aspect will be interesting to observe.
I wonder how collaboration works differently for an assistedly-coded project. Would problems in a typical FOSS project still exist? If not, is the same pull request workflow of a classical project still relevant?
The final point is how physical limits of LLMs affect engineering approaches. Let's assume there will always be a limit to context windows in LLMs no matter how much fine-tuning and RAGs are pulled.
I think assisted projects are likely to discourage monoliths. Because LLMs can't fit a big monolith in their figurative heads, humans work around it by breaking it into pieces. The result ends up looking like microservices, whether the problem domain demands it or not.
Some may argue that's universally a good thing. That remains to be seen.
## Going forward
This will be ongoing research. I hope to see my toy project through to the end.
I may try Mentat again on a new project at some point.
| ykgoon |
1,893,033 | Configurar Arduino IDE para ESP32 en Windows 10 💻 | 1 Primero debemos instalar los drivers para el ESP32 desde: Abrimos el administrador de... | 0 | 2024-06-19T02:27:17 | https://dev.to/lucasginard/configurar-arduino-ide-para-esp32-en-windows-10-2ogh | arduino, esp32, windows, arduinoide | <img src="https://miro.medium.com/v2/resize:fit:640/format:webp/1*o6ydQUpilXb4t7fgliRLVw.png" />
## 1 First, we need to install the drivers for the ESP32
Open the **Device Manager.**
Open the Other devices section and connect the ESP32; it should show up as **UART BRIDGE CONTROLLER**:
<img src="https://miro.medium.com/v2/resize:fit:720/format:webp/1*8PSmxqIB4Eon8h04cOxrMQ.png" />
We have to install the drivers from:
[https://www.silabs.com/developers/usb-to-uart-bridge-vcp-drivers?tab=downloads](https://www.silabs.com/developers/usb-to-uart-bridge-vcp-drivers?tab=downloads)
The recommended option is **CP210x Universal Windows Driver**.
Extract the folder and go back to the **Device Manager**.
Right-click the **UART Bridge Controller** and click **Update driver**.
Select **Browse my computer for drivers**:
<img src="https://miro.medium.com/v2/resize:fit:640/format:webp/1*rpr0Lll41l_Azz8WXuIB7w.png" />
Browse to the directory of the folder we extracted from the zip and check the **Include subfolders** box:
<img src="https://miro.medium.com/v2/resize:fit:640/format:webp/1*LPLnGRGReFQbAVkTipmTDg.png" />
Once that's set, click Next, and the device should now appear under Ports in the Device Manager.
## 2 Second, install the [Arduino IDE](https://www.arduino.cc/en/software)
> The recommended option is Windows Win 10 and newer, 64 bits
Click Next through everything; nothing extra is needed during the IDE installation.
## 3 After installing, open the IDE and go to File -> Preferences:
<img src="https://miro.medium.com/v2/resize:fit:510/format:webp/1*4sDUz-2WtMz_19Iu30ZrRA.png" />
In **Additional boards manager URLs**, add the following URL:
[https://dl.espressif.com/dl/package_esp32_index.json](https://dl.espressif.com/dl/package_esp32_index.json)
<img src="https://miro.medium.com/v2/resize:fit:720/format:webp/1*8lGqkI2CgvyNQ9MQrkU-YQ.png" />
Click OK. Once it has fetched the .json, go to the section:
## 4 Boards Manager and search for esp32
The recommended package to install is the one **by Espressif**:
<img src="https://miro.medium.com/v2/resize:fit:526/format:webp/1*FCuDxHY_bkibZHAjvFaeYw.png" />
## 5 Once it's installed, go to the Tools section and select:
<img src="https://miro.medium.com/v2/resize:fit:720/format:webp/1*yyDRyfzuvECgow-k2X_ACA.png" />
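To verify everything works, we can upload a minimal blink sketch (assuming the on-board LED is on GPIO 2, which is common on many ESP32 dev boards; adjust the pin if yours differs):

```cpp
#define LED_PIN 2  // on-board LED on many ESP32 dev boards (assumption)

void setup() {
  pinMode(LED_PIN, OUTPUT);
}

void loop() {
  digitalWrite(LED_PIN, HIGH);  // LED on
  delay(500);
  digitalWrite(LED_PIN, LOW);   // LED off
  delay(500);
}
```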
With that, we can now compile our sketches on the **ESP32** 🎉 | lucasginard |
1,893,032 | Quantitative typing rate trading strategy | About us We are a team that has been committed to researching quantitative trading... | 0 | 2024-06-19T02:25:07 | https://dev.to/fmzquant/quantitative-typing-rate-trading-strategy-3jak | trading, strategy, cryptocurrency, fmzquant | ## About us
We are a team that has long been committed to researching quantitative trading strategies.
Last year, we achieved excellent results in the TokenInsight Quantitative Contest.
Thank you, FMZ community, for providing such a platform.
To better support the building of the quant community, the design concept and ideas behind this strategy are published openly here.
I hope it helps you learn the design and application of quantitative trading.
## The origin of the quantitative rate trading strategy
The inspiration for the quantitative rate trading system comes mainly from physics.
The definition of speed in physics is: the distance moved per unit time.
If you regard price as distance, then in the financial market, the definition of speed is the size of price change per unit time.
If the price changes greatly per unit time, the market is usually called a fast market; if the price change per unit time is small, it is called a slow market. Speed is therefore a natural quantity that integrates time and price, and a deep understanding of speed helps us understand the market to a much greater extent.
If the rate increases, momentum is building, which can effectively predict an upward trend in the market.
If the rate drops, momentum is failing, and the risk of a flat or falling market can be perceived.
Each trade uses a fixed number of lots, which is why this is called a quantitative rate trading system.
## Knowledge to be prepared
Highest price (HHV): The highest price reached in a specific period.
Lowest price (LLV): The lowest price reached in a specific period.
Moving Average (MA): A line connecting the average closing price of a specific period.
Slope of regression (SLOPE): The slope of a linear regression with a specific period. (That’s what we call rate)
The linear equation OLS slope formula is as follows:

The mathematical formula is very complicated, but the FMZ platform has already written the grammar formula (SLOPE) of the M language for us.
We can see that the algorithm is as follows:
- SLOPE
SLOPE, the slope of linear regression. SLOPE(X,N): Obtains the slope of the linear regression for the N periods of X.
Note:
1. N includes the current k-line.
2. N is a valid value, but if the current number of k-lines is less than N, the function will return a null value.
3. If N is 'e', the function will return a null value.
4. If N is a null value, the function will return a null value.
5. N can be a variable.
Example:
Using the least squares method to calculate the value of SLOPE(CLOSE,5) on the most recent k-line:
1. Establish a univariate linear equation: close(i) = a + slope*i + m(i)
2. Estimated value of close: close(i)^ = a + slope*i
3. Calculate the residual: m(i) = close(i) - close(i)^ = close(i) - a - slope*i
4. Sum of squared errors:
Q = m(1)*m(1) + ... + m(5)*m(5) = [close(1) - a - slope*1]^2 + ... + [close(5) - a - slope*5]^2
5. Take the first-order partial derivatives of Q with respect to the parameters a and slope and set them to zero:
dQ/da = 2*{[close(1) - a - slope*1] + ... + [close(5) - a - slope*5]}*(-1) = 0
dQ/dslope = 2*{1*[close(1) - a - slope*1] + ... + 5*[close(5) - a - slope*5]}*(-1) = 0
6. Solve the above two equations simultaneously to find the value of slope (indexing so that close(5) is the current bar C and close(1) is the bar four periods back):
slope = {[1*close(1) + 2*close(2) + ... + 5*close(5)] - [close(1) + ... + close(5)]*(1+2+3+4+5)/5} / {[1*1 + 2*2 + ... + 5*5] - (1+2+3+4+5)*(1+2+3+4+5)/5}
The above formula can be expressed using the Ma Language functions as follows:
((5*C + 4*REF(C,1) + 3*REF(C,2) + 2*REF(C,3) + 1*REF(C,4)) - SUM(C,5)*(1+2+3+4+5)/5) / ((SQUARE(1) + SQUARE(2) + SQUARE(3) + SQUARE(4) + SQUARE(5)) - SQUARE(1+2+3+4+5)/5)
Example:
SLOPE(CLOSE,5); represents finding the slope of the 5-period linear regression line of the closing price.
The derivation is a bit involved, but you don't have to work through it yourself — just call the function directly.
## Indicator design:
1. First calculate the highest and lowest prices in a certain period of time
2. Take the average of these 2 prices
3. Calculate a moving average line on the average
4. Find the regression slope of the moving average
```
len:=35;//Design cycles
hh^^HHV(H,len);//Take the highest price in a certain period
ll^^LLV(L,len);//Take the lowest price in a certain period
hl2^^(hh+ll)/2;//Average of highest price and lowest price
avg^^MA(hl2,5);//Calculate the moving average line of the average
ss:SLOPE(avg,len);//Calculate the regression slope of the moving average line
```
Through the design of indicators, we can see that in the main chart, we have the highest point (yellow line), the lowest point (green line), their average (red line), and the Smoothed price moving average calculated by red line (thick purple line)

Then we can calculate the regression slope ss in the attached figure, which represents the rising and falling rate of the moving average.

## Trading strategy design
As can be seen from the figure above, the green arrows indicate the inflection points at the lowest slope, and the orange arrows indicate the inflection points at the highest slope.
Reading along the chart, the weakening of rallies and the weakening of declines can also be clearly seen on the k-line.
If you buy and sell at these inflection points, you can enter a move early instead of chasing a rise at the high or a fall at the low.
- The design idea is:
The rising slope means that the market momentum is increasing, which may stop falling or start to rise.
The continuous decline of the slope means that the market momentum is weak, and may stop rising or start to fall.
The design and expression using M language are as follows:
```
len:=35;//Design cycles
hh^^HHV(H,len);//Take the highest price in a certain period
ll^^LLV(L,len);//Take the lowest price in a certain period
hl2^^(hh+ll)/2;//Average of highest price and lowest price
avg^^MA(hl2,5);//Calculate the moving average line of the average
ss:SLOPE(avg,len);//Calculate the regression slope of the moving average line
ss<REF(ss,1),SPK;//When the slope becomes smaller, it indicates that the market momentum is weakened, close long positions and open short positions.
ss>REF(ss,1),BPK;//When the slope becomes larger, it indicates that the market momentum is enhanced, close short positions and open long positions.
AUTOFILTER;
```
## Backtest and summary
With that, the design of this algorithm is complete; next we use the system to backtest it over about one year.
The instrument is the OKEX BTC quarterly contract;
The backtest period is from January 1, 2019 to the present, with a 1-hour time frame;
The initial account is 3 BTC, with a 0.05% trading fee;
Set a fixed number of 200 lots per transaction.

The backtest shows that the returns are relatively smooth and stable.
In this backtest, 1,261 trades were made throughout the year;
Estimated profit of 4.68 BTC;
Annualized income is about 140%;
The maximum drawdown is 14%;
Sharpe ratio is 0.117.
## Source code sharing
Click to go to copy strategy https://www.fmz.com/strategy/183416
The above shares the ideas and content behind my design; below is the entire M-language code
for your reference, study, and research. If you reprint it, please indicate the source. Thank you.
```
len:=35;//Design cycles
hh^^HHV(H,len);//Take the highest price in a certain period
ll^^LLV(L,len);//Take the lowest price in a certain period
hl2^^(hh+ll)/2;//Average of highest price and lowest price
avg^^MA(hl2,5);//Calculate the moving average line of the average
ss:SLOPE(avg,len);//Calculate the regression slope of the moving average line
ss<REF(ss,1),SPK;//When the slope becomes smaller, it indicates that the market momentum is weakened, close long positions and open short positions.
ss>REF(ss,1),BPK;//When the slope becomes larger, it indicates that the market momentum is enhanced, close short positions and open long positions.
AUTOFILTER;
```
From: https://www.fmz.com/digest-topic/5951 | fmzquant |
1,893,029 | Configuring Nginx for IP Redirection and Domain Configuration | Introduction Nginx, a powerful web server, can efficiently manage web traffic and serve... | 0 | 2024-06-19T02:22:00 | https://dev.to/karthiksdevopsengineer/configuring-nginx-for-ip-redirection-and-domain-configuration-232g | nginx, devops, linux, cloud | ## Introduction
Nginx, a powerful web server, can efficiently manage web traffic and serve multiple applications simultaneously. In this beginner-friendly guide, we’ll explore how to set up IP to domain redirection and application proxying using Nginx.
## Configuring IP Address to Domain Redirection and Application Proxying
Navigate to your Nginx configuration file, typically located at `/etc/nginx/sites-available/<your-file>`
```
server {
listen 80;
server_name <server-ip>;
location / {
return 301 http://<dns-name>$request_uri;
}
}
server {
listen 80;
server_name <dns-name>;
location / {
proxy_pass http://localhost:<port>;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
}
}
```
## Explanation
1. **IP Redirection:** The first server block listens on port 80 for requests directed to the specified `<server-ip>`. It then issues a 301 redirect to the designated `<dns-name>` for all incoming requests.
2. **Domain Configuration:** The second server block listens on port 80 for requests directed to the `<dns-name>`. It proxies the incoming traffic to the application running on `http://localhost:<port>`, ensuring seamless communication between Nginx and your application.
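After saving the file, the site typically needs to be enabled and Nginx reloaded. Assuming a Debian-style layout with `sites-enabled`, something like:
```
sudo ln -s /etc/nginx/sites-available/<your-file> /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx
```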
## Conclusion
That’s it! You’ve successfully configured Nginx to redirect traffic from an IP address to a domain name. Now, whenever someone accesses your application using the IP address, they will be automatically redirected to the specified domain name.
| karthiksdevopsengineer |
1,893,028 | Screw Manufacturers: Meeting Strict Quality Standards for Customer Satisfaction | Maintain It Risk-free along with High top premium Screws Are actually you searching for screws for... | 0 | 2024-06-19T02:21:43 | https://dev.to/katherine_floresg_f6e8fca/screw-manufacturers-meeting-strict-quality-standards-for-customer-satisfaction-5232 | design | Maintain It Risk-free along with High top premium Screws
Are you searching for screws for your DIY projects? Look no further: screw manufacturers are here to meet your requirements with high-quality Flange Bolt screws. We'll discuss the benefits of using high-quality screws from innovative manufacturers that guarantee safety and satisfaction for their customers.
Benefits of High-Quality Screws
High-quality screws are designed to last much longer than cheaper varieties and are made from better materials. This means they are less likely to break or rust, keeping your projects or equipment intact. Additionally, they offer a better grip than lower-quality screws, making them more effective at holding things in place.
Innovation for Quality
Screw manufacturers are constantly developing new and innovative designs that make screws easier and safer to use. They use advanced technologies to produce screws that are stronger, resistant to rust or corrosion, and usable in a variety of applications.
Safety First
Safety should always be a top priority when working with screws. Screw manufacturers understand this and are therefore committed to producing screws that meet strict quality standards. They test their screws to ensure they can withstand different amounts of pressure without breaking or causing any harm.
Using Screws
Screws are used in many applications, such as furniture assembly, home remodeling, automobile repairs, and many others. Different types of Carriage Bolt screws have been designed to suit specific applications, making them more effective and efficient in use.
How to Use Screws
Using screws is simple and easy.
The first step is to make sure you have the right type of screw for the job at hand. The screw should be securely positioned before tightening it with either a screwdriver or a power tool. It is important to tighten the screw enough to hold the desired object in place, but not so tight that it causes damage or distortion.
Service and Quality
High-quality screws from trusted manufacturers always come with excellent customer support. They offer warranties and return policies to ensure customer satisfaction. In the event you have any problems with your Shoulder Bolt screws, these manufacturers are always available to help. | katherine_floresg_f6e8fca |
1,893,022 | Building My Own JSON Parser in Rust | Introduction I recently took on a challenge to build my own JSON parser in Rust. This... | 0 | 2024-06-19T02:08:51 | https://dev.to/krymancer/building-my-own-json-parser-in-rust-fk3 | ### Introduction
I recently took on a challenge to build my own JSON parser in Rust. This project was a fantastic opportunity to dive deep into parsing techniques, which are crucial for everything from simple data formats to building compilers. You can find the full details of the challenge [here](https://codingchallenges.fyi/challenges/challenge-json-parser) and the source code in my [GitHub repository](https://github.com/Krymancer/cc-json-parser).
### The Challenge
The challenge was structured to incrementally build a JSON parser, starting with simple JSON objects and progressively handling more complex structures. Here's a step-by-step breakdown of the process.
### Step Zero: Thinking about the problem and data structure
Being honest, I had never really implemented anything like a parser. My first idea was to count the curly braces and check that they were balanced (which passes the first checks of this coding challenge). But when I needed to parse key-value pairs, I started to struggle. So I took two steps back, read up on parsers, and remembered that compilers use tokens and parsers to do their work. I started by creating a structure to handle all possible tokens in a JSON document, and Rust's type system really helped me:
```rust
enum Token {
CurlyOpen,
CurlyClose,
SquareOpen,
SquareClose,
Colon,
Comma,
String(String),
Number(f64),
Bool(bool),
Null,
}
```
This enum contains all the possible tokens we will need. I took them from the [json.org](https://www.json.org/json-en.html) website, which has a really great visual representation of how parsing a JSON file works.
### Step One: Tokenizing the input
The first step was to take each character of the file and iterate over them to create an array of tokens to be fed into a parser. Most of the tokens are straightforward, like curly braces, square brackets, colons and commas, but values like strings, numbers, booleans and nulls required a little more work. I will not show each one, because it's mostly iterating through chars and making sure the value is what's expected — for example, a string starts with `"` and finishes with `"`, numbers cannot have leading zeros, and so on. But I hit some troubles during testing that I find interesting and will mention later.
```rust
fn tokenize(input: String) -> Result<Vec<Token>> {
    // some checks omitted in the post; initialization recovered from the code below:
    let mut tokens = Vec::new();
    let mut chars = input.chars().peekable();
while let Some(&ch) = chars.peek() {
match ch {
'{' => {
tokens.push(Token::CurlyOpen);
chars.next();
}
'}' => {
tokens.push(Token::CurlyClose);
chars.next();
}
'[' => {
tokens.push(Token::SquareOpen);
chars.next();
}
']' => {
tokens.push(Token::SquareClose);
chars.next();
}
':' => {
tokens.push(Token::Colon);
chars.next();
}
',' => {
tokens.push(Token::Comma);
chars.next();
}
'"' => {
tokens.push(Token::String(tokenize_string(&mut chars)?));
}
'0'..='9' | '-' => {
tokens.push(Token::Number(tokenize_number(&mut chars)?));
}
't' | 'f' => {
tokens.push(Token::Bool(tokenize_bool(&mut chars)));
}
'n' => {
tokenize_null(&mut chars);
tokens.push(Token::Null);
}
_ if ch.is_whitespace() => {
chars.next();
}
_ => return Err(anyhow!("Unexpected character: {}", ch)),
}
}
Ok(tokens)
}
```
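The post skips the individual value tokenizers; as a rough illustration, a number tokenizer along these lines would fit the `tokenize` loop above. It leans on Rust's `f64` parsing and only adds the leading-zero check, so it is looser than the full JSON grammar in places:

```rust
fn tokenize_number(chars: &mut std::iter::Peekable<std::str::Chars>) -> Result<f64> {
    let mut raw = String::new();
    // Greedily collect everything that can appear in a JSON number
    while let Some(&ch) = chars.peek() {
        match ch {
            '0'..='9' | '-' | '+' | '.' | 'e' | 'E' => {
                raw.push(ch);
                chars.next();
            }
            _ => break,
        }
    }
    // Reject leading zeros such as "012" (a bare "0" or "0.5" is fine)
    let digits = raw.strip_prefix('-').unwrap_or(&raw);
    if digits.len() > 1 && digits.starts_with('0') && !digits.starts_with("0.") {
        return Err(anyhow!("Invalid number with leading zero: {}", raw));
    }
    raw.parse::<f64>()
        .map_err(|_| anyhow!("Invalid number: {}", raw))
}
```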
### Step Two: Parsing the tokens
For parsing I had to define a type to hold the values. Using the json.org grammar, I collected all the possible value kinds and created the following enum:
```rust
pub enum JsonValue {
Object(Vec<(String, JsonValue)>),
Array(Vec<JsonValue>),
String(String),
Number(f64),
Bool(bool),
Null,
}
```
With this enum I created a function that parses the token array. When testing, I noticed that json.org has tests for nesting depth. I didn't know JSON parsers enforce a maximum depth; it's there to prevent attacks via deeply nested input. This is well known and documented elsewhere, so let's keep going.
The `parse_tokens` function makes sure there is only one root value and also checks that it's an object or an array, since JSON can only have an object or an array as its root node.
Here I already parse the value before checking its kind; that's not ideal, but it's good enough for a first version, so I didn't bother.
```rust
fn parse_tokens(tokens: Vec<Token>) -> Result<JsonValue> {
let mut iter = tokens.iter().peekable();
let value = parse_value(&mut iter, 0)?;
// Check if there are any remaining tokens after the top-level value
if iter.peek().is_some() {
return Err(anyhow!("Extra tokens after top-level value"));
}
match value {
JsonValue::Object(_) | JsonValue::Array(_) => Ok(value),
_ => Err(anyhow!(
"A JSON payload should be an object or array, not a string."
)),
}
}
```
The magic happens in the `parse_value` function:
```rust
// The post never shows MAX_DEPTH; the exact cap is a design choice, so 19 here is an assumption
const MAX_DEPTH: usize = 19;

fn parse_value<'a, I>(tokens: &mut std::iter::Peekable<I>, depth: usize) -> Result<JsonValue>
where
I: Iterator<Item = &'a Token>,
{
if depth > MAX_DEPTH {
return Err(anyhow!("Exceeded maximum nesting depth"));
}
match tokens.peek() {
Some(Token::CurlyOpen) => parse_object(tokens, depth),
Some(Token::SquareOpen) => parse_array(tokens, depth),
Some(Token::String(_)) => {
if let Some(Token::String(s)) = tokens.next() {
Ok(JsonValue::String(s.clone()))
} else {
Err(anyhow!("Expected a string"))
}
}
Some(Token::Number(_)) => {
if let Some(Token::Number(n)) = tokens.next() {
Ok(JsonValue::Number(*n))
} else {
Err(anyhow!("Expected a number"))
}
}
Some(Token::Bool(_)) => {
if let Some(Token::Bool(b)) = tokens.next() {
Ok(JsonValue::Bool(*b))
} else {
Err(anyhow!("Expected a boolean"))
}
}
Some(Token::Null) => {
tokens.next(); // Consume the Null token
Ok(JsonValue::Null)
}
_ => Err(anyhow!("Unexpected token")),
}
}
```
This basically takes care of objects, arrays, strings, numbers, booleans and null values, converting the tokens into actual Rust values.
### Step Three: Parsing Nested Objects and Arrays
The next step was to handle JSON objects containing other objects and arrays. This required implementing recursive parsing functions, keeping in mind the depth:
```rust
fn parse_object<'a, I>(tokens: &mut std::iter::Peekable<I>, depth: usize) -> Result<JsonValue>
where
I: Iterator<Item = &'a Token>,
{
let mut object = Vec::new();
tokens.next(); // Consume the '{'
loop {
match tokens.peek() {
Some(Token::CurlyClose) => {
tokens.next(); // Consume the '}'
break;
}
Some(Token::String(_)) => {
if let Some(Token::String(key)) = tokens.next() {
if let Some(Token::Colon) = tokens.next() {
let value = parse_value(tokens, depth + 1)?;
object.push((key.clone(), value));
match tokens.peek() {
Some(Token::Comma) => {
tokens.next(); // Consume the ','
if let Some(Token::CurlyClose) = tokens.peek() {
return Err(anyhow!("Trailing comma in object"));
}
}
Some(Token::CurlyClose) => {
tokens.next(); // Consume the '}'
break;
}
_ => return Err(anyhow!("Expected ',' or '}' after object value")),
}
} else {
return Err(anyhow!("Expected ':' after key in object"));
}
}
}
_ => return Err(anyhow!("Expected string key or '}' in object")),
}
}
Ok(JsonValue::Object(object))
}
fn parse_array<'a, I>(tokens: &mut std::iter::Peekable<I>, depth: usize) -> Result<JsonValue>
where
I: Iterator<Item = &'a Token>,
{
let mut array = Vec::new();
tokens.next(); // Consume the '['
loop {
match tokens.peek() {
Some(Token::SquareClose) => {
tokens.next(); // Consume the ']'
break;
}
Some(_) => {
let value = parse_value(tokens, depth + 1)?;
array.push(value);
match tokens.peek() {
Some(Token::Comma) => {
tokens.next(); // Consume the ','
if let Some(Token::SquareClose) = tokens.peek() {
return Err(anyhow!("Trailing comma in array"));
}
}
Some(Token::SquareClose) => {
tokens.next(); // Consume the ']'
break;
}
_ => return Err(anyhow!("Expected ',' or ']'")),
};
}
_ => return Err(anyhow!("Expected value or ']'")),
};
}
Ok(JsonValue::Array(array))
}
```
### Step Four: Testing and Validation
Finally, I added my own tests to ensure the parser handles both valid and invalid JSON correctly. I also used the JSON test suite from json.org to validate the parser:
```rust
#[cfg(test)]
mod tests {
use crate::parser::{parse_json, JsonValue};
#[test]
fn test_invalid_path() {
let path = String::from("invalid/path");
let result = parse_json(path);
assert!(result.is_err());
if let Err(e) = result {
assert_eq!(e.to_string(), "Failed to read File");
}
}
#[test]
fn test_step1_valid() {
let path = String::from("./tests/step1/valid.json");
let result = parse_json(path).expect("Error parsing JSON");
assert_eq!(result, JsonValue::Object(vec![]));
}
// Additional tests for other steps...
}
```
### Step Five: Testing and fixing...
When testing I found two problems that needed a little work: strings and numbers.
My problem with strings was escaped characters. At first I checked for strings by scanning for the closing `"` and putting every character in between into the string, but then I found a test that had:
```json
{
"key": "\""
}
```
My parser broke. While adding a special case for this I realized I wasn't accounting for escaped characters at all, so I rewrote the function to handle them:
```rust
fn tokenize_string(chars: &mut std::iter::Peekable<std::str::Chars>) -> Result<String> {
let mut result = String::new();
chars.next(); // Skip opening (") quote
while let Some(&ch) = chars.peek() {
match ch {
'\\' => {
chars.next(); // Skip the backslash
if let Some(&escaped_char) = chars.peek() {
match escaped_char {
'"' => result.push('"'),
'\\' => result.push('\\'),
'/' => result.push('/'),
'b' => result.push('\x08'), // Backspace rust don't like \b in char
'f' => result.push('\x0C'), // Form feed rust don't like \f in char
'n' => result.push('\n'),
'r' => result.push('\r'),
't' => result.push('\t'),
'u' => {
let unicode_sequence = tokenize_unicode_sequence(chars)?;
result += &unicode_sequence;
}
_ => return Err(anyhow!("Invalid escape sequence: \\{}", escaped_char)),
}
chars.next(); // Skip the escaped character
} else {
return Err(anyhow!("Unexpected end of input after escape character"));
}
}
'"' => {
chars.next(); // Skip closing (") quote
break; // Closing quote found
}
_ if ch.is_whitespace() && ch != ' ' => {
return Err(anyhow!(
"Invalid unescaped whitespace character in string: {}",
ch
));
}
_ => {
result.push(ch);
chars.next();
}
}
}
Ok(result)
}
```
For some reason Rust doesn't accept `'\b'` and `'\f'` as char literals, so I used the ASCII escape values instead. This function also accounts for whitespace inside strings: the only unescaped whitespace JSON allows is the space character `' '` itself; tabs and other whitespace must be escaped.
I also found that I was not parsing Unicode escape sequences, and they are a little different:
```rust
fn tokenize_unicode_sequence(chars: &mut std::iter::Peekable<std::str::Chars>) -> Result<String> {
let mut result = String::new();
chars.next(); // Skip 'u'
let mut unicode_sequence = String::new();
for _ in 0..4 {
if let Some(&hex_digit) = chars.peek() {
if hex_digit.is_ascii_hexdigit() {
unicode_sequence.push(hex_digit);
chars.next();
} else {
return Err(anyhow!("Invalid Unicode escape sequence"));
}
} else {
return Err(anyhow!(
"Unexpected end of input in Unicode escape sequence"
));
}
}
if let Ok(unicode_char) =
u16::from_str_radix(&unicode_sequence, 16).map(|u| char::from_u32(u as u32))
{
if let Some(c) = unicode_char {
result.push(c);
} else {
return Err(anyhow!("Invalid Unicode character"));
}
} else {
return Err(anyhow!("Invalid Unicode escape sequence"));
}
Ok(result)
}
```
This took care of the errors I had with strings; now on to numbers. First of all, I had totally forgotten that `e` AND `E` can appear in numbers: they represent scientific notation, so 2e5 is the same as 2 * 10^5. JSON also doesn't allow leading zeros in numbers, so I accounted for that too. This made a simple function a lot more complex, but the end result (not at all perfect) was not too bad:
```rust
fn tokenize_number(chars: &mut std::iter::Peekable<std::str::Chars>) -> Result<f64, anyhow::Error> {
let mut result = String::new();
let mut is_first_char = true;
let mut has_dot = false;
while let Some(&ch) = chars.peek() {
match ch {
'0'..='9' => {
if is_first_char && ch == '0' {
chars.next(); // Consume the '0'
if let Some(&next_ch) = chars.peek() {
match next_ch {
'.' => {
// Handle 0.x numbers
result.push('0');
}
'0'..='9' => return Err(anyhow!("Invalid number with leading zero")),
_ => {
result.push('0'); // Just 0
break;
}
}
} else {
result.push('0'); // Just 0
break;
}
} else {
result.push(ch);
chars.next();
}
}
'.' => {
if has_dot {
return Err(anyhow!("Multiple decimal points in number"));
}
result.push(ch);
chars.next();
has_dot = true;
}
'-' | '+' if is_first_char => {
result.push(ch);
chars.next();
}
'e' | 'E' => {
result.push(ch);
chars.next();
// After 'e' or 'E', we should expect a digit or a sign
if let Some(&next_ch) = chars.peek() {
if next_ch == '-' || next_ch == '+' || next_ch.is_digit(10) {
result.push(next_ch);
chars.next();
} else {
return Err(anyhow!("Invalid character after exponent"));
}
} else {
return Err(anyhow!("Exponent without digits"));
}
}
_ => break,
}
is_first_char = false;
}
if let Some(ch) = result.chars().last() {
if matches!(ch, 'e' | 'E' | '.' | '-' | '+') {
return Err(anyhow!("Invalid number"));
}
}
match result.to_lowercase().parse() {
Ok(number) => Ok(number),
Err(..) => Err(anyhow!("Number is invalid")),
}
}
```
### Conclusion
Building a JSON parser from scratch was a challenging but rewarding experience. It provided valuable insights into parsing techniques and Rust programming. You can check out the complete source code and try the parser yourself by visiting my [GitHub repository](https://github.com/Krymancer/cc-json-parser).
| krymancer |
1,893,019 | Mastering Hybrid Workforce Management: Tips and Tricks | As businesses adapt to the new norms of work, hybrid workforce management has emerged as a crucial... | 0 | 2024-06-19T02:01:25 | https://dev.to/bocruz0033/mastering-hybrid-workforce-management-tips-and-tricks-33am | hybridworkforce, remotework | As businesses adapt to the new norms of work, hybrid workforce management has emerged as a crucial element of organizational strategy. Balancing remote and on-site operations can be challenging, but with the right approach, companies can thrive. Here are essential tips and tricks to master hybrid [workforce management](https://paylinedata.com/blog/hybrid).
**1. Define Clear Policies and Expectations**
Start by setting clear and consistent policies that address work hours, availability, communication methods, and performance metrics. Ensure these policies are equitable for both remote and in-office employees to maintain fairness and transparency.
**2. Invest in the Right Technology**
Effective hybrid workforce management relies on robust technology. Invest in tools that facilitate seamless communication, collaboration, and productivity. This includes project management software, virtual meeting platforms, and secure cloud storage solutions.
**3. Foster Communication and Collaboration**
Encourage regular communication through daily check-ins, weekly team meetings, and monthly all-hands meetings. Use collaborative tools that enable real-time editing, feedback, and file sharing to keep everyone on the same page regardless of their physical location.
**4. Promote a Strong Company Culture**
Maintaining a cohesive company culture is challenging in a hybrid environment. Organize virtual social events, workshops, and training sessions that reinforce company values and build team spirit. Encourage on-site employees to interact with remote colleagues to bridge any cultural gaps.
**5. Offer Flexible Working Arrangements**
Flexibility is at the heart of hybrid workforce management. Allow employees to choose work hours and locations that best suit their productivity peaks and personal commitments. This flexibility can lead to increased job satisfaction and reduced turnover.
**6. Prioritize Employee Well-being**
The hybrid model should enhance work-life balance. Provide resources and support for mental health, including access to counseling services, wellness programs, and regular check-ins to address any work-related stress or issues.
**7. Continuous Training and Development**
Ensure all employees have opportunities for growth and development, regardless of their work location. Offer virtual training programs and digital learning modules to help employees upgrade their skills and stay updated with industry trends.
**8. Use Data to Make Informed Decisions**
Monitor the productivity and engagement levels across different settings to gauge what works best for your team. Use data from employee feedback, performance metrics, and technology usage to refine your hybrid work model continually.
**9. Lead with Empathy and Flexibility**
Leadership in a hybrid environment should be adaptive and empathetic. Leaders must understand the unique challenges faced by remote and in-office teams and adjust management styles accordingly. This might involve more personalized communication and support for individual employee needs.
**10. Regularly Review and Adapt Your Strategies**
The hybrid model is still evolving, so it's important to stay flexible and open to changes. Regularly review your policies and strategies to accommodate new technologies, employee feedback, and shifts in the work environment.
Mastering hybrid workforce management is a dynamic and ongoing process. By implementing these tips and tricks, organizations can create a productive, engaging, and collaborative work environment that supports both remote and in-office employees. This approach not only boosts productivity but also contributes to the overall resilience and adaptability of the business in a rapidly changing world. | bocruz0033 |
1,893,018 | Comparison of Navicat and SQLynx Features (Overall Comparison) | Navicat and SQLynx are both database management tools. Navicat is more popular among individual users... | 0 | 2024-06-19T01:54:52 | https://dev.to/concerate/comparison-of-navicat-and-sqlynx-features-overall-comparison-2hhf | Navicat and SQLynx are both database management tools.
Navicat is more popular among individual users and is generally used for simple development needs with relatively small data volumes and straightforward development processes.
SQLynx, a database management tool from recent years, is web-based and its desktop version is packaged with Electron. It is primarily designed for enterprise-level clients, catering to large-scale enterprise data. However, it is also suitable for individual users.
In summary, if you have simple development needs as an individual, both Navicat and SQLynx can meet your requirements.
If you are engaged in enterprise development or dealing with large data volumes, it is generally recommended to use SQLynx.
Navicat is a powerful database management and development tool widely used for managing and operating various databases. It supports multiple databases, including MySQL, MariaDB, SQL Server, Oracle, PostgreSQL, and SQLite, offering a rich set of features to meet the needs of database administrators, developers, and data analysts.
SQLynx is an advanced web SQL integrated development environment (IDE) designed for database management, querying, and data analysis. As a browser-based tool (also supporting a desktop version), SQLynx provides highly convenient cross-platform access and collaboration features, enabling users to connect to and manage databases anytime and anywhere.
Below is a detailed comparison between SQLynx and Navicat to help you understand the similarities and differences between these two SQL tools.

Summary
SQLynx:
- Supports individual users but is more suitable for teams and enterprise-level clients requiring cross-platform access and real-time collaboration.
- Modern web interface that is easy to use and maintain.
- Provides advanced auditing and intelligent code completion features.
Navicat:
- Suitable for users requiring offline access and local installation.
- Supports multiple databases and platforms, with powerful and comprehensive features.
- Traditional desktop interface, suitable for users familiar with desktop applications.
The choice of the appropriate tool depends on your specific needs and work environment.
SQLynx is suitable for individual users, small and medium-sized enterprises, and large enterprises that require modern, collaborative, and cloud-based features.
Navicat, on the other hand, is suitable for individual users who need a simple tool for development purposes.
| concerate | |
1,893,016 | How to conduct keyword research and choose the most effective keywords for your website | Keyword research is fundamental to SEO, crucial for boosting website visibility and attracting target... | 0 | 2024-06-19T01:52:31 | https://dev.to/juddiy/how-to-conduct-keyword-research-and-choose-the-most-effective-keywords-for-your-website-4ld3 | seo, learning | Keyword research is fundamental to SEO, crucial for boosting website visibility and attracting target audiences. Choosing the right keywords not only increases website traffic but also enhances user engagement and conversion rates. So, how do you conduct keyword research and select the most effective keywords for your website? Here are several steps:
1. **Understand Your Target Audience**:
- **Define Your Ideal Customers**: Consider who your ideal customers are, their interests, and needs. Understanding your audience helps identify the vocabulary and phrases they use in search engines.
- **Analyze User Intent**: Understanding user search intent is critical. When users search a keyword, they may be seeking information, making a purchase, or looking for solutions. Understanding these intents helps in selecting appropriate types of keywords.
2. **Utilize Keyword Research Tools**:
- **Google Keyword Planner**: This free tool helps discover relevant keywords, showing search volume and competition for each keyword.
- **Ahrefs, SEMrush**: These paid tools provide deeper keyword analysis, including keyword difficulty, search trends, and competitor keyword strategies.
- **Google Trends**: Used to understand keyword search trends, identifying seasonal or geographic variations.
3. **Brainstorm and List Keywords**:
- **List Relevant Topics**: Start with your business or website content to list primary topics and concepts.
- **Expand and Refine**: For each main topic, list related long-tail keywords and variations, which are typically more targeted and less competitive.
4. **Competitor Analysis**:
- **Study Competitors' Websites**: See which keywords similar websites use and how they perform in rankings.
- **Identify Competitive Gaps**: Recognize keywords that competitors overlook but are still valuable for your business.
5. **Evaluate Keyword Value**:
- **Search Volume and Competition**: Choose keywords with adequate search volume and manageable competition. Keywords that are too popular may be hard to rank for, while overly niche keywords may lack sufficient traffic.
- **Relevance and Intent Match**: Ensure selected keywords are highly relevant to your content and align with user search intent.
6. **Implement Keyword Grouping Strategies**:
- **Keyword Grouping**: Organize keywords by topic or intent to optimize different pages or sections of content.
- **Create Keyword Mapping**: Assign each keyword to specific pages on your website, ensuring each page is optimized for a primary keyword and its related variations.
7. **Monitor and Adjust Regularly**:
   - **Track Performance**: Use an [SEO AI tool](https://seoai.run/) to track keyword performance and understand which keywords drive the most traffic and conversions.
- **Regular Updates**: Based on data and trends, regularly review and update your keyword strategy to maintain its effectiveness and competitiveness.
By following these steps, you can gain insights into conducting effective keyword research and selecting the most suitable keywords for your website. This not only enhances your search engine rankings but also attracts precise audiences, driving business growth. | juddiy |
1,893,015 | Chain Link Fences: Benefits and Applications | Chain web link fencing are actually a type of solid, steel fencing that's comprised of inter weaved... | 0 | 2024-06-19T01:43:50 | https://dev.to/katherine_floresg_f6e8fca/chain-link-fences-benefits-and-applications-h45 | design |
Chain link fences are a type of sturdy steel fencing made of interwoven steel wires, offering many benefits and applications for homeowners and businesses. From providing security and safety to improving privacy, these fences are a smart choice for anyone looking to protect their property.
Benefits of Chain Link Fences:
Chain link fences are known for their many advantages, including durability and affordability. They are highly resilient and long-lasting, providing a barrier against intruders and unwanted animals. They are also low-maintenance, requiring little upkeep over their lifetime. Additionally, chain link fences come in a variety of colors and sizes, making them customizable to suit any property and style preference.
Innovation in Chain Link Fences:
Over the years, innovation in the manufacturing of chain link fences has enabled more sophisticated and better-performing products. By incorporating advanced materials and technologies, manufacturers have made these fences more resistant to rust, corrosion, and wear, extending their lifespan. Innovative techniques such as vinyl coating are also used to give the fence a smooth, modern appearance.
Safety and Use:
Chain link fences protect homeowners, families, and pets by providing a solid layer of security between the property and the outside world. Their durable construction makes a 6ft chain link fence a reliable choice for keeping out intruders and unwanted animals. They also offer an excellent alternative to traditional wood or metal fencing, which may deteriorate or require frequent repainting.
How to Install a Chain Link Fence:
Installing a chain link fence may seem daunting, but with a little help it can be a simple and straightforward process. It all starts with selecting the right fence for your needs, based on the height and gauge of the fencing; you will also need to consider the terrain and location. After selecting your fence, you dig post holes, set the posts in the ground, and attach the chain link fabric. Finally, hang any gates and add finishing touches to complete the job.
Service and Quality:
Choosing a reliable installer is essential to a quality product and service experience. A good fencing company ensures proper installation, quality materials, and dependable customer service. Look for a company with a reputation for excellent customer satisfaction, reliable service, and timely delivery.
Applications:
Chain link fences are a versatile choice, suitable for a wide range of applications. They are commonly found in residential areas to protect property and keep pets safe, while also providing separation from neighbors. They are also found in commercial and industrial settings, such as construction sites. Chain link fences are a popular choice for recreational areas too because, unlike other fences, they are see-through and do not block the view.
| katherine_floresg_f6e8fca |
1,893,013 | Challenge twilio | This is a submission for the Twilio Challenge What I Built Demo ... | 0 | 2024-06-19T01:40:11 | https://dev.to/yhordic/challenge-twilio-554o | devchallenge, twiliochallenge, ai, twilio | *This is a submission for the [Twilio Challenge ](https://dev.to/challenges/twilio)*
## What I Built
<!-- Share an overview about your project. -->
## Demo
<!-- Share a link to your app and include some screenshots here. -->
## Twilio and AI
<!-- Tell us how you leveraged Twilio’s capabilities with AI -->
## Additional Prize Categories
<!-- Does your submission qualify for any additional prize categories (Twilio Times Two, Impactful Innovators, Entertaining Endeavors)? Please list all that apply. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image (if you want). -->
<!-- Thanks for participating! --> | yhordic |
1,893,012 | Learn to Build a GitHub Repository Showcase Using HTML, CSS, and JavaScript | Are you looking to create a personal portfolio that dynamically showcases your GitHub repositories?... | 0 | 2024-06-19T01:34:33 | https://raajaryan.tech/learn-to-build-a-github-repository-showcase-using-html-css-and-javascript | javascript, beginners, programming, tutorial | Are you looking to create a personal portfolio that dynamically showcases your GitHub repositories? In this blog post, I'll guide you through building a web project using HTML, CSS, and JavaScript to display your repositories from GitHub. This is a fantastic way to demonstrate your work and technical skills to potential employers or collaborators.
### 1. Setup Your Environment
Before starting, ensure you have a text editor (such as VSCode), a web browser, and a GitHub account.
1. **Text Editor:** Download and install [VSCode](https://code.visualstudio.com/).
2. **GitHub Account:** Ensure your repositories are public or use a GitHub token for private repositories.
### 2. Create the Basic Structure
Let's start by setting up the basic file structure for our project.
**Project Structure:**
```
/github-repos
|-- index.html
|-- style.css
|-- script.js
```
**index.html:**
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>My GitHub Repositories</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<header>
<h1>My GitHub Repositories</h1>
</header>
<main id="repo-container"></main>
<script src="script.js"></script>
</body>
</html>
```
**style.css:**
```css
body {
font-family: Arial, sans-serif;
background-color: #f4f4f4;
color: #333;
margin: 0;
padding: 0;
display: flex;
flex-direction: column;
align-items: center;
}
header {
background: #333;
color: #fff;
padding: 10px 0;
width: 100%;
text-align: center;
}
#repo-container {
display: flex;
flex-wrap: wrap;
justify-content: center;
margin-top: 20px;
}
.repo {
background: #fff;
margin: 10px;
padding: 15px;
border-radius: 8px;
box-shadow: 0 0 10px rgba(0,0,0,0.1);
width: 300px;
}
.repo h2 {
margin: 0 0 10px;
font-size: 1.5em;
}
.repo p {
margin: 0 0 10px;
}
```
### 3. Fetch GitHub Repositories
In the `script.js` file, we will fetch repositories from your GitHub account using the GitHub API.
**script.js:**
```js
document.addEventListener('DOMContentLoaded', function() {
const username = 'your-github-username';
const repoContainer = document.getElementById('repo-container');
async function fetchRepositories() {
try {
const response = await fetch(`https://api.github.com/users/${username}/repos`);
const repos = await response.json();
displayRepositories(repos);
} catch (error) {
console.error('Error fetching repositories:', error);
}
}
function displayRepositories(repos) {
repos.forEach(repo => {
const repoElement = document.createElement('div');
repoElement.classList.add('repo');
repoElement.innerHTML = `
<h2>${repo.name}</h2>
<p>${repo.description || 'No description available'}</p>
<a href="${repo.html_url}" target="_blank">View Repository</a>
`;
repoContainer.appendChild(repoElement);
});
}
fetchRepositories();
});
```
### 4. Display Repositories Dynamically
The `displayRepositories` function dynamically creates HTML elements for each repository and appends them to the `repo-container` in the DOM. Each repository card includes the name, description, and a link to the repository.
### 5. Style the Project
The provided CSS styles ensure that the project has a clean, responsive design. You can customize the styles further to match your personal branding.
### 6. Deploy Your Project
Once your project is complete, you can deploy it using GitHub Pages for free. Here’s how:
1. **Create a Repository:** Create a new repository on GitHub and push your project files.
2. **Enable GitHub Pages:** Go to the repository settings, scroll down to the "GitHub Pages" section, and select the branch (e.g., `main`) and folder (e.g., `/root`) to deploy from.
3. **Access Your Site:** Your site will be available at `https://<your-username>.github.io/<repository-name>/`.
### 7. Conclusion
You've now created a dynamic web project to showcase your GitHub repositories using HTML, CSS, and JavaScript. This project not only highlights your technical skills but also serves as an interactive portfolio for potential employers and collaborators.
Feel free to expand on this project by adding more features such as filtering repositories by language, sorting by stars, or including your profile information. Happy coding!
---
| raajaryan |
1,893,010 | Balance strategy and grid strategy | If the price of Bitcoin one day in the future will be the same as it is now, what strategy will you... | 0 | 2024-06-19T01:25:02 | https://dev.to/fmzquant/balance-strategy-and-grid-strategy-2dam | startegy, grid, balance, fmzquant | If the price of Bitcoin one day in the future will be the same as it is now, what strategy will you adopt to gain profit? The easy way to think of is to sell if it rises, buy if it falls, and wait for the price to recover again, and then earn the intermediate price difference. How to implement it? How much do you need to sell when it rises? If you sell too early, you will obviously lose money. Similarly, buying too early will make less profit. Balance strategy and grid strategy are both to solve this problem, they are also very similar, this article will specifically introduce these two strategies.
## Brief introduction of strategy
The principle of the balance strategy is very simple. The strategy only holds a fixed proportion of the crypto currency, such as 50%. When the value of the crypto currency exceeds 50%, it will be sold. On the contrary, the value of the crypto currency held will be maintained at around 50%. If the price of the crypto currency continues to rise, it will continue to be sold, but always holding a certain crypto currency will not be sold out. If the price of the crypto currency first rises and then falls back to the starting price, the strategy of selling high and buying low will increase the crypto currency and fiat currency.
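To make the rule concrete, here is a minimal Python sketch of the rebalancing logic just described. It is an illustration only: the `trade` helper, the 50% target, and the 1% adjustment threshold are assumptions, not code from the strategies linked at the end of this article.
```python
TARGET_RATIO = 0.5   # keep 50% of total value in the crypto currency
ADJUST_RATIO = 0.01  # act once the ratio drifts by more than 1%

def rebalance(price, coin, cash, trade):
    """trade(amount) is an assumed helper: buys (+) or sells (-) `amount` coins."""
    total = cash + coin * price
    ratio = coin * price / total
    if ratio > TARGET_RATIO + ADJUST_RATIO:
        trade(-(ratio - TARGET_RATIO) * total / price)  # sell the excess
    elif ratio < TARGET_RATIO - ADJUST_RATIO:
        trade((TARGET_RATIO - ratio) * total / price)   # buy the shortfall
```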
The grid strategy is to buy and sell at a fixed price, and you can set multiple groups of trading intervals, such as 8000-8500, 8500-9000. The strategy will buy 0.1 crypto currency at 8000 yuan, sell 0.1 crypto currency when it rises to 8500, continue to sell 0.1 crypto currency at 9000, and then buy 0.1 crypto currency when it falls to 8500. Note that the grid will only peg the price at the other end only when one end of a range is traded. In this way, the strategy always buys low and sells high. It is also noticed that the crypto currency bought and sold are the same, so that when the price returns to the initial price, the crypto currency of the strategy remain unchanged, but the fiat currency increases.
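Likewise, here is a minimal Python sketch of the grid rule described above; the 500-yuan step, the 0.1-coin order size, and the `trade` helper are illustrative assumptions:
```python
GRID_STEP = 500   # price distance between grid lines, e.g. 8000, 8500, 9000
GRID_SIZE = 0.1   # coins bought or sold each time a grid line is crossed

def on_price(price, state, trade):
    """Call on every tick; state = {"last_level": <last traded grid line>}."""
    last = state["last_level"]
    if price <= last - GRID_STEP:
        trade(+GRID_SIZE)                      # price fell one grid: buy
        state["last_level"] = last - GRID_STEP
    elif price >= last + GRID_STEP:
        trade(-GRID_SIZE)                      # price rose one grid: sell
        state["last_level"] = last + GRID_STEP
```
Note how the reference level only moves once one side of the interval actually trades, which is exactly the "peg the other end after a fill" behaviour described above.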
The performance of these two strategies is very close, but also very different. The balance strategy always has funds to buy and sell, and the grid strategy has a certain range. If it exceeds the range, you may not be able to continue to buy, or you may have sold all the crypto currency.
## How to measure profit
Before evaluating the strategy, we need to formulate the criteria for evaluating the profit. Most people's view of income is absolute income, that is, income = current total funds - initial total funds. But if the crypto currency is initially held, this method cannot show the profit of the active behavior of the strategy.
For example, the initial account has a balance of 10,000 yuan and 1 unit of crypto currency, with a price of 8,000, so total funds = 10,000 + 1 * 8000 = 18,000 yuan. After running the strategy for a period of time, the account balance is 2,000 yuan with 2 units of crypto currency, the price is 9,000, so total funds = 2000 + 2 * 9000 = 20,000 yuan. Absolute income = 20000 - 18000 = 2,000 yuan. But the initial account already held 1 unit of crypto currency; even if the strategy were not run, that holding alone would have returned 1,000 yuan, so the profit actually generated by the strategy is only 1,000 yuan. This calculation method is floating income: floating income = current balance + current crypto currency * current price - (initial balance + initial crypto currency * current price).
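As a quick sanity check, here is the same calculation as a small Python helper, using the numbers from the example above:
```python
# Floating income = what the strategy itself earned, excluding the
# gain the initial coin holding would have made anyway.
def floating_income(init_cash, init_coin, cur_cash, cur_coin, cur_price):
    return (cur_cash + cur_coin * cur_price) - (init_cash + init_coin * cur_price)

# 2,000 + 2 * 9,000 - (10,000 + 1 * 9,000) = 1,000 yuan
print(floating_income(10_000, 1, 2_000, 2, 9_000))  # -> 1000
```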
Next we look at the backtest results of these two strategies.
## Backtesting of balance strategies
The balance strategy requires two parameters, the holding value ratio and the adjustment ratio. The holding value is set to 0.5 here, that is, to hold half of the money and half of the crypto currency, and the adjustment ratio is set to 0.01, that is, when the crypto currency price rises and the value of the crypto currency exceeds 51%, 1% of the crypto currency is sold, and the same goes for the decline. The backtest time is the past year, the backtest crypto currency is the Binance BTC_USDT trading pair, and the handling fee is 0.1%.
Backtest results:

Profit curve:

Since the price of Bitcoin has been volatile in the past year, the strategy has achieved stable returns.
## Grid strategy backtest
The parameters required by the grid strategy are relatively complicated, and it is necessary to set the upper and lower limits of the grid, the grid type, the number of grids, and the investment funds. When the strategy is launched, the crypto currency price is 8,000 yuan, here the upper and lower limits of the grid are set to plus or minus 3,000 yuan, the total number of grids is 21, and all funds are invested. The parameters here are as follows:

Backtest results:

Profit curve:

## Comparison of two strategies
From the perspective of the floating income chart, the results of the two strategies are relatively similar. In the case of long-term sideways trading in Bitcoin prices, both have obtained stable returns and both have retracements at the same time. After all, the principles of the strategy are very close. Due to the different parameters, it is difficult to directly compare the two strategies. From the perspective of income/trading volume, the grid strategy is 18.6 and the balance strategy is 22.7. The balance strategy is more efficient.
However, the balance strategy is relatively rigid. In order to increase the transaction volume, apart from more frequent adjustments, it can only increase the total capital investment. The grid strategy has higher requirements for settings. If you choose a small range of price fluctuations, the funds per grid will be large and the transaction volume will be enlarged. If the price is always within the range, the profit will be high, of course. Face the danger of prices exceeding the set range. The balance strategy always has fiat currency to buy and sell crypto currency, which is equivalent to a grid that cannot be broken.
For beginners, the balance strategy is recommended: the operation is simple, you only need to set a single holding-ratio parameter, and it can run without further attention. Those with some experience can choose the grid strategy, deciding the upper and lower limits of the range and the funds per grid themselves, improving capital utilization and seeking the maximum benefit.
The balance strategy can choose to balance multiple currencies at the same time. There are also many variants of the grid strategy, such as proportional grids, infinite grids, etc., I will not elaborate on them here, and leave it to the reader to study.
## Strategy source code
There are many balance strategies on our Strategy Square. I recently wrote a new version with simple parameters that is easy to understand; it is the version used in this article. Source address: https://www.fmz.com/strategy/214943. Our platform also has other versions of the balance strategy: https://www.fmz.com/square/s:tag:balance/1 and https://www.fmz.com/strategy/345.
Many grid strategies have also been published: https://www.fmz.com/square/s:tag:Grid/1. Here is another one: https://www.fmz.com/strategy/113144.
From: https://www.fmz.com/digest-topic/5944 | fmzquant |
1,892,998 | Open Successfully iOS Simulator with React Native & Expo | There are some downsides in the process to run IOS Simulator using Expo, here is a little guide to... | 0 | 2024-06-19T01:01:11 | https://dev.to/arielmejiadev/open-successfully-ios-simulator-with-react-native-expo-472b | javascript, react, reactnative, expo | There are some downsides in the process to run IOS Simulator using Expo, here is a little guide to fix these issues:
## Install X Code
Here is a link to install [Xcode](https://apps.apple.com/us/app/xcode/id497799835).
## Install Xcode Command Line Tools
This is a tricky step: after the first installation it looks like the `Command Line Tools` are already selected, but they are not. You can find the setting in `Xcode > Preferences > Locations`, where you need to click the dropdown that appears and explicitly set the `Command Line Tools` path.

## Install Watchman
```
brew update
brew install watchman
```
## Run Expo
```
npx expo start
```
and press `i` to select `ios simulator`
## Fixes
- The easiest solution is to restart the computer.
- The second solution is to delete `node_modules` folder and `package-lock.json`, then run npm install, then run `npx expo start`.
## In Summary
The sections where most issues to run iOS Simulator are:
- The user has not selected `Command Line Tools` in Xcode.
- Sometimes previous attempts to make it run can produce issues; to avoid this, just delete `node_modules` and re-run `npm install`.
| arielmejiadev |
1,892,996 | Australian Permaculture Courses | Crystal Waters Eco Village near Maleny / Conondale has established its reputation as a centre for... | 0 | 2024-06-19T00:59:30 | https://dev.to/permaculturepdc/australian-permaculture-courses-16c3 | permaculture, permaculturecourse, react, tutorial | Crystal Waters Eco Village near Maleny / Conondale has established its reputation as a centre for Permaculture here in Queensland with its history going back over 35 years.
The PDC Permaculture Course offering at Crystal Waters includes several Presenters with decades of experience in Permaculture, Sustainable agriculture, Soils & Seeds, Organics and Design principles. The Presenters for this PDC Course include Robin Clayfield, Max Lindegger, Annaliese Hordern & a range of other great special guest presenters.
Permaculture is a design system that attempts to emulate nature in the design and experience that we have within our living environments. Encompassing a wide range of topics on ecological living, from energy efficiency to sustainable and organic agriculture, examining our relationship with water, sun, food and soil - to living in harmony with our environment.
The idea of permaculture as 'permanent agriculture' is evolving. In recent times there has been significant recognition of the need for more sustainable living and for improving our ability to live in harmony with Nature and become more sustainable in all aspects of our lives. Many people are now seeking solutions to modern "out of balance" living, and these courses and the experience here aim to help people of any age discover and design sustainable living strategies for their own lives.
https://crystalwaters.org.au/ | permaculturepdc |
1,892,991 | The Halting Problem | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-19T00:54:42 | https://dev.to/damari/the-halting-problem-2o4a | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
<!-- Explain a computer science concept in 256 characters or less. -->
Can a program determine if another program will finish running or loop forever? It's the Halting Problem. Alan Turing proved it's undecidable: no algorithm can solve it for all possible programs. This limits what computers can predict about other programs.
## Additional Context
<!-- Please share any additional context you think the judges should take into consideration as it relates to your One Byte Explainer. -->
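A compact sketch of Turing's diagonal argument in Python, assuming a hypothetical `halts` oracle that the proof then shows cannot exist:
```python
def halts(prog, data):
    """Hypothetical oracle: True if prog(data) eventually halts."""
    ...

def paradox(prog):
    if halts(prog, prog):
        while True:   # oracle says it halts -> loop forever
            pass
    return            # oracle says it loops -> halt immediately

# paradox(paradox) contradicts whatever halts() predicts,
# so no total halts() can exist.
```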
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
<!-- Thanks for participating! --> | damari |
1,892,982 | An Overview of Django: Comprehensive Guide to Django | Django is a powerful and popular web framework for building web applications quickly and efficiently.... | 0 | 2024-06-19T00:51:59 | https://dev.to/kihuni/comprehensive-guide-to-django-38ie | django, webdev, python, beginners | Django is a powerful and popular web framework for building web applications quickly and efficiently. Here’s a comprehensive guide to understanding Django:
<u>Introduction to Django</u>
**What is Django?**
Django is an open-source, high-level web framework written in Python that enables rapid development of secure and maintainable websites.
**History**
Django was created in 2003 to manage news websites for the Lawrence Journal-World newspaper and was released publicly under a BSD license in 2005.
**Naming**
It is named after Django Reinhardt, a famous jazz guitarist, reflecting its creators’ admiration for his music.
<u>Core Features</u>
**MVC Architecture**
Django follows the Model-View-Controller (MVC) architecture, though it refers to it as Model-View-Template (MVT).
**Object-Relational Mapping (ORM)**
Django’s ORM allows developers to interact with the database using Python code instead of SQL queries, making database operations simpler and more intuitive.
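For instance, here is a minimal sketch of what this looks like in practice; the `Book` model and its fields are invented purely for illustration:
```python
import datetime
from django.db import models

class Book(models.Model):
    title = models.CharField(max_length=200)
    published = models.DateField()

# Plain Python instead of hand-written SQL:
Book.objects.create(title="Django Basics", published=datetime.date(2024, 1, 1))
recent = Book.objects.filter(published__year=2024).order_by("title")
```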
**Admin Interface**
Django automatically generates an administrative interface for managing application data, significantly reducing the amount of code developers need to write.
**Form Handling**
Django simplifies form handling, including validation and processing, making it easier to create and manage web forms.
**Security**
Django provides built-in protection against common security threats such as SQL injection, cross-site scripting (XSS), and cross-site request forgery (CSRF).
<u>Components and Structure</u>
**Models**
Models define the data structure and schema in Django. Each model corresponds to a database table.
**Views**
Views contain the logic that processes user requests and returns the appropriate responses, such as rendering a webpage.
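A minimal function-based view might look like the sketch below; the model and template names continue the illustrative `Book` example:
```python
from django.shortcuts import render
from .models import Book

def book_list(request):
    # Query the database and hand the results to a template.
    books = Book.objects.all()
    return render(request, "books/list.html", {"books": books})
```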
**Templates**
Templates define the layout and structure of the user interface, separating design from business logic.
**URLs**
Django uses a clean and flexible URL configuration to route web requests to the appropriate view functions.
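A sketch of the corresponding `urls.py` (the route and names are illustrative):
```python
from django.urls import path
from . import views

urlpatterns = [
    # Requests to /books/ are routed to the book_list view.
    path("books/", views.book_list, name="book-list"),
]
```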
**Middleware**
Middleware components process requests globally before they reach the view or after the view processes them, allowing for functionalities like session management and authentication.
<u>Advantages of Using Django</u>
**Rapid Development**
Django’s features like ORM, templates, and the admin interface accelerate development and reduce boilerplate code.
**Scalability**
Django is designed to handle high-traffic websites and large data volumes efficiently.
**Versatility**
It is suitable for various types of web applications, including content management systems (CMS), social networks, and e-commerce sites.
**Community and Documentation**
Django has a large, active community and comprehensive documentation, providing extensive resources and support.
<u>Setting Up Django</u>
**Installation**
Install Django using Python’s package installer with the command:
```
pip install django
```
**Creating a Project**
Initialize a new Django project using the command:
```
django-admin startproject projectname
```
**Creating an App**
Within a project, create individual applications with:
```
python manage.py startapp appname
```
to encapsulate different functionalities.
<u>Database Integration</u>
**Default Database**
By default, Django uses SQLite for simplicity but supports other databases like PostgreSQL, MySQL, and Oracle.
**Migration System**
Django’s migration system tracks and applies changes to models and the database schema, ensuring consistency.
<u>Template System</u>
**Template Language**
Django’s template language allows embedding Python-like expressions within HTML to generate dynamic web content.
**Template Inheritance**
This feature allows for reusable and maintainable templates by extending base templates, and promoting DRY (Don't Repeat Yourself) principles.
<u>Authentication and Authorization</u>
**User Authentication**
Django includes built-in authentication and authorization systems that support user login, logout, password management, and permissions.
**Custom User Models**
Developers can extend or replace the default user model to fit specific application needs.
<u>REST Framework</u>
**Django REST Framework (DRF)**
DRF is an essential library for building powerful and flexible APIs, handling serialization, authentication, permissions, and viewsets to simplify the creation of RESTful APIs.
<u>Deployment</u>
**Deployment Platforms**
Django can be deployed on various platforms including Heroku, AWS, and Google Cloud.
**WSGI/ASGI**
Django supports both WSGI (Web Server Gateway Interface) for synchronous applications and ASGI (Asynchronous Server Gateway Interface) for asynchronous applications.
<u>Popular Django Packages</u>
**Django Allauth**
Provides comprehensive user registration and authentication functionalities.
**Django Celery**
Integrates Celery for handling asynchronous tasks and scheduling.
**Django Channels**
Adds WebSocket support to Django, enabling real-time communication features.
<u>Best Practices</u>
**Project Structure**
Maintain a well-organized project structure by separating different functionalities into individual apps.
**Environment Variables**
Use environment variables for managing sensitive settings like secret keys and database configurations.
**Testing**
Write tests for your applications to ensure reliability and maintainability. Django includes a built-in test framework.
<u>Community and Resources</u>
**Django Project Website**
The official site provides extensive documentation and resources: [djangoproject.com](https://www.djangoproject.com/).
**Django Packages**
A directory of reusable apps, sites, tools, and more: [djangopackages.org](https://djangopackages.org/).
**Django Forum**
A place for discussion and questions: [forum.djangoproject.com](https://forum.djangoproject.com/).
**GitHub**
The source code is available on GitHub: [github.com/django/django](https://github.com/django/django).
<u>**References**</u>
- Django Documentation: [docs.djangoproject.com](https://docs.djangoproject.com/en/5.0/)
- Django Project on GitHub: [github.com/django/django](https://github.com/django/django)
- "Two Scoops of Django" by Daniel Roy Greenfeld and Audrey Roy Greenfeld (Book)
In conclusion, this guide provides a solid foundation for understanding Django, suitable for beginners and experienced developers alike. | kihuni |
1,892,995 | Travelling Salesman Problem (TSP) | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-19T00:47:26 | https://dev.to/damari/travelling-salesman-problem-tsp-5dc0 | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
<!-- Explain a computer science concept in 256 characters or less. -->
Can a salesman visit a set of cities, each once, and return to the start with the shortest route? This is an NP-hard problem, meaning no known efficient algorithm exists for large numbers of cities. It impacts logistics, DNA sequencing, and more.
## Additional Context
<!-- Please share any additional context you think the judges should take into consideration as it relates to your One Byte Explainer. -->
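For intuition, here is a brute-force Python sketch that checks every possible tour; the `dist` distance matrix is an assumed input:
```python
from itertools import permutations

def tsp_brute_force(dist):
    """Return the length of the shortest round trip over all cities."""
    n = len(dist)
    best = None
    for perm in permutations(range(1, n)):       # fix city 0 as the start
        tour = (0,) + perm + (0,)
        length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
        if best is None or length < best:
            best = length
    return best

# O(n!) time: feasible for ~10 cities, hopeless beyond that,
# which is exactly why TSP is considered hard.
```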
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
<!-- Thanks for participating! --> | damari |
1,892,992 | Aya Rust Tutorial part 5: Using Maps | © steve latif Welcome to part 5. So far we have created a basic hello world program in Part... | 0 | 2024-06-19T00:38:30 | https://dev.to/stevelatif/aya-rust-tutorial-part-5-using-maps-1boe | ebpf, networking, linux, rust | <p>© steve latif </p>
<p>Welcome to part 5. So far we have created a basic hello world program in Part <a href="https://dev.to/stevelatif/aya-rust-tutorial-part-four-xdp-hello-world-4c85">Four</a>.
In this chapter we will start looking at how to pass data
between the kernel and user space using Maps.</p>
<h1>Overview: What is a Map and why do we need them ?</h1>
<p>The eBPF verifier enforces a 512 byte limit per stack frame,
if you need to handle more data you can store data using
<a href="https://docs.kernel.org/bpf/maps.html">maps</a></p>
<ul>
<li>Maps are created on the kernel code</li>
<li>Maps are accessible from user space using a system call and a key</li>
<li>Maps allow persistence across program invocations</li>
<li>Maps offer different storage types</li>
</ul>
<p>Maps in eBPF are a basic building block and we will be using them extensively
in the next sections. Our first example will build on the previous
example and will be a packet counter using an array.</p>
<p>Let's take a minute to look at the kernel code and
see the definitions there.
Maps are defined in <a href="https://elixir.bootlin.com/linux/latest/source/tools/lib/bpf/libbpf.c#L511">libbpf.h</a> </p>
<pre><code class="language-c">struct bpf<em>map</em>def {
unsigned int type;
unsigned int key_size;
unsigned int value_size;
unsigned int max_entries;
unsigned int map_flags;
};
</code></pre>
<p>The different map types:</p>
<ul>
<li>BPF<em>MAP_TYPE</em>ARRAY </li>
<li>BPF<em>MAP_TYPE_PERCPU</em>ARRAY </li>
<li>BPF<em>MAP_TYPE_PROG</em>ARRAY </li>
<li>BPF<em>MAP_TYPE_PERF_EVENT</em>ARRAY </li>
<li>BPF<em>MAP_TYPE_CGROUP</em>ARRAY </li>
<li>BPF<em>MAP_TYPE_CGROUP</em>STORAGE </li>
<li>BPF<em>MAP_TYPE_CGROUP</em>STORAGE </li>
<li>BPF<em>MAP_TYPE_PERCPU_CGROUP</em>STORAGE</li>
<li>BPF<em>MAP_TYPE</em>HASH </li>
<li>BPF<em>MAP_TYPE_PERCPU</em>HASH </li>
<li>BPF<em>MAP_TYPE_LRU</em>HASH </li>
<li>BPF<em>MAP_TYPE_LRU_PERCPU</em>HASH </li>
<li>BPF<em>MAP_TYPE_LPM</em>TRIE </li>
<li>BPF<em>MAP_TYPE_STACK</em>TRACE </li>
<li>BPF<em>MAP_TYPE_ARRAY_OF</em>MAPS </li>
<li>BPF<em>MAP_TYPE_HASH_OF</em>MAPS </li>
<li>BPF<em>MAP_TYPE_INODE</em>STORAGE </li>
<li>BPF<em>MAP_TYPE_TASK</em>STORAGE </li>
<li>BPF<em>MAP_TYPE</em>DEVMAP </li>
<li>BPF<em>MAP_TYPE_DEVMAP</em>HASH </li>
<li>BPF<em>MAP_TYPE_SK</em>STORAGE </li>
<li>BPF<em>MAP_TYPE</em>CPUMAP </li>
<li>BPF<em>MAP_TYPE</em>XSKMAP </li>
<li>BPF<em>MAP_TYPE</em>SOCKMAP </li>
<li>BPF<em>MAP_TYPE</em>SOCKHASH </li>
<li>BPF<em>MAP_TYPE_REUSEPORT</em>SOCKARRAY </li>
<li>BPF<em>MAP_TYPE</em>QUEUE </li>
<li>BPF<em>MAP_TYPE</em>STACK </li>
<li>BPF<em>MAP_TYPE_STRUCT</em>OPS </li>
<li>BPF<em>MAP_TYPE</em>RINGBUF </li>
<li>BPF<em>MAP_TYPE_BLOOM</em>FILTER </li>
<li>BPF<em>MAP_TYPE_USER</em>RINGBUF </li>
<li>BPF<em>MAP_TYPE</em>ARENA</li>
</ul>
<p>are defined in <a href="https://elixir.bootlin.com/linux/latest/source/include/linux/bpf_types.h#L87">map</a></p>
<p>The corresponding aya definitions are documented <a href="https://docs.aya-rs.dev/aya_ebpf/maps/">here</a>
for the kernel side.
The corresponding user space entries are <a href="https://docs.aya-rs.dev/aya/maps/">here</a></p>
<p>Our initial example will be a simple per CPU packet counter that will print out
from user space the number of packets arriving at an interface </p>
<p>In the C API helper functions are used on both the kernel and user space side.
The helpers have the same names on both sides.
The kernel side helper functions have access to a pointer to the map and key.</p>
<p>On the user space side the helper functions use a syscall and file descriptor. </p>
<p>In Aya on the kernel <a href="https://docs.aya-rs.dev/aya_ebpf/maps/">side</a></p>
<ul>
<li>array::Array</li>
<li>bloom_filter::BloomFilter</li>
<li>hash_map::HashMap</li>
<li>hash_map::LruHashMap</li>
<li>hash_map::LruPerCpuHashMap</li>
<li>hash_map::PerCpuHashMap</li>
<li>lpm_trie::LpmTrie</li>
<li>per<em>cpu</em>array::PerCpuArray</li>
<li>perf::PerfEventArray</li>
<li>perf::PerfEventByteArray</li>
<li>program_array::ProgramArray</li>
<li>queue::Queue</li>
<li>ring_buf::RingBuf</li>
<li>sock_hash::SockHash</li>
<li>sock_map::SockMap</li>
<li>stack::Stack</li>
<li>stack_trace::StackTrace</li>
<li>xdp::CpuMap</li>
<li>xdp::DevMap</li>
<li>xdp::DevMapHash</li>
<li>xdp::XskMap</li>
</ul>
<p>While on the user space <a href="https://docs.aya-rs.dev/aya_ebpf/maps/">side</a>
We have the same function list.</p>
<p>As before generate the code from the template using the command</p>
<pre><code class="language-shell"> cargo generate https://github.com/aya-rs/aya-template
</code></pre>
<p>I called the project <code>xdp-map-counter</code></p>
<p>Lets set up the packet counter, on the eBPF side:</p>
<pre><code class="language-rust">#![no_std]
#![no_main]
use aya<em>ebpf::{bindings::xdp</em>action,
macros::{xdp, map},
programs::XdpContext,
maps::PerCpuArray,
};
const CPU_CORES: u32 = 16;
#[map(name="PKT<em>CNT</em>ARRAY")]
static mut PACKET<em>COUNTER: PERCPU<u32> = PerCpuArray::with_max_entries(CPU</em>CORES , 0);
#[xdp]
pub fn xdp<em>map_counter(</em>ctx: XdpContext) -> u32 {
match try<em>xdp_map</em>counter() {
Ok(ret) => ret,
Err(<em>) => xdp_action::XDP</em>ABORTED,
}
}
#[inline(always)]
fn try<em>xdp_map</em>counter() -> Result<u32, ()> {
unsafe {
let counter = PACKET_COUNTER
.get<em>ptr</em>mut(0)
.ok_or(())? ;
*counter += 1;
}
Ok(xdp<em>action::XDP</em>PASS)
}
#[panic_handler]
fn panic(_info: &core::panic::PanicInfo) -> ! {
unsafe { core::hint::unreachable_unchecked() }
}
</code></pre>
<p>The map will be created on the eBPF side. We will use a PerCpuArray in
this first example. Arrays are simple to work with. With the PerCpuArray
each CPU sees its own instance of the map, this means that it avoids
lock contention and is therefore the most performant way to get
readings from eBPF to user land. The downside is that updating the
values from user space can't be done safely.</p>
<p>The size of array must be known at build time so we set a constant with an
upper bound on the number of cores on the system
<code>const CPU_CORES: u32 = 16</code></p>
<p>Then we can define a PerCpuArray with <code>CPU_CORES</code> entries initialized to 0.</p>
<pre><code class="language-rust">#[map(name="PKT<em>CNT</em>ARRAY")]
static mut PACKET<em>COUNTER: PerCpuArray<u32> = PerCpuArray::with_max_entries(CPU</em>CORES, 0);
</code></pre>
<p>The main work is in the <code>try<em>xdp</em>counter</code> function.
We get a pointer to the map and then increment the value</p>
<pre><code class="language-rust"> unsafe {
let counter = PACKET_COUNTER
.get<em>ptr</em>mut(0)
.ok_or(())? ;
*counter += 1;
}
</code></pre>
<p>Note that the call to <code>ok_or()</code> is required, failing to have the check
here will fail the eBPF verifier. </p>
<p>The packet is then passed on to the networking stack.</p>
<pre><code class="language-rust"> Ok(xdp<em>action::XDP</em>PASS)
</code></pre>
<p>The code on the user space side:</p>
<pre><code class="language-rust">use anyhow::Context;
use aya::programs::{Xdp, XdpFlags};
use aya::{include<em>bytes</em>aligned, Bpf};
use aya::maps::PerCpuValues;
use aya::maps::PerCpuArray;
use aya_log::BpfLogger;
use clap::Parser;
use log::{warn, debug};
use aya::util::nr_cpus;
//use tokio::signal;
#[derive(Debug, Parser)]
struct Opt {
#[clap(short, long, default_value = "eth0")]
iface: String,
}
#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
let opt = Opt::parse();
env_logger::init();
// Bump the memlock rlimit. This is needed for older kernels that don't use the
// new memcg based accounting, see https://lwn.net/Articles/837122/
let rlim = libc::rlimit {
rlim<em>cur: libc::RLIM</em>INFINITY,
rlim<em>max: libc::RLIM</em>INFINITY,
};
let ret = unsafe { libc::setrlimit(libc::RLIMIT_MEMLOCK, &rlim) };
if ret != 0 {
debug!("remove limit on locked memory failed, ret is: {}", ret);
}
// This will include your eBPF object file as raw bytes at compile-time and load it at
// runtime. This approach is recommended for most real-world use cases. If you would
// like to specify the eBPF program at runtime rather than at compile-time, you can
// reach for `Bpf::load_file` instead.
#[cfg(debug_assertions)]
let mut bpf = Bpf::load(include<em>bytes</em>aligned!(
"../../target/bpfel-unknown-none/debug/xdp-map-counter"
))?;
#[cfg(not(debug_assertions))]
let mut bpf = Bpf::load(include<em>bytes</em>aligned!(
"../../target/bpfel-unknown-none/release/xdp-map-counter"
))?;
if let Err(e) = BpfLogger::init(&mut bpf) {
// This can happen if you remove all log statements from your eBPF program.
warn!("failed to initialize eBPF logger: {}", e);
}
let program: &mut Xdp = bpf.program<em>mut("xdp_map_counter").unwrap().try</em>into()?;
program.load()?;
program.attach(&opt.iface, XdpFlags::default())
.context("failed to attach the XDP program with default flags - try changing XdpFlags::default() to XdpFlags::SKB_MODE")?;
let array = PerCpuArray::try<em>from(bpf.map_mut("PKT_CNT</em>ARRAY").unwrap())?;
loop {
let cc: PerCpuValues<u32> = array.get(&0, 0)?;
let mut total : u32 = 0;
//println!("{:?} packets", cc);
for ii in 1..nr_cpus().expect("failed to get number of cpus") {
print!("{} ", cc[ii]);
total += cc[ii];
}
println!("total: {} ", total);
std::thread::sleep(std::time::Duration::from_secs(1));
}
//signal::ctrl_c().await?;
}
</code></pre>
<p>The array reference is created on the user space side here with name
'PKT<em>CNT</em>ARRAY' </p>
<pre><code class="language-rust"> let array = PerCpuArray::try<em>from(bpf.map_mut("PKT_CNT</em>ARRAY").unwrap())?;
</code></pre>
<p>and must match the name declared in the eBPF code </p>
<pre><code class="language-rust">#[map(name="PKT<em>CNT</em>ARRAY")]
static mut COUNTER: PerCpuArray<u32> = PerCpuArray::with<em>max_entries(CPU</em>CORES , 0);
</code></pre>
<p>Most of the rest of the code is boilerplate, except for the loop at the end, which polls every
second for results from the kernel eBPF code and then prints out the stats.</p>
<pre><code class="language-rust"> loop {
let cc: PerCpuValues<u32> = array.get(&0, 0)?;
let mut total : u32 = 0;
for ii in 1..nr_cpus().expect("failed to get number of cpus") {
print!("{} ", cc[ii]);
total += cc[ii];
}
println!("total: {} ", total);
std::thread::sleep(std::time::Duration::from_secs(1));
}
</code></pre>
<h2>Testing</h2>
<p>As we did before, we can run it over the loopback interface.
Build as before:</p>
<pre><code class="language-shell">cargo xtask build-ebpf
cargo build
</code></pre>
<p>Then run</p>
<pre><code class="language-shell">cargo xtask run -- -i lo
</code></pre>
<p>In another terminal ping the loopback interface</p>
<pre><code class="language-shell">
ping 127.0.0.1
</code></pre>
<p>In the terminal where you are running the <code>cargo xtask run</code> command you should see: </p>
<pre><code class="language-shell">0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 total: 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 total: 0
0 0 2 0 0 0 0 0 0 0 0 0 0 0 0 total: 2
0 0 4 0 0 0 0 0 0 0 0 0 0 0 0 total: 4
0 0 6 0 0 0 0 0 0 0 0 0 0 0 0 total: 6
0 0 8 0 0 0 0 0 0 0 0 0 0 0 0 total: 8
0 0 10 0 0 0 0 0 0 0 0 0 0 0 0 total: 10
</code></pre>
<p>We can see packets arriving and being processed on one core.
To run a more stressful test, replace the ping with the
following ssh command:</p>
<pre><code class="language-shell"> ssh 127.0.0.1 cat /dev/zero
</code></pre>
<p>You should see something like:</p>
<pre><code class="language-shell"> 0 0 0 0 0 0 0 0 0 0 0 0 0 0 total: 0
0 7 6 2 0 0 0 0 0 0 0 5 0 0 0 total: 20
0 7 6 2 0 0 0 0 0 0 0 6 0 0 0 total: 21
0 7 6 2 0 0 0 0 0 0 0 6 0 0 0 total: 21
0 7 6 2 0 0 0 0 0 0 0 6 0 0 0 total: 21
0 7 6 2 0 0 0 0 0 0 0 6 0 0 0 total: 21
0 48 12 2 0 16 0 0 0 0 0 8 0 702 0 total: 788
0 48 133 12 0 527 0 0 0 5 0 8 94 1978 558 total: 3363
0 48 133 243 0 527 0 17 0 5 0 8 94 1978 3179 total: 6232
0 48 133 243 0 645 0 144 64 23 0 8 94 1978 5800 total: 9180
0 48 133 243 0 733 0 201 64 203 2 8 94 1978 8406 total: 12113
0 48 133 243 0 903 0 333 64 228 2 8 94 1978 11027 total: 15061
0 48 133 243 0 1447 0 548 64 237 2 8 94 1978 13136 total: 17938
0 368 133 243 0 3908 0 548 64 237 2 8 94 1978 13136 total: 20719
0 595 133 416 0 6529 0 548 64 237 2 8 94 1978 13136 total: 23740
0 683 133 674 0 9294 0 548 64 237 2 8 94 1978 13136 total: 26851
0 854 133 927 0 11544 0 548 64 296 2 440 94 1978 13136 total: 30016
...
</code></pre>
<p>Let it run for a few iterations before stopping it with Ctrl-C.</p>
<p>As before, let's take a minute to look at the eBPF byte code, which corresponds to
the code in the XDP section of <code>xdp-map-counter/xdp-map-counter-ebpf/src/main.rs</code>.</p>
<pre><code class="language-rust">fn try<em>xdp_map</em>counter() -> Result<u32, ()> {
unsafe {
let counter = COUNTER
.get<em>ptr</em>mut(0)
.ok_or(())? ;
*counter += 1;
}
Ok(xdp<em>action::XDP</em>PASS)
}
</code></pre>
<p>Using <code>llvm-objdump</code> as before</p>
<pre><code class="language-shell">$ llvm-obj dump --section=xdp -S target/bpfel-unknown-none/debug/xdp-map-counter
</code></pre>
<pre><code class="language-asm">target/bpfel-unknown-none/debug/xdp-map-counter: file format elf64-bpf
Disassembly of section xdp:
0000000000000000 <xdp_map_counter>:
0: b7 06 00 00 00 00 00 00 r6 = 0
1: 63 6a fc ff 00 00 00 00 *(u32 *)(r10 - 4) = r6
2: bf a2 00 00 00 00 00 00 r2 = r10
3: 07 02 00 00 fc ff ff ff r2 += -4
4: 18 01 00 00 00 00 00 00 00 00 00 00 00 00 00 00 r1 = 0 ll
6: 85 00 00 00 01 00 00 00 call 1
7: 15 00 04 00 00 00 00 00 if r0 == 0 goto +4 <LBB0_2>
8: 61 01 00 00 00 00 00 00 r1 = *(u32 *)(r0 + 0)
9: 07 01 00 00 01 00 00 00 r1 += 1
10: 63 10 00 00 00 00 00 00 *(u32 *)(r0 + 0) = r1
11: b7 06 00 00 02 00 00 00 r6 = 2
0000000000000060 <LBB0_2>:
12: bf 60 00 00 00 00 00 00 r0 = r6
13: 95 00 00 00 00 00 00 00 exit
</code></pre>
<p>We see that the byte code maps closely to the Rust code.
After setting up parameters in the first four instructions, line 6
has a <code>call 1</code>, which invokes the BPF helper
<code>map_lookup_elem</code>.
The looked-up value is loaded into r1 (line 8), incremented (line 9),
and written back to its location in memory (line 10).</p>
| stevelatif |
1,892,989 | Introduction to the Internet of Things (IoT) Security | Introduction The Internet of Things (IoT) is a rapidly growing network of interconnected... | 0 | 2024-06-19T00:33:06 | https://dev.to/kartikmehta8/introduction-to-the-internet-of-things-iot-security-1n27 | javascript, beginners, programming, tutorial | ## Introduction
The Internet of Things (IoT) is a rapidly growing network of interconnected devices that use sensors and software to exchange data over the internet. It has revolutionized the way we live and work, making our lives more convenient and efficient. However, with this increased connectivity comes the need for security measures to ensure personal information and sensitive data remain protected. In this article, we will discuss the importance of IoT security and its advantages, disadvantages, and key features.
## Advantages of IoT Security
1. **Protection of Personal Information:** IoT security ensures the protection of personal information, such as financial data and medical records, from hackers and cybercriminals.
2. **Prevention of Unauthorized Access:** It prevents unauthorized access and can detect and neutralize potential threats.
3. **Peace of Mind:** IoT security can also provide users with a sense of control and peace of mind, knowing that their devices and data are secure.
## Disadvantages of IoT Security
1. **High Costs:** Implementing IoT security can be expensive and complex, making it challenging for smaller companies and individuals to afford.
2. **Constant Maintenance:** Constant updates and maintenance are required to keep up with ever-evolving cybersecurity threats.
3. **Performance Issues:** Incorporating security measures may also slow down the speed and performance of devices.
## Key Features of IoT Security
1. **Encryption:** Encryption protects data by encoding it into unreadable ciphertext that only holders of the key can decode (see the code sketch after this list).
```plaintext
Data before encryption: Hello World
Data after encryption: XUR0AB129
```
2. **Authentication:** Ensures that only authorized users can access devices and data.
```plaintext
Example of authentication process:
User logs in -> System verifies credentials -> Access granted/denied
```
3. **Access Control:** Limits the number of devices that can connect to the network.
```plaintext
Example of access control settings:
Device A allowed: Yes
Device B allowed: No
```
4. **Device Monitoring:** Monitors activity and notifies users of any potential threats or breaches.
```plaintext
Example of device monitoring alert:
Alert: Unauthorized access detected on Device C at 3:45 PM
```
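Here is the code sketch promised in the encryption item above: a minimal example of the idea using Node.js's built-in `crypto` module (key distribution is out of scope and simplified here):
```javascript
const crypto = require('crypto');

const key = crypto.randomBytes(32); // 256-bit secret key, shared with the receiver out-of-band
const iv = crypto.randomBytes(12); // unique initialization vector per message

const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
const ciphertext = Buffer.concat([cipher.update('Hello World', 'utf8'), cipher.final()]);
const authTag = cipher.getAuthTag(); // lets the receiver detect tampering

console.log(ciphertext.toString('hex')); // unreadable without the key
```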
## Conclusion
In conclusion, the Internet of Things has brought about numerous advancements and conveniences, but it is crucial to prioritize security when it comes to connected devices. With proper implementation of IoT security measures, we can fully reap the benefits of this technology while safeguarding our personal information and sensitive data from potential cyber threats.
| kartikmehta8 |
1,892,988 | K8sGPT + Ollama - A Free Kubernetes Automated Diagnostic Solution | I checked my blog drafts over the weekend and found this one. I remember writing it with "Kubernetes... | 0 | 2024-06-19T00:24:34 | https://dev.to/addozhang/k8sgpt-ollama-a-free-kubernetes-automated-diagnostic-solution-3c8o | k8s, k8sgpt, ollama | I checked my blog drafts over the weekend and found this one. I remember writing it with "Kubernetes Automated Diagnosis Tool: k8sgpt-operator"(posted in Chinese) about a year ago. My procrastination seems to have reached a critical level. Initially, I planned to use K8sGPT + [LocalAI](https://localai.io). However, after trying [Ollama](https://ollama.com), I found it more user-friendly. Ollama also supports the [OpenAI API](https://github.com/ollama/ollama/blob/main/docs/openai.md), so I decided to switch to using Ollama.
---
After publishing the article introducing k8sgpt-operator, some readers mentioned the high barrier to entry for using OpenAI. This issue is indeed challenging but not insurmountable. However, this article is not about solving that problem but introducing an alternative to OpenAI: Ollama. Late last year, [k8sgpt entered the CNCF Sandbox](https://landscape.cncf.io/?item=observability-and-analysis--observability--k8sgpt).
### 1. Installing Ollama

Ollama is an open-source large model tool that allows you to easily install and run [various large models](https://ollama.com/library) locally or in the cloud. It is very user-friendly and can be run with simple commands. On macOS, you can install it with a single command using homebrew:
```sh
brew install ollama
```
The latest version is 0.1.44.
```sh
ollama -v
Warning: could not connect to a running Ollama instance
Warning: client version is 0.1.44
```
On Linux, you can also install it with the official script.
```sh
curl -sSL https://ollama.com/install.sh | sh
```
Start Ollama and set the listening address to `0.0.0.0` through an environment variable to allow access from containers or K8s clusters.
```sh
OLLAMA_HOST=0.0.0.0 ollama start
...
time=2024-06-16T07:54:57.329+08:00 level=INFO source=routes.go:1057 msg="Listening on 127.0.0.1:11434 (version 0.1.44)"
time=2024-06-16T07:54:57.329+08:00 level=INFO source=payload.go:30 msg="extracting embedded files" dir=/var/folders/9p/2tp6g0896715zst_bfkynff00000gn/T/ollama1722873865/runners
time=2024-06-16T07:54:57.346+08:00 level=INFO source=payload.go:44 msg="Dynamic LLM libraries [metal]"
time=2024-06-16T07:54:57.385+08:00 level=INFO source=types.go:71 msg="inference compute" id=0 library=metal compute="" driver=0.0 name="" total="21.3 GiB" available="21.3 GiB"
```
### 2. Downloading and Running Large Models
Llama3, one of the popular large models, was open-sourced by Meta in April. Llama3 has two versions: 8B and 70B.
I am running it on macOS, so I chose the 8B version. The 8B version is 4.7 GB, and it takes 3-4 minutes to download with a fast internet connection.
```sh
ollama run llama3
```
On my M1 Pro with 32GB of memory, it takes about 12 seconds to start.
```
time=2024-06-17T09:30:25.070+08:00 level=INFO source=server.go:572 msg="llama runner started in 12.58 seconds"
```
Each query takes about 14 seconds.
```
curl http://localhost:11434/api/generate -d '{
"model": "llama3",
"prompt": "Why is the sky blue?",
"stream": false
}'
....
"total_duration":14064009500,"load_duration":1605750,"prompt_eval_duration":166998000,"eval_count":419,"eval_duration":13894579000}
```
### 3. Configuring K8sGPT CLI Backend
If you want to test k8sgpt-operator, you can skip this step.
We will use the Ollama REST API as the backend for k8sgpt, serving as the inference provider. Here, we select the backend type as `localai` because [LocalAI](https://localai.io) is compatible with the OpenAI API, and the actual provider will still be Ollama running Llama.
```sh
k8sgpt auth add --backend localai --model llama3 --baseurl http://localhost:11434/v1
```
Set it as the default provider.
```sh
k8sgpt auth default --provider localai
Default provider set to localai
```
**Testing:**
Create a pod in k8s using the image `image-not-exist`.
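One way to create such a pod (the name `k8sgpt-test` matches the output below):
```sh
kubectl run k8sgpt-test --image=image-not-exist
```
The pod immediately fails to pull the image: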
```sh
kubectl get po k8sgpt-test
NAME READY STATUS RESTARTS AGE
k8sgpt-test 0/1 ErrImagePull 0 6s
```
Use k8sgpt to analyze the error.
```sh
k8sgpt analyze --explain --filter=Pod --namespace=default --output=json
{
"provider": "localai",
"errors": null,
"status": "ProblemDetected",
"problems": 1,
"results": [
{
"kind": "Pod",
"name": "default/k8sgpt-test",
"error": [
{
"Text": "Back-off pulling image \"image-not-exist\"",
"KubernetesDoc": "",
"Sensitive": []
}
],
"details": "Error: Back-off pulling image \"image-not-exist\"\n\nSolution: \n1. Check if the image exists on Docker Hub or your local registry.\n2. If not, create the image using a Dockerfile and build it.\n3. If the image exists, check the spelling and try again.\n4. Verify the image repository URL in your Kubernetes configuration file (e.g., deployment.yaml).",
"parentObject": ""
}
]
}
```
### 4. Deploying and Configuring k8sgpt-operator
k8sgpt-operator can automate k8sgpt in the cluster. You can install it using Helm.
```shell
helm repo add k8sgpt https://charts.k8sgpt.ai/
helm repo update
helm install release k8sgpt/k8sgpt-operator -n k8sgpt --create-namespace
```
k8sgpt-operator provides two CRDs: `K8sGPT` to configure k8sgpt and `Result` to output analysis results.
```shell
kubectl api-resources | grep -i gpt
k8sgpts core.k8sgpt.ai/v1alpha1 true K8sGPT
results core.k8sgpt.ai/v1alpha1 true Result
```
Configure `K8sGPT`, using Ollama's IP address for `baseUrl`.
```shell
kubectl apply -n k8sgpt -f - << EOF
apiVersion: core.k8sgpt.ai/v1alpha1
kind: K8sGPT
metadata:
name: k8sgpt-ollama
spec:
ai:
enabled: true
model: llama3
backend: localai
baseUrl: http://198.19.249.3:11434/v1
noCache: false
filters: ["Pod"]
repository: ghcr.io/k8sgpt-ai/k8sgpt
version: v0.3.8
EOF
```
After creating the `K8sGPT` CR, the operator will automatically create a pod for it. Checking the `Result` CR will show the same results.
```sh
kubectl get result -n k8sgpt -o jsonpath='{.items[].spec}' | jq .
{
"backend": "localai",
"details": "Error: Kubernetes is unable to pull the image \"image-not-exist\" due to it not existing.\n\nSolution: \n1. Check if the image actually exists.\n2. If not, create the image or use an alternative one.\n3. If the image does exist, ensure that the Docker daemon and registry are properly configured.",
"error": [
{
"text": "Back-off pulling image \"image-not-exist\""
}
],
"kind": "Pod",
"name": "default/k8sgpt-test",
"parentObject": ""
}
```
| addozhang |
1,884,290 | Storybook Installation Tutorial with Tailwind | Installing Storybook In your project folder, run the command in the terminal: npx... | 0 | 2024-06-19T00:10:31 | https://dev.to/gustavoacaetano/tutorial-de-instalacao-do-storybook-com-tailwind-324l | storybook, ledscommunity, tailwindcss | ## Installing Storybook
In your project folder, run the following command in the terminal:
```
npx storybook@latest init
```
You should see the following text in the terminal:
```
Need to install the following packages:
storybook@8.1.10
Ok to proceed? (y)
```
Answer with `y`.
Storybook should detect whether your project uses `Vite` or `Webpack`:
```
Adding Storybook support to your "Vue 3" app
• Detected Vite project.
Setting builder to Vite.
```
If that doesn't happen, select the tool your project uses from the options shown in the terminal.
## Installing Tailwind
In the project folder, install Tailwind and the other dependencies:
```
npm install -D tailwindcss@latest postcss@latest autoprefixer@latest
```
Use the following command to generate the `tailwind.config.js` and `postcss.config.js` files:
```
npx tailwindcss init -p
```
`tailwind.config.js` file:
```
export default {
content: [],
theme: {
extend: {},
},
plugins: [],
}
```
Change the second line of the file to:
```
content: ['./index.html', './src/**/*.{vue,js,ts,jsx,tsx}'],
```
NOTE: Make sure the path `'./src/**/*.{vue,js,ts,jsx,tsx}'` matches your project's file structure!
`postcss.config.js` file:
```
export default {
plugins: {
tailwindcss: {},
autoprefixer: {},
},
}
```
Create the `./src/index.css` file and include the Tailwind `base`, `components`, and `utilities` styles.
`./src/index.css` file:
```
@tailwind base;
@tailwind components;
@tailwind utilities;
```
Now import `./src/index.css` in `./src/main.js`:
```
import './index.css'
```
## Integrating Tailwind and Storybook
If you're using Webpack, run the command:
```
npx storybook@latest add @storybook/addon-styling-webpack
```
In all cases (Vite or Webpack), go to the `.storybook/preview.js` file and add:
```
import '../src/index.css';
```
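To confirm the integration, a minimal story sketch like the one below (the file name and markup are illustrative, assuming a Vue 3 setup) should render with Tailwind styles applied:
```
// src/stories/TailwindCheck.stories.js - illustrative sanity check
export default {
  title: 'TailwindCheck',
};

export const Primary = () => ({
  template:
    '<button class="rounded bg-blue-500 px-4 py-2 font-bold text-white">Tailwind OK</button>',
});
```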
And with that, your stories will be integrated with Tailwind! | gustavoacaetano |
1,892,984 | Iris Amabile Beaudeau is a Shoplifter | This is a reminder that Iris Amabile Beaudeau is a shoplifter. Iris Beaudeau was at an electronics... | 0 | 2024-06-19T00:00:41 | https://dev.to/verabernstein/iris-amabile-beaudeau-is-a-shoplifter-3cio | This is a reminder that Iris Amabile Beaudeau is a shoplifter. Iris Beaudeau was at an electronics store when she placed an item in her wool tights. A female employee tried to stop her. Father Jonathan Beaudeau shoved the woman and screamed "don't you dare touch my little girl!" Iris laughed and said "what you gonna do? I'm 7 years old."
Iris Beaudeau has a history of violence. Last year, she was caught throwing rocks at newborn infants. At school, she is called "Iris the Virus" because of an STD from the father during birth. I was at a restaurant last year when Iris was misbehaving. Jonathan Beaudeau spanked her over wool tights in public. It was disgraceful.
Iris Beaudeau resides in Florida, the address can be found on google. Birthday is June 21st. Normally the grandmother Chantal Beaudeau goes to Florida on birthdays.
 | verabernstein | |
1,812,679 | Welcome Thread - v281 | Leave a comment below to introduce yourself! You can talk about what brought you here, what... | 0 | 2024-06-19T00:00:00 | https://dev.to/devteam/welcome-thread-v281-5fc7 | welcome | ---
published_at: 2024-06-19 00:00 +0000
---

---
1. Leave a comment below to introduce yourself! You can talk about what brought you here, what you're learning, or just a fun fact about yourself.
2. Reply to someone's comment, either with a question or just a hello. 👋
3. Elevate your writing skills on DEV using our [Best Practices](https://dev.to/devteam/best-practices-for-writing-on-dev-creating-a-series-2bgj) series! | sloan |
1,893,294 | Dynamic Nested Forms with Rails and Stimulus | I've been away from Ruby on Rails for a couple of years. And in that time, Rails introduced Hotwire,... | 0 | 2024-06-19T12:32:30 | https://jonathanyeong.com/rails-stimulus-dynamic-nested-form/ | tutorial, ruby, rails | ---
title: Dynamic Nested Forms with Rails and Stimulus
published: true
date: 2024-06-19 00:00:00 UTC
tags: tutorial, ruby, rails
canonical_url: https://jonathanyeong.com/rails-stimulus-dynamic-nested-form/
---
I've been away from Ruby on Rails for a [couple of years](https://jonathanyeong.com/what-i-missed-about-ruby/). And in that time, Rails introduced [Hotwire](https://hotwired.dev/) and [Stimulus](https://stimulus.hotwired.dev/). In this post, I extend my [Rails nested form tutorial](https://jonathanyeong.com/what-i-missed-about-ruby/) by adding a button to dynamically add new nested form elements using a [Stimulus Nested Form component](https://www.stimulus-components.com/docs/stimulus-rails-nested-form/).
We're going all in on Rails, so I'll be using the default [importmap JS package manager](https://github.com/rails/importmap-rails).
## Set up
Let's first install the Stimulus nested form component
```bash
bin/importmap pin @stimulus-components/rails-nested-form
```
Update `app/javascript/controllers/application.js` to register the stimulus component.
```javascript
import { Application } from "@hotwired/stimulus" // Should exist already
import RailsNestedForm from '@stimulus-components/rails-nested-form'
const application = Application.start() // Should exist already
application.register('nested-form', RailsNestedForm)
...
```
Let's modify our [previous form](https://jonathanyeong.com/nested-forms-in-rails/) by moving our nested fields into a partial. First create the partial `app/views/training_sessions/_training_step_fields.html.erb` and add the following code:
```erb
# app/views/training_sessions/_training_step_fields.html.erb
<%= f.label :description %>
<%= f.text_field :description %>
```
After we create our partial, we can modify the form in `new.html.erb`:
```erb
# app/views/training_sessions/new.html.erb
<%= form_with model: @training_session do |form| %>
Session Steps:
<%= form.fields_for :training_steps do |training_steps_form| %>
<%= render partial: "training_step_fields", locals: { f: training_steps_form } %>
<% end %>
<%= form.submit "Create New Session" %>
<% end %>
```
At this point, when we go to `training_sessions/new` we should see the same behaviour as before.
## Implementing Stimulus Nested Form Component
Let's implement the Stimulus component. Modify your `new.html.erb` to add a Stimulus target:
```erb
<%= form_with model: @training_session do |form| %>
Session Steps:
<template data-nested-form-target="template">
<%= form.fields_for :training_steps, TrainingStep.new, child_index: 'NEW_RECORD' do |training_steps_form| %>
<%= render partial: "training_step_fields", locals: { f: training_steps_form }%>
<% end %>
</template>
<%= form.fields_for :training_steps do |training_steps_form| %>
<%= render partial: "training_step_fields", locals: { f: training_steps_form } %>
<% end %>
<div data-nested-form-target="target"></div>
<button type="button" data-action="nested-form#add">Add Training Step</button>
<%= form.submit "Create New Session" %>
<% end %>
```
Now, if we reload our app, there's a button that lets you add more training steps 🎉.

## Breaking down the changes
Let's break down the changes.
```html
<template data-nested-form-target="template">
...
</template>
<div data-nested-form-target="target"></div>
```
We register two targets with Stimulus, `template` and `target`. The `template` target, which Stimulus exposes as `this.templateTarget`, is inserted into `this.targetTarget` via `insertAdjacentHTML`. The template is defined similarly to our original definition of the training\_step fields, but it has two additional params: `TrainingStep.new` and `child_index: 'NEW_RECORD'`
```erb
<%= form.fields_for :training_steps, TrainingStep.new, child_index: 'NEW_RECORD' do |training_steps_form| %>
...
<% end %>
```
Rails requires the index of nested fields to be unique, and here uniqueness is ensured via the `child_index` option. See the [Rails docs](https://apidock.com/rails/ActionView/Helpers/FormHelper/fields_for#512-Setting-child-index-while-using-nested-attributes-mass-assignment). We must keep the `NEW_RECORD` value: the Stimulus nested form component replaces the term `NEW_RECORD` with `new Date().getTime().toString()`.
`TrainingStep.new` is used to build a new instance of the Training Step model. Without this model, we'll see some odd behaviour where we add two fields every time we press the "Add Training Step" button. Note: the **two** comes from our controller: `2.times { @training_session.training_steps.build }`.
**Fun fact** : even though two form elements get added, only one of those elements gets saved! Can you guess why? That's right, it's because the child index for those elements are the same. Rails will only save the last of the two values.
For reference, here's [the component source code](https://github.com/stimulus-components/stimulus-rails-nested-form/blob/master/src/index.ts).
* * *
At the time of writing:
- Rails 7.1
- Ruby 3.3.2
- Stimulus Nested Form Component 5.0.0
- Stimulus 3.2.2 | jonoyeong |
1,893,974 | Lean on CSS Clip Path to Make Cool Shapes in the DOM without Images | Introduction Up until a few years ago if you wanted background shapes or sections of a... | 0 | 2024-06-26T17:44:42 | https://www.paigeniedringhaus.com/blog/lean-on-css-clip-path-to-make-cool-shapes-in-the-dom-without-images | css | ---
title: Lean on CSS Clip Path to Make Cool Shapes in the DOM without Images
published: true
date: 2024-06-19 00:00:00 UTC
tags: css
canonical_url: https://www.paigeniedringhaus.com/blog/lean-on-css-clip-path-to-make-cool-shapes-in-the-dom-without-images
---
[](/static/d5013df23a3e21be682ace94ae7684fb/5a190/shapes-hero.png)
## Introduction
Up until a few years ago if you wanted background shapes or sections of a website that were anything besides rectangles you most likely needed a designer to provide you with a static PNG or JPEG image that would be added as required, but CSS has come a long way since then, my friends.
When I was working on a website update that broke up the contents on the page into different colored background sections, alternating between pure white and soft gray colors, the design mock up I had included one section whose bottom edge tilted up and to the right instead of going across the page at a perfect 90 degree angle, as a typical block element does.
Now I could have asked the designer to make a background image to do this for me, but instead I wanted to see if I could do it on my own with the power of CSS. And lo and behold I could, with CSS `clip-path`.
**Interesting shapes and visuals in the DOM are no longer purely the domain of designers: with tools like CSS `clip-path`, devs have the power to reshape elements, and I'll show you how.**
---
## CSS clip-path
If you're less familiar with the **[CSS `clip-path`](https://developer.mozilla.org/en-US/docs/Web/CSS/clip-path)** property, like me, it creates a clipping region that sets which parts of an element should be shown. Parts that are inside the region are shown, while those outside are hidden.
[](/static/71fa44d62102e493c15a3c12ea452d07/442cb/clip-path-demo.png)
_A demo from the MDN clip-path docs. Different clip-path options provide different views of the hot air balloon and text._
The `clip-path` property can accept a large variety of [values](https://developer.mozilla.org/en-US/docs/Web/CSS/clip-path#syntax):
- [`<clip-source>`](https://developer.mozilla.org/en-US/docs/Web/CSS/clip-path#clip-source), which accepts values like `url` for an SVG element with clipping path defined.
- [`<geometry-box>`](https://developer.mozilla.org/en-US/docs/Web/CSS/clip-path#geometry-box), which accepts values like `margin-box` and `border-box`.
- [`<basic-shape>`](https://developer.mozilla.org/en-US/docs/Web/CSS/basic-shape), which accepts values like `circle()` and `rect()`.
- `global-values`, which accepts values like `inherit` and `revert`.
The `<geometry-box>` and `<basic-shape>` values can even be combined together in one `clip-path`.
```css
/* this CSS combines two different clip path properties */
clip-path: padding-box circle(50px at 0 100px);
```
> This post doesn't go into great detail about all of the properties `clip-path` can accept and how they can be combined to create quite complex shapes. If you want more information and examples of `clip=path` in action, I recommend starting with the [Mozilla documentation](https://developer.mozilla.org/en-US/docs/Web/CSS/clip-path).
One of the `<basic-shape>` properties `clip-path` accepts is **[`polygon()`](https://developer.mozilla.org/en-US/docs/Web/CSS/basic-shape/polygon)**, and this ended up being the solution I needed for my tilted background section.
## The polygon I needed to recreate with CSS
[](/static/9860a75b4f0ba90f2f0f82967b1d5025/6c68b/css-clip-path.png)
_The gray polygon background I needed to create with CSS._
The image above is a screenshot of the gray background section I needed to recreate with CSS `clip-path`'s `polygon()` property. And the first thing I needed to do was create some HTML elements to apply the CSS to.
> **polygon() clip-path vs rect() clip-path**
>
> You might be wondering why I chose to use the `polygon()` property instead of the `rect()` property with `clip-path`. While the two are similar, `polygon()` can create more complex polygonal shapes and offers greater versatility for advanced designs by accepting pairs of coordinates to define each vertex of the polygon, whereas `rect()` can only handle rectangular shapes.
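To make the difference concrete, here is a rough sketch of both (the class names are made up for illustration, and `rect()` support for `clip-path` is newer, so check browser compatibility):
```css
/* rect(): four edge positions (top and bottom measured from the top edge,
   right and left from the left edge) - rectangles only */
.clipped-rect {
  clip-path: rect(0 100% 75% 0);
}

/* polygon(): one x/y coordinate pair per vertex - any straight-edged shape */
.clipped-polygon {
  clip-path: polygon(0 0, 100% 0, 100% 75%, 0 100%);
}
```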
### Set up the HTML and CSS
The site I was working on relied on the static site generator [Hugo](https://gohugo.io/), a Go-based framework. Hugo uses [templates](https://gohugo.io/templates/introduction/) to render the site's HTML, so the example code below should look relatively familiar to you if you know HTML.
> **A note on templates:**
>
> If you've ever used JSX components, Node.js with Pug or Handlebars, or Jekyll - Hugo's templates are similar: HTML elements with Go variables and functions sprinkled in with `{{ }}` to render the correct information wherever the templates are injected.
Here's the code for what I'd nicknamed the "puzzle section" of the page due to the puzzle piece in the foreground of this section. For the purposes and clarity of this article, I've replaced the Go variables injected into the template with the generated HTML.
`single.html`
```html
<div class="about-body">
<!-- more HTML elements up here -->
<section class="puzzle-section section">
<div class="container">
<div class="row">
<div class="col-12 col-md-6 col-lg-6">
<h4 class="mb-3">
Lorem ipsum dolor
</h4>
<p class="mb-5">
Sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Ipsum dolor sit amet consectetur adipiscing elit pellentesque.
</p>
<h4 class="mb-3">
Duis aute irure dolor in reprehenderit
</h4>
<p class="mb-5">
in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. Consectetur adipiscing elit pellentesque habitant morbi tristique senectus et.
</p>
</div>
<div
class="col-sm-8 offset-sm-2 col-md-6 offset-md-0 col-lg-6 offset-lg-0"
>
<img
class="img-fluid"
src="/images/about/puzzle-pieces.png"
alt="Puzzle pieces"
/>
</div>
</div>
</div>
</section>
<!-- more HTML elements below -->
</div>
```
This section of code is relatively compact, but it deserves discussion. In addition to the HTML elements, there are quite a few CSS classes which come from the **[Bootstrap](https://getbootstrap.com/)** library, one of the original open source CSS frameworks for responsive web designs.
Among the custom classes like `about-body`, which I used for adding custom styling, there are classes like `container`, `row`, `col-12` or `col-md-6`, `mb-5`, and `mb-3`.
All of the latter classes are Bootstrap classes, which serve to make the text and image elements onscreen share the width of the page when the viewport is over a certain width (`col-md-6`), or apply a `margin-bottom` of a certain amount to the `<p>` tags (`mb-3` or `mb-5`).
The Bootstrap classes are beside the point for this post though, the class to focus on is the **`puzzle-section`** which wraps all the text and puzzle piece image.
This `puzzle-section` class is where we're going to add the `clip-path` property to display the light grey background behind the text and image with the slightly tilted, up-and-to-the-right design.
### Add the CSS clip-path to shape puzzle-section
As I wasn't quite sure how to style a normal, rectangular `<div>` into an uneven shape, I started looking for a solution online and found this helpful, interactive `clip-path`-focused site, [CSS clip-path maker](https://bennettfeely.com/clippy/).
[](/static/f804bb6617c0c319de6b241943cf8028/f98ee/clippy-website.png)
This CSS `clip-path` maker website is fantastic because it has a whole slew of preset shapes, adjustable image sizes and backgrounds, and the currently displayed image's vertices can be dragged into any arrangement you want. The line at the bottom of the screen shows the exact `clip-path` CSS values that you can copy/paste into your own project's CSS.
I chose the parallelogram preset shape as my starting point, and then dragged the corners to match the angle of the background section I was trying to recreate from scratch. Once I was satisfied it looked accurate, I copied the CSS line at the bottom of the page to my clipboard.
In my project's SCSS file, I added the copied `clip-path` CSS in addition to the light grey `background-color` property and some padding to give the text and puzzle piece images some breathing room on the page.
> **NOTE:** Even though this file shown in the example code is SCSS instead of pure CSS, for this post it shouldn't make a difference here. It should be a direct 1:1 comparison.
`about.scss`
```scss
.about-body {
// this white sets the white background color for the whole webpage
background-color: white;
.puzzle-section {
// clip-path code copied from the clip-path maker website
clip-path: polygon(0 0, 100% 0%, 100% 75%, 0% 100%);
background-color: light-grey;
padding: 2rem 0 10rem 0;
}
}
```
That little bit of CSS for `clip-path` was all that was needed to take my perfectly rectangular DOM element and turn it into an imperfect polygon instead. Not too shabby!
---
## Conclusion
CSS is pushing the boundaries of what web developers can do without resorting to images, videos, and custom designed elements all the time. And the satisfaction of figuring out how to do a cool little bit of design on all on your own feels pretty empowering.
A recent example of this was using the CSS `clip-path` property to create a background box for some text and images that had an uneven bottom edge. With the help of an [interactive website dedicated to decoding clip-paths](https://bennettfeely.com/clippy/) of all shapes and sizes, I was able to make quick work of this slightly skewed polygon.
And let me take a moment to shout out how much I appreciate the folks putting out those little sites or code snippets that solve a very specific problem for another developer - you folks continue to make the Internet a better place.
Check back in a few weeks — I’ll be writing more about JavaScript, React, IoT, or something else related to web development.
If you’d like to make sure you never miss an article I write, sign up for my newsletter here: https://paigeniedringhaus.substack.com
Thanks for reading. I hope learning to reshape how elements look in the DOM with just the power of CSS helps you as much as it's helped me.
---
## Further References & Resources
- MDN docs, [CSS clip-path](https://developer.mozilla.org/en-US/docs/Web/CSS/clip-path)
- [CSS clip-path generator website](https://bennettfeely.com/clippy/) | paigen11 |
1,851,851 | Design Patterns In Software | Design patterns are common solutions to common problems, represented by entities and the... | 0 | 2024-06-19T04:57:12 | https://coffeebytes.dev/en/design-patterns-in-software/ | programming, algorithms, beginners, tutorial | ---
title: Design Patterns In Software
published: true
date: 2024-06-19 12:00:00 UTC
tags: programming,algorithms,beginner,tutorial
canonical_url: https://coffeebytes.dev/en/design-patterns-in-software/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p0vgm756xh2t6xshkh1j.jpg
---
Design patterns are common solutions to common problems, represented in programming by entities and the relationships between them. You have probably already heard of some of them, such as singleton, MVC (or MTV), and observer. But that explanation of design patterns is too technical for a first approach, so let me simplify it below.
## What are design patterns?
When we want to move from one place to another for short distances we use a land vehicle, these generally have 3 elements: wheels, a surface where we will place the object or person to transport and a medium that generates or transfers the energy necessary for the movement.
The wheels generally go in contact with the ground and the propulsion method is attached to the wheels to make them rotate and allow the movement.
When a person wants to create a vehicle that performs the function of transporting an object over land they will generally think of these elements and work on those elements to modify them to create something different or more sophisticated. This union of objects is a **design pattern**.
## Design Patterns in Software
Now imagine that you want to have only one instance of a class running at a time, so you decide that the process for doing this is as follows:
1. Check to see if there is an instance running.
2. If it does not exist, create it and return it.
3. If it already exists return that one.
It is as simple as that. This solution would be a design pattern to the common problem of running a single instance. In software, design patterns are the same, they are the arrangement and specific relationships of objects, methods and attributes that allow us to solve a problem. What kind of problems? Practically any problem that arises too frequently to come up with a standardized solution.
Some common problems are: [processing tasks using a fixed number of workers](https://coffeebytes.dev/en/worker-pool-design-pattern-explanation/), making sure that there is only one instance of a class running, adapting a complicated and impossible to modify API to a simpler and easier to understand one, or separating the part that handles the database, the part that decides the logic and the part that displays the HTML content of a web page.
Does this last one ring a bell? Yes, the MVC pattern used by many [frameworks, such as django](https://coffeebytes.dev/en/why-should-you-use-django-framework/), is a design pattern, or the [debounce-and-throttle](https://coffeebytes.dev/en/debounce-and-throttle-in-javascript/) pattern used mainly in JavaScript.
Design patterns make it easier to decouple the code, which makes it simpler to add or remove functions and also gives us the assurance that they are solutions that have already been tested over and over again over the years.
## Most common design patterns
There are numerous patterns in existence as problems to be solved, and patterns can be combined with each other in complex systems. However there are certain quite popular patterns that are the ones that have been compiled to be put in most books dealing with this subject. Generally you will find these:
- Singleton
- Prototype
- Factory
- Builder
- Adapter
- Decorator
- Facade
- Proxy
- Chain of responsability
- Command
- Interpreter
- Iterator
- Observer
- State
- Strategy
- Template method
- Visitor
- MVC
- Publish-subscribe
Just as these patterns emerged in response to existing problems, new patterns are created in response to new problems, so **there is no list of static patterns that are absolute and solve all problems**.
## Examples of design patterns
I’m going to walk you through four examples of design patterns in Python below. Why Python? Because it’s pretty simple to understand, even if you’ve never written Python code, and if you’re coming from a low-level language, it’ll probably be a piece of cake for you.
### Singleton
It is used when you want to prevent the creation of multiple instances of the same object. For example, you don’t want two objects that control the mouse or the printer running at the same time. Its indiscriminate use is [considered by many an anti-pattern.](http://97cosas.com/programador/resiste-tentacion-singleton.html)
The trick of its operation is in the `__new__` method. This method is called when an instance of the class is created and receives the class itself as a parameter.
When a new object is created it checks whether the `instance` attribute is set on our class. If `instance` is not set, it creates an instance of the inner `__SingletonObject` class, assigns it to `instance`, and returns it. Otherwise, it simply returns the existing one.
The `__getattr__` and `__setattr__` methods are overridden to read and assign attributes on the object stored in the `instance` attribute.
``` python
#singleton_object.py
class SingletonObject(object):
    class __SingletonObject():
        def __init__(self):
            self.val = None

        def __str__(self):
            return "{0!r} {1}".format(self, self.val)

    instance = None

    def __new__(cls):
        if not SingletonObject.instance:
            SingletonObject.instance = SingletonObject.__SingletonObject()
        return SingletonObject.instance

    def __getattr__(self, name):
        return getattr(self.instance, name)

    def __setattr__(self, name, value):
        # __setattr__ receives the value as well; forward both to the inner instance
        return setattr(self.instance, name, value)
```
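A quick sanity check of the behavior (illustrative):
``` python
first = SingletonObject()
second = SingletonObject()

first.val = "configured once"
print(second.val) # "configured once": both names share one underlying instance
print(first is second) # True: __new__ always returns the same object
```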
### Observer
The observer pattern allows an object to keep track of the state changes of another object. For example if we want to notify every user with an email every time the terms of use of a service are updated or to let all users know every time a digital newspaper publishes new material.
To achieve this we make sure each observer has an `update` method (or whatever you want to call it); the `Observable` invokes it through a _callback_, an anonymous function registered with it. This way the observing classes are decoupled from the observed one: the `Observable` doesn't need to know any of their methods, it just calls the registered callbacks, and each observer only needs an `update` method.
``` python
class Task(object):
    # Domain class from the book's running example; not used by the demo below
    def __init__(self, user, _type):
        self.user = user
        self._type = _type

    def complete(self):
        self.user.add_experience(1)
        self.user.wallet.increase_balance(5)
        for badge in self.user.badges:
            if self._type == badge._type:
                badge.add_points(2)


class ConcreteObserver(object):
    def update(self, observed):
        print("Observing: {}".format(observed))


class Observable(object):
    def __init__(self):
        self.callbacks = set()

    def register(self, callback):
        self.callbacks.add(callback)

    def unregister(self, callback):
        self.callbacks.discard(callback)

    def unregister_all(self):
        self.callbacks = set()

    def update_all(self):
        for callback in self.callbacks:
            callback(self)


def main():
    observed = Observable()
    observer1 = ConcreteObserver()
    observed.register(lambda x: observer1.update(x))
    observed.update_all()

if __name__ == "__main__":
    main()
```
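To see the decoupling pay off, you could extend `main()` with a second observer and later drop it (a sketch reusing the names above):
``` python
observer2 = ConcreteObserver()
callback2 = lambda x: observer2.update(x)

observed.register(callback2)
observed.update_all() # both observers print a line

observed.unregister(callback2)
observed.update_all() # only observer1's callback remains
```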
### Template
In this pattern we use the `@abstractmethod` decorator to guarantee that the methods are implemented in a derived class. In the following example we are forcing the child class, on pain of a `TypeError`, to implement the methods `_step_1`, `_step_2`, and `_step_3`. The `template_method` method is inherited as is, so it does not need to be defined.
``` python
import abc
class TemplateAbstractBaseClass(metaclass=abc.ABCMeta):
def template_method(self):
self._step_1()
self._step_2()
self._step_n()
@abc.abstractmethod
def _step_1(self): pass
@abc.abstractmethod
def _step_2(self): pass
@abc.abstractmethod
def _step_3(self): pass
class ConcreteImplementationClass(TemplateAbstractBaseClass):
def _step_1(self): pass
def _step_2(self): pass
def _step_3(self): pass
```
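A quick illustration of that enforcement (illustrative names):
``` python
class Incomplete(TemplateAbstractBaseClass):
    def _step_1(self): pass
    def _step_2(self): pass
    # _step_3 missing

try:
    Incomplete()
except TypeError as err:
    print(err) # Can't instantiate abstract class Incomplete...

ConcreteImplementationClass().template_method() # all three steps run
```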
### Decorator
The decorator pattern allows us to add extra functionality to a function without modifying it directly. It is widely used in Django and other frameworks to restrict views according to permissions or to verify that a user is logged in. A decorator works by defining a function that receives our function as an argument; inside it we create a wrapper that adds the extra functionality, and the decorator returns that wrapper.
``` python
def requires_login(function):
    def wrapper(request):
        # Only run the wrapped view for authenticated users
        if request.user.is_logged_in():
            return function(request)
        return {"permission_denied": "Authenticated user required"}
    return wrapper


@requires_login
def access_dashboard(request):
    # ...
```
Now any access to the access\_dashboard function will check whether the user is logged in: if they are, the function runs normally; if not, we return an error message.
You can see how the original django [login\_required](https://docs.djangoproject.com/es/2.2/_modules/django/contrib/auth/decorators/) decorator was implemented in its documentation.
## Where to learn design patterns?
My recommendations for learning design patterns are as follows:
- Head First Design Patterns by Eric Freeman and Kathy Sierra (The most popular)
- Practical Python Design Patterns by Wessel Badenhorst (I learned my stuff with this one because it is complete and simple).
That said, there is enough information about this topic on the internet that you may not need to read a whole book; just getting an idea of the most common patterns and their uses should be enough, and you can go deeper into each one as you need it.
## Source code for design patterns
All the code snippets for these examples were taken from the book _Practical Python Design Patterns_ by Wessel Badenhorst. | zeedu_dev |
1,893,935 | Building a Product for Less than €10 | Recently, I "launched" a new service that lets companies that invoice to... | 0 | 2024-06-20T06:02:02 | https://jorge.aguilera.soy/blog/2024/recargo-equivalencia.html | startup, product, php, laravel | ---
title: Building a Product for Less than €10
published: true
date: 2024-06-19 00:00:00 UTC
tags: startup,product,php,laravel
canonical_url: https://jorge.aguilera.soy/blog/2024/recargo-equivalencia.html
---
Recientemente, he "sacado" al mercado un nuevo servicio que permite a las empresas que facturan a minoristas saber si alguno de ellos está acogido a un régimen especial de IVA, lo que le obliga a aplicar un tipo impositivo más.
Básicamente, la AEAT (la Hacienda española) ofrece un servicio para realizar esta consulta, de tal forma que rellenando un formulario, y tras identificarte con tu certificado electrónico, puedes consultar si un NIF está acogido a este servicio. Así mismo ofrece un servicio SOAP (ahí es nada) para realizar esta consulta mediante programación.
El caso es que a pocos clientes que manejes el tema de consultar uno a uno no es trivial por lo que he creado`recargo-de-equivalencia.es`, un servicio en el que tras darte de alta, subes un fichero con los NIFs y de forma periódica el servicio los chequeará y te notificará si encuentra alguno acogido al servicio.
La "gracia" del servicio es que no requiere un chequeo en tiempo real por lo que se puede realizar un escaneo semanal o mensual, por ejemplo, y notificar vía email (más adelante podría ser también mediante un WebHook).
Así pues los requisitos del servicio serían:
- dominio personalizado
- una landing page donde explicar el servicio
- un proceso de subscripcion al servicio
- un aplicativo con un dashboard simple donde el usuario pueda subir un fichero y consultar aquellos que estén acogidos al sistema de recargo
- un proceso que periódicamente consulte **todos** los NIFs y actualice su estado
- una notificación via email
Y el requisito más "importante": el MVP tiene que tener el costo de mantenimiento lo más reducido posible
## Infrastructure
Most of the service was built in PHP and hosted on DreamHost (DH). Why? Because it's a provider where I already host several domains (basically static sites) and which gives me unlimited hosting, databases, and email. I pay around €120 a year and can host as many sites as I want.
Until recently they allowed deploying applications with NodeJS, Ruby, and PHP, but they changed this a short while ago and now only allow PHP (8.3, mind you), which I almost appreciate because it forced me to finally build something in PHP.
For the process that talks to the AEAT SOAP service I opted for a command-line program built with Micronaut, perfect for being run by a system task scheduler (a cron) … and it runs on a RaspberryPi that had been gathering dust on my shelf!!!!
**In other words, the cost of the MVP is practically zero.**
The problem with DH is that it can't register `.es` domains directly, but I can register one with a Spanish provider (Strato in my case) and then configure the DNS to use DH's nameservers. From there I can create as many subdomains as I want and host them on DH, all for the same price.
Why Strato? Basically, when I looked for providers to register the domain, almost all of them charged around €35, but Strato charged … €1!!!!
## Landing Page
Once I had registered the `recargo-de-equivalencia.es` domain and configured the DNS to use DH's, I created `info.recargo-de-equivalencia.es`.
This is a static site built with Hugo using an open-source template, and it took me a couple of days of tweaking and adding my content.
Once the landing page worked locally, I simply copied it to the corresponding DH folder and it was live.
## Subscription
Since this is an MVP, I created a Google Form with 4 questions to learn about the companies that might be interested in the service and collect their email so I can get in touch with them.
**The service is currently in beta, it's free, and it has 2 customers.**
## Application
This part was the hardest, probably because of my unfamiliarity with PHP and the Laravel framework. Even so, in a way it was the most enjoyable.
Currently the application is a simple dashboard where, once registered, you can see the total number of NIFs you've uploaded and which of them are under the equivalence surcharge. You can also upload a file with more NIFs, which will be added to your workspace.
As a technical note on Laravel, I'll say I loved it. It has many concepts in common with Java frameworks and a fairly mature ecosystem. Also, since DH lets me run scheduled tasks, it was very easy to add workers to the application: once the user uploads a file, the app stores it in a temporary directory and a worker processes it shortly after.

## Micronaut
For the AEAT integration I went with the "safe" option: Micronaut + Apache Axis to call the AEAT SOAP service, reading from and writing to the database hosted on DH.
I compiled this Micronaut app to a native binary with GraalVM, and on top of that I can run it on a RaspberryPi, so the process's energy consumption is minimal.
The idea is that this application is configured to "comb through" the customers' NIFs and flag any it finds in the regime.
Later, a worker in the Laravel application detects these flagged records and bundles them into a notification email to the customer.
## Conclusion
As I said, the service is currently in beta, but it already handles around 35,000 NIFs.
The next steps, depending on the interest it attracts, would be to build some integrations, for example for FacturaScript or WordPress, or even to move forward with the WebHook part. | jagedn |
1,892,968 | Working with Modules and NPM in Node.js 🚀 | Hi your Instructor here #KOToka Modules and npm (Node Package Manager) are essential tools in the... | 0 | 2024-06-18T23:22:55 | https://dev.to/erasmuskotoka/working-with-modules-and-npm-in-nodejs-26o | Hi your Instructor here #KOToka
Modules and npm (Node Package Manager) are essential tools in the Node.js ecosystem that enhance your development workflow.
1. Modules: Break your code into reusable pieces. Use `require` to include built-in modules or your own custom modules (see the sketch after this list).
```javascript
const fs = require('fs'); // Example of using the 'fs' module
```
2. NPM: Manage dependencies effortlessly. Use `npm init` to set up your project and `npm install` to add libraries.
```bash
npm init -y # Initialize a project
npm install express # Install Express.js
```
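As mentioned in item 1, `require` also loads your own files. For example, with a hypothetical `math.js` module:
```javascript
// math.js - a tiny custom module
function add(a, b) {
  return a + b;
}
module.exports = { add };
```
```javascript
// app.js - consuming it via a relative path
const { add } = require('./math');
console.log(add(2, 3)); // 5
```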
By mastering modules and npm, you can streamline your development process and build more efficient, scalable applications. Happy coding! 💻✨ #NodeJS #Modules #NPM #WebDevelopment #Coding
| erasmuskotoka | |
1,892,911 | SemVer - Code Versioning: Principles and Practices | Introduction* Code versioning is a crucial component in the development of... | 0 | 2024-06-18T23:18:15 | https://dev.to/thiagohnrt/semver-versionamento-de-codigo-principios-e-praticas-3aok | webdev, semver, programming, braziliandevs | ## Introduction*
Code versioning is a crucial component of software development, making it easier for developers and users to communicate about the changes and improvements made to a project. Among the different versioning schemes, Semantic Versioning ([SemVer](https://semver.org/lang/pt-BR/)) stands out for its clarity and consistency. This article explores SemVer's principles, its rules, and recommended practices for adopting it.
## What Is SemVer?
Semantic Versioning is a versioning scheme that uses a specific numbering convention to signal changes in software. The SemVer standard was created by Tom Preston-Werner, co-founder of GitHub, and is described in the semver.org specification.
The basic SemVer format is `MAJOR.MINOR.PATCH`, where each number represents a specific type of change in the software:
- **MAJOR**: Incremented when there are backwards-incompatible changes.
- **MINOR**: Incremented when functionality is added in a backwards-compatible way.
- **PATCH**: Incremented when backwards-compatible bug fixes are made.
## SemVer Principles
SemVer rests on a few fundamental principles:
1. **Transparency**: Makes it possible to communicate changes clearly and predictably.
2. **Compatibility**: Helps guarantee compatibility between different versions of a piece of software.
3. **Stability**: Provides a structured path for evolving the software without breaking existing functionality.
## SemVer Rules
The SemVer specification defines a clear set of rules for incrementing version numbers:
1. **MAJOR change (1.x.x to 2.x.x)**:
   - Changes that are incompatible with previous versions.
   - Example: Removing a public function, or changing existing behavior in a way that can break code that depends on it.
2. **MINOR change (1.1.x to 1.2.x)**:
   - Adding new functionality in a backwards-compatible way.
   - Example: Adding a new function that doesn't change the behavior of existing ones.
3. **PATCH change (1.1.1 to 1.1.2)**:
   - Bug fixes that are compatible with previous versions.
   - Example: Fixing a bug without changing the public API.
## Practical Examples
Let's consider a fictional project, `ExemploLib`:
- **Version 1.0.0**: First stable release. Includes basic functionality.
- **Version 1.1.0**: Adds a new function without changing the existing ones. Increments MINOR.
- **Version 1.1.1**: Fixes a bug in an existing function. Increments PATCH.
- **Version 2.0.0**: Removes one function and changes another in a backwards-incompatible way. Increments MAJOR.
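These version strings sort in the expected order; for instance, GNU `sort -V` (version sort) understands the scheme:
```bash
printf '1.1.1\n2.0.0\n1.1.0\n1.0.0\n' | sort -V
# 1.0.0
# 1.1.0
# 1.1.1
# 2.0.0
```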
## Recommended Practices
To use SemVer effectively, follow these recommended practices:
1. **Clear documentation**: Document every version change, especially those that increment MAJOR.
2. **Commits and tags**: Use a version control system such as Git to create clear commits and version tags (see the sketch after this list).
3. **Automated tests**: Implement an automated test suite to ensure that MINOR and PATCH changes don't introduce regressions.
4. **Release planning**: Plan MAJOR versions carefully, considering the impact on the software's dependencies and users.
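A minimal sketch of tagging a release with Git (the version number and message are illustrative):
```bash
# Tag the commit that added a backwards-compatible feature (MINOR bump)
git tag -a v1.2.0 -m "Add new export function (backwards-compatible)"
git push origin v1.2.0
```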
## Conclusion
Semantic versioning, or SemVer, offers a structured and predictable approach to managing software versions. By adhering to SemVer's rules and principles, developers can ensure an orderly evolution of the software, minimizing the risk of breakage and maximizing transparency and reliability. Implementing SemVer is not just a good technical practice but also an effective way to communicate changes to a project's users and contributors.
> *_Sorry for the joke with the image_ 😂 | thiagohnrt |
1,892,910 | Essentials of Home Improvement and Roofing in Los Angeles | When it comes to maintaining and enhancing one's home, the importance of a comprehensive approach... | 0 | 2024-06-18T23:16:54 | https://dev.to/matan01p/essentials-of-home-improvement-and-roofing-in-los-angeles-2c3k | When it comes to maintaining and enhancing one's home, the importance of a comprehensive approach cannot be overstated. A well-rounded home improvement plan encompasses everything from the aesthetics of bathrooms and kitchens to the functionality and efficiency of energy systems. In Los Angeles, where style meets sustainability, homeowners often seek out a reliable roofing company that can cater to a diverse range of needs.
Importance of Quality Roofing
At the heart of any home improvement endeavor in sunny Southern California is roofing. It's not just about shelter; it’s about ensuring your living space is secure, energy-efficient, and adds value to your property. A **[roofing company in Los Angeles](https://www.property-management-today.com/los-angeles-california/maingreen-remodel-construction)** understands the unique climate challenges – from intense sun exposure to the occasional heavy rainstorm – and provides solutions tailored to these conditions.
Comprehensive Home Improvement Services
Los Angeles residents looking for an all-in-one solution for their homes can find solace in companies that offer an ensemble of services. These include sophisticated bathroom remodels that transform basic utilities into spa-like retreats, expansions through home additions that seamlessly blend with existing structures, and state-of-the-art insulation techniques for optimal temperature control.
Windows & door installation go hand-in-hand with roofing services as they collectively enhance a home's facade while improving energy efficiency. Adapting outdoor spaces with landscape design or new concrete & pavers can create inviting environments perfect for the Los Angeles lifestyle.
Energy Efficiency and Eco-Friendly Solutions
With environmental concerns on the rise, many homeowners are turning to green upgrades like solar panels which not only cut down on electric bills but also contribute to a greener planet. High-quality roofing companies in Los Angeles often provide solar solutions integrated with roofing services, ensuring compatibility and efficient installation.
In addition to solar panels, features such as energy-efficient electric systems and HVAC upgrades play critical roles in reducing carbon footprints while maintaining comfort within homes across LA.
Structural Integrity and Essential Repairs
A robust foundation is key to any lasting structure. Services related to foundation care ensure that homes stand firm against geological threats prevalent in California. Similarly, chimney repair is vital for safety, preventing potential hazards associated with structural wear or blockages.
For general wear-and-tear or unforeseen issues like leaks or breakages, plumbing services are indispensable. The same goes for flooring solutions – whether you're opting for hardwood elegance or modern tiling – which tie together interior aesthetics and durability.
Choosing Your Home Improvement Partner
Finding a roofing company in Los Angeles that offers comprehensive services under one roof can be a game-changer for any homeowner. It simplifies coordination during projects when you're remodeling your kitchen’s culinary palette or modernizing your bathroom space alongside roofing projects.
The assurance that comes from working with professionals who understand diverse aspects of home improvement—from installing windows & doors that complement your roofing choice to laying down concrete & pavers for your driveway—is invaluable.
In conclusion, when considering home improvement options in Los Angeles—whether it's refreshing interiors or fortifying exteriors—a holistic approach ensures cohesiveness throughout your property. Utilizing a roofing company in Los Angeles renowned for its array of quality services means addressing all facets of home enhancement efficiently and effectively. From foundational work up through the peak of your newly installed roof, every detail contributes towards creating not just a house but a haven tailored perfectly for you.
**[Maingreen Remodel & Construction](https://maingreenconstruction.com/roofing-contractor-los-angeles-ca/)**
Address: 1124 Glenville Dr STE 2, Los Angeles, California, 90035
Phone: 866-802-3255
Email: matan@maingreenconstruction.com
Visit our profile:
[Maingreen Remodel & Construction - Facebook](https://www.facebook.com/maingreenconstruction)
[Maingreen Remodel & Construction - Instagram](https://www.instagram.com/maingreenconstructionca/)
[Maingreen Remodel & Construction - Yelp](https://www.yelp.com/biz/maingreen-remodel-and-construction-los-angeles-2)
| matan01p | |
1,892,597 | Firebase Authentication With Jetpack Compose. Part 1 | Hey! This post is part of a series of building an expense tracker app. In today's guide we're going... | 0 | 2024-06-18T23:10:21 | https://dev.to/evgensuit/firebase-authentication-with-jetpack-compose-part-1-3k82 | kotlin, android, androiddev, mobile | Hey! This post is part of a series on building an expense tracker app. In today's guide we're going to implement Firebase authentication with input verification in Jetpack Compose. This post also features the usage of `CompositionLocalProvider`, which enables a clear and reusable way of showing a snackbar instead of injecting a separate callback into composables, which is quite clunky. In one of the next guides I'll show you how to implement Google sign-in, UI, and unit testing of authentication, so stay tuned. If you have any questions or suggestions, feel free to leave them in the comments.
Here's what the end result might look like

And the source code is [here](https://github.com/EvgenSuit/MoneyMonocle)
---
# Firebase setup
Head over to Firebase and click on Add project

After filling in the required info, start configuring the project for Android by clicking on the corresponding icon

After following the instructions, head to the Authentication tab and add the Email/Password provider

---
# Hilt setup
Hilt is a popular dependency injection library that simplifies providing dependencies (classes, objects, mocks, etc.) to the classes that depend on them.
You can pick versions of plugins that suit you [here](https://mvnrepository.com)
Insert the following code into your root `build.gradle` file
```gradle
plugins {
id("com.google.dagger.hilt.android") version "2.50" apply false
// compatible with the 1.9.22 version of Kotlin
id("com.google.devtools.ksp") version "1.9.22-1.0.16" apply false
}
```
After syncing Gradle, add the dependencies below to your `app/build.gradle` file
```gradle
plugins {
id("com.google.dagger.hilt.android")
id("com.google.devtools.ksp")
}
dependencies {
implementation("com.google.dagger:hilt-android:2.50")
// required if using navigation together with hilt
implementation("androidx.hilt:hilt-navigation-compose:1.2.0")
ksp("com.google.dagger:hilt-compiler:2.50")
ksp("com.google.dagger:hilt-android-compiler:2.50")
}
```
Then annotate your `MainActivity` class with `@AndroidEntryPoint`
```kotlin
@AndroidEntryPoint
class MainActivity : ComponentActivity()
```
Finally, create a `${YourAppName}Application.kt` file and insert the following code
```kotlin
@HiltAndroidApp
// name it however you want
class MyApplication: Application()
```
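One easy thing to miss: the class must also be registered in `AndroidManifest.xml` via `android:name`, otherwise Hilt won't be initialized. A minimal sketch (keep your existing attributes and activities):
```xml
<application
    android:name=".MyApplication">
    <!-- your existing attributes, activities, etc. -->
</application>
```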
To learn more about Hilt, visit [this page](https://developer.android.com/training/dependency-injection/hilt-android).
---
# Repository and View Model
First, let's create an `AuthUseCases.kt` file and define three input validators:
```kotlin
class UsernameValidator {
operator fun invoke(username: String): UsernameValidationResult {
return if (username.isBlank()) UsernameValidationResult.IS_EMPTY
else UsernameValidationResult.CORRECT
}
}
class EmailValidator {
operator fun invoke(email: String): EmailValidationResult {
return if (Patterns.EMAIL_ADDRESS.matcher(email).matches()) {
EmailValidationResult.CORRECT
}
else EmailValidationResult.INCORRECT_FORMAT
}
}
class PasswordValidator {
operator fun invoke(password: String): PasswordValidationResult {
return if (password.length < 8) PasswordValidationResult.NOT_LONG_ENOUGH
else if (password.count(Char::isUpperCase) == 0) PasswordValidationResult.NOT_ENOUGH_UPPERCASE
else if (!password.contains("[0-9]".toRegex())) PasswordValidationResult.NOT_ENOUGH_DIGITS
else PasswordValidationResult.CORRECT
}
}
enum class UsernameValidationResult {
IS_EMPTY,
CORRECT
}
enum class EmailValidationResult {
INCORRECT_FORMAT,
CORRECT
}
enum class PasswordValidationResult {
NOT_LONG_ENOUGH,
NOT_ENOUGH_DIGITS,
NOT_ENOUGH_UPPERCASE,
CORRECT
}
```
`operator fun invoke()` is a special function in Kotlin that lets an instance of a class be called as if it were a function, passing parameters directly to it. Here's an example
```kotlin
val passwordValidator = PasswordValidator()
passwordValidator(password)
```
For email validation we use the `Patterns.EMAIL_ADDRESS` matcher from the `android.util` package, which returns true when a given email passes the check.
In the password validator, `password.contains("[0-9]".toRegex())` returns true if the input contains at least one digit; otherwise, we inform the user of this requirement.
---
Next, we'll create an `AuthRepository.kt` file containing a class responsible for authentication, which we'll later inject into our view model
```kotlin
class AuthRepository(
private val auth: FirebaseAuth,
private val firestore: FirebaseFirestore) {
suspend fun signUp(authState: AuthState) {
auth.createUserWithEmailAndPassword(authState.email!!, authState.password!!).await()
auth.currentUser?.updateProfile(UserProfileChangeRequest.Builder()
.setDisplayName(authState.username!!).build())?.await()
}
suspend fun signIn(authState: AuthState) {
auth.signInWithEmailAndPassword(authState.email!!, authState.password!!).await()
}
}
```
The code above contains a function that creates a user with email, password, and a username, which is set with the help of `UserProfileChangeRequest`.
---
Next, create a `CoroutineScopeProvider.kt` file and define a class that will come in handy during view-model testing, since it lets us substitute a `TestScope` for the view model's default scope
```kotlin
class CoroutineScopeProvider(private val coroutineScope: CoroutineScope? = null) {
fun provide() = coroutineScope
}
```
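For example, a unit test could hand the view model (defined later in this post) a `TestScope` from `kotlinx-coroutines-test`. This is just a sketch; `fakeAuthRepository` stands in for whatever test double you use:
```kotlin
val testScope = TestScope()
val viewModel = AuthViewModel(
    authRepository = fakeAuthRepository, // hypothetical test double
    coroutineScopeProvider = CoroutineScopeProvider(testScope)
)
```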
---
Create a `StringValue.kt` file and define the `StringValue` class. It will enable us to match input validation results to strings in an easy and clean way
```kotlin
sealed class StringValue {
data class DynamicString(val value: String) : StringValue()
data object Empty : StringValue()
class StringResource(
@StringRes val resId: Int
) : StringValue()
fun asString(context: Context): String {
return when (this) {
is Empty -> ""
is DynamicString -> value
is StringResource -> context.getString(resId)
}
}
}
```
---
Create an `AuthModule.kt` file and define `AuthModule`, which tells Hilt which instances of `AuthRepository` and `CoroutineScopeProvider` to provide (e.g., mocked or real)
```kotlin
@Module
@InstallIn(SingletonComponent::class)
object AuthModule {
@Provides
@Singleton
fun provideAuthRepository(): AuthRepository =
AuthRepository(Firebase.auth, Firebase.firestore)
@Provides
@Singleton
fun provideCoroutineScopeProvider(): CoroutineScopeProvider =
CoroutineScopeProvider()
}
```
Singleton components are created only once per lifecycle of the application. Since there's absolutely no need to provide a different instance of `AuthRepository` and `CoroutineScopeProvider` each time they're requested, using `@Singleton` is a perfect choice.
---
Create a `Result.kt` file containing the `CustomResult` class.
```kotlin
sealed class CustomResult(val error: StringValue = StringValue.Empty) {
data object Idle: CustomResult()
data object InProgress: CustomResult()
data object Empty: CustomResult()
data object Success: CustomResult()
class DynamicError(error: String): CustomResult(StringValue.DynamicString(error))
class ResourceError(@StringRes res: Int): CustomResult(StringValue.StringResource(res))
}
```
This class will be responsible for providing a smooth user experience as well as other important behaviors (like disabling the auth fields and buttons while the result of a sign-in operation is `InProgress`)
---
After that, create an `AuthViewModel.kt` file and start defining the auth view model
```kotlin
@HiltViewModel
class AuthViewModel @Inject constructor(
private val authRepository: AuthRepository,
coroutineScopeProvider: CoroutineScopeProvider
): ViewModel() {
private val usernameValidator = UsernameValidator()
private val emailValidator = EmailValidator()
private val passwordValidator = PasswordValidator()
private val scope = coroutineScopeProvider.provide() ?: viewModelScope
private val _uiState = MutableStateFlow(UiState())
val uiState = _uiState.asStateFlow()
fun onUsername(username: String) {
val result = when (usernameValidator(username)) {
UsernameValidationResult.IS_EMPTY -> StringValue.StringResource(R.string.username_not_long_enough)
else -> StringValue.Empty
}
_uiState.update { it.copy(validationState = it.validationState.copy(usernameValidationError = result),
authState = it.authState.copy(username = username)) }
}
fun onEmail(email: String) {
val result = when (emailValidator(email)) {
EmailValidationResult.INCORRECT_FORMAT -> StringValue.StringResource(R.string.invalid_email_format)
else -> StringValue.Empty
}
_uiState.update { it.copy(validationState = it.validationState.copy(emailValidationError = result),
authState = it.authState.copy(email = email)) }
}
fun onPassword(password: String) {
val result = when (passwordValidator(password)) {
PasswordValidationResult.NOT_LONG_ENOUGH -> StringValue.StringResource(R.string.password_not_long_enough)
PasswordValidationResult.NOT_ENOUGH_UPPERCASE -> StringValue.StringResource(R.string.password_not_enough_uppercase)
PasswordValidationResult.NOT_ENOUGH_DIGITS -> StringValue.StringResource(R.string.password_not_enough_digits)
else -> StringValue.Empty
}
_uiState.update { it.copy(validationState = it.validationState.copy(passwordValidationError = result),
authState = it.authState.copy(password = password)) }
}
data class UiState(
val authType: AuthType = AuthType.SIGN_IN,
val authState: AuthState = AuthState(),
val validationState: ValidationState = ValidationState(),
val authResult: CustomResult = CustomResult.Idle)
data class ValidationState(
val usernameValidationError: StringValue = StringValue.Empty,
val emailValidationError: StringValue = StringValue.Empty,
val passwordValidationError: StringValue = StringValue.Empty,
)
}
data class AuthState(
val username: String? = null,
val email: String? = null,
val password: String? = null,
)
enum class AuthType {
SIGN_IN,
SIGN_UP
}
enum class FieldType {
USERNAME,
EMAIL,
PASSWORD
}
```
In this code, `AuthViewModel` is annotated with `@HiltViewModel` and its constructor with `@Inject`, which makes Hilt inject the dependencies we defined in `AuthModule`.
The `onUsername`, `onEmail`, and `onPassword` functions are responsible for updating the auth fields and the corresponding validation results. They will be called every time the user types something in.
Depending on the value of the `authType` variable, we decide which kind of authentication is currently chosen (sign-in or sign-up) and make the UI and the authentication pipeline react accordingly.
---
Now let's add a couple of functions to `AuthViewModel`, which are directly responsible for authenticating a user
```kotlin
fun changeAuthType() {
_uiState.update { it.copy(authType = AuthType.entries[it.authType.ordinal xor 1]) }
}
fun onCustomAuth() {
val authType = _uiState.value.authType
updateAuthResult(CustomResult.InProgress)
scope.launch {
try {
if (authType == AuthType.SIGN_UP) {
authRepository.signUp(_uiState.value.authState)
}
authRepository.signIn(_uiState.value.authState)
updateAuthResult(CustomResult.Success)
} catch (e: Exception) {
updateAuthResult(CustomResult.DynamicError(e.toStringIfMessageIsNull()))
}
}
}
```
`changeAuthType` switches the authentication type using the XOR logical operator. For example, when the user is viewing the sign-in composable, the `authType` ordinal has a value of 0. Upon calling the function, 0 xor 1 equals 1, selecting the auth type at index 1, which is sign-up. And vice versa: when the user is viewing the sign-up composable and calls `changeAuthType`, the result of 1 xor 1 is 0.
`onCustomAuth` starts off by telling the user that authentication is in progress, then launches a coroutine with a `try`/`catch` block. If the user currently wants to sign up, the corresponding repository function is executed; after a successful sign-up, an automatic sign-in attempt is performed. If auth succeeds, the function updates the result to `Success`.
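The snippet above relies on two helpers that aren't shown in this post. A minimal sketch of what they might look like (the actual project may define them differently):
```kotlin
// Hypothetical sketches of the helpers used above.
private fun updateAuthResult(result: CustomResult) {
    _uiState.update { it.copy(authResult = result) }
}

// Falls back to the full exception string when no message is present.
fun Exception.toStringIfMessageIsNull(): String = message ?: toString()
```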
---
# UI code
**(Optional)**
Let's create a fancy title for our app with a gradient animation
```kotlin
@Composable
fun Title() {
val gradientColors = listOf(MaterialTheme.colorScheme.onBackground,
MaterialTheme.colorScheme.primary,
MaterialTheme.colorScheme.onPrimary.copy(0.5f))
var offsetX by remember { mutableStateOf(0f) }
LaunchedEffect(Unit) {
animate(
initialValue = 0f,
targetValue = 1000f,
animationSpec = tween(3000)) {value, _ ->
offsetX = value
}
}
val brush = Brush.linearGradient(
colors = gradientColors,
start = Offset(offsetX, 0f),
end = Offset(offsetX + 200f, 100f)
)
Text(stringResource(id = R.string.app_name),
style = MaterialTheme.typography.titleLarge.copy(brush = brush))
}
```
---
Let's define an auth field
```kotlin
@Composable
fun CustomInputField(
enabled: Boolean,
fieldType: FieldType,
value: String?,
error: String,
onValueChange: (String) -> Unit) {
val shape = RoundedCornerShape(dimensionResource(id = R.dimen.auth_corner))
val fieldTypeString = stringResource(id = when(fieldType) {
FieldType.USERNAME -> R.string.username
FieldType.EMAIL -> R.string.email
FieldType.PASSWORD -> R.string.password
})
Column(
horizontalAlignment = Alignment.CenterHorizontally,
modifier = Modifier.fillMaxWidth()
) {
OutlinedTextField(
value = value ?: "",
isError = error.isNotEmpty(),
onValueChange = onValueChange,
enabled = enabled,
shape = shape,
keyboardOptions = KeyboardOptions(imeAction =
if (fieldType == FieldType.USERNAME || fieldType == FieldType.EMAIL) ImeAction.Next else ImeAction.Done),
placeholder = {
if (value.isNullOrBlank())
Text(fieldTypeString)
},
singleLine = true,
modifier = Modifier.fillMaxWidth().testTag(fieldTypeString)
)
if (error.isNotEmpty()) {
Text(error,
textAlign = TextAlign.Center,
color = MaterialTheme.colorScheme.error,
modifier = Modifier.testTag(error))
}
}
}
```
Here, we use `fieldType` to decide the IME action and to pick the placeholder text. We also use the results of the input validators to decide whether to show the error text composable.
---
Let's define an auth button
```kotlin
@Composable
fun CustomAuthButton(
authType: AuthType,
enabled: Boolean,
onClick: () -> Unit) {
val label = stringResource(
id = when (authType) {
AuthType.SIGN_IN -> R.string.sign_in
AuthType.SIGN_UP -> R.string.sign_up
}
)
ElevatedButton(onClick = onClick,
shape = RoundedCornerShape(dimensionResource(id = R.dimen.button_corner)),
colors = ButtonDefaults.buttonColors(),
enabled = enabled,
modifier = Modifier.fillMaxWidth()
) {
Text(label, style = MaterialTheme.typography.displaySmall,
modifier = Modifier.padding(10.dp))
}
}
```
Here `authType` comes in handy, as we can show the appropriate label on the button.
---
After that follows a "go to text" which would enable a user to transition between desired auth states
```kotlin
@Composable
fun GoToText(authType: AuthType,
enabled: Boolean,
onClick: () -> Unit) {
val label = stringResource(
id = if (authType == AuthType.SIGN_IN) R.string.go_to_signup else R.string.go_to_signin
)
TextButton(onClick = onClick,
enabled = enabled) {
Text(label, style = MaterialTheme.typography.labelSmall.copy(
textDecoration = TextDecoration.Underline
))
}
}
```
Here we show "Go to sign up" text if a user is in sign in state. And vise versa.
---
Now we combine all the composables above into a single card
```kotlin
@Composable
fun AuthFieldsColumn(
uiState: AuthViewModel.UiState,
authEnabled: Boolean,
onUsername: (String) -> Unit,
onEmail: (String) -> Unit,
onPassword: (String) -> Unit,
onChangeAuthType: () -> Unit,
onAuth: () -> Unit
) {
val shape = RoundedCornerShape(dimensionResource(id = R.dimen.auth_corner))
val validationState = uiState.validationState
val authState = uiState.authState
val authResult = uiState.authResult
val context = LocalContext.current
val usernameValidationError = validationState.usernameValidationError.asString(context)
val emailValidationError = validationState.emailValidationError.asString(context)
val passwordValidationError = validationState.passwordValidationError.asString(context)
val authButtonEnabled = (usernameValidationError.isEmpty() && authState.username != null || uiState.authType != AuthType.SIGN_UP) &&
(emailValidationError.isEmpty() && authState.email != null) &&
(passwordValidationError.isEmpty() && authState.password != null) && authEnabled
ElevatedCard(
modifier = Modifier
.width(350.dp)
.shadow(
dimensionResource(id = R.dimen.shadow_elevation),
shape = shape,
spotColor = MaterialTheme.colorScheme.primary
)
.clip(shape)
.background(MaterialTheme.colorScheme.onBackground.copy(0.1f))
.border(1.dp, MaterialTheme.colorScheme.primary, shape)
) {
Column(
horizontalAlignment = Alignment.CenterHorizontally,
verticalArrangement = Arrangement.spacedBy(20.dp),
modifier = Modifier
.fillMaxWidth()
.padding(10.dp)
) {
AnimatedVisibility(uiState.authType == AuthType.SIGN_UP) {
CustomInputField(
fieldType = FieldType.USERNAME,
enabled = authEnabled,
value = authState.username,
error = usernameValidationError,
onValueChange = onUsername)
}
CustomInputField(
fieldType = FieldType.EMAIL,
enabled = authEnabled,
value = authState.email,
error = emailValidationError,
onValueChange = onEmail)
CustomInputField(
fieldType = FieldType.PASSWORD,
enabled = authEnabled,
value = authState.password,
error = passwordValidationError,
onValueChange = onPassword)
CustomAuthButton(
authType = uiState.authType,
enabled = authButtonEnabled,
onClick = onAuth)
if (authResult is CustomResult.InProgress) {
LinearProgressIndicator()
}
Column(horizontalAlignment = Alignment.CenterHorizontally) {
if (uiState.authType == AuthType.SIGN_IN) {
Text(stringResource(id = R.string.dont_have_an_account),
style = MaterialTheme.typography.labelSmall)
}
GoToText(authType = uiState.authType,
enabled = authEnabled,
onClick = onChangeAuthType)
}
}
}
}
```
In the code above, `authButtonEnabled` will be true in two cases: either the current auth option is sign-in and both email and password pass the input checks, or the current option is sign-up and the username, email, and password all pass the checks. Also, the username text field is shown only during sign-up.
---
After that, I defined another composable that accepts the UI state and callbacks; keeping it free of view-model references is useful for previewing composables.
```kotlin
@Composable
fun AuthContentColumn(
uiState: AuthViewModel.UiState,
onUsername: (String) -> Unit,
onEmail: (String) -> Unit,
onPassword: (String) -> Unit,
onChangeAuthType: () -> Unit,
onCustomAuth: () -> Unit,
onSignGoogleSignIn: suspend () -> IntentSender?,
onSignInWithIntent: (ActivityResult) -> Unit) {
val authResult = uiState.authResult
val authEnabled = authResult !is CustomResult.InProgress && authResult !is CustomResult.Success
Column(
horizontalAlignment = Alignment.CenterHorizontally,
verticalArrangement = Arrangement.spacedBy(40.dp),
modifier = Modifier
.fillMaxSize()
.verticalScroll(rememberScrollState())
.padding(10.dp)
) {
Title()
AuthFieldsColumn(
uiState = uiState,
authEnabled = authEnabled,
onUsername = onUsername,
onEmail = onEmail,
onPassword = onPassword,
onChangeAuthType = onChangeAuthType,
onAuth = onCustomAuth)
}
}
```
---
Next, create the top-level auth screen composable
```kotlin
@Composable
fun AuthScreen(
onSignIn: () -> Unit,
viewModel: AuthViewModel = hiltViewModel()) {
val uiState by viewModel.uiState.collectAsState()
val focusManger = LocalFocusManager.current
val snackbarController = LocalSnackbarController.current
LaunchedEffect(uiState.authResult) {
if (uiState.authResult is CustomResult.Success) {
focusManger.clearFocus(true)
onSignIn()
}
}
LaunchedEffect(uiState.authResult) {
snackbarController.showErrorSnackbar(uiState.authResult)
}
AuthContentColumn(
uiState = uiState,
onUsername = viewModel::onUsername,
onEmail = viewModel::onEmail,
onPassword = viewModel::onPassword,
onChangeAuthType = viewModel::changeAuthType,
onCustomAuth = viewModel::onCustomAuth,
onSignGoogleSignIn = viewModel::onGoogleSignIn,
onSignInWithIntent = viewModel::onSignInWithIntent,
)
}
```
On successful authentication, the composable clears keyboard focus and calls the `onSignIn` callback, which is usually responsible for navigating to another screen. On unsuccessful authentication, an error snackbar is shown. Here's the code for it:
In `MainActivity.kt` create a `LocalSnackbarController` variable
```kotlin
val LocalSnackbarController = compositionLocalOf<SnackbarController> {
error("No snackbar host state provided")
}
```
Then, inside the `setContent` function, wrap the app's content with `CompositionLocalProvider` and define a dismissible snackbar. To learn more about `CompositionLocalProvider`, visit [this page](https://developer.android.com/develop/ui/compose/compositionlocal)
```kotlin
setContent {
val snackbarHostState = remember {
SnackbarHostState()
}
val swipeToDismissBoxState = rememberSwipeToDismissBoxState(confirmValueChange = {value ->
if (value != SwipeToDismissBoxValue.Settled) {
snackbarHostState.currentSnackbarData?.dismiss()
true
} else false
})
val snackbarController by remember(snackbarHostState) {
mutableStateOf(SnackbarController(snackbarHostState, lifecycleScope, applicationContext))
}
LaunchedEffect(swipeToDismissBoxState.currentValue) {
if (swipeToDismissBoxState.currentValue != SwipeToDismissBoxValue.Settled) {
swipeToDismissBoxState.reset()
}
}
MoneyMonocleTheme(darkTheme = isThemeDark.value) {
CompositionLocalProvider(LocalSnackbarController provides snackbarController) {
Surface(
Modifier
.fillMaxSize()
.background(MaterialTheme.colorScheme.background)
) {
Box(
Modifier
.fillMaxSize()
.imePadding()
) {
MoneyMonocleNavHost(
navController = navController
)
CustomErrorSnackbar(snackbarHostState = snackbarHostState,
swipeToDismissBoxState = swipeToDismissBoxState)
}
}
}
}
}
```
After that, in a separate file, create `SnackbarController` along with the `CustomErrorSnackbar` composable
```kotlin
class SnackbarController(
private val snackbarHostState: SnackbarHostState,
private val coroutineScope: CoroutineScope,
private val context: Context,
) {
fun showErrorSnackbar(result: CustomResult) {
if (result is CustomResult.DynamicError || result is CustomResult.ResourceError) {
coroutineScope.launch {
snackbarHostState.currentSnackbarData?.dismiss()
snackbarHostState.showSnackbar(result.error.asString(context))
}
}
}
}
@OptIn(ExperimentalMaterial3Api::class)
@Composable
fun CustomErrorSnackbar(snackbarHostState: SnackbarHostState,
swipeToDismissBoxState: SwipeToDismissBoxState) {
SwipeToDismissBox(state = swipeToDismissBoxState, backgroundContent = {}) {
SnackbarHost(hostState = snackbarHostState,
snackbar = {data ->
Snackbar(
containerColor = MaterialTheme.colorScheme.errorContainer,
modifier = Modifier
.padding(20.dp)
) {
Row(
verticalAlignment = Alignment.CenterVertically,
modifier = Modifier.fillMaxWidth()
) {
Text(data.visuals.message,
color = MaterialTheme.colorScheme.onErrorContainer,
modifier = Modifier.testTag(stringResource(id = R.string.error_snackbar)))
}
}
})
}
}
```
And that's all there is to it! Good luck on your Android Development journey. | evgensuit |
1,892,909 | SQLynx - Best Web-Based SQL Editor for Developers and Data Analysts | Traditional SQL clients such as DBeaver, DataGrip, Navicat provide a GUI interface. SQLynx also... | 0 | 2024-06-18T22:59:45 | https://dev.to/concerate/sqlynx-best-web-based-sql-editor-for-developers-and-data-analysts-1p0f | Traditional SQL clients such as DBeaver, DataGrip, Navicat provide a GUI interface. SQLynx also provides a web-based SQL Editor.
By adopting SQLynx, DBAs no longer need to distribute database credentials to individuals. DBAs configure the database credentials in SQLynx once, then grant database access to individuals conditionally.
SQLynx is a cutting-edge web-based SQL Integrated Development Environment (IDE) designed for developers and data analysts.
It supports multiple databases, including MySQL, PostgreSQL, and Oracle, and features an intelligent code editor with syntax highlighting, code completion, and refactoring capabilities.
SQLynx's modern web interface ensures cross-platform compatibility (macOS, Windows, Linux), making it user-friendly and easy to configure. This powerful tool simplifies SQL editing and enhances productivity for users managing complex database tasks.
For more information, visit https://www.sqlynx.com/en/#/home/probation/SQLynx
| concerate | |
1,892,907 | Build your Service Mesh: The Proxy | DIY Service Mesh This is a Do-It-Yourself Service Mesh, a simple tutorial for... | 27,820 | 2024-06-18T22:56:15 | https://dev.to/ramonberrutti/build-your-service-mesh-part-1-10ed | kubernetes, diy, go, tutorial | ## DIY Service Mesh
This is a Do-It-Yourself Service Mesh, a simple tutorial for understanding
the internals of a service mesh. This project aims to provide a simple,
easy-to-understand reference implementation of a service mesh, which can be used
to learn about the various concepts and technologies used by a service mesh like Linkerd.
## What are you going to learn?
- Build a simple proxy and add service mesh features to it.
- Use Netfilter to intercept and modify network packets.
- Create a simple control plane to manage the service mesh.
- Use gRPC to communicate between the proxy and the control plane.
- Create an Admission Controller to validate and mutate Kubernetes resources.
- Certificate generation flow and mTLS between the services.
- How HTTP/2 works and how to use it with gRPC to balance the traffic between the services.
- Add useful features like circuit breaking, retries, timeouts, and load balancing.
- Add metrics and tracing to the service mesh with OpenTelemetry.
- Canary deployments.
## Some considerations
- Only for learning purposes, not a production-ready service mesh.
- The proxy will print many logs to understand what is happening.
- Use iptables instead of nftables for simplicity.
- Keep the code as simple as possible to make it easy to understand.
- Some Golang errors are ignored for simplicity.
- Everything will be in the same repository to make it easier to understand the project.
## What is going to be built?
The following components are going to be built:
- **proxy-init**: Configure the network namespace of the pod.
- **proxy**: This is the data plane of the service mesh, which is responsible for intercepting and modifying network packets.
- **controller**: This is the control plane of the service mesh, which is responsible for configuring the data plane.
- **injector**: This is an Admission Controller for Kubernetes, which mutates each pod that needs to be meshed.
- **samples apps**: Four simple applications will communicate with each other. (http-client, http-server, grpc-client, grpc-server)
## Tools and Running the project
- [kind](https://kind.sigs.k8s.io/) to create a Kubernetes cluster locally.
- [Tilt](https://tilt.dev/) to run the project and watch for changes.
- [Buf](https://buf.build/) to lint and generate the Protobuf/gRPC code.
- [Docker](https://www.docker.com/) to build the Docker images.
- [k9s](https://k9scli.io/) to interact with the Kubernetes cluster. (Optional)
To start all the components, run the following command:
```bash
kind create cluster
tilt up
```
Tilt will build all the images and deploy all the components to the Kubernetes cluster.
All the images are created by the `Dockerfile` in the root directory.
The **main branch** contains the WIP version of the project.
## Architecture
The architecture of the service mesh is composed of the following components:

## Creating the HTTP applications
- **http-client**: This application is going to call the `http-server` service.
- **http-server**: This application will be called by the `http-client` service.
In the next steps, `grpc-client` and `grpc-server` will be created.
http-client:
```go
func main() {
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
n := 0
endpoint := os.Getenv("ENDPOINT")
if endpoint == "" {
endpoint = "http://http-server.http-server.svc.cluster.local./hello"
}
httpClient := &http.Client{
Timeout: 5 * time.Second,
}
// This application will call the endpoint every second
ticker := time.NewTicker(time.Second)
for {
select {
case <-ctx.Done():
return
case <-ticker.C:
req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint, nil)
if err != nil {
panic(err)
}
resp, err := httpClient.Do(req)
if err != nil {
panic(err)
}
dump, err := httputil.DumpResponse(resp, true)
if err != nil {
panic(err)
}
resp.Body.Close()
n++
fmt.Printf("Response #%d\n", n)
fmt.Println(string(dump))
}
}
}
```
http-server:
```go
func main() {
ctx, cancel := signal.NotifyContext(context.Background(), syscall.SIGINT, syscall.SIGTERM)
defer cancel()
failRate, _ := strconv.Atoi(os.Getenv("FAIL_RATE"))
n := uint64(0)
hostname := os.Getenv("HOSTNAME")
version := os.Getenv("VERSION")
var b bytes.Buffer
b.WriteString("Hello from the http-server service! Hostname: ")
b.WriteString(hostname)
b.WriteString(" Version: ")
b.WriteString(version)
mux := http.NewServeMux()
mux.HandleFunc("/hello", func(w http.ResponseWriter, r *http.Request) {
// Dump the request
dump, _ := httputil.DumpRequest(r, true)
fmt.Printf("Request #%d\n", atomic.AddUint64(&n, 1))
fmt.Println(string(dump))
fmt.Println("---")
// Simulate failure
if failRate > 0 {
// Get a random number between 0 and 99
n := rand.Intn(100)
if n < failRate {
http.Error(w, "Internal server error", http.StatusInternalServerError)
fmt.Println("Failed to process request")
return
}
}
w.Header().Set("Content-Type", "text/plain")
w.WriteHeader(http.StatusOK)
w.Write(b.Bytes())
})
server := &http.Server{
Addr: ":8080",
Handler: mux,
}
g, ctx := errgroup.WithContext(ctx)
g.Go(func() error {
<-ctx.Done()
return server.Shutdown(context.Background())
})
g.Go(func() error {
return server.ListenAndServe()
})
if err := g.Wait(); err != nil {
if err != http.ErrServerClosed {
panic(err)
}
}
}
```
The `http-server` service is going to respond with a message that contains the
hostname and the version of the service.
Failures will be simulated by setting the FAIL_RATE environment variable.
Each service is going to be deployed in a different namespace:
- http-client: `http-client` Deployment in the `http-client` namespace: [http-client.yaml](https://github.com/ramonberrutti/diy-service-mesh/blob/main/k8s/http-client.yaml)
- http-server: `http-server` Deployment in the `http-server` namespace: [http-server.yaml](https://github.com/ramonberrutti/diy-service-mesh/blob/main/k8s/http-server.yaml)
## Testing the service mesh
Check the logs of the `http-client` and `http-server` pods to see the communication between the services.
http-client logs:
```bash
Response #311
HTTP/1.1 200 OK
Content-Length: 71
Content-Type: text/plain
Date: Sat, 08 Jun 2024 19:38:27 GMT
Hello from http-server service! Hostname: http-server-799c77dc9b-56lmd Version: 1.0
```
http-server logs:
```bash
Request #171
GET /hello HTTP/1.1
Host: http-server.http-server.svc.cluster.local.
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
```
## Implementing the proxy to intercept the HTTP/1.1 requests and responses.
### Why need a proxy?
The proxy will intercept all inbound and outbound traffic of the services (except traffic explicitly ignored).
Linkerd has a Rust-based proxy called `linkerd2-proxy`, and Istio has a C++-based proxy called `Envoy`,
which is a very powerful proxy with a lot of features.
Our proxy code is going to be very simple, loosely mirroring the functionality of `linkerd2-proxy`.
For now, the proxy will listen on two ports: one for inbound traffic and another for outbound traffic.
- **4000** for the inbound traffic.
- **5000** for the outbound traffic.
This is a basic proxy implementation that intercepts HTTP requests and HTTP responses.
```go
func main() {
ctx, cancel := signal.NotifyContext(context.Background(), os.Interrupt)
defer cancel()
g, ctx := errgroup.WithContext(ctx)
// Inbound connection
g.Go(func() error {
return listen(ctx, ":4000", handleInboundConnection)
})
// Outbound connection
g.Go(func() error {
return listen(ctx, ":5000", handleOutboundConnection)
})
if err := g.Wait(); err != nil {
panic(err)
}
}
func listen(ctx context.Context, addr string, accept func(net.Conn)) error {
l, err := net.Listen("tcp", addr)
if err != nil {
return fmt.Errorf("failed to listen: %w", err)
}
defer l.Close()
go func() {
<-ctx.Done()
l.Close()
}()
for {
conn, err := l.Accept()
if err != nil {
return fmt.Errorf("failed to accept: %w", err)
}
go accept(conn)
}
}
```
The `listen` function listens on the port and calls the `accept` function when a connection is established.
### handleInboundConnection
All inbound traffic is intercepted by the proxy, which prints the request, forwards it
to the local destination port, and prints the response.
```go
func handleInboundConnection(c net.Conn) {
defer c.Close()
// Get the original destination
_, port, err := getOriginalDestination(c)
if err != nil {
fmt.Printf("Failed to get original destination: %v\n", err)
return
}
fmt.Printf("Inbound connection from %s to port: %d\n", c.RemoteAddr(), port)
// Read the request
req, err := http.ReadRequest(bufio.NewReader(c))
if err != nil {
fmt.Printf("Failed to read request: %v\n", err)
return
}
// Call the local service port
upstream, err := net.Dial("tcp", fmt.Sprintf("127.0.0.1:%d", port))
if err != nil {
fmt.Printf("Failed to dial: %v\n", err)
return
}
defer upstream.Close()
// Write the request
if err := req.Write(io.MultiWriter(upstream, os.Stdout)); err != nil {
fmt.Printf("Failed to write request: %v\n", err)
return
}
// Read the response
resp, err := http.ReadResponse(bufio.NewReader(upstream), req)
if err != nil {
fmt.Printf("Failed to read response: %v\n", err)
return
}
defer resp.Body.Close()
// Write the response
if err := resp.Write(io.MultiWriter(c, os.Stdout)); err != nil {
fmt.Printf("Failed to write response: %v\n", err)
return
}
// Add a newline for better readability
fmt.Println()
}
```
The `handleInboundConnection` function first reads the original destination port for our service;
iptables records that destination on the socket using the `SO_ORIGINAL_DST` option.
The function `getOriginalDestination` retrieves the original destination of the TCP connection.
(This is a Linux-specific feature.)
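Since the repository code isn't inlined in the post, here is a minimal sketch of how `getOriginalDestination` can work on Linux. It assumes `net`, `fmt`, and `syscall` imports; the value 80 is `SO_ORIGINAL_DST` from `<linux/netfilter_ipv4.h>`, and the real repository may differ in the details:
```go
const soOriginalDst = 80 // SO_ORIGINAL_DST from <linux/netfilter_ipv4.h>

func getOriginalDestination(c net.Conn) (net.IP, uint16, error) {
	tcpConn, ok := c.(*net.TCPConn)
	if !ok {
		return nil, 0, fmt.Errorf("not a TCP connection")
	}

	raw, err := tcpConn.SyscallConn()
	if err != nil {
		return nil, 0, err
	}

	var mreq *syscall.IPv6Mreq
	var sockErr error
	// Run getsockopt(SO_ORIGINAL_DST) on the underlying file descriptor.
	if err := raw.Control(func(fd uintptr) {
		mreq, sockErr = syscall.GetsockoptIPv6Mreq(int(fd), syscall.IPPROTO_IP, soOriginalDst)
	}); err != nil {
		return nil, 0, err
	}
	if sockErr != nil {
		return nil, 0, sockErr
	}

	// The kernel fills in a sockaddr_in: bytes 2-3 hold the port (big endian)
	// and bytes 4-7 hold the IPv4 address.
	port := uint16(mreq.Multiaddr[2])<<8 | uint16(mreq.Multiaddr[3])
	ip := net.IPv4(mreq.Multiaddr[4], mreq.Multiaddr[5], mreq.Multiaddr[6], mreq.Multiaddr[7])
	return ip, port, nil
}
```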
After that, the handler reads the request, forwards it to the local service port, reads
the response, and sends it back to the client.
For visibility, print the request and response using `io.MultiWriter` to write to the connection and stdout.
### handleOutboundConnection
The outbound handler looks very similar to the inbound one, but it forwards the request to the target service.
```go
func handleOutboundConnection(c net.Conn) {
defer c.Close()
// Get the original destination
ip, port, err := getOriginalDestination(c)
if err != nil {
fmt.Printf("Failed to get original destination: %v\n", err)
return
}
fmt.Printf("Outbound connection to %s:%d\n", ip, port)
// Read the request
req, err := http.ReadRequest(bufio.NewReader(c))
if err != nil {
fmt.Printf("Failed to read request: %v\n", err)
return
}
// Call the external service ip:port
upstream, err := net.Dial("tcp", fmt.Sprintf("%s:%d", ip, port))
if err != nil {
fmt.Printf("Failed to dial: %v\n", err)
return
}
defer upstream.Close()
// Write the request
if err := req.Write(io.MultiWriter(upstream, os.Stdout)); err != nil {
fmt.Printf("Failed to write request: %v\n", err)
return
}
// Read the response
resp, err := http.ReadResponse(bufio.NewReader(upstream), req)
if err != nil {
fmt.Printf("Failed to read response: %v\n", err)
return
}
defer resp.Body.Close()
// Write the response
if err := resp.Write(io.MultiWriter(c, os.Stdout)); err != nil {
fmt.Printf("Failed to write response: %v\n", err)
return
}
// Add a newline for better readability
fmt.Println()
}
```
As can be seen, the only difference is in how the external service is called.
```go
// Call the external service ip:port
upstream, err := net.Dial("tcp", fmt.Sprintf("%s:%d", ip, port))
if err != nil {
fmt.Printf("Failed to dial: %v\n", err)
return
}
defer upstream.Close()
```
It is important to note that the client has already resolved DNS by the time the connection reaches the proxy, so only the IP and the port need to be provided.
## How are the connections intercepted?
### Understanding Kubernetes Pod Networking
All containers in a Kubernetes pod share the same network namespace, so `localhost` is the same for every container in the pod.
### initContainers
Kubernetes has a feature called `initContainers`: containers that run before the main containers start. These containers must finish before the main containers start.
### iptables
`iptables` is a powerful tool for managing Netfilter on Linux; it can be used to intercept and modify network packets.
Before our http-client and http-server containers start, proxy-init configures Netfilter to redirect
all traffic to the proxy's inbound and outbound ports.
```go
func main() {
// Configure the proxy
commands := []*exec.Cmd{
// Default accept for all nat chains
exec.Command("iptables", "-t", "nat", "-P", "PREROUTING", "ACCEPT"),
exec.Command("iptables", "-t", "nat", "-P", "INPUT", "ACCEPT"),
exec.Command("iptables", "-t", "nat", "-P", "OUTPUT", "ACCEPT"),
exec.Command("iptables", "-t", "nat", "-P", "POSTROUTING", "ACCEPT"),
// Create custom chains so is possible jump to them
exec.Command("iptables", "-t", "nat", "-N", "PROXY_INBOUND"),
exec.Command("iptables", "-t", "nat", "-N", "PROXY_OUTBOUND"),
// Jump to custom chains, if something is not matched, will return to the default chains.
exec.Command("iptables", "-t", "nat", "-A", "PREROUTING", "-p", "tcp", "-j", "PROXY_INBOUND"),
exec.Command("iptables", "-t", "nat", "-A", "OUTPUT", "-p", "tcp", "-j", "PROXY_OUTBOUND"),
// Set rules for custom chains: PROXY_INBOUND, redirect all inbound traffic to port 4000
exec.Command("iptables", "-t", "nat", "-A", "PROXY_INBOUND", "-p", "tcp", "-j", "REDIRECT", "--to-port", "4000"),
// Set rules for custom chains: PROXY_OUTBOUND
// Ignore traffic between the containers.
exec.Command("iptables", "-t", "nat", "-A", "PROXY_OUTBOUND", "-o", "lo", "-j", "RETURN"),
exec.Command("iptables", "-t", "nat", "-A", "PROXY_OUTBOUND", "-d", "127.0.0.1/32", "-j", "RETURN"),
// Ignore outbound traffic from the proxy container.
exec.Command("iptables", "-t", "nat", "-A", "PROXY_OUTBOUND", "-m", "owner", "--uid-owner", "1337", "-j", "RETURN"),
// Redirect all the outbound traffic to port 5000
exec.Command("iptables", "-t", "nat", "-A", "PROXY_OUTBOUND", "-p", "tcp", "-j", "REDIRECT", "--to-port", "5000"),
}
for _, cmd := range commands {
if err := cmd.Run(); err != nil {
panic(fmt.Sprintf("failed to run command %s: %v\n", cmd.String(), err))
}
}
fmt.Println("Proxy initialized successfully!")
}
```
Some important points:
- To allow outbound traffic from the proxy container, the `iptables` option `--uid-owner` is used.
- The use of custom chains is to make it easier to understand the rules and allow to return to the default chains if the rule is not matched.
- The `REDIRECT` target is used to redirect traffic to the proxy and is responsible for adding the `SO_ORIGINAL_DST` information to the socket.
### Adding the proxy and proxy-init containers to the deployments:
```yaml
spec:
initContainers:
- name: proxy-init
image: diy-sm-proxy-init
imagePullPolicy: IfNotPresent
securityContext:
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
containers:
- name: proxy
image: diy-sm-proxy
imagePullPolicy: IfNotPresent
securityContext:
runAsUser: 1337
- name: http-client
image: diy-sm-http-client
imagePullPolicy: IfNotPresent
```
The same configuration is going to be applied to the `http-server` deployment.
Some important points:
- proxy-init runs as an init container that configures the pod's network namespace and then exits.
- `NET_ADMIN` and `NET_RAW` are Linux capabilities necessary to configure Netfilter;
without these capabilities, `iptables` can't make the system calls that set up the rules.
- Setting `runAsUser: 1337` on the `proxy` container is very important: it allows the proxy's own traffic to leave the pod, matching the `--uid-owner 1337` exemption rule.
### Logs output for the proxy and the applications
http-server logs:
```bash
proxy Inbound connection from 10.244.0.115:60296 to port: 8080
http-server Request #13
http-server GET /hello HTTP/1.1
http-server Host: http-server.http-server.svc.cluster.local.
http-server Accept-Encoding: gzip
http-server User-Agent: Go-http-client/1.1
proxy GET /hello HTTP/1.1
proxy Host: http-server.http-server.svc.cluster.local.
proxy User-Agent: Go-http-client/1.1
proxy Accept-Encoding: gzip
proxy
proxy HTTP/1.1 200 OK
proxy Content-Length: 86
proxy Content-Type: text/plain
proxy Date: Mon, 17 Jun 2024 20:58:46 GMT
proxy
proxy Hello from the http-server service! Hostname: http-server-c6f4776bb-mmw2d Version: 1.0
```
http-client logs:
```bash
proxy Outbound connection to 10.96.182.169:80
proxy GET /hello HTTP/1.1
proxy Host: http-server.http-server.svc.cluster.local.
proxy User-Agent: Go-http-client/1.1
proxy Accept-Encoding: gzip
proxy
proxy HTTP/1.1 200 OK
proxy Content-Length: 86
proxy Content-Type: text/plain
proxy Date: Mon, 17 Jun 2024 21:04:53 GMT
proxy
proxy Hello from the http-server service! Hostname: http-server-c6f4776bb-slpdf Version: 1.0
http-client Response #16
http-client HTTP/1.1 200 OK
http-client Content-Length: 86
http-client Content-Type: text/plain
http-client Date: Mon, 17 Jun 2024 21:04:53 GMT
http-client
http-client Hello from the http-server service! Hostname: http-server-c6f4776bb-slpdf Version: 1.0
```
### Adding the proxy-init and proxy containers manually?
Doing this is not very practical, right?
Kubernetes has a feature called `Admission Controller`: a webhook that can validate and mutate objects before they are persisted in etcd.
Learn more about it [here](https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/).
In the next steps, a mutating admission controller will be created to inject the proxy-init and proxy containers into the pods that need to be meshed.
## Next
For the next part, the focus will be to create the Admission Controller to inject the proxy-init and the proxy.
## Contact and Github Project
The GitHub repository contains the work-in-progress version of the project.
Feel free to contact me: https://www.linkedin.com/in/ramonberrutti/
| ramonberrutti |
1,892,906 | Terminal and SSH Tips | A few tips to get a bit more security in your terminal, mainly situational... | 0 | 2024-06-18T22:55:39 | https://dev.to/rafaone/dicas-de-terminal-e-ssh-47bd | linux, ssh, terminal | A few tips to get a bit more security in your terminal, mainly situational security in the sense of your surroundings, and in your SSH connections.
- Use a prompt customizer like starship; it can hide the machine name and the username, and you can customize it to your own use, even creating your own pattern.

- Avoid password logins on your SSH server; learn about SSH keys, keep yours in an encrypted location, and use a passphrase;
- Configure your .ssh/config with your hosts, ports, and usernames.
You go from "ssh -p7642 usexpto@server.com" to just "ssh server.com".

- Change your SSH port; this helps, but does not stop the attacks by itself.
- If you can't restrict access to VPN-only, implement "Port Knocking" if possible.
#dicas #linux #ssh #terminal | rafaone |
1,892,905 | Building my Own wc Tool in Rust | Introduction In this post, I'll walk you through my journey of building a custom version... | 0 | 2024-06-18T22:51:13 | https://dev.to/krymancer/building-my-own-wc-tool-in-rust-l0c | rust, unix, cli, learning | ### Introduction
In this post, I'll walk you through my journey of building a custom version of the Unix `wc` (word count) tool, which I named `ccwc` (Coding Challenges Word Count). This project was inspired by a [coding challenge](https://codingchallenges.fyi/challenges/challenge-wc) designed to teach the Unix Philosophy of creating simple, composable command-line tools. You can find more details and source code in my [GitHub repository](https://github.com/Krymancer/ccwc).
### The Challenge
The challenge was to build a `wc` tool that can:
- Count the number of bytes, characters, lines, and words in a file.
- Handle command-line options to specify what to count.
- Read from standard input if no file is specified.
[The Unix Philosophy](http://www.catb.org/~esr/writings/taoup/html/) encourages writing simple programs that do one thing well and can be combined to perform complex tasks. Following this principle, I broke down the challenge into several steps, each adding a new feature to the tool.
### Step-by-Step Development
#### Step Zero: Data Structure
One of my favorite things in Rust has to be the type system; that's why I created a struct to represent the stats we might need to compute from the input
```rust
struct Stats {
pub bytes: usize,
pub chars: usize,
pub lines: usize,
pub max_line_length: usize,
pub words: usize,
pub path: String,
}
```
#### Step One: Counting Bytes
The first task was to implement the `-c` option to count the number of bytes in a file. I did this using a 'constructor':
```rust
impl Stats {
pub fn new(chars: Vec<u8>, path: String) -> Self {
let mut stats = Stats {
bytes: chars.len(),
chars: 0,
lines: 0,
max_line_length: 0,
words: 0,
path,
};
// Additional processing...
stats
}
}
```
This function initializes a `Stats` struct with the number of bytes based on the length of the input vector.
#### Step Two: Counting Lines
Next, I added support for the `-l` option to count the number of lines. This required iterating through the characters and incrementing the line count whenever a newline character (`\n`) was encountered:
```rust
for c in chars {
    if c == b'\n' {
        stats.lines += 1;
    }
}
```
#### Step Three: Counting Words
The `-w` option was a bit more complex, as it required detecting word boundaries. I used a simple state machine to keep track of whether the current character is part of a word:
```rust
let mut in_word = false;
for c in chars {
    if !c.is_ascii_whitespace() {
        in_word = true;
    } else if in_word {
        stats.words += 1;
        in_word = false;
    }
}
// Count a final word that ends at end-of-input without trailing whitespace.
if in_word {
    stats.words += 1;
}
```
#### Step Four: Counting Characters
The `-m` option counts the number of characters. This is straightforward unless the locale supports multibyte characters. For simplicity, I treated each byte as a character in this implementation.
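Under that simplification, the `-m` count is just the byte length. A small sketch, with a Unicode-aware variant left commented out as a hypothetical alternative (not what `ccwc` actually ships):
```rust
// Byte-as-character simplification used in this implementation:
stats.chars = chars.len();

// A Unicode-aware alternative would decode the bytes as UTF-8 first:
// stats.chars = String::from_utf8_lossy(&chars).chars().count();
```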
#### Step Five: Default Behavior
I'm handling flags in a very straightforward way:
```rust
let args = env::args().skip(1); // Skip the first argument (most likely the binary name, i.e., ccwc)
let mut flags: Vec<char> = vec![];
let mut files_paths: Vec<String> = vec![];
let available_flags = ['c', 'm', 'l', 'L', 'w']; // All available flags
for arg in args {
if arg.starts_with('-') { // if the argument starts with a '-', it must be one or more flags
for flag in arg.chars().skip(1) { // skipping the '-'
flags.push(flag);
}
} else { // otherwise it's treated as a file path
files_paths.push(arg);
}
}
let invalid_flag = flags.iter().find(|flag| !available_flags.contains(flag));
```
If no options are provided, the tool should count bytes, lines, and words by default:
```rust
if flags.is_empty() {
flags.push('c');
flags.push('l');
flags.push('w');
}
```
#### Final Step: Reading from Standard Input
The final feature was to allow the tool to read from standard input if no file is specified:
```rust
let number_of_files = files_paths.len();
if number_of_files < 1 {
let mut buffer = Vec::new();
match io::stdin().read_to_end(&mut buffer) {
Ok(_) => {
stats.push(Stats::new(buffer, "".to_string()));
},
Err(err) => { panic!("Error reading from stdin: {err}")}
}
}
```
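With stdin support in place, the tool composes in pipelines just like the real `wc`. Typical invocations might look like this, assuming the binary is installed as `ccwc`:
```bash
ccwc -l src/main.rs      # count lines in a file
cat src/main.rs | ccwc   # default byte/line/word counts read from stdin
```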
### Conclusion
Building `ccwc` was a rewarding experience that really helped me learn Rust. By breaking the project into manageable steps, I was able to incrementally add features and ensure each part worked correctly before moving on.
Feel free to check out the complete source code on my [GitHub repository](https://github.com/Krymancer/ccwc). I hope this post inspires you to tackle similar challenges and build your own command-line tools.
Take care folks! | krymancer |
1,892,904 | Vitoli builders, Inc. | At Vitoli Builders, Inc. in Calabasas, CA, we specialize in transforming your outdoor spaces with... | 0 | 2024-06-18T22:50:13 | https://dev.to/vitolibuildersinc/vitoli-builders-inc-535m |

At Vitoli Builders, Inc. in Calabasas, CA, we specialize in transforming your outdoor spaces with our expert landscaping and hardscaping services. With years of experience, our dedicated team delivers stunning lawns and custom hardscape installations that reflect your unique style. Whether you need a custom BBQ area, low-maintenance pool design, or regular lawn upkeep, you can count on us to get the job done right. We pride ourselves on creating beautiful, functional landscapes at prices that fit your budget. Trust Vitoli Builders, Inc. to enhance your Calabasas property with exceptional landscaping solutions tailored to your needs.
Vitoli builders, Inc.
Address: 23586 Calabasas Road Suite 209, Calabasas, CA 91302
Phone: 866-575-5795
Website: [https://vitolilandscapedesign.com/](https://vitolilandscapedesign.com/)
Contact email: Info@vitolilandscapedesign.com
Visit Us:
[Vitoli builders, Inc. Yelp](https://www.yelp.com/biz/vitoli-landscape-design-calabasas-3)
[Vitoli builders, Inc. Nextdoor](https://nextdoor.com/pages/vitoli-landscape-design-woodland-hills-ca/)
[Vitoli builders, Inc. BuildZoom](https://www.buildzoom.com/contractor/vitoli-builders-inc)
Our Services:
Exterior Painting
Hillside Landscaping Specialists
Pool & Spa Design and Remodeling
Landscaping
Hardscaping
EcoScaping | vitolibuildersinc | |
1,891,325 | Managing Whitelist Overload: Ensuring Effective NFT drops with Proof of Funds | In this article, we explore the benefits of requiring proof of funds for whitelist minting in NFT projects. | 0 | 2024-06-18T22:44:44 | https://dev.to/passandscore/managing-whitelist-overload-ensuring-effective-nft-drops-with-proof-of-funds-3cnn | ---
title: "Managing Whitelist Overload: Ensuring Effective NFT drops with Proof of Funds"
description: " In this article, we explore the benefits of requiring proof of funds for whitelist minting in NFT projects."
tags: []
categories: []
published: true
keywords: ""
slug: managing-whitelist-overload-ensuring-effective-token-mints-with-proof-of-funds
link:
- rel: canonical
href: https://www.jasonschwarz.xyz/articles/managing-whitelist-overload-ensuring-effective-token-mints-with-proof-of-funds
---
How often have you curated a whitelist based on partners and community members, only to find the user count far exceeds the total supply of your upcoming drop? You might assume this guarantees a quick sell-out. However, this is frequently not the case.
## Problem
When the entry barrier is as low as simple social participation, you open the door for users who register without genuine intention or financial ability to purchase the token. This can lead to over-subscription and complex management issues. Moreover, if the drop does not sell out, you're left wondering how this happened despite having what seemed like an ample number of interested users ready to mint.
## Solution
Offering a whitelist mint exclusively to users who can prove they have the necessary funds provides several strategic and practical advantages. This ensures that only users who are genuinely interested and financially capable of purchasing the tokens secure a place on the whitelist. Such an approach fosters a more engaged and committed community. Additionally, users who demonstrate financial readiness are more likely to complete their purchases, thereby maximizing your overall whitelist conversion rate.
## Example Implementation
Consider a receive method in a smart contract used for handling the whitelist process. In this scenario, users can self-whitelist by sending Ethereum to the contract. This example includes error handling to verify that self-whitelisting is enabled, the caller is not blacklisted, and the amount of Ether sent is correct. Upon validation, the user's address is added to the whitelist, and their funds are returned. This approach was inspired by Vultisig.
This is only a conceptual example. Do not use this code in production as it has not been audited.
```solidity
receive() external payable {
if (_isSelfWhitelistDisabled) {
revert SelfWhitelistDisabled();
}
if (_isBlacklisted[_msgSender()]) {
revert Blacklisted();
}
if (msg.value != MINT_PRICE) {
revert InsufficientFunds();
}
_addWhitelistedAddress(_msgSender());
payable(_msgSender()).transfer(msg.value);
}
```
## Alternative Implementation Methods
- Pre-Signed Transactions: Users might be required to sign a transaction demonstrating they have the funds available.
- Staking Mechanisms: Users may need to stake a certain amount of cryptocurrency to get whitelisted, which can be refunded later (see the sketch below).
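As a rough sketch of the staking idea (conceptual and unaudited, like the example above; `STAKE_AMOUNT` is an assumed constant, while `InsufficientFunds` and `_addWhitelistedAddress` are the same pieces used in the earlier example):
```solidity
mapping(address => uint256) public stakes;

// Lock a deposit to claim a whitelist spot.
function stakeForWhitelist() external payable {
    if (msg.value < STAKE_AMOUNT) {
        revert InsufficientFunds();
    }
    stakes[msg.sender] += msg.value;
    _addWhitelistedAddress(msg.sender);
}

// Return the deposit later, e.g. after the mint window closes.
function refundStake() external {
    uint256 amount = stakes[msg.sender];
    require(amount > 0, "nothing staked");
    stakes[msg.sender] = 0; // zero out before transferring to avoid re-entrancy
    (bool ok, ) = msg.sender.call{value: amount}("");
    require(ok, "refund failed");
}
```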
## Further Discussion
Although this article focused on NFTs, the approach of requiring proof of funds for whitelist minting is also valuable for ERC20 tokens, offering numerous advantages beyond the immediate context.
Enhancing Fairness and Accessibility: By requiring proof of funds, we create a level playing field where only those with verified financial capability can participate. This ensures a more equitable distribution process and helps avoid bots and fake accounts that might otherwise exploit the system.
Streamlining the Process: Proof of funds simplifies logistics and ensures that participants can immediately fund their purchases. This efficient allocation makes the token distribution process smoother and helps in managing initial token supply dynamics, contributing to price stability.
Enhancing Security and Compliance: Implementing proof of funds can assist in meeting Anti-Money Laundering (AML) compliance requirements by adding a verification layer. It also mitigates the risk of fraud by ensuring that participants use legitimate and traceable funds.
## Your Thoughts
What are your thoughts on requiring proof of funds for whitelisting? Can this approach enhance the credibility, fairness, and efficiency of a project?
I'd love to hear your insights! Share your opinions, and let's continue the conversation in the next article.
Connect with me on social media:
[X](https://x.com/passandscore)
[GitHub](https://github.com/passandscore)
[LinkedIn](https://www.linkedin.com/in/jason-schwarz-75b91482/)
| passandscore | |
1,892,898 | PNG to JPG: Optimizing Your Image Formats | What Are the Differences Between PNG and JPG Images? PNG (Portable Network Graphics) and... | 0 | 2024-06-18T22:26:47 | https://dev.to/msmith99994/png-to-jpg-optimizing-your-image-formats-3ogd | ## What Are the Differences Between PNG and JPG Images?
PNG (Portable Network Graphics) and JPG (or JPEG - Joint Photographic Experts Group) are two of the most commonly used image formats, each with unique characteristics that suit different purposes. Understanding these differences can help you decide which format to use and when it's beneficial to convert between them.
### PNG
- Compression: PNG uses lossless compression, preserving all image data without losing quality. This results in larger file sizes.
- Color Depth: Supports 24-bit color, which can display millions of colors, and an 8-bit alpha channel for transparency.
- File Size: Larger due to lossless compression, which ensures no data is lost.
- Transparency: Supports transparency, making it ideal for images that need clear backgrounds or layers.
- Use Cases: Preferred for web graphics, logos, icons, digital art, and images requiring high quality and transparency.
### JPG
- Compression: JPG uses lossy compression, reducing file size by discarding some image data. This can result in a loss of quality, especially at higher compression levels.
- Color Depth: Supports 24-bit color, displaying millions of colors, making it ideal for photographs.
- File Size: Generally smaller due to lossy compression, which is beneficial for web use.
- Transparency: Does not support transparency.
- Use Cases: Widely used for digital photography, web images, social media, and email due to its balance of quality and file size.
## Where Are They Used?
### PNG
- Web Graphics: Ideal for logos, icons, and images with transparent backgrounds.
- Digital Art: Preferred for images with sharp edges, text, and transparent elements.
- Screenshots: Commonly used for screenshots to capture exact screen details without quality loss.
- Print Media: Used in scenarios where high quality and lossless compression are required.
### JPG
- Digital Photography: Standard format for digital cameras and smartphones due to its balance of quality and file size.
- Web Design: Widely used for photographs and complex images on websites because of its quick loading times.
- Social Media: Preferred for sharing images on social platforms due to its universal support and small file size.
- Email and Document Sharing: Frequently used in emails and documents for easy viewing and sharing.
## Advantages and Disadvantages
### PNG
**Advantages:**
- Lossless Compression: Maintains original image quality without any loss.
- Transparency: Supports transparent backgrounds and varying levels of opacity.
- High Color Depth: Suitable for images requiring detailed color representation.
- Ideal for Editing: No quality loss through multiple edits and saves.
**Disadvantages:**
- Larger File Sizes: Larger than JPG files due to lossless compression, which can be a drawback for web use.
- Not Ideal for Photographs: Typically results in larger files for photographic images compared to JPG.
- Browser Compatibility: While widely supported, PNG can be less efficient for large images on older systems.
### JPG
**Advantages:**
- Small File Size: Effective lossy compression reduces file sizes significantly.
- Wide Compatibility: Supported by almost all devices, browsers, and software.
- High Color Depth: Capable of displaying millions of colors, ideal for photographs.
- Adjustable Quality: Compression levels can be adjusted to balance quality and file size.
**Disadvantages:**
- Lossy Compression: Quality degrades with higher compression levels and repeated edits.
- No Transparency: Does not support transparent backgrounds.
- Limited Editing Capability: Cumulative compression losses make it less ideal for extensive editing.
## How to Convert PNG to JPG
Converting [PNG to JPG](https://cloudinary.com/tools/png-to-jpg) can be beneficial when you need smaller file sizes and do not require transparency. Here are several methods to convert PNG images to JPG:
## Conversion Methods
**1. Using Online Tools:**
Websites like Convertio and Online-Convert allow you to upload PNG files and download the converted JPG files.
**2. Using Image Editing Software:**
Software like Adobe Photoshop and GIMP support both PNG and JPG formats. Open your PNG file and save it as JPG.
**3. Command Line Tools:**
Command-line tools like ImageMagick can be used for conversion (see the example after this list).
**4. Programming Libraries:**
Programming libraries such as Python's Pillow can be used to automate the conversion process in applications (see the sketch after this list).
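For instance, assuming ImageMagick is installed, a single command performs the conversion (the filenames are placeholders):
```bash
# Convert a PNG to JPG at 90% quality; any transparency is discarded
convert input.png -quality 90 output.jpg
```
And a minimal Pillow sketch (again with placeholder filenames) drops the alpha channel before saving, since JPG does not support transparency:
```python
from PIL import Image

img = Image.open("input.png")
rgb = img.convert("RGB")  # remove the alpha channel; JPG cannot store it
rgb.save("output.jpg", quality=90)
```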
## Conclusion
PNG and JPG are essential image formats, each suited for different purposes. PNG is favored for its lossless compression and transparency support, making it ideal for web graphics, digital art, and detailed images. JPG, on the other hand, is valued for its smaller file sizes and wide compatibility, making it perfect for digital photography and web use. | msmith99994 | |
1,892,903 | Is the Insurer Refusing to Pay Out Your Life or Disability Insurance? | Facing an insurer's refusal to pay out a life or disability policy can be a... | 0 | 2024-06-18T22:43:13 | https://dev.to/fedegarcia/la-aseguradora-se-niega-a-pagarte-el-seguro-de-vida-o-invalidez-46ni | Facing an insurer's refusal to pay out a life or disability insurance policy can be a frustrating and discouraging experience. This article is designed to help you understand your options and the steps you can take if you find yourself in this situation. We will explore the common reasons insurers reject claims and how [life insurance specialist lawyers](https://www.burgueraabogados.com/abogados-seguros-de-vida/) can assist you in fighting for your rights.
## Common Reasons for Refusing to Pay
### 1. Breach of Contract Conditions
Insurers often deny claims when they believe the policyholder has not met all the conditions of the [life insurance contract](https://es.quora.com/profile/Manuel-Benitez-58/Todo-lo-que-Debes-Saber-sobre-el-Contrato-de-Seguro-de-Vida). This can include omissions on the application form, unpaid premiums, or failure to disclose relevant medical information.
### 2. Policy Exclusions
Life and disability insurance policies usually include a list of exclusions, such as suicide within the first years of the policy or deaths resulting from dangerous activities. Insurers may use these exclusions as grounds to reject a claim.
### 3. Disputes over the Cause of Death or Disability
In some cases, insurers may question the cause of the death or disability, arguing that it is not covered by the policy. For example, they may claim that the death resulted from a criminal act or from an undisclosed pre-existing condition.
## What to Do if Your Claim Is Rejected
### Review the Reason for Rejection
The first thing you should do is carefully review the insurer's rejection letter to understand the specific reason your claim was denied. This will allow you to address the insurer's concerns directly.
### Gather Additional Documentation
Collect all the relevant documentation that supports your claim. This may include medical records, witness statements, and any other evidence showing that you meet the conditions of the policy.
### Consult a [Specialist Lawyer](https://www.dibiz.com/burguera)
Consider consulting life insurance specialist lawyers. These professionals have the experience needed to assess your case and determine whether the insurer's refusal is justified. They can help you file a solid appeal and, if necessary, represent you in legal proceedings.
### File an Appeal
Most insurers have an internal appeals process that you can use to dispute the refusal to pay. Make sure to follow all the required steps and submit any additional documentation that supports your case.
## Benefits of Hiring a [Specialist Lawyer](https://burguera-abogados.jimdosite.com/)
### Experience Handling Claims
[Life insurance specialist lawyers](https://abogados-seguros-de-vida.peatix.com/) have a deep understanding of the laws and regulations governing life and disability insurance. They know how to navigate the complex claims process and how to counter the tactics insurers use to deny payouts.
### Effective Legal Representation
[A specialist lawyer](https://burguera-abogados-abogados-esp-179fd7.webflow.io/) can represent you effectively both in the internal appeals process and in any legal action you decide to pursue. Their experience in the [legal profession](https://www.abogacia.es/) allows them to defend your rights and fight for the benefits you are owed.
### Maximizing Benefits
[Life insurance lawyers](https://abogados-seguros-de-vida.peatix.com/) can help you maximize the benefits of your policy by making sure all the conditions are met and properly presented. They can identify any errors or inconsistencies in the insurer's refusal and work to correct them.
## Additional Resources
### Reports and Guides
For more information on how to handle a life insurance claim, you can consult this [detailed report on life insurance claims](https://online.publuu.com/553649/1244652). This resource offers a step-by-step guide to understanding the claims process and how you can increase your chances of success.
### Connecting with Professionals
Connecting with other professionals and beneficiaries who have faced similar situations can provide valuable support and advice. There are online forums and groups where you can share experiences and get recommendations on the best [specialist lawyers](https://www.burgueraabogados.com/abogados-seguros-de-vida/).
## Conclusion
Facing an insurer's refusal to pay out a life or disability policy can be a significant challenge. However, with the right information and the right support, you can fight for your rights and obtain the benefits you are owed. Always remember to carefully review the reasons for the refusal, gather all the relevant documentation, and consider consulting [life insurance specialist lawyers](https://burguera-abogados.weebly.com/) for the best possible representation.
If you need more information about the claims process or how a [life insurance contract](https://es.quora.com/profile/Manuel-Benitez-58/Todo-lo-que-Debes-Saber-sobre-el-Contrato-de-Seguro-de-Vida) can protect you, don't hesitate to consult the additional resources and contact a professional in the field. Your well-being and that of your loved ones are too important to leave in the hands of unfair insurer decisions.
| fedegarcia | |
1,889,736 | Big-O Notation: One Byte Explainer | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-18T22:42:26 | https://dev.to/jgracie52/big-o-notation-one-byte-explainer-1o9o | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Big-O notation is a worst case runtime. An algorithm of O(n^2), with n=200 inputs, will at worst take 40,000 iterations to run. Big-O is useful in determining how optimized an algo is. An algo of O(2^n) will take longer to run than an algo of O(log(n)).
## Additional Context
There’s plenty more to runtime analysis than just Big-O. For instance, Big-O is focused on the overarching runtime of an algorithm (the part of the algo that takes the longest). It does not, however, concern itself with the __exact__ amount of time an algo will take (otherwise we'd be looking at stuff like O(2n+37)).
To demonstrate, consider the below loops. Both loops have the same Big-O of O(n) (which is called linear time) since they iterate through a list of numbers from 0-n _one_ time. But, technically, the first loop will run a smidge faster since it has fewer operations (i.e. it isn't doing the extra if-else branching).
```python
from time import time

def timer_func(func):
    # This function shows the execution time of
    # the function object passed
    def wrap_func(*args, **kwargs):
        t1 = time()
        result = func(*args, **kwargs)
        t2 = time()
        print(f'Function {func.__name__!r} executed in {(t2-t1):.4f}s')
        return result
    return wrap_func

@timer_func
def basicLoop1(x):
    sumX = 0
    for i in range(x):
        sumX += i
    return sumX

@timer_func
def basicLoop2(x):
    sumX = 0
    for i in range(x):
        if i > 5:
            sumX += i * 2
        else:
            sumX += i
    return sumX

if __name__ == '__main__':
    basicLoop1(10000000) # Takes ~0.43s
    basicLoop2(10000000) # Takes ~0.88s
```
Another thing to consider is that Big-O is focused on the **_worst_** case scenario. There may be instances where, on average, an algorithm runs faster than its Big-O runtime.
We denote this **_average_** runtime as Big-θ (big theta). There is also a chance that, for really good cases, it may run even faster. Best case runtimes can be denoted using Big-Ω (big omega).
A simple example of why this is important is when comparing merge sort to insertion sort.
Merge sort is O(nlogn), while insertion is O(n^2). So, when looking at large reverse-sorted lists (our worst case scenario for insertion sorting), merge sort is always better.
But what about when the list is already sorted? Merge sort still takes nlogn time, but insertion sort is now Ω(n) time.
You’ll notice that, when lists are **_mostly_** if not fully sorted, insertion sort will tend to run faster than its merge sort counterpart. Because of this, it may be reasonable to use insertion sort instead of merge sort if we are reasonably sure the lists we are getting are mostly sorted to begin with.
There is obviously a lot more to Big-O and runtime analysis than what I've covered here. If you'd like a more thorough explanation, I highly recommend [this](https://www.youtube.com/watch?v=v4cd1O4zkGw) video from HackerRank as a starting point.
Hopefully this helped a bit with your understanding of Big-O. Thanks for reading and happy coding! | jgracie52 |
1,892,902 | Parsing and Validating Data in Elixir | In the enchanting world of Elixir programming, data validation is a quest every developer embarks on.... | 0 | 2024-06-18T22:40:53 | https://dev.to/zoedsoupe/parsing-and-validating-data-in-elixir-1310 | webdev, elixir, parsing, programming | In the enchanting world of Elixir programming, data validation is a quest every developer embarks on. It's a journey through the land of schemas, types, and constraints, ensuring data integrity and correctness. Today, we'll explore four powerful artifacts: Ecto, Norm, Drops, and Peri. Each of these tools offers unique powers for taming your data. We'll delve into their strengths, use cases, and compare them to help you choose the right one for your quest.
## The Quest: Parse, Don't Validate
Before we embark on our journey, let's discuss a guiding principle in functional programming: **Parse, Don't Validate**. This pattern emphasizes transforming data into a well-defined structure as early as possible. By doing so, you avoid scattered, ad-hoc validation throughout your codebase, leading to clearer, more maintainable code. It's like casting a spell to organize the chaos of raw data into a neat, structured form.
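As a minimal sketch of the pattern (the module and fields here are illustrative), raw input is parsed into a well-defined struct at the boundary, so the rest of the code can rely on its shape:
```elixir
defmodule SignupForm do
  defstruct [:name, :email]

  # Parse once at the boundary; downstream code receives a struct, not a raw map
  def parse(%{"name" => name, "email" => email})
      when is_binary(name) and is_binary(email) do
    {:ok, %__MODULE__{name: name, email: email}}
  end

  def parse(_), do: {:error, :invalid_input}
end
```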
## The Artifacts
### 1. Ecto
Ecto is a robust toolkit primarily designed for interacting with databases. However, it also offers powerful capabilities for embedded schemas and schemaless changesets, making it versatile for data validation.
#### Embedded Schemas
Ecto allows defining schemas that don't map to a database table, ideal for validating nested data structures.
```elixir
defmodule User do
  use Ecto.Schema

  embedded_schema do
    field :name, :string
    field :email, :string
  end
end

def changeset(data) do
  %User{}
  |> Ecto.Changeset.cast(data, [:name, :email])
  |> Ecto.Changeset.validate_required([:name, :email])
end
```
#### Schemaless Changesets
For dynamic data, Ecto provides schemaless changesets, offering flexibility at the cost of increased complexity.
```elixir
def changeset(data) do
  Ecto.Changeset.cast({%{}, %{name: :string, email: :string}}, data, [:name, :email])
  |> Ecto.Changeset.validate_required([:name, :email])
end
```
### 2. Norm
Norm focuses on defining and conforming to data structures with custom predicates, offering a clean syntax and powerful validation.
```elixir
defmodule User do
  import Norm

  defschema do
    schema(%{
      name: spec(is_binary()),
      age: spec(is_integer() and &(&1 > 18))
    })
  end
end

Norm.conform(%{name: "Jane", age: 25}, User.schema())
# => {:ok, %{name: "Jane", age: 25}}
```
### 3. Drops
Drops is a newer library that provides a rich set of tools for defining and validating schemas, leveraging Elixir's type system.
```elixir
defmodule UserContract do
  use Drops.Contract

  schema do
    %{
      required(:name) => string(:filled?),
      required(:age) => integer(gt?: 18)
    }
  end
end

UserContract.conform(%{name: "Jane", age: 21})
# => {:ok, %{name: "Jane", age: 21}}
```
### 4. Peri
Peri is inspired by Clojure's Plumatic Schema, focusing on validating raw maps with nested schemas and optional fields. It's designed to be powerful yet simple, embracing the "Parse, Don't Validate" pattern.
```elixir
defmodule MySchemas do
  import Peri

  defschema :user, %{
    name: :string,
    age: :integer,
    email: {:required, :string},
    role: {:enum, [:admin, :user]}
  }

  defschema :profile, %{
    user: {:custom, &MySchemas.user/1},
    bio: :string
  }
end

MySchemas.user(%{name: "John", age: 30, email: "john@example.com", role: :admin})
# => {:ok, %{name: "John", age: 30, email: "john@example.com", role: :admin}}

MySchemas.user(%{name: "John", age: "thirty", email: "john@example.com"})
# => {:error, [%Peri.Error{path: [:age], message: "expected integer received \"thirty\""}]}
```
### Conditional and Composable Types in Peri
Peri shines with its support for conditional and composable types, making it a powerful tool for complex validation scenarios.
```elixir
defmodule AdvancedSchemas do
  import Peri

  defschema :user, %{
    name: :string,
    age: {:cond, &(&1 >= 18), :integer, :nil},
    email: {:either, {:string, :nil}},
    preferences: {:list, {:oneof, [:string, :atom]}}
  }
end

AdvancedSchemas.user(%{name: "Alice", age: 25, email: nil, preferences: ["coding", :reading]})
# => {:ok, %{name: "Alice", age: 25, email: nil, preferences: ["coding", :reading]}}

AdvancedSchemas.user(%{name: "Bob", age: 17})
# => {:ok, %{name: "Bob", age: 17, email: nil, preferences: nil}}
```
## Conclusion
Each of these tools offers unique advantages and caters to different needs:
- **Ecto** is great for data associated with databases but can handle schemaless and embedded data structures too.
- **Norm** provides a clean and powerful way to define and validate data structures.
- **Drops** leverages Elixir's type system and offers rich schema definitions and validations.
- **Peri** emphasizes simplicity and power, supporting complex types and conditional validations.
By understanding the strengths and weaknesses of each, you can choose the right tool for your data validation needs in Elixir. Happy coding, fellow sorcerers of Elixiria!
### References
- [Ecto Documentation](https://hexdocs.pm/ecto/Ecto.html)
- [Norm Documentation](https://hexdocs.pm/norm/Norm.html)
- [Drops Documentation](https://hexdocs.pm/drops/Drops.html)
- [Peri Documentation](https://hexdocs.pm/peri/Peri.html)
Feel free to dive into the source code and contribute to these projects to make Elixiria an even more magical place! | zoedsoupe |
1,892,900 | Day 973 : Deal With That | liner notes: Professional : So... after all that time I spent filling out the form for the visa, I... | 0 | 2024-06-18T22:28:58 | https://dev.to/dwane/day-973-deal-with-that-2d78 | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : So... after all that time I spent filling out the form for the visa, I got rejected, again. The reason was the same as before: the wrong type was chosen. The organizer is telling me that it's the right type, but the visa people are saying it's not. I'm not going to deal with it anymore. I'm throwing in the towel and letting someone else deal with it. haha Responded to some community questions. Spent the rest of the day working on refactoring a project.
- Personal : Went through some tracks for the radio show. Worked a little on my side project. Didn't do too much. Watched an episode of "The Boys" and went to sleep.

Going to go through more tracks for the radio show. I really need to finish this logo for my side project and get it uploaded. Probably end the night watching a couple of episodes of "Demon Slayer". Going to eat dinner and get to work.
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube huqzP39Zo10 %} | dwane |
1,892,899 | Hashing | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-18T22:26:49 | https://dev.to/valentintt/hashing-318p | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Hashing: converts input data into a fixed-size string (hash) using a one-way function. Key for quick data lookups (hash tables), data integrity, blockchain, password storage, and cryptography. Unlike encryption, it doesn't convert data for confidentiality.
## Additional Context
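As a minimal sketch of the idea using Python's standard library (the inputs are arbitrary examples):
```python
import hashlib

# Fixed-length digest regardless of input size; one-way by design
print(hashlib.sha256(b"hello").hexdigest())
# A tiny change in input yields a completely different hash
print(hashlib.sha256(b"hello!").hexdigest())
```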
Picture from: [SignMyCode](https://signmycode.com/resources/best-hashing-algorithms) (Copyright © 2024 SignMyCode.com. All Rights Reserved.) | valentintt |
1,892,889 | Nuxt + ESLint 9 + TypeScript + Prettier - Configuration Guide 2024 | Due to recent updates and compatibility issues, setting up a Nuxt project with the latest versions of... | 0 | 2024-06-18T22:16:26 | https://dev.to/jeanjavi/nuxt-eslint-9-typescript-prettier-configuration-guide-2024-4h2c | vue, nuxt, javascript, typescript | Due to recent updates and compatibility issues, setting up a Nuxt project with the latest versions of ESLint 9, Prettier, and TypeScript can be challenging. This guide will walk you through initializing a Nuxt project and configuring ESLint and Prettier for the latest standards.
## Why ESLint + Prettier?
ESLint helps find and fix problems in your JavaScript code, while Prettier ensures consistent code formatting. Using them together in the latest versions enhances code quality and developer productivity.
## Initialize Nuxt Project
Start by initializing a Nuxt project with the latest Nuxt CLI:
```bash
npx nuxi@latest init nuxt-project-name
```
## Required Extensions for Visual Studio Code
To ensure optimal development workflow and proper integration of ESLint and Prettier, the following extensions are required in Visual Studio Code:
1. **ESLint**: Provides JavaScript and TypeScript linting support.
- [ESLint Extension](https://marketplace.visualstudio.com/items?itemName=dbaeumer.vscode-eslint)
2. **Prettier - Code Formatter**: Formats your code using Prettier.
- [Prettier Extension](https://marketplace.visualstudio.com/items?itemName=esbenp.prettier-vscode)
Make sure to install these extensions to leverage the full capabilities of ESLint and Prettier in your Nuxt project.
## ESLint 9 Installation and Configuration
Follow these steps to set up ESLint:
```bash
npm init @eslint/config@latest
```
### Installation Answers
**How would you like to use ESLint?**
- `To check syntax and find problems`
**What type of modules does your project use?**
- `JavaScript modules (import/export)`
**Which framework does your project use?**
- `Vue.js`
**Does your project use TypeScript?**
- `Yes`
**Where does your code run?**
- ✅ Browser
- ✅ Node
**Would you like to install them now?**
- `Yes`
**Which package manager do you want to use?**
- Select your preferred option or use `npm` by default.
### ESLint Initial Configuration
The default ESLint configuration can be customized further. Start with this base configuration:
```js
import globals from "globals";
import pluginJs from "@eslint/js";
import tseslint from "typescript-eslint";
import pluginVue from "eslint-plugin-vue";

export default [
  { languageOptions: { globals: { ...globals.browser, ...globals.node } } },
  pluginJs.configs.recommended,
  ...tseslint.configs.recommended,
  ...pluginVue.configs["flat/essential"],
  // Add your custom rules here
];
```
This setup uses the flat configuration format, which became the default in ESLint 9.
## Prettier
### Installation
Install Prettier as a development dependency:
```bash
npm install --save-dev --save-exact prettier
```
### Configuration Files
Create a `.prettierrc` file to define Prettier rules:
```json
{
"semi": false,
"singleQuote": true,
"trailingComma": "none"
}
```
Create a `.prettierignore` file to specify files and directories to ignore:
```bash
# Ignore build output directories
.nuxt/
dist/
# Ignore node modules
node_modules/
# Ignore specific configuration files
*.config.js
# Ignore environment variables files
.env
.env.*
# Ignore lock files
yarn.lock
package-lock.json
# Ignore logs
*.log
# Ignore compiled files
*.min.js
*.min.css
# Ignore specific file types
*.png
*.jpg
*.jpeg
*.gif
*.svg
# Ignore other generated files
coverage/
```
### Test Prettier
To test Prettier, run:
```bash
npx prettier index.js --write
```
### Auto-format on Save
Enable auto-formatting in Visual Studio Code:
1. Search `format on save` in settings (`CTRL + ,`) and enable `Editor: Format on Save`.
2. Search `Default Formatter` in settings and select `Prettier - Code Formatter`.
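If you prefer to keep these editor settings in the repository, the equivalent workspace configuration is a small `.vscode/settings.json` file (a minimal sketch; both keys are standard VS Code settings):
```json
{
  "editor.formatOnSave": true,
  "editor.defaultFormatter": "esbenp.prettier-vscode"
}
```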
## Combining Prettier and ESLint
Install `eslint-config-prettier` to turn off all conflicting rules:
```bash
npm i eslint-config-prettier
```
### Update ESLint Configuration
Update your `eslint.config.js` to integrate Prettier:
```js
import globals from "globals";
import pluginJs from "@eslint/js";
import tseslint from "typescript-eslint";
import pluginVue from "eslint-plugin-vue";
import eslintConfigPrettier from "eslint-config-prettier";

export default [
  { languageOptions: { globals: { ...globals.browser, ...globals.node } } },
  pluginJs.configs.recommended,
  ...tseslint.configs.recommended,
  ...pluginVue.configs["flat/essential"],
  // 👇👇👇 NEW CODE 👇👇👇
  eslintConfigPrettier,
  // 👆👆👆 NEW CODE 👆👆👆
];
```
## Add TypeScript Support for `.vue` Files
Extend the ESLint configuration to support TypeScript in Vue files:
```js
import globals from "globals";
import pluginJs from "@eslint/js";
import tseslint from "typescript-eslint";
import pluginVue from "eslint-plugin-vue";
import eslintConfigPrettier from "eslint-config-prettier";

export default [
  { languageOptions: { globals: { ...globals.browser, ...globals.node } } },
  pluginJs.configs.recommended,
  ...tseslint.configs.recommended,
  ...pluginVue.configs["flat/essential"],
  // 👇👇👇 NEW CODE 👇👇👇
  {
    files: ['**/*.vue'],
    languageOptions: {
      parserOptions: {
        parser: '@typescript-eslint/parser'
      }
    }
  },
  // 👆👆👆 NEW CODE 👆👆👆
  eslintConfigPrettier,
];
```
## Ignoring Files in ESLint
Specify files and directories to ignore in ESLint:
```js
import globals from "globals";
import pluginJs from "@eslint/js";
import tseslint from "typescript-eslint";
import pluginVue from "eslint-plugin-vue";
import eslintConfigPrettier from "eslint-config-prettier";

export default [
  { languageOptions: { globals: { ...globals.browser, ...globals.node } } },
  pluginJs.configs.recommended,
  ...tseslint.configs.recommended,
  ...pluginVue.configs["flat/essential"],
  {
    files: ['**/*.vue'],
    languageOptions: {
      parserOptions: {
        parser: '@typescript-eslint/parser'
      }
    }
  },
  // 👇👇👇 NEW CODE 👇👇👇
  {
    ignores: ['node_modules', 'dist', 'public', '.nuxt']
  },
  // 👆👆👆 NEW CODE 👆👆👆
  eslintConfigPrettier,
];
```
By following these steps, you'll have a Nuxt project well-configured with ESLint, Prettier, and TypeScript, ensuring high code quality and consistency throughout your development process. | jeanjavi |
1,892,897 | ※#@$$+2.7.7.8.4.1.1.5.7.4.6. 가등록함ஜ ஜhow to Join illuminati 6666 for Money | ※#@$$+2.7.7.8.4.1.1.5.7.4.6. 가등록함ஜ۩۞۩ஜhow to Join illuminati 6666 for Money $##@## HOW... | 0 | 2024-06-18T22:13:26 | https://dev.to/cuba_fuba_35ee26b7612c24c/27784115746-gadeungroghamjjhow-to-join-illuminati-6666-for-money-1h77 | webdev, javascript, beginners, programming | ※#@$$+2.7.7.8.4.1.1.5.7.4.6. 가등록함ஜ۩۞۩ஜhow to Join illuminati 6666 for Money $##@##
#HOW TO JOIN ILLUMINATI TO BECOME RICH AND FAMOUS FOREVER+2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_South_africa +2.7.7.8.4.1.1.5.7..4.6
#how_to_join_illuminati_in_uganda_+27784115746
#How_to_join_Illuminati+2.7.7.8.4.1.1.5.7.4.6.
#How_to_join_Illuminati_in_witbank +2.7.7.8.4.1.1.5.7.4.6.
#How_to_join_Illuminati_in_Johannesburg +2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_Pretoria +2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_Durban +2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_cape town +2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_southAfrica +2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_church_in_southAfrica +2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_free_state +2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_western_cape +2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_George_western cape+2.7.7.8.4.1.1.5.7.4.6
#How_to_join_Illuminati_in_Bloemfontein+2.7.7.8.4.1.1.5.7.4.6 Call the grand master on to join the most powerful secret society in the world, we don't force any one to join as it's you yourself to decide your future. You will be guided through the whole process and be helped on how to join the occult. Hail 666 in gabolon francistown Kweneng Molepolole Maun Mahalapye Selibe Phikwe Serowe Kanye Mochudi Lobatse Tshabong Kasane Ghanzi Ramotswa Masunga Sowa Town Jwaneng SOUTH AFRICA EUROPE ASIA UNITED STATES monaco kampala uganda gulu jinja mbarara juba sudan how to get rich fast Kuwait.Europ.Canada.Arizona.Arkansas.Colorado.Connecticut.Florida.Georgia.Idaho.Indiana.Kentucky, Louisiana.Mississippi.Missouri.Montana.Nebraska.North.Carolina.North.Dakota.Ohio.Oklahoma.Puerto Rinco*, South Carolina, South Dakota, Tennessee, Texas, Virginia, Washington, Washington DC, West Virginia, Wisconsin, Wyoming Virgin Islands California*, Delaware, Illinois, Iowa, Kansas, Maine, Massachusetts, Michigan, Minnesota, New Hampshire, New Jersey, New York, Oregon, Pennsylvania, Rhode Island, Vermont UK and Ireland Portugal Switzerland France Luxembourg Belgium and Austria Spain
Australia Ireland New Zealand South Africa United Kingdom iran Mexico Brazil Argentina Colombia Peru Venezuela Chile Belize Costa Rica Cuba El Salvador French Guiana Grenada Guatemala Guyana Honduras Jamaica Nicaragua Panama Suriname Bolivia Ecuador Paraguay Uruguay
Uk France Germany Portugal Spain Italy Hungary Poland Switzerland Romania Bulgaria Denmark Finlad Netherlands Norway Sweden Algeria Cameroon Congo Egypt Madagascar Morocco Chad Jordan Mauritania Sudan Tunisia South Africa Bahrain,Iraq,Kuwait,Oman,Qatar,Saudi Arabia,UAE Iran Israel Russia China Uzbekistan Ukraine Japan South Korea Taiwan dubai Abu Dhabi Ras Al Khaimah (RAK)Sharjah Fujairah Ajman Umm Al Quwain (UAQ) Kuwait Europ Canada Arizona,
India Bangladesh Srilanka Myanmar Malaysia Indonesia Philippines Vietnam Thailand Australia,New Zealand kampala nairobi johannesburg rastenburg mafiheng durban cape town gaboroni francis town botswana zambia USA MALAYSIA UK LONDON AUSTRALIA lusaka kampala nairobi mafikeng durban cape town gaboron francis town botswana zambia CANADA Afghanistan, Albania, Algeria, American Samoa, Andorra, Angola, Anguilla, Antigua and Barbuda, Argentina, Armenia, Aruba, Australia, Austria, Azerbaijan, Bahamas, Bahrain, Bangladesh, Barbados, Belarus, Belgium, Belize, Benin, Bermuda, Bhutan, Bolivia, Bosnia-Herzegovina, Botswana, Bouvet Island, Brazil, Brunei, Bulgaria, Burkina Faso, Burundi, Cambodia, Cameroon, Canada, Cape Verde, Cayman Islands, Central African Republic, Chad, Chile, China, Christmas Island, Cocos Islands, Colombia, Comoros, Congo, Democratic, Congo, Cook Islands, Costa Rica, Croatia, Cuba, Cyprus, Czech Republic, Denmark, Djibouti, Dominica, Dominican Republic, Ecuador, Egypt, El Salvador, Equatorial Guinea, Eritrea, Estonia, Ethiopia, Falkland Islands, Faroe Islands, Fiji, Finland, France, French Guiana, Gabon, Gambia, Georgia, Germany, Ghana, Gibraltar, Greece, Greenland, Grenada, Guadeloupe, Guam, Guatemala, Guinea, Guinea Bissau, Guyana, Haiti, Holy See, Honduras, Hong Kong, Hungary, Iceland, India, Indonesia, Iran, Iraq, Ireland, Israel, Italy, Ivory Coast, Jamaica, Japan, Jordan, Kazakhstan, Kenya, Kiribati, Kuwait, Kyrgyzstan, Laos, Latvia, Lebanon, Lesotho, Liberia, Libya, Liechtenstein, Lithuania, Luxembourg, Macau, Macedonia, Madagascar, Malawi, Malaysia, Maldives, Mali, Malta, Marshall Islands, Martinique, Mauritania, Mauritius, Mayotte, Mexico, Micronesia, Moldova, Monaco, Mongolia, Montenegro, Montserrat, Morocco, Mozambique, Myanmar, Namibia, Nauru, Nepal, Netherlands, Netherlands Antilles, New Caledonia, New Zealand, Nicaragua, Niger, Nigeria, Niue, Norfolk Island, North Korea, Northern Mariana Islands, Norway, Oman, Pakistan, Palau, Panama, Papua New Guinea, Paraguay, Peru, Philippines, Pitcairn Island, Poland, Polynesia, Portugal, Puerto Rico, Qatar, Reunion, Romania, Russia, Rwanda, Saint Helena, Saint Kitts and Nevis, Saint Lucia, Saint Pierre and Miquelon, Saint Vincent and Grenadines, Samoa, San Marino, Sao Tome and Principe, Saudi Arabia, Senegal, Serbia, Seychelles, Sierra Leone, Singapore, Slovakia, Slovenia, Solomon Islands, Somalia, South Africa, South Georgia and South Sandwich Islands, South Korea, Spain, Sri Lanka, Sudan, Suriname, Svalbard and Jan Mayen Islands, Swaziland, Sweden, Switzerland, Syria, Taiwan, Tajikistan, Tanzania, Thailand, Timor-Leste, Togo, Tokelau, Tonga, Trinidad and Tobago, Tunisia, Turkey, Turkmenistan, Turks and Caicos Islands, Tuvalu, Uganda, Ukraine, United Arab Emirates, United Kingdom, United States, Uruguay, Uzbekistan, Vanuatu, Venezuela, Vietnam, Virgin Islands, Wallis and Futuna Islands, Yemen, Zambia, Zimbabwe Aankoms,Acornhoek,Addo,Adelaide,Afguns,Aggeneys,Albert Luthuli,Albertina,Alberton,Alexander Bay,Alexandra,Alexandria,Alice,Allanridge,Alldays,Amajuba,Amalia,Amalienstein,Amanzimtoti,Amsterdam,Andriesvale,Anysspruit,Argent,Arlington,Arniston,Ashton,Askham,Athlone,Atlantic Ocean,Atlantic Seaboard,Atlantis,Atteridgeville,Augrabies,Aureus,Aurora,Avoca,Avontuur,Ba-Phalaborwa,Baardskeerdersbos,Babanango,Babelegi,Badplaas,Balfour,Balgowan,Ballito,Balmoral,Bandelierkop,Bankkop,Bantry Bay,Barberton,Barkly West,Barrydale,Bathurst,Beaufort 
West,Bedford,Beeshoek,Beestekraal,Bekkersdal,Bela-Bela,Belfast,Belhar,Bellville,Benoni,Berbice,Bergville,Bergvliet,Berlin,Bethal,Bethlehem,Bethulie,Bettiesdam,Bettys Bay,Bhisho,Bhongweni,Bishop Lavis,Bishopscourt,Bitterfontein,Bizana,Black Rock,Bloemfontein,Bloemhof,Bloubergstrand,Blue Downs,Bo-Kaap,Bochum,Boipatong,Bojanala Platinum District,Boksburg,Bonnievale,Bonteheuwel,Bophelong,Bosbokrand,Boshof,Boston,Bot River,Botha AH,Bothasig,Bothaville,Botshabelo,Brackenfell,Brakpan,Branddraai,Brandfort,Brandvlei,Brandwacht,Braunschweig,Bray,Bredasdorp,Breede River Valley,Breyten,Brits,Britstown,Broederstroom,Bronberg,Brondal,Bronkhorstspruit,Brooklyn,Buffalo City,Buffelsjagbaai,Bultfontein,Bulwer,Burgersfort,Bushbuckridge,Bushveld,Butterworth,Byrne,Cacadu,Caledon,Calitzdorp,Calvinia,Campbell,Camps Bay,Cape Flats,Cape Point,Cape Town,Capri Village,Capricorn,Carletonville,Carnarvon,Carolina,Carolusberg,Cathcart,Catoridge,Cedarville,Central Karoo,Centurion,Ceres,Charlestown,Chintsa,Chrissiesmeer,Christiana,Citrusdal,City Bowl,City of Matlosana,City of Tshwane,Clanwilliam,Claremont,Clarens,Clifton,Clocolan,Cloetesville,Clovelly,Coffee Bay,Colenso,Colesberg,Coligny,Concordia,Constantia,Cookhouse,Copperton,Cornelia,Cradock,Crawford,Culemborg Park,Cullinan,Dalsig,Dalton,Danielskuil,Dannhauser,Dargle,Darling,Davale,Daveyton,De Aar,De Doorns,De Kelders,De Rust,De Waterkant,Dealesville,Delareyville,Delft,Delmas,Delportshoop,Dendron,Deneysville,Derby,Despatch,Devils Peak,Devon,Dewetsdorp,Diamond Fields,Dibeng,Die Boord,Diep River,Diepdale,Diepgezet,Dingleton,Dohne,Doonside,Doringbaai,Douglas,Dr Kenneth Kaunda District,Dr Ruth Segomotsi Mompati District,Drummond,Duduza,Duiwelskloof,Dullstroom,Dundee,Dundonald,Durban,Durbanville,Dwarskloof AH,Dysselsdorp,EMonti,EThekwini,Eascarpment,East London,Eastern Cape,Eastern Free State,Edenburg,Edenvale,Edenville,Edgemead,Eendekuil,Eerstehoek,Eikepark,Ekangala,EkuPhakameni,Ekulindeni,Ekurhuleni,Eland SH,Elandsbaai,Elandslaagte,Electric City,Elgin,Elim,Elliot,Ellisras,Elsies River,Elukwatini,Emalahleni,Embhuleni,Emfuleni,Empangeni,Emphuluzi,Enkhaba,Epping,Escarpment,Eshowe,Estcourt,Evaton,Excelsior,Fauresmith,Ficksburg,Finsbury,Fish Hoek,Flagstaff,Fochville,Fort Beaufort,Fouriesburg,Frances Baard,Frankfort,Franklin,Franschhoek,Franskraal,Fraserburg,Free State,Fresnaye,Ga-Rankuwa,Gansbaai,Ganyesa,Gardens,Garies,Gauteng,Gcuwa,Genadendal,George,Germiston,Gert Sibande,Glencairn,Glencoe,Glenmore,GoldFields,Gonubie,Goodwood,Gordons Bay,Gouda,Graaff Reinet,Graafwater,Grabouw,Grahamstown,Gras and Wetlands,Graskop,Grassy Park,Gravelotte,Great Brak River,Greater Tubatse,Greater Tzaneen,Green Hills,Green Kalahari,Green Point,Greylingstad,Greyton,Greytown,Griquatown,Groblershoop,Groot Marico,Groot-Elandsvlei AH,Gugulethu,Haarlem,Haartebeesfontien,Haenertsburg,Haga-Haga,Hamburg,Hammanskraal,Hangklip,Hankey,Hanover,Hanover Park,Hantam Karoo,Harburg,Harding,Harrismith,Hartbeesfontein,Hartbeespoort,Hartebeeskop,Hartenbos,Hartswater,Hattingspruit,Hazyview,Heathfield,Hectorspruit,Hectorton,Heidelberg,Heilbron,Hekpoort,Helderberg,Helikon Park,Hennenman,Hermanus,Hermon,Hertzog,Hertzogville,Hibberdene,Higgovale,Highveld and Cosmos,Hillcrest,Hillside AH,Hilton,Himeville,Hluhluwe,Hobhouse,Hoedspruit,Hogsback,Home Lake,Hondeklip,Hoopstad,Hopefield,Hopetown,Hotazel,Hout Bay,Howick,Humansdorp,Hutchinson,ILembe,Idas Valley,Idutywa,Ifafa Beach,Illovo Beach,Impendile,Impumelelo,Inanda,Indian Ocean,Ingwavuma,Irene,Isando,Isipingo Beach,Itumeleng,Ixopo,Jacobsdal,Jagersfontein,Jan Kempdorp,Jeffreys 
Bay,Jericho,Johannesburg,John Taola Gaetsewe,Joubertina,KaNgwane,Kaapmuiden,Kagiso,Kakamas,Kalahari,Kalbaskraal,Kalk Bay,Kalksteenfontein,Kamieskroon,Kanoneiland,Kareedouw,Karendal,Karridene,Katberg,Kathu,Katlehong,Kayamandi,Kei Mouth,Keimoes,Keiskammahoek,Kelso,Kempton Park,Kenhardt,Kenilworth,Kensington,Kentani,Kenton-on-Sea,Kestell,Kgalagadi,Kgotsong,Khayelitsha,Khutsong,Kidds Beach,Kimberley,King Sabata Dalindyebo,King Williams Town,Kingsburgh,Kinross,Kirkwood,Klaarstroom,Klaserie,Klawer,Klein Karoo,Kleinbaai,Kleinmond,Klerksdorp,Kloof,Knysna,Kocksoord,Koelenhof,Koffiefontein,Kokstad,Komatipoort,Komga,Kommetjie,Koppies,Koringberg,Kosmos,Koster,Kraaifontein,Krakeelrivier,Kranskop,Kromdraai,Kroondal,Kroonstad,Krugersdorp,Kuils River,Kungwini,Kuruman,KwaDukuza,KwaMashu,KwaMhlanga,KwaThema,KwaZulu-Natal,LAgulhas,La Coline,La Lucia,La Mercy,Ladismith,Lady Frere,Ladybrand,Ladysmith,Laingsburg,Lamberts Bay,Langa,Langebaan,Lansdowne,Lavender Hill,Leeu-Gamka,Leeudoringstad,Lehurutshe,Leipoldtville,Lejweleputswa,Lenasia,Lephalale,Lesedi,Lethabong,Letsitele,Leydsdorp,Libode,Lichtenburg,Lime Acres,Limpopo,Lindley,Little Brak River,Llandudno,Lochie,Loeriesfontein,Loopspruit,Lotus River,Louis Trichardt,Louisvale,Loumarina AH,Louwsburg,Lowveld,Loxton,Luckhoff,Lusikisiki,Lydenburg,Lynedoch,Maanhaarrand,Mabopane,Macasar,Machadodorp,Madadeni,Madibeng,Mafikeng,Magaliesburg,Mahlabatini,Maitland,Makeleketla,Makhado,Makwassie,Malelane,Malgas,Malmesbury,Maluti a Phofung,Mamelodi,Mamre,Mandini,Manenberg,Mangaung,Marble Hall,Mareetsane,Margate,Marikana,Marquard,Marydale,Masiphumelele,Matatiele,Matjhabeng,Matjiesfontein,Mbhejeka,Mbombela,McGregor,Mdantsane,Melkbosstrand,Melmoth,Memel,Merafong City,Merrivale,Merweville,Messina,Metsweding,Middelvlei AH,Midrand,Midvaal Municipality,Mier,Millside,Milnerton,Mitchells Plain,Mkuze,Mmabatho,Modder River,Moddergat,Modimolle,Modjadjiskloof,Mogalakwena,Mogale City,Mogwadi,Mogwase,Mohlakeng,Mohlakeng Ext 1,Mohlakeng Ext 3,Mohlakeng Ext 4,Mohlakeng Ext 7,Mokopane,Montagu,Monte Vista,Mooinooi,Mooiplaas,Mooiriver,Moorreesburg,Morgans Bay,Morgenzon,Mossel Bay,Mostertsdrift,Mothibastad,Mouille Point,Mount Edgecombe,Mount Fletcher,Mount Frere,Mowbray,Mpumalanga,Msunduzi,Mthatha,Mtubatuba,Mtunzini,Muden,Muizenberg,Muldersdrift,Munsieville,Murraysburg,Musina,Nababeep,Naboomspruit,Namakwa,Namaqualand,Napier,Natures Valley,Nelson Mandela Bay Metro,Nelspoort,Nelspruit,New Germany,New Hanover,Newcastle,Newlands,Ngaka Modiri Molema District,Ngcobo,Nieu-Bethesda,Nieuwoudtville,Nigel,Nkomazi,Nokeng Tsa Taemane,Nongoma,Noodsberg,Noordhoek,North West Province,Northern Cape,Northern Free State,Northern Suburbs,Norvalspont,Nottingham Road,Noupoort,Nuwerus,Nyandeni,Nyanga,Nylstroom,Observatory,Ocean View,Ofcolaco,Ohrigstad,Okiep,Olifantshoek,Onseepkans,Op die Berg,Orania,Oranjeville,Oranjezicht,Orkney,Ottery,Ottosdal,Ottoshoop,Oudtshoorn,Oyster Bay,Paarl,Palm Beach,Pampierstad,Panorama,Panvlak Gold Mine,Papendorp,Park Rynie,Parow,Parys,Patensie,Paternoster,Paul Roux,Paulpietersburg,Pearly Beach,Peddie,Pella,Pellican Park,Pelzvale AH,Pennington,Perdekop,Petrus Steyn,Petrusburg,Petrusville,Phalaborwa,Philippi,Philippolis,Philipstown,Phuthaditjhaba,Piet Retief,Pietermaritzburg,Piketberg,Pilgrims Rest,Pinelands,Pinetown,Pixley ka Seme,Plettenberg Bay,Plumstead,Pofadder,Polokwane,Pomeroy,Pongola,Port Alfred,Port Edward,Port Elizabeth,Port Nolloth,Port Shepstone,Port St Johns,Porterville,Postmasburg,Potchefstroom,Potgietersrus,Pretoria,Prieska,Prince Albert,Prince Alfreds Hamlet,Pringle 
Bay,Putsonderwater,Qolora Mouth,Queensburgh,QwaQwa,Ramokoka,Ramsgate,Randburg,Randfontein,Randfontein Estate Gold Mine,Randfontein Harmony Gold Mine,Randfontein NU,Randfontein South AH,Randgate,Randpoort,Ratanda,Rawsonville,Rayton,Reddersburg,Redelinghuys,Refilwe,Reiger Park,Reitz,Reivilo,Retreat,Richards Bay,Richmond,Riebeek Kasteel,Riebeek West,Riemvasmaak,Rietpoort,Rikasrus AH,Riversdale,Riviersonderend,Robertson,Robin Park,Roedtan,Rondebosch,Rondebosch East,Roodepoort,Rooiels,Rosebank,Rosendal,Rouxville,Rustenburg,Sabie,Saldanha,Salem,Salt River,Salt Rock,Samora Micheal,Sandton,Sannieshof,Saron,Sasolburg,Scarborough,Schotse Kloof,Schweizer-Reneke,Scottburgh,Sea Point,Sebokeng,Secunda,Sedgefield,Sedibeng District,Senekal,Senwabarwana,Seshego,Setlagole,Seymour,Sezela,Sharpeville,Shelly Beach,Simons Town,Simonsig,Sisonke,Siyabuswa,Siyanda,Skeerpoort,Skukuza,Smithfield,Soebatsfontein,Somerset East,Somerset West,Soshanguve,South Peninsula,Southbroom,Southern Suburbs,Soutpansberg,Soweto,Springbok,Springfontein,Springs,St Francis Bay,St Helena Bay,St James,St Lucia,St Michaels-on-Sea,Standerton,Stanford,Stanger,Steenberg,Steinkopf,Stella,Stellenbosch,Steynsrus,Steytlerville,Stilbaai,Stilfontein,Strand,Strandfontein,Struisbaai,Strydenburg,Stutterheim,Sun Valley,Sunland,Sunnydale,Sutherland,Suurbraak,Swartberg,Swartruggens,Swellendam,Swinburne,Table View,Tableview,Tamboerskloof,Tarkastad,Tarlton,Taung,Tembisa,Tenacre AH,Thaba Nchu,Thabazimbi,The Amatola Region,The Cape Metropole,The Garden Route,The Overberg,The West Coast,The Wild Coast,The Winelands,Theunissen,Thohoyandou,Thokoza,Thornton,Three Anchor Bay,Three Sisters,Thulamela,Tlhabane,Toekomsrus,Tokai,Tongaat,Tosca,Touws River,Transgariep,Trawal,Trompsburg,Tsakane,Tshwane,Tsolo,Tulbagh,Tweeling,Tweespruit,Tzaneen,UMgungundlovu,UMhlathuze,UMkhanyakude,UThukela,Ubombo,Ugu,Uitenhage,Ulundi,Umbogintwini,Umdloti,Umgababa,Umhlanga Rocks,Umkomaas,Umtata,Umtentweni,Umzimkulu,Umzinto,Umzinyathi,Umzumbe,Underberg,Uniondale,Universiteitsoord,University Estate,Upington,Upper Karoo,Uthungulu,Uvongo,Vaalbank,Valley of the Olifants,Van Reenen,Van Stadensrus,Van Wyksvlei,Van Zylsrus,Vanderbijlpark,Vanderkloof,Vanrhynsdorp,Vanwyksdorp,Velddrif,Ventersburg,Ventersdorp,Vereeniging,Verkeerdevlei,Verulam,Victoria Bay,Victoria West,Viljoenskroon,Villiers,Villiersdorp,Virginia,Vivo,Vlottenburg,Volksrust,Vosloorus,Voëltjiesdorp,Vrede,Vredefort,Vredehoek,Vredenburg,Vredendal,Vryburg,Vryheid,Waenhuiskrans,Wakkerstroom,Walmer Estate,Warden,Warmbaths,Warner Beach,Warrenton,Wartburg,Wasbank,Waterval Boven,Waterval Onder,Wattville,W
eenen,Welkom,Wellington,Wepener,Wesselsbron,West Beach,West Coast,West Porges,West Rand District,Westergloor,Western Cape,Western Region,Westonaria,Westville,Wetton,Wheatlands AH,White River,Whittlesea,Widenham,Wilbotsdal BENEFITS GIVEN TO NEW MEMBERS WHO JOIN ILLUMINATI. 1. A Cash Reward of USD $15,000,000 USD AH,Wilderness,Williston,Willowmore,Willowvale,Winburg,Windsorton,Winkelspruit,Winterton,Witbank,Witsand,Wolmaransstad,Wolseley,Woodstock,Worcester,Wupperthal,Wynberg,York,Yzerfontein,Zastron,Zebedeila,Zeerust,Zenzele,Zion City Moria,Zithobeni,Zoar,Zonnebloem,Zululand, Algeria Angola Benin Botswana Burkina Faso Burundi Cameroon Cape Verde (Cabo Verde) Central African Republic Chad Comoros Democratic Republic of the Congo Republic of the Congo Djibouti Egypt Equatorial Guinea Eritrea Eswatini (Swaziland) Ethiopia Gabon The Gambia Ghana Guinea Guinea-Bissau Ivory Coast (Côte d'Ivoire) Kenya Lesotho Liberia Libya Madagascar Malawi Mali Mauritania Mauritius Morocco Mozambique Namibia Niger Nigeria Rwanda São Tomé and Príncipe Senegal Seychelles Sierra Leone Somalia South Africa South Sudan Sudan Tanzania Togo Tunisia Uganda Zambia Zimbabwe skype live:abubat https://joinilluminatitoday.wordpress.com/
http://papanaide.co.za
http://joinilluminatitoday.co.za
https://illuminatisecretsocieties.co.za
https://www.dailymotion.com/video/x1d9gjh
https://www.youtube.com/watch?v=gbkNdPNg1OQ
https://sites.google.com/view/best-bring-back-love-pay-after/home
https://howtobringbacklostlovespellscasting.blogspot.com/
https://sites.google.com/view/howtojoinilluminatitoday4money/home?authuser=0
https://sites.google.com/view/bestbringbacklostlovespell-/home?authuser=5
https://mmmty.simdif.com/illuminati_clothing.html
https://youtube.com/shorts/Z2kAvZd5bJs?feature=share
https://youtu.be/xL3Vq6g7wgY
https://youtu.be/iI9yOxn-b1Q
https://youtube.com/shorts/Z2kAvZd5bJs?feature=share
https://youtube.com/shorts/-9bImoVk2DA?feature=share
https://youtube.com/shorts/w_RNWxoIRF0?feature=share
https://youtu.be/q_H5M0RySg4
https://youtube.com/shorts/a7ndA89nW-g?feature=share
https://youtube.com/shorts/PERVjQQB0Mw?feature=share
https://youtube.com/shorts/S5ADtMoa4oI?feature=share
https://youtube.com/shorts/S5ADtMoa4oI?feature=share
https://youtube.com/shorts/I7LVCbCWBUk?feature=share
https://youtube.com/shorts/I7LVCbCWBUk?feature=share
https://youtube.com/shorts/xUAo2UMaINI?feature=share
https://youtube.com/shorts/djjrXNRQQ84?feature=share
https://youtube.com/shorts/djjrXNRQQ84?feature=share
https://youtube.com/shorts/WytpIsVJedc?feature=share
https://youtube.com/shorts/ULCRVnNcOcQ?feature=share
https://youtube.com/shorts/ULCRVnNcOcQ?feature=share
https://youtube.com/shorts/XIjxiH6c5OY?feature=share
https://youtube.com/shorts/IatUayN4CeM?feature=share
| cuba_fuba_35ee26b7612c24c |
1,892,572 | Why Use Layout? | Layout pages are a special type of page that can be used as a template for other pages. They are used... | 27,500 | 2024-06-18T22:10:40 | https://dev.to/elanatframework/why-use-layout-43ih | tutorial, dotnet, beginners, backend | Layout pages are a special type of page that can be used as a template for other pages. They are used to define a common Layout that can be shared across multiple pages.
## Why should we use Layouts?
It is not mandatory to use Layout, but the need for Layout pages arises when you want to define a common layout for multiple pages, such as a header, footer, or navigation menu. By using a Layout page, you can decouple the Layout from each individual page and make it easier to maintain and update.
Usually, in web development, we have several HTML pages that share the same header and footer, and whose head sections are largely identical. A Layout is a page where we can put the header, footer, and head section once, so there is no need to repeat them on every page.

The image above shows the position of a page in orange, which is placed inside a Layout. Layout parts include header and footer and an aside on the right side, all of which are marked with green color.
## Use Layout in [CodeBehind](https://github.com/elanatframework/Code_behind) Framework
The Layout page is a View that is not particularly different from other Views. To specify that this is a Layout page, we need to add the `@islayout` attribute to the View page.
To set the layout on the page, it is enough to add the `@layout` attribute in the View page and write the Layout path in front of it inside two double quotes (").
Example:
Layout page (layout.aspx)
```html
@page
@islayout
<!DOCTYPE html>
<html>
    <head>
        <title>@ViewData.GetValue("title")</title>
    </head>
    <body>
        @PageReturnValue
    </body>
</html>
```
In the example above, an aspx file (`layout.aspx`) has been added to the project in the `wwwroot` path.
Here we have specified that this page is a `Layout` by adding the `@islayout` variable to the page attributes section. The `PageReturnValue` variable will insert the final output of the aspx files in which this Layout is introduced. Between the title tags, there is a `NameValueCollection` attribute (`ViewData`) that all aspx files have access to.
> Note: By default, Layout pages cannot be requested directly by their path.
View (hello-world.aspx)
```html
@page
@layout "/layout.aspx"
@{
    string HelloWorld = "Hello CodeBehind framework!";
    ViewData.Add("title", "Hello World!");
}
<div>
    <h1>Text value is: @HelloWorld</h1>
</div>
```
The above example shows an aspx file (`hello-world.aspx`) in which a layout is introduced.
On this page, `@layout` and the text inside the double quotes indicate that the page has a Layout at the path `wwwroot/layout.aspx`. According to the code above, a name-value pair is added to the `ViewData` attribute with the name `title` and the value `Hello World!`.
> Note: The name value in `ViewData` is case sensitive.
Result in hello-world.aspx path
```html
<!DOCTYPE html>
<html>
    <head>
        <title>Hello World!</title>
    </head>
    <body>
        <div>
            <h1>Text value is: Hello CodeBehind framework!</h1>
        </div>
    </body>
</html>
```
As you can see, the above result is obtained by calling the `hello-world.aspx` path.
## Add Layout in series project
The following link is related to the series project that we taught in the previous tutorial:
[https://dev.to/elanatframework/mvc-example-display-information-based-on-url-2309](https://dev.to/elanatframework/mvc-example-display-information-based-on-url-2309)
Please run the series project in Visual Studio Code. Open the `main.aspx` file located in `wwwroot/series_page` and modify it as follows.
Main View (main.aspx)
```diff
@page
@model {SeriesModelList}
@break
+@layout "layout.aspx"
-<!DOCTYPE html>
-<html>
- <head>
- <title>Series information</title>
- <link rel="stylesheet" type="text/css" href="/style/series.css" />
- </head>
- <body>
<h1>Series information</h1>
<hr>
@foreach(SeriesModel TmpModel in @model.SeriesModels)
{
<div class="series_item">
<a href="/series/@TmpModel.SeriesUrlValue">
<h2>@TmpModel.SeriesTitle</h2>
<img src="/image/@TmpModel.SeriesImage" alt="@TmpModel.SeriesTitle">
</a>
<p>Genre: @TmpModel.SeriesGenre</p>
<p>Rating: @TmpModel.SeriesRating</p>
<p>Year: @TmpModel.SeriesYear</p>
</div>
}
- </body>
-</html>
```
According to the code above, we add the `layout.aspx` page as the Layout of this page, keep only the tags inside the `body` tag, and delete the rest.
The main view after separating Layout
```html
@page
@model {SeriesModelList}
@break
@layout "layout.aspx"
<h1>Series information</h1>
<hr>
@foreach(SeriesModel TmpModel in @model.SeriesModels)
{
    <div class="series_item">
        <a href="/series/@TmpModel.SeriesUrlValue">
            <h2>@TmpModel.SeriesTitle</h2>
            <img src="/image/@TmpModel.SeriesImage" alt="@TmpModel.SeriesTitle">
        </a>
        <p>Genre: @TmpModel.SeriesGenre</p>
        <p>Rating: @TmpModel.SeriesRating</p>
        <p>Year: @TmpModel.SeriesYear</p>
    </div>
}
```
After editing, the `main.aspx` file will be according to the above codes.
Then open the `content.aspx` file located in the `wwwroot/series_page` path and modify it as follows.
Content View (content.aspx)
```diff
@page
@model {SeriesModel}
@break
+@layout "layout.aspx"
-<!DOCTYPE html>
-<html>
- <head>
- <title>@model.SeriesTitle</title>
- <link rel="stylesheet" type="text/css" href="/style/series.css" />
- </head>
- <body>
<div class="series_content">
<h2>Series name: @model.SeriesTitle</h2>
<img src="/image/@model.SeriesImage" alt="@model.SeriesTitle">
<p>Genre: @model.SeriesGenre</p>
<p>Rating: @model.SeriesRating</p>
<p>Year: @model.SeriesYear</p>
<p>About: @model.SeriesAbout</p>
</div>
- </body>
-</html>
```
As you can see, we repeat the previous steps for the `content.aspx` page.
The content view after separating Layout
```html
@page
@model {SeriesModel}
@break
@layout "layout.aspx"
<div class="series_content">
    <h2>Series name: @model.SeriesTitle</h2>
    <img src="/image/@model.SeriesImage" alt="@model.SeriesTitle">
    <p>Genre: @model.SeriesGenre</p>
    <p>Rating: @model.SeriesRating</p>
    <p>Year: @model.SeriesYear</p>
    <p>About: @model.SeriesAbout</p>
</div>
```
After editing, the `content.aspx` file will be according to the codes above.
In the `wwwroot` directory, we create a new View named `layout.aspx` and store the following codes in it.
Layout file (layout.aspx)
```html
@page
@islayout
<!DOCTYPE html>
<html>
    <head>
        <title>@ViewData.GetValue("title")</title>
        <link rel="stylesheet" type="text/css" href="/style/series.css" />
    </head>
    <body>
        @PageReturnValue
    </body>
</html>
```
As you can see, the code above consists of the same tags that we removed from the `main.aspx` and `content.aspx` files.
Run the project (F5 key) and then request the `/series` path. As you can see, despite removing the HTML tags from the views, the result is still the same as before.
Note that we did not set the title. For more practice, you can set `ViewData` in the Controller class as below:
- For main page: `ViewData.SetValue("title", "Series information")`
- For content page: `ViewData.SetValue("title", model.SeriesTitle)`
### Related links
CodeBehind on GitHub:
https://github.com/elanatframework/Code_behind
Get CodeBehind from NuGet:
https://www.nuget.org/packages/CodeBehind/
CodeBehind page:
https://elanat.net/page_content/code_behind | elanatframework |
1,892,896 | [Game of Purpose] Day 31 - collisions | Today I played around with collisions. I learnt that there are 2 types of... | 27,434 | 2024-06-18T22:07:13 | https://dev.to/humberd/game-of-purpose-day-31-collisions-5bi4 | gamedev | Today I played around with collisions. I learnt that there are 2 types of collisions: Overlap and Block. In order to listen for these events, I need to check the following options:

| humberd |
1,892,895 | Exploring Kotlin Coroutines for Asynchronous Programming | Asynchronous programming is essential for modern applications to handle tasks like network calls,... | 0 | 2024-06-18T22:06:27 | https://dev.to/josmel/exploring-kotlin-coroutines-for-asynchronous-programming-2jfp | kotlin | Asynchronous programming is essential for modern applications to handle tasks like network calls, file I/O, or heavy computations without blocking the main thread. Kotlin coroutines provide a robust and efficient way to manage such tasks.
**How Coroutines Work**
Kotlin coroutines enable writing asynchronous code in a sequential style. Key components include:
- CoroutineScope: Defines the scope in which coroutines run, ensuring structured concurrency.
- Suspend Functions: Functions marked with suspend can pause their execution and resume later, allowing non-blocking operations.
- Launch and Async: Functions to start coroutines. launch is used for fire-and-forget tasks, while async is used for tasks that return a result.
**Example: Basic Coroutine**
```
import kotlinx.coroutines.*

fun main() = runBlocking {
launch {
delay(1000L)
println("World!")
}
println("Hello,")
}
```
In this example, launch starts a new coroutine, and delay is a suspend function that pauses the coroutine without blocking the main thread.
**Use Cases and Examples**
**1. Network Requests:**
Using coroutines to handle network requests prevents blocking the UI thread.
```
suspend fun fetchData(): String {
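    // Note: apiService is an assumed HTTP client; withContext moves the blocking call onto the IO dispatcher.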
return withContext(Dispatchers.IO) {
apiService.getData()
}
}
```
**2. Parallel Decomposition:**
Running multiple tasks concurrently to improve performance.
```
runBlocking {
val deferred1 = async { task1() }
val deferred2 = async { task2() }
println("Results: ${deferred1.await()} and ${deferred2.await()}")
}
```
**3. UI Updates:**
Ensuring smooth UI updates by switching contexts.
```
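// Note: GlobalScope is generally discouraged in production code; prefer a lifecycle-aware scope such as viewModelScope.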
GlobalScope.launch(Dispatchers.Main) {
val data = withContext(Dispatchers.IO) { fetchData() }
updateUI(data)
}
```
**4. Handling Timeouts:**
Managing tasks with time constraints.
```
runBlocking {
withTimeout(1000L) {
val result = longRunningTask()
println("Result: $result")
}
}
```
**5. Structured Concurrency:**
Ensuring coroutines are properly scoped and cancelled.
```
fun CoroutineScope.massiveRun(action: suspend () -> Unit) {
val jobs = List(100_000) {
launch {
repeat(1000) { action() }
}
}
jobs.forEach { it.join() }
}
```
**Conclusion**
Kotlin coroutines offer a powerful model for handling asynchronous tasks, making code more readable and maintainable. By utilizing coroutine scopes, suspend functions, and structured concurrency, developers can build efficient and responsive applications. Coroutines are particularly beneficial for tasks such as network requests, parallel computations, UI updates, handling timeouts, and ensuring structured concurrency.
For further reading, explore the official [Kotlin](https://kotlinlang.org/docs/coroutines-guide.html) coroutines guide. | josmel |
1,892,894 | Understanding useEffect: Enhancing Functional Components in React | Introduction React has evolved significantly since its inception, introducing functional components... | 0 | 2024-06-18T22:02:28 | https://dev.to/mohammad_khan_88b17d2f511/understanding-useeffect-enhancing-functional-components-in-react-3iln |
**Introduction**
React has evolved significantly since its inception, introducing functional components that offer a simpler and more powerful way to build UIs without classes. With the introduction of Hooks in React 16.8, functional components have become even more versatile. Among these hooks, useEffect is pivotal for managing side effects in functional components, analogous to lifecycle methods in class components.
**What is useEffect?**
useEffect is a Hook that allows you to perform side effects in your functional components. It serves the same purpose as componentDidMount, componentDidUpdate, and componentWillUnmount in class components but unified into a single API.
**Why use useEffect?**
**Handling Side Effects**
Functional components are primarily suited for UI rendering, but real-world applications require interactions with the outside world—like fetching data, setting up subscriptions, and more. useEffect manages these operations efficiently, ensuring that they are executed at the right time during your component's lifecycle.
**Data Fetching**
One of the most common use cases for useEffect is fetching data from an API. Here's a simple example:
```
import React, { useState, useEffect } from 'react';
function UserProfile({ userId }) {
const [user, setUser] = useState(null);
useEffect(() => {
fetch(`https://api.example.com/users/${userId}`)
.then(response => response.json())
.then(data => setUser(data));
}, [userId]); // Only re-run the effect if userId changes
if (!user) return "Loading...";
return (
<div>
<h1>{user.name}</h1>
<p>{user.bio}</p>
</div>
);
}
```
**Subscriptions and Cleanups**
useEffect also manages subscriptions like WebSocket connections or external data sources. Importantly, it provides a cleanup mechanism to avoid memory leaks:
```
useEffect(() => {
const subscription = dataSource.subscribe();
return () => {
// Clean up the subscription
dataSource.unsubscribe(subscription);
};
}, [dataSource]);
```
**Conditional Execution**
The dependency array at the end of the useEffect call controls when your effect runs. By providing values in this array, you can ensure that the effect only re-runs when those values change.
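For instance, here is a minimal sketch of the three common variants (the component and prop names are illustrative):
```
import React, { useEffect } from 'react';

function Example({ count }) {
  useEffect(() => {
    console.log('runs after every render'); // no dependency array
  });

  useEffect(() => {
    console.log('runs once, after the first render'); // empty array
  }, []);

  useEffect(() => {
    console.log('runs whenever count changes:', count); // [count]
  }, [count]);

  return null;
}
```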
**Common Mistakes and Best Practices**
**Overfetching in Effects**
Be cautious not to trigger an effect too frequently, which can lead to performance issues like overfetching from APIs. This typically happens when the dependency array is not set correctly.
**Infinite Loops**
Updating state inside an effect that also depends on that state can lead to an infinite loop. To avoid this, make sure you understand the dependencies of your effects.
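A minimal sketch of the trap (names are illustrative):
```
import React, { useEffect, useState } from 'react';

function BadCounter() {
  const [count, setCount] = useState(0);

  // setCount changes `count`, and `count` re-triggers the effect: infinite loop
  useEffect(() => {
    setCount(count + 1);
  }, [count]);

  return <p>{count}</p>;
}
```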
**Optimization**
Use multiple useEffect calls to separate unrelated logic. This not only keeps your components clean but also avoids unnecessary re-executions of effects.
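For example, splitting unrelated concerns into two effects (a hypothetical sketch):
```
import React, { useEffect } from 'react';

function Profile({ userId, theme }) {
  // Effect 1: the document title depends only on userId
  useEffect(() => {
    document.title = `User ${userId}`;
  }, [userId]);

  // Effect 2: the styling depends only on theme
  useEffect(() => {
    document.body.className = theme;
  }, [theme]);

  return null;
}
```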
**Conclusion**
useEffect is a powerful tool in the React Hooks API that offers a streamlined way to handle side effects in your functional components. By mastering useEffect, you can improve both the performance and reliability of your React applications.
Experiment with useEffect in your projects and observe how different setups impact the behavior and performance of your applications.
Remember, the best way to learn is by doing, so I encourage you to use these examples as a starting point for your experimentation with React Hooks. | mohammad_khan_88b17d2f511 | |
1,892,893 | Find all palindromes in a string | For this post, we're going to build off 2 of the previous posts in the series. Write a golang... | 27,729 | 2024-06-18T22:02:17 | https://dev.to/johnscode/find-all-palindromes-in-a-string-2m02 | go, interview, programming, career | For this post, we're going to build off 2 of the previous posts in the series.
Write a golang function that finds all palindromes in a string.
I will interpret this to mean 'from the given string, find all strings within it that are palindromes'
In a previous [post](https://dev.to/johnscode/unique-combinations-of-a-string-ekj), we created a function to find all unique strings from a given string.
In the last [post](https://dev.to/johnscode/palindrome-check-a-string-3g4c), we created a function to check if a string is a palindrome.
Using these 2 together, we can find all possible palindromes in a string.
```
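// FindAllPalindromes returns every palindromic string that can be formed from
// the unique combinations of str. It builds on the uniquecombos and
// palindromecheck packages introduced in the two earlier posts.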
func FindAllPalindromes(str string) []string {
allPalindromes := []string{}
uniqueStrings := uniquecombos.FindUniqueCombinations(str)
for _, uniqueString := range uniqueStrings {
if palindromecheck.PalindromeCheck(uniqueString) {
allPalindromes = append(allPalindromes, uniqueString)
}
}
return allPalindromes
}
```
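For a quick sanity check, a minimal driver could look like this (the import path is an assumption based on the series repository; adjust it to your own layout):
```
package main

import (
	"fmt"

	// Assumed package path; change it to match your checkout.
	"github.com/johnscode/gocodingchallenges/findallpalindromes"
)

func main() {
	// The order of the output depends on how the combinations are generated.
	fmt.Println(findallpalindromes.FindAllPalindromes("aba"))
}
```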
It turns out, the unit test has a curveball worth noting here.
The `FindAllPalindromes` function builds the result array in whatever order the palindromes are found. This may or may not be the order of the 'expected' result in the unit test.
For example, the string 'aba' has 4 palindromes: 'a', 'aa', 'aba', and 'b'. However, `FindAllPalindromes` returns 'a', 'aba', 'aa', and 'b'.
We have several options here:
- write a function that compares two arrays without regard to order, ie the 2 arrays have the same elements and length.
- sort both the expected and result arrays, then compare
For simplicity, I chose the second option but have built the expected result of the test cases in presorted form to squeeze a little time off test runs.
```
import (
	"reflect"
	"slices"
	"testing"
)

func TestFindAllPalindromes(t *testing.T) {
testCases := []struct {
input string
expected []string
}{
// note that expected arrays have been presorted for quicker test runs
{"", []string{}},
{"a", []string{"a"}},
{"ab", []string{"a", "b"}},
{"aba", []string{"a", "aa", "aba", "b"}},
{"aab", []string{"a", "aa", "b"}},
{"abcba", []string{"a", "aa", "aba", "abba", "abcba", "aca", "b", "bb", "bcb", "c"}},
}
for _, tc := range testCases {
results := FindAllPalindromes(tc.input)
// sort result to match expected order
slices.Sort(results)
if !reflect.DeepEqual(results, tc.expected) {
t.Errorf("FindAllPalindromes(%q) = %v; expected %v", tc.input, results, tc.expected)
}
}
}
```
How can we make this better?
Post your thoughts in the comments.
Thanks!
_The code for this post and all posts in this series can be found [here](https://github.com/johnscode/gocodingchallenges)_ | johnscode |
1,892,891 | The thought process behind converting System Requirements into Object-Oriented Design | Converting system requirements into an object-oriented design (OOD) involves translating functional... | 0 | 2024-06-18T22:01:59 | https://dev.to/muhammad_salem/a-step-by-step-guide-to-ood-unpacking-the-thought-process-behind-converting-requirements-563e | softwareengineering, oop, softwaredesign | Converting system requirements into an object-oriented design (OOD) involves translating functional descriptions into a structured model of interacting objects. Here’s a step-by-step thought process that outlines how to approach this task:
### Step 1: Identifying Entities and Attributes
#### Thought Process:
1. **Requirement Analysis**: Begin by thoroughly reviewing the system requirements document. Look for nouns and noun phrases, as they often suggest potential objects or entities within the system.
2. **Domain Knowledge**: Use your understanding of the domain to recognize common entities that are typically involved in similar systems. This might include both tangible items (e.g., 'Car', 'User') and intangible concepts (e.g., 'Order', 'Invoice').
3. **Abstraction**: Abstract these nouns to determine whether they represent distinct objects or if they should be generalized. For instance, 'Admin User' and 'Regular User' might both be generalized into a 'User' entity.
#### Key Considerations:
- **Relevance**: Ensure the identified entities are relevant to the system’s goals.
- **Granularity**: Strike a balance between too granular (too many small objects) and too broad (overly generalized objects).
- **Consistency**: Maintain a consistent level of abstraction across all entities.
### Step 2: Defining Attributes and Behaviors
#### Thought Process:
1. **Attribute Identification**: Identify properties of each entity by looking for adjectives or descriptive phrases in the requirements. For example, a 'User' might have attributes like 'username', 'password', and 'email'.
2. **Behavior Identification**: Determine what actions or operations each entity can perform, often indicated by verbs in the requirements. For instance, a 'User' might 'log in', 'register', or 'update profile'.
3. **Encapsulation**: Group related attributes and behaviors together within the same entity. This ensures that an object encapsulates all the necessary data and functionality related to it.
#### Key Considerations:
- **Necessity**: Ensure each attribute and behavior is necessary for fulfilling the requirements.
- **Encapsulation**: Keep attributes private and expose behaviors (methods) to interact with those attributes, as sketched in the code below.
- **Clarity**: Behaviors should be clearly defined and aligned with the entity’s role.
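As a quick illustration of the encapsulation point above (a minimal sketch with hypothetical names):
```csharp
// Minimal encapsulation sketch; `User` and its members are illustrative.
public class User
{
    private string _password; // attribute kept private

    public User(string password) => _password = password;

    // Behavior exposed as a method that guards access to the attribute.
    public bool ChangePassword(string current, string replacement)
    {
        if (_password != current) return false;
        _password = replacement;
        return true;
    }
}
```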
### Step 3: Establishing Relationships
#### Thought Process:
1. **Functional Analysis**: Examine the system functionalities to identify how entities interact. This helps in establishing relationships such as associations, dependencies, and collaborations.
2. **Relationship Types**: Determine the nature of these relationships (see the code sketch after this step's considerations):
- **Association**: A general connection between two entities (e.g., 'User' and 'Order').
- **Aggregation/Composition**: A whole-part relationship where one entity is composed of one or more other entities (e.g., 'Library' contains 'Books').
- **Inheritance**: A hierarchical relationship where one entity is a specialized form of another (e.g., 'AdminUser' inherits from 'User').
#### Key Considerations:
- **Cardinality**: Define the multiplicity of relationships (e.g., one-to-one, one-to-many).
- **Directionality**: Determine if the relationship is bidirectional or unidirectional.
- **Dependence**: Ensure that dependent objects are appropriately managed to avoid orphaned instances or memory leaks.
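A compact sketch of the three relationship kinds in C# (all type names are illustrative):
```csharp
using System.Collections.Generic;

public class Book { }
public class User { }
public class AdminUser : User { } // inheritance: AdminUser is a User

public class Order // association: Order references a User
{
    public User PlacedBy { get; set; }
}

public class Library // composition: Library owns its Books
{
    private readonly List<Book> _books = new List<Book>();
    public void Add(Book book) => _books.Add(book);
}
```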
### Step 4: Refining the Object Model
#### Thought Process:
1. **Validation**: Cross-check the initial object model with the system requirements to ensure all functionalities are covered and the model is coherent.
2. **Feedback**: Gather feedback from stakeholders, including developers, domain experts, and end-users to refine the model.
3. **Iteration**: Iterate over the model to incorporate feedback and address any inconsistencies or gaps.
#### Key Considerations:
- **Redundancy**: Remove redundant entities or attributes.
- **Consistency**: Ensure consistency across the model, particularly in naming conventions and relationships.
- **Complexity**: Avoid unnecessary complexity; keep the model as simple as possible while meeting all requirements.
### Potential Issues and Solutions:
1. **Ambiguities**: Resolve ambiguities in requirements through discussions with stakeholders.
2. **Over-generalization**: Avoid making entities too broad; ensure they are specific enough to handle distinct responsibilities.
3. **Under-specification**: Ensure entities and relationships are sufficiently detailed to support all required functionalities.
4. **Scalability**: Consider future changes and scalability in the design to avoid major refactoring.
By following these steps and considerations, you can systematically convert system requirements into an object-oriented design that accurately represents the problem domain and supports the desired system functionalities.
### Step-by-Step Journey of an OOD Detective: Designing a Parking Lot System
#### Step 1: Identifying Entities and Attributes
**Thought Process:**
To uncover real-world entities, I closely examine the requirements. I look for nouns and key concepts, and ask myself questions such as:
- Who are the main actors?
- What objects are being manipulated or used?
- What are the tangible and intangible items mentioned?
**Entities Identification:**
1. **ParkingLot**: Manages the overall structure and operations.
2. **ParkingFloor**: Represents each level of the parking lot.
3. **ParkingSpot**: Individual parking spaces within floors.
4. **Vehicle**: Represents cars, motorcycles, trucks, etc.
5. **Customer**: Users who park their vehicles.
6. **Ticket**: Issued to customers upon entry.
7. **Payment**: Handles payment transactions.
8. **ParkingAttendant**: Assists customers with payment and other services.
9. **EntrancePanel**: Issues tickets and manages entry.
10. **ExitPanel**: Manages payment and exit processes.
11. **DisplayBoard**: Shows available spots and messages.
12. **Admin**: Manages the system configuration and attendants.
**Attributes:**
For each entity, I determine relevant attributes by considering the properties needed to fulfill the requirements.
- **ParkingLot**: `name`, `location`, `totalCapacity`, `floors`, `entrancePanels`, `exitPanels`, `displayBoards`
- **ParkingFloor**: `floorNumber`, `spots`, `displayBoard`
- **ParkingSpot**: `spotId`, `spotType`, `isOccupied`, `vehicle`
- **Vehicle**: `vehicleId`, `vehicleType`, `licensePlate`
- **Customer**: `customerId`, `name`, `contactInfo`
- **Ticket**: `ticketId`, `issueTime`, `paidTime`, `payment`, `vehicle`
- **Payment**: `paymentId`, `amount`, `method`, `status`
- **ParkingAttendant**: `attendantId`, `name`, `shift`
- **EntrancePanel**: `panelId`, `location`
- **ExitPanel**: `panelId`, `location`
- **DisplayBoard**: `boardId`, `location`, `messages`
- **Admin**: `adminId`, `name`, `permissions`
#### Step 2: Defining Attributes and Behaviors
**Thought Process:**
To define behaviors, I look at the actions described in the requirements and consider what each entity must do to support these actions. I ask:
- What operations does each entity need to perform?
- What interactions are described in the use cases?
**Behaviors:**
- **ParkingLot**:
- `addFloor()`, `removeFloor()`, `addEntrancePanel()`, `addExitPanel()`, `displayAvailability()`
- **ParkingFloor**:
- `addSpot()`, `removeSpot()`, `findAvailableSpot()`, `displayAvailableSpots()`
- **ParkingSpot**:
- `assignVehicle()`, `removeVehicle()`
- **Vehicle**:
- `enterLot()`, `exitLot()`
- **Customer**:
- `takeTicket()`, `payTicket()`
- **Ticket**:
- `calculateFee()`, `markPaid()`
- **Payment**:
- `processPayment()`, `generateReceipt()`
- **ParkingAttendant**:
- `assistCustomer()`, `processCashPayment()`
- **EntrancePanel**:
- `issueTicket()`, `displayMessage()`
- **ExitPanel**:
- `scanTicket()`, `processPayment()`
- **DisplayBoard**:
- `updateMessage()`, `showAvailability()`
- **Admin**:
- `configureSystem()`, `manageAttendants()`
#### Step 3: Establishing Relationships
**Thought Process:**
I consider how entities need to interact to fulfill the system's functionalities. I translate system functionalities into interactions between entities, asking:
- How do entities collaborate to complete tasks?
- What kinds of relationships (association, aggregation, composition, inheritance) are appropriate?
**Relationships:**
- **ParkingLot and ParkingFloor**: Composition (`ParkingLot` contains multiple `ParkingFloor`s).
- **ParkingFloor and ParkingSpot**: Composition (`ParkingFloor` contains multiple `ParkingSpot`s).
- **ParkingSpot and Vehicle**: Association (a `ParkingSpot` can be occupied by a `Vehicle`).
- **Customer and Ticket**: Association (a `Customer` has one `Ticket`).
- **Ticket and Payment**: Association (a `Ticket` can have one `Payment`).
- **ParkingLot and EntrancePanel/ExitPanel**: Aggregation (a `ParkingLot` has multiple `EntrancePanel`s and `ExitPanel`s).
- **ParkingFloor and DisplayBoard**: Aggregation (each `ParkingFloor` has one `DisplayBoard`).
- **ParkingLot and Admin**: Association (an `Admin` configures the `ParkingLot`).
**Inheritance**:
- **Vehicle**: Base class with derived classes `Car`, `Truck`, `Motorcycle`, etc.
- **Payment**: Base class with derived classes `CashPayment` and `CardPayment`.
#### Step 4: Refining the Object Model
**Thought Process:**
I validate the initial model against the requirements, seeking feedback, and iterating to refine the model. I look for:
- Consistency with requirements.
- Redundancy or missing elements.
- Complexity and maintainability.
**Validation and Refinement:**
1. **Review Requirements**: Ensure all use cases are covered.
2. **Feedback**: Get input from stakeholders.
3. **Iteration**: Adjust the model based on feedback.
**Potential Issues and Solutions**:
- **Ambiguities**: Clarify requirements with stakeholders.
- **Over-generalization**: Ensure entities are specific enough for their roles.
- **Under-specification**: Add necessary details to attributes and behaviors.
- **Scalability**: Design the system to handle future expansion (e.g., adding more floors or spot types).
**Final Object Model**:
The final model includes well-defined entities, attributes, behaviors, and relationships, ensuring a robust and scalable design for the parking lot management system. The model is validated against the requirements, ensuring all functionalities are supported efficiently.
Below is an implementation of the Parking Lot system. This implementation covers the key classes and their relationships based on the design. I'll include essential methods and properties to demonstrate how the system can be constructed and used.
### Entities and Relationships
#### 1. **ParkingLot**
```csharp
using System;
using System.Collections.Generic;
public class ParkingLot
{
public string Name { get; set; }
public string Location { get; set; }
public int TotalCapacity { get; set; }
public List<ParkingFloor> Floors { get; set; }
public List<EntrancePanel> EntrancePanels { get; set; }
public List<ExitPanel> ExitPanels { get; set; }
public List<DisplayBoard> DisplayBoards { get; set; }
public ParkingLot(string name, string location, int totalCapacity)
{
Name = name;
Location = location;
TotalCapacity = totalCapacity;
Floors = new List<ParkingFloor>();
EntrancePanels = new List<EntrancePanel>();
ExitPanels = new List<ExitPanel>();
DisplayBoards = new List<DisplayBoard>();
}
public void AddFloor(ParkingFloor floor)
{
Floors.Add(floor);
}
public void RemoveFloor(ParkingFloor floor)
{
Floors.Remove(floor);
}
public void AddEntrancePanel(EntrancePanel panel)
{
EntrancePanels.Add(panel);
}
public void AddExitPanel(ExitPanel panel)
{
ExitPanels.Add(panel);
}
public void DisplayAvailability()
{
foreach (var board in DisplayBoards)
{
board.ShowAvailability();
}
}
}
```
#### 2. **ParkingFloor**
```csharp
using System.Collections.Generic;
public class ParkingFloor
{
public int FloorNumber { get; set; }
public List<ParkingSpot> Spots { get; set; }
public DisplayBoard DisplayBoard { get; set; }
public ParkingFloor(int floorNumber)
{
FloorNumber = floorNumber;
Spots = new List<ParkingSpot>();
}
public void AddSpot(ParkingSpot spot)
{
Spots.Add(spot);
}
public void RemoveSpot(ParkingSpot spot)
{
Spots.Remove(spot);
}
public ParkingSpot FindAvailableSpot(ParkingSpotType spotType)
{
foreach (var spot in Spots)
{
if (spot.SpotType == spotType && !spot.IsOccupied)
{
return spot;
}
}
return null;
}
public void DisplayAvailableSpots()
{
DisplayBoard.ShowAvailability();
}
}
```
#### 3. **ParkingSpot**
```csharp
public class ParkingSpot
{
public string SpotId { get; set; }
public ParkingSpotType SpotType { get; set; }
public bool IsOccupied { get; set; }
public Vehicle Vehicle { get; set; }
public ParkingSpot(string spotId, ParkingSpotType spotType)
{
SpotId = spotId;
SpotType = spotType;
IsOccupied = false;
Vehicle = null;
}
public void AssignVehicle(Vehicle vehicle)
{
Vehicle = vehicle;
IsOccupied = true;
}
public void RemoveVehicle()
{
Vehicle = null;
IsOccupied = false;
}
}
public enum ParkingSpotType
{
Compact,
Large,
Handicapped,
Motorcycle,
Electric
}
```
#### 4. **Vehicle**
```csharp
public abstract class Vehicle
{
public string VehicleId { get; set; }
public string LicensePlate { get; set; }
protected Vehicle(string vehicleId, string licensePlate)
{
VehicleId = vehicleId;
LicensePlate = licensePlate;
}
}
public class Car : Vehicle
{
public Car(string vehicleId, string licensePlate) : base(vehicleId, licensePlate) { }
}
public class Truck : Vehicle
{
public Truck(string vehicleId, string licensePlate) : base(vehicleId, licensePlate) { }
}
public class Motorcycle : Vehicle
{
public Motorcycle(string vehicleId, string licensePlate) : base(vehicleId, licensePlate) { }
}
```
#### 5. **Customer**
```csharp
using System;

public class Customer
{
public string CustomerId { get; set; }
public string Name { get; set; }
public string ContactInfo { get; set; }
public Customer(string customerId, string name, string contactInfo)
{
CustomerId = customerId;
Name = name;
ContactInfo = contactInfo;
}
public Ticket TakeTicket(ParkingLot lot, Vehicle vehicle)
{
var ticket = new Ticket(Guid.NewGuid().ToString(), DateTime.Now, null, null, vehicle);
// Assuming entrance logic here, simplified
return ticket;
}
public void PayTicket(Ticket ticket, Payment payment)
{
ticket.PayTicket(payment);
}
}
```
#### 6. **Ticket**
```csharp
using System;
public class Ticket
{
public string TicketId { get; set; }
public DateTime IssueTime { get; set; }
public DateTime? PaidTime { get; set; }
public Payment Payment { get; set; }
public Vehicle Vehicle { get; set; }
public Ticket(string ticketId, DateTime issueTime, DateTime? paidTime, Payment payment, Vehicle vehicle)
{
TicketId = ticketId;
IssueTime = issueTime;
PaidTime = paidTime;
Payment = payment;
Vehicle = vehicle;
}
public double CalculateFee(DateTime exitTime)
{
var totalHours = (exitTime - IssueTime).TotalHours;
// Fee calculation logic goes here
return totalHours * 3; // Simplified fee calculation
}
public void PayTicket(Payment payment)
{
Payment = payment;
PaidTime = DateTime.Now;
}
public bool IsPaid()
{
return PaidTime.HasValue;
}
}
```
#### 7. **Payment**
```csharp
public abstract class Payment
{
public string PaymentId { get; set; }
public double Amount { get; set; }
public PaymentStatus Status { get; set; }
protected Payment(string paymentId, double amount)
{
PaymentId = paymentId;
Amount = amount;
Status = PaymentStatus.Pending;
}
public abstract void ProcessPayment();
public void MarkAsCompleted()
{
Status = PaymentStatus.Completed;
}
}
public class CashPayment : Payment
{
public CashPayment(string paymentId, double amount) : base(paymentId, amount) { }
public override void ProcessPayment()
{
// Cash payment processing logic
MarkAsCompleted();
}
}
public class CardPayment : Payment
{
public string CardNumber { get; set; }
public CardPayment(string paymentId, double amount, string cardNumber) : base(paymentId, amount)
{
CardNumber = cardNumber;
}
public override void ProcessPayment()
{
// Card payment processing logic
MarkAsCompleted();
}
}
public enum PaymentStatus
{
Pending,
Completed,
Failed
}
```
#### 8. **ParkingAttendant**
```csharp
using System;

public class ParkingAttendant
{
public string AttendantId { get; set; }
public string Name { get; set; }
public string Shift { get; set; }
public ParkingAttendant(string attendantId, string name, string shift)
{
AttendantId = attendantId;
Name = name;
Shift = shift;
}
public void AssistCustomer(Customer customer)
{
// Assist customer logic
}
public void ProcessCashPayment(Ticket ticket, double amount)
{
var payment = new CashPayment(Guid.NewGuid().ToString(), amount);
payment.ProcessPayment();
ticket.PayTicket(payment);
}
}
```
#### 9. **EntrancePanel and ExitPanel**
```csharp
using System;

public class EntrancePanel
{
public string PanelId { get; set; }
public string Location { get; set; }
public EntrancePanel(string panelId, string location)
{
PanelId = panelId;
Location = location;
}
public Ticket IssueTicket(Vehicle vehicle)
{
return new Ticket(Guid.NewGuid().ToString(), DateTime.Now, null, null, vehicle);
}
public void DisplayMessage(string message)
{
Console.WriteLine(message);
}
}
public class ExitPanel
{
public string PanelId { get; set; }
public string Location { get; set; }
public ExitPanel(string panelId, string location)
{
PanelId = panelId;
Location = location;
}
public void ScanTicket(Ticket ticket)
{
if (!ticket.IsPaid())
{
double fee = ticket.CalculateFee(DateTime.Now);
Console.WriteLine($"Payment required: ${fee}");
}
else
{
Console.WriteLine("Ticket already paid.");
}
}
public void ProcessPayment(Ticket ticket, Payment payment)
{
payment.ProcessPayment();
ticket.PayTicket(payment);
}
}
```
#### 10. **DisplayBoard**
```csharp
using System;
using System.Collections.Generic;
public class DisplayBoard
{
public string BoardId { get; set; }
public string Location { get; set; }
public List<string> Messages { get; set; }
public DisplayBoard(string boardId, string location)
{
BoardId = boardId;
Location = location;
Messages = new List<string>();
}
public void UpdateMessage(string message)
{
Messages.Add(message);
ShowAvailability();
}
public void ShowAvailability()
{
foreach (var message in Messages)
{
Console.WriteLine(message);
}
}
}
```
#### 11. **Admin**
```csharp
using System.Collections.Generic;

public class Admin
{
public string AdminId { get; set; }
public string Name { get; set; }
public string Permissions { get; set; }
public Admin(string adminId, string name, string permissions)
{
AdminId = adminId;
Name = name;
Permissions = permissions;
}
public void ConfigureSystem(ParkingLot lot)
{
// Configuration logic
}
public void ManageAttendants(List<ParkingAttendant> attendants)
{
// Management logic
}
}
```
### Putting It All Together
Here’s a small demonstration of how these classes can be used together:
```csharp
using System;

class Program
{
static void Main(string[] args)
{
// Create a parking lot
ParkingLot lot = new ParkingLot("Main Lot", "123 Main St", 500);
// Create and add floors
ParkingFloor floor1 = new ParkingFloor(1);
lot.AddFloor(floor1);
// Create and add parking spots
ParkingSpot spot1 = new ParkingSpot("1A", ParkingSpotType.Compact);
floor1.AddSpot(spot1);
// Create entrance and exit panels
EntrancePanel entrance = new EntrancePanel("Entrance1", "North Entrance");
ExitPanel exit = new ExitPanel("Exit1", "South Exit");
lot.AddEntrancePanel(entrance);
lot.AddExitPanel(exit);
// Create a customer and a vehicle
Customer customer = new Customer("C001", "John Doe", "555-1234");
Vehicle car = new Car("V001", "ABC123");
// Customer takes a ticket
Ticket ticket = entrance.IssueTicket(car);
// Customer pays the ticket at the exit
Payment payment = new CardPayment("P001", ticket.CalculateFee(DateTime.Now), "1234-5678-9012-3456");
exit.ProcessPayment(ticket, payment);
// Check the payment status
if (ticket.IsPaid())
{
Console.WriteLine("Ticket has been paid.");
}
}
}
```
This example demonstrates the creation of a parking lot, adding floors and spots, managing entrance and exit panels, and processing a customer's parking and payment. The design encapsulates the key requirements and interactions for a parking lot management system.
CAVEAT:
The idea of having actor classes like `Customer` orchestrate the logic of other classes (like `ShoppingCart`) is a concept that developers should approach with caution and consider the context of their application. Here's why:
**Benefits in Courses and Books:**
* **Teaching Object-Oriented Principles:** Courses and books often use the actor orchestration approach to illustrate core object-oriented principles like encapsulation, separation of concerns, and message passing. It can be a simplified way to introduce these concepts.
* **Focus on Core Concepts:** By placing logic within actor classes, these resources can focus on core OOD principles without introducing complexities like frameworks (ASP.NET Core Identity) or persistence mechanisms (databases).
**Caveats for Real-World Applications:**
* **Over-engineering for Simple Systems:** For basic e-commerce applications, having a `Customer` class orchestrate shopping cart logic can be overkill. It might introduce unnecessary complexity.
* **Potential for Duplication:** As the application grows, authorization logic placed within the `Customer` class might need to be duplicated across controllers, leading to inconsistencies.
* **Separation of Concerns in Web Applications:** In web applications with frameworks like ASP.NET Core, it's often better to separate concerns using controllers for handling user interactions and authorization, and dedicated services for handling domain logic (like the `ShoppingCartService`).
**When Actor Orchestration Might Be Appropriate:**
* **Complex Systems:** In systems with intricate business rules and interactions between actors, modeling actors with orchestration logic can be more beneficial. It can help manage complex workflows and interactions.
* **Standalone Applications:** For desktop or standalone applications without a layered architecture (like ASP.NET Core MVC with separate controllers and services), placing some logic within actor classes might be reasonable.
**What Junior Developers Should Keep in Mind:**
* **Understand the Trade-offs:** The actor orchestration approach can be a valuable learning tool, but junior developers should understand its limitations and consider the complexity of the application when applying it in practice.
* **Focus on Core Principles:** Grasp the core object-oriented principles demonstrated through the actor orchestration approach. These principles are essential for building well-structured and maintainable applications.
* **Learn Best Practices for Web Frameworks:** As you learn about web development frameworks like ASP.NET Core, understand the recommended patterns for separation of concerns (controllers, services, repositories).
**In Conclusion:**
The idea of actor orchestration is a valuable concept to learn in OOD, but it's important to assess its suitability for your specific application. For web applications with ASP.NET Core, separating concerns using dedicated services is generally a better approach for maintainability and scalability. Keep in mind the trade-offs and learn best practices for the frameworks you'll be using in the real world.
## Orchestrating vs Separated Concerns: Ordering Food with an App
Imagine you're building a mobile app for a food delivery service. Here's how the approach to handling user interaction with the order process can differ:
**Scenario 1: Actor Orchestration (for Learning Purposes)**
* **`Customer` Class:** This class would handle most of the logic. The user would interact with methods like `SearchRestaurants`, `ViewMenu`, and `PlaceOrder`. Inside these methods, the `Customer` class would interact with other classes like `Restaurant` and `Order`.
```java
import java.util.ArrayList;
import java.util.List;

public class Customer {
    public List<Restaurant> searchRestaurants(String cuisine) {
        // Call Restaurant class to find restaurants based on cuisine
        return new ArrayList<>(); // placeholder until the Restaurant lookup is wired in
    }

    public void viewMenu(Restaurant restaurant) {
        // Call Restaurant class to retrieve menu
    }

    public void placeOrder(Restaurant restaurant, List<MenuItem> items) {
        Order order = new Order(this, restaurant, items);
        // Call Order class to process and submit the order
    }
}
```
**Benefits (for learning):**
* **Simpler Example:** This approach can be easier to grasp initially as it keeps the logic centralized in the `Customer` class.
* **Focus on OOD Principles:** It highlights concepts like encapsulation (data hidden within classes) and message passing (methods calling each other).
**Drawbacks (for real-world app):**
* **Over-engineering for Simple App:** In a basic app, this can lead to a cluttered `Customer` class.
* **Duplication Risk:** As the app grows, authorization logic might need to be placed in the `Customer` class and potentially duplicated elsewhere.
**Scenario 2: Separated Concerns (Recommended for Real-World App)**
* **Dedicated Service Classes:** Separate classes like `OrderService` and `RestaurantService` handle specific functionalities. The `CustomerActivityController` in your mobile app would handle user interactions and call these services.
```java
import java.util.List;

import org.springframework.beans.factory.annotation.Autowired;

public class CustomerActivityController {
@Autowired
private OrderService orderService;
@Autowired
private RestaurantService restaurantService;
public List<Restaurant> searchRestaurants(String cuisine) {
return restaurantService.findRestaurants(cuisine);
}
public void viewMenu(int restaurantId) {
Restaurant restaurant = restaurantService.getRestaurant(restaurantId);
// Display menu retrieved from restaurant object
}
public void placeOrder(int restaurantId, List<MenuItem> items) {
orderService.createOrder(restaurantId, items, getLoggedInUserId()); // getLoggedInUserId(): assumed session helper
}
}
```
**Benefits:**
* **Separation of Concerns:** Clearer division of responsibilities (controllers for user interaction, services for domain logic).
* **Maintainability:** Easier to modify or extend functionalities within dedicated services.
* **Scalability:** The code can be adapted for more complex features or integrations with other services.
**Conclusion:**
While the actor orchestration approach can be a helpful learning tool, separating concerns with dedicated services is generally a better practice for building maintainable and scalable web applications.
This example highlights the trade-offs: the first approach is simpler to understand initially, but the second approach leads to a cleaner and more scalable application structure in the long run. | muhammad_salem |
1,854,422 | Dev: AR/VR | An Augmented Reality/Virtual Reality (AR/VR) Developer is a specialized software engineer responsible... | 27,373 | 2024-06-18T22:00:00 | https://dev.to/r4nd3l/dev-arvr-5fa4 | vr, developer | An **Augmented Reality/Virtual Reality (AR/VR) Developer** is a specialized software engineer responsible for designing, developing, and implementing immersive experiences using augmented reality (AR) and virtual reality (VR) technologies. Here's a detailed description of the role:
1. **Understanding of AR/VR Technologies:**
- AR/VR Developers possess a deep understanding of augmented reality and virtual reality concepts, technologies, and platforms.
- They are familiar with AR/VR hardware devices, such as headsets, goggles, and controllers, as well as software development kits (SDKs), frameworks, and tools for building AR/VR applications.
2. **Programming Skills:**
- AR/VR Developers are proficient in programming languages commonly used for AR/VR development, such as C#, C++, UnityScript, JavaScript, and Python.
- They have experience with game engines like Unity3D, Unreal Engine, and WebXR frameworks for developing immersive AR/VR experiences across different platforms and devices.
3. **User Experience (UX) Design:**
- AR/VR Developers collaborate with UX/UI designers to create intuitive and engaging user interfaces for AR/VR applications.
- They focus on optimizing user interactions, navigation, spatial awareness, and 3D user interface (UI) elements to enhance the overall user experience and immersion in virtual environments.
4. **3D Modeling and Animation:**
- AR/VR Developers work with 3D artists and animators to create and integrate 3D models, assets, textures, and animations into AR/VR experiences.
- They optimize 3D assets for performance, scalability, and realism, ensuring smooth rendering and animation in real-time interactive environments.
5. **Spatial Computing and Tracking:**
- AR/VR Developers implement spatial computing techniques, including positional tracking, motion tracking, gesture recognition, and spatial mapping, to enable accurate object placement and interaction in augmented and virtual environments.
- They leverage sensor data from devices like cameras, accelerometers, gyroscopes, and depth sensors to track users' movements and gestures in physical space.
6. **Multi-platform Development:**
- AR/VR Developers develop cross-platform AR/VR applications compatible with various devices, including standalone VR headsets, smartphones, tablets, augmented reality glasses, and mixed reality devices.
- They optimize applications for different hardware configurations, screen sizes, input methods, and performance constraints to deliver consistent experiences across platforms.
7. **Integration of AR/VR Features:**
- AR/VR Developers integrate advanced features and functionalities into AR/VR applications, such as spatial audio, haptic feedback, physics simulations, real-time multiplayer interactions, and AI-driven behaviors.
- They leverage APIs, plugins, and libraries to add AR/VR capabilities like object recognition, environmental understanding, hand tracking, eye tracking, and voice commands.
8. **Performance Optimization:**
- AR/VR Developers optimize the performance and rendering efficiency of AR/VR applications to maintain stable frame rates, reduce latency, and minimize motion sickness.
- They implement techniques like occlusion culling, level of detail (LOD) rendering, asynchronous timewarp, and dynamic resolution scaling to improve performance on resource-constrained devices.
9. **Testing and Quality Assurance:**
- AR/VR Developers conduct thorough testing and quality assurance (QA) of AR/VR applications to identify bugs, glitches, and usability issues.
- They perform device compatibility testing, user acceptance testing, and user experience testing to ensure that AR/VR experiences meet quality standards and deliver immersive and enjoyable experiences to users.
10. **Continuous Learning and Innovation:**
- AR/VR Developers stay updated with the latest trends, innovations, and developments in the field of augmented reality and virtual reality.
- They participate in AR/VR communities, forums, conferences, and workshops, collaborate with peers, and experiment with new technologies and techniques to push the boundaries of AR/VR development.
In summary, an AR/VR Developer plays a vital role in creating immersive, interactive, and realistic augmented reality and virtual reality experiences for a wide range of applications, including gaming, education, training, simulation, healthcare, retail, marketing, and entertainment. By leveraging their technical expertise, creativity, and passion for innovation, they contribute to shaping the future of human-computer interaction and digital experiences in the AR/VR space. | r4nd3l |
1,891,172 | Using RAG to Build Your IDE Agents | In the post-GPT revolution era, many of us developers have started using LLM-enabled tools in our... | 0 | 2024-06-18T21:59:22 | https://medium.com/welltested-ai/using-rag-to-build-your-ide-agents-e1ed652fa3b2 | rag, commanddash, pandasai, vscode | In the post-GPT revolution era, many of us developers have started using LLM-enabled tools in our development workflows. Nowadays, you can complete new and complex development tasks in a short span with the help of theses LLM tools when used correctly.
That is, until you start using them for anything involving new APIs or SDKs, or the latest versions of existing ones. This is where these tools fall short.
## Fixing the Shortcomings with RAG
At CommandDash (formerly Welltested), our team has been working on code generation. Like other organizations in this field, we recognized these challenges and have actively developed solutions.
Therefore, with the Dash Agent Framework, we set out to build a robust RAG system that tackles these issues from the inception stage itself.
## What is RAG?
Retrieval-Augmented Generation (RAG) is a powerful technique that combines the capabilities of LLMs with relevant references to enrich the responses. These relevant references are typically derived from external knowledge sources like document databases and more.

RAG significantly enhances the capabilities of LLMs, especially when working with new packages and frameworks. By accessing up-to-date information from documentation, code examples, and other sources, RAG-based LLMs can:
- **Provide accurate and contextual responses**: Instead of relying solely on pre-trained data, LLMs can access the latest documentation and code examples to provide accurate and relevant information.
- **Adapt to evolving technologies**: As APIs and SDKs evolve, RAG can keep pace by constantly updating its knowledge base from official sources.
In this blog, we will build a powerful IDE agent for [PandasAI](https://pandas-ai.com/) using Dash Agent. Later on, we'll see how using RAG can significantly improve LLM responses.
## Building PandasAI Agent
PandasAI is a Python platform that makes it easy to ask questions about your data in natural language. It integrates generative artificial intelligence capabilities into pandas, allowing you to extract insights effortlessly.

Now that you're familiar with PandasAI, let's start our journey to build our own PandasAI agent.
### Prerequisite Steps
**1. Install Dart**
Dash Agent is built on the Dart language. If you haven't already, follow the official Flutter installation instructions [here](https://flutter.dev/docs/get-started/install).
**2. Install dash_cli**
Now, install the dash_cli command-line tool, which lets you create and publish your agents on the CommandDash marketplace. Open your terminal and run the following command:
```shell
dart pub global activate dash_cli
```
### Create PandasAI Project
Next, you will create the pandas_ai project. This is the place where you will define your agent configurations. Run the following command in the terminal:
```shell
dash_cli create pandas_ai
```
This will create a dash agent project containing the template code for agent building. Then, open the project in your preferred IDE where the [flutter extension](https://docs.flutter.dev/get-started/install/macos/mobile-ios?tab=download#text-editor-or-integrated-development-environment) is installed.
### Adding Agent Data Sources
The core of a RAG-based agent lies in its knowledge base, known as data sources. These sources provide the agent with context and information to understand and respond to user requests.
For our PandasAI agent, we will gather data from the following sources:
- **Official PandasAI Documentation**: https://docs.pandas-ai.com, https://pandasai-docs.readthedocs.io/en/latest
- **Official Examples and Issues shared by PandasAI team**: https://github.com/sinaptik-ai/pandas-ai
- **Other Open Source Examples**: [CSV Chatbot](https://github.com/kBrutal/CSV_ChatBot), [GroqMultiCSVChatPandasAI](https://github.com/InsightEdge01/GroqMultiCSVChatPandasAI), [MutipleCSVChatllama3Pandasai](https://github.com/InsightEdge01/MutipleCSVChatllama3Pandasai), [PandasAI-Tutorials](https://github.com/TirendazAcademy/PandasAI-Tutorials)
Navigate to the **lib/data_sources** file in your project and replace the existing code with:
```dart
import 'package:dash_agent/data/datasource.dart';
import 'package:dash_agent/data/filters/filter.dart';
import 'package:dash_agent/data/objects/file_data_object.dart';
import 'package:dash_agent/data/objects/project_data_object.dart';
import 'package:dash_agent/data/objects/web_data_object.dart';
// Indexes all the documentation related data
class DocsDataSource extends DataSource {
@override
List<FileDataObject> get fileObjects => [];
@override
List<ProjectDataObject> get projectObjects => [];
@override
List<WebDataObject> get webObjects => [
WebDataObject.fromSiteMap('https://docs.pandas-ai.com/sitemap.xml'),
WebDataObject.fromSiteMap(
'https://www.xml-sitemaps.com/download/pandasai-docs.readthedocs.io-a2835e7d4/sitemap.xml?view=1'),
];
}
// Indexes all the example code and issues related data
class ExampleDataSource extends DataSource {
final accessToken = 'your_personal_github_access_token';
@override
List<FileDataObject> get fileObjects => [];
@override
List<ProjectDataObject> get projectObjects => [];
@override
List<WebDataObject> get webObjects => [
WebDataObject.fromGithub(
'https://github.com/sinaptik-ai/pandas-ai', accessToken,
codeFilter: CodeFilter(pathRegex: r'^examples\/.*')),
WebDataObject.fromGithub(
'https://github.com/ismailtachafine/PandasAI-CSV-Analysis',
accessToken,
codeFilter: CodeFilter(pathRegex: r'.*\.py$')),
WebDataObject.fromGithub(
'https://github.com/kBrutal/CSV_ChatBot', accessToken,
codeFilter: CodeFilter(pathRegex: r'.*\.py$')),
WebDataObject.fromGithub(
'https://github.com/InsightEdge01/GroqMultiCSVChatPandasAI',
accessToken,
codeFilter: CodeFilter(pathRegex: r'.*\.py$')),
WebDataObject.fromGithub(
'https://github.com/InsightEdge01/MutipleCSVChatllama3Pandasai',
accessToken,
codeFilter: CodeFilter(pathRegex: r'.*\.py$')),
WebDataObject.fromGithub(
'https://github.com/TirendazAcademy/PandasAI-Tutorials',
accessToken,
codeFilter: CodeFilter(pathRegex: r'.*\.py$')),
];
}
```
The code above shares the references to the sources that need to be indexed, covering both the documentation and the examples. Apart from the source links, you have also provided an `accessToken` and a `codeFilter`:
- `accessToken`: During processing, the CommandDash server indexes data for `WebDataObject.fromGithub` via GitHub's official API. To fetch the data from the GitHub API efficiently, a personal GitHub token is required; it can be generated easily from the [tokens](https://github.com/settings/tokens) page.
- `CodeFilter`: This filter enables the framework to selectively index code files based on the regex provided. It is optional.
> **Note**: Your Personal Access Token is very sensitive data. Please make sure not to share it with anyone or push it to any public source.
You can learn more about `WebDataObject` and associated properties in detail at CommandDash [documentation](https://www.commanddash.io/docs/capabilities-datasources).
### Adding Agent System Prompt and Metadata
Next, you'll add the system prompt and agent metadata to the `AgentConfiguration` class. Navigate to the **lib/agent.dart** file and replace the existing code with:
```dart
import 'package:dash_agent/configuration/metadata.dart';
import 'package:dash_agent/data/datasource.dart';
import 'package:dash_agent/configuration/command.dart';
import 'package:dash_agent/configuration/dash_agent.dart';
import 'data_sources.dart';
class PandasAI extends AgentConfiguration {
final docsDataSource = DocsDataSource();
final exampleDataSource = ExampleDataSource();
// Add the metadata information about PandasAI agent
@override
Metadata get metadata => Metadata(
name: 'Pandas AI',
avatarProfile: 'assets/logo.jpeg',
tags: ['LLM Framework', 'Data Analysis']);
// Add the systemPrompt for dash agent's commandless mode (also know as chat mode).
// System prompt is a key component for conversational-style agents. As it provides
// the initial context and guidance regarding the agent's purpose and functionality to the LLM.
@override
String get registerSystemPrompt =>
'''You are a Pandas AI assistant inside user's IDE. PandasAI is a Python library that makes it easy to ask questions to your data in natural language.
You will be provided with latest docs and examples relevant to user questions and you have to help them achieve their desired results. Output code and quote links and say I don't know when the docs don't cover the user's query.''';
// Add the data sources that needs to indexed for RAG purposes.
@override
List<DataSource> get registerDataSources =>
[docsDataSource, exampleDataSource];
@override
List<Command> get registerSupportedCommands => [];
}
```
The code above basically glues everything together for the PandasAI agent: the data sources, metadata, system prompt, commands, etc. that are needed to build the dash agent.
For more details on `AgentConfiguration`, please read the [dash_agent framework](https://pub.dev/packages/dash_agent) documentation.
Finally, head to **bin/main.dart** file and replace the existing code with:
```dart
import 'package:dash_agent/dash_agent.dart';
import 'package:pandas_ai/agent.dart';
/// Entry point used by the [dash-cli] to extract your agent
/// configuration during publishing.
Future<void> main() async {
await processAgent(PandasAI());
}
```
That's it. Your agent is now configured and ready to be used. Next, you'll publish it so that it can be tested and shared with other devs as well.
## Publishing the PandasAI Agent
You need to be logged in to dash_cli using GitHub auth to publish your agent. Run the following command in the terminal to log in:
```shell
dash_cli login
```
Finally, run the following command in the terminal from the root folder of your pandas_ai project to publish the agent:
```shell
dash_cli publish
```
This will validate the configuration and, if all looks good, schedule your agent for publication. Once your agent is ready to be used, you will get an email confirming the successful publication, and PandasAI will be visible in the CommandDash Marketplace:

## What's Next
Congratulations! 🎉 Now you know how to create powerful agents using the Dash Agent framework. These agents leverage the power of RAG and LLMs. We're excited to see the innovative agents you'll build with [Dash Agent](https://www.commanddash.io/docs/introduction).
Also, don't forget to try out the PandasAI agent, which is currently live on the CommandDash extension for VS Code. Check it out [here](https://marketplace.visualstudio.com/items?itemName=WelltestedAI.fluttergpt).
In our upcoming blog, we will see how well the PandasAI agent performs. Stay tuned! | yogesh009 |
1,892,888 | .NET 8 💥 - Intro to Kubernetes for .NET Devs | All .NET developers will eventually need to deploy their code, and one effective method is through... | 0 | 2024-06-18T21:56:33 | https://dev.to/moe23/net-8-intro-to-kubernetes-for-net-devs-1omm | dotnet, kubernetes, containers, docker | All .NET developers will eventually need to deploy their code, and one effective method is through Kubernetes.
In this video, I will provide a comprehensive explanation of Kubernetes tailored specifically for .NET developers. I will cover the fundamentals of Kubernetes, how it operates, and demonstrate how .NET web applications can seamlessly integrate within a Kubernetes infrastructure. This guide aims to demystify Kubernetes and empower .NET developers with the knowledge to effectively leverage this powerful deployment tool.
{% embed https://youtu.be/6QHOdAiA2tA %}
You can follow me
📹 Youtube - http://youtube.com/@mohamadlawand
♯ Github - https://github.com/mohamadlawand087
🎫 LinkedIn - https://www.linkedin.com/in/mlawand
💥 LinkTree - https://linktr.ee/mohamadlawand | moe23 |
1,888,348 | Lesser-known HTML tags #1 | Let's dive into HTML lands HTML is vast, and it's easy to overlook most of its tags, even... | 0 | 2024-06-18T21:50:41 | https://dev.to/blobbybobby/lesser-known-html-tags-1-1697 | webdev, html, css, beginners | ## Let's dive into HTML lands
HTML is vast, and it's easy to overlook most of its tags, even the cool ones. That's why I decided to dive into HTML and share my findings. Here’s a first selection of HTML tags you might not know but would want to test for your next projects.
---
# `<details>` & `<summary>`
### For the collapsible content
You know those accordion sections often seen in Q&A sections on websites? Well, with plain HTML, you can wrap your question in `<summary>` and the question-answer block in `<details>`.
Introduced with HTML5, `<summary>` allows toggling the content in `<details>`.
```html
<details>
<summary>How to pronounce Wingardium Leviosa ?</summary>
<p>
It's Le-VIO-sa. Not Levio-SAAAA !
</p>
</details>
```
Below an example :
{% codepen https://codepen.io/blobby-bobby-the-bashful/pen/PovEmWP %}
Using common HTML attributes (like open), you can add animations or a fancier layout with CSS, so you actually don't need JavaScript or an external library for these fun accordions 😉.
Here's an example of how to animate `<details>` and `<summary>` with CSS:
```css
/* Replacing the arrow with a custom icon displayed at the end of <summary> */
summary::after {
content: '';
width: 18px;
height: 18px;
background: url('https://cdn-icons-png.flaticon.com/512/32/32339.png');
background-size: cover;
margin-left: .75em;
transition: 0.2s;
}
/* Then we animate this icon on click on details */
details[open] > summary::after {
transform: rotate(45deg);
}
```
And here's the result :
{% codepen https://codepen.io/blobby-bobby-the-bashful/pen/RwmgEdG %}
---
# `<ins>` & `<del>`
### For the text edits annotations
Have you seen Git commits where added code is highlighted in green and deleted code in red? Need to mark up text changes? Introduced with HTML4, `<ins>` and `<del>` serve this purpose: they let you track edits within a document by marking the ranges of text that have been inserted or removed.
There are two ways of using `<ins>` and `<del>`
**Case 1 - inner text edit**
```html
<p>“It's Le-<ins>viooooo</ins>-sa<del>aaaa</del>!”</p>
```
**Case 2 - Like those good old commits**
```html
<p>“Wingardium Leviosa”</p>
<ins cite="../howtobeawizard.html" datetime="2024-05">
<p>“It’s Wing-<b>GAR</b>-dium Levi-<b>O</b>-sa”</p>
</ins>
<del>
<p>“Not Levio-<b>SAAAAAAA</b>”</p>
</del>
```
As shown in the example above, `<ins>` and `<del>` come with the attributes `cite` and `datetime`, so you can provide the date and source as additional information.
{% codepen https://codepen.io/blobby-bobby-the-bashful/pen/dyERQZj %}
---
# `<address>`
### For your contact informations
`<address>` is designed to define the contact information for a person or an organization, typically the author of the site or a specific post.
Therefore, the `<address>` tag can include an email address, URL, physical address, phone number, or social media handle, and it is displayed in italics by default. The most appropriate places to use this tag are within the footer of an `<article>` or of the `<body>` in a web page.
Here’s an example of how to use the `<address>` tag:
```html
<p>Contact information</p>
<address>
<a href="mailto:just@curious.com">just@curious.com</a><br />
<a href="tel:+33666666666">06 66 66 66 66</a>
</address>
```
And below a playground with this tag:
{% codepen https://codepen.io/blobby-bobby-the-bashful/pen/bGyROJY %}
---
And that's it for this first exploration ^^.
Many other cool but sadly ignored HTML tags are waiting to be discovered 👀. I’ll share more of them later.
See you soon 👋!
| blobbybobby |
1,892,887 | var, let, const in JavaScript - summary of differences between them | Summary of differences between var, let, and const in JavaScript | 0 | 2024-06-18T21:49:26 | https://dev.to/artem/var-let-const-in-javascript-summary-of-differences-between-them-m88 | javascript | ---
title: var, let, const in JavaScript - summary of differences between them
published: true
description: Summary of differences between var, let, and const in JavaScript
tags: #javascript
# cover_image: https://miro.medium.com/v2/resize:fit:1100/format:webp/1*X3uZSHdqhsjt56lBFwWfvw.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-18 21:07 +0000
---
Summary of Differences.
If you, like me, are tired of this interview question.
### Scope:
**var**: Function or global scope.
**let**: Block scope.
**const**: Block scope.
### Hoisting:
**var**: Hoisted and initialized with undefined.
**let**: Hoisted but not initialized (temporal dead zone).
**const**: Hoisted but not initialized (temporal dead zone).
### Re-declaration:
**var**: Can be re-declared within the same scope.
**let**: Cannot be re-declared within the same scope.
**const**: Cannot be re-declared within the same scope.
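A quick sketch to make the re-declaration rules concrete:
```js
var a = 1;
var a = 2; // OK: var tolerates re-declaration in the same scope
console.log(a); // 2

let b = 1;
// let b = 2; // SyntaxError: Identifier 'b' has already been declared

const c = 1;
// const c = 2; // SyntaxError: Identifier 'c' has already been declared
```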
### Immutability:
**var** and **let**: Mutable references.
**const**: Immutable reference (the reference cannot change, but the value can if it's an object).
## Examples
### var
```js
function varExample() {
console.log(x); // undefined (due to hoisting)
var x = 10;
if (true) {
var x = 20; // Same variable, function-scoped
console.log(x); // 20
}
console.log(x); // 20 (same variable as above)
}
varExample();
```
### let
```js
function letExample() {
// console.log(y); // ReferenceError (temporal dead zone)
let y = 10;
if (true) {
let y = 20; // Different variable, block-scoped
console.log(y); // 20
}
console.log(y); // 10 (original variable)
}
letExample();
```
### const
```js
function constExample() {
// console.log(z); // ReferenceError (temporal dead zone)
const z = 10;
if (true) {
const z = 20; // Different variable, block-scoped
console.log(z); // 20
}
console.log(z); // 10 (original variable)
const obj = { name: "Alice" };
obj.name = "Bob"; // Allowed (the object itself is mutable)
console.log(obj.name); // Bob
// obj = { name: "Charlie" }; // TypeError (can't reassign const)
}
constExample();
``` | artem |
1,892,885 | What is a Byte? | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-18T21:42:31 | https://dev.to/ganatrajay2000/what-is-a-byte-dnk | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
<!-- Explain a computer science concept in 256 characters or less. -->
In binary, 8 bits make up a byte, with each bit being 0 or 1, creating 256 unique values ranging from 0 to 255. Historically, a byte has been used to encode a single character of text, making it the smallest addressable unit of memory storage in computers.
## Additional Context
One Byte Explainer: Just like 256 values in a byte, this explanation fits perfectly within its size limit—byte-sized knowledge! 😉😆
Credits:
<!-- Please share any additional context you think the judges should take into consideration as it relates to your One Byte Explainer. -->
1. https://256stuff.com/256.html#:~:text=In%20any%20case%2C%20256%20is,base%20unit%20in%20a%20computer.
2. https://en.wikipedia.org/wiki/Byte
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
Photo by <a href="https://unsplash.com/@samsungmemory?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Samsung Memory</a> on <a href="https://unsplash.com/photos/person-holding-white-and-black-labeled-card-nk6o1WQ4j6k?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a>
<!-- Thanks for participating! --> | ganatrajay2000 |
1,892,681 | Python Basics 4: Input Function | Input Function: For taking input from the user directly, Python has a special built-in function,... | 0 | 2024-06-18T21:36:36 | https://dev.to/coderanger08/python-basics-4-input-function-5e94 | python, programming, beginners, tutorial | **_Input Function:_**
For taking input directly from the user, Python has a special built-in function, `input()`. It takes a string as an optional prompt argument, used to display information or a message to the user about the expected input.
Syntax: `input("A prompt for the user")`
Suppose you're building a program where you need to know the name of the user. You can use the input function to do that.
`name=input("What's your name?")`
Inside `input()`, we write the prompt, which will be displayed in the terminal to the user. The user can then type their input.
`input()` pauses program execution to allow the user to type in a line of input from the keyboard. After typing, the user needs to press the ENTER key; otherwise, the program will wait for the user's input indefinitely.
Once the user presses the Enter key, all characters typed are read, stored in the variable (name), and returned as a **string**.
(Remember this; we'll talk about type conversion shortly!)
Now, let's suppose you also want to ask the user's age. As usual, you wrote your program like this:
`age=input("Enter your age:")`
If your program is executed like this, at first glance, you won't see any problems with the code. However, the problem will arise when you want to do calculations with this age.
```
age=input("Enter your age:")
print(age+10)
#TypeError: can only concatenate str (not "int") to str
```
Why did this error occur? So technically, your age variable never stored an integer in the first place.
`print(type(age)) #output: <class 'str'>`
As you can see, your age variable has stored the input as a string, and you can't do calculations between a string and an integer.
To solve the problem, you need to convert your string variable into an integer, which is called **Type Conversion or Casting**.
**How can you do that?**
First, write down the data type that you want to convert to. Then, wrap your input function call with it to take input from the user.
`age=int(input("Enter your age"))`
This way, the string returned by `input()` is converted into an integer before being stored in your age variable, and now you can do calculations.
More examples: `float()` to convert into a float, `str()` to convert into a string, etc.
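A couple of quick sketches of those casts (the prompts are just examples):
```
height = float(input("Enter your height in meters: "))  # "1.75" -> 1.75
print(height + 0.05)

age = 25
print("You are " + str(age) + " years old")  # int -> str for concatenation
```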
This type of type conversion is known as **Explicit Casting**, which is done by us, the programmers.
**Implicit Casting:** This is done by Python, which automatically converts between data types.
Ex:
```
x = 10          # integer
x += 10.5
print(x)        # 20.5 (float)
print(type(x))  # <class 'float'>
```
Here, Python automatically changed the integer into a float.
The input function is one of the most used functions in Python. You can make your program more user-oriented by using the `input()` function.
| coderanger08 |
1,892,883 | EXCEPTION HANDLING FOR SHADY PEOPLE WITH SHADY CODE | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-18T21:32:34 | https://dev.to/kerkg/exception-handling-for-shady-people-with-shady-code-40pl | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Exception handling is catching errors before the app crashes. For example, when you guess your shady code will throw an error, you can make a simple handler to catch and do something or intentionally throw an error because you are as shady as your code.
## Additional Context
@kerkg
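A minimal sketch of the idea (the function and input here are made up for illustration):
```js
function parseConfig(raw) {
  try {
    return JSON.parse(raw); // shady input may throw here
  } catch (err) {
    console.error('Bad config:', err.message); // handle it instead of crashing
    return {}; // or intentionally re-throw, if you are feeling shady
  }
}
```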
| kerkg |
1,892,882 | A Thank You ❤️ | ✨ An army of 10,000 of you have hit the follow button next to my name.✨ That's far and away the... | 0 | 2024-06-18T21:29:49 | https://dev.to/magnificode/a-thank-you-3od1 | community, watercooler | ✨ An army of 10,000 of you have hit the follow button next to my name.✨
That's far and away the largest number of people that have knowingly hit a button to engage with me in the 25+ years that I've existed on the internet.
That's a ludicrously large number to me, and it felt like a milestone worth celebrating. 🤘
With that, I just wanted to throw out a whole hearted thank you to everyone here. I've been writing on Dev.to off and on for just over 5 years now and it's been nothing but welcoming, and filled with enjoyable interactions with folks in the community.
Community is hard to find, but when you do find it, nurture it and enjoy it.
Thank you all very much! | magnificode |
1,892,880 | One Byte Explainer | Hashing transforms input data into a fixed-size string of characters, which acts as a unique... | 0 | 2024-06-18T21:19:35 | https://dev.to/danny_monyela_df495ca7abc/one-byte-explainer-2kn4 | devchallenge, cschallenge, computerscience, beginners | Hashing transforms input data into a fixed-size string of characters, which acts as a unique identifier. It's essential for data retrieval, ensuring quick access in databases, and is fundamental in cryptography for securing information. | danny_monyela_df495ca7abc |
1,892,879 | Conflict-Free Replicated Data Types (CRDTs) in Frontend Development | Data consistency in distributed systems is a challenging task in today's advanced development... | 27,671 | 2024-06-18T21:09:24 | https://dev.to/syedmuhammadaliraza/conflict-free-replicated-data-types-crdts-in-frontend-development-4nh3 | frontend, devto, community, webdev | Data consistency in distributed systems is a challenging task in today's advanced development environment. As web applications increasingly adopt real-time collaboration characteristics, the need for reliable data synchronization mechanisms is critical. The conflict-free replica data type (CRDT) appears as a promising solution, enabling seamless and conflict-free data replication between multiple clients. This article explores the nature, types, and applications of CRDT in external development.

#### What is CRDT?
CRDTs, or conflict-free replicated data types, are data structures designed to manage the complexity of distributed systems by providing eventual consistency without requiring coordination between nodes. Unlike traditional approaches that rely on centralized servers or complex conflict-resolution protocols, CRDTs allow each replica to apply updates independently and resolve conflicts automatically based on predefined rules.

The main principles behind CRDTs are:
1. **Idempotence**: Applying the same operation several times has the same effect as applying it once.
2. **Commutativity**: Operations can be applied in any order and still produce the same result.
3. **Associativity**: How operations are grouped does not affect the final result.
These principles make it possible to maintain consistency across distributed nodes, even in the presence of network partitions or asynchronous updates.
#### Types of CRDTs
CRDTs can be broadly divided into two types: state-based (convergent) and operation-based (commutative).
1. **State-based CRDTs (CvRDTs)**:
In state-based CRDTs, each replica maintains a local state that can be merged with the states of other replicas. The merge function is designed to be commutative, associative, and idempotent. An example of a state-based CRDT is a grow-only counter (G-Counter), sketched below, where per-replica counts from different copies are merged by taking their maxima.
2. **Operation-based CRDTs (CmRDTs)**:
Here, replicas broadcast individual operations, each designed so that the order of delivery does not affect the final state. A common example is an add-wins set (OR-Set), where concurrent additions and removals of the same element are resolved in favour of the addition.
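As a minimal sketch of the state-based flavour, here is a toy grow-only counter in plain JavaScript (the class and names are illustrative, not from any particular library):
```javascript
// Each replica increments only its own slot; merging takes the element-wise
// maximum, which is commutative, associative, and idempotent.
class GCounter {
  constructor(replicaId) {
    this.replicaId = replicaId;
    this.counts = {}; // replicaId -> count
  }
  increment() {
    this.counts[this.replicaId] = (this.counts[this.replicaId] || 0) + 1;
  }
  value() {
    return Object.values(this.counts).reduce((a, b) => a + b, 0);
  }
  merge(other) {
    for (const [id, n] of Object.entries(other.counts)) {
      this.counts[id] = Math.max(this.counts[id] || 0, n);
    }
  }
}

// Two replicas converge regardless of merge order:
const a = new GCounter('a');
const b = new GCounter('b');
a.increment(); a.increment();
b.increment();
a.merge(b); b.merge(a);
console.log(a.value(), b.value()); // 3 3
```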
#### Use of CRDTs in front-end development
CRDTs have found important applications in front-end development, especially in real-time collaborative applications, offline-first applications, and decentralized applications.
1. **Real-time collaboration apps**:
Apps like Google Docs and Figma rely on real-time collaboration, where multiple users can edit the same document at the same time. By using CRDTs, these applications ensure that concurrent edits from all users merge without conflicts, even across temporary disconnections.
2. **Offline-first application**:
In scenarios where users may experience intermittent connectivity, an offline-first approach becomes crucial. CRDTs allow these applications to continue running offline, letting users make local changes that are automatically synced once connectivity is restored.
3. **Decentralized Applications**:
With the proliferation of the decentralized web (Web3) and peer-to-peer applications, it is difficult to maintain data consistency without a centralized authority. CRDTs provide a way to achieve convergence and data consistency across distributed nodes, making them ideal for decentralized applications.
#### Implementation of CRDTs in front-end development
Applying CRDTs in front-end development involves choosing the appropriate CRDT type based on application requirements and integrating it with the front end. Libraries and frameworks such as Automerge, Yjs, and Logux provide ready-to-use CRDT implementations, simplifying the development process.
For example, using Automerge in React for collaborative text editing involves:
1. **Initialization**:
Set up the initial document state with `Automerge.init()`.
2. **Updates and Sync**:
Apply changes and sync them with other clients using the Automerge API.
3. **Rendering**:
Render the document state in the React component.
```javascript
import React, { useState, useEffect, useRef } from 'react';
import * as Automerge from 'automerge';

// NOTE: a sketch, not a drop-in implementation. Automerge does not ship a
// transport, so the WebSocket wiring below assumes a relay server of your
// own. (The original draft imported y-websocket, but that is a Yjs provider
// and does not understand Automerge's change format.)
const CollaborativeEditor = () => {
  const [doc, setDoc] = useState(() => Automerge.init());
  const socketRef = useRef(null);

  useEffect(() => {
    const socket = new WebSocket('wss://your-websocket-server');
    socket.binaryType = 'arraybuffer';
    socketRef.current = socket;
    socket.onmessage = (event) => {
      // Assumes each message carries one binary Automerge change.
      const change = new Uint8Array(event.data);
      setDoc((current) => {
        const [next] = Automerge.applyChanges(current, [change]); // Automerge 1.x
        return next;
      });
    };
    return () => socket.close();
  }, []);

  const handleChange = (e) => {
    const newText = e.target.value;
    const next = Automerge.change(doc, (d) => {
      d.text = newText;
    });
    // Broadcast only the latest local change to the other peers.
    const change = Automerge.getLastLocalChange(next);
    if (change && socketRef.current && socketRef.current.readyState === WebSocket.OPEN) {
      socketRef.current.send(change);
    }
    setDoc(next);
  };

  return <textarea value={doc.text || ''} onChange={handleChange} />;
};

export default CollaborativeEditor;
```
| syedmuhammadaliraza |
1,892,675 | How to get hired in Dubai? | A lot of people have the misconception that it is easy to apply for a job in Dubai or the United Arab... | 0 | 2024-06-18T16:51:00 | https://dev.to/vadimk_77/how-to-get-hired-in-dubai-4c5h | A lot of people have the misconception that it is easy to apply for a job in Dubai or the United Arab Emirates.
Dubai is one of the most competitive Job markets in the world, as millions of people apply from all over the world. Based on the data from [JobXDubai.com](https://jobxdubai.com), candidates from India, Pakistan and Europe make up the most volume.
Popular positions such as Accountant, Sales & Marketing, and HR easily receive over 500 applicants within 24 hours. When a recruiter receives an overwhelming number of resumes, he or she is unable to process them all. From the feedback we have received, recruiters tend to open at most the first 100 resumes and try to pick someone from that list.
*Video: How to apply in Dubai*
Recruiters work on performance-based compensation, meaning they receive a commission when they fill a vacancy. Their motivation is to find the right candidate as soon as possible and receive the commission for their work. If they went through all 500 resumes, they would most likely be able to identify the perfect candidate for the position. At the moment, nobody does it, because it consumes too much time to look through each resume.
Assuming that a recruiter spends approximately 1 minute on a single CV, he would need to spend at least 8 hours non-stop to look through 500.
So how do you stand out in such a competitive environment?
On average, a candidate needs to apply 200 times before being invited to an interview and eventually getting hired. Since your resume is the only piece of paper that actually markets your skills and yourself, you need to have a winning CV. If you want to learn how to write a winning CV, you can read our article.
Considering you have already followed our guide and created a winning CV, your next step is to set a strategy for where you are going to apply.
## 1. Job Boards
A job board is a website where employers post job openings and job seekers can browse and apply for positions. The top job boards in the UAE are the following:
- Bayt.com
- Gulftalent.com
- JobXDubai.com
- NaukriGulf.com
- LinkedIn
Please note that, as a candidate, you should never pay a fee when applying for a job. Platforms such as JobXDubai.com offer various premium services to help candidates stand out and improve their chances of getting hired in Dubai.
For example, we offer a professional CV service, or you can check your resume with AI and receive feedback on how to improve it.
Some job boards are more specialised than others. All of the above post jobs across MENA, whereas JobXDubai only focuses on Dubai and the UAE.
The process of applying for a job on these platforms is relatively straightforward: you register free of charge, upload your CV, add some personal information, and apply.
### Strategy: Apply on Job Boards such as JobXDubai.com
We would suggest starting with the platforms mentioned above and seeing how you perform. As the job market is very competitive, we suggest applying daily, for at least 200 positions in total. This strategy should yield results and get you at least shortlisted.
Here is a breakdown of the strategy:
- Register on all of the above platforms
- Set up email alerts for your particular position
- Speed is key, so make sure you are ready to be the first to apply when a position opens up
- Be consistent: apply daily and do not give up
- Do not expect to receive any feedback from the companies/recruiters, as most of them receive a very high number of applicants and cannot provide individual feedback.
If there is no feedback within 2 weeks, then most likely you have not been shortlisted and will not advance to an interview. Recruiters might send you an automated email saying something like “we are not moving forward with your application”.
This is not a reflection on your skills or experience; there could be several reasons why the application did not go forward. Do not think about it too much, just keep going and apply daily.
If you get to the point where you have applied for over 200 positions and still have not received a single piece of feedback, maybe you should revisit your CV. Here is a comprehensive guide on how you can improve it.
## 2. Independent Recruitment Agencies
In Dubai, a lot of smaller to mid-size companies tend to outsource their talent search to independent recruitment agencies. There are some well-established agencies, and there are unregulated recruiters, whom you should avoid at all costs.
Some shady recruitment agencies contact candidates and ask for an upfront fee in exchange for a guaranteed job. In all of those cases, these are scams designed to extract money from candidates, and there is no guaranteed job waiting. Please avoid them and do not pay.
### Strategy: Approach Recruiters
Another strategy for getting hired in Dubai is to approach recruiters directly. In this article, we have provided the Top 30 reputable recruiters in Dubai and the UAE.
How to approach Recruiters:
**Step 1**
- You can start by visiting their official website to see if they have open vacancies that fit your criteria
- Submit your CV to recruitment agencies even if there are no open positions
- If you contact recruiters via email, do not send an automated message. Draft the email with a more personal approach.
An example of an email could be:
“Dear Accel team, I hope my email finds you well. I am a chartered Accountant with over 10 years of experience at a Big 4 company. I believe your company specialises in this field, and I would like you to have a look at my resume and see if there is a position I would be perfect for.
Currently in Dubai on a Visit Visa, I am available for interviews. I appreciate your time and effort, and I am looking forward to working together.”
**Step 2**
Most recruiters use LinkedIn as their main social network. Find the recruitment companies from the list, visit their pages, start following them, and set up alerts for new posts.
A lot of companies tend to post their open vacancies on their LinkedIn pages, so make sure you will receive an email, when a new job post goes live.
Furthermore, open the LinkedIn page of any of these recruitment agencies and click on the “People” tab. There you will see all the employees working for that particular agency.
Research all the agents from the company and start approaching them. If the option to connect is available, send out an invite.
Most recruiters have a public page with a “Message” button. Use the message button to get in touch with the recruiter.
As you can see from the picture above, this particular agent has over 17,000 followers on LinkedIn. This is an indication that a lot of candidates are messaging him with their resumes.
If you decide to send a message, make it very personal. Do not use any automated or ChatGPT-generated text. Be polite and friendly. You can write something like this:
“Hey Tommy, hope you are well. As you are Senior Technology Consultant, I would like to share my Resume with you. I have seen that you have [open position title] and I believe my expertise would fit perfectly.
I am currently in Dubai and would be available for an interview”
Do not expect a response here, as you can imagine that a lot of candidates follow the same strategy. In most cases, the recruiter will look at your CV and if he has an open position that he is trying to fill and your skills align with that position, he will invite you to a short interview.
However, if you do not receive a reply, do not panic – try to follow up with another message in a couple of days, reminding him to have a look at your Resume. Be polite, be patient – these are the keys to landing a job in Dubai.
### List of reputable and well-established recruitment agencies
HR Consultancy Firms in Dubai
1. Accel Human Resource Consultants
Services: HR solutions and recruitment in various sectors.
Address: UAE OFFICE 108, First Floor, Al Nasr Plaza, Oud Metha, Dubai, UAE.
Phone: +9714 396 96 00
Email: jobs@accelhrc.com, info@accelhrc.com
Website: Accel HR Consulting
2. Adecco Middle East
Services: World’s largest HR provider and temporary staffing firm.
Address: Marina Plaza, Dubai Marina Road, Office 1206, Dubai, UAE.
Phone: +971 (0) 4 368 7900
Email: adeccoae.info@adecco.com
Website: Adecco
3. Agile Consultants
Services: Customized services to clients and candidates in the UAE & GCC.
Address: Jumeirah Lakes Towers, Dubai, UAE.
Phone: +971 4 8769064
Email: info@agileconsultants.ae
Website: Agile Consultants
4. Alliance Recruitment Agency
Services: International recruiting, staffing, and HR services provider.
Address: PO Box No: 336899 Dubai, UAE
Website: Alliance Recruitment Agency
5. ANOC
Services: Recruiting and Manpower Consultancy.
Address: 14 Boulevard Plaza, Tower 1 Downtown Dubai, United Arab Emirates
Email: info@anoc.ae, jobs@anoc.ae
Website: ANOC
6. BAC Middle East
Services: Professional recruitment consultancy.
Address: 919 Liberty House Dubai International Financial Centre, Dubai,
Phone: +9714 – 4398500
Email: recruit@bacme.com
Website: BAC Middle East
7. Budge Talent
Services: Recruitment and HR Consultancy in Construction, Real Estate, and Asset Management Sectors.
Address: Office 208C, Diamond Business Center 1 Arjan, Dubai, United Arab Emirates
Phone: +971 (0)4 374 8101
Email: hello@budgetalent.com
Website: Budge Talent
8. Charterhouse
Services: Represents job opportunities across various sectors.
Address: P.O. Box 75972 Suite 903 Maze Tower, Sheikh Zayed Road, Dubai, U.A.E
Phone: +971 4 372 3500
Email: info@charterhouse.ae
Website: Charterhouse
9. Cooper Fitch
Services: Recruitment, executive search, and HR advisory services.
Address: 13th Floor Tiffany Tower Jumeirah Lakes Towers PO Box 118468, Dubai, United Arab Emirates
Phone: +971 4 352 2506
Email: talktous@cooperfitch.ae
Website: Cooper Fitch
10. Candor Human Resources Consultancy
Services: Global employers-oriented manpower recruitment agency.
Address: Suite # 1449, Damas Tower (Burj 2000), Sharjah, UAE
Phone: +971 6 563 9142, +971 6 563 9149
Email: info@candorintl.com
Website: Candor HR Consultancy
11. Dawaam
Services: Headhunting services and staffing solutions.
Address: 2nd Floor, Office 203 Nassima Tower, Trade Center 1, Sheikh Zayed Road, P.O. Box 49260 Dubai, United Arab Emirates
Phone: +971 (4) 353 4774
Email: info@dawaam.net
Website: Dawaam
12. Engage Selection
Services: Personalized recruitment service.
Address: PO Box 390380 Dubai, United Arab Emirates
Phone: +971 4 447 8955
Email: dubai@engageselection.com
Website: Engage Selection
13. First Select Employment
Services: Professional outsourcing and workforce solutions.
Address 01: Dubai, UAE, Al Manara Building. P.O.Box 32634
Phone: +971 4 380 7491
Address 02: Abu Dhabi, UAE, Al Jazera Sports Tower A. P.O.Box 6314
Phone: +971 2 4451130
Website: First Select Employment
14. Greenland UAE
Services: Bridging the gap between employers and job seekers.
Address: Level 30 – The H Hotel Office Tower One Sheikh Zayed Road – Dubai
Phone: +971 4 405 8198
Email: admin@greenlanduae.com
Website: Greenland UAE
15. Gulf Recruitment Group
Services: Locally owned recruitment agency.
Address: One Business Bay, Omniyat Tower, Business Bay, P.O Box 300179, Dubai
Phone: +971 (0)4 210 9000
Website: Gulf Recruitment Group
16. Guildhall
Services: Executive HR consulting and recruitment.
Address: Dubai Logistics City – United Arab Emirates
Phone: +971 48874527
Email: asia@guildhall.agency
Website: Guildhall
17. Hays
Services: Global recruitment expert.
Address 01: Dubai & Saudi Arabia, Hays FZ LLC, Block No 19, Knowledge Village
Phone: +971 4 559 5800
Address 02: Hays FZ LLC, Guardian Tower 4th Floor (Technip Building), Abu Dhabi
Phone: +971 4 559 5800
Website: Hays
18. HR Source
Services: Multi-dimensional recruitment and HR consultancy.
Address: Fortune Executive Tower Office 2001 (20th floor) T Cluster, JLT Dubai
Phone: +971(0)4 441 2289, +971(0)4 447 4707
Email: info@hrsource.ae
Website: HR Source
19. Inspire Selection
Services: Boutique recruitment consultancy.
Address: Level 14, Boulevard Plaza Tower 1, Emaar Boulevard, Downtown Dubai
Phone: +97143680852
Email: info@inspireselection.com
Website: Inspire Selection
20. Mackenzie Jones
Services: Recruitment consultancy.
Address: Jumeirah Business Centre 3, (JBC 3) Cluster “Y,” Jumeirah Lake Towers
Phone: +971 (0) 4 457 1700
Email: info@mackenziejones.com
Website: Mackenzie Jones
21. ManpowerGroup Middle East
Services: Global workforce solutions provider.
Address 01: Regional Head Office, Dubai Internet City, Building 1, Office 204
Telephone: +971 43910460
Address 02: DIFC, Dubai International Financial Centre (DIFC) Precinct Building 2, Level 3, Office 306 PO Box 26359, United Arab Emirates
Website: ManpowerGroup Middle East
22. Meethaq Employment Agency
Services: HR services and employee outsourcing provider.
Address: Unit 113, The Offices 2 One Central, World Trade Center, P.O. Box: 33663, Dubai, United Arab Emirates
Phone: +971 4 208 3000, +971 4 297 3333
Email: info@meethaq.ae
Website: Meethaq Employment Agency
23. Mindfield Resources
Services: Specialist recruitment consultancy.
Address: Level 14, Boulevard Plaza Tower 1, Emaar Boulevard, Downtown Dubai, P.O. Box 334155, Dubai
Phone: +971 43695011
Email: info@mindfieldresources.com
Website: Mindfield Resources
24. NADIA Global Group
Services: Recruitment, training, and HR services.
Address: 904, Grosvenor Business Tower, Barsha Heights, Dubai, UAE
Phone: +971 4 331 3401
Email: info@nadiaglobal.com
Website: NADIA Global Group
25. Michael Page
Services: Specialized recruitment consultancy.
Address 01: Building 9, Level 6, Dubai Media City, P.O. Box 502440, Dubai, UAE
Phone: +971 4 709 0300
Address 02: Level 9, Office 903, Liberty House, Dubai International Financial Centre
Phone: +971 4 709 0310
Website: Michael Page
26. People Perfect Advisory
Services: Human resource solutions provider.
Address: Al Shafar Tower, Level 14, Tecom, Dubai, UAE
Phone: +971 4 565 6999
Email: info@peopleperfectae.com
Website: People Perfect Advisory
27. RTC-1 Employment Services
Services: Professional recruitment solutions.
Address: Office # 601, The Fairmont Hotel, SZR, Dubai, UAE
Phone: +971 4 394 9993
Email: info@rtc-1.com
Website: RTC-1 Employment Services
28. RecruitMe
Services: HR solutions and recruitment services.
Address: Dubai, United Arab Emirates
Phone: +971 4 375 8989
Email: info@recruit-me.com
Website: RecruitMe
29. Robert Half
Services: Specialized staffing and consulting services.
Address: Office 1703, Level 17, Boulevard Plaza Tower 1, Downtown Dubai
Phone: +971 4 354 25 44
Email: info@roberthalf.ae
Website: Robert Half
30. SOS HR Solutions
Services: Global HR outsourcing and consultancy services.
Address: Office No. 804, Dusseldorf Business Point, Al Barsha 1, Dubai, UAE
Phone: +971 4 396 6000
Email: info@soshrsolutions.com
Website: SOS HR Solutions
## 3. Large Companies
Large companies usually have a large HR and recruitment department and do not outsource the recruitment process to outside agencies. They usually have a careers portal on their main website.
Careers websites are similar to a job board, where they list all the available positions and candidates can apply after registering.
Please note that larger companies receive a high volume of applicants and use ATS – applicant tracking systems, to identify the right candidates.
If you are not familiar with ATS systems, we have provided an in-depth article on Applicant Tracking Systems here.
### Strategy: Careers Websites
The process is relatively straightforward but very time-consuming. You will need to go to each company's website, register your profile (which can take 15-20 minutes), and then fill out lengthy forms, which might take another 40 minutes.
As these companies use ATS systems, they require as much information from you as possible, since many CVs will not be parsed correctly. These ATS systems do not yet use artificial intelligence; they are outdated and have a high error rate.
A lot of large companies in Dubai have hundreds or thousands of open vacancies at all times. Due to their size and employee turnover, they are always hiring new people.
This strategy can prove to be successful, but it is very time consuming.
### Top Companies to Apply To
The following 100 businesses were chosen as the greatest workplaces because they prioritised serving both their employees’ needs and those of their customers.
We would suggest going through each of these companies, identifying their careers website, and applying.
#### Large Category
McDonald’s UAE
DHL Express
Metropolitan Group
BFL Group
Leminar
InterContinental Group Hotels & Resorts
Hilton
FIVE
Burjeel Holdings
Ajmal Perfumes Group
Globelink West Star Shipping L.L.C
Chalhoub Group – UAE
Teleperformance
Millennium Hotels & Resorts – MEA
Arada Developments LLC
SANIPEX GROUP
Apparel Group
AD Ports Group
Al-Dabbagh Group
Talabat – UAE
Bayut Dubizzle
Massar Solutions PJSC
Jumeirah Hotels & Resorts
G42
BEEAH
Provis Real Estate & Property Management
AtkinsRéalis – UAE
NAKHEEL
Redington Gulf FZE
Al Khayyat Investments
Aldar Education
PureHealth
Saudi German Hospital
Miral LLC
Gulf Coca-Cola Beverages
#### Medium Category
Century Financial
Aura SKYPOOL
Majid Al Futtaim – Accor Properties
Cisco Systems, Inc
Eros Group
BANKE INTERNATIONAL PROPERTIES
PIZZAEXPRESS UAE
PetroGas
Emirates District Cooling (EMICOOL) LLC
MVP Tech – A Convergint Company
Nissan Middle East FZE
Sunset Hospitality Group
Edenred
A&A Associate LLC
AstraZeneca
STS Group
Medtronic United Arab Emirates
PSA BDP (BDP Global Project Logistic LLC)
ABBVIE BIOPHARMACEUTICALS GMBH
NOVARTIS
Media One Hotel
L’Oréal Middle East
Gastronomica Middle East General Trading LLC
Telecommunications and Digital Government Regulatory Authority
Visiontech Systems International LLC
Khidmah
GYMNATION
Alpha Nero FZ LLC
Messe Frankfurt Middle East
Canadian Medical Center
Hilti UAE
Publicis Sapient
Cigna Healthcare
Property Finder
Wyndham Dubai Deira
#### Small Category
Mood Rooftop Lounge
GREEN BOX CONTAINERS L.L.C
shift
LUXE PORT
KnowBe4
Mott 32
Amgen Gulf – United Arab Emirates
EATON BUSINESS SCHOOL LLC
The Box Self Storage
Goldfish Sushi & Yakitori
Lola Taberna Española
WAHL Middle East & Africa
KALLEGRA INDUSTRIES LLC
RAISE Fitness & Wellness
Charterhouse Middle East
VOLTA SHIPPING SERVICES L.L.C
HOUSE OF SHIPPING MANAGEMENT CONSULTANCIES L.L.C
SEO Sherpa
Cogent Solutions Event Management
Blue Ocean Corporation
International Diplomatic Supplies
Faith Group
Howden Guardian Insurance Brokers L.L.C.
Smart Salem Medical Center
Sahara Motors
Glander International Bunkering
Stryker United Arab Emirates
Marc Ellis
Joe’s Backyard YOUR BBQ HANGOUT
TishTash Communications
## In Conclusion
Dubai is a highly competitive job market, and after following this guide you should be well equipped and well informed on how to apply.
Make sure your CV stands out, set up job alerts across all the platforms such as JobXDubai.com, or follow our Telegram Group, where we post all the vacancies.
Start following all the recruiters on LinkedIn and approach individual agents via direct messages. Be persistent and do not give up. Do not expect to receive any feedback; just apply consistently.
If you would like us to help you write a winning CV, please have a look here. | vadimk_77 |
1,892,877 | Project Stage-2 Implementation Part-3 | Welcome back, everyone! In the previous part of this series, we delved into the complexities of... | 0 | 2024-06-18T21:00:41 | https://dev.to/yuktimulani/project-stage-2-implementation-part-3-2fhk | gcc | Welcome back, everyone! In the previous part of this series, we delved into the complexities of Function Multi-Versioning (FMV) and started exploring the depths of GCC's inner workings. After painstakingly investigating files like `tree.h, tree.cc, and tree-inline.h`, we honed in on the elusive function responsible for FMV: `expand_target_clones in multiple_target.cc`.
## Unpacking the Errors: Invalid Target Attributes
As I ventured deeper into the code, modifying the expand_target_clones function to automatically create function clones for various target architectures, I encountered a persistent error that quickly turned my excitement into frustration:
```
error: pragma or attribute ‘target("sve-bf16")’ is not valid
```
Despite my best efforts to resolve this error, it remained stubbornly present. This error indicates that the specified target attribute isn't recognized or supported by my current GCC setup.
## Diagnosing the Error: Why It Happened
This error message typically arises when:
1. Unsupported Target Architecture: The specified target attribute is not supported by the GCC version being used.
2. Incorrect Attribute Syntax: The attribute might be formatted or spelled incorrectly, causing GCC to reject it.
## Steps Taken to Resolve the Error
To tackle this issue, I followed a structured approach to diagnose and attempt to resolve the problem
### 1. Verify Supported Target Attributes
First, I needed to determine which target architectures were actually supported by my GCC version (11.3.1). I used the command below, which is intended to list all valid target architectures:
```
gcc -E -march=help -xe /dev/null
```
However, this command resulted in an error:
```
gcc: error: unrecognized command line option ‘-march=help’
```
The command failed outright, so I needed another way to find out which targets my GCC version (11.3.1) actually supports.
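For reference, GCC 11 does ship flags that list target-specific options; these are worth trying instead of `-march=help`:
```
# Print target-specific options and their current settings
gcc -Q --help=target
# Print target-specific command-line help
gcc --target-help
```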
### 2. Cross-Check GCC Documentation
I reviewed the official GCC documentation for version 11.3.1, which provides comprehensive information on supported target architectures. This step was crucial in understanding which attributes were valid for the version I was using.
### 3. Update Hardcoded Attributes
With the correct list of supported targets, I revised my hardcoded attributes to include only those that were valid. Instead of `sve-bf16`, I opted for more widely supported attributes like sve and sve2:
```
const int num_attrs = 2;
char attrs2[num_attrs][5] = {"sve", "sve2"};
```
Despite this change, the error persisted.
## Further Investigations and Continued Errors
Even after updating the attributes, the error did not disappear. This was a significant setback, but it also provided an opportunity to deepen my understanding of the issue.
### 1. Examine CPU Features
I examined the /proc/cpuinfo file to understand the supported features of my CPU. This information helped confirm that the hardware supported the attributes I was attempting to use.
```
cat /proc/cpuinfo
```
However, this did not shed light on why the compiler continued to reject the target attributes.
### 2. Investigate GCC Configuration
I reviewed the GCC configuration to ensure that it was built with support for the architectures I was targeting. Ensuring that the compiler itself was correctly configured was essential, but it did not resolve the error.
### 3. Seek Insights from Community Resources
I turned to forums and mailing lists where other developers had discussed similar issues. While there were many insights, none directly addressed the persistent error I was encountering.
## Key Takeaways and Next Steps
This part of the project was filled with challenges, highlighting the complexities of compiler development. Here are the key takeaways from this experience:
- **Persistent Errors Are Learning Opportunities**: While frustrating, persistent errors offer a chance to deepen your understanding and refine your approach.
- **Documentation and Community Insights Are Valuable**: Reviewing documentation and seeking community insights can provide guidance, even if they don't always offer immediate solutions.
## Next Steps: A Plan for Resolving the Error
Given the persistent error, here are the steps I plan to take next:
### 1. Verify GCC Build and Configuration
I will verify that my GCC build includes the necessary support for the target attributes. This involves checking the configuration and potentially rebuilding GCC with the required features.
### 2. Test with Different GCC Versions
Testing with different GCC versions may help identify if the issue is specific to version 11.3.1. This could provide insights into whether a version upgrade or a different build is necessary.
### 3. Explore Alternative Architectures
I will explore using alternative target architectures that are widely supported and verified to work with my setup. This may involve testing with simpler or more generic attributes initially.
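As a quick sanity check of the general mechanism, here is a hedged sketch of the existing user-facing FMV attribute on x86 Linux, where `target_clones` has long been supported (the function and values are mine, not from the GCC patch):
```
#include <stdio.h>

/* GCC emits one clone per listed target plus an ifunc resolver
   that picks the best clone when the program starts. */
__attribute__((target_clones("default", "avx2")))
int sum(const int *a, int n) {
  int acc = 0;
  for (int i = 0; i < n; i++)
    acc += a[i];
  return acc;
}

int main(void) {
  int v[4] = {1, 2, 3, 4};
  printf("%d\n", sum(v, 4)); /* prints 10 */
  return 0;
}
```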
### 4. Debug with Detailed Logging
I will enable detailed logging and diagnostics in GCC to better understand why the error occurs. This may involve using flags or modifying the GCC code to provide more information during compilation.
## Wrapping It Up: Lessons Learned and Moving Forward
Despite the persistent error, this project has been a valuable learning experience. It has highlighted the importance of persistence, attention to detail, and the value of community and documentation in tackling complex issues.
As I continue to work through these challenges, I am optimistic that the next steps will provide the insights needed to resolve the error and successfully implement the automatic FMV cloning feature.
Thank you for joining me on this journey. Stay tuned for more updates as I continue to unravel the mysteries of GCC and FMV. Happy coding! 🚀 | yuktimulani |
1,889,897 | Case Study - TDD in Node.js Inspector Server and Other Projects | Overview Test Driven Development (TDD) is a software development methodology where tests... | 27,155 | 2024-06-18T21:00:00 | https://uchenml.tech/test-driven/ | productivity, testing, softwaredevelopment | ## Overview
Test Driven Development (TDD) is a software development methodology where tests are written before the actual code. The progress of implementation is then guided by the status of these tests.
There is often confusion between the terms "automated testing," "unit testing," and "TDD." To clarify:
- **Automated Testing** refers to any testing performed by specialized software without requiring manual intervention. This includes various types of testing, depending on the scope (unit/integration) or the metrics being evaluated (correctness, security, load, benchmarking).
- **Unit Testing** is a subset of automated testing that focuses on the smallest, independent logical units of code. These tests can be created at any stage of development, whether before or after the code is written.
- **Test Driven Development (TDD)** is a practice where tests are designed and implemented before writing the actual code. While these tests are typically automated, they can also be manual in some cases. TDD can be applied at any level of granularity.
## Node.js Inspector Server
### Problem Statement
The goal was to transition Node.js to utilize a new V8 debugging API and expose a WebSocket endpoint compatible with the Chrome DevTools protocol. This required ensuring a smooth ecosystem transition and providing tools vendors with a clear migration path.
### Challenges
- The implementation needed to reside in the core Node.js binary, adhering to strict performance and security requirements.
- The low-level C++ code had to run on all platforms supported by Node.js.
- Rebuilding the Node.js binary is a time-consuming process that can significantly impact developer productivity.
- I was initially unfamiliar with `libuv` and the internals of Node.js.
### Approach
The initial focus was on creating a WebSocket server in C++ to run outside the V8 engine on a separate thread. This design ensured that the server would continue running even when V8 was paused at a breakpoint, and it also minimized the impact on profiling data of the user code.
To avoid a full rebuild of the Node.js binary during development, the server implementation was initially contained within the test code. As the codebase evolved, it was split into multiple source files and gradually integrated into the core Node.js code.
The current C++ test suite includes:
- [test_inspector_socket_server.cc](https://github.com/nodejs/node/blob/main/test/cctest/test_inspector_socket_server.cc): Tests the server, including socket listening, HTTP protocol support, and potential error states.
- [test_inspector_socket.cc](https://github.com/nodejs/node/blob/main/test/cctest/test_inspector_socket.cc): WebSocket protocol tests with a focus on edge cases and error handling.
One interesting aspect of `libuv` was that it allowed the tests to be single-threaded, greatly simplifying the implementation of the test suite. These tests were a fun coding challenge and proved crucial for catching hard-to-reproduce bugs and regressions, especially those caused by differences in `libuv` behavior across platforms.
Once the server was stable and inspector integration began, tests were written in JavaScript using the WebSocket protocol. These tests were not strictly "unit tests," as V8 inspector already had significant testing coverage in the core V8, and duplicating it would have increased maintenance without adding much value.
Later, a JavaScript API was introduced by community demand, making it even easier to write tests in JavaScript, particularly to cover Node-specific protocol extensions such as [tracing](https://github.com/nodejs/node/blob/main/test/parallel/test-inspector-tracing-domain.js) or [workers](https://github.com/nodejs/node/blob/main/test/parallel/test-worker-debug.js).
### Highlights
The transition to the new protocol was completed ahead of schedule, allowing the legacy protocol to be deprecated and removed altogether. The integration underwent several deep reworks without disrupting the ecosystem, including the addition of support for worker threads. In all cases, new test cases were added to ensure stability.
A significant flakiness in Inspector tests prompted a deep refactor ([PR](https://github.com/nodejs/node/pull/21182)), improving the performance and stability of the entire DevTools protocol.
At least one [test case](https://github.com/nodejs/node/pull/25455) was added to justify keeping code in the native C++ part after contributors proposed moving it to JavaScript.
The community identified several potential security vulnerabilities, leading to the addition of tests to prevent regressions.
## Partner API Endpoint
### Problem Statement
The task was to implement a REST API endpoint according to the specifications provided by a partner company. Their software would query this endpoint to obtain information from our systems, streamlining the customer experience.
### Challenges
- The specification required a large amount of the data points, raising concerns about whether we had all the required information or if it was in the expected format.
- There were uncertainties about whether the requested access complied with our security and privacy policies.
- The necessary information had to be sourced from multiple internal systems, and it was unclear how readily available this data was.
### Approach
The service code implementing the API was divided into multiple layers and engineered into several components:
- **Response Packaging**: A component to format the response according to the partner’s specifications.
- **Data Aggregation and Sanitization**: An internal component to aggregate data and ensure it was sanitized (e.g., converting internal codes to the partner’s specifications, normalizing addresses).
- **Data Source Connectors**: Independent components to connect to each internal data source.
- **Request Processing and Validation**: A separate component to handle request validation and processing.
The first test involved directly calling the endpoint implementation with a mock request and checking the response. The initial implementation returned a hardcoded response, which was then gradually enhanced with more logic. E.g. a code that returns a hardcoded customer address would be replaced with a component that retrieved the address from the customer service. Unit tests were created for each component, focusing on mocking dependencies to verify logic, validation, and error propagation. For example, unit tests for the customer service connector mocked the network layer to directly check requests sent to the customer service, and mock responses were used to validate the connector’s logic, both in a happy path and in error scenarios.
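For illustration only (the class, route, and field names below are hypothetical, not the actual service), a connector test in this style looks roughly like:
```js
// Hypothetical connector: the network layer (httpClient) is injected,
// so a test can replace it with a mock and never touch a real service.
class CustomerConnector {
  constructor(httpClient) { this.http = httpClient; }
  async fetchCustomer(id) {
    const res = await this.http.get(`/customers/${id}`);
    return { address: res.body.addr }; // map internal fields to our shape
  }
}

test('requests the customer record and maps the address', async () => {
  const httpClient = {
    get: jest.fn().mockResolvedValue({ status: 200, body: { addr: '1 Main St' } }),
  };
  const connector = new CustomerConnector(httpClient);

  const customer = await connector.fetchCustomer('c-42');

  expect(httpClient.get).toHaveBeenCalledWith('/customers/c-42');
  expect(customer.address).toBe('1 Main St');
});
```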
### Highlights
- The project codebase was split into clear maintainable components, enabling parallel development, including discussions with the teams responsible for each data source.
- Significant discussions with stakeholders (e.g., service developers, data owners, security, and privacy teams) were necessary, and we were able to start these discussions sooner, which reduced the risk of delays.
- Testing provided plenty of examples that were really useful in communication. For example, when discussing the data format with the partner, we could provide examples of the data we were sending, which helped clarify the requirements.
The project was delivered on time and promptly accepted by the partner.
## Uchen.ML
### Problem Statement
This project began as an attempt to build deep learning models that could be easily deployed in specific scenarios. It was developed alongside learning the theory of deep neural network training. Both the external API and internal implementation were in constant flux, with significant rewrites anticipated.
### Approach
Each component started as a test case. For example, each gradient calculator began in the test class, with all numbers verified against values returned by the PyTorch implementation. As the framework matured, the underlying math of the stacked components grew increasingly complex, making the tests essential for detecting subtle issues. Extensive rework often required benchmarks to justify code changes. Writing test cases helped refine the framework's API.
### Highlights
The project continues to evolve, despite extended breaks in development. Test cases have been invaluable for catching new issues early, including identifying when new APIs are too cumbersome for unconsidered use cases.
## Best Practices
- Clean up aggressively and avoid duplicate test cases. Do not test trivial code (such as getters and setters). Tests have a maintenance cost and can be a significant drag on engineer productivity and even team morale.
- Test behaviors, not the implementation. Use higher-level APIs and data that mimic real-world usage (see the sketch after this list).
- Use a tool that reruns the tests on file save, such as `jest --watch` or `ibazel`
- Do not add `TODO` comments in the code. Add disabled or failing tests instead.
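A tiny sketch of that behavior-first style (the function is hypothetical):
```js
// Assert on observable behavior for realistic inputs,
// not on which internal helpers the function calls.
function formatPrice(cents) {
  return `$${(cents / 100).toFixed(2)}`;
}

test('formats whole and fractional amounts', () => {
  expect(formatPrice(500)).toBe('$5.00');
  expect(formatPrice(1999)).toBe('$19.99');
});
```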
## Conclusion
TDD goes beyond just writing tests; it fundamentally shapes the design and architecture of the code. Tests help developers understand the requirements and constraints, leading to more robust and error-resistant code. Test cases also serve multiple purposes:
- Tracking Implementation Progress: They provide a clear, incremental path of development, showing what features have been implemented and what remains to be done. Each passing test signifies a step forward in the project, offering a sense of accomplishment and a clear indicator of progress.
- Onboarding New Team Members: For new developers joining the team, test cases offer a practical insight into the functionality and expected behavior of the software. They serve as an up-to-date documentation that new team members can use to understand the codebase more quickly and effectively.
- Providing a Safety Net for Future Changes: One of the most significant benefits of TDD is the confidence it provides when making future modifications. As the software evolves, having a comprehensive suite of tests ensures that new changes do not introduce regressions. This safety net allows developers to refactor and improve the code with greater assurance.
By integrating TDD into the development process, teams can achieve a higher standard of software quality, foster a culture of continuous improvement, and reduce long-term maintenance costs.
| eugeneo_17 |
1,892,876 | Elevate Your Developer Setup with the Best Mechanical Keyboards | Imagine this: You’re in the zone, your fingers dancing across the keys as you bring your latest... | 0 | 2024-06-18T20:55:47 | https://dev.to/3a5abi/elevate-your-developer-setup-with-the-best-mechanical-keyboards-2a44 | devtoys, webdev, productivity | Imagine this: You’re in the zone, your fingers dancing across the keys as you bring your latest coding project to life. Every keystroke feels like a satisfying click, each one precise and purposeful. The right keyboard can make this dream a reality, turning your coding sessions into a symphony of productivity and comfort. Let’s dive into the world of mechanical keyboards and discover how these incredible tools can transform your developer setup. Here are some top picks from [MechanicalKeyboards.com](https://mechanicalkeyboards.com/r?id=07gbgn) that are perfect for developers like you.
Read the full review here! -> [Elevate Your Developer Setup with the Best Mechanical Keyboards - DevToys.io](https://devtoys.io/2024/06/18/elevate-your-developer-setup-with-the-best-mechanical-keyboards/) | 3a5abi |
1,892,875 | Rust Diagnostic Attributes | Rust has introduced a powerful feature known as diagnostic attributes, which allows me to customise... | 0 | 2024-06-18T20:53:11 | https://dev.to/gritmax/rust-diagnostic-attributes-7kk | rust, tutorial | Rust has introduced a powerful feature known as diagnostic attributes, which allows me to customise the error messages emitted by the compiler. This feature is particularly useful for improving the clarity of error messages, especially in complex scenarios involving traits and type mismatches.
## Overview
The diagnostic attributes are part of a new built-in namespace, `#[diagnostic]`, introduced in Rust 1.78. These attributes help provide more informative and context-specific error messages, making it easier for developers to understand and fix issues. Rust is known for its helpful error messages, but there are always cases where they can be improved. Crates that use the type system to verify invariants at compile time often generate large, unclear error messages when something goes wrong. Examples include Bevy, Axum, and Diesel. By giving crate authors tools to control the error messages emitted by the compiler, they can make these messages clearer and more helpful.
## Key Features
1. **Custom Error Messages**: One of the main attributes in this namespace is `#[diagnostic::on_unimplemented]`. This attribute allows trait authors to specify custom error messages when a trait is required but not implemented for a type. For example, instead of a generic error message, you can provide a detailed explanation and hints on how to resolve the issue.
2. **Non-Intrusive**: The attributes in the `#[diagnostic]` namespace are designed to be non-intrusive. They do not affect the compilation result and are purely for enhancing the diagnostic output. This means that applying these attributes will not cause compilation errors as long as they are syntactically valid.
3. **Compiler Flexibility**: The compiler treats these diagnostic hints as optional. It may choose to ignore specific attributes or options, and the support for these attributes can change over time. However, the compiler must not change the semantics of an attribute or emit hard errors for malformed attributes.
## Usage Example
Here is an example of how to use the `#[diagnostic::on_unimplemented]` attribute:
```rust
#[diagnostic::on_unimplemented(
message = "The type `{Self}` does not implement `MyTrait`. Please ensure that `{Self}` provides an implementation for `my_method`.",
note = "Implement the `my_method` function for `{Self}` to satisfy the `MyTrait` requirement.",
label = "missing implementation for `my_method`"
)]
trait MyTrait {
fn my_method(&self);
}
struct MyType;
// Implement the trait for MyType
impl MyTrait for MyType {
fn my_method(&self) {
// Implementation goes here
}
}
// another type that does not implement MyTrait
struct AnotherType;
// This will trigger the custom error message because AnotherType does not implement MyTrait
fn use_trait<T: MyTrait>(item: T) {
item.my_method();
}
fn main() {
let item = AnotherType;
use_trait(item); // This line will cause a compile-time error with the custom message
}
```
In this example, attempting to use `AnotherType` with the `use_trait` function will trigger the custom error message because `AnotherType` does not implement `MyTrait`.
## Benefits
- **Improved Developer Experience**: By providing more specific and helpful error messages, developers can more quickly understand and resolve issues in their code.
- **Enhanced Code Clarity**: Custom diagnostic messages can include additional context and explanations, making it easier to understand the requirements and constraints of your code.
- **Consistency**: Collecting diagnostic attributes in a common namespace makes it easier for users to find and apply them, and for the language team to establish consistent rules and guidelines.
## Conclusion
The introduction of diagnostic attributes in Rust represents a significant step forward in enhancing the developer experience. By allowing for customized and context-specific error messages, Rust continues to uphold its reputation for providing helpful and user-friendly compiler diagnostics. As you adopt these new features, you'll find that your code becomes not only more robust but also more accessible to others in the Rust community.
For more detailed information, you can refer to the official Rust documentation and the RFC that introduced this feature.
## Sources
- https://rust-lang.github.io/rfcs/3368-diagnostic-attribute-namespace.html
- https://doc.rust-lang.org/reference/attributes.html
| gritmax |
1,892,874 | Why Your Business Needs a Modern Website: A Real-Life Success Story | Raise your hand if you've ever: Visited a website and immediately left. Felt frustrated by a slow,... | 0 | 2024-06-18T20:50:33 | https://dev.to/ridoy_hasan/why-your-business-needs-a-modern-website-a-real-life-success-story-333l | webdev, tutorial, productivity, learning |
Raise your hand if you've ever:
1. Visited a website and immediately left.
2. Felt frustrated by a slow, outdated site.
3. Wondered why some businesses thrive online while others don't.
I recently worked with a local bakery, Sweet Treats. Their pastries were amazing, but online sales were low. Their website was outdated, slow, and confusing. Customers couldn't find what they wanted, and sales suffered.
Emma, the bakery's owner, felt overwhelmed and stressed. She knew the website needed a change but didn’t know where to start.
The right website is crucial for business success. A poor website drives customers away. It creates mistrust and frustration.
We revamped the Sweet Treats website:
1. Made it modern and fast.
2. Made navigation user-friendly.
3. Gave it an appealing design and quick load times.
The results?
1. More visitors stayed on the site.
2. Online orders doubled within a month.
3. Emma felt relieved and excited.
A well-designed website:
1. Builds trust and credibility.
2. Showcases your brand effectively.
3. Provides a smooth user experience.
Invest in a professional website. Your business deserves it. Transform your online presence. Watch your business grow. Like Emma, you’ll see the difference.
Are you building the right website for your business success? Share your experiences and let's discuss!
Get in touch:
My LinkedIn: https://www.linkedin.com/in/ridoy-hasan7 | ridoy_hasan |
1,892,873 | I don't belong here but there's something jobber desperately needs. | Hey my name is Chris and I totally don't belong on this page but I wanted to reach out to the jobber... | 0 | 2024-06-18T20:49:36 | https://dev.to/cfiduccia/i-dont-belong-here-but-theres-something-jobber-desperately-needs-d69 | discuss, api, development, design | Hey, my name is Chris, and I totally don't belong on this page, but I wanted to reach out to the Jobber community. I have very limited experience with devs, just some alpha testing (someone was on vacation) for the API for the Kansas City Chiefs rewards program. I have moved away from technology and back into construction management. We use Jobber, and it's great for scheduling and holding customer contact data. However, there are a few things that could be added to Jobber that would let us use it on its own, without adding a bloated project management system.
Change orders are the gravy in construction and can be very costly for the contractor if they blow it. An example would be a customer changing the color of their countertops from white to black. Say you forgot to manually add this change order to your list, so it was never acted upon. The tops delivered will be wrong, re-ordering them will add two weeks to the schedule, and your mistake takes up space in the dumpster. Now a no-charge change order just cost you thousands.
You would need a form for change orders that is emailed to the homeowner for approval. Once approved, the change order is added to the scope of work, which would just be an amended estimate after contract. The amount of the change order is added to the total cost of the scope of work. The change order would then populate a change orders list that could be sorted by things such as the employee or subcontractor responsible for completing it, the type of work (e.g., carpentry, plumbing), and the change order's location in the critical path.
Change orders are desperately needed, but there are a few things you could add quickly. One would be a punch list: just a check-off list of things that trades need to correct. Another would be the color selection list. This is simply the color and the type of material the homeowner needs to choose, such as the exterior (almond vinyl vs. blue cedar siding) and the living room (beige wool carpeting vs. mahogany-stained 2-1/4" red oak flooring). A few text boxes would be fine.
How many times did I just type "change order" when C.O. works?
I would be very grateful if you would consider these additions. Feel free to contact me.
| cfiduccia |
1,892,871 | Electricity | This is a submission for DEV Computer Science Challenge v24.06.12: One Byte Explainer. ... | 0 | 2024-06-18T20:46:40 | https://dev.to/ronald_ljohnson_9f27bc2/electricity-6l9 | devchallenge, cschallenge, computerscience, beginners | *This is a submission for [DEV Computer Science Challenge v24.06.12: One Byte Explainer](https://dev.to/challenges/cs).*
## Explainer
Electricity is the cornerstone of computer science because it powers all computing devices. Binary code, the language of computers, relies on electrical signals to represent 0 and 1. Without electricity, none of the hardware—processors, memory, storage—would function, making modern computing impossible.
## Additional Context
ronald_ljohnson_9f27bc2
| ronald_ljohnson_9f27bc2 |
1,892,868 | Crafting a Web Application with Golang: A Step-by-Step Guide | Introduction This is the first part the series Build a Web App with Golang . I am going to... | 27,770 | 2024-06-18T20:46:35 | https://blog.gkomninos.com/crafting-a-web-application-with-golang-a-step-by-step-guide | go, webdev, coding | ## Introduction
This is the first part of the series [Build a Web App with Golang](https://blog.gkomninos.com/series/webapp-using-golang). I am going to show you how you can build a web application using Golang. In fact, we are going to build an application that I need.
In this blog post we will cover the following:
* Scope, meaning we will define what we want to build
The goal of this blog post is to understand the problem and its domain. This will help us understand what we need to code.
I will try to keep each blog post in a separate git branch. I believe this will make it easier for you to follow the tutorial.
### Scope
> As a freelancer I want a web application in which I will login and I can create invoices for my clients.
> I want to be able to view the invoices created and download a PDF version
Let's break down the above requirements one by one.
First, let's see what an invoice contains:
* Freelancer's details, which include:
  * Company Name
  * Company Address
  * Email
  * Tax Number
  * VAT Number
  * Bank Accounts
* Client details, which include:
  * Company Name
  * Company Address
  * Email
  * VAT Number
* Invoice Details:
  * Invoice Number
  * Invoice Date
  * Line Items
Here is a sample invoice:

A Line Item consists of:
- Description
- Disbursements
- Fees
A Bank Account consists of:
- Bank Name
- Account No.
- IBAN
- BIC
We also need to have the VAT rate for our country and the amount of days that the invoice is payable within.
Let's now discuss the invoice number. This should be unique per invoice, and in most cases it must be sequential. In my case the invoice number has a form like 1030/24, which means that the invoice belongs to the year 2024 and has the number 1030. The next invoice should be 1031/24, and so on.
This generation should be automated.
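To make the domain concrete, here is a rough sketch of how these entities and the invoice-number generation could look in Go. The type and function names are my own assumptions for illustration; the real models will be written in the next parts of the series.

```go
package main

import "fmt"

// LineItem mirrors the line items described above.
type LineItem struct {
	Description   string
	Disbursements float64
	Fees          float64
}

// BankAccount holds the freelancer's payment details.
type BankAccount struct {
	BankName  string
	AccountNo string
	IBAN      string
	BIC       string
}

// Invoice ties everything together.
type Invoice struct {
	Number    string // e.g. "1030/24"
	Date      string
	LineItems []LineItem
}

// nextInvoiceNumber produces the sequential "<number>/<yy>" format described above.
func nextInvoiceNumber(lastNumber int, year int) string {
	return fmt.Sprintf("%d/%02d", lastNumber+1, year%100)
}

func main() {
	fmt.Println(nextInvoiceNumber(1030, 2024)) // prints 1031/24
}
```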
Since this is an MVP and it should work for only one freelancer, we will use HTTP Basic Authentication, and the UI will be really minimal.
### Pages
Since we will build a web application, let's define the pages that we want to build.
That's a draft, but it will help us get started.
- `/`: this is the homepage of the application. It should display a list of all the invoices created with pagination. Next to each invoice we want to have a button to view its details.
- `/settings`: here we will display our company's details
- `/settings/edit`: a form to edit our company's details
- `/clients`: we will display a list of our clients
- `/clients/id`: it will display the details for the client with id `id`
- `/clients/id/edit`: a form to edit the details of the client with id `id`
- `/clients/new`: a form to add a new client
- `/invoices/id/download`: this will download the invoice with id `id`
- `/invoices/id`: we will view the details of an invoice
- `/invoices/new`: a form to create a new invoice
### Technologies that we will use
We need to store our data somewhere. For simplicity and ease of deployment we are going to use the awesome [SQLite](https://sqlite.org/) DBMS.
This post is all about Golang, so that's the back-end language. However, we are going to use the [Echo Framework](https://echo.labstack.com/) and [GORM](https://gorm.io/index.html) to speed up development.
For the front-end I am not sure yet; most likely we are going to use [Bootstrap](https://getbootstrap.com/). I am also pretty sure that we are going to use ChatGPT to write the basic HTML and CSS for us.
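To give a feel for how these pieces fit together, here is a minimal, hypothetical skeleton wiring Echo, GORM and SQLite. The model and the handlers are placeholders for illustration, not the code we will actually write later in the series.

```go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

// Client is a placeholder model; the real schema comes later in the series.
type Client struct {
	ID          uint `gorm:"primaryKey"`
	CompanyName string
	Email       string
}

func main() {
	// Open (or create) the SQLite database file.
	db, err := gorm.Open(sqlite.Open("invoices.db"), &gorm.Config{})
	if err != nil {
		panic(err)
	}
	// Create the table from the model.
	if err := db.AutoMigrate(&Client{}); err != nil {
		panic(err)
	}

	e := echo.New()
	// Homepage: will list invoices with pagination.
	e.GET("/", func(c echo.Context) error {
		return c.String(http.StatusOK, "invoices list goes here")
	})
	// Clients list, backed by the database.
	e.GET("/clients", func(c echo.Context) error {
		var clients []Client
		if err := db.Find(&clients).Error; err != nil {
			return err
		}
		return c.JSON(http.StatusOK, clients)
	})

	e.Logger.Fatal(e.Start(":8080"))
}
```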
### Conclusion
In the next blog post we will dive into coding. We will create our git repository and set up some basic tooling that will save us a lot of time while we are developing.
I expect to release the next blog post of the tutorial every day.
If you have any questions or requests please write a comment.
Please subscribe to my newsletter so you get the updates.
Finally, just today I created an [X Account](https://x.com/gkomdev) and I would love to start having some followers. | gosom |
1,892,861 | Passing My CKA! | Introduction Hello, fellow developing cloud architecture network administration... | 0 | 2024-06-18T20:46:18 | https://dev.to/ravenesc/passing-my-cka-5be1 | kubernetes, learning, devops, cloud |
## Introduction
Hello, fellow developing cloud architecture network administration engineering Kubernetes cyber security consultants! 😮💨
I'm excited to share my journey to becoming a Certified Kubernetes Administrator (CKA). It’s been a whirlwind of learning experiences and growth!
## My Journey
### 08/2023: AWS Solutions Architect Associate and CSPM Development
In August, I successfully passed my AWS Solutions Architect Associate exam. This was a significant milestone for me, as it not only validated my cloud expertise but also sparked my interest in exploring more advanced cloud technologies. To build on this knowledge, I developed a Cloud Security Monitoring Tool (CSPM), which was fully deployable via Terraform and GitHub. This project was my first foray into the world of automation and infrastructure as code, laying the groundwork for my future Kubernetes journey.
### 10/2023: Diving into Kubernetes
After completing my AWS certification, I was eager to dive deeper into container technology. I asked my mentor about learning Kubernetes since I barely understood AWS ECS, ECR, and EKS. I started by learning Kubernetes (k8s) on the terminal, a hands-on approach that felt both exciting and challenging. This step marked the beginning of my deep dive into container orchestration.
### 11/2023: Starting with the Basics
To get a solid understanding of Kubernetes, I began with an introductory course on [YouTube](https://youtu.be/XuSQU5Grv1g). This course provided a basic overview and was essential for building my foundational knowledge. It was a lighter introduction, but it gave me the intrigue to tackle more complex topics.
### 12/2023: Intensive Learning with KodeKloud Udemy Course
After completing the introductory course, I realized I needed more comprehensive training to truly master Kubernetes. I enrolled in the [KodeKloud](https://kodekloud.com/) [Udemy](https://www.udemy.com/course/certified-kubernetes-administrator-with-practice-tests/) course specifically designed for the CKA exam. This course offered in-depth lessons and practical labs that significantly enhanced my understanding of Kubernetes. I spent countless hours working through exercises and scenarios, gradually building my competence.
### 01/2024 to 04/2024: The Learning Curve
Learning Kubernetes was not an overnight process. It required dedication and persistence. During these months, I immersed myself in various aspects of Kubernetes, from cluster architecture to deployment strategies. I also expanded my knowledge beyond Kubernetes, delving into computer networking, logging architecture, Linux commands, Secrets management, industry standards, ETCD, Kubeadm, Kubectl, YAML, JSONpath, updating clusters, Linux file system architecture, and backup and restore.
### 05/2024: Exam Preparation
With a solid foundation, I scheduled my exam and began rigorous preparation. I took numerous mock exams using resources like [Killercoda](https://killercoda.com/playgrounds), [Killer.sh](https://killer.sh/cka), and materials from the [Linux Foundation](https://training.linuxfoundation.org/). The PSI 'at-home' testing proved to be a stressful process; I had never been through such a rigid testing environment. The stress was real, but I learned to control what I could and let go of the rest. And honestly, it went better than I expected.
### 05/30/2024: First Attempt
So I went into the PSI browser, followed the steps they gave me, and on my first attempt I scored 60, just shy of the 66 needed to pass. I identified my weak areas, particularly networking and cluster updates, and dedicated two more weeks to intensive study.
### 06/13/2024: The Final Attempt
Feeling confident, and after having gone through the process of PSI testing, I retook the exam and scored 78! The 6-month-long study was done, and I officially became a Certified Kubernetes Administrator. The journey had its ups and downs, but looking back every moment was worth it.
## Conclusion
I’m incredibly excited to have achieved this milestone. The journey taught me the importance of persistence, continuous learning, and the value of a supportive community. I’m grateful to everyone who supported me along the way.
As I move forward, I plan to take a month to reset and reflect on my current skills. My goal is to continue learning and growing, building on the foundation I've established. I’m looking forward to connecting with others who are on a similar path and sharing our experiences. Maybe one day, I will get to see you at a conference when I can afford it. 😂
Until then, thank you for reading, and here’s to the next chapter!
| ravenesc |
1,892,870 | shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 2 | In this article, I discuss how Blocks page is built on ui.shadcn.com. Blocks page has a lot of... | 0 | 2024-06-18T20:45:47 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-is-blocks-page-built-part-2-1393 | javascript, nextjs, shadcnui, opensource | In this article, I discuss how [Blocks page](https://ui.shadcn.com/blocks) is built on [ui.shadcn.com](http://ui.shadcn.com). [Blocks page](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx) has a lot of utilities used, hence I broke down this Blocks page analysis into 5 parts.
1. [shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 1](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-1-ac4472388f0a)
2. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 2
3. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 3 (Coming soon)
4. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 4 (Coming soon)
5. shadcn-ui/ui codebase analysis: How is “Blocks” page built — Part 5 (Coming soon)
[In part 1](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-1-ac4472388f0a), we looked at two important modular functions named `getAllBlockIds` and `_getAllBlocks`.
In part 2, we will look at the following:
1. Where is BlockDisplay used?
2. Where to find BlockDisplay component?
3. BlockDisplay component explained
4. getBlock function
Where is BlockDisplay used?
---------------------------

BlockDisplay is used in [blocks/page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/blocks/page.tsx). In part 1, I explained the code behind fetching blocks; when you visit the [blocks page](https://ui.shadcn.com/blocks) on [ui.shadcn.com](http://ui.shadcn.com), the many blocks you see rendered come from the blocks array above. So far these blocks do not contain the code that translates to the components used in an individual block. BlockDisplay has its own logic for rendering the components of each block. The important concept to remember here is to pass only the minimal information required.
Where to find BlockDisplay component?
-------------------------------------
You can find BlockDisplay component exported from [components/block-display.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5).

It has about 29 lines of code.
BlockDisplay component explained
--------------------------------
Let’s try to understand the BlockDisplay code at a high level.
```js
import { getBlock } from "@/lib/blocks"
import { BlockPreview } from "@/components/block-preview"
import { styles } from "@/registry/styles"

export async function BlockDisplay({ name }: { name: string }) {
  const blocks = await Promise.all(
    styles.map(async (style) => {
      const block = await getBlock(name, style.name)
      const hasLiftMode = block?.chunks ? block?.chunks?.length > 0 : false

      // Cannot (and don't need to) pass to the client.
      delete block?.component
      delete block?.chunks

      return {
        ...block,
        hasLiftMode,
      }
    })
  )

  if (!blocks?.length) {
    return null
  }

  return blocks.map((block) => (
    <BlockPreview key={`${block.style}-${block.name}`} block={block} />
  ))
}
```
There is a blocks array that is populated once Promise.all resolves. Promise.all receives the array of styles imported from registry/styles; these styles are mapped over and each style is processed further.
getBlock is a utility function that accepts two parameters, `name` and `style.name`. There is also a flag named hasLiftMode that is based on `block.chunks.length`. `block.component` and `block.chunks` are deleted because they are not required on the client side.
These blocks are then mapped over and each block is passed to BlockPreview.
getBlock function
-----------------
The getBlock function is imported from [lib/blocks.ts](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27) and contains the code below:
```js
export async function getBlock(
  name: string,
  style: Style["name"] = DEFAULT_BLOCKS_STYLE
) {
  const entry = Index[style][name]
  const content = await _getBlockContent(name, style)

  const chunks = await Promise.all(
    entry.chunks?.map(async (chunk: BlockChunk) => {
      const code = await readFile(chunk.file)
      const tempFile = await createTempSourceFile(`${chunk.name}.tsx`)
      const sourceFile = project.createSourceFile(tempFile, code, {
        scriptKind: ScriptKind.TSX,
      })

      sourceFile
        .getDescendantsOfKind(SyntaxKind.JsxOpeningElement)
        .filter((node) => {
          return node.getAttribute("x-chunk") !== undefined
        })
        ?.map((component) => {
          component
            .getAttribute("x-chunk")
            ?.asKind(SyntaxKind.JsxAttribute)
            ?.remove()
        })

      return {
        ...chunk,
        code: sourceFile
          .getText()
          .replaceAll(`@/registry/${style}/`, "@/components/"),
      }
    })
  )

  return blockSchema.parse({
    style,
    highlightedCode: content.code ? await highlightCode(content.code) : "",
    ...entry,
    ...content,
    chunks,
    type: "components:block",
  })
}
```
As you can see, there is a lot going on here. We will cover some parts of this code in this article and the rest in the coming articles.
```js
const entry = Index[style][name]
const content = await _getBlockContent(name, style)
```
Index is imported from `__registry__` and is autogenerated by [scripts/build-registry.ts](https://github.com/shadcn-ui/ui/blob/main/apps/www/scripts/build-registry.mts) (more on this later).
[`_getBlockContent`](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L107) returns an object that is assigned to content. The code below is picked from `_getBlockContent`.
```js
async function _getBlockContent(name: string, style: Style["name"]) {
  const raw = await _getBlockCode(name, style)
```
As you can see, this in turn calls `_getBlockCode`.
`_getBlockCode` contains the following code:
```js
async function _getBlockCode(
  name: string,
  style: Style["name"] = DEFAULT_BLOCKS_STYLE
) {
  const entry = Index[style][name]
  const block = registryEntrySchema.parse(entry)

  if (!block.source) {
    return ""
  }

  return await readFile(block.source)
}
```
I have talked about Index, registryEntrySchema and parse in great detail in [part 1](https://medium.com/@ramu.narasinga_61050/shadcn-ui-ui-codebase-analysis-how-is-blocks-page-built-part-1-ac4472388f0a). We have not yet looked at the readFile function.
readFile
--------
The readFile function is used to read the file content from `block.source`:
```js
async function readFile(source: string) {
  const filepath = path.join(process.cwd(), source)
  return await fs.readFile(filepath, "utf-8")
}
```
[Index](https://github.com/shadcn-ui/ui/blob/main/apps/www/__registry__/index.tsx) contains blocks that do not have a source, but at the end you will find some blocks with a source, as shown below.

For example, for authentication-04, the code is available at [`__registry__/new-york/block/authentication-04.tsx`](https://github.com/shadcn-ui/ui/blob/main/apps/www/__registry__/new-york/block/authentication-04.tsx).

What you see above as one of the blocks shown on the blocks page on [ui.shadcn.com](http://ui.shadcn.com) is the code from [`__registry__/new-york/block/authentication-04.tsx`](https://github.com/shadcn-ui/ui/blob/main/apps/www/__registry__/new-york/block/authentication-04.tsx). Isn't this awesome!?
Let’s take a step back now and get back on track with out function call stack. We got to readFile from \_getBlockCode. With this so far, we have understood the \_getBlockCode. We got to \_getBlockCode from \_\_getBlockContent and only covered the first line as shown below
### \_getBlockContent
```js
async function _getBlockContent(name: string, style: Style["name"]) {
  const raw = await _getBlockCode(name, style)

  const tempFile = await createTempSourceFile(`${name}.tsx`)
  const sourceFile = project.createSourceFile(tempFile, raw, {
    scriptKind: ScriptKind.TSX,
  })

  // Extract meta.
  const description = _extractVariable(sourceFile, "description")
  const iframeHeight = _extractVariable(sourceFile, "iframeHeight")
  const containerClassName = _extractVariable(sourceFile, "containerClassName")

  // Format the code.
  let code = sourceFile.getText()
  code = code.replaceAll(`@/registry/${style}/`, "@/components/")
  code = code.replaceAll("export default", "export")

  return {
    description,
    code,
    container: {
      height: iframeHeight,
      className: containerClassName,
    },
  }
}
```
In part 3, I will explain how createTempSourceFile, createSourceFile and `_extractVariable` work, in order to understand `_getBlockContent` completely. Keep in mind, we still need to get back to `getBlock`, since it is used in BlockDisplay. Did you notice the chain of function calls here? Functions following the single responsibility principle, self-explanatory and modular.
Conclusion
----------
In part 2, I discussed the BlockDisplay component. This component's functions are chained together so well that the code respects the single responsibility principle (SRP) and stays quite modular.
I will highlight the key function in this chain. It goes like this:
BlockDisplay => getBlock => `_getBlockContent` => readFile
readFile is where you will find the code that reads the source code available at `__registry__/`, which loads the "blocks" that are rendered via an iframe. Jesus! I wasn't expecting magic of this sort.
So far, this only completes the `_getBlockContent` explanation. I still need to get back to `getBlock` and BlockDisplay, which involve their own complex functions such as createTempSourceFile, createSourceFile and `_extractVariable`. I have zero clue how they work, but I will make a good attempt to understand and explain them in an easy way in the next article.
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/block-display.tsx#L5)
2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27](https://github.com/shadcn-ui/ui/blob/main/apps/www/lib/blocks.ts#L27) | ramunarasinga |
1,892,867 | pygame button gui | def button (msg, x, y, w, h, ic, ac, action=None ): mouse = pygame.mouse.get_pos() ... | 0 | 2024-06-18T20:40:35 | https://dev.to/ghazie754/pygame-button-gui-1pbe | python, pygame, gamedev, ui |
```python
import pygame

# Assumes `watercycle` (the display surface), game_loop(), and the
# text_objects() helper are defined elsewhere in the program.
def button(msg, x, y, w, h, ic, ac, action=None):
    mouse = pygame.mouse.get_pos()
    click = pygame.mouse.get_pressed()
    # Hover state: the cursor is inside the button rectangle,
    # so draw it with the active color `ac`.
    if (x + w > mouse[0] > x) and (y + h > mouse[1] > y):
        pygame.draw.rect(watercycle, ac, (x, y, w, h))
        if click[0] == 1 and action is not None:
            if action == "Start":
                game_loop()
            elif action == "Load":
                # Function that loads the saved file goes here
                pass
            elif action == "Exit":
                pygame.quit()
    else:
        # Idle state: draw the button with the inactive color `ic`.
        pygame.draw.rect(watercycle, ic, (x, y, w, h))
    smallText = pygame.font.Font("freesansbold.ttf", 20)
    textSurf, textRect = text_objects(msg, smallText)
    textRect.center = ((x + (w / 2)), (y + (h / 2)))
    watercycle.blit(textSurf, textRect)
```
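A minimal usage sketch (the colors, coordinates, and menu loop are my own assumptions for illustration, since the rest of the program is not shown):
```python
# Inside the main menu loop, after drawing the background each frame:
button("Start", 150, 450, 100, 50, BLUE, CYAN, "Start")
button("Load", 350, 450, 100, 50, BLUE, CYAN, "Load")
button("Exit", 550, 450, 100, 50, BLUE, CYAN, "Exit")
pygame.display.update()
```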
| ghazie754 |
1,892,701 | CSS Variables (CSS Custom properties) for Beginners | In our projects, we often encounter repetitive values such as width, color, font, etc. These values... | 0 | 2024-06-18T20:40:32 | https://udokakasie.hashnode.dev/a-comprehensive-guide-to-css-variables-with-real-examples | webdev, beginners, css, javascript | In our projects, we often encounter repetitive values such as width, color, font, etc. These values can lead to redundancy in our work and make it difficult to change on large projects.
This guide explains CSS variables also known as CSS custom properties in a beginner-friendly manner and a step-by-step guide for changing CSS values using JavaScript when a user performs an action such as a click.
By implementing CSS Variables, you can streamline your design process and improve the efficiency of your project.
**Prerequisite:** A basic knowledge of HTML, and CSS is required to comprehend this article.
## What are CSS Variables?
You may be wondering if "variables" exist in CSS (Cascading Style Sheets) as they do in programming languages, and the answer is "yes". CSS variables, also known as CSS custom properties, are entities defined by CSS authors that hold specific values to be reused across components.
This means that CSS provides you with a tiny storage of value that can be referenced as many times as possible within your project.
### Let's break this down,
Imagine two people working on a project that requires changing a brand color from red to green. Person1 edits the color on all elements in the project to green, while Person2 simply changes the CSS value on their variable to green and everything works just fine. Would you rather be Person1 or Person2?
## Benefits of Using CSS Variables
- Improves the readability and semantics of code
- Makes changing repetitive values a lot easier.
- Easy update of values dynamically using JavaScript, offering flexibility in response to user actions or clicks.

## How to Declare CSS Variables
A CSS variable is declared by prefixing the property name with two dashes and assigning it any valid CSS value.
**Syntax**
```
--variable-name: value
```
This can be declared locally or globally depending on your specific need.
**Local declaration** means declaring the variable inside a CSS selector and, hence can only be accessed inside that scope.
```
/* this is a local declaration*/
header{
--brand-color: red
}
```
**Global declaration** is done using _:root_ pseudo-class. This makes the variable accessible globally.
```
/* this is a global declaration*/
:root{
--brand-color: red
}
```
💡Note: CSS variable names are case-sensitive, hence --primary-Color and --primary-color are not the same.
## How to Access CSS Variables
CSS variables are accessed using the var() function.
```
selector{
property: var(--variableName)
}
```
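The var() function also accepts an optional second argument: a fallback value that is used when the variable has not been defined.
```
selector{
  property: var(--variableName, fallback-value)
}
```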
## How to Change CSS Variables Using JavaScript
This is helpful when you need to change some values after a user performs a particular action, for example, when a user selects fonts, colors, or themes on your website.
To change CSS values based on the user's actions, take the following steps:
Step 1: Setup Your HTML
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<link rel="stylesheet" href="./style.css">
<script src="./app.js"></script>
<title>Document</title>
</head>
<body>
<header>
<h1>Hello There</h1>
<p>This is a paragraph</p>
<button id="button" onclick="handleClick()">Change Color here</button>
</header>
</body>
</html>
```
Step 2: Style your CSS;
```
:root{
--primary-color: blue;
}
h1{
color: var(--primary-color)
}
p{
background-color: var(--primary-color);
}
button{
border: 5px solid var(--primary-color)
}
```
Output

Step 3: JavaScript;
Manipulate the DOM (Document Object Model) and get the CSS selectors, e.g. `:root`.
```
const changeButton = document.getElementById('button')
const root = document.querySelector(':root')
```
Create a function for handling the click event.
In the function, change the root value by using the "setProperty" method.
```
function handleClick(event) {
root.style.setProperty('--primary-color', 'green')
}
```
The Script.js
```
const changeButton = document.getElementById('button')
const root = document.querySelector(':root')
function handleClick(event) {
root.style.setProperty('--primary-color', 'green')
}
```
Output after click

## In Conclusion
This method works with any CSS value, such as fonts, widths, colors, etc.
To read more on CSS Style Properties please visit [MDN](https://developer.mozilla.org/en-US/docs/Web/CSS)
Please check out my other articles on [Front-end development](https://dev.to/udoka033/a-beginners-guide-to-front-end-development-4hjj), [Innermost workings of the web](https://dev.to/udoka033/how-the-web-works-page-loading-and-beyond-4jd6) and [How to overcome impostor syndrome in tech ](https://dev.to/udoka033/overcoming-imposter-syndrome-as-a-beginner-in-tech-20f0).
Please like, comment and follow for more web development, and tech-related articles.
| udoka033 |