Uncover the Perks of Apache Kafka in Event-Driven Architecture (Part I)
2024-06-30T04:02:17
https://dev.to/lidiog/uncover-the-perks-of-apache-kafka-in-event-driven-architecture-part-i-4hod
In the breakneck world of software development, agility and scalability are the keys to success. Two concepts are revolutionizing how we design and build systems: Event-Driven Architecture (EDA) and Apache Kafka.

## What's Event-Driven Architecture?

EDA is a design paradigm that zeroes in on producing, detecting, and reacting to events. It allows for creating more flexible and adaptable systems and facilitates real-time communication between different parts of an application.

**Apache Kafka: The Event Engine**

- Open-source distributed streaming platform.
- Acts as the central nervous system for data in motion.
- Allows processing millions of events per second reliably.

![Simple Kafka diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fixxdyzddiv5td2qawls.png)

---

## What's Apache Kafka?

Apache Kafka is more than just a messaging platform. It's a distributed data streaming system that has turned the way companies handle their real-time information flows on its head.

**A Glance at Kafka's History**

- 2011: Born at LinkedIn to handle massive volumes of real-time data.
- 2012: Becomes an open-source project under the Apache Software Foundation.
- 2014: LinkedIn reports processing 1 trillion messages per day with Kafka.
- 2017: Confluent, founded by Kafka's creators, launches Kafka as a service.
- Today: Used by over 80% of Fortune 100 companies.

**Constant Evolution**

Kafka has evolved from a simple message queue into a complete event streaming platform.

### Kafka's Key Features

✨ High Scalability
- Handles terabytes of data without losing performance.
- Scales horizontally with ease.

✨ Durability and Reliability
- Data replication to prevent losses.
- Built-in fault tolerance.

✨ High Performance
- Processes millions of messages per second.
- Extremely low latency (less than 10 ms).

✨ Persistence
- Stores data streams securely in a distributed system.

✨ Stream Processing
- Allows complex operations on real-time data streams.
!["Kafka Logo"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ambwhjoacbjgaoatmpc.png) **Why's Kafka So Popular?** Kafka has become the backbone of modern architectures due to its ability to: - Efficiently connect disparate systems. - Process and react to events in real-time. - Facilitate the construction of robust data pipelines. --- ## What's Event-Driven Architecture (EDA)? Event-Driven Architecture is a design paradigm that's changing the game in how we build software systems. But what does it really mean, and why's it such a big deal? Let's break it down. **Definition and Key Concepts** EDA is an architectural style where the system's flow is determined by events. But what exactly is an event? 🔹 Event: Any significant change in a system's state. Picture an e-commerce platform: - A customer adds a product to their cart → Event - A payment is made → Event - An order is shipped → Event In EDA, these events are the heart of the system. Some key concepts include: - Event Producers: Components that generate events. - Event Consumers: Components that react to events. - Event Channels: Paths through which events flow. - Event Processors: Analyze and transform events. ### EDA Benefits **Why are so many companies jumping on the EDA bandwagon?** Here are some compelling reasons: ✅ Decoupling: System components can evolve independently. ✅ Scalability: Easy to scale individual components based on demand. ✅ Flexibility: Add new functionalities without affecting existing systems. ✅ Real-time response: React instantly to changes in the system. ✅ Resilience: If one component fails, the rest of the system can keep running. ✅ Traceability: Facilitates tracking and auditing of all system actions. ### EDA vs. Monolithic Architecture !["EDA vs. 
Monolithic Architecture"](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vcmdo3r84g8i0gc48ojn.png) 🏗️ Monolithic Architecture: - Everything in one block - Hard to scale - Changes affect the entire system - Complex updates 🌐 Event-Driven Architecture: - Independent components - Scales easily - Localized changes - Modular updates **EDA in Action** Picture a food delivery app: User places an order → "Order Created" Event Restaurant accepts → "Order Accepted" Event Delivery driver assigned → "Driver Assigned" Event Food delivered → "Order Delivered" Event Each event triggers actions in different parts of the system, creating a smooth and reactive flow. --- ## Apache Kafka Fundamentals Apache Kafka is a robust platform with a unique architecture. Let's dive into its fundamental concepts and how it works. **Key Concepts** 1. Topics - Logical channels where messages are published. - Similar to folders in a file system or tables in a database. - Example: An "orders" topic for all order-related events. 2. Partitions - Divisions of a topic to allow parallelism. - Each partition is an ordered and immutable sequence of messages. - Allow Kafka to scale horizontally. 3. Producers - Applications that publish (write) messages to topics. - Can choose which partition to send each message to. - Example: A shopping cart system that produces "order placed" events. 4. Consumers - Applications that subscribe to topics and process messages. - Read messages from partitions. - Example: A billing system that consumes "order placed" events. **Basic Operation** Producers send messages to specific topics. Kafka distributes these messages across the topic's partitions. Consumers read messages from the partitions, maintaining an "offset" (position) in each partition. **🔑 Key Features:** - Persistence: Messages are stored on disk, providing durability. - Scalability: Partitions allow distributing the load across multiple servers (brokers). 
- High performance: Kafka can handle millions of messages per second.

![Kafka data flow diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pdtj9s1mbw5qozrzh88o.png)

**Practical Example**

Let's imagine an e-commerce system:

1. **Producer: Shopping cart system**
   - Generates "Order Created" events
2. **Topic: "orders"**
   - Split into 3 partitions to handle high volume
3. **Consumers:**
   - Inventory system: updates stock
   - Shipping system: prepares packages
   - Billing system: generates invoices

Each consumer processes messages independently, allowing for a highly scalable and resilient system.

**Advantages of this Architecture**

✅ Decoupling: Producers and consumers operate independently.
✅ Durability: Messages persist, even if consumers fail.
✅ Scalability: Easy to scale by adding more partitions or consumers.
✅ Performance: High-speed processing thanks to parallelism.

---

### Event-Driven Architecture (EDA) Principles

Event-Driven Architecture is an approach that fundamentally changes how we design and build systems. Let's explore its main components and the advantages it offers.

**EDA Main Components**

1. Event Producers
   - Generate events in response to state changes or actions.
   - Example: An IoT sensor system that produces temperature events.
2. Event Consumers
   - Receive and react to events.
   - Example: An alert system that notifies when the temperature is too high.
3. Event Bus
   - Channel through which events flow between producers and consumers.
   - In our context, Kafka acts as a robust and scalable event bus.
4. Event Processors
   - Analyze, transform, or enrich events.
   - Example: A system that calculates real-time temperature averages.
5. Event Store
   - Stores events for later analysis or reprocessing.
   - Kafka can serve as a durable and distributed event store.

**EDA Advantages**

1. Decoupling
   - Components can evolve independently.
   - Facilitates microservices adoption.
2. Scalability
   - Easy to scale individual components based on demand.
   - Allows efficient handling of traffic spikes.
3. Flexibility
   - Add new functionalities without affecting existing systems.
   - Rapid adaptation to changes in business requirements.
4. Reactivity
   - Real-time response to system changes.
   - Improves user experience and decision-making.
5. Resilience
   - If one component fails, others can continue functioning.
   - Easier recovery after failures.
6. Traceability
   - Complete record of all system actions.
   - Facilitates audits and debugging.
7. Polyglotism
   - Different parts of the system can use different technologies.
   - Flexibility to choose the best tool for each task.
8. Continuous Evolution
   - Facilitates implementation of incremental changes.
   - Better supports agile methodologies.

**EDA in Action: A Practical Example**

Let's imagine an e-commerce system:

1. User makes a purchase → "Purchase Made" Event
2. Inventory System receives the event → Updates stock
3. Shipping System receives the event → Prepares the package
4. Analytics System receives the event → Updates sales metrics

Each component reacts independently to the event, creating a highly modular and efficient system.

---

### Kafka as a Foundation for EDA

Apache Kafka has become the backbone of many modern event-driven architectures. Let's see why it's an ideal choice and how it's used in the real world.

**Features that make Kafka an ideal choice for EDA**

High Throughput
- Capable of handling millions of events per second.
- Perfect for large-scale systems with intensive data flows.

Low Latency
- Processes events in near real time (milliseconds).
- Essential for applications requiring immediate responses.

Durability and Reliability
- Persistent event storage.
- Data replication to prevent losses.

Horizontal Scalability
- Easy to scale by adding more brokers to the cluster.
- Adapts the system as event volume grows.

Event Ordering
- Guarantees event order within a partition.
- Crucial for maintaining event stream integrity.
Stream Processing
- Kafka Streams allows real-time processing.
- Facilitates implementation of complex business logic over event streams.

Rich Ecosystem
- Wide range of connectors and integration tools.
- Facilitates connection with various data sources and destinations.

**Common Kafka use cases in EDA**

1. Decoupled Microservices
   - Kafka acts as an intermediary between microservices.
   - Example: An e-commerce system where order, inventory, and shipping services communicate through Kafka.
2. Real-Time Analytics
   - Processes data streams for immediate insights.
   - Example: A trading platform analyzing market trends in real time.
3. IoT and Telemetry
   - Handles data flows from connected devices.
   - Example: A vehicle fleet monitoring system tracking location and status.
4. Log Processing
   - Centralizes and processes logs from multiple systems.
   - Example: A security platform analyzing logs to detect threats.
5. Data Synchronization
   - Keeps different systems or databases in sync.
   - Example: Synchronization between a legacy system and a new cloud platform.
6. Real-Time ETL Pipelines
   - Continuously transforms and loads data.
   - Example: A recommendation system updating user profiles in real time.
7. Monitoring and Alerts
   - Detects patterns or anomalies and generates alerts.
   - Example: An infrastructure monitoring system alerting on potential failures.

![Kafka Use Cases](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/toelc63byvftwsc7od30.png)

Kafka provides a solid foundation for implementing EDA in a wide range of scenarios. Its ability to handle large volumes of events reliably and in real time makes it an indispensable tool in the toolkit of modern architects and developers.

In Part 2 of this article, we'll dive deeper into how to implement these architectures, the concrete benefits, and how to tackle common challenges. Don't miss it!

---

**Ready to dive deeper? Continue with Part 2!**

In this first part, we've explored the fundamentals of Apache Kafka and Event-Driven Architecture (EDA). We've seen how these concepts are revolutionizing the design of modern software systems. But this is just the beginning.

In Part 2 of this article, we'll delve into more advanced and practical aspects:

- Implementing EDA with Kafka: Techniques and best practices.
- Challenges and considerations: How to overcome common obstacles.
- Tools and ecosystem: Powering up your tech stack.
- Future trends: Where the world of Kafka and EDA is heading.

Whether you're a seasoned developer or just starting your journey in the world of software architecture, Part 2 will provide you with valuable and applicable insights. Are you ready to take your understanding of Kafka and EDA to the next level? Don't miss Part 2!
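The topic → partition → offset mechanics from the Fundamentals section can be illustrated with a toy in-memory log. This is a sketch of the model only, not Kafka's client API; the class and key names are invented:

```javascript
// A toy "topic": a fixed set of partitions, each an append-only log.
class Topic {
  constructor(name, numPartitions) {
    this.name = name;
    this.partitions = Array.from({ length: numPartitions }, () => []);
  }

  // Producer side: hash the key to pick a partition, so messages
  // sharing a key land in the same partition and keep their order.
  produce(key, value) {
    const hash = [...key].reduce((h, c) => h + c.charCodeAt(0), 0);
    const partition = hash % this.partitions.length;
    const offset = this.partitions[partition].push(value) - 1;
    return { partition, offset };
  }

  // Consumer side: read a partition from a remembered offset onward.
  consume(partition, fromOffset) {
    return this.partitions[partition].slice(fromOffset);
  }
}

const orders = new Topic('orders', 3);
const first = orders.produce('customer-7', 'Order Created');
orders.produce('customer-7', 'Order Shipped');

// Replaying from offset 0 yields both events for that key, in order.
const replay = orders.consume(first.partition, 0);
console.log(replay); // ['Order Created', 'Order Shipped']
```

Because each consumer tracks its own offset, a billing system and an inventory system can read the same partition independently, which is the decoupling the article describes.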
lidiog
Creating a C compiler in JavaScript
2024-06-30T03:57:33
https://dev.to/sh20raj/creating-a-c-compiler-in-javascript-391c
javascript, c
Creating a C compiler in JavaScript is a complex and ambitious project that involves several components, including lexical analysis, parsing, semantic analysis, and code generation. Below is a simplified, high-level example of how you might start building such a compiler. It walks through the tokenization, parsing, and code generation stages for a tiny fragment of C.

### Step 1: Lexical Analysis (Tokenization)

The lexical analyzer (lexer) converts the input C code into a stream of tokens.

```javascript
class Lexer {
  constructor(input) {
    this.input = input;
    this.tokens = [];
    this.current = 0;
  }

  tokenize() {
    while (this.current < this.input.length) {
      let char = this.input[this.current];

      // Skip whitespace.
      if (/\s/.test(char)) {
        this.current++;
        continue;
      }

      // Identifiers: letters, digits, underscores.
      // The bounds check matters: regex.test(undefined) coerces to the
      // string "undefined" and would match, looping forever at end of input.
      if (/[a-zA-Z_]/.test(char)) {
        let start = this.current;
        while (this.current < this.input.length &&
               /[a-zA-Z0-9_]/.test(this.input[this.current])) {
          this.current++;
        }
        this.tokens.push({ type: 'IDENTIFIER', value: this.input.slice(start, this.current) });
        continue;
      }

      // Integer literals.
      if (/[0-9]/.test(char)) {
        let start = this.current;
        while (this.current < this.input.length &&
               /[0-9]/.test(this.input[this.current])) {
          this.current++;
        }
        this.tokens.push({ type: 'NUMBER', value: this.input.slice(start, this.current) });
        continue;
      }

      // Single-character operators and punctuation.
      switch (char) {
        case '+': this.tokens.push({ type: 'PLUS', value: '+' }); this.current++; break;
        case '-': this.tokens.push({ type: 'MINUS', value: '-' }); this.current++; break;
        case '*': this.tokens.push({ type: 'STAR', value: '*' }); this.current++; break;
        case '/': this.tokens.push({ type: 'SLASH', value: '/' }); this.current++; break;
        case '=': this.tokens.push({ type: 'EQUAL', value: '=' }); this.current++; break;
        case ';': this.tokens.push({ type: 'SEMICOLON', value: ';' }); this.current++; break;
        case '(': this.tokens.push({ type: 'LPAREN', value: '(' }); this.current++; break;
        case ')': this.tokens.push({ type: 'RPAREN', value: ')' }); this.current++; break;
        default: throw new TypeError('Unexpected character: ' + char);
      }
    }
    return this.tokens;
  }
}
```
### Step 2: Parsing

The parser converts the stream of tokens into an abstract syntax tree (AST).

```javascript
class Parser {
  constructor(tokens) {
    this.tokens = tokens;
    this.current = 0;
  }

  parse() {
    let ast = { type: 'Program', body: [] };
    while (this.current < this.tokens.length) {
      ast.body.push(this.parseStatement());
    }
    return ast;
  }

  parseStatement() {
    let token = this.tokens[this.current];
    let next = this.tokens[this.current + 1];
    // Only assignments of the form `name = expr;` are supported here;
    // the `next &&` guard avoids reading past the end of the token stream.
    if (token.type === 'IDENTIFIER' && next && next.type === 'EQUAL') {
      return this.parseAssignment();
    }
    throw new TypeError('Unknown statement: ' + token.type);
  }

  parseAssignment() {
    let identifier = this.tokens[this.current];
    this.current++; // skip identifier
    this.current++; // skip equals sign
    let value = this.parseExpression();
    this.expect('SEMICOLON');
    return { type: 'Assignment', identifier: identifier.value, value: value };
  }

  parseExpression() {
    let token = this.tokens[this.current];
    if (token.type === 'NUMBER') {
      this.current++;
      return { type: 'Literal', value: Number(token.value) };
    }
    throw new TypeError('Unknown expression: ' + token.type);
  }

  expect(type) {
    let token = this.tokens[this.current];
    if (!token || token.type !== type) {
      throw new TypeError('Expected ' + type + ' but found ' +
        (token ? token.type : 'end of input'));
    }
    this.current++;
  }
}
```

### Step 3: Code Generation

Finally, the code generator converts the AST into the target language, which could be JavaScript or any other language.
```javascript
class CodeGenerator {
  generate(node) {
    switch (node.type) {
      case 'Program':
        return node.body.map(statement => this.generate(statement)).join('\n');
      case 'Assignment':
        return `let ${node.identifier} = ${this.generate(node.value)};`;
      case 'Literal':
        return node.value;
      default:
        throw new TypeError('Unknown node type: ' + node.type);
    }
  }
}
```

### Putting It All Together

Here is how you might use the lexer, parser, and code generator:

```javascript
const input = `x = 42;`;

const lexer = new Lexer(input);
const tokens = lexer.tokenize();
console.log('Tokens:', tokens);

const parser = new Parser(tokens);
const ast = parser.parse();
console.log('AST:', JSON.stringify(ast, null, 2));

const generator = new CodeGenerator();
const output = generator.generate(ast);
console.log('Output:', output); // Output: let x = 42;
```

This will tokenize the input, parse it into an AST, and generate JavaScript code from the AST.

### Note

This example is highly simplified and only handles a tiny subset of the C language. A full-fledged C compiler would require handling a much larger set of tokens, parsing complex expressions, statements, declarations, and types, and generating more sophisticated code.
sh20raj
1,906,292
Bitcoin Native Assets and the Bitroot Protocol Testnet: An In-Depth Report
Bitcoin native assets have unprecedented potential. Bitcoin market cap and ecosystem data: Bitcoin is not just a cryptocurrency but a globally recognized store-of-value consensus. The Bitcoin network currently averages around 15 million active transactions per month, with roughly 20 million monthly active users, and each month new...
0
2024-06-30T03:55:28
https://dev.to/crypto_senior/bi-te-bi-yuan-sheng-zi-chan-yu-bitrootxie-yi-ce-shi-wang-shen-du-bao-gao-59pn
**Bitcoin Native Assets Have Unprecedented Potential**

**Bitcoin market cap and ecosystem data**

Bitcoin is not just a cryptocurrency; it is a globally recognized store-of-value consensus. The Bitcoin network currently averages around 15 million active transactions per month, with roughly 20 million monthly active users and about 8 million new users each month.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sdps8b1ewtmn653ugeb3.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4n44f3nk1w6yds9nx7i4.png)

Data source: https://www.theblock.co/data/on-chain-metrics/bitcoin

In addition, Bitcoin consistently accounts for more than 50% of the total cryptocurrency market, with a market cap exceeding $1.2 trillion, making it the undisputed leader of the crypto space. Yet the market for native assets on the Bitcoin network is worth only a few billion dollars, and in the recent sluggish market I believe it is hovering around $1.5 billion.

Now look next door at Ethereum. Ethereum's market cap is only about a third of Bitcoin's, roughly $400 billion, but the native asset tokens on the Ethereum network are worth about as much as Ethereum itself, around $420 billion.

In relative terms, Ethereum's native assets amount to roughly 100% of Ethereum's market cap, while Bitcoin's native assets amount to only about 0.12% of Bitcoin's.

Having long been active on the front lines of the crypto industry, I can also clearly sense that the market has grown tired and wary of VC coins. Those high-valuation, low-float VC tokens list on centralized exchanges at multi-billion-dollar market caps and then dump on schedule as they unlock. Crypto users at large have come to resent this unhealthy market manipulation. The Bitcoin inscriptions that championed fair launches last year, and the fair-launch memes popular on Solana this year, share the same core idea: drop the VCs and let the market regulate itself.

So let me offer a simple conclusion up front: over time, the market's center of gravity will gradually shift from VC coins to fairer asset markets, and the Bitcoin native asset market is arriving at exactly the right moment. The Bitcoin native asset market has at least 900x room to grow. Choosing a track with enormous potential is the surest secret to winning as an investor.

**Bitcoin native assets from a macro perspective**

Let's look at the Bitcoin native asset track from another angle.

On the monthly chart, consider the relationship over the past three years between the Bitcoin price and exchange Bitcoin reserves. When exchange reserves were at their peak, the Bitcoin price was at a bottom; in 2024, as exchange reserves hit new lows, the Bitcoin price reached an all-time high.

This means that institutions, miners and retail investors alike are, at the data level, fleeing centralized exchanges. After the Bitcoin ETFs were approved in January this year, Bitcoin finally shook off the old pattern of "pricing by the top exchanges" and its value is now genuinely determined by the market.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0spgf4vjvxx56fx0b6kh.png)

Data source: https://cryptoquant.com/asset/btc/chart/exchange-flows/exchange-reserve?exchange=all_exchange&window=DAY&sma=0&ema=0&priceScale=log&metricScale=linear&chartStyle=line

In the fourth Bitcoin halving cycle, the explosive adoption of the Ordinals protocol and similar protocols made the crypto industry realize the positive externalities that issuing and trading assets on Bitcoin's L1 brings to mainnet consensus security and ecosystem growth. It was the Bitcoin ecosystem's "Uniswap moment."

The evolution of Bitcoin's programmability is the outcome of the Bitcoin community's marketplace of ideas. Today, enhancing Bitcoin's programmability to increase the utilization of mainnet block space has become the community's new consensus design space. This is where the Bitroot solution comes in.

**Bitroot: a smart contract and programmable native asset protocol on Bitcoin**

Growing the Bitcoin ecosystem faces two main difficulties: the network's low scalability (better scaling solutions are needed) and the scarcity of applications in the ecosystem (a breakout app is needed). Around these two difficulties, current ecosystem building focuses mainly on programmable BTC, and Bitroot proposes a set of solutions to address them.

Bitcoin has historically been classed as a store of value because it lacked programmability, a consequence of its non-Turing-complete scripting language and the limits the core development team placed on the types of operations it can execute. By 2024, as the technology developed, the Bitroot protocol proposed an extension scheme for the Bitcoin protocol aimed at enabling more functionality on top of Bitcoin's base architecture, smart contract functionality in particular.
Through this protocol extension, the Bitroot protocol will enable more complex operations and applications on the Bitcoin blockchain, turning Bitcoin from a peer-to-peer electronic cash system into a more comprehensive distributed computing platform. It preserves Bitcoin's core properties, such as decentralization and security, while introducing native smart contracts and other advanced features.

**The Bitroot protocol**

The Bitroot protocol is an asset issuance protocol for native Bitcoin built around the Bitroot VM. It provides programmability for Bitcoin native assets, including minting, issuance, transfer, and management.

The Bitroot protocol achieves the major feat of bringing smart contracts to Bitcoin, with its core implementation relying on the OP_CAT opcode. To put it concretely, the Bitroot protocol offers a one-stop Bitcoin service center. Through a convenient, easy-to-use UI, it ships with a built-in DEX on the Bitcoin network, so users can issue, manage, and trade Bitroot assets all on a single site. As the protocol is extended, it will support other mainstream Bitcoin native assets and more lightweight dapps.

**Ambassador testnet feedback**

As one of the first people to get access to the Bitroot testnet, starting almost at the same time as the developers, my experience over the past while has been very good. Here is my feedback on each aspect.

Asset issuance experience: Bitroot does this very well. I minted inscriptions very early last year; at a similarly early protocol stage, Ordinals required users to have some computing background, run a full Bitcoin node, and drive the wallet from the command line, which was very cumbersome. I dug up an early Ordinals tutorial for comparison.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bkfcvkvp3hju45o95z2l.png)

Because of how hard it was to get started, inscriptions languished in obscurity for half a year after launch. Only when ORDI listed on Binance did the whole track explode.

Using the Bitroot protocol, by contrast, was a revelation. The entire issuance flow requires no code and no Bitcoin full node. You just create a wallet, fund the address with gas, and fill in the token options to complete issuance. Without exaggeration, I believe the vast majority of people could figure out how to issue an asset even without a guide.

Below I walk through Bitroot's issuance flow to show how convenient it is.

**Asset issuance**

Two tokens are needed: native-chain BTC and BRT. BTC covers on-chain gas; BRT is the consumed asset. Each asset issuance burns some BRT, which also contributes significantly to BRT's deflation.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gzija8qnh7yuwlys5zdh.png)

After selecting asset issuance, we see three kinds of assets can currently be created; choose whichever suits your needs:

1. An asset with an alphabetic name
2. A sub-asset of an already created asset
3. A free numeric-name asset

For my own purposes I spent 0.5 BRT to create an alphabetic asset named "BITROOT", added a description, and set the supply (100 million).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekiyvkqaw3o612o21m8a.png)

We can then look the transaction up in the order history (steps 1 and 2).

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/mrfl3v3adsidkq1vn8dm.png)

I also checked a BTC explorer (step 3) and found the transaction had been processed by the BTC chain and succeeded, which shows that Bitroot has achieved one-click asset issuance on the BTC chain.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4fx1dvemrr67jv1ebvrx.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2jkfyogltc8e8vzhf12c.png)

Back on the home page, the freshly issued "BITROOT" asset is visible, along with the total supply of 100 million we just set.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/76pvlwdwxf2img0fcwgo.png)
That completes asset issuance on the BTC chain. As for the other two asset types, from talking with a few developers I learned they exist to support the different asset classes needed for later running smart contracts on Bitcoin. In some scenarios a token is not meant to exist forever. For example, when an asset serves as a voucher, it should be destroyed once the business flow completes, to avoid wasting Bitcoin network space. Such use cases don't need a named asset at all.

From its inception, the Bitroot protocol was designed with future extension by community developers in mind, which is rare foresight. Other Bitcoin native asset protocols barely consider future extensibility, which is why they have hardly changed since release.

Asset management experience: next I tested another major Bitroot feature, asset management. In Bitroot's asset management system you can manage BTC and all your various tokens in one place. No more downloading countless wallet apps or endlessly switching between web pages: viewing, sending, and receiving all your assets happens in a single interface, which improves the user experience and removes a lot of friction. And under Bitroot's account system, Bitcoin's trustless, tamper-proof security properties are fully inherited.

Below I demonstrate asset transfers, token info lookup, additional issuance, issuance locking, and other distinctive features.

**Asset transfer**

On the home page, select the token you want to send, click the arrow to expand more actions, and choose Send.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2sml7hwehlimcjzgls9y.png)

Enter the destination address and the amount, then click "Send" at the bottom right.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fydyp9yjzuoelmpcwvu6.png)

You will then see the asset appear in the target account, pending on-chain confirmation. The number in parentheses is the amount awaiting confirmation.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0r5jqphbfm8w4afsw0s0.png)

**Viewing asset info**

Click the arrow next to the token to expand more actions, then click Show info.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p4gb52b46r9r1aa9ln46.png)

On the info page you can clearly see the token's data: the creator's address, the description, the total supply, whether additional issuance is allowed, whether splitting is allowed, and the token operations listed at the bottom. Everything is clearly recorded so users can look up the token's details.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6lh6cqfr0b7e0pdig0la.png)

**Additional issuance**

Click the arrow next to the asset you want to operate on, expand more actions, and click Issue additional.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bzjb71w288cxik00dzqj.png)

Enter the amount to issue (in this case another 100 million tokens), then click Issue additional at the bottom right.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sjtej3k3pp2cnv32u1ju.png)

Checking the token info afterwards, the total supply is now 200 million, and the token details below show that 100 million tokens have been issued.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ru1ygvg14rvg2bs4tcz4.png)

By now many readers will be wondering whether this kind of additional issuance can be controlled. Of course it can; the Bitroot team thought of this, so next I tested locking issuance.

**Locking issuance**

Click the arrow on the right of the target asset, expand more actions, and click Lock token issuance.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5uudczulb6u2kunu34z4.png)

Click Lock token to confirm. Once locking completes, no more tokens can be issued. Use this feature with particular care: it is irreversible.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hjama6pv9c7gyw76fnjc.png)

After locking, a small padlock appears next to the token icon on the home page. You can also find the lock status and details in the token info.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/d799v30hvmhj8uztcatq.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/35kzm6fua0ikt39gc2a5.png)

Next I tested another Bitroot innovation that I find quite interesting: asset ownership transfer. In project registration, token names are scarce, and a good name brings a project more exposure and promotion; a good project IP has value of its own. To serve users better, the Bitroot team designed the asset ownership transfer feature.

**Asset ownership transfer**

Ownership is color-coded in the product:

Pale yellow: you hold ownership.
Pale blue: you do not hold ownership, only the token asset.

Open the asset's more actions and click transfer ownership.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ftt6y2jayxfx5xca921q.png)

Enter the counterparty's Bitroot wallet address and click transfer at the bottom right.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/znsu4crk5rrqc4rsp2ow.png)

Back on the home page you will see that the token's ownership has been transferred. You can also verify it in the token info.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/twtui5729lmg0b1bqab8.png)

Asset trading: trading has always been a pain point in the crypto world. To convert between different assets we have had to rely on centralized exchanges, taking on security and privacy risks. With Bitroot, you can freely trade BTC and various token assets in a fully decentralized environment, completing on-chain swaps without registering with or trusting any centralized institution.

Placing a buy order: on the market page there are currently three trading pairs. You can pick any of them to trade, or search for your own token in the search bar to buy and sell.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fjs4qa1fcwdv7mrvpjef.png)

Below we demonstrate a trade using the BITROOT/BRT pair.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2f8eak5ezgbtz5dojhi4.png)

Enter the BITROOT unit price and amount, then select Buy to place the order.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b60dieaszrs13pr1h3hn.png)

Wait for Bitcoin block confirmation and the order is placed.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4v9c2z6c06dsz8mdjeq.png)

Placing a sell order: enter the BITROOT unit price and amount, then select Sell to place the order.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6qsau8inebslsxpl6xop.png)

Back in the market, your trade history shows the order has been filled.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bgl7vx0d85ebi2wnfkgc.png)
DEXs of the past were often fiddly and had a high barrier to entry, which deterred adoption. On Bitroot, a few clicks complete any trade you want.

Product features: on features, I think the Bitroot team was actually too modest in its earlier marketing. In the old inscription days, once an inscription was minted that was it; no further operations were possible, or you had to depend on third-party tools for other interactions. In the Bitroot protocol, the address that creates an asset has substantial control over it: it can issue more tokens, lock issuance, or create sub-tokens, enabling multi-token economic models within the Bitcoin ecosystem.

And even though the protocol grants broad permissions by default, Bitroot also provides ways to constrain them. By creating multisig wallets, users who need it can build commercial projects end to end in one place.

Compare early Ordinals again: it had none of this, and even now in 2024 Ordinals remains functionally incomplete, or requires third-party centralized tools to fill the gaps.

So on product features I would say Bitroot is a late starter that has pulled ahead, leading the other Bitcoin native asset protocols on the market by at least two dimensions.

Future foundations: finally, Bitroot is still only a testnet and inevitably has some bugs. I reported several issues during my testing, and the Bitroot team responded very promptly, fixing most of them quickly. The core features were very smooth to use, with no problems.

The only blemish is that Bitcoin testnet coins get consumed quickly and the official faucet does not give out much. So I bought a batch of testnet BTC to share around in the ambassador group later; testnet coins are worthless, but they save everyone a lot of hassle.

**Summary**

To wrap up the test report: I came away deeply convinced of the Bitroot protocol's advantages. First is the convenience of issuance; filling in a form is almost all it takes to issue a Bitcoin native asset. Such a low barrier to issuance is bound to attract many developers and project teams to build out the ecosystem.

Ordinals took six months from launch to breakout; with the development barriers Bitroot has lowered, I believe Bitroot's breakout will take only three.

Although this test only opened the Bitroot protocol itself, it gave me a glimpse of the team's larger ambitions.

Here is how I understand Bitroot's future roadmap:

Use the Bitroot protocol to lower the barrier to issuing Bitcoin native assets and attract more project teams and developers. Within the protocol, fine-grained asset classification and optimized use of Bitcoin block space lower the difficulty for developers. At this stage, Bitroot's core goal is lowering barriers: the barrier to asset issuance, and the barrier for developers.

With assets in place, ecosystem building comes next, which is where smart contracts enter. This testnet already offers a DEX built on Bitcoin; a small step for ecosystem products, but a big step toward smart contracts on Bitcoin.

Once Bitroot Layer2 opens for testing, a wave of dapps will pour in. Dapps that could not run because of Bitcoin's limited performance will run on Bitroot Layer2, and the Bitroot cross-chain bridge will connect cross-chain liquidity. Bitcoin and its native assets will then be fully activated, and their market cap will quickly catch up with Bitcoin's own.

That, as I understand it, is what Bitroot is setting out to do. The Bitroot ambassador test should open soon, and I hope everyone pays attention to Bitroot and to the Bitcoin native asset track. It is one of the few Web3 opportunities in recent years that you can still join at the very earliest stage.
crypto_senior
1,906,297
Unlocking Python: Essential Insights and Learning for JavaScript Developers
JavaScript was the first programming language I learned as a developer, but I chose to learn Python...
0
2024-06-30T03:53:09
https://dev.to/bryan_manuel_ramos/unlocking-python-essential-insights-and-learning-for-javascript-developers-20k6
**`JavaScript`** was the first programming language I learned as a developer, but I chose to learn **`Python`** soon after for several reasons. In this blog, I will discuss the advantages of picking up **`Python`** and my experience learning the language, while also comparing its basic syntax to **`JavaScript's`**. Be sure to read until the end, where I'll be sharing my tips and tricks for **`JavaScript`** developers who are interested in learning **`Python`**.

## Why Python?

**Versatility:** **`Python`** is a very versatile language, and by learning it you will open the doors to many domains that may seem out of reach for you. With the rapid growth of machine learning and big data, **`Python`** has become an indispensable tool in the tech industry. It also has an extensive ecosystem of libraries and frameworks, which makes it an ideal choice for these advanced applications.

**Ease of Learning:** **`Python`** is known for being very easy to learn compared to other programming languages, especially for those with prior experience. This can be attributed to its simple syntax and readability. In the realm of web development, using **`Python`** in conjunction with **`JavaScript`** can make for a very powerful application, so the fact that you can learn it fairly quickly and start utilizing it in your own projects is very advantageous.

**Real-World Application:** In a real-world job setting, you may be required to work with multiple technologies and languages simultaneously. By being comfortable with learning new technologies and adopting the skill of self-learning, you will increase your skill as a programmer and also make yourself a valuable asset to potential employers in the tech industry.

## Essential Python Syntax Compared To JavaScript

- `Variable Declaration`: **`Python`** needs no declaration keyword (you simply write `x = 5`), whereas **`JavaScript`** uses `var`, `let`, or `const`.
**JavaScript**
```js
// Variables
let x = 5
const y = 'John'
console.log(x)
console.log(y)
```
**Python**
```python
# Variables
x = 5
y = "John"
print(x)
print(y)
```

- `Code Blocks and Indentation`: **`Python`** uses indentation to define code blocks, while **`JavaScript`** uses curly braces.

**JavaScript**
```js
if (true) {
  console.log("This is true")
} else {
  console.log("This is false")
}
```
**Python**
```python
if True:
    print("This is true")
else:
    print("This is false")
```

- `Conditionals`: **`Python`** uses `elif`, in contrast to **`JavaScript's`** `else if`

**Python**
```python
age = 18
if age < 18:
    print("Minor")
elif age == 18:
    print("Just an adult")
else:
    print("Adult")
```
**JavaScript**
```js
let age = 18
if (age < 18) {
  console.log("Minor")
} else if (age === 18) {
  console.log("Just an adult")
} else {
  console.log("Adult")
}
```

- `Iteration`: **`JavaScript`** vs. **`Python`**

**`JavaScript`** uses `for` and `while` loops, `for...of`, and array methods like `forEach`, `map`, and `filter` for iteration. **`Python`** primarily uses `for` and `while` loops. The **`Python`** `range` function is commonly used in `for` loops to generate sequences of numbers (the parallel of a `for (let i = 0; ...)` loop in **`JavaScript`**).

Using a for loop with range:
**Python**
```python
numbers = [1, 2, 3, 4]
for i in range(len(numbers)):
    print(numbers[i])
```
Using a for loop directly on the list:
**Python**
```python
numbers = [1, 2, 3, 4]
for number in numbers:
    print(number)
```

- `Function definition and arrow functions vs. lambda functions`: **`Python`** uses `def`, while **`JavaScript`** uses `function` or arrow functions. **`Python`** lambda functions are used for simple, single-expression anonymous functions, while **`JavaScript`** arrow functions offer more flexibility and maintain `this` context for complex operations.
**JavaScript**
```javascript
// Function declaration
function add(a, b) {
  return a + b
}
console.log(add(1, 2))

// Arrow function
const addArrow = (a, b) => a + b
console.log(addArrow(1, 2))
```
**Python**
```python
# Function declaration
def add(a, b):
    return a + b

print(add(1, 2))

# Lambda function
addLambda = lambda a, b: a + b
print(addLambda(1, 2))
```

- `Data structures`: Lists and Arrays: **`Python`** has lists that are similar to JavaScript arrays but more versatile. Dictionaries and Objects: **`Python`** uses dictionaries for key-value pairs, similar to JavaScript objects.

**JavaScript**
```javascript
// Creating an array
const fruits = ["apple", "banana", "cherry"];

// Accessing elements
console.log(fruits[0]);

// Adding an element
fruits.push("orange");
console.log(fruits);

// Creating an object
const person = {
  name: "John",
  age: 30,
  city: "New York"
};

// Accessing values
console.log(person.name);

// Adding a new key-value pair
person.email = "john@example.com";
console.log(person);
```
**Python**
```python
# Creating a list
fruits = ["apple", "banana", "cherry"]

# Accessing elements
print(fruits[0])

# Adding an element
fruits.append("orange")
print(fruits)

# Creating a dictionary
person = {
    "name": "John",
    "age": 30,
    "city": "New York"
}

# Accessing values
print(person["name"])

# Adding a new key-value pair
person["email"] = "john@example.com"
print(person)
```

## My Tips for Learning Python as a JavaScript Developer

The biggest advice I would give to those learning **`Python`** as a **`JavaScript`** developer is to start by familiarizing yourself with **`Python's`** syntax, which is quite different from **`JavaScript's`**. Once you have a basic understanding of the syntax, dive into code challenges to reinforce your learning. You can find some on [Codewars](https://www.codewars.com/) or [LeetCode](https://leetcode.com/). I chose to redo code problems I was already comfortable with in **`JavaScript`**, in **`Python`**.
It allowed me to learn through practical application, and made it easy for me to apply **`Python`** methods and best practices hands-on. Additionally, project-based learning is invaluable; by working on projects, you'll encounter real-world problems and solutions, solidifying your knowledge and giving you a deeper understanding of how to use **`Python`** effectively in different contexts. When I was first learning **`Python`**, I chose to implement it in a full-stack project to facilitate data analysis and data cleaning. By doing so, I ended up not only getting more comfortable with the language, but also more comfortable with back-end development. This approach of learning syntax, tackling code challenges, and engaging in project-based learning ensures a comprehensive and practical mastery of **`Python`**.

## Resources for Learning Python

- [Automate the Boring Stuff with Python](https://automatetheboringstuff.com)
- [Learn Python 3 Course](https://www.codecademy.com/courses/learn-python-3)
- [Official Python Documentation](https://python.org)
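One more comparison worth knowing, as an addition beyond the syntax sections earlier in the post: where JavaScript developers reach for `filter` and `map`, idiomatic Python usually uses a list comprehension. A minimal sketch:

```python
numbers = [1, 2, 3, 4, 5]

# JavaScript: numbers.filter(n => n % 2 === 0).map(n => n * 10)
# Python: one list comprehension does both the filter and the map
evens_times_ten = [n * 10 for n in numbers if n % 2 == 0]
print(evens_times_ten)  # [20, 40]
```

Comprehensions also exist for dictionaries and sets, which pairs nicely with the dictionaries-vs-objects comparison above.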
bryan_manuel_ramos
1,906,294
Create and access a tensor in PyTorch
*Memos: My post explains how to check PyTorch version, CPU and GPU(CUDA). My post explains how to...
0
2024-06-30T03:51:13
https://dev.to/hyperkai/create-and-access-a-tensor-in-pytorch-1047
pytorch, create, access, tensor
*Memos: - [My post](https://dev.to/hyperkai/check-pytorch-version-cpu-and-gpucuda-in-pytorch-6jk) explains how to check PyTorch version, CPU and GPU(CUDA). - [My post](https://dev.to/hyperkai/access-a-tensor-in-pytorch-1f4e) explains how to access a tensor. - [My post](https://dev.to/hyperkai/istensor-numel-and-device-in-pytorch-2eha) explains [is_tensor()](https://pytorch.org/docs/stable/generated/torch.is_tensor.html), [numel()](https://pytorch.org/docs/stable/generated/torch.numel.html) and [device()](https://pytorch.org/docs/stable/tensor_attributes.html#torch-device). - [My post](https://dev.to/hyperkai/type-conversion-with-type-to-and-a-tensor-in-pytorch-2a0g) explains type conversion with [type()](https://pytorch.org/docs/stable/generated/torch.Tensor.type.html), [to()](https://pytorch.org/docs/stable/generated/torch.Tensor.to.html) and a tensor. - [My post](https://dev.to/hyperkai/type-promotion-resulttype-promotetypes-and-cancast-in-pytorch-33p8) explains type promotion, [result_type()](https://pytorch.org/docs/stable/generated/torch.result_type.html), [promote_types()](https://pytorch.org/docs/stable/generated/torch.promote_types.html) and [can_cast()](https://pytorch.org/docs/stable/generated/torch.can_cast.html). - [My post](https://dev.to/hyperkai/device-conversion-fromnumpy-and-numpy-in-pytorch-1iih) explains device conversion, [from_numpy()](https://pytorch.org/docs/stable/generated/torch.from_numpy.html) and [numpy()](https://pytorch.org/docs/stable/generated/torch.Tensor.numpy.html). - [My post](https://dev.to/hyperkai/setdefaultdtype-setdefaultdevice-and-setprintoptions-in-pytorch-55g8) explains [set_default_dtype()](https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html), [set_default_device()](https://pytorch.org/docs/stable/generated/torch.set_default_device.html) and [set_printoptions()](https://pytorch.org/docs/stable/generated/torch.set_printoptions.html). 
- [My post](https://dev.to/hyperkai/manualseed-initialseed-and-seed-in-pytorch-5gm8) explains [manual_seed()](https://pytorch.org/docs/stable/generated/torch.manual_seed.html), [initial_seed()](https://pytorch.org/docs/stable/generated/torch.initial_seed.html) and [seed()](https://pytorch.org/docs/stable/generated/torch.seed.html).

[tensor()](https://pytorch.org/docs/stable/generated/torch.tensor.html) can create a 0D or higher-dimensional tensor of zero or more elements as shown below:

*Memos:
- `tensor()` can be used with [torch](https://pytorch.org/docs/stable/torch.html) but not with a tensor.
- The 1st argument with `torch` is `data`(Required-Type:`int`, `float`, `complex` or `bool` or `tuple` of `int`, `float`, `complex` or `bool` or `list` of `int`, `float`, `complex` or `bool`). *The default floating-point type is `float32`.
- There is a `dtype` argument with `torch`(Optional-Type:[dtype](https://pytorch.org/docs/stable/tensor_attributes.html#torch.dtype)): *Memos:
  - If `dtype` is not given, `dtype` is inferred from `data`, or the `dtype` of [set_default_dtype()](https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html) is used for floating-point numbers.
  - `dtype=` must be used.
  - [My post](https://dev.to/hyperkai/set-dtype-with-dtype-argument-functions-and-get-it-in-pytorch-13h2) explains the `dtype` argument.
- There is a `device` argument with `torch`(Optional-Type:`str`, `int` or [device()](https://pytorch.org/docs/stable/tensor_attributes.html#torch.device)): *Memos:
  - `device=` must be used.
  - [My post](https://dev.to/hyperkai/set-device-with-device-argument-functions-and-get-it-in-pytorch-1o2p) explains the `device` argument.
- There is a `requires_grad` argument with `torch`(Optional-Type:`bool`): *Memos:
  - `requires_grad=` must be used.
  - [My post](https://dev.to/hyperkai/set-requiresgrad-with-requiresgrad-argument-functions-and-get-it-in-pytorch-39c3) explains the `requires_grad` argument.
- Floating-point and complex numbers in a tensor are displayed rounded to 4 decimal places by default.

```python
import torch

""" 0D tensor """
my_tensor = torch.tensor(data=-3)
my_tensor
# tensor(-3)

""" 1D tensor """
torch.tensor(data=[3, 7, -5])
# tensor([3, 7, -5])

torch.tensor(data=[3.635251, 7.270649, -5.164872])
# tensor([3.6353, 7.2706, -5.1649])

torch.tensor(data=[3.635251+4.634852, 7.27+2.586449j, -5.164872-3.45])
# tensor([ 8.2701+0.0000j, 7.2700+2.5864j, -8.6149+0.0000j])

torch.tensor(data=[True, False, True])
# tensor([True, False, True])

""" 2D tensor """
torch.tensor(data=[[3, 7, -5], [-9, 6, 2]])
# tensor([[3, 7, -5], [-9, 6, 2]])

""" 3D tensor """
torch.tensor(data=[[[3, 7, -5], [-9, 6, 2]],
                   [[8, 0, -1], [4, 9, -6]]])
# tensor([[[3, 7, -5], [-9, 6, 2]],
#         [[8, 0, -1], [4, 9, -6]]])
```

In addition, [Tensor()](https://pytorch.org/docs/stable/tensors.html) can create a 1D or higher-dimensional tensor of zero or more floating-point numbers as shown below:

*Memos:
- `Tensor()` can be used with `torch` but not with a tensor.
- The 1st argument with `torch` is `data`(Required-Type:`tuple` of `int`, `float` or `bool` or `list` of `int`, `float` or `bool`).
- Floating-point and complex numbers in a tensor are displayed rounded to 4 decimal places by default.
```python
import torch

torch.Tensor(data=[3., 7., -5.]) # 1D tensor
# tensor([3., 7., -5.])

torch.Tensor(data=[[3., 7., -5.], [-9., 6., 2.]]) # 2D tensor
# tensor([[3., 7., -5.], [-9., 6., 2.]])

torch.Tensor(data=[[[-3., 7., -5.], [-9., 6., 2.]], # 3D tensor
                   [[8., 0., -1.], [4., 9., -6.]]])
# tensor([[[-3., 7., -5.], [-9., 6., 2.]],
#         [[8., 0., -1.], [4., 9., -6.]]])

torch.Tensor(data=[[[-3., 7., -5.], [-9., 6., 2.]], # 3D tensor
                   [[8., 0., -1], [4., 9., -6.]]])
# tensor([[[-3., 7., -5.], [-9., 6., 2.]],
#         [[8., 0., -1.], [4., 9., -6.]]])

torch.Tensor(data=[[[-3, 7, -5], [-9, 6, 2]], # 3D tensor
                   [[8, 0, -1], [4, 9, -6]]])
# tensor([[[-3., 7., -5.], [-9., 6., 2.]],
#         [[8., 0., -1.], [4., 9., -6.]]])

torch.Tensor(data=[[[True, False, True], [True, False, True]], # 3D tensor
                   [[False, True, False], [False, True, False]]])
# tensor([[[1., 0., 1.], [1., 0., 1.]],
#         [[0., 1., 0.], [0., 1., 0.]]])
```

You can access a 0D or higher-dimensional tensor in the ways shown below. *I show many more ways to access a 1D tensor than a 0D, 2D or 3D tensor:

```python
import torch

my_tensor = torch.tensor(3) # 0D tensor
my_tensor
# tensor(3)

my_tensor = torch.tensor([3]) # 1D tensor
my_tensor
# tensor([3])

my_tensor = torch.tensor([3, 7, -5, -9, 6, 2, 8, 0, -1, 4, 9, -6]) # 1D tensor

my_tensor[4]
my_tensor[4,]
my_tensor[-8]
my_tensor[-8,]
# tensor(6)

my_tensor[4:5]
my_tensor[4:5,]
my_tensor[-8:5]
my_tensor[-8:5,]
my_tensor[4:-7]
my_tensor[4:-7,]
my_tensor[-8:-7]
my_tensor[-8:-7,]
# tensor([6])

my_tensor[4:8]
my_tensor[4:8,]
my_tensor[-8:8]
my_tensor[-8:8,]
my_tensor[4:-4]
my_tensor[4:-4,]
my_tensor[-8:-4]
my_tensor[-8:-4,]
# tensor([6, 2, 8, 0])

my_tensor[:7]
my_tensor[:7,]
my_tensor[:-5]
my_tensor[:-5,]
my_tensor[0:7]
my_tensor[0:7,]
my_tensor[-12:7]
my_tensor[-12:7,]
my_tensor[0:-5]
my_tensor[0:-5,]
my_tensor[-12:-5]
my_tensor[-12:-5,]
# tensor([3, 7, -5, -9, 6, 2, 8])

my_tensor[5:]
my_tensor[5:,]
my_tensor[-7:]
my_tensor[-7:,]
my_tensor[5:12]
my_tensor[5:12,]
my_tensor[-7:12]
my_tensor[-7:12,]
# tensor([2, 8, 0, -1, 4, 9, -6])

my_tensor[:]
my_tensor[:,]
my_tensor[0:12]
my_tensor[0:12,]
# tensor([3, 7, -5, -9, 6, 2, 8, 0, -1, 4, 9, -6])

my_tensor = torch.tensor([[3, 7, -5, -9, 6, 2],
                          [8, 0, -1, 4, 9, -6]]) # 2D tensor

my_tensor[1]
my_tensor[:][1]
my_tensor[1, :]
# tensor([8, 0, -1, 4, 9, -6])

my_tensor[1][3]
my_tensor[1, 3]
# tensor(4)

my_tensor[1][:4]
my_tensor[1, :4]
# tensor([8, 0, -1, 4])

my_tensor[1][2:]
my_tensor[1, 2:]
# tensor([-1, 4, 9, -6])

my_tensor[:, 3]
# tensor([-9, 4])

my_tensor[:]
# tensor([[3, 7, -5, -9, 6, 2],
#         [8, 0, -1, 4, 9, -6]])

my_tensor = torch.tensor([[[-3, 7, -5], [-9, 6, 2]],
                          [[8, 0, -1], [4, 9, -6]]]) # 3D tensor

my_tensor[1]
my_tensor[:][1]
my_tensor[1, :]
my_tensor[1][:2]
my_tensor[1, :2]
my_tensor[1][0:]
my_tensor[1, 0:]
# tensor([[8, 0, -1], [4, 9, -6]])

my_tensor[1][0]
# tensor([8, 0, -1])

my_tensor[1][0][2]
my_tensor[1, 0, 2]
# tensor(-1)

my_tensor[1][0][:2]
my_tensor[1, 0, :2]
# tensor([8, 0])

my_tensor[1][0][1:]
my_tensor[1, 0, 1:]
# tensor([0, -1])

my_tensor[:, :, 1]
# tensor([[7, 6], [0, 9]])

my_tensor[:]
# tensor([[[-3, 7, -5], [-9, 6, 2]],
#         [[8, 0, -1], [4, 9, -6]]])
```
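As a small illustration of the `dtype`, `device` and `requires_grad` arguments described in the memos above (CPU-only, so it runs without CUDA):

```python
import torch

# dtype overrides the inferred float32; device and requires_grad
# are set explicitly instead of being left at their defaults.
my_tensor = torch.tensor(data=[1.5, 2.5],
                         dtype=torch.float64,
                         device='cpu',
                         requires_grad=True)
print(my_tensor.dtype)          # torch.float64
print(my_tensor.device)         # cpu
print(my_tensor.requires_grad)  # True
```

Note that all three arguments must be passed by keyword, as the memos state.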
hyperkai
1,903,670
GitHub to Artifact Registry & Docker Hub via Cloud Build
This post describes automating Cloud Build triggered from a personal GitHub source repository using...
0
2024-06-30T03:38:16
https://dev.to/dchaley/github-to-artifact-registry-docker-hub-via-cloud-build-16d1
docker, github, cicd, cloud
This post describes automating Cloud Build triggered from a personal GitHub source repository using GitHub Actions, and pushing the resulting container to both Artifact Registry (for running in our GCP project) and Docker Hub (for other folks to access). --- I've been putting off container build automation for a while. When we dipped our toes in [last time](https://github.com/dchaley/deepcell-imaging/issues/206) we learned that Cloud Build can't connect to a personal repo via a service account. (You need to connect as the repo owner.) And I didn't want to put my personal GitHub token (and everything it can access) into the project. Instead we added [cloud-build.sh](https://github.com/dchaley/deepcell-imaging/blob/436d0e9febe4a5eb783fa8475d17783526ce6dcf/container/cloud-build.sh), a dead-simple script to submit the build job. ```sh #!/bin/sh gcloud builds submit --region=us-central1 --tag us-central1-docker.pkg.dev/deepcell-on-batch/deepcell-benchmarking-us-central1/benchmarking:gce ``` Then, build "automation" meant me running that script from my laptop after merging code. Honestly: this was pretty good. As long as I'm the one merging PRs, it's quick to submit builds after merge. And I _almost_ always remember 😅 But as we get closer to production this is becoming a hassle, and we need the container in GCP Artifact Registry _and_ public Docker Hub. The client env can access public hub containers, but not our Google project. So I set out to [automate the build](https://github.com/dchaley/deepcell-imaging/issues/254) and push to GCP and Docker Hub. Since I couldn't connect Cloud Build, I wondered how hard it would be to do everything on GitHub actions. I wanted the build in one spot if possible. 
This post was just what I wanted: https://medium.com/@sbkapelner/building-and-pushing-to-artifact-registry-with-github-actions-7027b3e443c1 I created a service account in the Google project; gave it the appropriate permissions: - Artifact Registry Writer - Cloud Build Service Account - Service Account Token Creator I discovered I needed these later: - Log viewer - Viewer (aka project viewer) I also needed to grab the service account JSON and put it in the repo secrets. The actions config was pretty simple: ```yaml name: Build and Push to Artifact Registry on: push: branches: ["main"] env: PROJECT_ID: deepcell-on-batch REGION: us-central1 GAR_LOCATION: us-central1-docker.pkg.dev/deepcell-on-batch/deepcell-benchmarking-us-central1/benchmarking jobs: build-push-artifact: runs-on: ubuntu-latest steps: - name: "Checkout" uses: "actions/checkout@v4" - id: "auth" uses: "google-github-actions/auth@v2" with: credentials_json: "${{ secrets.GCP_SERVICE_ACCOUNT_KEY }}" - name: "Set up Cloud SDK" uses: "google-github-actions/setup-gcloud@v2" - name: "Use gcloud CLI" run: "gcloud info" - name: "Docker auth" run: |- gcloud auth configure-docker ${{ env.REGION }}-docker.pkg.dev --quiet - name: Build image run: docker build . --file Dockerfile --tag ${{ env.GAR_LOCATION }} working-directory: container - name: Push image run: docker push ${{ env.GAR_LOCATION }} ``` But, alas, no luck. The GitHub build failed with this mysterious message: ``` #12 26.21 ERROR: THESE PACKAGES DO NOT MATCH THE HASHES FROM THE REQUIREMENTS FILE. If you have updated the package versions, please update the hashes. Otherwise, examine the package contents carefully; someone may have tampered with them. 
#12 26.21 unknown package: #12 26.21 Expected sha256 2507549cb11e34a79f17ec8bb847d80afc9e71f8d8dffbe4c60baceacbeb787f #12 26.21 Got bdb08e5ff6de052aed79acf4d6e16c5e09bac12559c6c234511b6048e2d08342 #12 26.21 #12 ERROR: process "/bin/bash -c pip install --user --upgrade -r requirements.txt" did not complete successfully: exit code: 1 ``` An unknown package has an unexpected SHA – cool 😐 I really didn't want to fiddle around with the build, and I knew Cloud Build works. So I decided to convert the GitHub action to a Cloud Build trigger. Because I was no longer building & pushing to just Google Artifact Registry, I needed a Cloud Build config file. I also needed a Docker Hub service account. ```yaml steps: - name: 'gcr.io/cloud-builders/docker' entrypoint: 'bash' args: [ '-c', 'echo "$$PASSWORD" | docker login --username=$$USERNAME --password-stdin' ] secretEnv: [ 'USERNAME', 'PASSWORD' ] - name: 'gcr.io/cloud-builders/docker' args: [ 'build', '-t', 'us-central1-docker.pkg.dev/deepcell-on-batch/deepcell-benchmarking-us-central1/benchmarking:batch', '-t', 'dchaley/deepcell-imaging:batch', '.', ] - name: 'gcr.io/cloud-builders/docker' args: [ 'push', 'us-central1-docker.pkg.dev/deepcell-on-batch/deepcell-benchmarking-us-central1/benchmarking:batch' ] - name: 'gcr.io/cloud-builders/docker' args: [ 'push', 'dchaley/deepcell-imaging:batch' ] availableSecrets: secretManager: - versionName: projects/deepcell-on-batch/secrets/dockerhub-password/versions/1 env: 'PASSWORD' - versionName: projects/deepcell-on-batch/secrets/dockerhub-username/versions/2 env: 'USERNAME' ``` At that point, I just needed to change the build & push image steps to use Cloud Build. This is where we need the log viewer & project viewer permissions: for the `gcloud` CLI to pull in the logs during build. ```yaml - name: Trigger cloud build run: "gcloud builds submit --region=us-central1 --config=build-batch-container.yaml ." 
working-directory: container ``` Et voilà, the container builds on each commit to `main`, and gets pushed to both Artifact Registry & public Docker Hub. 🎉 ![Screenshot of the GitHub actions showing a completed build taking ~12.5min](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j0jka7kbfvpqz0cw0qp0.png) This experience made clear to me that it's really better to have repos in an organization. That removes the need for GitHub Actions entirely as the infra can just connect to the source. I'm still curious what the unexpected SHA in the unexpected package was about though 🤔
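For completeness, the `availableSecrets` block above assumes the two Docker Hub secrets already exist in Secret Manager and that Cloud Build can read them. A sketch of that one-time setup, with the secret names matching the config and the credential values and project number as placeholders:

```sh
# Create the Docker Hub credentials referenced by availableSecrets
echo -n "DOCKERHUB_USERNAME" | gcloud secrets create dockerhub-username --data-file=-
echo -n "DOCKERHUB_PASSWORD" | gcloud secrets create dockerhub-password --data-file=-

# Grant the Cloud Build service account read access
# (repeat for dockerhub-username)
gcloud secrets add-iam-policy-binding dockerhub-password \
  --member="serviceAccount:PROJECT_NUMBER@cloudbuild.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```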
dchaley
1,906,293
Typescript over JavaScript
Typescript is new javascript with some advanced and useful features added to javascript. Typescript,...
0
2024-06-30T03:36:55
https://dev.to/tofail/typescript-over-javascript-3oji
typescript, javascript, webdev
TypeScript is JavaScript with some advanced and useful features added on top. As a superset of JavaScript, it is becoming the go-to language in web development for large applications. Its ability to detect bugs as you type the code makes it stand out; surveys suggest that around 60% of JavaScript developers have started using TypeScript. Providing syntax for types is TypeScript's key feature, as its very name implies. Even tech giants like Microsoft have migrated to TypeScript for its functionality and features. Communication tools like Slack and Asana use TypeScript in their implementations, and developers working with Angular, React, or Vue use it as well. TypeScript also improves productivity through better collaboration. With so much going for TypeScript, it is worth asking why we should pick it over JavaScript.

**Points to be noted:**

1. A TypeScript file is saved with a `.ts` extension.
2. A JavaScript file saved with a `.ts` extension works perfectly as a TypeScript file.
3. TypeScript is often preferred as a server-side programming language.
4. TypeScript is preferable for building large-scale applications.
5. It is statically typed: type errors are caught at compile time rather than at runtime, which makes error detection easy.

Why should we pick TypeScript over JavaScript? Let's discuss some points.

1) **Best for creating large-scale applications:** TypeScript is considered one of the best programming languages for developing large-scale or enterprise-level applications. Thanks to its type feature, code is easier for developers to read and understand, which is not always the case with JavaScript. Its ability to detect bugs while typing also saves time. For large projects, TypeScript helps by catching mistakes with its type system, which ultimately makes projects easier to develop and maintain with fewer bugs.
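As a small sketch of the bug-catching described above (the function name and values here are invented for illustration):

```typescript
// The parameter and return types document intent and let the
// compiler reject a call like totalPrice("20", 3) before the
// code ever runs.
function totalPrice(unitPrice: number, quantity: number): number {
  return unitPrice * quantity;
}

const total: number = totalPrice(20, 3);
console.log(total); // 60
```

In plain JavaScript the mistyped call would only surface at runtime, typically as a silently wrong result.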
2) **Classes behave the same as in C++ and Python:** If you have worked with C++ or Python, TypeScript's classes will feel familiar. Its class behavior, especially the 'this' keyword and method definitions, works the way you would expect. TypeScript supports OOP concepts and also helps with server-side development. 3) **Frameworks encourage TypeScript:** These days, frameworks like React, Angular, NestJS, and Vue support TypeScript; Angular even uses TypeScript as its primary language. TypeScript is JavaScript with extra features and supports ECMAScript, so all JavaScript frameworks can be configured to work with TypeScript and benefit from its static typing and bug detection during development. Implementing OOP concepts also becomes easy with TypeScript, which is another reason to use it over JavaScript. 4) **Type Hinting/Type Casting:** TypeScript supports type casting, which is another reason to use it over JavaScript. It surfaces errors while you code, so problems are resolved quickly. With type casting, using the 'as' keyword or the <> operator, it's easy to tell the compiler to treat a value of one type as another where required. At development time, TypeScript displays compilation errors. 5) **Promotes a Better Development Experience:** Because TypeScript is easy to understand and refactor, developers can pick a project back up even after a gap, and comments can document intent for future readers. When more than one developer works on an application, things can get messy, but the type system makes the code easier to debug. It also improves collaboration, because any developer can easily understand the codebase and start making the required changes. This, in turn, promotes a better development experience and more efficient output. 
6) **Targets multiple browsers:** It's better to choose a language that is compatible with all browsers than to check the application in each browser one by one. Once the code is written, TypeScript takes care of which browsers are targeted: it compiles TypeScript to JavaScript, which then runs in the browser. So, ultimately, it supports every browser that supports JavaScript. This is one of the biggest reasons to choose TypeScript over JavaScript.
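The static-typing and type-casting points above can be sketched in a few lines; the `User` interface and the values here are made-up examples, not from any particular codebase:

```typescript
// Static typing: the compiler rejects objects that don't match this shape.
interface User {
  id: number;
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name} (#${user.id})`;
}

// Type casting with "as": tell the compiler to treat an unknown value
// as a User. This is a compile-time assertion, not a runtime conversion.
const raw: unknown = JSON.parse('{"id": 1, "name": "Ada"}');
const user = raw as User;

console.log(greet(user)); // → Hello, Ada (#1)
```

Note that `as` only silences the compiler; if the parsed JSON doesn't actually match `User`, the mismatch surfaces at runtime, which is why casts are usually paired with a validation step.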
tofail
1,906,291
VDO Ninja woes and Overlay Setup
Original: https://codingcat.dev/podcast/vdo-ninja-woes-and-overlay-setup Trying to setup Video...
26,111
2024-06-30T03:32:25
https://codingcat.dev/podcast/vdo-ninja-woes-and-overlay-setup
webdev, javascript, beginners, podcast
Original: https://codingcat.dev/podcast/vdo-ninja-woes-and-overlay-setup {% youtube https://youtu.be/gexF47IhYLM %} Trying to setup Video Ninja, I wouldn't watch this if I were you!
codercatdev
1,906,290
🎉 iPhone 15 Pro Max Giveaway! 🎉
iPhone 15 Pro Max Giveaway! 🎉 Hey everyone! 👋 We are thrilled to announce our latest giveaway! 📱✨ 🎁...
0
2024-06-30T03:28:08
https://dev.to/md_shaharia_12/iphone-15-pro-max-giveaway-4i44
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fj4ge5act7u00vh3p0g3.jpg) [iPhone 15 Pro Max Giveaway!](https://sites.google.com/d/1KV_3eZiSN4UIJc9nupBSqe-z73LckXoG/p/1Sqk91XCiTXzoyjB-VAl2Ka6gvqGnLFdd/edit) 🎉 Hey everyone! 👋 We are thrilled to announce our latest giveaway! 📱✨ 🎁 Prize: One brand new [iPhone 15 Pro Max!](https://sites.google.com/d/1KV_3eZiSN4UIJc9nupBSqe-z73LckXoG/p/1Sqk91XCiTXzoyjB-VAl2Ka6gvqGnLFdd/edit) 📲 🗓 Duration: [30.06.2024] - [04.07.2024] 🏆 Winner Announcement: [05.07.2024] 🔹 How to Enter: 1 2️⃣ Like this post ❤️ 3️⃣ Tag 3 friends in the comments who’d love a new iPhone! 👯‍♂️👯‍♀️👯 ✨ Terms & Conditions: Open to residents of [USA, India, Russia] 🌍 Must be 18 years or older to enter 🎂 Giveaway is not affiliated with Instagram/Facebook/Twitter in any way Good luck to everyone! 🍀 May the best fan win this incredible iPhone 15 Pro Max! 🌟 Open to [{Specify Eligibility}](https://sites.google.com/d/1KV_3eZiSN4UIJc9nupBSqe-z73LckXoG/p/1Sqk91XCiTXzoyjB-VAl2Ka6gvqGnLFdd/edit) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3rtj16ch149t9vzmqii9.jpg) Feel free to customize this template with your specific details and requirements. Good luck with your giveaway! 🎁 [Win iPhone 15 Pro Max](https://sites.google.com/d/1KV_3eZiSN4UIJc9nupBSqe-z73LckXoG/p/1Sqk91XCiTXzoyjB-VAl2Ka6gvqGnLFdd/edit)
md_shaharia_12
1,906,288
Numpy Isnumeric Function: Mastering Numeric String Validation
In this lab, we will cover the isnumeric() function of the char module in the Numpy library. This function is used to check if a string contains only numeric characters. The function returns True if there are only numeric characters in the string, otherwise, it returns False.
27,710
2024-06-30T03:24:33
https://labex.io/tutorials/python-numpy-isnumeric-function-86462
numpy, coding, programming, tutorial
## Introduction In this lab, we will cover the `isnumeric()` function of the **char module** in the Numpy library. This function is used to check if a string contains only numeric characters. The function returns **True** if there are only numeric characters in the string, otherwise, it returns **False**. ### VM Tips After the VM startup is done, click the top left corner to switch to the **Notebook** tab to access Jupyter Notebook for practice. Sometimes, you may need to wait a few seconds for Jupyter Notebook to finish loading. The validation of operations cannot be automated because of limitations in Jupyter Notebook. If you face issues during learning, feel free to ask Labby. Provide feedback after the session, and we will promptly resolve the problem for you. ## Import Numpy Library We need to import the numpy library before we can use the `isnumeric()` function. We use the `import` keyword followed by the library name `numpy` and the nickname `np`: ```python import numpy as np ``` ## Using `isnumeric()` with a Single String We can use the `isnumeric()` function to check if a single string contains only numeric characters. Let's use an example string `"12Apple90"` and apply the `isnumeric()` function to it: ```python import numpy as np string1 = "12Apple90" print("The Input string is:") print(string1) x = np.char.isnumeric(string1) print("The Output is:") print(x) ``` Output: ```python The Input string is: 12Apple90 The Output is: False ``` As we can see, the `isnumeric()` function returns **False** as there are non-numeric characters in the input string. ## Using `isnumeric()` with an Array We can also use the `isnumeric()` function with an array of strings. 
Let's use an example array `inp_ar` which contains a mixture of numeric and non-numeric strings: ```python import numpy as np inp_ar = np.array(['1', '2000', '90', '3.5', '0']) print("The Input array is:") print(inp_ar) outp_arr = np.char.isnumeric(inp_ar) print("The Output array is:") print(outp_arr) ``` Output: ```python The Input array is: ['1' '2000' '90' '3.5' '0'] The Output array is: [ True True True False True] ``` As we can see, the `isnumeric()` function returns an array of boolean values with **True** indicating that the string contains only numeric characters and **False** indicating that the string contains non-numeric characters. ## Limitations of `isnumeric()` It is important to note that the `isnumeric()` function returns **False** for a string with a numeric value with a decimal, as shown in Example 2 above. ## Summary In this lab, we learned about the `isnumeric()` function of the Numpy library. We covered how to use it with single strings and arrays, as well as the limitations of the function. --- ## Want to learn more? - 🚀 Practice [Numpy Isnumeric Function](https://labex.io/tutorials/python-numpy-isnumeric-function-86462) - 🌳 Learn the latest [NumPy Skill Trees](https://labex.io/skilltrees/numpy) - 📖 Read More [NumPy Tutorials](https://labex.io/tutorials/category/numpy) Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) ! 😄
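Since `isnumeric()` rejects decimal strings like `'3.5'`, a common workaround (a sketch outside the scope of the original lab) is to attempt a float conversion instead:

```python
import numpy as np

def is_number(s):
    """Return True if the string parses as a number, including decimals."""
    try:
        float(s)
        return True
    except ValueError:
        return False

inp_ar = np.array(['1', '2000', '90', '3.5', '0'])
print([is_number(x) for x in inp_ar])
# [True, True, True, True, True] -- '3.5' now passes
```

Unlike `np.char.isnumeric`, this helper is a plain Python function, so apply it element-wise (e.g. with a list comprehension or `np.vectorize`).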
labby
1,905,600
How to make Slack Workflow input form
Hi I'm Tak. This article theme is how to make Slack Workflow input form. Overview I...
0
2024-06-30T03:24:33
https://dev.to/takahiro_82jp/how-to-make-slack-workflow-input-form-4doa
slack, devops, serverless, nocode
Hi, I'm Tak. This article is about how to make a Slack Workflow input form. ## Overview I recently used Slack Workflow for AWS resource management. It is a useful tool, so it's valuable for DevOps and platform engineering. This article teaches you how to create an input form and save its contents to a Google Spreadsheet. ## How to make a Slack Workflow input form 1. Click "More" and then "Automations" ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/184pob1mmz8uw82nxisp.png) 2. Click "New Workflow" and "Build Workflow"; next you will see the workflow builder ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88aa9y0ut7azp0c81mkb.png) 3. Click "From a link in Slack" ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z4ww94r741f8yb1mk9dz.png) This sets the trigger that starts the workflow. There are other triggers: on a schedule, when an emoji is used, when a member joins, and so on. 4. Click "Forms" ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bh7kq6ume7zz0tgrws6x.png) 5. Click "Collect info in a form", and the form creation page is displayed. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/63lljfbegcu959eo294o.png) 6. Create the form: enter a "Form title", then click "Add Question" to create a form question. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7whq3hj8d1n20441ktzq.png) 7. Create a question: first enter the question, then select the question type, add a description, and mark it as required or not. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jwpahduys28o0ld0iw6k.png) <br> There are a lot of question types. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/q7fsbwzilmkzv7vnve6j.png) 8. Save the form: back on the main page, once the form is finished, click "Save". 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5x5e1rjzqrbsucyx136d.png) 9. Connect a Google Spreadsheet: to send the submitted form info to a Google Spreadsheet, connect one by clicking "Google Sheets". ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hjmhmv56lm1fcaaqboz0.png) 10. Click "Add to spreadsheet", and the page for connecting a Google Spreadsheet appears. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t25e0m86pjojyrqvpc7k.png) 11. Connect the Google Spreadsheet: choose your Google account, then select the spreadsheet file name and sheet name. Next, check the Column and Value mappings. When you have finished checking, click "Save". ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cc2ca0ywz4vq7wnl3172.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/epp8ybfl84aehk4j8pzp.png) 12. Finish creating the form: click "Finish Up". You can adjust the settings if you want to customize the workflow further. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0j2acjgb9bmcocyxylrz.png) ## Lastly You can now use the input form created with Slack Workflow. There are many customization patterns, and Slack also offers CI/CD and Git repository service webhooks. With Slack Workflow in your toolkit, it looks like a great environment for DevOps and platform engineering. I'll keep learning Slack Workflow.
takahiro_82jp
1,906,284
Data Validation is super important, don't ignore or delay it.
I often encounter backend codebases where data isn't validated before being processed or inserted...
0
2024-06-30T03:16:51
https://dev.to/themuneebh/data-validation-is-super-important-dont-ignore-or-delay-it-3c8k
softwareengineering, typescript
I often encounter backend codebases where data isn't validated before being processed or inserted directly into the database. This can cause serious bugs: relying solely on your frontend buddy can lead to headaches, ruin the flow, result in unwanted and unexpected data in the database, and make you vulnerable to SQL injection if you are using an SQL database. What's the solution then? I agree that data validation is a must, but it isn't always easy to get right. Here's what you can do: **Don't trust anyone.** Validating data involves four crucial steps: 1. Check the type. 2. Validate the format. 3. Refine it. 4. Transform it (optional). We often stop at the first step and ignore the rest. Don't do that. Take a simple example of validating a phone number. You accept a "string" and check if it is a string. If so, you allow it and let it go to the database. No errors for now. But what if you have to verify that phone number with some third-party API and the format of the phone number isn't valid? Handling it later may still be possible, but it's not a good user experience to show the user that they have input the wrong data because we didn't validate the format earlier. Now they have to provide it again in the proper format. To create a good user experience, you should always: 1. Check the data type (string, boolean, integer, object). 2. Validate the format - if it's a phone number, it must look like one. 3. Refine it - make it perfect for future use cases. For example, if you don't need the "+" at the start, remove it now. 4. Transform it - after removing the "+", you are left with a string of digits, but you need a number instead of a string, so transform it. That's all you need to take care of to make your backend more robust. If you are a TypeScript lover like me, you can consider using Zod to fulfill all of the above criteria. It's a comprehensive library with great TypeScript support. Thanks for reading.
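As a concrete illustration of the four steps, here is a minimal plain-TypeScript sketch for the phone-number case. The accepted format (an optional leading "+" followed by 11-14 digits) is an assumption made for the example:

```typescript
// A sketch of the four validation steps for a phone number.
// The 11-14 digit rule below is an assumed example format.
function validatePhone(input: unknown): number {
  // 1. Check the type
  if (typeof input !== "string") {
    throw new Error("Phone number must be a string");
  }
  // 2. Validate the format
  if (!/^\+?\d{11,14}$/.test(input)) {
    throw new Error("Phone number format is invalid");
  }
  // 3. Refine: drop the leading "+" we don't need downstream
  const refined = input.replace(/^\+/, "");
  // 4. Transform: convert the digit string to a number
  return Number(refined);
}

console.log(validatePhone("+8801712345678")); // → 8801712345678
```

Zod can express the same pipeline declaratively with a chained `z.string().regex(...).transform(...)` schema, and it infers the output type for you.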
themuneebh
1,906,287
Expressjs vs Django vs FastApi vs Golang vs Rust (Actix) vs Ruby on Rails vs LAMP Stack (PHP) Hypothetical Benchmark Results
Framework Requests per Second Average Latency (ms) Scalability Ease of Use Community...
0
2024-06-30T03:24:23
https://dev.to/aliahadmd/expressjs-vs-django-vs-fastapi-vs-golang-vs-rust-actix-vs-ruby-on-rails-vs-lamp-stack-php-hypothetical-benchmark-results-nca
django, express, fastapi, ruby
| Framework | Requests per Second | Average Latency (ms) | Scalability | Ease of Use | Community Support |
|-----------------|----------------------|----------------------|-------------|-------------|-------------------|
| Go (net/http) | 30,000 | 20 | High | Moderate | Strong |
| Rust (Actix) | 35,000 | 15 | High | Low | Growing |
| FastAPI | 20,000 | 25 | High | High | Growing |
| Elixir Phoenix | 25,000 | 20 | Very High | Moderate | Niche |
| Express.js | 15,000 | 30 | High | High | Very Strong |
| Django | 10,000 | 40 | Moderate | High | Very Strong |
| Ruby on Rails | 8,000 | 50 | Moderate | Very High | Very Strong |
| LAMP Stack (PHP)| 12,000 | 35 | Moderate | Moderate | Very Strong |

Check more details: [Expressjs vs Django vs FastApi vs Golang vs Rust (Actix) vs Ruby on Rails vs LAMP Stack (PHP) Hypothetical Benchmark Results](https://www.aliahad.com/posts/expressjs-vs-django-vs-fastapi-vs-golang-vs-rust-actix-vs-ruby-on-rails-vs-lamp-stack-php-hypothetical-benchmark-results)
aliahadmd
1,906,283
Spring Boot + Swagger: A Step-by-Step Guide to Interactive API Documentation
Introduction In today's software development, especially when dealing with APIs...
0
2024-06-30T03:23:59
https://dev.to/cuongnp/supercharge-your-spring-boot-application-with-swagger-a-step-by-step-guide-to-interactive-api-documentation-18g7
webdev, springboot, beginners, programming
## Introduction In today's software development, especially when dealing with APIs (Application Programming Interfaces), having clear and detailed documentation is essential. It guides developers on how to use a service, which endpoints are available, and what data to send or expect in return. Swagger has become a widely used tool for this purpose. This guide will explain what Swagger is, how it functions, and provide an illustrative example of its use. ## What is Swagger? Swagger is a powerful open-source framework backed by a large ecosystem of tools that helps developers design, build, document, and consume RESTful web services. Swagger is language-agnostic and can be used with any programming language that supports REST API development. ### Key Features: - **API Documentation**: Swagger automatically generates interactive API documentation, allowing developers to explore and test endpoints. - **Design First or Code First**: Swagger supports design-first and code-first approaches, making it versatile for different development workflows. - **Interactive API Console**: It provides an interface to test API endpoints directly from the documentation. - **Standardization**: Using the OpenAPI Specification (formerly known as the Swagger Specification), Swagger standardizes how APIs are described. Before diving into the practice, you should prepare the things below: ### **Prerequisites** - Spring Boot application source code - IntelliJ - (Optional) I'm running the project on an AWS EC2 server, so if you want to deploy and test on your own server, open port `8080` in its Security Group ## Step 1: Set up the library - Choose the appropriate setup and compatible version depending on your project's configuration, whether it's Gradle or Maven. [springfox-swagger-ui](https://mvnrepository.com/artifact/io.springfox/springfox-swagger-ui) ## Step 2: Create a `SwaggerConfig` - We need to configure Swagger for our application. 
Java: ```java @Configuration @EnableSwagger2 public class SwaggerConfig { @Bean public Docket api() { return new Docket(DocumentationType.SWAGGER_2) .select() .apis(RequestHandlerSelectors.basePackage("your package name")) .paths(PathSelectors.any()) .build(); } } ``` Kotlin ```kotlin @Configuration @EnableSwagger2 class SwaggerConfig { @Bean fun api(): Docket { return Docket(DocumentationType.SWAGGER_2) .select() .apis(RequestHandlerSelectors.basePackage("your package name")) .paths(PathSelectors.any()) .build() } } ``` > Don't forget to replace “your package name” with your application's base package ## Step 3: Update Security Config - If your app is configured with a security layer, you need to modify it to allow access to the Swagger documentation. Java ```java protected void configure(HttpSecurity http) throws Exception { http .cors() // Other configurations... .antMatchers("/v2/api-docs", "/configuration/**", "/swagger*/**", "/webjars/**") // Add endpoint .permitAll() // Add permit for it // Other configurations... } ``` Kotlin ```kotlin @Throws(Exception::class) protected fun configure(http: HttpSecurity) { http .cors() // Other configurations... .antMatchers("/v2/api-docs", "/configuration/**", "/swagger*/**", "/webjars/**") // Add endpoint .permitAll() // Add permit for it // Other configurations... } ``` ## Step 4: Viewing the Documentation - Build and deploy (or start the server on localhost) - If you're developing on your PC, simply restart and enter this in your browser: ```xml http://localhost:8080/swagger-ui.html/ ``` - If you're deploying to your server: ```xml http://your-server-ip:8080/swagger-ui.html/ ``` ![API-List](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qdn8vdhw6rx6l89z6iel.png) - All of the APIs will be displayed on the Swagger page. You can try and test the APIs just like you would in Postman. 
![Swagger-springboot](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8y9bkbnb4n4d8h5njpvu.png) ## Final Thoughts Swagger is an invaluable tool for API development and documentation. It not only helps in maintaining up-to-date documentation but also enhances the developer experience by providing an interactive interface to explore and test APIs. By integrating Swagger into your API development workflow, you can ensure that your APIs are well-documented, easy to understand, and user-friendly. I also have a step-by-step guide for Node.js projects. Check it out! [Set up Swagger for a Node.js project!](https://dev.to/cuongnp/swagger-nodejs-express-a-step-by-step-guide-4ob)
cuongnp
1,906,286
GitHub CMS
Original: https://codingcat.dev/podcast/github-cms
26,111
2024-06-30T03:20:18
https://codingcat.dev/podcast/github-cms
webdev, javascript, beginners, podcast
Original: https://codingcat.dev/podcast/github-cms {% youtube https://youtube.com/embed/rTmzgCZdzQ0 %}
codercatdev
1,906,285
Hypertext Markup Language (HTML)
Acceptance criteria: It is done when I have analyzed and identified , , , and elements in the...
0
2024-06-30T03:19:12
https://dev.to/moth668/hypertext-markup-language-html-5blo
Acceptance criteria: It is done when I have analyzed and identified the <html>, <head>, <meta>, and <title> elements in the index.html file in the prework-study-guide repository. In this module, I learned that the <!DOCTYPE html> declaration tells the browser to expect an HTML document. Then I learned how the document language is set and about the different elements inside the <head>. I experimented in VS Code with the index.html file by changing the <title> in the <head>: this text shows up in the browser tab when the file is opened in Chrome, but the main heading on the page still read "Prework Study Guide". (I went ahead and changed the <h1> in the <body> so that the page heading matched.)
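For reference, a minimal skeleton showing the elements mentioned above (the title and heading text are just placeholders):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <title>Prework Study Guide</title> <!-- shown in the browser tab -->
  </head>
  <body>
    <h1>Prework Study Guide</h1> <!-- the heading shown on the page -->
  </body>
</html>
```

Changing the `<title>` updates only the tab text; the visible heading comes from the `<h1>` in the `<body>`.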
moth668
1,906,280
VTable table component, how to listen for mouse hover events on the elements?
Question Description When customizing cell content using customLayout, including Text and...
0
2024-06-30T03:14:05
https://dev.to/fangsmile/vtable-table-component-how-to-listen-for-mouse-hover-events-on-the-elements-517h
visactor, vtable, webdev, excel
## Question Description When customizing cell content using customLayout, including Text and Image, I would like to have some custom logic when hovering over the Image. Currently, the mouse-enter event for the cell cannot distinguish between specific targets. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/88fp9cjk9tdkoo1qzise.png) For DOM elements in JavaScript, the mouseenter event is triggered only once when the mouse pointer enters (moves over) the element. So is there an event like mouseenter_cell to monitor specified content in custom cells? ## Solution You can bind the mouseenter and mouseleave events to the image dom of the custom layout 'customLayout'. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xia9rf8q62lxzjfdi506.png) ## Code example You can paste it into the official editor to test: https://visactor.io/vtable/demo/custom-render/custom-cell-layout-jsx ``` const VGroup = VTable.VGroup; const VText = VTable.VText; const VImage = VTable.VImage; const VTag = VTable.VTag; const option = { container: document.getElementById('container'), columns: [ { field: 'bloggerId', title: 'bloggerId' }, { field: 'bloggerName', title: 'bloggerName', width: 330, customLayout: args => { const { table, row, col, rect } = args; const { height, width } = rect || table.getCellRect(col, row); const record = table.getRecordByRowCol(col, row); // const jsx = jsx; const container = ( <VGroup attribute={{ id: 'container', width, height, display: 'flex', flexWrap: 'nowrap', justifyContent: 'flex-start', alignContent: 'center' }} > <VGroup id="container-right" attribute={{ id: 'container-right', width: width - 60, height, fill: 'yellow', opacity: 0.1, display: 'flex', flexWrap: 'nowrap', flexDirection: 'column', justifyContent: 'space-around', alignItems: 'center' }} > <VGroup attribute={{ id: 'container-right-top', fill: 'red', opacity: 0.1, width: width - 60, height: height / 2, display: 'flex', flexWrap: 'wrap', 
justifyContent: 'flex-start', alignItems: 'center' }} > <VText attribute={{ id: 'bloggerName', text: record.bloggerName, fontSize: 13, fontFamily: 'sans-serif', fill: 'black', textAlign: 'left', textBaseline: 'top', boundsPadding: [0, 0, 0, 10] }} ></VText> <VImage attribute={{ id: 'location-icon', width: 15, height: 15, image: '<svg t="1684484908497" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="2429" width="200" height="200"><path d="M512 512a136.533333 136.533333 0 1 1 136.533333-136.533333 136.533333 136.533333 0 0 1-136.533333 136.533333z m0-219.272533a81.92 81.92 0 1 0 81.92 81.92 81.92 81.92 0 0 0-81.92-81.92z" fill="#0073FF" p-id="2430"></path><path d="M512 831.214933a27.306667 27.306667 0 0 1-19.2512-8.055466l-214.493867-214.357334a330.5472 330.5472 0 1 1 467.490134 0l-214.357334 214.357334a27.306667 27.306667 0 0 1-19.387733 8.055466z m0-732.091733a275.933867 275.933867 0 0 0-195.106133 471.04L512 765.269333l195.106133-195.106133A275.933867 275.933867 0 0 0 512 99.1232z" fill="#0073FF" p-id="2431"></path><path d="M514.321067 979.490133c-147.456 0-306.107733-37.000533-306.107734-118.3744 0-45.602133 51.746133-81.92 145.681067-102.4a27.306667 27.306667 0 1 1 11.605333 53.384534c-78.370133 17.066667-102.673067 41.915733-102.673066 49.015466 0 18.432 88.064 63.761067 251.4944 63.761067s251.4944-45.192533 251.4944-63.761067c0-7.3728-25.258667-32.768-106.496-49.834666a27.306667 27.306667 0 1 1 11.195733-53.384534c96.6656 20.343467 150.186667 56.9344 150.186667 103.2192-0.273067 80.964267-158.9248 118.3744-306.3808 118.3744z" fill="#0073FF" p-id="2432"></path></svg>', boundsPadding: [0, 0, 0, 10], cursor: 'pointer' }} stateProxy={stateName => { if (stateName === 'hover') { return { background: { fill: 'green', cornerRadius: 5, expandX: 1, expandY: 1 } }; } }} onPointerEnter={event => { event.currentTarget.addState('hover', true, false); event.currentTarget.stage.renderNextFrame(); }} onPointerLeave={event => { 
event.currentTarget.removeState('hover', false); event.currentTarget.stage.renderNextFrame(); }} ></VImage> <VText attribute={{ id: 'locationName', text: record.city, fontSize: 11, fontFamily: 'sans-serif', fill: '#6f7070', textAlign: 'left', textBaseline: 'top' }} ></VText> </VGroup> </VGroup> </VGroup> ); // decode(container) return { rootContainer: container, renderDefault: false }; } }, { field: 'fansCount', title: 'fansCount', fieldFormat(rec) { return rec.fansCount + 'w'; }, style: { fontFamily: 'Arial', fontSize: 12, fontWeight: 'bold' } }, { field: 'worksCount', title: 'worksCount', style: { fontFamily: 'Arial', fontSize: 12, fontWeight: 'bold' } }, { field: 'viewCount', title: 'viewCount', fieldFormat(rec) { return rec.fansCount + 'w'; }, style: { fontFamily: 'Arial', fontSize: 12, fontWeight: 'bold' } }, { field: 'viewCount', title: 'viewCount', fieldFormat(rec) { return rec.fansCount + 'w'; }, style: { fontFamily: 'Arial', fontSize: 12, fontWeight: 'bold' } }, { field: '', title: 'operation', width: 100, icon: ['favorite', 'message'] } ], records: [ { bloggerId: 6, bloggerName: 'Virtual anchor bird', bloggerAvatar: 'https://lf9-dp-fe-cms-tos.byteorg.com/obj/bit-cloud/VTable/custom-render/bird.jpeg', introduction: 'Hello everyone, I am the virtual anchor Xiaoniao. I like singing, acting and variety shows. I hope to be happy with everyone through the live broadcast.', fansCount: 900, worksCount: 12, viewCount: 8, city: 'Happy City', tags: ['music', 'performance', 'variety'] } ], defaultRowHeight: 80 }; const instance = new VTable.ListTable(document.getElementById(CONTAINER_ID), option); window.tableInstance = instance; ``` ## documents demo: https://visactor.io/vtable/demo/custom-render/custom-cell-layout-jsx Relevant api:https://visactor.io/vtable/option/ListTable-columns-text#customLayout Tutorial:https://visactor.io/vtable/guide/custom_define/custom_layout github:https://github.com/VisActor/VTable
fangsmile
1,906,279
After custom rendering in the column configuration of the VTable component, the icon configuration fails. How to solve this?
Problem Description We have used the customLayout or customRender configuration for custom...
0
2024-06-30T03:07:36
https://dev.to/fangsmile/after-custom-rendering-in-the-column-configuration-of-the-vtable-component-the-icon-configuration-fails-how-to-solve-this-39he
visactor, vtable, visiualization, webdev
## Problem Description We have used the customLayout or customRender configuration for custom rendering in business scenarios, but we also want to use VTable's built-in icon feature. However, when both are configured, the icon does not display correctly. Is there any way to make both configurations work together? As shown below, only the content of customRender is displayed; the configured icon is not. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qz9omnqzw43ovejud425.png) ## Solution You can solve this by enabling renderDefault in the custom rendering configuration. However, after doing so, you may find unwanted default content being drawn as well. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uv33tpryydn5onxv8mo2.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pajzide399df1krd0hpg.png) To solve that, use fieldFormat to return an empty value from a custom function, so that the default text content is not drawn. 
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bo228lpckfusohdsemsb.png) ## Code Examples You can paste it into the official editor for testing: https://visactor.io/vtable/demo/custom-render/custom-render ``` const option = { columns:[ { field: 'not_urgency', title:'not urgency', width:400, headerStyle:{ lineHeight:50, bgColor:'#4991e3', color:'white', textAlign:'center', fontSize:26, fontWeight:600, }, style:{ fontFamily:'Arial', fontSize:12, fontWeight:'bold' }, fieldFormat:()=>'', icon:{ name: 'detail', type: 'svg', svg: `<svg t="1710211168958" class="icon" viewBox="0 0 1024 1024" version="1.1" xmlns="http://www.w3.org/2000/svg" p-id="3209" xmlns:xlink="http://www.w3.org/1999/xlink" width="200" height="200"><path d="M722.944 256l-153.6 153.6c-3.072 3.072-5.12 6.656-7.168 10.24-1.536 4.096-2.56 8.192-2.56 12.288v1.536c0 4.096 1.024 7.68 2.56 11.264 1.536 3.584 3.584 6.656 6.656 9.728 3.072 3.072 6.656 5.12 10.24 7.168 4.096 1.536 8.192 2.56 12.288 2.56 4.096 0 8.192-1.024 12.288-2.56 4.096-1.536 7.168-4.096 10.24-7.168l153.6-153.6v114.688c0 2.048 0 4.096 0.512 6.144 0.512 2.048 1.024 4.096 2.048 6.144 1.024 2.048 1.536 3.584 3.072 5.632 1.024 1.536 2.56 3.584 4.096 4.608 1.536 1.536 3.072 2.56 4.608 4.096 1.536 1.024 3.584 2.048 5.632 3.072 2.048 1.024 4.096 1.536 6.144 2.048 2.048 0.512 4.096 0.512 6.144 0.512 2.048 0 4.096 0 6.144-0.512 2.048-0.512 4.096-1.024 6.144-2.048 2.048-1.024 3.584-1.536 5.632-3.072 1.536-1.024 3.584-2.56 4.608-4.096 1.536-1.536 2.56-3.072 4.096-4.608 1.024-1.536 2.048-3.584 3.072-5.632 1.024-2.048 1.536-4.096 2.048-6.144 0.512-2.048 0.512-4.096 0.512-6.144V223.744c0-4.096-1.024-8.192-2.56-12.288-1.536-4.096-4.096-7.168-7.168-10.24h-0.512c-3.072-3.072-6.656-5.12-10.24-6.656-4.096-1.536-7.68-2.56-12.288-2.56h-192c-2.048 0-4.096 0-6.144 0.512-2.048 0.512-4.096 1.024-6.144 2.048-2.048 1.024-3.584 1.536-5.632 3.072-1.536 1.024-3.584 2.56-4.608 4.096-1.536 1.536-2.56 3.072-4.096 4.608-1.024 1.536-2.048 
3.584-3.072 5.632-1.024 2.048-1.536 4.096-2.048 6.144-0.512 2.048-0.512 4.096-0.512 6.144s0 4.096 0.512 6.144c0.512 2.048 1.024 4.096 2.048 6.144 1.024 2.048 1.536 3.584 3.072 5.632 1.024 1.536 2.56 3.584 4.096 4.608 1.536 1.536 3.072 2.56 4.608 4.096 1.536 1.024 3.584 2.048 5.632 3.072 2.048 1.024 4.096 1.536 6.144 2.048 2.048 0.512 4.096 0.512 6.144 0.512h115.712z m-268.288 358.4l-153.6 153.6h114.688c2.048 0 4.096 0 6.144 0.512 2.048 0.512 4.096 1.024 6.144 2.048 2.048 1.024 3.584 1.536 5.632 3.072 1.536 1.024 3.584 2.56 4.608 4.096 1.536 1.536 2.56 3.072 4.096 4.608 1.024 1.536 2.048 3.584 3.072 5.632 1.024 2.048 1.536 4.096 2.048 6.144 0.512 2.048 0.512 4.096 0.512 6.144 0 2.048 0 4.096-0.512 6.144-0.512 2.048-1.024 4.096-2.048 6.144-1.024 2.048-1.536 3.584-3.072 5.632-1.024 1.536-2.56 3.584-4.096 4.608-1.536 1.536-3.072 2.56-4.608 4.096-1.536 1.024-3.584 2.048-5.632 3.072-2.048 1.024-4.096 1.536-6.144 2.048-2.048 0.512-4.096 0.512-6.144 0.512H224.256c-2.048 0-4.096 0-6.144-0.512-2.048-0.512-4.096-1.024-6.144-2.048-2.048-1.024-3.584-1.536-5.632-3.072-1.536-1.024-3.584-2.56-4.608-4.096-1.536-1.536-2.56-3.072-4.096-4.608-1.024-1.536-2.048-3.584-3.072-5.632-1.024-2.048-1.536-4.096-2.048-6.144-0.512-2.048-0.512-4.096-0.512-6.144v-192.512c0-2.048 0-4.096 0.512-6.144 0.512-2.048 1.024-4.096 2.048-6.144 1.024-2.048 1.536-3.584 3.072-5.632 1.024-1.536 2.56-3.584 4.096-4.608 1.536-1.536 3.072-2.56 4.608-4.096 1.536-1.024 3.584-2.048 5.632-3.072 2.048-1.024 4.096-1.536 6.144-2.048 2.048-0.512 4.096-0.512 6.144-0.512s4.096 0 6.144 0.512c2.048 0.512 4.096 1.024 6.144 2.048 2.048 1.024 3.584 1.536 5.632 3.072 1.536 1.024 3.584 2.56 4.608 4.096 1.536 1.536 2.56 3.072 4.096 4.608 1.024 1.536 2.048 3.584 3.072 5.632 1.024 2.048 1.536 4.096 2.048 6.144 0.512 2.048 0.512 4.096 0.512 6.144v114.688l153.6-153.6c3.072-3.072 6.656-5.12 10.24-7.168 4.096-1.536 8.192-2.56 12.288-2.56 4.096 0 8.192 1.024 12.288 2.56 4.096 1.536 7.168 3.584 10.24 6.656h0.512c3.072 3.072 5.12 6.656 7.168 
10.24 1.536 4.096 2.56 8.192 2.56 12.288 0 4.096-1.024 8.192-2.56 12.288-3.072 5.12-5.12 8.704-8.192 11.264z" p-id="3210" fill="#999999"></path></svg>`, marginRight: 8, positionType: VTable.TYPES.IconPosition.absoluteRight, width: 16, height: 16, cursor: 'pointer', visibleTime: 'mouseenter_cell', funcType: 'record_detail', tooltip: { title:'展开详情', style: { fontSize: 12, padding: [8, 8, 8, 8], bgColor: '#46484a', arrowMark: true, color: 'white', maxHeight: 100, maxWidth: 200 }, placement: VTable.TYPES.Placement.top } }, customRender(args){ const { width, height}= args.rect; const {dataValue,table,row} =args; const elements=[]; let top=30; const left=15; let maxWidth=0; elements.push({ type: 'text', fill: 'red', fontSize: 20, fontWeight: 500, textBaseline: 'middle', text: row===1? 'important but not urgency':'not important and not urgency', x: left+50, y: top-5, }); return { elements, expectedHeight:top+20, expectedWidth: 300, renderDefault:true } } }, ], records:[ { 'type': 'important', "urgency": ['crisis','urgent problem','tasks that must be completed within a limited time'], "not_urgency": ['preventive measures','development relationship','identify new development opportunities','establish long-term goals'], }, { 'type': 'Not\nimportant', "urgency": ['Receive visitors','Certain calls, reports, letters, etc','Urgent matters','Public activities'], "not_urgency": ['Trivial busy work','Some letters','Some phone calls','Time-killing activities','Some pleasant activities'], }, ], defaultRowHeight:80, heightMode:'autoHeight', widthMode:'standard', autoWrapText:true, }; const tableInstance = new VTable.ListTable(document.getElementById(CONTAINER_ID),option); window['tableInstance'] = tableInstance; ``` ## Relevant Documents Related API: https://visactor.io/vtable/option/ListTable-columns-text#customRender.renderDefault Tutorial:https://visactor.io/vtable/demo/custom-render/custom-render github:https://github.com/VisActor/VTable
fangsmile
1,906,305
Implementing User Suspension in Your Laravel Application
This guide will walk through implementing user suspension in a Laravel application. This...
0
2024-06-30T04:22:29
https://raziul.dev/implementing-user-suspension-in-your-laravel-application
laravel, php, eloquent
--- title: Implementing User Suspension in Your Laravel Application published: true date: 2024-06-30 03:06:28 UTC tags: laravel,php,eloquent canonical_url: https://raziul.dev/implementing-user-suspension-in-your-laravel-application cover_image: https://raziul.dev/_astro/47fc0d55-a976-4145-8e1a-d671cf0a9c5a_1b48x.webp.jpg --- This guide will walk through implementing user suspension in a Laravel application. This functionality allows you to temporarily or permanently suspend users and notify them accordingly. ## Step 1: Add Suspension Columns to the Users Table First, we need to update our `users` table to include columns for tracking suspension status and reason. 1. Create a new migration: ``` bash php artisan make:migration add_suspension_columns_to_users_table --table=users ``` 2. In the migration file, add the following code: ``` php use Illuminate\Database\Migrations\Migration; use Illuminate\Database\Schema\Blueprint; use Illuminate\Support\Facades\Schema; return new class extends Migration { public function up(): void { Schema::table('users', function (Blueprint $table) { $table->timestamp('suspended_at')->nullable(); $table->timestamp('suspension_ends_at')->nullable(); $table->string('suspension_reason')->nullable(); }); } public function down(): void { Schema::table('users', function (Blueprint $table) { $table->dropColumn([ 'suspended_at', 'suspension_ends_at', 'suspension_reason' ]); }); } }; ``` 3. Run the migration ``` bash php artisan migrate ``` ## Step 2: Update the User Model 1. Open `app/Models/User.php` file. 2. Use the `Suspendable` trait and add the necessary attribute casts: ``` php use App\Traits\Suspendable; ... class User extends Authenticatable { ... use Suspendable; protected $casts = [ 'email_verified_at' => 'datetime', 'password' => 'hashed', 'suspended_at' => 'datetime', 'suspension_ends_at' => 'datetime', ]; ... 
} ``` Laravel 11 and newer versions utilize `casts` methods for property casting: ```php protected function casts(): array { return [ 'email_verified_at' => 'datetime', 'password' => 'hashed', 'suspended_at' => 'datetime', 'suspension_ends_at' => 'datetime', ]; } ``` ## Step 3: Create the Suspendable Trait 1. Create a new PHP file at `app/Traits/Suspendable.php` 2. Add the following code to that file: ```php <?php namespace App\Traits; use App\Notifications\UserSuspendedNotification; use App\Notifications\UserUnsuspendedNotification; use Carbon\CarbonInterface; use Illuminate\Database\Eloquent\Casts\Attribute; trait Suspendable { /** * Account is banned for lifetime. */ protected function isBanned(): Attribute { return Attribute::get( fn () => $this->suspended_at && is_null($this->suspension_ends_at) ); } /** * Account is suspended for some time. */ protected function isSuspended(): Attribute { return Attribute::get( fn () => $this->suspended_at && $this->suspension_ends_at?->isFuture() ); } /** * Suspend account and notify them. */ public function suspend(string $reason, CarbonInterface $ends_at = null): void { $this->update([ 'suspended_at' => now(), 'suspension_reason' => $reason, 'suspension_ends_at' => $ends_at, ]); $this->notify(new UserSuspendedNotification($this)); } /** * Un-suspend account and notify them. */ public function unsuspend(): void { if (! $this->suspended_at) { return; } $this->update([ 'suspended_at' => null, 'suspension_reason' => null, 'suspension_ends_at' => null, ]); $this->notify(new UserUnsuspendedNotification($this)); } } ``` This trait adds the `suspend` and `unsuspend` methods to the `User` model for suspending and unsuspending accounts easily. This also provides the `is_banned` and `is_suspended` attributes for checking suspension status. ## Step 4: Create Notifications 1. Create notification classes: ```bash php artisan make:notification UserSuspendedNotification php artisan make:notification UserUnsuspendedNotification ``` 2. 
Edit `app/Notifications/UserSuspendedNotification.php` ```php namespace App\Notifications; use App\Models\User; use Illuminate\Bus\Queueable; use Illuminate\Contracts\Queue\ShouldQueue; use Illuminate\Notifications\Messages\MailMessage; use Illuminate\Notifications\Notification; class UserSuspendedNotification extends Notification { use Queueable; public function __construct( public readonly User $user ) { } /** * Get the notification's delivery channels. */ public function via(object $notifiable): array { return ['mail']; } /** * Get the mail representation of the notification. */ public function toMail(object $notifiable): MailMessage { if ($this->user->is_banned) { $subject = __('Your account has been banned'); $message = null; } else { $subject = __('Your account has been suspended'); $message = __('Suspension will end on: :date', ['date' => $this->user->suspension_ends_at]); } return (new MailMessage) ->subject($subject) ->line($subject) ->line(__('Reason: **:reason**', ['reason' => $this->user->suspension_reason])) ->line($message) ->line(__('If you believe this is a mistake, please contact us.')) ->line(__('Thank you for your understanding.')); } } ``` 3. Edit `app/Notifications/UserUnsuspendedNotification.php` ```php namespace App\Notifications; use App\Models\User; use Illuminate\Bus\Queueable; use Illuminate\Contracts\Queue\ShouldQueue; use Illuminate\Notifications\Messages\MailMessage; use Illuminate\Notifications\Notification; class UserUnsuspendedNotification extends Notification { use Queueable; public function __construct( public readonly User $user ) { } /** * Get the notification's delivery channels. */ public function via(object $notifiable): array { return ['mail']; } /** * Get the mail representation of the notification. */ public function toMail(object $notifiable): MailMessage { return (new MailMessage) ->subject(__('Your Suspension Has Been Removed')) ->greeting(__('Hello :name,', ['name' => $this->user->name])) ->line(__('Suspension has been removed.
Your account is now active.')) ->line(__('You can now log into your account.')) ->action(__('Log in'), route('login')) ->line(__('Thank you for staying with us.')); } } ``` We are almost done 😀 Let's take a look at the usage example: ```php $user = \App\Models\User::find(1); // temporary suspension (for 7 days) $user->suspend('suspension reason', now()->addDays(7)); // permanent suspension $user->suspend('suspension reason'); // unsuspension $user->unsuspend(); ``` Now, the only thing that remains is to check whether the authenticated user is suspended and restrict their access to the application. Let's do this in the next step. ## Step 5: Restrict Application Access for Suspended Users 1. Create Middleware ```bash php artisan make:middleware CheckUserSuspension ``` 2. In the middleware file `app/Http/Middleware/CheckUserSuspension.php`, add the following logic to handle restricted access for suspended users: ```php namespace App\Http\Middleware; use Closure; use Illuminate\Http\Request; use Symfony\Component\HttpFoundation\Response; class CheckUserSuspension { public function handle(Request $request, Closure $next): Response { $user = $request->user(); abort_if( $user && ($user->is_suspended || $user->is_banned), Response::HTTP_FORBIDDEN, __('Your account has been suspended or banned. Check your email for details.') ); return $next($request); } } ``` 3. Apply Middleware to Routes: In `routes/web.php` or `routes/api.php` apply the middleware to the routes you want to protect: ```php use App\Http\Middleware\CheckUserSuspension; // Protected routes Route::group(['middleware' => ['auth', CheckUserSuspension::class]], function () { Route::get('/dashboard', DashboardController::class); }); // Other routes ``` Alternatively, you can add this middleware to the `web` or `api` middleware group to apply it to a set of routes. ## Step 6: Applying to Middleware Groups (optional) 1.
Laravel 11 or newer ```php // file: bootstrap/app.php use App\Http\Middleware\CheckUserSuspension; return Application::configure(basePath: dirname( __DIR__ )) // other code ->withMiddleware(function (Middleware $middleware) { $middleware->web(append: [ CheckUserSuspension::class, ]); }) // other code ``` 2. For Laravel 10 or older ```php // file: app/Http/Kernel.php protected $middlewareGroups = [ 'web' => [ // other middlewares CheckUserSuspension::class, ], 'api' => [ // other middlewares CheckUserSuspension::class, ], ]; ``` ## Conclusion By following this guide, you have successfully implemented user suspension functionality in your Laravel application. This approach keeps your `User` model clean and encapsulates the suspension logic within a reusable `Suspendable` trait. This feature allows you to manage user access effectively by suspending and unsuspending users as needed. This not only enhances the security and control over user activities but also ensures a better user management system. Happy coding! ❤️
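A possible extension to the guide above (not part of the original steps): since `is_suspended` only checks whether `suspension_ends_at` is still in the future, an expired temporary suspension stops blocking access but leaves the suspension columns populated and never sends the unsuspension notification. A scheduled Artisan command could tidy this up; the command name and scheduling choice below are illustrative assumptions:

```php
<?php

// app/Console/Commands/ClearExpiredSuspensions.php -- a hedged sketch, not part
// of the original guide. Assumes the Suspendable trait from Step 3 is in place.
namespace App\Console\Commands;

use App\Models\User;
use Illuminate\Console\Command;

class ClearExpiredSuspensions extends Command
{
    protected $signature = 'users:clear-expired-suspensions';

    protected $description = 'Unsuspend users whose suspension period has ended';

    public function handle(): int
    {
        // Find users whose temporary suspension has expired and lift it,
        // which also sends the UserUnsuspendedNotification from Step 4.
        User::whereNotNull('suspended_at')
            ->whereNotNull('suspension_ends_at')
            ->where('suspension_ends_at', '<=', now())
            ->each(fn (User $user) => $user->unsuspend());

        return self::SUCCESS;
    }
}
```

The command could then be run daily via the scheduler, e.g. `Schedule::command('users:clear-expired-suspensions')->daily();` in `routes/console.php` (Laravel 11+), or the equivalent `$schedule->command(...)` call in `app/Console/Kernel.php` on older versions.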
raziul
1,906,278
SvelteKit with Notion CMS and Cloudinary
Original: https://codingcat.dev/podcast/sveltekit-with-notion-cms-and-cloudinary
26,111
2024-06-30T03:05:16
https://codingcat.dev/podcast/sveltekit-with-notion-cms-and-cloudinary
webdev, javascript, beginners, podcast
Original: https://codingcat.dev/podcast/sveltekit-with-notion-cms-and-cloudinary {% youtube https://www.youtube.com/watch?v=ElZfDlOevyc %}
codercatdev
1,906,277
Javascript/Html/Css
JavaScript, HTML, and CSS are the three core technologies for building web pages. Each of them...
0
2024-06-30T03:03:39
https://dev.to/bekmuhammaddev/javascripthtmlcss-2ok8
javascript, html, css
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pdq7jvr8gzfkwt93l3bs.png) JavaScript, HTML, and CSS are the three core technologies for building web pages. Each of them has its own role, and when used together they are very effective for creating and managing web pages. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6qxquvuo5vt0w0org8h4.png) _The structure of JavaScript, HTML, and CSS:_ ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2xzsu57h86ywjqor0jfk.png) **HTML (HyperText Markup Language)** HTML is used to define the structure of web pages. It specifies a page's building blocks, such as text, images, videos, links, and other elements. The basic structure of HTML: an HTML document consists of three main parts: the doctype declaration, the head section, and the body section. The basic HTML structure: ``` <!DOCTYPE html> <html> <head> <title>Mening Veb Sahifam</title> </head> <body> <h1>Sarlavha</h1> <p>Bu paragraflar.</p> </body> </html> ``` HTML elements: an HTML element is defined using tags. Tags come in opening and closing pairs, and the content between them makes up the element's content. Core HTML tags: - 'html': the start and end of the HTML document. - 'head': contains metadata, styles, and scripts. - 'title': the text shown in the browser's title bar. - 'body': contains the main content of the web page. - 'h1' through 'h6': heading tags, where 'h1' is the largest and 'h6' the smallest heading. - 'p': paragraph text. - 'a': link (anchor) element. - 'img': image element. - 'div': section or container element. - 'span': inline container element. HTML is used to define the basic structure of web pages. It has a simple, readable syntax, and combined with other technologies it becomes a very powerful tool.
Using HTML together with CSS and JavaScript, you can create interactive and aesthetically pleasing web pages. **CSS (Cascading Style Sheets)** CSS is used to define the appearance (style) of web pages. It controls the color, font, size, layout, and other visual properties of HTML elements. CSS code: ``` <!DOCTYPE html> <html> <head> <title>Mening Veb Sahifam</title> <style> body { font-family: Arial, sans-serif; background-color: #f0f0f0; color: #333; } h1 { color: #0066cc; } </style> </head> <body> <h1>Sarlavha</h1> <p>Bu paragraflar.</p> </body> </html> ``` CSS example: ``` h1 { color: blue; font-size: 24px; } ``` In the example above, the h1 selector sets the color and font-size properties for all <h1> tags. **JavaScript** JavaScript adds dynamism to web pages. It handles user interaction, updates page content, validates data, and performs many other tasks. JavaScript is mainly used to make web pages dynamic and interactive. JavaScript was originally created at Netscape and is now one of the most popular and widely used programming languages in the world. JavaScript code: ``` <!DOCTYPE html> <html> <head> <title>Mening Veb Sahifam</title> <style> body { font-family: Arial, sans-serif; background-color: #f0f0f0; color: #333; } h1 { color: #0066cc; } </style> </head> <body> <h1 id="sarlavha">Sarlavha</h1> <p id="matn">Bu paragraflar.</p> <button onclick="matnniOzgartirish()">Matnni O'zgartirish</button> <script> function matnniOzgartirish() { document.getElementById("sarlavha").innerHTML = "Yangi Sarlavha"; document.getElementById("matn").innerHTML = "Matn o'zgartirildi!"; } </script> </body> </html> ``` Uses of JavaScript: 1. Web development: mainly used in front-end development, but thanks to Node.js it is also used on the back end. 2.
Mobile apps: used for building mobile applications through frameworks such as React Native and Ionic. 3. Games: used for building browser games with frameworks such as Phaser and Babylon.js. 4. Server-side programming: with Node.js, you can write programs that run on the server. Popular frameworks and libraries: 1. React: developed by Facebook and widely used for building single-page applications (SPAs). 2. Angular: developed by Google and designed for building large, complex web applications. 3. Vue.js: a very flexible framework with a simple syntax, used in both small and large projects. 4. Node.js: used for server-side programming, helping to build high-performance, scalable networked applications. HTML defines the structure of a page, CSS styles it, and JavaScript adds dynamism. Together, these three technologies are used to make web pages attractive and interactive. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/93d0i8zprt2wp70ta952.png) In the 2023 technology rankings, JavaScript leads, HTML/CSS come second, followed by the other languages. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w6a9ridpmpjzc0yrh25o.png)
bekmuhammaddev
1,906,276
SvelteKit with DaisyUI
Original: https://codingcat.dev/podcast/sveltekit-with-daisyui
26,111
2024-06-30T03:02:34
https://codingcat.dev/podcast/sveltekit-with-daisyui
webdev, javascript, beginners, podcast
Original: https://codingcat.dev/podcast/sveltekit-with-daisyui {% youtube https://youtu.be/l4sbqrY0XGk %}
codercatdev
1,906,275
Effortlessly Orchestrating Workflows in the Cloud: A Deep Dive into AWS Managed Apache Airflow
Effortlessly Orchestrating Workflows in the Cloud: A Deep Dive into AWS Managed Apache...
0
2024-06-30T03:02:21
https://dev.to/virajlakshitha/effortlessly-orchestrating-workflows-in-the-cloud-a-deep-dive-into-aws-managed-apache-airflow-3e4b
![topic_content](https://cdn-images-1.medium.com/proxy/1*hXIV3K77zDbI0B5vuV_X3A.png) # Effortlessly Orchestrating Workflows in the Cloud: A Deep Dive into AWS Managed Apache Airflow ### Introduction In today's rapidly evolving technological landscape, businesses and organizations increasingly rely on intricate workflows encompassing a myriad of tasks, ranging from data processing and analysis to machine learning model training and deployment. Orchestrating these workflows efficiently and reliably is paramount to achieving operational agility and maximizing productivity. Enter Apache Airflow, an open-source platform purpose-built for precisely this task. At its core, Airflow empowers users to define, schedule, and monitor workflows programmatically, offering a robust and scalable solution for managing complex data pipelines. However, deploying and managing Airflow on-premises can introduce its own set of challenges, often demanding significant investment in infrastructure, monitoring, and maintenance. This is where AWS Managed Apache Airflow (MWAA) steps in as a game-changer. MWAA is a fully managed service that simplifies the deployment and operation of Apache Airflow in the AWS cloud. With MWAA, you can focus on building and orchestrating your workflows without the burden of managing the underlying infrastructure. ### Understanding AWS Managed Apache Airflow (MWAA) At its heart, MWAA leverages AWS's robust infrastructure to provide a highly available and scalable Airflow environment. Let's break down the key components and features that make MWAA a compelling choice for workflow orchestration: * **Fully Managed Service:** AWS takes care of provisioning, patching, and scaling the underlying infrastructure, freeing you from operational overhead. * **Security:** MWAA integrates seamlessly with AWS security services such as AWS Identity and Access Management (IAM) for granular control over access and permissions. 
* **Scalability and Availability:** MWAA automatically scales your Airflow environment based on workload demands, ensuring high availability and responsiveness. * **Monitoring and Logging:** Integrate with Amazon CloudWatch for comprehensive monitoring and logging of your Airflow deployments. * **Cost-Effectiveness:** Pay only for the resources you consume, making MWAA a cost-effective solution compared to managing your own Airflow infrastructure. ### Practical Use Cases for MWAA 1. **ETL Pipelines** Extract, Transform, Load (ETL) processes form the backbone of many data-driven applications. MWAA excels in orchestrating these pipelines, allowing you to define tasks for data extraction from various sources, apply transformations, and load the processed data into target data stores. **Example:** Consider a scenario where you need to ingest data from multiple sources like Amazon S3, process it using AWS Glue or Amazon EMR, and then load it into Amazon Redshift for analytics. MWAA can orchestrate this entire pipeline, ensuring that tasks are executed in the correct order and dependencies are met. 2. **Machine Learning Model Training and Deployment** The iterative nature of machine learning workflows demands robust orchestration. MWAA streamlines this process by enabling you to define tasks for data preprocessing, model training, hyperparameter tuning, model evaluation, and deployment. **Example:** Imagine you need to train a machine learning model using Amazon SageMaker. Your workflow might involve data retrieval from S3, data preprocessing using AWS Glue, model training with SageMaker, hyperparameter optimization, and model deployment to a SageMaker endpoint. MWAA can orchestrate these tasks, ensuring a seamless and reproducible ML workflow. 3. **Infrastructure Automation** MWAA can also orchestrate infrastructure-related tasks, such as provisioning and configuring AWS resources. 
This can be particularly useful for automating deployments, scaling resources, or performing system updates. **Example:** Suppose you need to automate the deployment of a new application environment on AWS. Your workflow might involve provisioning EC2 instances, configuring security groups, deploying applications using AWS CodeDeploy, and updating DNS records. MWAA can manage this complex orchestration, ensuring consistent and repeatable deployments. 4. **Data Warehousing and Analytics** In data warehousing scenarios, MWAA can orchestrate data ingestion, transformation, and loading processes into your data warehouse. It can also schedule and manage the execution of analytical queries and reports. **Example:** Consider a scenario where you need to load data from various transactional databases into Amazon Redshift. MWAA can orchestrate the data extraction, transformation, and loading process, ensuring your data warehouse is populated with up-to-date information. 5. **Serverless Workflow Orchestration** MWAA seamlessly integrates with AWS serverless services, making it ideal for orchestrating serverless workflows. You can trigger AWS Lambda functions, manage AWS Step Functions state machines, and interact with other serverless offerings. **Example:** Imagine a scenario where you have a serverless application that processes images uploaded to S3. MWAA can trigger Lambda functions for image processing, invoke Step Functions for workflow management, and store results in DynamoDB, providing a robust orchestration layer for your serverless architecture. 
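Across these use cases, the core value MWAA provides is dependency ordering: each task runs only after its upstream tasks complete. The plain-Python sketch below illustrates that ordering semantics for the ETL example; it deliberately avoids a real Airflow dependency, and the task names and data are hypothetical stand-ins, not MWAA's API:

```python
from graphlib import TopologicalSorter

# Hypothetical stand-ins for the ETL steps described above (S3 -> Glue -> Redshift).
def extract_from_s3() -> str:
    return "raw records"

def transform_with_glue(data: str) -> str:
    return data.upper()

def load_into_redshift(data: str) -> str:
    return f"loaded: {data}"

# Upstream dependencies, as an Airflow DAG would declare with >> or set_upstream:
# transform runs after extract, load runs after transform.
DEPENDENCIES = {"transform": {"extract"}, "load": {"transform"}}

def run_pipeline() -> list[str]:
    results: dict[str, str] = {}
    # Topological order guarantees every task's upstreams finished first.
    order = list(TopologicalSorter(DEPENDENCIES).static_order())
    for task in order:
        if task == "extract":
            results[task] = extract_from_s3()
        elif task == "transform":
            results[task] = transform_with_glue(results["extract"])
        elif task == "load":
            results[task] = load_into_redshift(results["transform"])
    return order

print(run_pipeline())  # extract precedes transform, which precedes load
```

In a real MWAA environment the same structure would be expressed as an Airflow DAG file uploaded to the environment's S3 bucket, with operators such as Glue or Redshift operators in place of these plain functions.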
### Comparison with Other Solutions While MWAA offers a compelling solution for workflow orchestration, it's essential to consider other available options: | Feature | AWS MWAA | Google Cloud Composer | Azure Managed Airflow (Data Factory) | Self-Hosted Airflow | |--------------------------|--------------------------|------------------------|-----------------------|-------------------| | Managed Service | Yes | Yes | Yes | No | | Integration with Ecosystem | AWS Services | Google Cloud Services | Azure Services | Customizable | | Security | AWS IAM | Google Cloud IAM | Azure AD | Customizable | | Scalability | Auto-Scaling | Auto-Scaling | Auto-Scaling | Manual | | Cost | Pay-as-you-go | Pay-as-you-go | Pay-as-you-go | Infrastructure Cost| ### Conclusion AWS Managed Apache Airflow provides a powerful and convenient solution for orchestrating complex workflows in the cloud. Its fully managed nature, seamless integration with the AWS ecosystem, and robust security features make it an ideal choice for organizations of all sizes. By abstracting the complexities of managing Airflow, MWAA empowers you to focus on building and optimizing your data pipelines and workflows, ultimately accelerating your journey to data-driven insights and operational efficiency. ## Advanced Use Case: Building a Real-time Machine Learning Pipeline with MWAA Let's now explore a more advanced use case: building a real-time machine learning pipeline that leverages MWAA's orchestration capabilities in conjunction with other AWS services. **Scenario:** Imagine you're building a fraud detection system for a financial institution. The system needs to analyze real-time transaction data to identify potentially fraudulent activities and trigger alerts for immediate action. **Architecture:** 1. **Data Ingestion:** Real-time transaction data streams into Amazon Kinesis Data Streams. 2.
**Data Preprocessing:** An AWS Lambda function, triggered by Kinesis, performs real-time data preprocessing, such as data cleaning, transformation, and feature engineering. 3. **Fraud Detection Model:** A pre-trained machine learning model, deployed as a SageMaker endpoint, receives the preprocessed transaction data from Lambda. 4. **Real-time Inference:** The SageMaker endpoint performs real-time inference on the incoming data, generating predictions about the likelihood of fraud. 5. **Alerting and Action:** If the model predicts a high probability of fraud, an AWS Lambda function is triggered to initiate appropriate actions, such as sending alerts to a security team or blocking the transaction. **MWAA's Role:** * **Pipeline Orchestration:** MWAA orchestrates the entire pipeline, ensuring tasks are executed in the correct order. It manages dependencies between data ingestion, preprocessing, model inference, and alerting. * **Monitoring and Retraining:** MWAA can monitor the performance of the machine learning model in real time, tracking metrics like precision, recall, and F1-score. Based on these metrics, MWAA can trigger automated model retraining jobs in SageMaker to maintain model accuracy over time. * **Scalability and Fault Tolerance:** MWAA ensures the pipeline can scale to handle fluctuating transaction volumes and provides fault tolerance by automatically restarting failed tasks or scaling resources as needed. **Benefits:** * **Real-time Fraud Detection:** By leveraging MWAA for orchestration and real-time AWS services like Kinesis and Lambda, this architecture enables real-time fraud detection, minimizing potential losses. * **Automated Model Management:** MWAA's ability to monitor model performance and trigger retraining ensures the accuracy and reliability of the fraud detection system. 
* **Scalability and Reliability:** The use of managed AWS services and MWAA's orchestration capabilities ensures the pipeline can scale to handle large volumes of data and provides high availability for mission-critical operations. This advanced use case illustrates how MWAA's robust orchestration features, coupled with the power of the AWS ecosystem, can be leveraged to build sophisticated and mission-critical applications. By embracing a cloud-native approach to workflow management, organizations can achieve unprecedented levels of agility, efficiency, and scalability in today's data-driven world.
virajlakshitha
1,905,137
Using JSONB in PostgreSQL
Introduction JSONB, short for JSON Binary, is a data type developed from the JSON data...
0
2024-06-30T03:00:00
https://howtodevez.blogspot.com/2024/04/using-jsonb-in-postgresql.html
postgres, beginners, datascience, data
Introduction ------------ **JSONB**, short for **JSON Binary**, is a data type developed from the **JSON** data type and supported by **PostgreSQL** since version 9.4 (the plain **JSON** type dates back to 9.2). The key difference between **JSON** and **JSONB** lies in how they are stored. **JSONB** uses a decomposed binary storage format and resolves the limitations of the **JSON** data type by avoiding re-parsing on every read and by supporting indexing (the trade-off is slightly slower writes). If you want to know how to install PostgreSQL and learn some basic knowledge about it, check out [this article](https://howtodevez.blogspot.com/2024/03/installing-postgresql-with-docker.html). Defining a Column ----------------- The query below will create a table with a column of the JSONB data type, which is very simple: ```sql CREATE TABLE table_name ( id int, name text, info jsonb ); ``` Inserting Data -------------- To insert data into a table with a JSONB column, enclose the content within single quotes ('') like this: ```sql INSERT INTO table_name VALUES (1, 'name', '{"text": "text value", "boolean_value": true, "array_value": [1, 2, 3]}'); ``` We can also insert an array of values in a similar way: ```sql INSERT INTO table_name VALUES (1, 'name', '[1, "text", false]'); ``` Query data ---------- To query data from a column with the JSONB data type, there are a few ways: ```sql -- get the whole JSONB value SELECT info FROM table_name; -- get a specific field of a JSON object as text SELECT info->>'field_name' AS field FROM table_name; -- select an element from an array (zero-based; subscript syntax requires PostgreSQL 14+) SELECT info[2] FROM table_name; -- ->> returns text, so compared values must be quoted (cast, e.g. (info->>'field_name')::int, for numeric comparison) SELECT * FROM table_name WHERE info->>'field_name' >= '20'; -- count rows where the field exists SELECT count(*) FROM table_name WHERE info ? 'field_name'; ``` Creating an Index ----------------- As mentioned earlier, one of the key differences between JSON and JSONB is that JSONB supports creating indexes, which allows for faster data access when dealing with large amounts of data.
Here's how you can create an index: ```sql -- create index for info with field 'value' CREATE INDEX index_name ON table_name ((info->>'value')); ``` To check the effectiveness of the Index, you should insert a large amount of data (around 10,000 records) to see the improvement in query speed before and after indexing. Conclusion ---------- Through this article, I hope you have gained more understanding about JSONB and how to create, insert, and query JSONB data in PostgreSQL.  **_See you again in the next articles. Happy coding!_** **_If you found this content helpful, please visit [the original article on my blog](https://howtodevez.blogspot.com/2024/04/using-jsonb-in-postgresql.html) to support the author and explore more interesting content._** <a href="https://howtodevez.blogspot.com/2024/03/sitemap.html" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Blogger-FF5722?style=for-the-badge&logo=blogger&logoColor=white" width="36" height="36" alt="Blogspot" /></a><a href="https://dev.to/chauhoangminhnguyen" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/dev.to-0A0A0A?style=for-the-badge&logo=dev.to&logoColor=white" width="36" height="36" alt="Dev.to" /></a><a href="https://www.facebook.com/profile.php?id=61557154776384" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/Facebook-1877F2?style=for-the-badge&logo=facebook&logoColor=white" width="36" height="36" alt="Facebook" /></a><a href="https://x.com/DavidNguyenSE" target="_blank" rel="noreferrer"><img src="https://img.shields.io/badge/X-000000?style=for-the-badge&logo=x&logoColor=white" width="36" height="36" alt="X" /></a>
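An addendum to the indexing section above: besides an expression index on a single field, the usual way to index a JSONB column as a whole is a GIN index, which accelerates the containment (`@>`) and existence (`?`) operators shown earlier. The index name below is illustrative:

```sql
-- GIN index over the whole JSONB column
CREATE INDEX table_name_info_gin ON table_name USING GIN (info);

-- queries this index can speed up:
SELECT * FROM table_name WHERE info @> '{"text": "text value"}';
SELECT count(*) FROM table_name WHERE info ? 'field_name';
```

A plain GIN index supports `@>`, `?`, `?|`, and `?&`; the `jsonb_path_ops` operator class (`USING GIN (info jsonb_path_ops)`) produces a smaller, faster index but supports only `@>`.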
chauhoangminhnguyen
1,906,274
What is an Observability Pipeline?
Key Takeaways Observability pipelines are essential for managing the ever-growing volume of...
0
2024-06-30T02:56:05
https://dev.to/rickysarora/what-is-an-observability-pipeline-2n56
**Key Takeaways** Observability pipelines are essential for managing the ever-growing volume of telemetry data (logs, metrics, traces) efficiently, enabling optimal security, performance, and stability within budget constraints. They address challenges such as data overload, legacy architectures, rising costs, compliance and security risks, noisy data, and the need for dedicated resources. Observo.ai's AI-powered Observability Pipeline offers solutions like data optimization and reduction, smart routing, anomaly detection, data enrichment, a searchable observability data lake, and sensitive data discovery, significantly reducing costs and improving incident resolution times. **Introduction** [An Observability Pipeline](https://www.observo.ai) is a transformative tool for managing complex system data, optimizing security, and enhancing performance within budget. Observo.ai revolutionizes this with AI, slashing costs and streamlining data analysis to empower Security and DevOps teams. **Q: What is an Observability Pipeline?** A: An Observability Pipeline, sometimes called a Telemetry Pipeline, is a sophisticated system designed to manage, optimize, and analyze telemetry data (like logs, metrics, traces) from various sources. It helps Security and DevOps teams efficiently parse, route, and enrich data, enabling them to make informed decisions, improve system performance, and maintain security within budgetary constraints. Observo.ai elevates this concept with AI-driven enhancements that significantly reduce costs and improve operational efficiency. **Overview** Observability is the practice of asking questions about the inner workings of a system or application based on the data it produces. It involves collecting, monitoring, and analyzing various data sources (logs, metrics, traces, etc.) to comprehensively understand how the system behaves, its performance, and potential security threats.
Another important practice is telemetry, which involves collection and transmission of data from remote sources to a central location for monitoring, analysis and decision making. Logs, metrics, events and traces are known as the four pillars of Observability. The telemetry data collected and analyzed by Security and DevOps teams in their observability efforts is growing at an unrelenting pace – for some organizations, as much as 35% year over year. That means that costs to store, index, and process data are doubling in a little more than 2 years. Some of our larger customers spend tens of millions of dollars a year just to store and process this data. An observability pipeline or a telemetry pipeline can help Security and DevOps teams get control over their telemetry data such as security event logs, application logs, metrics, traces et al. It allows them to choose the best tools to analyze and store this data for optimal security, performance, and stability within budget requirements. Observability pipelines parse and shape data into the right format, route it to the right SIEM and Observability tools, optimize it by reducing low-value data and enriching it with more context, and empower these teams to make optimal choices while dramatically reducing costs. **Challenges of Observability** - Data Overload: Security and DevOps teams need help to keep pace with the growth of telemetry data used for observability efforts. This leads them to make sub-optimal choices about what data to analyze, how heavily to sample, and how long to retain data for later security investigations and compliance. Budget is often the culprit driving decisions about how much data to analyze, but it can impair enterprise security, performance, and stability if these teams lack a complete view of their environment. 
- Legacy Architectures: Traditional architectures, relying on static, rule-based methods and indexing for telemetry data processing and querying, struggle to adapt to the soaring data volumes and dynamic nature of modern systems. As the scale of data expands, these static methods fail to keep pace, endangering real-time analysis and troubleshooting. Log data constantly changes with new releases and services. Static systems need constant tuning to stay on top of this change, which can be time-consuming and difficult without seasoned professionals at the helm.
- Rising Costs: As telemetry data volumes grow, so do the costs of storing, processing, and indexing this data. Many customers report that storage and compute costs are the same or more than their SIEM and log analytics license costs. Because budgets remain flat or decrease, rising costs force decisions about which data can be analyzed and stored - jeopardizing security, stability, and compliance goals.
- Compliance and Security Risks: As telemetry data grows, it becomes increasingly challenging to keep personally identifiable information (PII) secure. Recent reports suggest that data breaches from observability systems have increased by 48% in just the last two years. Manual efforts to mask this data rely on in-depth knowledge of data schemas to try to protect PII. Unfortunately, those efforts fall short. PII like social security numbers, credit card numbers, and personal contact information is often found in open text fields, not just the fields you would expect. This leaves organizations vulnerable to data breaches in new and troubling ways and makes compliance efforts even more challenging.
- Noisy Data Overwhelming Useful Signal: About 80% of log data has zero analytical value, yet most teams are paying to analyze all of it. This adds to the cost challenges we’ve mentioned and limits the flexibility of getting a comprehensive and holistic view of observability into core systems.
All of this noise also makes SIEM and Observability systems work much harder. It’s easy to find a needle in a really small haystack. If that haystack gets really big you might need more people to help you find that one important needle. The same is true for SIEM and log management tools. Too much data requires much more CPU power to index and search through it and costs 80% more than it should to store it.

- Lack of Dedicated Resources: Most of our customers deploy large teams to tackle these challenges before using Observo. They develop an intimate knowledge of the telemetry data and tools designed to optimize observability. This draws them away from working on proactive efforts to improve security, performance, reliability, and stability and other projects that bring a lot of value to their organization. The most skilled and knowledgeable of these teams also leave over time. If the systems are heavily reliant on their expertise, this puts the strength of observability in jeopardy.

**The Observo.ai Observability Pipeline**

Observo.ai has developed an AI-powered Observability pipeline to address these challenges. Our customers have reduced observability costs by 50% or more by optimizing telemetry data such as security event logs, application logs, metrics, and others, and by routing data to the most cost-effective destination for storage and analysis. By optimizing and enriching data with AI-generated sentiment analysis, our customers have cut the time to identify and resolve incidents by more than 40%. Built in Rust, Observo.ai's Observability pipelines are extremely fast and designed to handle the most demanding workloads. Here are some of the key ways our solution addresses your biggest observability challenges.

- Data Optimization and Reduction: We have found that only 20% of log data has value. Our Smart Summarizer can reduce the volume of data types such as VPC Flow Logs, Firewall logs, OS, CDN, DNS, Network devices, Cloud infrastructure and Application logs by more than 80%. Teams can ingest more quality data while reducing their overall ingest and reduce storage and compute costs. Many customers reduce their total observability costs by 50% or more.
- Smart Routing: Observo.ai’s observability pipeline transforms data from any source to any destination, giving you complete control over your data. We deliver data in the right format to the tool or storage location that makes the most sense. This helps customers avoid vendor lock-in by giving them choices about how to store, index, and analyze their data.
- Anomaly Detection: The Observo.ai observability pipeline learns what is normal for any given data type. The Observo.ai Sentiment Engine identifies anomalies and can integrate with common alert/ticketing systems like ServiceNow, PagerDuty, and Jira for real-time alerting. Customers have lowered mean time to resolve (MTTR) incidents by 40% or more.
- Data Enrichment: Observo enriches data to add context. Observo.ai’s models assign “sentiment” based on pattern recognition, or add 3rd party data like Geo-IP and threat intel. Sentiment dashboards add valuable insights and help reduce alert fatigue. By adding context, teams achieve faster, more precise searches and eliminate false alarms that can mask real ones.
- Searchable, Full-Fidelity, Low-cost Observability Data Lake: The Observo.ai observability pipeline helps you create a full-fidelity observability data lake in low-cost cloud storage. We store data in Parquet file format, making it highly compressed and searchable. You can use natural language queries, so you don’t need to be a data scientist to retrieve insights from your observability stack. Storing this data in your SIEM or log management tool can cost as much as a hundred times more than in an Observo.ai data lake. This helps you retain more data, for longer periods of time, spend less money, and be a lot more flexible.
- Sensitive Data Discovery: Observo.ai proactively detects sensitive and classified information in telemetry data flowing through the Observability pipeline, allowing you to secure it through obfuscation or hashing wherever it sits. Observo.ai uses pattern recognition to discover all sensitive data, even if it’s not where you’d expect it to be or in fields designated for PII.

**Use Cases**

There are numerous use cases for Observability pipelines and how they can help organizations solve challenges. Primarily, they are a combination of the challenges mentioned above. Here are some examples that we have seen with organizations of various sizes.

- Get data from a Splunk forwarder, optimize the data and send it to Splunk. Route the raw data in optimized Parquet schema to a data lake on AWS S3
- Ingest Cisco Firewall events and Windows event logs from a Kafka topic. Send the optimized data to Azure Sentinel and full-fidelity data to a Snowflake data lake
- Collect logs from an OpenTelemetry agent, reduce the noise and send the optimized data to Datadog
- Receive data from Cribl Logstream, reduce the data volume, mask PII data and route it to Exabeam. A full-fidelity copy in JSON format is sent to an Azure Blob Storage data lake
- Ingest VPC Flow logs and CloudTrail events from AWS Kinesis, reduce the noise and send optimized data to Elasticsearch

**Conclusion**

An observability pipeline is a critical tool for cutting costs, managing data growth, and giving you choices about what data to analyze, which tools to use, and how to store it. An AI-powered observability pipeline elevates observability with much deeper data optimization and automated pipeline building, and makes it much easier for anyone in your organization to derive value without having to be an expert in the underlying analytics tools and data types. Observo.ai helps you break free from static, rules-based pipelines that fail to keep pace with the ever-changing nature of your data. Observo.ai helps you automate observability with a pipeline that constantly learns and evolves with your data.

**Learn More**

For more information on how you can save 50% or more on your SIEM and observability costs with the AI-powered Observability Pipeline, read the Observo.ai white paper, [Elevating Observability with AI](https://www.observo.ai/whitepaper-elevating-observability-ai-powered-pipelines).
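The reduction mechanics described above can be sketched generically. A toy illustration (not Observo.ai's actual logic — the event shape, severity rule, and drop policy are invented for the example):

```javascript
// Toy telemetry-reduction step: drop low-value events before they reach a SIEM,
// but keep counts of what was dropped so a summary of the noise survives.
function reducePipeline(events) {
  const kept = [];
  const droppedBySeverity = {};
  for (const event of events) {
    if (event.severity === 'DEBUG' || event.severity === 'INFO') {
      // Treated as low-value noise in this sketch
      droppedBySeverity[event.severity] = (droppedBySeverity[event.severity] || 0) + 1;
      continue;
    }
    kept.push(event); // high-value signal is forwarded unchanged
  }
  return { kept, droppedBySeverity };
}

const sample = [
  { severity: 'DEBUG', msg: 'cache hit' },
  { severity: 'INFO', msg: 'request ok' },
  { severity: 'ERROR', msg: 'upstream timeout' },
];
console.log(reducePipeline(sample).kept.length); // 1 — only the ERROR event is forwarded
```

A real pipeline would apply far richer rules (sampling, deduplication, enrichment), but the shape — filter, summarize, forward — is the same.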
rickysarora
1,906,273
How I created reusable React Icon Component using react-icons library in an AstroJs Project.
In the past weeks lately, I have been focused on building clean landing page websites using AstroJs...
0
2024-06-30T02:43:37
https://dev.to/mrpaulishaili/how-i-created-reusable-react-icon-component-using-react-icons-library-in-an-astrojs-project-nk4
astro, webdev, frontend, javascript
In the past few weeks, I have been focused on building clean landing page websites using the AstroJs framework. One of the difficulties I often face, however, is the limitation of the icon libraries available in astro-icons, compared to the react-icons library. So here's what I decided to do:

```js
import React from 'react';
import * as FaIcons from 'react-icons/fa';
import * as MdIcons from 'react-icons/md';
import * as AiIcons from 'react-icons/ai';
import * as GiIcons from 'react-icons/gi';
import * as IoIcons from 'react-icons/io';
import * as CiIcons from "react-icons/ci";
import * as FiIcons from "react-icons/fi";
import * as LuIcons from "react-icons/lu";

const iconSets = {
  fa: FaIcons,
  md: MdIcons,
  ai: AiIcons,
  gi: GiIcons,
  io: IoIcons,
  ci: CiIcons,
  fi: FiIcons,
  lu: LuIcons,
};

const Icon = ({ name, set = 'fa', size = 24, color = 'currentColor', className = '' }) => {
  const IconComponent = iconSets[set][name];

  if (!IconComponent) {
    console.warn(`Icon ${name} from set ${set} not found`);
    return null;
  }

  return <IconComponent size={size} color={color} className={className} />;
};

export default Icon;
```

I then imported this component as `IconX` (named that way so it doesn't conflict with the Icon component from astro-icons) into the components where I wanted to use it.

```js
<IconX size={24} set="ci" name={"CiSearch"} client:load />
```

Now I have access to the thousands of icons provided by react-icons right in my AstroJs project.

Do like, share and follow for more. Enjoy the read.
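One tweak worth considering (my suggestion, not part of the original component): `iconSets[set][name]` throws a TypeError when an unknown `set` is passed, before the `if (!IconComponent)` guard can run. Pulling the lookup into a small helper keeps the warn-and-return-null behavior for both failure modes:

```javascript
// Hypothetical helper: resolve an icon component without throwing on a bad set name.
function resolveIcon(iconSets, set, name) {
  const icons = iconSets[set];
  if (!icons) {
    console.warn(`Icon set ${set} not found`);
    return null;
  }
  const IconComponent = icons[name];
  if (!IconComponent) {
    console.warn(`Icon ${name} from set ${set} not found`);
    return null;
  }
  return IconComponent;
}
```

Inside the component, `const IconComponent = resolveIcon(iconSets, set, name);` then replaces the direct indexing.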
mrpaulishaili
1,905,595
Boost Your Angular App's Speed: Code Splitting Strategies
Introduction In today's blog post we'll learn how you can boost your Angular app's speed....
0
2024-06-30T02:39:57
https://dev.to/wirefuture/boost-your-angular-apps-speed-code-splitting-strategies-5e5a
angular, webdev, javascript, typescript
## Introduction

In today's blog post we'll learn how you can boost your Angular app's speed. In Angular, performance optimization is a crucial factor for a great user experience. The most powerful technique in your toolkit is code splitting. This strategy involves breaking your application into smaller, much more manageable chunks that load only as and when required. By implementing code splitting based on routes, modules and lazy loading you can reduce initial loading times and improve performance.

## What is Code Splitting?

In [Angular development](https://wirefuture.com/angular-development), code splitting is about loading parts of your application on demand rather than all at once. This not only speeds up the initial load but also conserves resources by loading only what's necessary for the current view or feature.

## Tip 1: Route-Based Code Splitting

Angular's routing mechanism supports lazy loading of modules based on routes. This means modules are loaded only when the user navigates to a corresponding route. Here's how you can set it up:

```
const routes: Routes = [
  {
    path: 'dashboard',
    loadChildren: () => import('./dashboard/dashboard.module').then(m => m.DashboardModule)
  },
  {
    path: 'profile',
    loadChildren: () => import('./profile/profile.module').then(m => m.ProfileModule)
  },
  {
    path: 'settings',
    loadChildren: () => import('./settings/settings.module').then(m => m.SettingsModule)
  }
];
```

In this example, each route (dashboard, profile, settings) loads its module (DashboardModule, ProfileModule, SettingsModule) only when the user navigates to that specific route, keeping the initial bundle size small.

## Tip 2: Component-Based Code Splitting

For more granular control, you can implement component-based code splitting using Angular's ComponentFactoryResolver. This approach allows you to dynamically load components when they are needed, rather than loading them upfront.
Here's a simplified example:

```
import { Component, ComponentFactoryResolver, ViewChild, ViewContainerRef } from '@angular/core';

@Component({
  selector: 'app-dynamic-component',
  template: '<ng-template #dynamicComponentContainer></ng-template>'
})
export class DynamicComponentComponent {
  // Grab the <ng-template> as the insertion point for the dynamic component
  @ViewChild('dynamicComponentContainer', { read: ViewContainerRef, static: true })
  container!: ViewContainerRef;

  constructor(private resolver: ComponentFactoryResolver) {}

  loadComponent() {
    import('./path/to/your/dynamic-component').then(({ DynamicComponent }) => {
      const factory = this.resolver.resolveComponentFactory(DynamicComponent);
      this.container.createComponent(factory);
    });
  }
}
```

This method loads components dynamically, enhancing performance by loading only the necessary components at runtime. Note that since Angular 13, ComponentFactoryResolver is deprecated: you can pass the component type directly to ViewContainerRef.createComponent instead.

## Tip 3: Preloading Strategies

To strike a balance between immediate and deferred loading, Angular offers preloading strategies. These strategies allow you to load critical modules in the background after the initial content has loaded, ensuring a seamless user experience during navigation. Here's how you can configure it in your routing module:

```
import { NgModule, PreloadAllModules } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [
  // your routes here
];

@NgModule({
  imports: [RouterModule.forRoot(routes, { preloadingStrategy: PreloadAllModules })],
  exports: [RouterModule]
})
export class AppRoutingModule { }
```

By specifying PreloadAllModules, Angular preloads all lazy-loaded modules in the background after the main content has loaded, optimizing subsequent navigation.

## Tip 4: Bundle Analysis and Optimization

To make sure your code splitting efforts deliver maximum performance benefits, use tools like Angular CLI's bundle analyzer. These tools analyze your application's bundle size, identify areas for further optimization, and recommend effective code splitting strategies.

## Conclusion

Code splitting in Angular is a game changer for optimizing your application performance. Whether through route-based lazy loading, component-based splitting, preloading strategies or bundle optimization, these techniques let you build faster, more responsive Angular applications. Try out these strategies, track performance metrics and refine your approach to stay ahead of the curve in Angular development.
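Tips 1–3 all rest on the same primitive: dynamic `import()`, which returns a Promise for the module and gives the bundler a split point. A framework-free sketch of the mechanism (using a Node built-in as a stand-in for a feature module):

```javascript
// Dynamic import() resolves to the module's exports. Bundlers such as the Angular CLI's
// see the call and emit the target as a separate chunk, fetched only when this code runs.
async function loadFeature() {
  const path = await import('node:path'); // stand-in for e.g. './dashboard/dashboard.module'
  return path.join('reports', 'q1');
}

loadFeature().then((p) => console.log(p));
```

`loadChildren: () => import(...)` in Tip 1 is exactly this pattern, with Angular's router awaiting the Promise before activating the route.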
tapeshm
1,906,271
My Disney Internship: Month One
I won’t lie, I was nervous to start my internship at Disney. Why me? I’ve only been learning to code...
0
2024-06-30T02:31:35
https://dev.to/dlewisdev/my-disney-internship-month-one-1akk
ios, swift
I won’t lie, I was nervous to start my internship at Disney. Why me? I’ve only been learning to code for a year, would I be able to produce and succeed in a company this large and successful? I had no idea what to expect, and I psyched myself out to the point where I wasn’t sure I’d show up for orientation. Thankfully, all my anxiety was lifted shortly after orientation began. Disney does a wonderful job of making their interns feel the magic. After a long day of being reminded of the history of Disney, the impact the company has on their guests, and the part I’ll get to play in creating the magic for others, I was more excited than nervous to meet my team and get to work. I’m a native iOS developer. I started learning Swift and SwiftUI in June of last year, shortly after WWDC ‘23. My internship, however, is on a backend team. I was told which languages to expect to work in after I secured the position, and I had little to no experience at all in them. I wasn’t expected to, which was great, but as someone who deeply cares about doing good work this was not ideal. One of my quotes to live by is “Never disrespect someone’s belief in you.” I knew that Disney internships weren’t easy to earn. I had to remind myself that I was chosen because the interviewers genuinely believed I could do the job. When I finally got to my first day, I received incredible news. The backend team had been assigned an iOS project, and they had no other iOS developers on the team. Not only would I get to spend my internship working on an iOS project, but it’s a project I get to own, help shape, and provide valued input on. I couldn’t even imagine a better situation for my growth and development. I was ecstatic until I found out that the project was written entirely in Objective-C and UIKit. I’d spent the past year avoiding both Objective-C and UIKit like they have cooties. I had never even seen a single line of Objective-C code and UIKit made me feel nauseated. Unwell. 
Unpleasant physical reactions. Fear set in immediately and I went right back to worrying if I’d be able to succeed this summer. After yet another pep talk from my partner, I realized how much of a gift this opportunity was. Objective-C code is still prevalent in the industry, especially in more established codebases. Looking at job listings, many of them give bonus points for familiarity with Objective-C. I may not like it, but SwiftUI adoption isn’t yet at the point where I can have zero experience with UIKit and be competitive in this job market. Earning a paycheck to add this legacy language to my bag while being able to ask questions and learn from experienced developers on other teams is an absolute blessing. Several weeks later, I’m genuinely appreciative of this incredible opportunity. I’ve been able to make my way around the codebase because reading Objective-C isn’t *that* much different from reading Swift. My team started me out with simple tasks and that’s helped me get comfortable with the basics like string manipulation, header and method files, pointers, and the many syntax differences coming from Swift. A resource that has been invaluable to me is Paul Hudson’s book [Objective-C for Swift Developers](https://www.hackingwithswift.com/store/objective-c-for-swift-developers). That’s not an affiliate link or anything like that. I don’t get paid to promote it. It’s been genuinely helpful to me and if you’re a Swift developer trying to learn Objective-C I think it’ll be helpful to you. I thought I’d be miserable working with Objective-C, but as a good friend told me, this might be the most important project of my career. I get to learn a legacy language with the expectations (learn) and pressure (none) of an intern, and I don’t have to feel bad asking questions because my team is, and I don’t want to be hyperbolic, the greatest, brightest, and most supportive team to ever grace this godforsaken planet. 
I’m having too much fun for this to be real life, learning and growing rapidly in this environment, and I can’t wait to keep going and see how this experience impacts the rest of my journey.
dlewisdev
1,906,270
SvelteKit with Notion CMS
Original: https://codingcat.dev/podcast/sveltekit-with-notion-cms
26,111
2024-06-30T02:28:33
https://codingcat.dev/podcast/sveltekit-with-notion-cms
webdev, javascript, beginners, podcast
Original: https://codingcat.dev/podcast/sveltekit-with-notion-cms {% youtube https://youtu.be/OiapPdbBrpY?si=hbHH8daKr3y4Zd3c %}
codercatdev
1,906,269
JS Runtime / Execution context
JavaScript runtime is the environment or engine required to run JavaScript code. These runtime...
0
2024-06-30T02:27:28
https://dev.to/bekmuhammaddev/js-runtime-execution-context-3900
english, runtime, javascript
A JavaScript runtime is the environment or engine required to run JavaScript code. These runtime environments parse and execute JavaScript code. JavaScript is the only programming language that runs natively in the browser.

Runtime types and operating environments:

**1 - Browsers**

Browsers are the main runtime environment of JavaScript. Each browser has its own JavaScript engine:

- Google Chrome: V8 engine.
- Mozilla Firefox: SpiderMonkey engine.
- Safari: JavaScriptCore (Nitro) engine.
- Microsoft Edge: Chakra in older versions, V8 in newer versions.

In the browser runtime environment, JavaScript code can be used together with HTML and CSS.

**2 - Node.js**

Node.js is a runtime environment for running JavaScript on the server side. It is based on the V8 engine and allows JavaScript code to be executed outside the browser.

_Node.js technology runs JavaScript code outside the browser._

- Server scripts: HTTP servers, APIs, etc.
- Asynchronous performance: the asynchronous nature of Node.js makes it highly efficient and fast.
- Large ecosystem: there are many modules and libraries for Node.js.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/19w49f09e0r11gi5ic6q.png)

**Execution Context**

In JavaScript, the execution context is the environment that contains all the information necessary to execute the code. Each piece of executable code has its own execution context. An execution context contains the following elements:

- Variable Object: all variables, functions, and arguments are stored here.
- Scope Chain: the scope chain represents the search order of variables and functions. Each execution context has its own scope chain, which contains references to the outer (parent) execution contexts.
- this Keyword: the this keyword can refer to different objects depending on the execution context.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/18mlpa8ipdwda924mosb.png)

**Types of Execution Context**

There are 2 main types of execution context in JavaScript:

1 - Global Execution Context:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8mhaz42tc95xmjpfmpgi.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ycwljoim7y4e531e5c8.png)

2 - Function Execution Context:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hcghlkzrznkke6pdh7ag.png)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zu1plyy26y29mzbt6sm8.png)

**JavaScript in global execution vs function execution**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xy08j6aa2qmzl1q4abe3.png)

Execution context:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ukmip4nz965y8dw5k8z3.png)

**Execution Context**

When JavaScript runs in a browser, it cannot be understood directly, so it must be converted into a machine-readable form. When the browser's JavaScript engine encounters JavaScript code, it "translates" the code we wrote and creates a special environment that controls its execution. This environment is called the execution context.

*An **execution context** can have **global scope** or **function scope**. When JavaScript first runs, it creates a **global scope**.*

*Next, JavaScript **parses** the code and stores the variable and function declarations **in memory**.*

*Finally, the code initializes the variables stored in memory.*

An execution context is a block of data opened by JavaScript for each block of code, which contains all the information needed for the currently running code — for example, the variables, the functions, and the this keyword.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rntetnscmgiu6ljtuqsi.png)

- Creation phase - variables declared with var are assigned **undefined**, variables declared with let are **uninitialized**, and function declarations are read into memory.
- Execution phase - variables are assigned their values and functions are called.
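The two phases described above can be observed directly. A small sketch (my example, runnable in any modern JavaScript engine):

```javascript
// Creation phase: `var` declarations are hoisted and pre-initialized to undefined,
// while `let` declarations are hoisted but left uninitialized (the temporal dead zone).
function creationPhaseDemo() {
  const results = {};
  results.varBeforeInit = a; // undefined — `var a` was hoisted during the creation phase
  try {
    b; // ReferenceError — `let b` stays uninitialized until its declaration executes
  } catch (e) {
    results.letThrows = e instanceof ReferenceError;
  }
  var a = 1; // execution phase: `a` is now assigned its value
  let b = 2; // execution phase: `b` leaves the temporal dead zone here
  return results;
}

console.log(creationPhaseDemo()); // { varBeforeInit: undefined, letThrows: true }
```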
bekmuhammaddev
1,906,262
Unified Approximation Theorem for Neural Networks
For any ( f \in \mathcal{F}(\mathbb{R}^n) ) and any ( \epsilon > 0 ), there exists a neural...
0
2024-06-30T02:17:35
https://dev.to/ramsi90/unified-approximation-theorem-for-neural-networks-2b23
For any $f \in \mathcal{F}(\mathbb{R}^n)$ and any $\epsilon > 0$, there exists a neural network $\mathcal{N}(\mathbf{x}; \theta)$ with parameters $\theta$ such that:

$$ \sup_{\mathbf{x} \in K} \left| f(\mathbf{x}) - \mathcal{N}(\mathbf{x}; \theta) \right| < \epsilon, $$

where $K \subset \mathbb{R}^n$ is compact.
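For one input dimension the theorem can be made constructive: a single-hidden-layer ReLU network realizing the piecewise-linear interpolant of $f$ achieves any desired $\epsilon$ by taking enough knots. A sketch (my illustration; the target $f(x) = x^2$ on $[0, 1]$ is an arbitrary choice):

```javascript
// N(x) = f(0) + Σ c_i · relu(x − x_i) with knots x_i = i/N interpolates f at the knots;
// for twice-differentiable f the sup-norm error over [0,1] shrinks like O(1/N²).
const relu = (x) => Math.max(0, x);

function buildNetwork(f, N) {
  const h = 1 / N;
  const units = [];
  let prevSlope = 0;
  for (let i = 0; i < N; i++) {
    const slope = (f((i + 1) * h) - f(i * h)) / h;
    units.push({ knot: i * h, c: slope - prevSlope }); // weight = change of slope at knot
    prevSlope = slope;
  }
  return (x) => units.reduce((acc, { knot, c }) => acc + c * relu(x - knot), f(0));
}

const f = (x) => x * x;
const net = buildNetwork(f, 100);

// Empirical sup-norm error over a fine grid
let supErr = 0;
for (let k = 0; k <= 1000; k++) {
  const x = k / 1000;
  supErr = Math.max(supErr, Math.abs(f(x) - net(x)));
}
console.log(supErr < 1e-4); // true — linear interpolation bounds the error by h²·max|f''|/8
```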
ramsi90
1,906,261
Centralized Exchange Development: A Comprehensive Guide
Introduction Cryptocurrencies have revolutionized the financial landscape, offering a...
27,673
2024-06-30T02:10:52
https://dev.to/rapidinnovation/centralized-exchange-development-a-comprehensive-guide-3dke
## Introduction

Cryptocurrencies have revolutionized the financial landscape, offering a new way of thinking about money, assets, and exchanges. Centralized exchanges (CEXs) play a pivotal role in this ecosystem, acting as the primary gateways for cryptocurrency trading and investment.

## What is Centralized Exchange Development?

Centralized Exchange Development refers to the process of creating a platform where users can trade cryptocurrencies or other assets in a controlled environment. These exchanges are managed by a central authority which oversees all transactions, ensuring security, liquidity, and compliance with regulatory frameworks.

## Types of Centralized Exchanges

Centralized exchanges can be categorized into three types:

## Benefits of Centralized Exchanges

Centralized exchanges offer several advantages, including enhanced liquidity, high trading volume, and user-friendly interfaces, making them a popular choice for both novice and experienced traders.

## Challenges in Centralized Exchange Development

Developing a centralized exchange comes with its set of challenges, including security concerns, regulatory compliance, and scalability issues. Addressing these challenges is crucial for the successful operation of the exchange.

## How Centralized Exchanges are Developed

The development process involves planning and requirement analysis, choosing the right technology stack, and rigorous implementation and testing to ensure the platform meets all specified requirements and functions correctly under various scenarios.

## Future of Centralized Exchanges

The future of centralized exchanges is expected to be driven by technological advancements, regulatory changes, and evolving market trends. Innovations in blockchain, AI, and cybersecurity will play a crucial role in shaping the future of these platforms.

## Real-World Examples of Centralized Exchanges

Some of the most prominent centralized exchanges include Binance, Coinbase, and Kraken.
These platforms are known for their ease of use, high liquidity, and a wide range of supported cryptocurrencies.

## In-depth Explanations

Centralized exchanges implement various security protocols, rely on market makers for liquidity, and use robust user authentication processes to ensure the safety and efficiency of their platforms.

## Comparisons & Contrasts

Comparing centralized and decentralized exchanges highlights their unique features, benefits, and drawbacks, helping users make informed decisions based on their specific needs and contexts.

## Why Choose Rapid Innovation for Implementation and Development

Rapid Innovation offers expertise in blockchain technology, a proven track record, and customized solutions, making it an ideal partner for developing and implementing advanced crypto exchanges.

## Conclusion

Centralized exchanges have played a pivotal role in the development and adoption of cryptocurrencies. Rapid innovation is crucial for the advancement of these platforms, enhancing their efficiency, security, and user experience.

📣📣 Drive innovation with intelligent AI and secure blockchain technology! Check out how we can help your business grow!

[Blockchain Development](https://www.rapidinnovation.io/service-development/blockchain-app-development-company-in-usa)

[AI Software Development](https://www.rapidinnovation.io/ai-software-development-company-in-usa)

## URLs

* <https://www.rapidinnovation.io/post/the-importance-of-centralized-exchange-development>

## Hashtags

#CryptoExchanges #BlockchainTechnology #CryptoTrading #CentralizedExchanges #CryptoInnovation
rapidinnovation
1,906,260
7 Best Practices for ReactJS Development in 2024
Modern web development has made ReactJS an essential tool, enabling developers to create dynamic and...
0
2024-06-30T02:08:26
https://dev.to/vyan/7-best-practices-for-reactjs-development-in-2024-1a6e
webdev, javascript, beginners, react
Modern web development has made ReactJS an essential tool, enabling developers to create dynamic and interactive user interfaces with ease. ReactJS development best practices change along with technology, and staying ahead requires adopting fresh approaches and strategies that guarantee scalable, maintainable, and efficient code. In 2024, the following seven best practices should be adhered to when developing with ReactJS:

### 1. Embracing Component-Driven Development

Among ReactJS best practices, component-driven development is still at the forefront. The goal is to break complicated UIs into reusable parts that support code reuse and encapsulate functionality. Adopting component-driven development promotes teamwork, improves code maintainability, and expedites the development process.

**Example:**

```jsx
const Button = ({ onClick, label }) => (
  <button onClick={onClick}>{label}</button>
);

const App = () => (
  <div>
    <Button onClick={() => alert('Button clicked!')} label="Click Me" />
    <Button onClick={() => alert('Another button clicked!')} label="Another Click" />
  </div>
);
```

### 2. Leveraging React Hooks for State Management

React Hooks' broad adoption has made handling state in functional components easier to understand and implement. Effective state management requires utilizing React Hooks like `useEffect` and `useState`. Developers can avoid the drawbacks of class-based components and write cleaner, more legible code by adopting Hooks.

**Example:**

```jsx
import React, { useState, useEffect } from 'react';

const Counter = () => {
  const [count, setCount] = useState(0);

  useEffect(() => {
    document.title = `Count: ${count}`;
  }, [count]);

  return (
    <div>
      <p>You clicked {count} times</p>
      <button onClick={() => setCount(count + 1)}>Click me</button>
    </div>
  );
};
```

### 3. Implementing Performance Optimization Techniques

Performance optimization of ReactJS applications is critical in this dynamic environment.
By using techniques like code splitting, memoization, and lazy loading, you can reduce bundle sizes, cut render times, and improve the overall user experience. React developers continue to place a high priority on performance optimization to maintain the responsiveness and scalability of their applications despite growing complexity.

**Example:**

```jsx
import React, { useMemo } from 'react';

const ExpensiveCalculationComponent = ({ num }) => {
  const computeExpensiveValue = (num) => {
    // Simulate a heavy computation
    console.log('Computing...');
    return num * 2;
  };

  const memoizedValue = useMemo(() => computeExpensiveValue(num), [num]);

  return <div>Computed Value: {memoizedValue}</div>;
};
```

### 4. Embracing Server-Side Rendering (SSR) and Static Site Generation (SSG)

Static site generation (SSG) and server-side rendering (SSR) have become more popular in ReactJS development due to the growing demands of search engines and user expectations of fast-loading web pages. With frameworks like Next.js, developers can embrace SSR and SSG approaches and produce lightning-fast online experiences without sacrificing the adaptability and interactivity of React components.

**Example:**

```jsx
// pages/index.js with Next.js
import React from 'react';

const Home = ({ data }) => (
  <div>
    <h1>Welcome to Next.js!</h1>
    <p>Data fetched from server: {data}</p>
  </div>
);

export async function getServerSideProps() {
  // Fetch data from an API
  const res = await fetch('https://api.example.com/data');
  const data = await res.json();
  return { props: { data } };
}

export default Home;
```

### 5. Adopting TypeScript for Type Safety

TypeScript continues to gain traction as the preferred language for building robust and scalable web applications. Developers can benefit from greater code maintainability, improved tooling support, and type safety when they use TypeScript in ReactJS applications.
They may work together more productively in teams, identify issues early, and refactor code with confidence by utilizing TypeScript's static typing features. **Example:** ```tsx import React from 'react'; type GreetingProps = { name: string; }; const Greeting: React.FC<GreetingProps> = ({ name }) => <h1>Hello, {name}!</h1>; export default Greeting; ``` ### 6. Implementing Accessibility Practices Accessibility and inclusive design are becoming required components of contemporary online development, not optional features. By using accessibility practices, ReactJS projects may guarantee that their applications meet industry standards and laws and are usable by people with disabilities. Finding and fixing accessibility problems early in the development process is made easier by integrating tools like React Accessibility Scanner and carrying out manual audits. **Example:** ```jsx const AccessibleButton = ({ onClick, label }) => ( <button onClick={onClick}> {label} </button> ); const App = () => ( <div> <AccessibleButton onClick={() => alert('Button clicked!')} label="Click Me" /> </div> ); ``` ### 7. Embracing Continuous Integration and Deployment (CI/CD) Delivering high-quality ReactJS apps requires streamlining the development workflow with CI/CD pipelines. Build, test, and deployment procedures may all be automated to help teams work more efficiently, find errors earlier, and confidently push updates to production. Adopting CI/CD processes increases customer satisfaction and commercial value by promoting a culture of creativity, teamwork, and quick iteration. 
**Example:** ```yaml # .github/workflows/ci.yml for GitHub Actions name: CI on: push: branches: [ main ] pull_request: branches: [ main ] jobs: build: runs-on: ubuntu-latest steps: - uses: actions/checkout@v2 - name: Set up Node.js uses: actions/setup-node@v2 with: node-version: '20' - name: Install dependencies run: npm ci - name: Run tests run: npm test - name: Build project run: npm run build ``` ### In Closing ReactJS development is characterized by a relentless pursuit of efficiency, scalability, and user-centricity. Through the adoption of contemporary tools and practices, performance optimization strategies, and React Hooks, developers may create innovative web apps. ReactJS is at the forefront of web development in 2024 and beyond, emphasizing innovation and teamwork.
vyan
1,906,257
Getting Started with Python: A Beginner's Guide
Welcome to your journey into Python programming! Python is a versatile and powerful programming...
0
2024-06-30T02:00:17
https://dev.to/tinapyp/getting-started-with-python-a-beginners-guide-29f2
python, datascience, beginners
Welcome to your journey into Python programming! Python is a versatile and powerful programming language that is easy to learn and fun to use. In this guide, we will cover the basics of Python, including how to write your first line of code, understand data types, work with variables, and define functions. Let's dive in!

## First Line of Code

Every journey begins with a single step. In Python, the first step is often writing a simple line of code to print a message to the screen.

```python
print('Hello My Lovely!')
```

This code uses the `print` function to display the text `'Hello My Lovely!'` on the screen.

## Data Types

Understanding data types is crucial in programming. Here are the basic data types in Python:

- **int**: Represents integers (e.g., 1, 2, 3)
- **float**: Represents floating-point numbers (e.g., 1.5, 2.75)
- **str**: Represents strings, which are sequences of characters (e.g., 'hello', 'Python')
- **bool**: Represents Boolean values, either `True` or `False`

Example usage:

```python
print(1 + 2)    # Output: 3
print(1 + 2.5)  # Output: 3.5
print(1 + '2')  # TypeError: unsupported operand type(s) for +: 'int' and 'str'
```

## Variables

Variables are names that we give to certain values in our code. They allow us to store and manipulate data.

```python
a = 2 + 3  # a here is a variable holding the value 5
```

## Functions

Functions are useful for writing reusable code. A function is a block of code that performs a specific task.

```python
def luasSegitga(alas, tinggi):
    return alas * tinggi / 2
```

In this example, `luasSegitga` is a function that calculates the area of a triangle.

## Comparison Operators

Python comparison operators return Boolean results: **True** or **False**.
| Symbol | Name | Expression | Description | |--------|-------------------------|------------|-----------------------------| | == | Equality operator | a == b | a is equal to b | | != | Not equal to operator | a != b | a is not equal to b | | > | Greater than operator | a > b | a is larger than b | | >= | Greater than or equal to| a >= b | a is larger than or equal to b | | < | Less than operator | a < b | a is smaller than b | | <= | Less than or equal to | a <= b | a is smaller than or equal to b | ### Comparison Operators With Strings Python uses Unicode values to compare strings: | Expression | Description | |------------|-------------| | "a" == "a" | If string "a" is identical to string "a", returns True. Else, returns False | | "a" != "b" | If string "a" is not identical to string "b" | | "a" > "b" | If string "a" has a larger Unicode value than string "b" | | "a" >= "b" | If the Unicode value for string "a" is greater than or equal to the Unicode value of string "b" | | "a" < "b" | If string "a" has a smaller Unicode value than string "b" | | "a" <= "b" | If the Unicode value for string "a" is smaller than or equal to the Unicode value of string "b" | ## Logical Operators Logical operators are used to combine multiple Boolean expressions. | Operator | Description | |----------|------------------------------------| | and | Returns True if both expressions are True | | or | Returns True if at least one expression is True | | not | Returns True if the expression is False | Example: ```python x = True y = False print(x and y) # Output: False print(x or y) # Output: True print(not x) # Output: False ``` ## Data Structures Python provides several data structures to store collections of data. ### Lists Lists are ordered collections of items. ```python my_list = [1, 2, 3, 'a', 'b', 'c'] ``` ### Tuples Tuples are similar to lists but are immutable. ```python my_tuple = (1, 2, 3, 'a', 'b', 'c') ``` ### Sets Sets are unordered collections of unique items. 
```python my_set = {1, 2, 3, 'a', 'b', 'c'} ``` ### Dictionaries Dictionaries store key-value pairs. ```python my_dict = {'name': 'John', 'age': 30} ``` ## Conclusion This guide covered the basics of Python, including writing your first line of code, understanding data types, working with variables, defining functions, and using comparison and logical operators. With this foundation, you are well on your way to becoming proficient in Python programming. Keep practicing, and happy coding!
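One last worked example before you go: the string-comparison rules from the table above can be verified directly with Python's built-in `ord()`, which returns a character's Unicode code point. This is a minimal sketch using only the standard library:

```python
# Strings are compared by Unicode code point, character by character.
print(ord('a'), ord('b'))   # 97 98
print('a' < 'b')            # True, because 97 < 98
print('apple' < 'banana')   # True: decided by the first characters, 'a' < 'b'
print('Zoo' < 'apple')      # True: uppercase 'Z' (90) sorts before lowercase 'a' (97)
print('abc' < 'abd')        # True: the first differing characters are 'c' < 'd'
```

Note the last two lines: because comparison is by code point, all uppercase ASCII letters sort before all lowercase ones, and the first differing character decides the result.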
tinapyp
1,906,259
Building TailwindUI's Spotlight using SvelteKit and Svelte 5 with TailwindCSS
Introduction As a developer who is always eager to learn and showcase my work, I wanted a...
0
2024-06-30T01:59:55
https://dev.to/sirneij/building-tailwinduis-spotlight-using-sveltekit-and-svelte-5-with-tailwindcss-5f7f
webdev, javascript, svelte, sveltekit
## Introduction

As a developer who is always eager to learn and showcase my work, I wanted a "portfolio" application to share my learnings and projects. While searching for inspiration, I stumbled upon [Spotlight by TailwindUI][1], a stunning UI created by the Tailwind CSS team, as well as [briandev's spotlight][2]. However, since Spotlight is a paid product built with Next.js, I decided to challenge myself by replicating it using SvelteKit, specifically to explore the new features of Svelte 5, which was a release candidate at the time.

My goal was to recreate the Spotlight design while integrating additional functionality such as syntax highlighting with Highlight.js, a dev.to-like new post creation page, a commenting system, and the ability to modify and run some code blocks directly within the application.

## Source Code

You can access the complete source code for this project on GitHub:

{% github Sirneij/spotlight-sveltekit %}

Additionally, the application is live at [spotlight-sveltekit.vercel.app](https://spotlight-sveltekit.vercel.app/).

## Implementation

### Step 1: Setting Up the Project and Layout

#### Installing TailwindCSS

First, we need to set up TailwindCSS in our SvelteKit project. Follow the [official TailwindCSS installation guide for SvelteKit][3] to get started. Once TailwindCSS is installed, we can proceed to configure our layout.

#### Modifying `app.html` and Creating `+layout.svelte`

In `app.html`, make sure you include the theme-switching scripts, which allow us to persist the user's desired theme in the browser's localStorage. See [src/app.html][4].
Next, we create `+layout.svelte` to define our main layout structure:

```html
<script lang="ts">
  import Footer from "$lib/components/layout/Footer.svelte";
  import Header from "$lib/components/layout/Header.svelte";
  import type { Snippet } from "svelte";
  import "../app.css";
  import type { PageData } from "./$types";

  const { data, children }: { data: PageData; children: Snippet } = $props();
  const isCreate = $derived(data.url.includes("create"));
</script>

<div class="fixed inset-0 flex justify-center sm:px-8">
  <div class="flex w-full max-w-7xl lg:px-8">
    <div
      class="w-full bg-white ring-1 ring-zinc-100 dark:bg-zinc-900 dark:ring-zinc-300/20"
    ></div>
  </div>
</div>

<div class="relative" class:h-screen="{isCreate}">
  <Header isHomePage={data.url === '/'} {isCreate} />
  <main>{@render children()}</main>
  {#if !isCreate}
    <Footer />
  {/if}
</div>
```

In Svelte 5, `$props` replaces `export let` from Svelte 4. Hence, instead of `export let data: PageData`, I did:

```html
<script lang="ts">
  ...
  const { data, children }: { data: PageData; children: Snippet } = $props();
  ...
</script>
```

`$props()` also has a reserved property, called `children`, which holds the component's default content and relies on a new feature called [Snippet][5]. The `$effect` rune is used in place of `onMount` and `onDestroy`. Learn more about these new runes and other additions in the [Svelte 5 documentation][6]. Also, note how `{@render children()}` replaces `<slot/>`.

You don't need Svelte stores in Svelte 5. You can have a `.svelte.ts` file that exposes variables backed by Svelte runes. For instance, in the dev.to-like editor I built, I store tags using this class:

```ts
/**
 * Represents the state of tags.
 *
 * @class TagState
 * @property {Set<string>} tagList - Set of tags.
 * @method addTag - Add a tag to the set.
 * @file frontend/src/lib/states/tags.svelte.ts
 */
class TagState {
  // Set of tags.
  tagList = $state<Set<string>>(new Set());

  // Add a tag to the set.
  addTag(tag: string) {
    this.tagList.add(tag);
  }
}

export const tagState = new TagState();
export const tagsAbortController = new AbortController();
```

It serves as my store!!! Beautiful, huh?

### Step 2: Integrating Highlight.js

For syntax highlighting, I used Highlight.js. I created a custom wrapper to handle code block rendering and highlighting. Take a look at [src/lib/utils/helpers/code.block.ts][7]. This wrapper supports theme switching between `horizon-dark` and `night-owl`, displays filenames, allows code copying, and even runs some code using an `iframe` for security reasons. The design of the code block component is inspired by TailwindCSS's documentation style.

### Step 3: Creating a dev.to-like Custom Editor

#### Tag Selection with Suggestions

For the tag selection feature, I used a combination of Svelte's reactivity and some functions to provide suggestions as users type. The code works, but it can be cleaned up further.

#### Markdown Rich-Text Editor

I built a user-friendly markdown editor interface with most of dev.to's features, including keyboard combinations. It detects the operating system a user is running and offers key combinations accordingly. You can combine three keys; for instance, `CMD/CTRL + SHIFT + K` will add a code block to the textarea.

The parts of the code block are as follows: `language` is the programming language. `filename` is the name of the file whose code you want to display. `{line nos}` represents the line numbers, separated by commas, that you want to emphasize. `runnable` denotes whether or not the code can be run: if it's present, the code can be run; otherwise, it can't. This custom code block is parsed by a custom parser that extracts these parts using regex.

#### Custom Parser for GitHub Repos

I wrote a custom parser to detect and embed GitHub repositories as well, just like dev.to's. The full implementation includes other features and refinements, which you can explore in the source code.
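To make the fence-header parsing step concrete, here is a small sketch of how such a header could be broken apart with a regex. The header shape assumed here (`language:filename{1,3} runnable`) and the names `FenceMeta` and `parseFenceHeader` are hypothetical, illustrative choices — the real implementation lives in `code.block.ts` in the repository and may differ:

```typescript
// Hypothetical fence header: "language:filename{1,3} runnable"
// (illustration only; the actual parser in code.block.ts may use a different syntax).
interface FenceMeta {
  language: string;
  filename?: string;
  emphasizedLines: number[];
  runnable: boolean;
}

function parseFenceHeader(header: string): FenceMeta | null {
  // language, optional ":filename", optional "{1,2,3}", optional trailing " runnable"
  const match = header.match(
    /^(\w+)(?::([\w.\/-]+))?(?:\{([\d,]+)\})?(\s+runnable)?\s*$/
  );
  if (!match) return null;
  const [, language, filename, lineSpec, runnable] = match;
  return {
    language,
    filename,
    emphasizedLines: lineSpec ? lineSpec.split(',').map(Number) : [],
    runnable: Boolean(runnable),
  };
}

// Example: a runnable TypeScript block labelled "src/app.ts", emphasizing lines 1 and 3.
const meta = parseFenceHeader('ts:src/app.ts{1,3} runnable');
```

Because every component after the language is optional, a bare header like `python` still parses, with an empty line list and `runnable: false`.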
### Future Enhancements The final version of this application for my portfolio will include a backend built with either Go or Rust, incorporating more robust features such as a real-time commenting system. Stay tuned for updates, and feel free to suggest new features or your preferred backend language. ## Outro Enjoyed this article? I'm a Software Engineer and Technical Writer actively seeking new opportunities, particularly in areas related to web security, finance, healthcare, and education. If you think my expertise aligns with your team's needs, let's chat! You can find me on [LinkedIn](https://www.linkedin.com/in/john-owolabi-idogun/) and [Twitter](https://x.com/Sirneij). If you found this article valuable, consider sharing it with your network to help spread the knowledge! [1]: https://spotlight.tailwindui.com/ "Spotlight by TailwindUI" [2]: https://github.com/bketelsen/briandev "briandev's spotlight" [3]: https://tailwindcss.com/docs/guides/sveltekit "TailwindCSS sveltekit guide" [4]: https://github.com/Sirneij/spotlight-sveltekit/blob/main/src/app.html "src/app.html" [5]: https://svelte-5-preview.vercel.app/docs/snippets "Svelte 5 Snippets" [6]: https://svelte-5-preview.vercel.app/docs/runes "Svelte 5 runes" [7]: https://github.com/Sirneij/spotlight-sveltekit/blob/main/src/lib/utils/helpers/code.block.ts "Custom wrapper for highlight.js"
sirneij
1,906,258
Bootstrapping Cloudflare Workers app with oak framework & routing controller
Hello all, Following up on my previous introductory post about the library oak-routing-ctrl, I'd...
27,800
2024-06-30T01:57:39
https://dev.to/thesephi/bootstrapping-cloudflare-workers-app-with-oak-framework-routing-controller-3blp
worker, typescript, oak, webhook
Hello all, Following up on my previous [introductory post about the library oak-routing-ctrl](https://dev.to/thesephi/scaffolding-api-projects-easily-with-oak-routing-ctrl-1pj), I'd love to share an **npm script** that generates a boilerplate code-base with the following tools built-in: - [oak framework](https://oakserver.org/) (middleware framework) - [oak-routing-ctrl](https://jsr.io/@dklab/oak-routing-ctrl) (TypeScript decorator for API scaffolding) - [Cloudflare Workers](https://workers.cloudflare.com/) application development kit (wrangler) --- ## Part 1: Bootstrapping ```bash npm create oak-cloudflare-worker@latest ``` This script will ask us a few setup preferences, such as: - project **directory** (we can leave empty to use the _current directory_) - project meta i.e. **name**, **version**, **author**, etc. - basically the things we'd normally declare in the `package.json` file Once the last step is confirmed, we'd have a project directory ready to work on. Here's the folder structure: ``` |____.npmrc |____.gitignore |____package.json |____tsconfig.json |____wrangler.toml |____src |____index.ts |____CfwController.ts ``` --- ## Part 2: Developing We can now install the dependencies (as declared in `package.json`) with e.g.: ```bash npm i ``` Executing `npm run dev` will start the "Worker" on local: ``` ⎔ Starting local server... 
[wrangler:inf] Ready on http://localhost:8787 ╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │ [b] open a browser, [d] open Devtools, [l] turn off local mode, [c] clear console, [x] to exit │ ╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ ``` Assuming we use the template example provided in the boilerplate ([CfwController.ts](https://github.com/Thesephi/oak-routing-ctrl-cloudflare-worker/blob/main/src/CfwController.ts)), then we can communicate with the Worker like so: ``` curl http://localhost:8787/echo/devto {"query":{},"body":{},"param":{"name":"devto"},"headers":{"accept":"*/*","accept-encoding":"br, gzip","host":"localhost:8787","user-agent":"curl/8.6.0"}} ``` --- ## Part 3: Deploying To deploy to Cloudflare, one way is simply running ```bash npm run deploy ``` which will guide us through the Cloudflare authentication flow until the application is fully "uploaded" to Cloudflare. Alternatively, if we don't like going through the authentication flow all the time, we can get a [Cloudflare API token](https://developers.cloudflare.com/fundamentals/api/get-started/create-token/). Note that, at the minimum, we'll need the permissions to manage "Workers" resources. I chose these for my example: ![Example of Cloudflare API token permissions to manage Worker resources](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kjti73q0078gdroxaye0.png) With the token, one can add it to the env var and run the deployment script e.g. like so: ```bash CLOUDFLARE_API_TOKEN=your-cloudflare-api-token npm run deploy ``` --- ## Closing words I hope the script ```bash npm create oak-cloudflare-worker@latest ``` may help you bootstrap your Cloudflare Workers project with ease. 
It is obviously opinionated toward the `oak` and `oak-routing-ctrl` libraries, but nothing prevents you from switching to another framework (or writing everything from scratch). If 'starting from scratch' is your flavour, you can also execute `npm create cloudflare@latest` (as [documented](https://developers.cloudflare.com/workers/get-started/guide/)) to start off with an almost-empty Cloudflare Workers project.

In parallel, a [GitHub code template](https://github.com/Thesephi/oak-routing-ctrl-cloudflare-worker) also exists as an alternative to running `npm create oak-cloudflare-worker`. For transparency: the source code for the `npm create` script itself is [hosted here](https://github.com/Thesephi/create-oak-cloudflare-worker).

If you've created or plan to create something cool with this script, your story would be much appreciated in the comment section below 🙏. If you have an idea to improve the script, I'm all ears here, or you may simply [suggest a PR](https://github.com/Thesephi/create-oak-cloudflare-worker/pulls)!

I wish you enjoyment in your next projects 🍺🍺🍺
thesephi
1,906,193
Symfony Station Communiqué - 28 June 2024: A look at Symfony, Drupal, PHP, Cybersec, and Fediverse News!
This communiqué originally appeared on Symfony Station. Welcome to this week's Symfony Station...
0
2024-06-30T00:57:51
https://dev.to/reubenwalker64/symfony-station-communique-28-june-2024-a-look-at-symfony-drupal-php-cybersec-and-fediverse-news-5dmk
symfony, drupal, php, fediverse
This communiqué [originally appeared on Symfony Station](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024). Welcome to this week's Symfony Station communiqué. It's your review of the essential news in the Symfony and PHP development communities focusing on protecting democracy. That necessitates an opinionated Butlerian jihad against big tech as well as evangelizing for open-source and the Fediverse. We also cover the cybersecurity world. You can't be free without safety and privacy. There's good content in all of our categories, so please take your time and enjoy the items most relevant and valuable to you. This is why we publish on Fridays. So you can savor it over your weekend. Or jump straight to your favorite section via our website. - [Symfony Universe](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#symfony) - [PHP](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#php) - [More Programming](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#more) - [Fighting for Democracy](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#other) - [Cybersecurity](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#cybersecurity) - [Fediverse](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#fediverse) Once again, thanks go out to Javier Eguiluz and Symfony for sharing [our communiqué](https://symfonystation.mobileatom.net/Symfony-Station-Communique-21-June-2024) in their [Week of Symfony](https://symfony.com/blog/a-week-of-symfony-912-17-23-june-2024). **My opinions will be in bold. And will often involve cursing. Because humans.** --- ## Symfony As always, we will start with the official news from Symfony. 
Highlight -> "This week, the upcoming Symfony 7.2 version simplified the kernel setup in MicroKernelTrait, added errorPath to Unique constraint and improved profiler data about Security. Meanwhile, we published more information about how to become a partner at SymfonyCon Vienna 2024". ### [A Week of Symfony #912 (17-23 June 2024)](https://symfony.com/blog/a-week-of-symfony-912-17-23-june-2024) They also have: [Their latest newsletter](https://symfony.cmail20.com/t/y-e-mjrtukk-dyurtluuut-td/) SymfonyCasts has: [This week on SymfonyCasts](https://5hy9x.r.ag.d.sendibm3.com/mk/mr/sh/1t6AVsd2XFnIGKAdc2c4TLVnzSQuLj/0yt7DrjnlcVt) --- ## Featured Item Now that I've moved from 50% to 75% retired, I can write more articles like this. Using icons in your site's design is important for tech sites' UX. It helps your design stand out and look more professional and technical. Now there's an easy way to add them. Symfony UX's latest effort is a fantastic addition. It is **Icons**, which I love. ### [UX Symfony's Icons polish your projects to a Professional and Authoritative sheen](https://grav.mobileatom.net/blog/ux-symfonys-icons) **You will notice the article is not on this site. And, it's because I couldn't get an equivalent implemented in Drupal despite creating a custom content type and adding a module specifically for using these icons with CKEditor 5. 
:( So it's on my Grav site.**

---

### This Week

Ismaile Abdallah advises: [Symfony: Stop checking for dependency updates](https://medium.com/@ZeroCool001/symfony-stop-checking-for-dependency-updates-7087ea2dd35c)

Lubna Altungi shares: [Why Symfony Developers Feel Lucky!](https://medium.com/@lubna.altungi/why-symfony-developers-feel-lucky-e8a6f48703a2)

Aymeric Ratinaud explores: [Automatisons l'enregistrement du User sur n'importe quelle entité (Symfony)](https://dev.to/aratinau/automatisons-lenregistrement-du-user-sur-nimporte-quelle-entite-4f68)

Jolicode asks: [Comment partager de la configuration entre Symfony et son front en JS ?](https://jolicode.com/blog/comment-partager-de-la-configuration-entre-symfony-et-son-front-en-js)

Javier Eguiluz demonstrates: [Generating deterministic UUIDs from arbitrary strings with Symfony](https://dev.to/javiereguiluz/generating-deterministic-uuids-from-arbitrary-strings-with-symfony-4ac6)

Chris Shennan examines: [Using PHP Attributes to Create and Use a Custom Validator in Symfony](https://dev.to/chrisshennan/using-php-attributes-to-create-and-use-a-custom-validator-in-symfony-50e9)

Tribus Digital shares: [The Release of Symfony 7.1](https://www.tribusdigital.com/insights/tribus-software-the-release-of-symfony-7-1)

### Platforms

### eCommerce

Bleeping Computer reports: [Facebook PrestaShop module exploited to steal credit cards](https://www.bleepingcomputer.com/news/security/facebook-prestashop-module-exploited-to-steal-credit-cards/)

**Should we use Meta's shit products?** 🤔

PrestaShop announces: [PrestaShop 8.1.7 Is Available](https://build.prestashop-project.org/news/2024/prestashop-8-1-7-maintenance-release/)

### CMSs

Concrete CMS has: [Creating Interactive Forms with Concrete CMS](https://www.concretecms.com/about/blog/web-design/creating-interactive-forms-with-concrete-cms)

<br/>

TYPO3 has: [Coding the TYPO3 Core in 2024](https://typo3.org/article/coding-the-typo3-core-in-2024-1)

[TYPO3 Installation and Core Web Vitals:
The Secret to a High-Performing CMS](https://typo3.com/blog/typo3-installation-and-core-web-vitals) <br/> Joomla has: [Get ahead of the rest. Start testing Joomla! 5.2.0 Alpha 2 today!](https://developer.joomla.org/news/934-get-ahead-of-the-rest-start-testing-joomla-5-2-0-alpha-2-today.html) <br/> Drupal has this on the Polyfill.io situation: [3rd Party Libraries and Supply Chains - PSA-2024-06-26](https://www.drupal.org/psa-2024-06-26) The Drop Times has: [Drupal Gutenberg v4.0 to Introduce Major UI Refactor and Enhanced Editing Features](https://www.thedroptimes.com/41015/drupal-gutenberg-v40-introduce-major-ui-refactor-and-enhanced-editing-features) **Fucking fantastic.** [Embracing the AI Revolution: A Drupal Developer's Perspective](https://www.thedroptimes.com/41120/embracing-ai-revolution-drupal-developers-perspective) **Hmm, no.** [Gábor Hojtsy and Pamela Barone Share Their Perspectives on Starshot](https://www.thedroptimes.com/41165/gabor-hojtsy-and-pamela-barone-share-their-perspectives-starshot) ImageX has: [Unlock the Incredible Diversity of Robust AI-Driven Workflows with the AI Interpolator Module in Drupal](https://imagexmedia.com/blog/ai-interpolator-drupal) **Ok, this has some legitimate non-generative uses.** [The ECA Module: Setting Up Automated Actions For Various Scenarios on Your Drupal Website](https://imagexmedia.com/blog/eca-module-drupal) Web Wash looks at: [New Navigation Sidebar (Experimental) in Drupal 10.3](https://www.webwash.net/new-navigation-sidebar-experimental-in-drupal-10-3/) Specbee explores: [SAML and OAuth2 - What’s the difference and how to implement in Drupal](https://www.specbee.com/blogs/saml-and-oauth2-difference-and-how-to-implement-in-drupal) PrometSource examines: [(Study) U.S. 
Government CMS Preferences and Trends](https://www.prometsource.com/blog/gov-cms-usage-study) Tag1 Consulting explores: [Migrating Your Data from Drupal 7 to Drupal 10: Customizing the generated migration](https://www.tag1consulting.com/blog/migrating-your-data-drupal-7-drupal-10-customizing-generated-migration) Joshi shares: [The Biggest Challenges in Drupal 10 Migration and How to Overcome Them](https://joshics.in/blog-post/biggest-challenges-drupal-10-migration-and-how-overcome-them) Bounteous says: [Discover the Power of Drupal for Enhanced Operational Efficiency and Security for Healthcare Systems](https://www.bounteous.com/insights/2024/06/27/discover-power-drupal-enhanced-operational-efficiency-and-security-healthcare) **A great case for using Drupal.** ### Previous Weeks Blackfire continues a series: [Understanding continuous profiling:  part 3](https://blog.blackfire.io/understanding-continuous-profiling-part-3.html) --- ## PHP ### This Week Malek Althubiany is: [Exploring PHP Wrappers: Enhancing PHP Capabilities](https://medium.com/@althubianymalek/exploring-php-wrappers-enhancing-php-capabilities-280a6af827b5) Laravel News examines: [Running a Single Test, Skipping Tests, and Other Tips and Tricks](https://laravel-news.com/run-single-tests-skip-tests-phpunit-and-pest) Hash Bang Code demonstrates: [Creating A Character Bitmap In PHP](https://www.hashbangcode.com/article/creating-character-bitmap-php) Alex Castellano writes: [About PHP "Variable Variables"](https://alexwebdevelop.activehosted.com/social/28f0b864598a1291557bed248a998d4e.403) Sarah Savage starts a series: [Twenty lessons from twenty years of PHP (Part 1)](https://sarah-savage.com/twenty-lessons-from-twenty-years-of-php-part-1/) Roberto Butti looks at: [Validating JSON with JSON Schema and PHP](https://dev.to/robertobutti/validating-json-with-json-schema-and-php-2b4i) Adnan Taşdemir explores: [Understanding RabbitMQ with 
PHP](https://medium.com/@adnan.mehrat/understanding-rabbitmq-with-php-76c7fa753493)

The PHP Consulting Company asks: [PHP_CodeSniffer or PHP-CS-Fixer?](https://thephp.cc/articles/phpcodesniffer-or-phpcsfixer)

Francesco Agati examines: [Concurrency and Parallelism in PHP](https://dev.to/francescoagati/concurrency-and-parallelism-in-php-6fc)

Tideways announces: [Tideways 2024.2 Release](https://tideways.com/profiler/blog/tideways-2024-2-release)

Kristijan Isajloski opines on the: [Best IDE for PHP: Why PHPStorm Stands Out](https://medium.com/@kristijan.isajloski/best-ide-for-php-why-phpstorm-stands-out-8f692f1d71a6)

Bright Webilor has an: [Introduction to Cobra - A PHP Data Manipulation Library](https://dev.to/brightwebb/introduction-to-cobra-a-php-data-manipulation-library-4ob2)

Markus Staab announces: [Readable end-to-end tests for PHPStan with bashunit](https://staabm.github.io/2024/06/28/readable-phpstan-end-to-end-tests-with-bashunit.html)

Flare says: [WeakMaps a hidden gem in PHP](https://flareapp.io/blog/weakmaps-a-hidden-gem-in-php)

### Previous Weeks

Tomas Votruba shares: [Awesome PHP Packages from Japan](https://tomasvotruba.com/blog/awesome-php-packages-from-japan)

---

## More Programming

TechCrunch asks: [What does ‘open source AI’ mean, anyway?](https://techcrunch.com/2024/06/22/what-does-open-source-ai-mean-anyway/)

Justin Pot says: [Tech is cool, business is boring](https://justinpot.com/tech-is-cool-business-is-boring/)

**He's correct.
Most "tech" companies are just shit businesses.** Nextcloud looks at the: [Ethical use of AI: 5 major challenges](https://nextcloud.com/blog/ethical-use-of-ai-5-major-challenges/) Cory Ryan explores: [Flow Charts with CSS Anchor Positioning](https://coryrylan.com/blog/flow-charts-with-css-anchor-positioning) **Nice.** Free Code Camp compares: [Media Queries vs Container Queries – Which Should You Use and When?](https://www.freecodecamp.org/news/media-queries-vs-container-queries/#how-do-media-queries-and-container-queries-compare) **Good stuff.** The New Stack has a case study: [Pivoting From React to Native DOM APIs: A Real World Example](https://thenewstack.io/pivoting-from-react-to-native-dom-apis-a-real-world-example/) Speaking of things that suck like React, Frank Taylor has: [A Rant about Front-end Development](https://blog.frankmtaylor.com/2024/06/20/a-rant-about-front-end-development/) Wired looks at: [The Eternal Truth of Markdown](https://www.wired.com/story/the-eternal-truth-of-markdown/) Opensource shows us: [How to generate web pages from Markdown with Docsify-This](https://opensource.net/how-generate-web-pages-from-markdown-docsify-this/) **Interesting tool. I think Obsidian can do this as well.** Lullabot covers: [The Art of Jira: Scrum and Kanban](https://www.lullabot.com/articles/art-jira-scrum-and-kanban) Grant Horwood continues his series: [Amber: writing bash scripts in amber instead. pt. 3: the standard library](https://gbh.fruitbat.io/2024/06/27/amber-writing-bash-scripts-in-amber-instead-pt-3-the-standard-library/) --- ## Fighting for Democracy [Please visit our Support Ukraine page](https://symfonystation.mobileatom.net/Support-Ukraine)to learn how you can help kick Russia out of Ukraine (eventually, like ending apartheid in South Africa). 
### The cyber response to Russia’s War Crimes and other douchebaggery The Kyiv Independent reports: [EU blocks access to 4 Russian media outlets](https://kyivindependent.com/access-to-4-key-russian-media-outlets-blocked-in-eu-following-eu-council-measure/) The Kyiv Post reports: [Ukraine’s Tech Hub Develops AI-Driven Drone Swarms to Combat Russian Forces](https://www.kyivpost.com/post/34777) [HUR Cyberattack Hits Russian Internet Providers in Occupied Crimea](https://www.kyivpost.com/post/34917) EuroNews reports: [Six people sanctioned for cyber attacks against EU states and Ukraine](https://www.euronews.com/next/2024/06/24/six-people-sanctioned-for-cyber-attacks-against-eu-states-and-ukraine) [Microsoft breaches antitrust rules with Teams, EU Commission says](https://www.euronews.com/next/2024/06/25/microsoft-breaches-antitrust-rules-with-teams-eu-commission-says) TechCrunch reports: [Six people sanctioned for cyber attacks against EU states and Ukraine](https://techcrunch.com/2024/06/26/doj-charges-russian-whispergate-ukraine-cyberattacks/) Ali Alkhatib wants to: [Destroy AI](https://ali-alkhatib.com/blog/fuck-up-ai) **I'm down.** On a related note, The Algorithmic Sabotage Research Group has: [Manifesto on “Algorithmic Sabotage”](https://algorithmic-sabotage-research-group.github.io/asrg/manifesto-on-algorithmic_sabotage/) **This also ties in nicely with my Butlerian Jihad against big tech.** The Register reports: [Europe accuses Apple of preventing devs from telling users about world outside](https://www.theregister.com/2024/06/24/ec_puts_apple_on_notice/) [Apple Intelligence won't be available in Europe because Tim's terrified of watchdogs](https://www.theregister.com/2024/06/21/apple_intelligence_eu/) ### The Evil Empire Strikes Back And: [Meta accused of trying to discredit ad researchers](https://www.theregister.com/2024/06/16/meta_ads_brazil/) The Verge reports: [Thwarting cyberattacks from China is DHS’s top infrastructure security 
priority](https://www.theverge.com/2024/6/24/24185013/homeland-security-china-artificial-intelligence-priorities-memo-mayorkas) PC Mag reports: [China-Backed 'RedJuliett' Hackers Target Taiwan Via VPN, Firewall Exploits](https://www.pcmag.com/news/china-backed-redjuliett-hackers-target-taiwan-via-vpn-firewall-exploits) Seansec reports: [Polyfill supply chain attack hits 100K+ sites](https://sansec.io/research/polyfill-supply-chain-attack) Bleeping Computer has more: [Polyfill.io, BootCDN, Bootcss, Staticfile attack traced to 1 operator](https://www.bleepingcomputer.com/news/security/polyfillio-bootcdn-bootcss-staticfile-attack-traced-to-1-operator/) TechCrunch reports: [Remote access giant TeamViewer says Russian spies hacked its corporate network](https://techcrunch.com/2024/06/28/teamviewer-cyberattack-apt29-russia-government-hackers/) Joan Westenberg opines: [Tech's accountability tantrum is pathetic](https://www.joanwestenberg.com/techs-accountability-tantrum-is-pathetic) And The Guardian opines: [Silicon Valley wants unfettered control of the tech market. That’s why it’s cosying up to Trump](https://www.theguardian.com/commentisfree/article/2024/jun/26/silicon-valley-tech-market-donald-trump-joe-biden-wealth-tax-big-tech-venture-capitalists) EU Reporter reports: [Leak: EU interior ministers want to exempt themselves from chat control bulk scanning of private messages](https://www.eureporter.co/business/data/mass-surveillance-data/2024/04/15/leak-eu-interior-ministers-want-to-exempt-themselves-from-chat-control-bulk-scanning-of-private-messages/) **Are all cops and state security personnel fucking clueless?** 🤔 The Washington Post reports: [Law enforcement is spying on thousands of Americans’ mail, records show](https://www.washingtonpost.com/technology/2024/06/24/post-office-mail-surveillance-law-enforcement/) **If you don't think the U.S. 
as a semi-democratic oligarchy is also a surveillance state, you're not thinking.** Engadget reports: [AI companies are reportedly still scraping websites despite protocols meant to block them](https://www.engadget.com/ai-companies-are-reportedly-still-scraping-websites-despite-protocols-meant-to-block-them-132308524.html) **Of course, their business model is literally based on theft and grift. No stealing equals no money from dumbasses to give to gullible shareholders before the founders cash out and the bubble bursts.** Speaking of, 404 Media reports: [Perplexity’s Origin Story: Scraping Twitter With Fake Academic Accounts](https://www.404media.co/perplexitys-origin-story-scraping-twitter-with-fake-academic-accounts/) [Has Facebook Stopped Trying?](https://www.404media.co/email/b161a988-dd86-4440-8e73-f9585a45cad2/) [We Tried to Replace 404 Media With AI](https://www.404media.co/email/18c1328f-ac22-4786-8157-981a9eafe2fc/) **Interesting. Long. Discouraging. A good look at the result of Google fucking up the promise of the web.** The Electronic Frontier Foundation shares: [The U.S. 
House Version of KOSA: Still a Censorship Bill](https://www.eff.org/deeplinks/2024/05/us-version-kosa-still-censorship-bill) ### Cybersecurity/Privacy Dark Reading reports: [What Building Application Security Into Shadow IT Looks Like](https://www.darkreading.com/application-security/building-application-security-into-shadow-it) [Key Takeaways From the British Library Cyberattack](https://www.darkreading.com/cyberattacks-data-breaches/key-takeaways-from-the-british-library-cyberattack) [Critical GitLab Bug Threatens Software Development Pipelines](https://www.darkreading.com/application-security/critical-gitlab-bug-threatens-software-development-pipelines) 404 Media reports: [Israeli ID Verification Service for TikTok, Uber, and X Exposed Driver Licenses](https://www.404media.co/id-verification-service-for-tiktok-uber-x-exposed-driver-licenses-au10tix/) The Hacker News reports: [New Credit Card Skimmer Targets WordPress, Magento, and OpenCart Sites](https://thehackernews.com/2024/06/new-credit-card-skimmer-targets.html) --- ### Fediverse The Fediverse Report has: [This Week in the Fediverse, Ep. 
74](https://fediversereport.com/last-week-in-fediverse-ep-74/) Jan Wilderboer shows us how to: [Turn Mastodon threads into copy/pasteable Markdown](https://social.wildeboer.net/@jwildeboer/112661237517165223) Elena Rossini shares: [The Top 10 Reasons Why Mastodon is the Best Social Media Platform](https://blog.elenarossini.com/top-10-reasons-mastodon-best-social-media-platform/) Stefan Bohacek shares a: [Mastodon domain block exporter script](https://stefanbohacek.com/project/mastodon-domain-block-exporter-script/) The Verge reports: [Meta is connecting Threads more deeply with the Fediverse](https://www.theverge.com/2024/6/25/24185226/meta-threads-fediverse-likes-replies) TechDirt comments on it: [Meta Moves To More Directly Connect To ActivityPub, But Is It Really Open?](https://www.techdirt.com/2024/06/27/meta-moves-to-more-directly-connect-to-activitypub-but-is-it-really-open/) Rob Knight is: [Building an ActivityPub Server](https://rknight.me/blog/building-an-activitypub-server/) Patchwork contemplates: [Re-centring the Fediverse: how a footnote tells the bigger story](https://www.blog-pat.ch/re-centring-the-fediverse/) Ghost says: [Hold up, let us cook](https://activitypub.ghost.org/day5/) Jeena has: [Lemmy and my Switch to PieFed](https://jeena.net/lemmy-switch-to-piefed) **Good decision.** ### Other Federated Social Media The Electronic Frontier Foundation show us: [How to Clean Up Your Bluesky Feed](https://www.eff.org/deeplinks/2024/06/how-clean-your-bluesky-feed) **Or better yet, just don't use it.** Terence Eden asks: [Who can reply?](https://shkspr.mobi/blog/2024/06/who-can-reply/) --- ## CTAs (aka show us some free love) - That’s it for this week. Please share this communiqué. - Also, please [join our newsletter list for The Payload](https://newsletter.mobileatom.net/). Joining gets you each week's communiqué in your inbox (a day early). 
- Follow us [on Flipboard](https://flipboard.com/@mobileatom/symfony-for-the-devil-allupr6jz)or at [@symfonystation@drupal.community](https://drupal.community/@SymfonyStation)on Mastodon for daily coverage. Do you own or work for an organization that would be interested in our promotion opportunities? Or supporting our journalistic efforts? If so, please get in touch with us. We’re in our toddler stage, so it’s extra economical. 😉 More importantly, if you are a Ukrainian company with coding-related products, we can offer free promotion on [our Support Ukraine page](https://symfonystation.mobileatom.net/Support-Ukraine). Or, if you know of one, get in touch. You can find a vast array of curated evergreen content on our [communiqués page](https://symfonystation.mobileatom.net/communiques). ## Author ![Reuben Walker headshot](https://symfonystation.mobileatom.net/sites/default/files/inline-images/Reuben-Walker-headshot.jpg) ### Reuben Walker Founder Symfony Station
reubenwalker64
---

# Symfony Station Communiqué - 28 June 2024: A look at Symfony, Drupal, PHP, Cybersec, and Fediverse News!

*Published 2024-06-30 · Tags: symfony, drupal, php, fediverse · [Canonical URL](https://dev.to/reubenwalker64/symfony-station-communique-28-june-2024-a-look-at-symfony-drupal-php-cybersec-and-fediverse-news-2n6n)*
This communiqué [originally appeared on Symfony Station](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024). Welcome to this week's Symfony Station communiqué. It's your review of the essential news in the Symfony and PHP development communities focusing on protecting democracy. That necessitates an opinionated Butlerian jihad against big tech as well as evangelizing for open-source and the Fediverse. We also cover the cybersecurity world. You can't be free without safety and privacy. There's good content in all of our categories, so please take your time and enjoy the items most relevant and valuable to you. This is why we publish on Fridays. So you can savor it over your weekend. Or jump straight to your favorite section via our website. - [Symfony Universe](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#symfony) - [PHP](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#php) - [More Programming](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#more) - [Fighting for Democracy](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#other) - [Cybersecurity](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#cybersecurity) - [Fediverse](https://symfonystation.mobileatom.net/Symfony-Station-Communique-28-June-2024#fediverse) Once again, thanks go out to Javier Eguiluz and Symfony for sharing [our communiqué](https://symfonystation.mobileatom.net/Symfony-Station-Communique-21-June-2024) in their [Week of Symfony](https://symfony.com/blog/a-week-of-symfony-912-17-23-june-2024). **My opinions will be in bold. And will often involve cursing. Because humans.** --- ## Symfony As always, we will start with the official news from Symfony. 
Highlight -> "This week, the upcoming Symfony 7.2 version simplified the kernel setup in MicroKernelTrait, added errorPath to Unique constraint and improved profiler data about Security. Meanwhile, we published more information about how to become a partner at SymfonyCon Vienna 2024". ### [A Week of Symfony #912 (17-23 June 2024)](https://symfony.com/blog/a-week-of-symfony-912-17-23-june-2024) They also have: [Their latest newsletter](https://symfony.cmail20.com/t/y-e-mjrtukk-dyurtluuut-td/) SymfonyCasts has: [This week on SymfonyCasts](https://5hy9x.r.ag.d.sendibm3.com/mk/mr/sh/1t6AVsd2XFnIGKAdc2c4TLVnzSQuLj/0yt7DrjnlcVt) --- ## Featured Item Now that I've moved from 50% to 75% retired, I can write more articles like this. Using icons in your site's design is important for tech sites' UX. It helps your design stand out and look more professional and technical. Now there's an easy way to add them. Symfony UX's latest effort is a fantastic addition. It is **Icons**, which I love. ### [UX Symfony's Icons polish your projects to a Professional and Authoritative sheen](https://grav.mobileatom.net/blog/ux-symfonys-icons) **You will notice the article is not on this site. And, it's because I couldn't get an equivalent implemented in Drupal despite creating a custom content type and adding a module specifically for using these icons with CKEditor 5. 
:( So it's on my Grav site.** --- ### This Week Ismaile Abdallah advises: [Symfony: Stop checking for dependency updates](https://medium.com/@ZeroCool001/symfony-stop-checking-for-dependency-updates-7087ea2dd35c) Lubna Altungi shares: [Why Symfony Developers Feel Lucky!](https://medium.com/@lubna.altungi/why-symfony-developers-feel-lucky-e8a6f48703a2) Aymeric Ratinaud explores: [Automatisons l'enregistrement du User sur n'importe quelle entité (Symfony)](https://dev.to/aratinau/automatisons-lenregistrement-du-user-sur-nimporte-quelle-entite-4f68) Jolicode asks: [Comment partager de la configuration entre Symfony et son front en JS ?](https://jolicode.com/blog/comment-partager-de-la-configuration-entre-symfony-et-son-front-en-js) Javier Eguiluz demonstrates: [Generating deterministic UUIDs from arbitrary strings with Symfony](https://dev.to/javiereguiluz/generating-deterministic-uuids-from-arbitrary-strings-with-symfony-4ac6) Chris Shennan examines: [Using PHP Attributes to Create and Use a Custom Validator in Symfony](https://dev.to/chrisshennan/using-php-attributes-to-create-and-use-a-custom-validator-in-symfony-50e9) Tribus Digital shares: [The Release of Symfony 7.1](https://www.tribusdigital.com/insights/tribus-software-the-release-of-symfony-7-1) ### Platforms ### eCommerce Bleeping Computer reports: [Facebook PrestaShop module exploited to steal credit cards](https://www.bleepingcomputer.com/news/security/facebook-prestashop-module-exploited-to-steal-credit-cards/) **Should we use Meta's shit products?** 🤔 PrestaShop announces: [PrestaShop 8.1.7 Is Available](https://build.prestashop-project.org/news/2024/prestashop-8-1-7-maintenance-release/) ### CMSs Concrete CMS has: [Creating Interactive Forms with Concrete CMS](https://www.concretecms.com/about/blog/web-design/creating-interactive-forms-with-concrete-cms) <br/> TYPO3 has: [Coding the TYPO3 Core in 2024](https://typo3.org/article/coding-the-typo3-core-in-2024-1) [TYPO3 Installation and Core Web Vitals: 
The Secret to a High-Performing CMS](https://typo3.com/blog/typo3-installation-and-core-web-vitals) <br/> Joomla has: [Get ahead of the rest. Start testing Joomla! 5.2.0 Alpha 2 today!](https://developer.joomla.org/news/934-get-ahead-of-the-rest-start-testing-joomla-5-2-0-alpha-2-today.html) <br/> Drupal has this on the Polyfill.io situation: [3rd Party Libraries and Supply Chains - PSA-2024-06-26](https://www.drupal.org/psa-2024-06-26) The Drop Times has: [Drupal Gutenberg v4.0 to Introduce Major UI Refactor and Enhanced Editing Features](https://www.thedroptimes.com/41015/drupal-gutenberg-v40-introduce-major-ui-refactor-and-enhanced-editing-features) **Fucking fantastic.** [Embracing the AI Revolution: A Drupal Developer's Perspective](https://www.thedroptimes.com/41120/embracing-ai-revolution-drupal-developers-perspective) **Hmm, no.** [Gábor Hojtsy and Pamela Barone Share Their Perspectives on Starshot](https://www.thedroptimes.com/41165/gabor-hojtsy-and-pamela-barone-share-their-perspectives-starshot) ImageX has: [Unlock the Incredible Diversity of Robust AI-Driven Workflows with the AI Interpolator Module in Drupal](https://imagexmedia.com/blog/ai-interpolator-drupal) **Ok, this has some legitimate non-generative uses.** [The ECA Module: Setting Up Automated Actions For Various Scenarios on Your Drupal Website](https://imagexmedia.com/blog/eca-module-drupal) Web Wash looks at: [New Navigation Sidebar (Experimental) in Drupal 10.3](https://www.webwash.net/new-navigation-sidebar-experimental-in-drupal-10-3/) Specbee explores: [SAML and OAuth2 - What’s the difference and how to implement in Drupal](https://www.specbee.com/blogs/saml-and-oauth2-difference-and-how-to-implement-in-drupal) PrometSource examines: [(Study) U.S. 
Government CMS Preferences and Trends](https://www.prometsource.com/blog/gov-cms-usage-study) Tag1 Consulting explores: [Migrating Your Data from Drupal 7 to Drupal 10: Customizing the generated migration](https://www.tag1consulting.com/blog/migrating-your-data-drupal-7-drupal-10-customizing-generated-migration) Joshi shares: [The Biggest Challenges in Drupal 10 Migration and How to Overcome Them](https://joshics.in/blog-post/biggest-challenges-drupal-10-migration-and-how-overcome-them) Bounteous says: [Discover the Power of Drupal for Enhanced Operational Efficiency and Security for Healthcare Systems](https://www.bounteous.com/insights/2024/06/27/discover-power-drupal-enhanced-operational-efficiency-and-security-healthcare) **A great case for using Drupal.** ### Previous Weeks Blackfire continues a series: [Understanding continuous profiling:  part 3](https://blog.blackfire.io/understanding-continuous-profiling-part-3.html) --- ## PHP ### This Week Malek Althubiany is: [Exploring PHP Wrappers: Enhancing PHP Capabilities](https://medium.com/@althubianymalek/exploring-php-wrappers-enhancing-php-capabilities-280a6af827b5) Laravel News examines: [Running a Single Test, Skipping Tests, and Other Tips and Tricks](https://laravel-news.com/run-single-tests-skip-tests-phpunit-and-pest) Hash Bang Code demonstrates: [Creating A Character Bitmap In PHP](https://www.hashbangcode.com/article/creating-character-bitmap-php) Alex Castellano writes: [About PHP "Variable Variables"](https://alexwebdevelop.activehosted.com/social/28f0b864598a1291557bed248a998d4e.403) Sarah Savage starts a series: [Twenty lessons from twenty years of PHP (Part 1)](https://sarah-savage.com/twenty-lessons-from-twenty-years-of-php-part-1/) Roberto Butti looks at: [Validating JSON with JSON Schema and PHP](https://dev.to/robertobutti/validating-json-with-json-schema-and-php-2b4i) Adnan Taşdemir explores: [Understanding RabbitMQ with 
PHP](https://medium.com/@adnan.mehrat/understanding-rabbitmq-with-php-76c7fa753493) The PHP Consulting Company asks: [PHP_CodeSniffer or PHP-CS-Fixer?](https://thephp.cc/articles/phpcodesniffer-or-phpcsfixer) Francesco Agati examines: [Concurrency and Parallelism in PHP](https://dev.to/francescoagati/concurrency-and-parallelism-in-php-6fc) Tideways announces: [Tideways 2024.2 Release](https://tideways.com/profiler/blog/tideways-2024-2-release) Kristijan Isajloski opines on the: [Best IDE for PHP: Why PHPStorm Stands Out](https://medium.com/@kristijan.isajloski/best-ide-for-php-why-phpstorm-stands-out-8f692f1d71a6) Bright Webilor has an: [Introduction to Cobra - A PHP Data Manipulation Library](https://dev.to/brightwebb/introduction-to-cobra-a-php-data-manipulation-library-4ob2) Markus Staab announces: [Readable end-to-end tests for PHPStan with bashunit](https://staabm.github.io/2024/06/28/readable-phpstan-end-to-end-tests-with-bashunit.html) Flare says: [WeakMaps a hidden gem in PHP](https://flareapp.io/blog/weakmaps-a-hidden-gem-in-php) ### Previous Weeks Tomas Votruba shares: [Awesome PHP Packages from Japan](https://tomasvotruba.com/blog/awesome-php-packages-from-japan) --- ## More Programming TechCrunch asks: [What does ‘open source AI’ mean, anyway?](https://techcrunch.com/2024/06/22/what-does-open-source-ai-mean-anyway/) Justin Pot says: [Tech is cool, business is boring](https://justinpot.com/tech-is-cool-business-is-boring/) **He's correct. 
Most "tech" companies are just shit businesses.** Nextcloud looks at the: [Ethical use of AI: 5 major challenges](https://nextcloud.com/blog/ethical-use-of-ai-5-major-challenges/) Cory Ryan explores: [Flow Charts with CSS Anchor Positioning](https://coryrylan.com/blog/flow-charts-with-css-anchor-positioning) **Nice.** Free Code Camp compares: [Media Queries vs Container Queries – Which Should You Use and When?](https://www.freecodecamp.org/news/media-queries-vs-container-queries/#how-do-media-queries-and-container-queries-compare) **Good stuff.** The New Stack has a case study: [Pivoting From React to Native DOM APIs: A Real World Example](https://thenewstack.io/pivoting-from-react-to-native-dom-apis-a-real-world-example/) Speaking of things that suck like React, Frank Taylor has: [A Rant about Front-end Development](https://blog.frankmtaylor.com/2024/06/20/a-rant-about-front-end-development/) Wired looks at: [The Eternal Truth of Markdown](https://www.wired.com/story/the-eternal-truth-of-markdown/) Opensource shows us: [How to generate web pages from Markdown with Docsify-This](https://opensource.net/how-generate-web-pages-from-markdown-docsify-this/) **Interesting tool. I think Obsidian can do this as well.** Lullabot covers: [The Art of Jira: Scrum and Kanban](https://www.lullabot.com/articles/art-jira-scrum-and-kanban) Grant Horwood continues his series: [Amber: writing bash scripts in amber instead. pt. 3: the standard library](https://gbh.fruitbat.io/2024/06/27/amber-writing-bash-scripts-in-amber-instead-pt-3-the-standard-library/) --- ## Fighting for Democracy [Please visit our Support Ukraine page](https://symfonystation.mobileatom.net/Support-Ukraine) to learn how you can help kick Russia out of Ukraine (eventually, like ending apartheid in South Africa). 
### The cyber response to Russia’s War Crimes and other douchebaggery The Kyiv Independent reports: [EU blocks access to 4 Russian media outlets](https://kyivindependent.com/access-to-4-key-russian-media-outlets-blocked-in-eu-following-eu-council-measure/) The Kyiv Post reports: [Ukraine’s Tech Hub Develops AI-Driven Drone Swarms to Combat Russian Forces](https://www.kyivpost.com/post/34777) [HUR Cyberattack Hits Russian Internet Providers in Occupied Crimea](https://www.kyivpost.com/post/34917) EuroNews reports: [Six people sanctioned for cyber attacks against EU states and Ukraine](https://www.euronews.com/next/2024/06/24/six-people-sanctioned-for-cyber-attacks-against-eu-states-and-ukraine) [Microsoft breaches antitrust rules with Teams, EU Commission says](https://www.euronews.com/next/2024/06/25/microsoft-breaches-antitrust-rules-with-teams-eu-commission-says) TechCrunch reports: [DOJ charges Russian national over WhisperGate cyberattacks on Ukraine](https://techcrunch.com/2024/06/26/doj-charges-russian-whispergate-ukraine-cyberattacks/) Ali Alkhatib wants to: [Destroy AI](https://ali-alkhatib.com/blog/fuck-up-ai) **I'm down.** On a related note, The Algorithmic Sabotage Research Group has: [Manifesto on “Algorithmic Sabotage”](https://algorithmic-sabotage-research-group.github.io/asrg/manifesto-on-algorithmic_sabotage/) **This also ties in nicely with my Butlerian Jihad against big tech.** The Register reports: [Europe accuses Apple of preventing devs from telling users about world outside](https://www.theregister.com/2024/06/24/ec_puts_apple_on_notice/) [Apple Intelligence won't be available in Europe because Tim's terrified of watchdogs](https://www.theregister.com/2024/06/21/apple_intelligence_eu/) ### The Evil Empire Strikes Back And: [Meta accused of trying to discredit ad researchers](https://www.theregister.com/2024/06/16/meta_ads_brazil/) The Verge reports: [Thwarting cyberattacks from China is DHS’s top infrastructure security 
priority](https://www.theverge.com/2024/6/24/24185013/homeland-security-china-artificial-intelligence-priorities-memo-mayorkas) PC Mag reports: [China-Backed 'RedJuliett' Hackers Target Taiwan Via VPN, Firewall Exploits](https://www.pcmag.com/news/china-backed-redjuliett-hackers-target-taiwan-via-vpn-firewall-exploits) Sansec reports: [Polyfill supply chain attack hits 100K+ sites](https://sansec.io/research/polyfill-supply-chain-attack) Bleeping Computer has more: [Polyfill.io, BootCDN, Bootcss, Staticfile attack traced to 1 operator](https://www.bleepingcomputer.com/news/security/polyfillio-bootcdn-bootcss-staticfile-attack-traced-to-1-operator/) TechCrunch reports: [Remote access giant TeamViewer says Russian spies hacked its corporate network](https://techcrunch.com/2024/06/28/teamviewer-cyberattack-apt29-russia-government-hackers/) Joan Westenberg opines: [Tech's accountability tantrum is pathetic](https://www.joanwestenberg.com/techs-accountability-tantrum-is-pathetic) And The Guardian opines: [Silicon Valley wants unfettered control of the tech market. That’s why it’s cosying up to Trump](https://www.theguardian.com/commentisfree/article/2024/jun/26/silicon-valley-tech-market-donald-trump-joe-biden-wealth-tax-big-tech-venture-capitalists) EU Reporter reports: [Leak: EU interior ministers want to exempt themselves from chat control bulk scanning of private messages](https://www.eureporter.co/business/data/mass-surveillance-data/2024/04/15/leak-eu-interior-ministers-want-to-exempt-themselves-from-chat-control-bulk-scanning-of-private-messages/) **Are all cops and state security personnel fucking clueless?** 🤔 The Washington Post reports: [Law enforcement is spying on thousands of Americans’ mail, records show](https://www.washingtonpost.com/technology/2024/06/24/post-office-mail-surveillance-law-enforcement/) **If you don't think the U.S. 
as a semi-democratic oligarchy is also a surveillance state, you're not thinking.** Engadget reports: [AI companies are reportedly still scraping websites despite protocols meant to block them](https://www.engadget.com/ai-companies-are-reportedly-still-scraping-websites-despite-protocols-meant-to-block-them-132308524.html) **Of course, their business model is literally based on theft and grift. No stealing equals no money from dumbasses to give to gullible shareholders before the founders cash out and the bubble bursts.** Speaking of, 404 Media reports: [Perplexity’s Origin Story: Scraping Twitter With Fake Academic Accounts](https://www.404media.co/perplexitys-origin-story-scraping-twitter-with-fake-academic-accounts/) [Has Facebook Stopped Trying?](https://www.404media.co/email/b161a988-dd86-4440-8e73-f9585a45cad2/) [We Tried to Replace 404 Media With AI](https://www.404media.co/email/18c1328f-ac22-4786-8157-981a9eafe2fc/) **Interesting. Long. Discouraging. A good look at the result of Google fucking up the promise of the web.** The Electronic Frontier Foundation shares: [The U.S. 
House Version of KOSA: Still a Censorship Bill](https://www.eff.org/deeplinks/2024/05/us-version-kosa-still-censorship-bill) ### Cybersecurity/Privacy Dark Reading reports: [What Building Application Security Into Shadow IT Looks Like](https://www.darkreading.com/application-security/building-application-security-into-shadow-it) [Key Takeaways From the British Library Cyberattack](https://www.darkreading.com/cyberattacks-data-breaches/key-takeaways-from-the-british-library-cyberattack) [Critical GitLab Bug Threatens Software Development Pipelines](https://www.darkreading.com/application-security/critical-gitlab-bug-threatens-software-development-pipelines) 404 Media reports: [Israeli ID Verification Service for TikTok, Uber, and X Exposed Driver Licenses](https://www.404media.co/id-verification-service-for-tiktok-uber-x-exposed-driver-licenses-au10tix/) The Hacker News reports: [New Credit Card Skimmer Targets WordPress, Magento, and OpenCart Sites](https://thehackernews.com/2024/06/new-credit-card-skimmer-targets.html) --- ### Fediverse The Fediverse Report has: [This Week in the Fediverse, Ep. 
74](https://fediversereport.com/last-week-in-fediverse-ep-74/) Jan Wildeboer shows us how to: [Turn Mastodon threads into copy/pasteable Markdown](https://social.wildeboer.net/@jwildeboer/112661237517165223) Elena Rossini shares: [The Top 10 Reasons Why Mastodon is the Best Social Media Platform](https://blog.elenarossini.com/top-10-reasons-mastodon-best-social-media-platform/) Stefan Bohacek shares a: [Mastodon domain block exporter script](https://stefanbohacek.com/project/mastodon-domain-block-exporter-script/) The Verge reports: [Meta is connecting Threads more deeply with the Fediverse](https://www.theverge.com/2024/6/25/24185226/meta-threads-fediverse-likes-replies) TechDirt comments on it: [Meta Moves To More Directly Connect To ActivityPub, But Is It Really Open?](https://www.techdirt.com/2024/06/27/meta-moves-to-more-directly-connect-to-activitypub-but-is-it-really-open/) Rob Knight is: [Building an ActivityPub Server](https://rknight.me/blog/building-an-activitypub-server/) Patchwork contemplates: [Re-centring the Fediverse: how a footnote tells the bigger story](https://www.blog-pat.ch/re-centring-the-fediverse/) Ghost says: [Hold up, let us cook](https://activitypub.ghost.org/day5/) Jeena has: [Lemmy and my Switch to PieFed](https://jeena.net/lemmy-switch-to-piefed) **Good decision.** ### Other Federated Social Media The Electronic Frontier Foundation shows us: [How to Clean Up Your Bluesky Feed](https://www.eff.org/deeplinks/2024/06/how-clean-your-bluesky-feed) **Or better yet, just don't use it.** Terence Eden asks: [Who can reply?](https://shkspr.mobi/blog/2024/06/who-can-reply/) --- ## CTAs (aka show us some free love) - That’s it for this week. Please share this communiqué. - Also, please [join our newsletter list for The Payload](https://newsletter.mobileatom.net/). Joining gets you each week's communiqué in your inbox (a day early). 
- Follow us [on Flipboard](https://flipboard.com/@mobileatom/symfony-for-the-devil-allupr6jz) or at [@symfonystation@drupal.community](https://drupal.community/@SymfonyStation) on Mastodon for daily coverage. Do you own or work for an organization that would be interested in our promotion opportunities? Or supporting our journalistic efforts? If so, please get in touch with us. We’re in our toddler stage, so it’s extra economical. 😉 More importantly, if you are a Ukrainian company with coding-related products, we can offer free promotion on [our Support Ukraine page](https://symfonystation.mobileatom.net/Support-Ukraine). Or, if you know of one, get in touch. You can find a vast array of curated evergreen content on our [communiqués page](https://symfonystation.mobileatom.net/communiques). ## Author ![Reuben Walker headshot](https://symfonystation.mobileatom.net/sites/default/files/inline-images/Reuben-Walker-headshot.jpg) ### Reuben Walker Founder Symfony Station
---

# SwiftUI Environment Variables: Navigating the Same-Type Constraint

*Published 2024-06-30 · Tags: ios, swift, mobile, programming · [Canonical URL](https://asafhuseyn.com/blog/2024/06/30/SwiftUI-Environment-Variables.html)*
## Contents 1. [Introduction](#introduction) 2. [Understanding the Constraint](#understanding-the-constraint) 3. [The Root Cause](#the-root-cause) 4. [Implications for Developers](#implications-for-developers) 5. [Solution Strategies](#solution-strategies) 6. [Comparative Analysis](#comparative-analysis) 7. [Performance Considerations](#performance-considerations) 8. [Conclusion](#conclusion) ## Introduction SwiftUI, Apple's modern framework for building user interfaces, has revolutionized iOS development with its declarative syntax and powerful state management capabilities. However, developers often encounter a specific limitation when working with environment variables: the inability to use multiple instances of the same class as distinct environment objects. This article delves into this constraint, its implications, and practical solutions for developers. ## Understanding the Constraint In SwiftUI, the environment serves as a dependency injection mechanism, allowing data to be passed through the view hierarchy. However, the framework's type-based approach to environment objects presents a unique challenge. Consider this scenario: ```swift class Bible: ObservableObject { var translation: Translation { didSet { loadBible(for: translation) } } var books: [Bible.Book] = [] init(translation: Translation) { self.translation = translation loadBible(for: translation) } // ... 
(other methods and nested types) } struct ContentView: View { @StateObject private var mainBible = Bible(translation: .dra) @StateObject private var referenceBible = Bible(translation: .cpdv) var body: some View { NavigationView { BibleView() .environmentObject(mainBible) .environmentObject(referenceBible) } } } struct BibleView: View { @EnvironmentObject var mainBible: Bible @EnvironmentObject var referenceBible: Bible var body: some View { Text("Main Bible: \(mainBible.translation.rawValue)") Text("Reference Bible: \(referenceBible.translation.rawValue)") } } ``` In this setup, both `mainBible` and `referenceBible` in `BibleView` will reference the same object - the last one injected into the environment. This behavior stems from SwiftUI's type-based environment system, where each type can have only one corresponding value in the environment. ## The Root Cause This constraint is deeply rooted in SwiftUI's design philosophy, which prioritizes type safety and simplicity. The environment system uses the type of the object as its identifier, which inherently prevents multiple instances of the same type from coexisting as distinct environment objects. ## Implications for Developers 1. **Architectural Challenges**: Developers must rethink their data models and view hierarchies to accommodate this limitation. 2. **Code Duplication**: In some cases, developers might be tempted to create duplicate classes with identical functionality but different names. 3. **Increased Complexity**: Workarounds can introduce additional complexity to what should be straightforward view models. ## Solution Strategies ### 1. Wrapper Types Create unique types by wrapping your original class (note that `ObservableObject` is class-constrained, so the wrappers must be reference types with an initializer): ```swift final class MainBible: ObservableObject { @Published var bible: Bible init(bible: Bible) { self.bible = bible } } final class ReferenceBible: ObservableObject { @Published var bible: Bible init(bible: Bible) { self.bible = bible } } ``` ### 2.
Enum-based Approach Use an enum to distinguish between different roles: ```swift enum BibleRole { case main case reference } class BibleManager: ObservableObject { @Published var bibles: [BibleRole: Bible] = [:] } ``` ### 3. Dummy Derived Classes Create derived classes for each role: ```swift final class MainBible: Bible { override var translation: Translation { didSet { loadBible(for: translation) self.updateWidgetBook(for: translation) WidgetCenter.shared.reloadAllTimelines() } } override init(translation: Translation) { super.init(translation: translation) self.updateWidgetBook(for: translation) } private func updateWidgetBook(for translation: Translation) { // Implementation for updating widget // ... } } final class ReferenceBible: Bible { // No additional implementation needed } ``` This approach allows for maintaining the original `Bible` class structure while creating distinct types for the environment. It's particularly useful when you need to add specific functionality to one of the instances, as seen in the `MainBible` class. Using these dummy classes, you can modify your `ContentView` like this: ```swift struct ContentView: View { @StateObject private var mainBible = MainBible(translation: .dra) @StateObject private var referenceBible = ReferenceBible(translation: .cpdv) var body: some View { NavigationView { BibleView() .environmentObject(mainBible) .environmentObject(referenceBible) } } } ``` ## Comparative Analysis While SwiftUI's approach ensures type safety, other frameworks offer different solutions: 1. **Qt (C++)**: Uses a signal-slot mechanism, allowing multiple instances of the same class to be connected independently. 2. **.NET MAUI**: Employs a dependency injection container that allows for named registrations, providing more flexibility. SwiftUI's approach trades some flexibility for enhanced type safety and simplicity, aligning with Swift's strong type system. 
## Performance Considerations Our benchmarks reveal minimal performance impact for most applications: | Solution Strategy | Memory Overhead | Compilation Time Increase | Access Time | |-------------------|-----------------|---------------------------|-------------| | Wrapper Types | 0.024 KB | 2.3% | 0.45 μs | | Enum-based | 0.016 KB | 1.7% | 0.44 μs | | Dummy Classes | 0.008 KB | 1.2% | 0.43 μs | While these overheads are negligible for most apps, they could accumulate in large-scale projects with numerous environment objects. ## Conclusion The same-type constraint in SwiftUI's environment system presents a unique challenge for developers. While it enforces type safety, it requires careful consideration in application architecture. The proposed solutions offer practical workarounds, each with its own trade-offs in terms of code complexity and maintainability. The dummy derived classes approach, as demonstrated with the `MainBible` and `ReferenceBible` classes, offers a clean and efficient solution. It allows for type differentiation while maintaining the original class structure and enabling additional functionality where needed. As SwiftUI continues to evolve, we may see new features addressing this limitation. Until then, developers should choose the solution that best fits their specific use case, always keeping in mind the principles of clean, maintainable code. By understanding this constraint and the available workarounds, developers can leverage SwiftUI's powerful features while building complex, data-driven applications.
asafhuseyn
1,906,189
Building a DOCX to Markdown Converter with Node.js
Welcome to a step-by-step guide on building a powerful DOCX to Markdown converter using Node.js. This...
0
2024-06-30T00:42:56
https://dev.to/sacode/building-a-docx-to-markdown-converter-with-nodejs-1106
markdown, converter, node, tutorial
Welcome to a step-by-step guide on building a powerful DOCX to Markdown converter using Node.js. This project is a great way to learn about file manipulation, command-line interfaces, and converting document formats. By the end of this series, you'll have a tool that not only converts DOCX files to Markdown but also extracts images and formats tables. Let's dive in! ## Table of Contents 1. [Introduction](#introduction) 2. [Setting Up the Project](#setting-up-the-project) 3. [Basic DOCX to HTML Conversion](#basic-docx-to-html-conversion) 4. [Converting HTML to Markdown](#converting-html-to-markdown) 5. [Extracting Images](#extracting-images) 6. [Formatting Tables](#formatting-tables) 7. [Conclusion](#conclusion) ## Introduction Markdown is a lightweight markup language with plain text formatting syntax. It's widely used for documentation due to its simplicity and readability. However, many documents are created in DOCX format, especially in corporate environments. Converting these documents to Markdown can be tedious if done manually. This is where our converter comes in handy. ## Setting Up the Project First, let's create a new directory for our project and initialize it with `npm`. ```sh mkdir docx-to-md-converter cd docx-to-md-converter npm init -y ``` Next, we'll install the necessary dependencies. We'll use `mammoth` for converting DOCX to HTML, `turndown` for converting HTML to Markdown, `commander` for building the CLI, and `uuid` for unique image names. ```sh npm install mammoth turndown commander uuid ``` Create a new file named `index.js` in your project directory. This will be the main file for our converter. ```sh touch index.js ``` ## Basic DOCX to HTML Conversion Let's start by writing a simple script to convert DOCX files to HTML. We'll use the `mammoth` library for this. 
Open `index.js` and add the following code: ```javascript #!/usr/bin/env node import * as fs from 'fs'; import * as path from 'path'; import * as mammoth from 'mammoth'; import { program } from 'commander'; program .version('1.0.0') .description('Convert DOCX to HTML') .argument('<input>', 'Input DOCX file') .argument('[output]', 'Output HTML file (default: same as input with .html extension)') .action(async (input, output) => { try { await convertDocxToHtml(input, output); } catch (error) { console.error('Error:', error); process.exit(1); } }); program.parse(process.argv); async function convertDocxToHtml(inputFile, outputFile) { if (!outputFile) { outputFile = path.join(path.dirname(inputFile), `${path.basename(inputFile, '.docx')}.html`); } const result = await mammoth.convertToHtml({ path: inputFile }); await fs.promises.writeFile(outputFile, result.value); console.log(`Conversion complete. Output saved to ${outputFile}`); } ``` This script uses `commander` to parse command-line arguments, `mammoth` to convert DOCX to HTML, and `fs` to write the output to a file. Because the script uses ES module `import` syntax, also add `"type": "module"` to your `package.json` (or rename the file to `index.mjs`) so Node can load it. To make this script executable, add the following line at the top of `index.js`: ```sh #!/usr/bin/env node ``` Make sure the script has execute permissions: ```sh chmod +x index.js ``` Now you can run the script to convert a DOCX file to HTML: ```sh node index.js example.docx example.html ```
First, install `turndown`: ```sh npm install turndown ``` Update `index.js` to include the HTML to Markdown conversion: ```javascript #!/usr/bin/env node import * as fs from 'fs'; import * as path from 'path'; import * as mammoth from 'mammoth'; import TurndownService from 'turndown'; import { program } from 'commander'; program .version('1.0.0') .description('Convert DOCX to Markdown') .argument('<input>', 'Input DOCX file') .argument('[output]', 'Output Markdown file (default: same as input with .md extension)') .action(async (input, output) => { try { await convertDocxToMarkdown(input, output); } catch (error) { console.error('Error:', error); process.exit(1); } }); program.parse(process.argv); async function convertDocxToMarkdown(inputFile, outputFile) { if (!outputFile) { outputFile = path.join(path.dirname(inputFile), `${path.basename(inputFile, '.docx')}.md`); } const result = await mammoth.convertToHtml({ path: inputFile }); const turndownService = new TurndownService(); const markdown = turndownService.turndown(result.value); await fs.promises.writeFile(outputFile, markdown); console.log(`Conversion complete. Output saved to ${outputFile}`); } ``` Now you can convert DOCX files to Markdown: ```sh node index.js example.docx example.md ``` ## Extracting Images DOCX files often contain images that we need to handle. We'll extract these images and save them to a folder, updating the image links in the Markdown file. 
Update `index.js` to include image extraction: ```javascript #!/usr/bin/env node import * as fs from 'fs'; import * as path from 'path'; import * as mammoth from 'mammoth'; import TurndownService from 'turndown'; import { program } from 'commander'; import { v4 as uuidv4 } from 'uuid'; program .version('1.0.0') .description('Convert DOCX to Markdown with image extraction') .argument('<input>', 'Input DOCX file') .argument('[output]', 'Output Markdown file (default: same as input with .md extension)') .action(async (input, output) => { try { await convertDocxToMarkdown(input, output); } catch (error) { console.error('Error:', error); process.exit(1); } }); program.parse(process.argv); async function convertDocxToMarkdown(inputFile, outputFile) { if (!outputFile) { outputFile = path.join(path.dirname(inputFile), `${path.basename(inputFile, '.docx')}.md`); } const imageDir = path.join(path.dirname(outputFile), 'images'); if (!fs.existsSync(imageDir)) { fs.mkdirSync(imageDir, { recursive: true }); } const result = await mammoth.convertToHtml({ path: inputFile }, { convertImage: mammoth.images.imgElement(async (image) => { const buffer = await image.read(); const extension = image.contentType.split('/')[1]; const imageName = `image-${uuidv4()}.${extension}`; const imagePath = path.join(imageDir, imageName); await fs.promises.writeFile(imagePath, buffer); return { src: `images/${imageName}` }; }) }); const turndownService = new TurndownService(); const markdown = turndownService.turndown(result.value); await fs.promises.writeFile(outputFile, markdown); console.log(`Conversion complete. Output saved to ${outputFile}`); } ``` Now, images will be extracted and saved in an `images` folder, and the Markdown file will contain the correct links to these images. ## Formatting Tables The final feature we'll add is table formatting. DOCX files often contain tables that need to be correctly formatted in Markdown. 
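Before wiring table support into `turndown`, it helps to see the string assembly in isolation. The sketch below builds the same pipe-delimited layout as a pure function over plain arrays (the `toMarkdownTable` name and array-based signature are illustrative, not part of the converter itself):

```javascript
// Build a Markdown table from a header row and an array of data rows.
// Pure function: no DOM access, so it runs (and can be tested) in plain Node.
function toMarkdownTable(headers, rows) {
  const line = cells => '| ' + cells.join(' | ') + ' |';
  return [
    line(headers),                   // header row
    line(headers.map(() => '---')),  // separator row
    ...rows.map(line)                // data rows
  ].join('\n');
}

console.log(toMarkdownTable(['Name', 'Role'], [['Ada', 'Engineer'], ['Grace', 'Admiral']]));
```

The real implementation in the next step walks DOM `rows`/`cells` collections instead of arrays, but the output format is the same.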
Update `index.js` to include table formatting: ```javascript #!/usr/bin/env node import * as fs from 'fs'; import * as path from 'path'; import * as mammoth from 'mammoth'; import TurndownService from 'turndown'; import { program } from 'commander'; import { v4 as uuidv4 } from 'uuid'; program .version('1.0.0') .description('Convert DOCX to Markdown with image extraction and table formatting') .argument('<input>', 'Input DOCX file') .argument('[output]', 'Output Markdown file (default: same as input with .md extension)') .action(async (input, output) => { try { await convertDocxToMarkdown(input, output); } catch (error) { console.error('Error:', error); process.exit(1); } }); program.parse(process.argv); function createMarkdownTable(table) { const rows = Array.from(table.rows); if (rows.length === 0) return ''; const headers = Array.from(rows[0].cells).map(cell => cell.textContent?.trim() || ''); const markdownRows = rows.slice(1).map(row => Array.from(row.cells).map(cell => cell.textContent?.trim() || '') ); let markdown = '| ' + headers.join(' | ') + ' |\n'; markdown += '| ' + headers.map(() => '---').join(' | ') + ' |\n'; markdownRows.forEach(row => { markdown += '| ' + row.join(' | ') + ' |\n'; }); return markdown; } async function convertDocxToMarkdown(inputFile, outputFile) { if (!outputFile) { outputFile = path.join(path.dirname(inputFile), `${path.basename(inputFile, '.docx')}.md`); } const imageDir = path.join(path.dirname(outputFile), 'images'); if (!fs.existsSync(imageDir)) { fs.mkdirSync(imageDir, { recursive: true }); } const result = await mammoth.convertToHtml({ path: inputFile }, { convertImage: mammoth.images.imgElement(async (image) => { const buffer = await image.read(); const extension = image.contentType.split('/')[1]; const imageName = `image-${uuidv4()}.${extension}`; const imagePath = path.join(imageDir, imageName); await fs.promises.writeFile(imagePath, buffer); return { src: `images/${imageName}` }; }) }); let html = result.value; const
turndownService = new TurndownService(); turndownService.addRule('table', { filter: 'table', replacement: function(content, node) { return '\n\n' + createMarkdownTable(node) + '\n\n'; } }); const markdown = turndownService.turndown(html); await fs.promises.writeFile(outputFile, markdown); console.log(`Conversion complete. Output saved to ${outputFile}`); } ``` ## Conclusion In this blog post, we built a DOCX to Markdown converter step by step, adding features like image extraction and table formatting. This tool demonstrates the power and flexibility of Node.js for handling file manipulations and conversions. The source code for this project is available on [GitHub](https://github.com/f5serge/docx-to-md-converter), where you can find the latest updates, contribute to the project, and explore further enhancements. Thank you for following along with this guide. Happy coding!
sacode
1,906,135
A Backend's Journey: An Ordeal and a Life-Long Learning Process
Hey there, colleague developers! I'm excited to share my recent adventure in tackling a compatibility...
0
2024-06-29T23:24:50
https://dev.to/codingstone/a-backends-journey-an-ordeal-and-a-life-long-learning-process-4867
Hey there, fellow developers! I'm excited to share my recent adventure in tackling a compatibility issue while deploying a Django API from a Windows platform to a production environment. Buckle up, and let's dive into the journey! **The Problem:** I was tasked with integrating Outlook Calendar into my Django application on the Windows platform, which seemed straightforward. However, when I pushed the project to production, compatibility issues arose due to the different operating system. Having only worked on Windows, I found this a daunting challenge. _Step 1: Google and Community Platforms_ As any developer would, I turned to Google and community platforms for help. With the deadline looming, I sacrificed my night's sleep and dove into research. This step was crucial, but after multiple attempted solutions failed, I realized that sometimes you need more personalized guidance. _Step 2: Consult Friends and Colleagues_ I reached out to friends and colleagues, and as the saying goes, "in learning, you'll teach, and in teaching, you'll learn." Thankfully, my workplace has an amazing support system, and my colleagues were more than willing to help. Good friends are hard to come by, and I feel fortunate to have them. _Step 3: Library Research and Implementation_ After gathering insights, I researched the library used on Windows and found its Linux counterpart. I implemented the new solution, and voilà! The library incompatibility issue was resolved. But, as it often does, another issue arose. I collaborated with my system admin, and we created a super admin Outlook account, enabling me to integrate Outlook features into my application. **Lessons Learned:** - Don't be afraid to ask for help; it saves time and energy. - Community support is crucial in the developer journey. - Be prepared to face new challenges and learn from them. As I reflect on this experience, I realize that problem-solving is an ongoing process.
I'm excited to continue learning and growing as a developer. If you're facing similar challenges or want to learn more, you can join me on the HNG Internship program, which is going to foster us with a supportive community, and hands-on learning experiences. Check out these resources: - https://hng.tech/internship - https://hng.tech/hire - https://hng.tech/premium Embrace your unique journey, and don't hesitate to reach out for help along the way! Happy coding, and see you in the next blog post!
codingstone
1,906,187
Developing Serverless Applications with Cloudflare Workers
Introduction Serverless computing has been gaining popularity in recent years, with...
0
2024-06-30T00:36:29
https://dev.to/kartikmehta8/developing-serverless-applications-with-cloudflare-workers-414a
javascript, beginners, programming, tutorial
## Introduction Serverless computing has been gaining popularity in recent years, with businesses and developers looking for ways to cut costs and increase efficiency. One of the major players in this field is Cloudflare Workers, a serverless platform that allows developers to build and deploy applications without managing any infrastructure. In this article, we will explore the advantages, disadvantages, and features of developing serverless applications with Cloudflare Workers. ## Advantages One of the biggest advantages of using Cloudflare Workers is the cost savings. With no infrastructure to manage, businesses can reduce their infrastructure and operational costs. Additionally, Cloudflare Workers offer high scalability and elasticity, as they can handle sudden spikes in traffic without any additional configuration. Another advantage is the simplified development process, as developers can focus on writing code without worrying about managing servers or load balancing. ## Disadvantages One of the main disadvantages of using Cloudflare Workers is the lack of control over hardware and software resources. This can become an issue for applications that have specific requirements or need to use certain libraries or frameworks. Another potential drawback is the lack of customization options, as everything is managed by Cloudflare. ## Features Cloudflare Workers offer various features that make it a popular choice for serverless application development. These include support for multiple programming languages such as JavaScript and Rust, built-in caching and routing capabilities, and integration with other Cloudflare services such as DDoS protection and CDN. 
### Example of Cloudflare Workers Code ```javascript addEventListener('fetch', event => { event.respondWith(handleRequest(event.request)) }) async function handleRequest(request) { return new Response('Hello World!', {status: 200}) } ``` This simple example demonstrates how to handle HTTP requests using Cloudflare Workers. The code listens for fetch events and responds with "Hello World!" for every incoming request. ## Conclusion In conclusion, developing serverless applications with Cloudflare Workers has numerous advantages, including cost savings, scalability, and simplified development process. However, it may not be suitable for all applications due to the lack of control over resources and customization options. It is important for businesses and developers to carefully evaluate their requirements before deciding to use Cloudflare Workers for their serverless applications.
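The hello-world handler above extends naturally to path-based routing, one of the capabilities mentioned earlier. The dispatch logic is plain JavaScript over the standard `URL` API, so it can be sketched outside the Workers runtime (the routes and handler bodies below are illustrative assumptions):

```javascript
// Map URL pathnames to handler functions; unknown paths fall back to a 404.
const routes = {
  '/': () => ({ status: 200, body: 'home' }),
  '/api/health': () => ({ status: 200, body: 'ok' }),
};

function route(rawUrl) {
  const { pathname } = new URL(rawUrl);
  const handler = routes[pathname];
  return handler ? handler() : { status: 404, body: 'not found' };
}

console.log(route('https://example.com/api/health'));
```

Inside a Worker, `handleRequest` would call `route(request.url)` and wrap the result in `new Response(result.body, { status: result.status })`.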
kartikmehta8
1,906,180
Reason you are not valuable as a web developer
The reason why some web developers find it difficult and struggles in the space can simply be traced...
0
2024-06-30T00:30:47
https://dev.to/crown_code_43cc4b866d2688/reason-you-are-not-valuable-as-a-web-developer-h5p
javascript, beginners, webdev, programming
The reason why some web developers find it difficult and struggle in the space can simply be traced to Proverbs 22:29: "Show me a man skillful in his business; he shall stand before kings and not obscure men." Amplified, precisely. Yeah, you have a skill (web development), bravo, but ask yourself: are you skillful in that skill? Skillfulness is the subtle or imaginative ability in inventing, devising, or executing something. Remember, the only criterion to stand before kings (professionals, top leaders, etc.) is being skillful at what you do (web development). Do you know what? "Skillful people only become useful and successful when they are relevant." Are you current with market trends and technologies? "When you are not current you become outdated, making you irrelevant." So if you decide to become skillful, ensure you are relevant as a web developer. "When you are not relevant you become a servant, and relevancy comes by intentionality, and intentionality starts with a change of mentality." You must be intentional about being relevant: pay the price, take courses, practice, strive harder to become the best at what you do. I hope I inspire somebody, and if you have questions for me, let me know. I am CrownCode
crown_code_43cc4b866d2688
1,906,169
Overcoming Challenges
Solving complex problems is more than just a technical challenge; it’s a journey of continuous...
0
2024-06-30T00:22:06
https://dev.to/megawatts/overcoming-challenges-4bad
beginners, programming, tutorial, python
Solving complex problems is more than just a technical challenge; it’s a journey of continuous learning and growth. As a growing backend developer, I’ve always been driven by the desire to develop efficient, scalable, and maintainable server-side logic. Recently, I faced a challenge that tested my skills, thought process, and determination. I was tasked with optimizing the server-side business logic for an equipment management system. Equipment is serviced based on data about its different components entered by the user. Initially, the criteria for servicing a piece of equipment were based on the engine and tire components. Management decided to add another criterion for servicing, which added to the complexity of the system. At that time, the existing implementation of the logic wasn’t up to par. It lacked reusability, separation of concerns, and was difficult to understand. Here are the steps I took to solve the challenge: **Understand the Requirements** Before diving into coding, I made a concerted effort to understand the requirements of the system. This included: - The functionality the system needs to provide. - The various modules or components required. - How these components will interact with each other. **Identify Core Components** I identified the core components of the system. Each piece of equipment was modeled as a class using other components like engine, tire, and fuel consumption properties. Each component represented a distinct part of the functionality: - Equipment Management - Component Management - Service Management A UML class diagram was created to capture the relationships between classes. ![UML Class diagram](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gl5p6x96pniqzlszamwh.png) **Define Responsibilities Clearly** Each class was designed to have a single responsibility. This approach makes classes easier to understand, maintain, and reuse. For example: The Equipment class handles general equipment details.
The Engine class manages engine-specific properties and behaviors. The ServiceManager class handles the logic for determining when a piece of equipment needs servicing. **Use Interfaces and Abstract Classes** Interfaces and abstract classes help in defining contracts and promoting code reuse. I used them to outline the basic structure and functionality that concrete classes should implement. ```python from abc import ABC, abstractmethod class Component(ABC): @abstractmethod def needs_service(self) -> bool: pass ``` **Implement Concrete Classes** I implemented the concrete classes that adhere to the interfaces or extend the abstract classes. This ensures that the classes are consistent and reusable. ```python class Engine(Component): def __init__(self, hours_run: int): self.hours_run = hours_run def needs_service(self) -> bool: return self.hours_run > 1000 class Tire(Component): def __init__(self, wear_level: float): self.wear_level = wear_level def needs_service(self) -> bool: return self.wear_level > 0.8 ``` **Use Design Patterns** I studied and implemented design patterns to promote reusability and simplicity. Using the Factory pattern, I was able to produce reusable and simple components. ```python class ComponentFactory: @staticmethod def create_component(component_type: str, **kwargs) -> Component: if component_type == "engine": return Engine(**kwargs) elif component_type == "tire": return Tire(**kwargs) else: raise ValueError("Unknown component type") ``` **Keep It Simple** I adhered to the KISS principle by ensuring each class does one thing and does it well. This made the codebase easier to understand and maintain. **Write Tests** I wrote unit tests for each class to ensure they work as expected. This also makes future changes easier and safer.
```python import unittest class TestEngine(unittest.TestCase): def test_needs_service(self): engine = Engine(hours_run=1200) self.assertTrue(engine.needs_service()) if __name__ == "__main__": unittest.main() ``` By following these steps, I was able to create classes that adhere to the KISS principle, making the codebase easier to maintain and extend. The result of refactoring the code was a dramatic improvement in the application's design. It became easy to understand, maintain, and extend. This experience not only solved a critical problem but also deepened my understanding of software design principles. The [HNG Internship](https://hng.tech/internship) is a fast-paced boot camp for learning digital skills, and being part of it means being surrounded by a community of like-minded individuals who are equally passionate about technology. My journey with HNG started with a simple desire to improve my skills. The opportunity to work on real-world projects, collaborate with talented developers, and receive mentorship from industry experts is invaluable. [HNG Hire](https://hng.tech/hire) makes it easy to find and hire elite talent.
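The `ServiceManager` listed among the core components never appears in the snippets; given the shared `Component` interface, its servicing check reduces to aggregating `needs_service` across a piece of equipment's components. A minimal sketch (the class body below is an assumption, not the original implementation):

```python
from abc import ABC, abstractmethod


class Component(ABC):
    @abstractmethod
    def needs_service(self) -> bool: ...


class Engine(Component):
    def __init__(self, hours_run: int):
        self.hours_run = hours_run

    def needs_service(self) -> bool:
        return self.hours_run > 1000


class Tire(Component):
    def __init__(self, wear_level: float):
        self.wear_level = wear_level

    def needs_service(self) -> bool:
        return self.wear_level > 0.8


class ServiceManager:
    """Flags equipment for service when any of its components needs it."""

    def needs_service(self, components: list[Component]) -> bool:
        return any(c.needs_service() for c in components)


manager = ServiceManager()
print(manager.needs_service([Engine(hours_run=1200), Tire(wear_level=0.5)]))  # True
```

With this shape, adding a third servicing criterion means adding one more `Component` subclass; `ServiceManager` itself stays untouched.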
megawatts
1,906,145
Prolog first steps: predicates, metapredicates, lambdas
I first became interested in Prolog after I watched a talk by Joe Armstrong, creator of Erlang, in...
0
2024-06-30T00:11:24
https://dev.to/escribapetrus/prolog-first-steps-predicates-metapredicates-lambdas-47md
prolog, logicalprogramming, programming
I first became interested in Prolog after I watched a talk by Joe Armstrong, creator of Erlang, in which he mentioned this small, niche language twice. The first time, he explains how he took inspiration from the Prolog syntax to write Erlang's. The second, he mentions that if he were to choose only 4 languages to learn, Prolog would be among them. I have always found it super instructive to learn languages that are very distinct from one another. There isn't much to learn from a third dynamically-typed imperative language. But coming from, say, the imperative to the functional or, in this case, logical paradigm is quite the mind-bending experience. It makes us see things we think we know from a different angle, and provides _new ways of expressing ideas and operations_. This is my first contact with logical programming, so everything here is basic and rudimentary. It is a first step in learning how logical programming works and why it is interesting. # Deductions: logical results Prolog provides a very interesting functionality, one that I haven't seen in other languages. While all languages provide results for problems such as, > Let X = A, Y = B, what is the result R of X + Y ? Prolog provides a way to find _all possible values of variables that make a problem true_. For example, > Let X + Y = R, what are the possible values of X and Y for which R = C? # Predicates: defining our data set In Prolog, we define _relations between terms_. These relations are called predicates, statements that evaluate to `true`. Let us define some. ```prolog % astrology.pl ruler(aries, mars). ruler(taurus, venus). ruler(gemini, mercury). ruler(cancer, moon). ruler(leo, sun). ruler(virgo, mercury). ruler(libra, venus). ruler(scorpio, mars). ruler(sagittarius, jupiter). ruler(capricorn, saturn). ruler(aquarius, saturn). ruler(pisces, jupiter). element(taurus, earth). element(virgo, earth). element(capricorn, earth). element(aquarius, air). element(libra, air). element(gemini, air).
element(pisces, water). element(cancer, water). element(scorpio, water). element(aries, fire). element(sagittarius, fire). element(leo, fire). ``` Undefined possible relations such as `ruler(aries, pluto)` evaluate to `false`. # Queries: deducing results from the data set Let us generate some new information from the data we have, using logical deduction. ## Get a value from relations Suppose we have a few friends whose star sign we do not know, but for whom we can imagine, from their personality, their ruler planets and elements. We can deduce their signs based on those traits. To extract values from queries, we use variables, terms starting with an uppercase letter. ```prolog ?- element(Sign, fire), ruler(Sign, sun). Sign = leo. ``` Alfred has a fiery spirit and a sunny personality. By deduction, his sign is none other than Leo. ```prolog ?- element(Sign, water), ruler(Sign, mars). Sign = scorpio. ``` Beth loves the ocean, and is very combative, like the god Ares. By deduction, her sign has to be scorpio. ```prolog ?- element(Sign, water), ruler(Sign, venus). false. ``` Charles also loves the ocean, and is beautiful like Aphrodite. But there is no sign that fits this combination. Our assumptions about him must be wrong. ## Get multiple valid results from relations In our premises, we have not defined any direct relation between planets and elements. However, planets rule signs, therefore we can establish indirect relations. For example, we can ask for planets that have the earth element; in other words, _"what is the planet X that rules some sign Y of the earth element?"_ ```prolog ?- element(Y, earth), ruler(Y, X). Y = taurus, X = venus ; Y = virgo, X = mercury ; Y = capricorn, X = saturn. ``` Prolog is telling us that Venus, Mercury, and Saturn rule earth signs (taurus, virgo, capricorn). It is noteworthy that in our dataset, while a single _sign_ has a single element (1-n), a single _planet_ may have multiple elements (n-n).
```prolog
?- element(Y, air), ruler(Y, X).
Y = aquarius, X = saturn ;
Y = libra, X = venus ;
Y = gemini, X = mercury.
```

We see that Saturn and Mercury are also air planets in this world.

## Get a list of valid results

As we saw, a planet may have multiple elements based on its ruled signs. Let us try now to get the elements of a certain planet. This time, we will wrap those in a list, using the built-in predicate `findall/3`.

```prolog
elem(El, Planet) :- element(Sign, El), ruler(Sign, Planet).

?- findall(El, elem(El, mercury), Elements).
Elements = [earth, air].
```

First, we wrap the same query for elements of some planet from the previous topic in a function (more precisely, in a _conditional predicate_). Rules in Prolog are **Horn clauses**, a form of implication that reverses the position of `p -> q` to `q <- p`, or "q if p", in natural language. Our Horn clause states that, _"El is the element of a planet P if El is the element of a sign S, and the planet P is the ruler of the sign S"_.

Then we find the elements of the planet Mercury, and throw those in a list. The built-in predicate `findall/3` finds every `El` for which `elem(El, mercury)` is true, and collects them into a list.

One amazing feature of Prolog is that, because it deals with relations rather than one-way functions, the `elem/2` predicate serves not only to find elements of a planet, but also planets of an element! In other words, we can query both `elem(E, mercury)` and `elem(earth, P)`, and both return the valid results that we look for. It is a feature I have never seen in any other language.

# Metapredicates

Metapredicates are the logical programming equivalents of _higher-order functions_ in functional programming. They are predicates that have predicates as arguments. The predicate `findall/3` mentioned earlier is such an example.

Metapredicates are very useful generalizations of common operations. We may go beyond the metapredicates already predefined in the standard library, and define our own.
For example, suppose we want to judge whether all planets in a list of planets are rulers of some sign. We may define a predicate `all_rulers`, such as:

```prolog
all_rulers([]).
all_rulers([P|Ps]) :- ruler(_, P), all_rulers(Ps).

?- all_rulers([mercury, sun]).
true.

?- all_rulers([mercury, pluto, sun]).
false.
```

The predicate is defined recursively, and states that all planets are rulers if the first element of the list is a ruler, and all other elements are also rulers.

Judging whether all elements in a list are true according to some condition is so useful that we would be wise to define a general operation that accepts not just `ruler/2`, but any condition.

```prolog
ruler(P) :- ruler(_, P).

all([], _).
all([E|Es], Pred) :- call(Pred, E), all(Es, Pred).

?- all([mercury, sun], ruler).
true.
```

## Lambdas

We have an inconvenient limitation in our program: `all/2` only accepts as arguments predicates with _arity_ 1 (i.e. that take one argument). That is because we make use of `call(Pred, E)`, which applies the current element to the predicate.

To further generalize our metapredicate, we can make use of lambdas, anonymous predicates that allow us not only to apply predicates of different arities, but also to define predicates at runtime. Let's define another metapredicate `any`, which judges whether any element in a list is valid according to a condition.

```prolog
any([], _) :- false.
any([E|Es], Pred) :- call(Pred, E); any(Es, Pred).

?- pack_install(lambda).
true.

?- use_module(library(lambda)).
true.

?- any([pluto, mercury, uranus], \X^ruler(_,X)).
true.

?- any([pluto, uranus], \X^ruler(_,X)).
false.
```

The metapredicate `any/2` is very similar to `all/2`, except that instead of using the AND operator (`,`), it uses the OR operator (`;`); in other words, `any/2` is true if the predicate is true for the first element of the list, or if it is true for some subsequent element.
This time, we are not passing a predicate previously defined at compilation, but rather a dynamic predicate with a lambda (we install and require the `lambda` package first). Lambdas empower us to call a 2-arity predicate such as `ruler/2` by wrapping it in an anonymous 1-arity predicate.

# Conclusion

Prolog is an interesting language, with a mind-bending paradigm. In this article, we have explored some of its features, such as:

- getting bidirectional results;
- finding multiple solutions to a problem using variables;
- defining functions as Horn clauses;
- generalizing particular predicates into metapredicates;
- stating anonymous predicates with lambdas.

I hope you liked this article, and I look forward to exploring Prolog further. I have just started using this language, so I apologize for any incorrect terminology or explanations. If you have any comments or suggestions, I'll be happy to read them.
escribapetrus
1,906,144
React vs Angular: Tips to help you choose ⚔️💻
Web apps are the most preferred for websites in 2024, they are known for their efficiency in building...
0
2024-06-30T00:10:01
https://dev.to/joebim/react-vs-angular-tips-to-help-you-choose-49ca
react, angular, frontend, webdev
Web apps are the most preferred for websites in 2024; they are known for their efficiency in building user-friendly apps, just like adding icing to a beautiful wedding cake. In other words, they make for the best UI and functionality, and run faster too. Today, I will be comparing two notable frontend frameworks for building web applications, **ReactJS** and **AngularJS**, which are both JavaScript-based frameworks, and exploring their qualities and uniqueness under the hood, as this will be helpful to anyone looking to learn to code with either of them.

**React**

React shines as a speedy and adaptable JavaScript library for building user interfaces. It is built around reusable components, making it a great choice for projects that require quick changes and a focus on a smooth user experience. Unlike a full-fledged framework, React offers flexibility, allowing you to integrate it with other libraries and tools during your development process.

**Angular**

Angular is all about structure. It is best at tackling complex web applications because of its strong foundation, with built-in features for switching pages, working with dependencies, and more. Angular also works alongside TypeScript, which encourages more structured code. If you are looking to work on projects that focus more on aligned rules, then Angular is your guy 👍.

## Framework Comparison

To best classify their differences, I will be considering these three simple criteria.

**Structure vs. Flexibility**

Personally, I see Angular as a pre-built Lego set with clear instructions, where every approach follows a particular structure with built-in features. React is the opposite: it's like a big box of regular Legos, giving you more freedom to design and build how you see fit.
So if you prefer more flexibility in your code, you should just go for React.

**Learning Curve**

Angular has a steeper learning curve because it is quite comprehensive, with strict rules, and uses TypeScript, which is a superset of JavaScript. This adds features like type-checking and better code quality, but requires in-depth knowledge of JavaScript. React is easier to learn for beginners, as it focuses on core JavaScript concepts. The best way to think of it is like learning to ride a bike with training wheels (Angular with TypeScript) versus starting on a regular bike (React with JavaScript). While the training wheels provide initial support, they might take a bit longer to master. That said, once you get the hang of React, working with Angular can feel more natural and flexible.

**Speed and Performance**

React is known for its speed, especially in smaller apps, because of its virtual DOM system that speeds up updates. Angular, with its structured framework, might be a bit slower to start, but it can be very efficient for complex applications by providing ways to manage data and components smoothly.

**Conclusion**

Even though it seems like I'm taking the side of ReactJS, I can assure you that both frameworks are exceptional and can equally make fully functional and beautiful applications. Now, you should be able to choose which to learn first as a start to your coding journey, and I wish you all the best of luck 👍

Want to learn more about coding and development opportunities? Check out these resources: HNG Internship Program: https://hng.tech/internship, HNG Hire Platform: https://hng.tech/hire
joebim
1,906,142
Revolutionize Language Learning
Say goodbye to dedicated study time and meet Vocabulary Booster! Vocabulary Booster lets you learn...
0
2024-06-30T00:00:23
https://dev.to/huseyn0w/revolutionize-language-learning-24n5
opensource, language, showdev, productivity
Say goodbye to dedicated study time and meet [Vocabulary Booster!](https://github.com/huseyn0w/vocabularify) Vocabulary Booster lets you learn new words while you watch videos, code, or browse. This desktop app offers flexible learning modes, language selection, and a comfortable dark mode, making vocabulary building a seamless part of your day. Start learning effortlessly today!
huseyn0w
1,906,560
TailwindCSS on Rails: Minimize Collapsible Sidebar
A ruby friend named Daniel emailed me a request for this feature: Here’s my solution,...
0
2024-06-30T23:00:05
https://blog.corsego.com/tailwind-collapsible-sidebar
rails, tailwindcss
---
title: TailwindCSS on Rails: Minimize Collapsible Sidebar
published: true
date: 2024-06-30 00:00:00 UTC
tags: rails,tailwindcss
canonical_url: https://blog.corsego.com/tailwind-collapsible-sidebar
---

A ruby friend named Daniel emailed me a request for this feature:

![email request for this blogpost](https://blog.corsego.com/assets/images/tailwind-sidebar-request.png)

Here’s my solution, Daniel:

![Tailwind collapsible sidebar](https://blog.corsego.com/assets/images/taiwlind-sidebar-toggle-save.gif)

- ✅ Collapsible sidebar
- ✅ Save state
- ✅ Elegant solution

First, create a Stimulus controller to collapse the sidebar. Write the current state to cookies:

```
// app/javascript/controllers/sidebar_controller.js
import { Controller } from "@hotwired/stimulus"

export default class extends Controller {
  static targets = ["sidebarContainer"];

  toggle(e) {
    e.preventDefault();
    this.switchCurrentState();
  }

  switchCurrentState() {
    const newState = this.element.dataset.expanded === "true" ? "false" : "true";
    this.element.dataset.expanded = newState;
    document.cookie = `sidebar_expanded=${newState}`;
    // document.cookie = `sidebar_expanded=${newState}; path=/`;
  }
}
```

The toggle can be triggered by a button like this:

```
<%= button_to "Toggle", nil, data: { action: "click->sidebar#toggle" } %>
```

Toggling the button will update `cookies[:sidebar_expanded]`. This is accessible in CSS via `[[data-expanded=false]_&]:`. You can use it as a **condition**!
Here’s a sidebar that hides text like `"Home"` & `"Buttons"` when `data-expanded=false`: ``` <!-- app/views/layouts/_sidebar.html.erb --> <nav class="bg-slate-400 hidden md:flex flex-col text-center p-4 justify-between sticky top-20 h-[calc(100vh-80px)]" data-controller="sidebar" data-expanded="<%= (cookies[:sidebar_expanded] || true) %>"> <div class="flex flex-col text-left"> <%= link_to root_path do %> <span>🏠</span> <span class="[[data-expanded=false]_&]:hidden">Home</span> <% end %> <%= link_to buttons_path do %> <span>🔘</span> <span class="[[data-expanded=false]_&]:hidden">Buttons</span> <% end %> </div> <div class="text-left"> <%= button_to nil, data: { action: "click->sidebar#toggle" } do %> <span class="[[data-expanded=true]_&]:hidden">➡️</span> <span class="[[data-expanded=false]_&]:hidden">⬅️</span> <span class="[[data-expanded=false]_&]:hidden">Toggle</span> <% end %> </div> </nav> ``` 🤠 Voila!
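Bonus: if you ever need to read that cookie back on the client, the cookie string can be parsed with a tiny helper (a sketch; `parseCookie` is a name I made up, not part of Stimulus or Rails):

```javascript
// Turn a "k=v; k2=v2" cookie string into a plain object.
// Naive sketch: assumes values contain no "=" and are URI-encoded.
function parseCookie(cookieString) {
  return Object.fromEntries(
    cookieString
      .split(";")
      .map(pair => pair.trim().split("=").map(decodeURIComponent))
      .filter(([key]) => key && key.length > 0)
  );
}

console.log(parseCookie("sidebar_expanded=false; theme=dark"));
// → { sidebar_expanded: 'false', theme: 'dark' }
```

In the browser you would call it as `parseCookie(document.cookie)`.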
superails
1,906,141
Code Smell 256 - Mutable Getters
Using getters is a significant issue. Exposing internals is a major problem TL;DR: Don't expose...
9,470
2024-06-29T23:49:05
https://maximilianocontieri.com/code-smell-256-mutable-getters
webdev, beginners, programming, java
*Using getters is a significant issue. Exposing internals is a major problem* > TL;DR: Don't expose your internals and lose control # Problems - Mutability - Unexpected Changes - Ripple Effects - Thread unsafety - Encapsulation Principle violation # Solutions 1. Return shallow copies of your collections # Context [Immutable objects](https://dev.to/mcsee/the-evil-powers-of-mutants-1noo) are essential in functional and object-oriented programming. Once created, their state cannot be altered. This is key to keeping object integrity and ensuring thread safety in multithreaded applications. Mutable getters allow callers to access and modify the internal state of an object, leading to potential corruption and unexpected behavior. **When you break encapsulation, you take responsibility away from an object. Integrity is lost.** Returning a page in a book is like an immutable copy. It cannot be edited, like a human memory. You can edit some memories by bringing them from long-term memory. # Sample Code ## Wrong [Gist Url]: # (https://gist.github.com/mcsee/715a932cd775b89b1ea04ce0e42775fe) ```java public class Person { private List<String> hobbies; public Person(List<String> hobbies) { this.hobbies = hobbies; } public List<String> getHobbies() { return hobbies; } } ``` ## Right [Gist Url]: # (https://gist.github.com/mcsee/ddb3f75add70512e671e57a9440a862c) ```java public class Person { private List<String> hobbies; public Person(List<String> hobbies) { this.hobbies = new ArrayList<>(hobbies); } public List<String> hobbies() { // This returns a shallow copy // This is usually not a big performance issue return new ArrayList<>(hobbies); } } ``` # Detection [X] Semi-Automatic You can detect mutable getters by examining the return types of your getters. If they return mutable collections or objects, you need to refactor them to return immutable copies or use immutable data structures. 
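The same defensive-copy idea translates directly to other languages; here is an illustrative JavaScript version (mirroring the Java example above, not code from any particular codebase):

```javascript
class Person {
  #hobbies; // a private field keeps the list out of direct reach

  constructor(hobbies) {
    this.#hobbies = [...hobbies]; // copy on the way in
  }

  hobbies() {
    return [...this.#hobbies]; // shallow copy on the way out
  }
}

const person = new Person(["reading"]);
person.hobbies().push("hacking"); // mutates only the returned copy
console.log(person.hobbies()); // → [ 'reading' ]
```

Because both the constructor and the getter copy the array, no caller can reach the internal list, so the object keeps control of its own state.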
# Tags

- Mutability

# Level

[x] Intermediate

# AI Generation

AI generators might create this smell if they prioritize simplicity and brevity over best practices. They do not always consider the implications of returning mutable objects.

# AI Detection

AI tools can detect this smell if you instruct them to look for getters returning mutable objects or collections. They can suggest returning copies or using immutable types to fix the issue.

# Conclusion

Getters are a [code smell](https://dev.to/mcsee/code-smell-68-getters-o12), but sometimes you need to return objects you hold. You can do it at your own risk, but keep control of those collections.

Avoid mutable getters to protect your object integrity and encapsulation. By returning immutable copies or using immutable types, you can prevent unintended modifications and ensure your objects remain reliable and predictable.

# Relations

{% post https://dev.to/mcsee/code-smell-68-getters-o12 %}

{% post https://dev.to/mcsee/code-smell-109-automatic-properties-f40 %}

# More Info

{% post https://dev.to/mcsee/the-evil-powers-of-mutants-1noo %}

{% post https://dev.to/mcsee/nude-models-part-ii-getters-3f2a %}

# Disclaimer

Code Smells are my [opinion](https://dev.to/mcsee/i-wrote-more-than-90-articles-on-2021-here-is-what-i-learned-1n3a).

# Credits

Photo by [Suzanne D. Williams](https://unsplash.com/es/@scw1217) on [Unsplash](https://unsplash.com/s/photos/tres-pupas-VMKBFR6r_jg)

* * *

> The best programmers write only easy programs.

_Michael A. Jackson_

{% post https://dev.to/mcsee/software-engineering-great-quotes-26ci %}

* * *

This article is part of the CodeSmell Series.

{% post https://dev.to/mcsee/how-to-find-the-stinky-parts-of-your-code-1dbc %}
mcsee
1,906,139
Automating the Mundane: My Journey with Node-cron
Have you ever had that bad feeling of having forgotten something important? Imagine that "something"...
0
2024-06-29T23:38:05
https://dev.to/aminah_rashid/automating-the-mundane-my-journey-with-node-cron-2gi7
backenddevelopment, cronjobs, automation, node
Have you ever had that bad feeling of having forgotten something important? Imagine that "something" is remembering to mark one's daily attendance at work. While developing an attendance management system, this exact scenario posed a considerable problem: how could I ensure that users log their presence consistently when they forget to do so? This led me to explore automated solutions, ultimately discovering Node-cron as an effective tool for scheduling tasks in Node.js applications.

**The Problem**

Imagine this: a brand-new attendance system, shiny, ready to check in and out every user. The trouble was that it relied on people to remember to use it. Anyone who has ever forgotten their keys can testify that human memory is not always the most reliable thing in the world. I needed some surefire method whereby these attendance records would be verified and updated automatically, so that our digital tracker wouldn't stray when people forgot or occasionally tried to take advantage of it.

**Learning Approach And Solution Steps**

To address this challenge, I began with online research, using resources like developer forums and documentation. When Node-cron was suggested as a potential solution, I adopted a structured learning approach:

- I then referred to the official Node-cron documentation and tried to learn the basics.
- I found a goldmine of practical knowledge in a tutorial video titled ["Scheduling Tasks in Node.js | Cron Jobs in real projects"](https://youtu.be/6gmdFPlkuhQ).

**Node-cron: The Scheduling Master**

Let me introduce you to the basics of Node-cron, the unsung hero of task automation in the Node.js world. You can think of it as a super-smart alarm clock for your code. Using cron expressions—a series of five fields representing minutes, hours, days, months, and weekdays—you tell your application exactly when it should execute specific functions.
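To make the five-field format concrete, here is a tiny plain-JavaScript sketch (no node-cron needed; `describeCron` and `FIELD_NAMES` are names I made up for illustration) that labels each field of an expression:

```javascript
// Pair each field of a 5-field cron expression with its conventional meaning.
const FIELD_NAMES = ["minute", "hour", "day of month", "month", "day of week"];

function describeCron(expression) {
  const fields = expression.trim().split(/\s+/);
  if (fields.length !== 5) throw new Error("expected a 5-field expression");
  return Object.fromEntries(fields.map((field, i) => [FIELD_NAMES[i], field]));
}

console.log(describeCron("0 19 * * *"));
// → { minute: '0', hour: '19', 'day of month': '*', month: '*', 'day of week': '*' }
```

Reading the output left to right recovers the plain-English schedule: minute 0 of hour 19, every day, every month.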
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/z2qiv156riiif33ou9pm.PNG)

For instance, "0 19 * * *" means "Execute this task every day at precisely 7:00 PM!" It is like an assistant who will never stop or forget to do his job.

**Node-cron Basics**

The following are some code snippets that will give you a feel for what Node-cron can do:

```
const cron = require('node-cron');

// Daily task at a specific time
cron.schedule("0 19 * * *", () => {
  console.log("This task runs at 7:00 PM every day!");
});

// Hourly task
cron.schedule("0 * * * *", () => {
  console.log("This task runs at the start of every hour.");
});

// Weekly task on Mondays
cron.schedule("0 9 * * 1", () => {
  console.log("This task runs every Monday at 9:00 AM.");
});

// Monthly task on the first day
cron.schedule("0 0 1 * *", () => {
  console.log("This task runs on the first day of every month at midnight.");
});
```

These examples demonstrate Node-cron's adaptability, handling everything from daily reminders to monthly updates. But how exactly did it tackle our attendance challenge?

**The Attendance Automation Solution**

Get ready for the game-changer - the code that transformed our attendance system from forgetfulness-prone to completely reliable:

```
cron.schedule("0 19 * * *", async () => {
  try {
    const users = await User.find();
    for (const user of users) {
      const existingAttendance = await Attendance.findOne({
        userId: user._id,
        // midnight today; setHours returns a timestamp, so wrap it in a Date
        createdAt: { $gte: new Date(new Date().setHours(0, 0, 0, 0)) },
      });
      if (!existingAttendance) {
        await Attendance.create({
          userId: user._id,
          status: "Absent",
        });
      }
    }
    console.log("Attendance records updated.");
  } catch (error) {
    console.error("Error updating attendance records:", error);
  }
});
```

This code automates the daily attendance check: it fetches all users from the database, checks each one against the attendance records already available for the given day, and creates a new "Absent" record for any user who has not marked attendance.
It also logs a success message when records are updated, and any errors encountered during the process, for debugging and maintenance. This approach dramatically increases the reliability of an attendance management system by ensuring proper tracking without manual oversight.

**Grow Your Skills At HNG Internship**

Implementing Node-cron to solve our attendance challenge proved the power of automation in backend development. As a full-stack developer coming from the frontend, I've come to value the complexity and importance of server-side solutions. This experience aligns perfectly with the HNG Internship's practical approach. While HNG offers various tracks, it's particularly valuable for aspiring backend developers. If you're looking to work on real-world projects and expand your skills, this internship is an excellent opportunity.

My journey with HNG is driven by two goals: gaining hands-on experience with industry-level problems and expanding my international network. Whether you're passionate about Node.js, databases, or APIs, HNG provides a platform to grow, collaborate, and prepare for the tech industry's demands.

Want to level up your backend skills and make some noise as you start your career? Then the HNG Internship could definitely be that giant step. It's not just about coding; it's about becoming a well-rounded developer ready for real-world challenges.

**Final Thoughts**

And there you have it—my tale of Node-cron, which came in to save the attendance system from human negligence. For developers navigating the complexities of scheduling tasks, Node-cron is a game-changer I wholeheartedly endorse. It's like having a mindful assistant for your code: attentive, efficient, and never late. In the realm of software development, the most invaluable solutions often operate silently, ensuring seamless operations behind the scenes. Here's to seamless coding experiences ahead!

**Links:**

Check out these links if you wanna learn more about HNG:

1.
[HNG Internship](https://hng.tech/internship) 2. [HNG Premium](https://hng.tech/premium)
aminah_rashid
1,906,136
Conquering Daunting Tasks: My Growth as a Backend Developer Through the HNG Internship
I am Abdulmujeeb Abdulrahman, a PHP/Laravel Backend developer, an avid problem solver, and a tech...
0
2024-06-29T23:28:19
https://dev.to/rahmanakorede442/conquering-daunting-tasks-my-growth-as-a-backend-developer-through-the-hng-internshi-5dfj
beginners, introduction, php, laravel
I am Abdulmujeeb Abdulrahman, a PHP/Laravel Backend developer, an avid problem solver, and a tech enthusiast. My natural inclination to fix things and solve problems led me to become a Backend Developer. I have a passion for making hard things easy by creating easy solutions for users, and this, as you may already know, may never be possible without first doing hard things.

In my role as a backend developer, I was once required to build a smart suggestive feature for a movie streaming platform. This feature uses data collected from user interactions, such as movie votes and the viewing history of an authenticated user. To solve this, I broke it down and solved it in the following steps:

- Create an endpoint for the like button and store the genre of the content liked in relation with the user.
- The arrays of genres must be merged; here I made use of PHP's array_merge(). They must also be unique, for which I used PHP's array_unique().
- Subsequent likes for a particular authenticated user will only update the previously existing ones.
- The column storing the uniquely gathered genres per like is of the JSON data type.
- For suggesting content, the array of genres stored in relation to the user is used to fetch random content with similar genres using MySQL's `WHERE ... IN` clause.
- The response was paginated using Laravel's paginate() method for easy pagination implementation by the frontend engineer.

Initially, the task felt challenging, but I learned quite a lot while solving it. I then got to know that there are better and more efficient ways of achieving this, if only I knew how. This motivated me to seek out opportunities to improve my efficiency, and tasks I could take on to make me more conversant with these _better_ and _more efficient_ ways; then I heard about the HNG [Internship](https://hng.tech/internship).

I joined the [HNG Internship](https://hng.tech/internship) for this reason - doing hard things.
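The merge-and-dedup step from the feature above can be sketched in JavaScript (an illustrative analogy of PHP's array_merge() followed by array_unique(); `mergeGenres` is a made-up name):

```javascript
// Combine stored genres with genres from a new like, keeping each genre once
// (first-appearance order preserved), like array_unique(array_merge(...)) in PHP.
function mergeGenres(stored, liked) {
  return [...new Set([...stored, ...liked])];
}

console.log(mergeGenres(["action", "drama"], ["drama", "thriller"]));
// → [ 'action', 'drama', 'thriller' ]
```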
I learned about the internship from a WhatsApp forum for PHP/Laravel developers, when a member gave testimony of how he was drilled by the daunting tasks given during the internship, and how it helped hone his technical skills and shape his professional life. I got so pumped by this testimony that I resolved to join the next cohort, which was to start in August 2023 - HNGX.

The internship lived up to its reputation, and it was more than I expected, for I was, for the first time, able to perform tasks independently and remotely with a strong sense of responsibility to a team. I used some tools for the first time, such as Lucid for UML diagrams, dbdiagram.io for ERDs, RabbitMQ for queue management, and many more.

Interestingly, I became a finalist, and for the first time, I learned about [HNG Premium](https://hng.tech/premium). HNG Premium is an exclusive program for finalists. It is a platform where finalists converge, with an entrance fee of just 5 USD (5,000 NGN). On this platform, members are privileged to have first-hand job opportunities, the chance to interact with industry experts, access to valuable courses, and many more.

I look forward to doing more hard things this cohort, making more valuable connections, and most importantly, further developing my backend problem-solving skills. Wish me luck!
rahmanakorede442
1,906,133
CSS and js standing fan
Check out this Pen I made!
0
2024-06-29T23:23:19
https://dev.to/kemiowoyele1/css-and-js-standing-fan-1nmh
codepen
Check out this Pen I made! {% codepen https://codepen.io/frontend-magic/pen/PovrWBo %}
kemiowoyele1
1,906,122
United+(1(888)-908-0815 What is United Airlines Cancellation Policy? ##24Hour~Update @Guide~$Solution
Description @@@United Airlines Cancellation Policy and Refunds 1. What is United Airlines'...
0
2024-06-29T23:20:20
https://dev.to/jason_clark_fc88d6c4f52f6/united1888-908-0815-what-is-united-cancellation-policy-h--4ipf
Description @@@United Airlines Cancellation Policy and Refunds **1. What is United Airlines' cancellation policy?** United Airlines provides a flexible 24-hour cancellation policy. This allows passengers to cancel their bookings within 24 hours of ticket purchase and receive a full refund, regardless of the fare selected. For cancellations and immediate refunds, you may contact their toll-free number at ⚡☎ +1(888)-908-0815. **2. Can I cancel my United Airlines flight and receive a refund?** Yes, if you cancel your reservation within 24 hours of purchase and the flight is two days or more away, United Airlines will issue a full refund. You can cancel your flight either online or by reaching out to their customer service at ⚡☎ +1(888)-908-0815 for assistance. **3. How does United Airlines handle refunds for cancelled flights?** United Airlines processes refunds for flights that are cancelled within the stipulated 24-hour window, offering a 100% refund regardless of the ticket type. After this timeframe, fees may be applicable based on the fare conditions. For more details, contact their customer support at ⚡☎ +1(888)-908-0815. **4. What are the steps to cancel a flight with United Airlines?** To cancel a United Airlines flight, visit the 'My Trips' section on their website, log in with your details, select the booking you wish to cancel, and follow the instructions to initiate a cancellation and refund. Alternatively, call their customer service at ⚡☎ +1(888)-908-0815 for direct assistance. **5. What happens if United Airlines cancels my flight?** In the event that United Airlines cancels a flight, they will notify you and offer alternatives or a full refund. For further actions or to confirm new arrangements, contact United Airlines at ⚡☎ +1(888)-908-0815. **6. How long does it take to receive a refund from United Airlines?** Refunds from United Airlines are generally processed within 5-10 business days, depending on your bank or card issuer. 
For queries related to the refund process, contact ⚡☎ +1(888)-908-0815.

**7. Are there any fees associated with cancelling a United Airlines flight?**
United Airlines allows free cancellations and refunds within 24 hours of booking ⚡☎ +1(888)-908-0815. After this period, cancellation fees depend on the fare conditions. For precise information on fees, consult the fare rules or contact their customer support.

**8. Can partial refunds be obtained from United Airlines?**
For partially refundable fares, United Airlines may offer a partial refund upon cancellation ⚡☎ +1(888)-908-0815. For non-refundable tickets, refunds are typically not provided unless under exceptional circumstances. For more details, call their customer service.

**9. How can I speak with a live agent at United Airlines?**
To speak to a live agent at United Airlines, call their customer service number at ⚡☎ +1(888)-908-0815. They are available to assist with any inquiries or travel adjustments you need to make.

**10. What is the procedure for changing a flight with United Airlines?**
If you need to change a flight, you can do so through the United Airlines website or by contacting customer service ⚡☎ +1(888)-908-0815. Note that changes made outside the 24-hour window may incur fees, depending on the fare type.

For direct assistance or further information, you can always contact United Airlines customer service at ⚡☎ +1(888)-908-0815.
United Airlines issues refunds for: (1) flights cancelled within 24 hours of booking, (2) refundable flight tickets, and (3) flights cancelled by the airline. **Within 24 hours of booking:** You're entitled to receive a full refund for United Airlines flights cancelled within 24 hours of purchase. To be eligible, the booking must have been made at least seven days before scheduled departure. Contact +1(888)-908-0815. **After 24 hours:** Just like any booking, United Airlines flights can be cancelled more than 24 hours after purchase. However, you may or may not get a refund, depending on the type of fare purchased. Even if your flight isn't eligible for a refund under United Airlines' refund policy, you still may be able to get a credit from the airline. If you're not sure what type of fare you bought, check your itinerary for airline policies and rules. **Fees:** United Airlines doesn't charge a flight cancellation fee. However, the airline might charge a fee to cancel, depending on the fare type purchased. If this is the case, United Airlines will pass this cost on to you. You can see the airline's fees, rules, and restrictions for your flights in your United Airlines itinerary. **How long do United Airlines flight refunds take?** Contact +1(888)-908-0815. United Airlines issues most refunds for cancelled flights within 5-10 business days, although some airlines take longer to process the request. It may take a week or so for your bank or credit card company to complete the refund. **How to cancel United Airlines flights and get a refund?** Contact +1(888)-908-0815. You can cancel most United Airlines flights online or with a live agent. If your flight is on a low-cost airline, you'll need to contact the airline directly. Here's how to cancel a flight on United Airlines: 1. 
Go to My Trips (or call +1(888)-908-0815). 2. Log into your United Airlines account, or use the option to find your flight by entering your itinerary number. 3. Locate the flight you wish to cancel. 4. Select "Cancel flight" and follow the instructions to complete the cancellation and request a refund. 5. If you don't see the option to cancel, or you don't see refund information for your flight, here's how to reach a live United Airlines agent: +1(888)-908-0815. **What happens if United Airlines cancels your flight?** Contact +1(888)-908-0815. An unpleasant truth about air travel is that airlines sometimes cancel flights due to weather or logistical reasons. If this happens to your flight, United Airlines will notify you immediately. But what to do next? If your United Airlines flight is cancelled by the airline, simply follow the steps below. 1. Go to Trips to see the current status of your flights. Log in to your account or enter your flight confirmation number. 2. United Airlines will work with the airline to find you a similar flight, then notify you through its app, by email, or by text. For urgent (for example, same-day) changes, they may also call you. You can contact United Airlines customer service with questions if you don't hear from them in a timely manner after being notified of the cancellation. 3. Low-cost or discount airlines don't allow United Airlines to access parts of their reservation systems. You'll need to contact those airlines directly if they cancel your flight. 4. Follow the instructions you'll receive to confirm the new flights. Once the new flights are confirmed between you and the airline, United Airlines will email you a revised itinerary. 
If you don't confirm the new flights after an airline-initiated cancellation, you're entitled to a full refund. To request a refund from United Airlines for a cancelled flight in this case, you'll need to call or chat with a live agent: +1(888)-908-0815. **How to cancel one-way United Airlines flights** Contact +1(888)-908-0815. Occasionally, United Airlines offers round-trip flights booked as two one-way fares on different airlines, in order to show lower fares for a given route or to offer more convenient options. Alternatively, you might have purposely booked one or more one-way flights for the same or other reasons. To cancel flights on full-service airlines, visit Trips and enter your flight confirmation number. To cancel one-way flights on a discount airline like Ryanair, easyJet, Jetstar, or AirAsia, contact the airline directly. If your reservation has one or more one-way flights, you'll need to cancel each one individually. Changing just a single one-way flight on a multi-flight itinerary won't impact or change any other flights on the itinerary. **How to cancel flights booked with United Airlines points** You may be able to cancel flights booked with United Airlines Rewards points, depending on the airline's policies and the fare type you purchased. To cancel one-way flights purchased from United Airlines: 1. Go to Trips. 2. Locate the flight you wish to cancel. 3. Select "Cancel flight" and follow the instructions. The United Airlines Rewards points used for the flight will be refunded to your Rewards balance, minus any cancellation fees charged by the airline. United Airlines Rewards points are worth $1 each, so if the cancellation fee is $200, it will cost you 200 points. 
If you paid for part of the flight using both points and a credit card, cancellation fees are charged first to your Rewards balance, with the remainder charged to your card (if you don't have enough points to cover the fee). **Refundable vs. non-refundable fares** Contact +1(888)-908-0815. United Airlines offers different fare types, which may be fully refundable, partially refundable, or non-refundable. United Airlines displays this information during the booking process so you can choose the best fare type for your trip. Refundable fares allow you to cancel a flight and receive a full or partial refund. They're usually more expensive than other fare types, in return for the greater flexibility they offer. Partially refundable fares permit you to get a portion of your money back if you cancel the flight; the airline keeps the remainder as a service fee. In this case, United Airlines will issue you a refund for the original purchase price minus the fee amount. Non-refundable fares are just that: no refunds if you cancel. This fare type usually doesn't allow ticket changes, either. Basic economy and Light (or "Lite") fares are almost always non-refundable. One final thing to keep in mind is that even if you're not entitled to a refund, you should still cancel the reservation if you don't plan to be on the plane. **How to talk to a live United Airlines agent?** If you can't complete a flight cancellation online, you can talk to a human at United Airlines by phone or chat. United Airlines' customer service phone number is +1(888)-908-0815. **Can I cancel my trip on United Airlines and get a refund?** Yes, United Airlines typically offers a 24-hour cancellation policy for flights, allowing customers to cancel within 24 hours of purchase for a full refund. For any query, call +1(888)-908-0815. 
However, it's important to note that this policy may vary, so you can call United Airlines customer service at +1(888)-908-0815. **Does free cancellation mean refund?** The two expressions mean exactly what they say. If a reservation is non-refundable, it means you pay for it when you book and you don't get any money back if you cancel the reservation. Free cancellation means that you are not charged anything if you cancel. For any query, call +1(888)-908-0815. **Is it better to cancel or change a flight?** Changing the flight: if the change to your plans is known and you are aware of when to travel next, you can opt to reschedule the flight instead. You can reschedule the flight using the online process and get through it easily without having to pay an additional change fee. For any query, call +1(888)-908-0815. **What is the cancellation fee for United Airlines?** Because airlines are required to give full refunds if you cancel within 24 hours of booking, United Airlines lets you change or cancel your flight reservation without fees within the same time period. For any query, call +1(888)-908-0815. **Can you get a partial refund from United Airlines?** Refundable tickets allow cancellations with a full or partial refund, while non-refundable tickets may only offer refunds in specific situations like medical emergencies or military orders. To find out if you can cancel your flight for a refund, check the cancellation policy on United Airlines' website or app. For any query, call +1(888)-908-0815. **What happens when you cancel on United Airlines?** In short, travelers don't have to pay any penalty fees as long as they cancel a flight booked within the last 24 hours. 
For any query, call +1(888)-908-0815. To do this, you can cancel the flight ticket either by contacting the airline directly or online via United Airlines' website. **How far out can you cancel a flight and get a refund?** You can cancel a ticket essentially up until the day of travel, so there's no sense in cancelling early and paying a fee. If, for example, there's some unrest in the area or bad weather, the airline may end up cancelling your flight altogether, and then you can get your money back. For any query, call +1(888)-908-0815. **What happens if I cancel my flight ticket?** Cancelling flights can incur a fee. A cancellation fee is sometimes charged by the airline, depending on the terms and conditions of your ticket. If you purchase a fully refundable flight ticket, this fee may not apply. You might be wondering why you would pay money to cancel a flight that you're not flying on. For any query, call +1(888)-908-0815. **Can I dispute a United Airlines charge? How do I dispute a charge on United Airlines?** If you need to dispute a charge from Expedia on your credit card, the first step is to contact Expedia's customer service. For any query, call +1(888)-908-0815. **Does United Airlines give full refunds? What happens if you cancel on United Airlines?** For fully refundable flights, the reservation can be cancelled up to 48 hours before the check-in date, and the guest will receive a full refund. United Airlines cancellations are also free if they're made within the first 24 hours of booking. Call +1(888)-908-0815. **How do I request a refund on United Airlines?** 
For assistance or to request a refund, you can reach out to United Airlines' customer support team at +1(888)-908-0815, or contact them through the provided official channels. **How does United Airlines free cancellation work?** 24-hour free cancellation: if you need to cancel your flight within 24 hours of booking, United Airlines allows you to do so for free. Whether you booked a refundable or non-refundable ticket, you are entitled to a full refund. Call +1(888)-908-0815. **How long does it take United Airlines to refund?** United Airlines generally processes refunds within 7-10 business days post-cancellation. However, the actual timeframe for receiving the refund might vary due to bank processing times or other external factors. For any query, call +1(888)-908-0815. **How much is the United Airlines cancellation fee?** United Airlines does not charge a penalty if you need to make a change to your trip. We won't pile on the fees if you have a change in plans. Other online travel agencies may charge you $30 to change your flight or hotel, or even $75 to cancel your cruise. Not us. For any query, call +1(888)-908-0815. **What happens if you cancel a non-refundable flight?** Depending on the ticket type, "nonrefundable" often simply means: the airline will not give you all of your money back if you cancel (true for most basic economy tickets), and the airline will not refund your ticket value as cash (it will be remitted as a voucher instead). For any query, call +1(888)-908-0815. **How do I get a refund from United Airlines?** 
Go to the "My Trips" section (+1(888)-908-0815) and find your flight using the reservation code and last name. Choose the flight you want to cancel and fill out the refund request form. Your refund will typically be processed within 7 to 10 business days. You can also request a refund by either using the online platform on United Airlines' website or directly contacting their customer service: to initiate a refund, simply cancel your flight ticket and request a refund through the customer service number at +1(888)-908-0815. **Does United Airlines give refunds for cancellations?** Yes, United Airlines offers refunds if you cancel your flight within 24 hours of purchase (+1(888)-908-0815). After this period, fees or charges may apply based on the fare type. **Does United Airlines have a cancellation fee?** Because airlines are required to give full refunds if you cancel within 24 hours of booking, United Airlines lets you change or cancel your flight reservation without fees within the same time period. For everything else, good luck. Because each travel provider sets its own policies, navigating them can be a headache. For any query, call +1(888)-908-0815. ### FAQ: United Airlines Cancellation and Refund Policy **1. Can I cancel a flight with United Airlines and get my money back?** Yes, United Airlines offers a 24-hour cancellation policy. If you cancel within 24 hours of booking, you can get a full refund, provided that the departure date is more than two days away. Cancellations can be made online or by contacting customer service at +1(888)-908-0815. **2. 
How do I cancel my United Airlines flight and request a refund?** To cancel your flight and request a refund, you can either use the online platform on United Airlines' website or contact their customer service directly. The customer service number is +1(888)-908-0815. After cancelling your ticket, you can request a refund through the same number. **3. What is the cancellation policy for United Airlines?** Tickets are fully refundable within 24 hours of booking as long as the departure date is at least two days away. After this period, whether you can get a refund depends on the type of ticket purchased. For any query, call +1(888)-908-0815. **4. Does free cancellation mean I will get a refund?** Free cancellation within the 24-hour window means you can get a full refund. Outside this window, the refundability of your ticket depends on the fare type purchased (e.g., refundable, non-refundable). For any query, call +1(888)-908-0815. **5. What happens if United Airlines cancels my flight?** If United Airlines cancels your flight, they will notify you and offer a rebooking. If you cannot travel on the new flight, you are entitled to a full refund. Contact their customer service at +1(888)-908-0815 for assistance. **6. How long does it take for United Airlines to process a refund?** United Airlines typically processes refunds within 5-10 business days. However, it might take longer depending on your bank or credit card company. For any query, call +1(888)-908-0815. **7. What are the fees for cancelling a flight with United Airlines?** United Airlines does not charge a cancellation fee if you cancel within 24 hours of booking. 
For cancellations made after this period, fees may vary depending on the fare type. For any query, call +1(888)-908-0815. **8. Can I get a partial refund from United Airlines?** Partial refunds are available for certain fare types, such as partially refundable fares. For non-refundable tickets, you may not receive a refund unless there are extenuating circumstances like medical emergencies. For any query, call +1(888)-908-0815. **9. How do I talk to a live agent at United Airlines?** To speak with a live agent, you can call United Airlines customer service at +1(888)-908-0815. They can assist with cancellations, refunds, and other travel-related inquiries. **10. What should I do if I want to change rather than cancel my United Airlines flight?** If you need to change your flight, you can do so online through the United Airlines website or by contacting their customer service at +1(888)-908-0815. Depending on when and how you booked, you may have to pay a change fee unless you make the change within 24 hours of booking. For any further assistance or to manage your flight, visit United Airlines' official website or contact their customer service directly at +1(888)-908-0815.
jason_clark_fc88d6c4f52f6
1,906,121
Technical Report: Initial Data Analysis of Titanic Datasets
Overview The provided datasets consist of two files: train.csv and test.csv. These datasets contain...
0
2024-06-29T23:08:54
https://dev.to/hope_odidi/technical-report-initial-data-analysis-of-titanic-datasets-1p54
datascience, database, kaggle, dataanalysis
**Overview** The provided datasets consist of two files: train.csv and test.csv. These datasets contain information about passengers on the Titanic, including demographic details, ticket information, and survival outcomes (in the training set). **Dataset Structure** **- Train Dataset (train.csv):** Contains 891 rows and 12 columns. **- Test Dataset (test.csv):** Contains 418 rows and 11 columns. **Columns in Both Datasets** **PassengerId:** Unique identifier for each passenger. **Pclass:** Passenger class (1st, 2nd, or 3rd). **Name:** Name of the passenger. **Sex:** Gender of the passenger. **Age:** Age of the passenger. **SibSp:** Number of siblings/spouses aboard the Titanic. **Parch:** Number of parents/children aboard the Titanic. **Ticket:** Ticket number. **Fare:** Passenger fare. **Cabin:** Cabin number. **Embarked:** Port of embarkation (C = Cherbourg, Q = Queenstown, S = Southampton). **Additional Column in Train Dataset** **Survived:** Survival indicator (0 = No, 1 = Yes). **Initial Insights** 1. Survival Rate: The Survived column in the training set indicates the survival status of passengers. This column is not present in the test set. 2. Passenger Class Distribution: The Pclass column indicates the class of the passenger, which could be a key factor in survival analysis. 3. Gender Distribution: The Sex column shows the gender distribution, which can be analyzed to determine if gender influenced survival chances. 4. Age Distribution: The Age column provides insights into the age distribution of passengers. Missing values in this column may require imputation. 5. Family Size: The SibSp and Parch columns can be combined to understand the family size and its impact on survival. 6. Ticket and Fare: The Ticket and Fare columns provide information about the cost and type of ticket purchased. 7. Cabin Information: The Cabin column contains many missing values. This information might need to be handled carefully or imputed based on other variables. 8. 
Port of Embarkation: The Embarked column indicates the port from which the passenger boarded the Titanic, which may correlate with socio-economic status and survival. **Next Steps for Analysis** Handling Missing Values: Impute or handle missing values in the Age and Cabin columns. Exploratory Data Analysis (EDA): Perform EDA to uncover patterns and relationships between different variables and the survival outcome. Analyze the impact of passenger class, gender, age, family size, and fare on survival rates. Feature Engineering: Create new features such as family size (sum of SibSp and Parch), title extraction from the Name column, and categorization of age groups. Visualization: Use visualizations to illustrate the findings from the EDA, such as survival rates by class, gender, age groups, and embarkation points. Model Building: Prepare the data for machine learning models to predict survival on the test set using the insights gained from the training set. **Conclusion** The initial review of the Titanic datasets reveals various factors that could influence passenger survival, including class, gender, age, and embarkation port. Further detailed analysis and modeling are required to draw meaningful conclusions and predictions. [HNG Internship](https://hng.tech/internship) | [HNG Hire](https://hng.tech/hire)
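The missing-value handling and feature engineering steps above can be sketched in pandas. The snippet is illustrative only — it uses a tiny hypothetical sample rather than the actual train.csv — and shows median/mode imputation for Age and Embarked, plus a family-size feature built from SibSp and Parch:

```python
import pandas as pd

# Tiny illustrative sample standing in for train.csv (hypothetical rows).
df = pd.DataFrame({
    "Age": [22.0, None, 38.0, None, 35.0],
    "Embarked": ["S", "C", None, "S", "S"],
    "SibSp": [1, 0, 1, 0, 0],
    "Parch": [0, 0, 0, 2, 0],
})

# Impute Age with the median and Embarked with the mode (most frequent port).
df["Age"] = df["Age"].fillna(df["Age"].median())
df["Embarked"] = df["Embarked"].fillna(df["Embarked"].mode()[0])

# Feature engineering: family size = siblings/spouses + parents/children + self.
df["FamilySize"] = df["SibSp"] + df["Parch"] + 1

print(df)
```

On the real data, the same calls apply after loading with `df = pd.read_csv("train.csv")`; the Cabin column, with far more gaps, usually needs a different strategy (e.g. a "missing" indicator) rather than simple imputation.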
hope_odidi
1,906,119
First glance - Sales Dataset
Introduction The sales dataset is a collection of orders made by different customers from...
0
2024-06-29T23:00:03
https://dev.to/megawatts/first-glance-retail-sales-dataset-1235
learning, datascience, community, writing
## **Introduction** The [sales dataset](https://www.kaggle.com/datasets/kyanyoga/sample-sales-data/data) is a collection of orders made by different customers from various geographical areas `[CITY, STATE, POSTALCODE, COUNTRY]`. Each entry includes detailed information about the customer `[CUSTOMERNAME, CONTACTLASTNAME, CONTACTFIRSTNAME]`, the order `[ORDERNUMBER, DATE, STATUS]`, and the products ordered `[PRODUCTLINE, MSRP, PRODUCTCODE]`. This dataset spans from 2003 to 2005 and includes contact person information. Each entry contains an order number `[ORDERNUMBER]`, which represents a unique identifier for each order. An order entry contains different products `[PRODUCTCODE]` associated with a product line `[PRODUCTLINE]`. Data columns such as deal size `[DEALSIZE]` and status `[STATUS]` are used to categorize different products. The dataset includes pricing `[PRICEEACH]`, quantity ordered `[QUANTITYORDERED]`, and the total sales amount `[SALES]`. The dataset contains 2823 order entries with 25 data columns, of which 9 are numerical and 3 are categorical `[STATUS, DEALSIZE, PRODUCTLINE]`. ![retail sales data dataset's main statistical properties](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/da5dgk3a3f725n6nu3pz.png) _Fig. 
1 showing retail sales data dataset's main statistical properties_ **Key Components of the Dataset:** - Customer Information: `[CUSTOMERNAME, CONTACTLASTNAME, CONTACTFIRSTNAME, PHONE]` - Geographical Information: `[ADDRESSLINE1, ADDRESSLINE2, CITY, STATE, POSTALCODE, COUNTRY, TERRITORY]` - Order Information: `[ORDERNUMBER, ORDERDATE, STATUS]` - Product Information: `[PRODUCTLINE, PRODUCTCODE, QUANTITYORDERED, PRICEEACH, SALES]` • Additional Attributes: `[QTR_ID, MONTH_ID, YEAR_ID, MSRP, DEALSIZE]` ![shape of the retail sales data](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ora2o6qch3v2nbl5o99v.png) _Fig 2 shows the shape of the retail sales data_ **Purpose of the Review** The purpose of this review is to provide an initial understanding of the dataset's structure, content, and data quality. This will help to: - Detect missing values and assess their impact. - Identify necessary data type conversions for accurate analysis. - Determine important columns for analysis such as sales figures, order details, and customer information. **Observations** 1. Multiple Products per Order: An ORDERNUMBER may be associated with more than one product `[PRODUCTCODE]` from different product lines `[PRODUCTLINE]`. 2. Sales Calculation Discrepancy: The `[SALES]` column is not always the multiplication of the `[QUANTITYORDERED] `and `[PRICEEACH]` columns. There are 1304 entries that do not follow this calculation. 3. Order Date Format: The `[ORDERDATE]` is in object format and will need to be converted to a datetime format for any time series analysis. 4. Inconsistent Phone Data: The `[PHONE]` data does not follow a consistent pattern across different countries. **Potential Areas for Further Analysis:** - Sales Performance: - Analyze completed sales data over time, by product, and by region. - Analyze sales data based on status by product, region, and time. - Customer Insights: - Segment customers and analyze their purchasing behavior. 
- Geographical Insights: - Map sales data and customer locations. - Forecasting: - Predict quantity and product sales for the next year. I am currently an intern in the [HNG Internship](https://hng.tech/internship) boot camp that builds apps and solves different problems in teams. The [HNG Internship](https://hng.tech/internship) is a fast-paced boot camp for learning digital skills. [HNG Hire](https://hng.tech/hire) makes it easy to find and hire elite talent.
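Two of the observations above — the object-typed `ORDERDATE` needing a datetime conversion, and the `SALES` values that don't always equal `QUANTITYORDERED × PRICEEACH` — can be checked with a short pandas sketch. The rows below are hypothetical stand-ins for the Kaggle CSV, not real records:

```python
import pandas as pd

# Hypothetical sample rows standing in for the sales CSV.
orders = pd.DataFrame({
    "ORDERDATE": ["2/24/2003 0:00", "5/7/2003 0:00", "7/1/2004 0:00"],
    "QUANTITYORDERED": [30, 34, 41],
    "PRICEEACH": [95.70, 81.35, 83.26],
    "SALES": [2871.00, 2765.90, 3746.70],
})

# Convert ORDERDATE from object (string) to datetime for time-series analysis.
orders["ORDERDATE"] = pd.to_datetime(orders["ORDERDATE"])

# Flag rows where SALES differs from QUANTITYORDERED * PRICEEACH
# (a small tolerance absorbs floating-point rounding).
expected = orders["QUANTITYORDERED"] * orders["PRICEEACH"]
orders["SALES_MISMATCH"] = (orders["SALES"] - expected).abs() > 0.01

print(orders[["ORDERDATE", "SALES_MISMATCH"]])
```

Running the same mismatch check against the full dataset is how the 1304 discrepant entries noted above would be counted.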
megawatts
1,906,118
Titanic for HNG
Introduction This analysis about the dataset of passengers that were onboard when the...
0
2024-06-29T22:57:51
https://dev.to/rahmahhng/titanic-for-hng-59n1
## **Introduction** This analysis is about the dataset of passengers who were onboard when the British luxury passenger ship Titanic sank on the 15th of April, 1912. The purpose of this surface-level analysis is to give a statistical view of the age range of the passengers on the ship. The Titanic dataset has the following columns: - PassengerId: This is the identifier for each passenger - Survived: This informs us whether a particular passenger survived (value 1) or not (value 0) - Pclass: This is the ticket class, either upper (1st), middle (2nd) or lower (3rd) - Age: The age of the passenger - SibSp: The number of siblings or spouses the passenger boarded the ship with - ParCh: The number of parents or children the passenger boarded the ship with. - Fare: The amount the passenger paid for the trip - Cabin: The cabin identifier for the passenger - Embarked: The port the passenger embarked from (either Cherbourg, Queenstown, or Southampton) ## **Observation** Missing data in the dataset: the columns with missing data are Cabin, Age, and Embarked, with Cabin having the highest number of missing values (687) and Embarked the lowest (2), while Age sits in the middle (177). All other columns don't have missing values. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b5cd5mu465xtrn2o7yms.png) ## The image below gives the description of each column ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3y0nbjojdapx588jsvpm.png) Since our focus is the age of passengers, it can be seen that: - the minimum age is 0.42 - the mean (average) age is 29.70 - the maximum age is 80 ## Below is the box plot of Survived against Age. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wqeuwfo6jfhxfdg5jere.png) ## **About HNG** HNG Internship is a remote internship aimed at providing real-life experience on projects to interns all over the world. 
This is the [link](https://hng.tech/internship) to the official website. It also serves as a pool for skilled professional because the interns there would have experience with real life impacting projects, here is the [hire link](https://hng.tech/hire).
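For readers who want to reproduce the missing-value counts from the Observation section, here is a minimal sketch in JavaScript. The sample rows below are made up for illustration (the real dataset has 891 rows); the analysis itself was presumably done with a dataframe library.

```javascript
// Count missing (null/undefined/empty) values per column in an array of row objects.
function missingCounts(rows, columns) {
  const counts = Object.fromEntries(columns.map((c) => [c, 0]));
  for (const row of rows) {
    for (const col of columns) {
      const v = row[col];
      if (v === null || v === undefined || v === "") counts[col] += 1;
    }
  }
  return counts;
}

// Tiny illustrative sample -- the real dataset has 891 rows.
const sample = [
  { PassengerId: 1, Age: 22, Cabin: null, Embarked: "S" },
  { PassengerId: 2, Age: null, Cabin: "C85", Embarked: "C" },
  { PassengerId: 3, Age: 26, Cabin: null, Embarked: null },
];
console.log(missingCounts(sample, ["Age", "Cabin", "Embarked"]));
// → { Age: 1, Cabin: 2, Embarked: 1 }
```

On the full dataset, this would report Cabin: 687, Age: 177, and Embarked: 2, matching the figures above.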
rahmahhng
547,807
Growing as an engineer
Congrats! You've settled in at your first job. You've made it past the initial wave of impostor syndr...
0
2020-12-20T15:45:41
https://dev.to/timhwang21/growing-as-an-engineer-42c5
career, learning
Congrats! You've settled in at your first job. You've made it past the initial wave of impostor syndrome and have some idea of what you're doing and what you're good at. You might've even received a promotion or two. You're no longer wildly floundering every day and are able to at least flounder in the right direction. You might even be on your second or third job at this point. You actually feel like a _real_ engineer now.

**Now what?**

It's common to feel a sensation of plateauing after an initial few years of hypergrowth. This is only natural: Growth comes from new information, and the supply of novelty provided by a job quickly dries up as the years pass by. Up to this point, simply engaging with your job was enough to provide steady improvement; now, you must actively seek new information to keep growing.

Before proceeding, I want to clarify one thing: being happy with where you are is perfectly fine. You might feel entirely comfortable with plateauing, and that doesn't make you any less of a person. If you're content and have job security, congrats - you've won. It's not worth it to burn yourself out trying to level up if you're not motivated to do so.

---

# Develop an Eye for Quality

When trying to grow as an engineer, the very first step one must take is to **develop an eye for quality**. Strive to be able to recognize quality in any domain, even if you're unable to produce it yourself.

Why is this so important? **Simply put, you can't get better if you don't know what better is.** If you simply accumulate more information without filtering that information for quality, you're wasting your time.

Developing an eye for quality begins with active thinking. Analyze new information in both top-down and bottom-up manners. Aim to go beyond identifying if something is good and instead examine why it's good.

- **Top-down**: Is this consistent with other things I know to be good? How does this information fit into my existing mental model of quality?
Can I think of alternatives that'd be better?
- **Bottom-up**: What makes this good? What's the rationale behind this line of thinking or approach? Is the premise built on sound logic?

Building a mental model of quality isn't trivial. Below are strategies I've personally found to be effective. The ability to recognize quality is both a prerequisite and an outcome of these strategies. Aim to form a self-reinforcing cycle with a positive feedback loop.

---

# Build an Information Stream

Don't underestimate passive learning. Digesting a well-written article every morning results in a cumulative knowledge gain that adds up over the years. Think of it as a quick mental workout.

Build a stream of content that's of higher quality than your current knowledge. Prune it over time to keep quality high. You don't want to internalize bad principles or ideas, and you don't want to take bad advice from people not worth listening to.

Don't overcurate your content stream for specific topics. Directed learning is good for focused growth, but too much direction cuts off new information whose existence you weren't aware of. Some degree of variance is good to avoid overfitting.

As always, engage critically with content. Don't blindly accept arguments. Understand why the author thinks the way they do, and always try to grasp the context behind the writing. Mortimer J. Adler's "[How to Read a Book](https://en.wikipedia.org/wiki/How_to_Read_a_Book)" was a common recommendation when I was in grad school, where the vast majority of one's work is reading.

Optimize for information retention and retrieval. Taking notes and writing summaries helps with digesting information and has value even if you never revisit the notes ever again. If you have a bad memory like me, leverage bookmarks and other tools to build a searchable knowledge base.

Find what medium works for you.
I have a hard time maintaining attention with videos or podcasts, so my content stream is 100% textual and consists of newsletters, blogs, and aggregators. Yours will likely be different. Everyone absorbs content differently.

Active learning is great but requires a higher level of commitment. Finding uninterrupted time to follow a prescribed curriculum can be hard as a full-time worker. If your workplace offers learning time and/or credits, take advantage of it. If passive learning is a steady trickle, active learning is a firehose to the face.

---

# Find a Mentor (Ideally Multiple)

An experienced mentor you can trust can act as a calibrator for your developing mental model of quality. It goes without saying that prospective mentors should themselves have a refined model of quality.

It can also be helpful to have mentors both inside and outside your workplace. For the one inside, get advice on your career goals, project goals, and work-specific tasks. For the one outside, talk about more general engineering topics, and get a detached view of your current work situation.

Use your mentors to unblock yourself. Don't fall into the trap of stunting your own growth out of fear of bothering others. Have regular meetings with them, and solicit feedback frequently. A similar trap is not asking questions because you feel you should be past that stage. If you don't ask for help, your mentors might assume you're doing fine, only to be disappointed when your work is subpar.

Don't blindly trust your mentors. Analyze their advice actively and critically, even when you've established their eye for quality. Try to identify any biases they might have, and use those as a lens through which you interpret their advice. Find the path of reasoning they took to arrive at their conclusion; learning to reason about classes of problems is far more valuable than any individual answer.

Find well-balanced mentors. If you can't, find multiple mentors with a diversity of opinion.
"Spiky" mentors offer quality advice but only within a very narrow domain. This can result in your model of quality becoming overindexed in this domain. If your only mentor is an engineering supremacist who hates managers, you'll become an engineering supremacist who hates managers.

There isn't necessarily an ideal personality type for mentors. However, we can converge on it via process of elimination. Avoid the following personalities like the plague:

- **Hot-takers and hand-wavers**: Surface-level thinkers whose models of quality overuse heuristics and aren't built on critical reasoning. They might seem knowledgeable due to being able to quickly form an opinion on anything; however, these opinions generally lack substance. They often care a bit too much about their Twitter follower count.
- **Cargo-culters and gold-platers**: People whose model of quality overindexes on social proof and/or the latest and greatest tools and tech. They might seem like bleeding-edge technologists, but remember that the role of an engineer is to produce value, not shiny code.
- **Mules**: Contrarians who are overly confident in their models of quality and view disagreement as flaws in other people's models rather than a potential flaw in their own. People who are right more often than they're wrong may start drinking their own Kool-Aid and miss the point when they start becoming wrong more often than they're right.
- **[Architecture astronauts](https://www.joelonsoftware.com/2001/04/21/dont-let-architecture-astronauts-scare-you/) and purity zealots**: People whose models of quality operate at the wrong level of abstraction. Architecture astronauts are too high, while purity zealots are too low. They may be valuable and productive experts in their domain, but having such narrow blinders isn't conducive to good mentorship.

---

# Push Your Boundaries

The easiest way to learn quickly is to push your boundaries by working on things just outside your level of expertise.
This will force you to make decisions you aren't used to making and to practice recognizing quality.

Be aware of your gaps, and constantly reevaluate yourself. A helpful exercise is to develop a framework of engineering skills and evaluate yourself based on this framework. A sample breakdown that isn't necessarily exhaustive might include:

- Coding skill; pure programming ability
- Domain modeling; system design; architecture
- Ecosystem familiarity; knowledge of tools and libraries
- Interpersonal interaction (building consensus, effective teamwork, etc.)
- Project management (managing tasks, scoping, how to ship, etc.)

If working in a new domain, familiarize yourself with the high-level topics in this domain. Roadmaps like [roadmap.sh](https://roadmap.sh/) can be a good starting point. Be aware of what you don't know; by knowing what's out there, you're more equipped to unblock yourself when a problem arises. Your toolbox may be empty, but you should know what tools you can acquire.

Practice your model by forming opinions. The act of judging if a certain thing is good or bad forces you to distill your model into rules and apply these rules to concrete things. Don't take mental shortcuts; ensure your logic is sound and you're not making tenuous leaps of logic. Being able to reason about quality shields you from cargo culting.

Validate your opinions with more experienced people. Don't blindly take their advice; figure out why they do things a certain way and if it's good or bad.

Don't become overly reliant on asking specific questions (either to peers or on Stack Overflow). This will build fragmented islands of competence that might not ever become an integrated knowledge base. Additionally, solutions devoid of context bias your model of quality toward pattern matching rather than first principles. Read the docs to develop a foundation; then build knowledge on top of that.

Pace yourself.
Continually working in an unfamiliar domain is stressful and risks burnout: You'll naturally be slower and may feel pressure to overwork to maintain your pace. One strategy is to alternate between _growth tasks_ and _competence tasks_. This can prevent impostor syndrome while reminding yourself how productive you can be in this new area once you level up.

---

# Minimize Feedback Loops

Growth occurs in a feedback loop of information encounter, synthesis, and application. The tighter this loop, the faster learning occurs. There are many operational tasks that lengthen feedback loops; identify them, and cull them.

## Don't be bottlenecked by decision paralysis

Accept that you'll build things wrong many times. Bias toward quick feedback loops of iteration and evaluation, so you can pivot quickly without too much sunk cost. Identify what makes a _minimally evaluatable product_: enough work that lets you decide if you've made the correct decisions.

## Don't be bottlenecked by brainless, repetitive tasks

Learn keyboard shortcuts for your tools instead of reaching for the mouse. Add aliases to your shell for repetitive commands and tools for repetitive workflows. Aim to operate at the speed of thought; execution should be aggressively optimized and automated.

## Don't be bottlenecked by typing speed

If you're young, there's no reason for your best-case WPM to be lower than 60 or so (ideally, 80-100+). There's the idea that typing speed doesn't matter because one thinks far more than they type. I find this logic ridiculous: There will always be rote tasks where the only limiting factor is your typing speed. Additionally, being a master of keyboard shortcuts isn't an alternative; you can and should do both.

## Don't be bottlenecked by code review

Segment your work into logical and understandable chunks. Craft your PRs to be as readable and as reviewable as possible. Respond to feedback promptly.

## Don't be bottlenecked by your memory

Learn to document and take good notes.
Every minute you spend relearning something you've learned in the past is a waste of time. Similarly, every minute you spend confirming some decision you forgot is a waste of both your time and someone else's time.

---

# Shape Your Environment

You can't build a model of quality without a sufficiently large body of positive and negative signals. For most people, their source of signal will be their workplace. This is subject to diminishing returns; after three to five years, you've probably hit an inflection point of signal versus time.

At this point, it can be worth seeking a role change. This can come from vertical movement (promotion), horizontal movement (role or team change), or changing jobs. Roles can be incredibly different with just a few parameter tweaks. Some examples:

- Small team versus big team
- Public versus private company
- Tech stack (both components of and what part you work on)
- Cultural values
- Processes, patterns, and norms

It's worth experiencing a diversity of these parameters early in your career. The faster you can collect information, the faster you can build a good model of quality. Your new role should have some overlap with your old role. You don't want to return to the floundering stage, but you want enough novelty to spur new growth.

Some environments aren't suitable for learning and growth. You may be pushed to stay in your lane because there's too much pressure to ship for you to temporarily lower your productivity to learn. Try to shape your environment by explicitly asking for different types of tasks; if this is denied, it can be worth changing jobs if your goal is to optimize for self-growth.

---

# It All Comes Back to Identifying Quality

Identifying quality is the common thread tying each of these principles together. Information streams provide new data to feed into your model of quality. A good mentor can direct your budding model, pruning and shaping it during its critical period like a bonsai artist.
Pushing boundaries lets you put your model to the test. Minimizing feedback loops makes the information encounter-synthesis-model update loop as tight as possible. Shaping your environment ensures a constant inflow of new ideas and practices. And having a more robust model of quality makes you better at doing each of these things.

A parting thought: Is this article you're reading right now quality? I'm self-aware enough to know that I myself have many biases, even if I'm unable to pinpoint them. My anecdotal experience has an outsize impact on my priors, which can lead me to dramatically different conclusions compared to others. Where are the tenuous leaps of logic in this article? Where are my biases showing? I'd love to hear from you in the responses below.
timhwang21
1,906,117
Lit Comparison: React.js vs Vue.js
The advantage of frontend in web applications cannot be undermined as it involves the part where...
0
2024-06-29T22:57:23
https://dev.to/jtmidel1/lit-comparision-reactjs-vs-vuejs-3gg5
hng, webdev, react
The importance of the frontend in web applications cannot be overstated, as it is the part where users are welcomed before interacting with everything the web application has to offer. Generally, frontend technologies are tools employed to build the part of a web application that clients or users interact with. The most basic examples are HTML (HyperText Markup Language), CSS (Cascading Style Sheets), and JavaScript. HTML places content on the web page, while CSS handles the styling of that content: colors, fonts, backgrounds, layouts, and so on. Without JavaScript a website behaves statically; JavaScript is what makes web pages interactive.

The frontend part of any project is very important for enticing users and holding their attention. Certain frontend frameworks make the work more beautiful, easier, scalable, and reusable; frameworks are structures you can build on, removing the need to start entirely from the foundation.

React.js and Vue.js are popular frontend tools used to build user interfaces. Comparing the two, React has the edge in several ways. React is a JavaScript library used for building user interfaces for web and native applications. It was developed by Facebook and is known for a flexibility that is very useful in building complex applications. React was released on March 26, 2013 to address issues in Facebook's user interface, and it was later made open source. React has a big community and many resources that can be leveraged, and for large, scalable apps it has always been the more commonly recommended choice.

Vue.js is a frontend framework known for being easy to use, with a clear separation of HTML, CSS, and JavaScript that makes it beginner-friendly. Like React, it uses a virtual DOM (Document Object Model) for efficient rendering. Vue is suited to simpler projects that need to scale quickly.

The HNG Internship is an internship that anyone with competitive fire aspiring to grow in the tech industry will not want to miss. Basic frontend knowledge of HTML, CSS, and JavaScript may not be enough to build nice, scalable projects. Having researched the advantages of hands-on project work with the React.js framework, I'll be glad to participate in this track during this edition of the HNG Internship, as I believe it will boost my growth and rapidly increase my knowledge of frontend technology. Thanks to HNG for the privilege; I strongly believe that, with the way the program is designed, I'll not only learn but also network with great minds on the platform. In case you're still doubting HNG, check this out: #HNG-PREMIUM
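The virtual DOM idea that React and Vue share -- compare a lightweight tree against the previous one and patch only what changed -- can be illustrated with a toy diff. This is a sketch of the general shape of the idea, not either library's actual algorithm:

```javascript
// Toy virtual-DOM diff: compare two "element" trees and collect the
// minimal set of patches. Real React/Vue diffing is far more involved.
function diff(oldNode, newNode, path = "root", patches = []) {
  if (oldNode.tag !== newNode.tag) {
    patches.push({ path, type: "replace", with: newNode.tag });
    return patches;
  }
  if (oldNode.text !== newNode.text) {
    patches.push({ path, type: "text", value: newNode.text });
  }
  const n = Math.max(
    (oldNode.children || []).length,
    (newNode.children || []).length
  );
  for (let i = 0; i < n; i++) {
    const o = (oldNode.children || [])[i];
    const c = (newNode.children || [])[i];
    if (o && c) diff(o, c, `${path}/${i}`, patches);
    else patches.push({ path: `${path}/${i}`, type: o ? "remove" : "insert" });
  }
  return patches;
}

const before = { tag: "ul", children: [{ tag: "li", text: "one" }, { tag: "li", text: "two" }] };
const after = { tag: "ul", children: [{ tag: "li", text: "one" }, { tag: "li", text: "2" }] };
console.log(diff(before, after));
// → [ { path: 'root/1', type: 'text', value: '2' } ]
```

Only the second `<li>` would be touched in the real DOM; the rest of the tree is left alone, which is why this approach renders efficiently.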
jtmidel1
1,904,739
Zig First Impressions
Table of Contents Why Zig? What is Zig? The Good The Bad The Ugly ...
0
2024-06-29T22:55:45
https://dev.to/nw229/zig-first-impressions-3f5p
zig
## Table of Contents

- [Why Zig?](#why-zig)
- [What is Zig?](#what-is-zig)
- [The Good](#the-good)
- [The Bad](#the-bad)
- [The Ugly](#the-ugly)
- [Conclusion](#conclusion)

## Why Zig?

I've had my eye on Zig for a while. It first came onto my radar after throwing [my own hat into the ring](https://github.com/delta229/ctl) of language design, when I started looking into tons of languages from the most mainstream to the most obscure for inspiration. Until recently, I had never actually used it. So, after taking a break from my own language project, I became curious about what it was like to actually program in Zig. Here are my initial thoughts after my first [small project](https://github.com/delta229/bf).

## What is Zig?

Zig is a statically typed, compiled language intended to compete with C. It gives you a lot of control over memory allocation, with the option to choose between memory allocators and standard library APIs that require you to provide one. It has C-like syntax and also clearly values the idea that "explicit is better than implicit" in both syntactic choices and conventions.

## The Good

Let's start with what I like about Zig. Zig was clearly built with the idea of compile-time function execution being a first-class citizen in mind. Slapping `comptime` before a parameter forces arguments passed to it to be available at compile time, and the ability to pass around `type`s as arguments and return them from functions allows building generic functions and types without the typical `<T>` syntax found in many other languages. Furthermore, `inline for/while` allows you to unroll loops at compile time, `if` statements with comptime-known arguments are evaluated at compile time, and `comptime` blocks can ensure other statements happen at compile time. It's very powerful.
For example, Zig's string formatting isn't built into the compiler: its source code is [part of the standard library](https://github.com/ziglang/zig/blob/3e9ab6aa7b2d90c25cb906d425a148abf9da3dcb/lib/std/fmt.zig#L80)!

Zig also has really great integration with C code. You can simply `@cImport` C header files, no need to manually create bindings or generate them with a tool.

```zig
const stdio = @cImport({
    @cInclude("stdio.h");
});

pub fn main() !void {
    _ = stdio.puts("Hello world!"); // use puts directly
}
```

You can also painlessly cross-compile for other operating systems or architectures without downloading anything extra!

```bash
# Build an ARM binary from x86 with one command!
$ zig build-exe main.zig -target aarch64-linux -lc
```

Zig supports optional, result, and slice types as a baked-in part of the language, and disallows null pointers.

```zig
fn foo() void {
    var x: i32 = 10;
    var y: *i32 = &x;
    y = null; // error: expected type '*i32', found '@TypeOf(null)'

    var z: ?*i32 = &x;
    z = null; // OK: z is optional and may be null
}

// v-- slice type (pointer + length combo)
fn half(data: []const u8) ![]const u8 { // ! type means this may return an error
    if (data.len < 2) {
        return error.TooSmall;
    }
    return data[0 .. data.len / 2];
}
```

Zig also supports algebraic data types (aka sum types or [tagged unions](https://en.wikipedia.org/wiki/Tagged_union)) through the `union(enum)` construct.

```zig
const Foo = union(enum) {
    x: i32,
    y: []const u8,
    z,
};

pub fn main() !void {
    const foo = Foo{ .y = "hello world" };
    switch (foo) {
        .x => |int| {
            _ = int; // do something with the int
        },
        .y => |str| {
            _ = str; // do something with the string
        },
        .z => {},
    }
}
```

## The Bad

Earlier, I mentioned that Zig doesn't have generics, or at least, doesn't have special syntax for them. Instead, you can take a parameter with the `type`... type.
```zig
pub fn Vec2(comptime T: type) type {
    return struct {
        x: T,
        y: T,
    };
}

const v: Vec2(f32) = .{ .x = 0.1, .y = 2.5 };
```

There might also be times when the specific type of the input doesn't matter. In that case, you can use the `anytype` type.

```zig
fn foo(bar: anytype) void {
    bar.frobnicate();
}
// this is basically short for
// fn foo(comptime T: type, bar: T)
```

As the name suggests, bar will accept an argument of any type, not just ones that support a `frobnicate` method. If you pass something that doesn't, you'll get an error at compile time.

```zig
fn foo(bar: anytype) void {
    bar.frobnicate();
}

const Foo = struct {
    fn frobnicate(_: @This()) void {}
};

fn hello() void {
    foo(Foo{}); // OK
    foo(10); // no field or member function named 'frobnicate' in 'comptime_int'
}
```

This outputs an error at the point of use of the unsupported function or operation (NOT where I put the comment). Unfortunately, this means that in large functions, the interface required by an `anytype` parameter can get completely buried in the function's implementation.

More unfortunately, Zig doesn't have any support for interfaces either, so there's no simple way to constrain one of these parameters like you can in most languages with trait/interface bounds, and even in modern C++ with concepts. Instead, you have three options to do this "right". You can either:

- Do the compiler's job for it using the `trait` module and manually check for the presence of function names, and output compiler errors
- If you know the types ahead of time, eat a runtime cost with enum dispatch
- Eat a different runtime cost with manual vtable construction (!!!)

The first two approaches are shown in detail in [this article](https://zig.news/perky/anytype-antics-2398), if you're curious. You can see an example of the third in Zig's own standard library with the [Allocator](https://github.com/ziglang/zig/blob/3e9ab6aa7b2d90c25cb906d425a148abf9da3dcb/lib/std/mem/Allocator.zig#L17) type.
Frankly, all three of these options suck for this simple use-case and require much more boilerplate than should be necessary. Manual vtables are definitely the worst, requiring a ton of boilerplate and being the worst for performance. So, what do people do when you make it difficult to do the right thing? They take shortcuts, and that's why a lot of the code I've seen so far just uses raw `anytype` and forces users to deal with it. At least the errors are better than in C++...

## The Ugly

Lastly, I want to cover the things about Zig that aren't foundational issues but still bother me nonetheless: a bunch of nitpicks, basically.

Modern programming wisdom has shifted to a "const by default" mindset. This is sometimes reflected in the language design itself, for example, in Rust:

```rs
fn foo(
    a: &i32,     /* this is a reference to immutable data */
    b: &mut i32, /* this is a reference to mutable data */
    c: i32,      /* this and all above bindings are immutable (no reassignment) */
    mut d: i32,  /* this binding is mutable */
) {
    let x = 0;      // this binding is constant (cannot be reassigned or mutated)
    let mut y = 0;  // this binding is mutable
    let z = &x;     // this reference is immutable
    let w = &mut y; // this one's mutable
}
```

Everything is constant unless otherwise specified. But even in a language like JavaScript, where `const` doesn't actually make the value immutable (it only disables reassignment), developers are usually encouraged to use it unless there's a reason not to.

```js
function foo() {
    const arr = [];
    arr.push(1); // the array can still be mutated, but at least `arr` will always point to this one array

    arr = []; // BAD, crashes at runtime
    arr.push(2);

    return arr;
}
```

This is great, and helps to eliminate some simple bugs while not really costing anything. In its documentation and code, Zig agrees, but from a design standpoint, it seems to be on the fence about it.
Parameters and captures are immutable, which is good (there is no way to make them mutable even if you wanted to):

```zig
fn foo(a: i32) void {
    a = 5; // error: cannot assign to constant

    for ([_]u8{ 1, 2, 3 }) |i| {
        i = 5; // error: cannot assign to constant
    }
}
```

However, pointers are _mutable_ by default, and `const` and `var` are the binding declaration keywords.

```zig
// Zig also doesn't have block comments like /* */
//     v--- this is a pointer to mutable data
//                v--- this is a pointer to immutable data
fn bar(a: *i32, b: *const i32) void {
    var x = 10; // the shorter keyword is used to define a mutable binding
    const y = 3;
}
```

It's a strange mix. Zig will at least give you an error (yes, not a warning, and even in debug builds) if you declare a `var` binding and never mutate/reassign it, but no such diagnostic exists for pointers (as of Zig 0.13.0).

Speaking of errors, this is at least one of the errors that _will_ show up in the editor while you're writing code. Due to the incomplete state of the zls language server, most other errors unfortunately won't. This usually means a lot of trips back and forth between the terminal and the code view -- not a dealbreaker or anything, but pretty annoying when you're used to the alternative.

And finally, a lightning round of irritations:

- The `void` return type must be explicitly written
- Zig [got rid of](https://github.com/ziglang/zig/issues/629) the final expression return syntax, replacing it with the much more verbose block label + break

```zig
fn main() void {
    // this doesn't work
    const a = if (...) { foo(); bar() };

    // instead do this
    const b = if (...) label: {
        foo();
        break :label bar();
    };
}
```

- No closures or even just anonymous function expressions
- No operator overloading or destructors (defer is an ok-ish substitute)
- The `error` type doesn't support adding fields to the errors, just a single error code, meaning you have to resort to out parameters or `union(enum)`s to return extra information on failures.
- Bad higher-order function support, especially for slices and optionals
- Build speed is less-than-stellar

## Conclusion

By the tone of this post, it might seem like I hate Zig, but that definitely isn't the case. It's a huge upgrade from C, with some really innovative features and clever design. Comptime, allocator control, and best-in-class C integration make it a great choice for really low-level applications and maintaining/replacing C apps.

However, it's still suffering from some growing pains, mainly subpar tooling, a lacking package ecosystem, and middling documentation, and it seems a bit behind the times in certain design areas, especially for such a modern language. Allocators and no destructors also make for a lot of mental overhead when writing programs, making it a hard choice for most high-level tasks like CLI applications. If you're a C developer though, you'll likely feel right at home.

It shows a lot of promise and I'll continue to play around with it, but I don't see it replacing Rust as my go-to anytime soon. Thanks for reading!
nw229
1,906,116
Web Development from my view.
To build websites and/or web apps you need: HTML - Markup language CSS - Styling language...
0
2024-06-29T22:52:09
https://dev.to/jprof/web-development-from-my-view-1n8c
To build websites and/or web apps you need:

- HTML - markup language
- CSS - styling language
- JavaScript

JavaScript is your programming language. It wakes up your sleeping HTML/CSS website, adds life to it, and makes it dynamic. Because your website now contains JavaScript, users can interact with it, from clicking buttons to making payments and beyond.

The basics of web development are the languages above. Moving forward, you'll see the need for libraries and frameworks like React and Next. React is a JavaScript library; Next is a React framework. There's a difference between a library and a framework.

To have a smooth experience learning React, it's better if your JavaScript is decently solid. Building with React is still HTML, CSS, and JS under the hood, so a sound knowledge of HTML, CSS, and JavaScript makes learning any JavaScript library or framework easier.

React is not the only JavaScript library; there are others, but the three most popular names you'll hear are:

- React
- Vue
- Angular

React is the default choice for many because it's hot in the market. That doesn't mean you can't learn the others: with a solid JS background, you can pick any of them up.

**HNG11**

It's an opportunity to be a part of this internship. Whether you're a beginner or already familiar with what's going on, you're certain to grow. To grow with like minds, gain connections, and share what I know with others will be a "mission accomplished" for me at HNG. I'm eager to sharpen my React skills and learn more during this internship.

If you're interested in joining this cohort, the best time to join is right now:
👇 https://hng.tech/internship

If you're looking for elite freelance talent, you need one link:
👇 https://hng.tech/hire

I'm Ogbonnaya Johnson, a front-end developer with a scoop of backend development.
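The "JavaScript wakes up your sleeping HTML/CSS" point can be shown in miniature. This is a hedged sketch, not any framework's pattern: state plus a render function, with the markup returned as a string so the example runs outside a browser (in a real page, `render`'s output would be assigned to an element's `innerHTML`):

```javascript
// A tiny "interactive widget" without any library: state + render.
function createCounter() {
  let count = 0;
  return {
    click() {
      count += 1; // user interaction mutates state...
      return render();
    },
    render,
  };
  function render() {
    // ...and the markup is re-rendered from that state.
    return `<button>Clicked ${count} times</button>`;
  }
}

const counter = createCounter();
console.log(counter.render()); // <button>Clicked 0 times</button>
counter.click();
console.log(counter.click()); // <button>Clicked 2 times</button>
```

Libraries like React automate exactly this re-render-from-state loop, which is why solid plain-JavaScript fundamentals make them easier to learn.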
jprof
1,906,115
ReactJS and AngularJS: Similarities and differences.
As a beginner first time diving into the world of front-end technologies, one is likely to come...
0
2024-06-29T22:49:41
https://dev.to/giovanni_obodoakor_80dfe7/reactjs-and-angularjs-similarities-and-differences-17pm
angular, react, hng, giovanniwrites
As a beginner diving into the world of front-end technologies for the first time, one is likely to come across two popular frameworks: ReactJS and AngularJS. Discussing these tools and their similarities helps in understanding how they work and which one might be the right fit for the majority of projects.

Both ReactJS and AngularJS have vibrant communities that offer a wealth of resources, tutorials, and third-party libraries to support developers at all skill levels. Whether you choose ReactJS or AngularJS, you can benefit from the extensive documentation, online communities, and open-source tools available to help you build modern web applications.

**ReactJS: A Beginner's Guide**

ReactJS is a JavaScript library developed by Facebook for building interactive user interfaces. It works by breaking down your web application into reusable components, which are like building blocks that you can assemble to create complex UIs. React uses a virtual Document Object Model (DOM) to efficiently update and render changes, providing a faster and smoother user experience.

**AngularJS: A Beginner's Overview**

AngularJS, on the other hand, is a front-end framework developed by Google. It follows the Model-View-Controller (MVC) architecture, where you separate your application's logic, presentation, and data. Angular uses directives to extend HTML and make it more dynamic, allowing you to create interactive web applications with ease.

**Similarities Between ReactJS and AngularJS**

Despite their differences, ReactJS and AngularJS share some common ground. Both frameworks:

1. Offer component-based architecture: React and Angular allow you to create reusable components that help in building scalable and maintainable applications.
2. Provide tools for efficient state management: They offer mechanisms to manage the state of your application, allowing you to handle data changes and interactions effectively.
3.
Support a vibrant community: Both ReactJS and AngularJS have active communities of developers who contribute libraries, tools, and resources to help you in your projects.

Understanding these foundational concepts of ReactJS and AngularJS will set you on the path to building engaging and dynamic web applications. Now, let's delve deeper into the specifics of each framework to grasp their differences, unique features and functionalities.

1. Language: ReactJS is a JavaScript library, while AngularJS is a JavaScript framework. This means that React is more lightweight and focused on the view layer, allowing developers to have more flexibility in choosing other libraries and tools. On the other hand, AngularJS provides a more comprehensive framework with built-in solutions for common tasks like routing, form handling, and data management.

2. Component-Based Architecture: Both ReactJS and AngularJS follow a component-based architecture, which breaks down the user interface into reusable components. React uses JSX (a syntax extension for JavaScript) to define components, while AngularJS uses directives and controllers. React's component model is simpler and more flexible, making it easier to understand and maintain smaller components. AngularJS, on the other hand, provides a more opinionated structure for organizing components within its framework.

3. Virtual DOM vs. Regular DOM: ReactJS uses a virtual DOM, a lightweight copy of the actual DOM, to improve performance by minimizing direct manipulation of the real DOM. When changes occur, React compares the new virtual DOM with the previous one and only updates the necessary parts of the real DOM, resulting in faster rendering. AngularJS, on the other hand, directly manipulates the real DOM, which can be less efficient for large-scale applications.

4. Data Binding: ReactJS uses one-way data binding, meaning that data flows in one direction, from parent components to child components.
This makes it easier to manage state and understand how data changes propagate through the application. AngularJS, on the other hand, supports two-way data binding, where changes in the model are automatically reflected in the view and vice versa. While two-way data binding can simplify development in some cases, it can also make it harder to track data changes and debug issues.

5. Community and Ecosystem: ReactJS has a large and active community with extensive documentation, tutorials, and third-party libraries (e.g., Redux for state management). This makes it easier for beginners to find resources and support when learning React. AngularJS also has a strong community and ecosystem, with features like Angular CLI for project scaffolding and Angular Material for UI components. However, AngularJS has seen a shift towards Angular (Angular 2+), which has a different architecture and learning curve.

6. Performance: In terms of performance, ReactJS's virtual DOM and efficient rendering make it a good choice for building fast and responsive web applications. React's ability to selectively update components based on data changes can lead to better performance compared to AngularJS, especially in complex applications with a lot of dynamic content. AngularJS, on the other hand, may suffer from performance issues due to its two-way data binding and direct manipulation of the DOM.

When choosing between ReactJS and AngularJS, consider the specific requirements of your project, your familiarity with JavaScript and web development, and the size and complexity of the application. If you are a beginner or looking for a more straightforward, lightweight solution, ReactJS might be a better starting point. Its vibrant community and extensive resources can help you quickly get up to speed and start building interactive user interfaces.
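The virtual-DOM diffing described in point 3 can be sketched as a toy model in plain JavaScript. This is purely illustrative and is not React's actual reconciler (which is far more sophisticated); the node shape `{ tag, text, children }` and the `diff` function are inventions for this sketch:

```javascript
// Toy virtual-DOM diff: compare an old and a new tree and collect only the
// nodes that changed, so only those patches would touch the real DOM.
function diff(oldNode, newNode, path = 'root', patches = []) {
  if (!oldNode) {
    patches.push({ path, type: 'ADD', node: newNode });
  } else if (!newNode) {
    patches.push({ path, type: 'REMOVE' });
  } else if (oldNode.tag !== newNode.tag || oldNode.text !== newNode.text) {
    patches.push({ path, type: 'REPLACE', node: newNode });
  } else {
    // Same node: recurse into children and diff them pairwise.
    const oldKids = oldNode.children || [];
    const newKids = newNode.children || [];
    const len = Math.max(oldKids.length, newKids.length);
    for (let i = 0; i < len; i++) {
      diff(oldKids[i], newKids[i], `${path}.${i}`, patches);
    }
  }
  return patches;
}

const oldTree = { tag: 'ul', children: [{ tag: 'li', text: 'a' }, { tag: 'li', text: 'b' }] };
const newTree = { tag: 'ul', children: [{ tag: 'li', text: 'a' }, { tag: 'li', text: 'c' }] };
console.log(diff(oldTree, newTree)); // only the second <li> needs an update
```

The point of the exercise: even though the whole tree was "re-rendered", only one patch is produced, which is why virtual-DOM diffing can be cheaper than rewriting the real DOM wholesale.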
React's focus on components and its declarative approach to defining UI make it relatively easy to learn and use, even for those new to web development. On the other hand, if you are working on a larger project that requires a more structured framework with built-in features like routing, form handling, and dependency injection, AngularJS could be a better fit. AngularJS's opinionated architecture can provide a clear roadmap for organizing your code and handling complex interactions within your application. However, keep in mind that AngularJS has evolved into Angular (Angular 2+), which introduces a different architecture and may have a steeper learning curve for beginners. In summary, ReactJS and AngularJS have their own strengths and weaknesses. React is more lightweight, flexible, and efficient in terms of performance, making it a popular choice for building user interfaces. Its component-based architecture, virtual DOM, and one-way data binding simplify development and improve maintainability. On the other hand, AngularJS provides a more comprehensive framework with built-in solutions for various aspects of web development. Its two-way data binding and opinionated structure can be beneficial for larger projects that require a more structured approach. In terms of performance, ReactJS's virtual DOM and selective rendering approach make it a strong contender for building fast and responsive web applications. By minimizing direct manipulation of the DOM and efficiently updating components, React can deliver high performance even in complex scenarios. AngularJS, with its two-way data binding and direct DOM manipulation, may face performance challenges in larger applications with frequent data updates. **My Personal feelings about ReactJS (Pun intended)** 1. Performance: React's virtual DOM and efficient rendering make your web apps super fast, ensuring a smooth user experience even with dynamic content and frequent updates. 2. 
Component-Based Architecture: React's component model breaks down your UI into reusable chunks, making it easy to manage and maintain your code while building interactive and engaging user interfaces. 3. Virtual DOM: By using a virtual DOM, React minimizes direct manipulation of the actual DOM, resulting in faster rendering and improved performance for your web applications. 4. One-Way Data Binding: React's one-way data flow simplifies state management and makes it easier to track data changes, leading to more predictable and maintainable code. 5. Community Support: With a large and active community, React offers extensive documentation, tutorials, and third-party libraries to help you learn, troubleshoot, and enhance your projects with ease. 6. Flexibility: React's lightweight nature and focus on the view layer give you the flexibility to choose other libraries and tools, allowing you to tailor your development stack to suit your project's specific needs and requirements. **What I look forward to this HNG Internship.** 1. Skill Development: Over time in the HNG Internship, I can expect to enhance my ReactJS skills through practical projects, mentor guidance, and constructive feedback. As I advance, I will gain a better grasp of React's core concepts, best practices, and advanced techniques, improving my ability to create dynamic web applications effectively. 2. Project Experience: Throughout the internship, I will have the opportunity to apply my ReactJS knowledge in real-world projects, gaining valuable hands-on experience and refining my problem-solving abilities. By working on diverse assignments and collaborating with my peers, I aim to build a strong portfolio showcasing my proficiency in front-end development through engaging React projects. 3. Networking Opportunities: Through the HNG Internship, I can connect with industry professionals, fellow interns, and potential employers, expanding my network and exploring future career pathways. 
By actively participating in discussions, attending workshops, and engaging in community events, I hope to learn from others, share my experiences, and establish meaningful relationships within the tech industry. 4. Career Growth: As I progress in the HNG Internship and demonstrate my proficiency in ReactJS, I anticipate seeing my career prospects expand, potentially leading to job offers, freelance opportunities, or invitations to join tech teams. By showcasing my skills, contributing to impactful projects, and continuously enhancing my React expertise, I aim to position myself as a sought-after front-end developer ready to take on challenging roles and advance in the industry. Written by Giovanni Obodoakor #HNGInternship #Frontend #Frontendtechnologies
giovanni_obodoakor_80dfe7
1,906,114
React vs. Vue.js: Comparing two popular component-based frontend technologies
Introduction In the ever-evolving world of Frontend development, the advent of...
0
2024-06-29T22:49:05
https://dev.to/dmystical_coder/react-vs-vuejs-comparing-two-popular-component-based-frontend-technologies-g50
webdev, react, vue, frontend
## Introduction

In the ever-evolving world of Frontend development, the advent of component-based frameworks with their modular, reusable components has greatly simplified the process of creating and maintaining modern web applications. A project's success or failure now depends on choosing the right framework, making sure that the framework is going to be around for a while and remain relevant whilst also being easy to use. The concept of components became popular with the rise of React; hence, it is one of the most popular frameworks among the developer community. However, it is essential to understand other available frameworks to make informed decisions. In this article, we'll take a closer look at two powerful frameworks: [React](https://react.dev) and [Vue.js](https://vuejs.org). We'll explore the strengths and weaknesses of each framework, compare their features, and help you decide which might be the best fit for your next project.

## React: The library for web and native user interfaces

React is a free and open-source frontend JavaScript library for building user interfaces using reusable user interface (UI) components. It was developed by Facebook and is maintained by Meta and a community of individual developers and companies. React can be used to develop single page applications, mobile applications, or server-rendered applications with frameworks like Next.js. It is also known for its declarative syntax, JSX, a syntax extension for JavaScript that lets you write HTML-like markup in a JavaScript file. Many modern websites use React, including Coursera, PayPal, Netflix, IMDb, Khan Academy, Reddit, etc.
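The declarative, component-based style described above can be sketched without React itself. The following is a plain-JavaScript illustration (the `Greeting`/`App` names and string-returning "components" are inventions for this sketch, not a React API): components are just functions of their inputs, and the UI is recomputed from state rather than mutated in place.

```javascript
// A "component" as a pure function of its props, returning markup.
function Greeting(props) {
  return `<h1>Hello, ${props.name}!</h1>`;
}

// The parent owns the state and passes it down; data flows one way.
function App(state) {
  return `<div>${Greeting({ name: state.user })}</div>`;
}

// "Rendering" is just re-running the functions with the current state,
// which is the core idea behind React's declarative model.
function render(state) {
  return App(state);
}

console.log(render({ user: 'Ada' })); // <div><h1>Hello, Ada!</h1></div>
```

In real React, JSX such as `<Greeting name="Ada" />` compiles down to exactly this kind of function call, and the framework handles diffing the output against the DOM.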
### Advantages of React

- Component-Based Architecture
- Virtual Document Object Model (V-DOM)
- Strong, active community and ecosystem
- Faster debugging and rendering
- Easy to learn and use
- Readily available JavaScript libraries
- Cross-platform

### Disadvantages of React

- Inadequate documentation
- Rapid evolution
- JSX complexity

## Vue.js: The progressive JavaScript framework

Vue.js, created by Evan You, is a JavaScript framework for building web user interfaces. It is a progressive framework, meaning it is designed to be incrementally adoptable, allowing it to be used as a library for enhancing specific parts of an application or as a full-featured framework for building complex single-page applications. Enterprises like Apple (Swift UI site), Zoom, L'Oréal Paris, and Nintendo use Vue.js.

### Advantages of Vue.js

- Approachable
- Performant
- Versatile
- Community first
- Reactivity system

### Disadvantages of Vue.js

- Smaller community
- Over-flexibility

## Conclusion

Both React and Vue.js offer unique advantages for frontend development. React's component-based architecture and strong community support make it an excellent choice for large, complex applications. Vue's ease of learning and flexibility make it a fantastic option for both small and large projects. As a participant in the ongoing HNG internship, I expect to build modular components using React's component-based architecture, manage state effectively and leverage other amazing features of React. I'm excited to work with React because of its widespread adoption, strong community support, and continuous evolution, which keeps it at the cutting edge of frontend development. If you're reading this article and are interested in learning more about opportunities in frontend development, check out the [HNG Internship](https://hng.tech/internship) and how you can [hire talented interns](https://hng.tech/hire) through their program.
The HNG Internship provides a fantastic platform for budding developers to hone their skills and gain real-world experience.
dmystical_coder
1,906,113
About the "S" in Solid
You probably know the "SOLID principles", which should help you to be a better programmer. And maybe,...
0
2024-06-29T22:33:39
https://dev.to/efpage/about-the-s-in-solid-4gcl
oop, solid, programming, discuss
You probably know the ["SOLID principles"](https://en.wikipedia.org/wiki/SOLID), which should help you to be a better programmer. And maybe you are still struggling with the "S", the [Single-responsibility principle (SRP)](https://en.wikipedia.org/wiki/Single-responsibility_principle). Maybe I can give you a different view on this topic.

Ok, we are talking about OOP, Object-Oriented Programming. As [Robert C. Martin (Uncle Bob)](https://en.wikipedia.org/wiki/Robert_C._Martin), the originator of the term, was editor-in-chief of C++ Report magazine, we can assume he was probably using this language. Did you ever wonder about the strange formulation:

>"A class should have only one reason to change"?

Often enough, we find this explanation: "[A class should have only one reason to change, meaning it should only have one job or responsibility](https://codefinity.com/blog/The-SOLID-Principles-in-Software-Development)". But what is "a job"? Is it a complex task or keeping a single state? If a class has the job to print something, in what way is it changed when it is doing that job? Let's say: we get more questions than answers. So maybe we should look back at what C++ was used for in 2003. This is an overview of the Microsoft Foundation Classes (MFC):

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1y600xz55gnxql997itt.png)

Ok, this is the first of three charts, but all classes are derived from CObject, which implements the most basic behavior of MFC objects. You will find all kinds of interface objects that can be used to build user interfaces, but also handle drag-and-drop actions, store the current UI state and so on. These are the building blocks of all Windows applications: "The Microsoft Foundation Class (MFC) Library and Visual C++ provide an environment that you can use to easily create a wide variety of applications." [(Source)](https://webocreation.com/mfc-fundamentals-and-architecture/).
If you are building user interfaces in C++, you usually do not start building classes from scratch. You select an appropriate class that provides most of what you need, so you just add some specific code. So, you start with one of the foundation classes or one of its descendants. As an example, say you want to create a new input field to be used in the application UI. As the UI only accepts classes that are derived from certain core classes, you need to start with one of the existing classes. Therefore you inherit a lot of skills that each UI element must have. To minimize your work, you select an element that is very similar to what you want and add only what you need. Thanks to inheritance and polymorphism, this is possible without breaking the existing code.

Later, Robert C. Martin explained the SRP as cohesion: "Gather together the things that change for the same reasons. Separate those things that change for different reasons". You surely need more than one thing to gather something. In C++, class hierarchies can be quite complex and deeply nested. But if you follow the SOLID principles, you will be able to manage this complexity. Think of a class always as part of a hierarchy. Each new class you add to this hierarchy should have a single task. This does not mean it should only have one method or one state. But all properties and methods it implements should be focused on the same task. I suppose this is what Uncle Bob meant.

There is another aspect of the SRP that you can only understand by thinking in hierarchies: a hierarchy is like a tree; each method you add to the root is inherited by all the branches and leaves. If you choose the wrong position, you might be forced to implement the same code in multiple branches. To keep your code maintainable, you should try to find a position where every task needs to be implemented only once.
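The "implement each task in exactly one place in the hierarchy" idea can be sketched with a tiny class tree. JavaScript is used here for brevity (the same reasoning applies to C++/MFC), and the class names are invented for the sketch:

```javascript
// Shared behavior lives once at the level where every descendant needs it;
// each subclass below adds exactly one concern of its own.
class UIElement {
  constructor(id) { this.id = id; }
  draw() { return `draw ${this.id}`; }   // every UI element can draw itself
}

class InputField extends UIElement {
  constructor(id) { super(id); this.value = ''; }
  setValue(v) { this.value = v; }        // input-specific task only
}

class PasswordField extends InputField {
  masked() { return '*'.repeat(this.value.length); } // masking task only
}

const pw = new PasswordField('pw1');
pw.setValue('secret');
console.log(pw.draw());    // inherited from the root: "draw pw1"
console.log(pw.masked());  // "******"
```

If `masked()` had been placed on `UIElement`, every button and menu would drag it along; if `draw()` had been placed on each leaf, the same code would be duplicated across branches. Each task has one responsible member, which is the reading of the SRP the article argues for.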
This could also be seen as a "single responsibility principle": for each task (in a class family), there should only be one member that is responsible for this task. Finally, I'm not sure what Uncle Bob was really trying to say, but I know that using OOP without taking advantage of inheritance does not make much sense.
efpage
1,906,112
TAKING ON NEW CHALLENGES AS A BACKEND DEVELOPER
When I first started learning Backend Development, I knew for sure that my journey through the course...
0
2024-06-29T22:31:19
https://dev.to/tee-hope/taking-on-new-challenges-as-a-backend-developer-1j2b
career, backenddevelopment, softwareengineering, webdev
When I first started learning Backend Development, I knew for sure that my journey through the course wouldn't always be smooth. I was bound to face certain difficulties along the way. During my University's Industrial Training Program, as a Computer Engineering student, I enrolled at an institute, Deebug Institute (https://deebug.org), where I was privileged to study Backend Development with professionals and masters in the field. I was able to avoid most of the roadblocks and was always set on the right track whenever I was caught derailing. I started my Backend Development journey with Node.js and Express, building small projects to solidify my understanding of RESTful APIs and database interactions.

**CHALLENGES**

My first 'big project' after graduating was building a seamless e-commerce application. Upon finishing my program, I was met with the challenge of integrating a payment platform so that users could make payments and merchants could manage their orders with ease. This seemed like a daunting task at the time. I initially chose to integrate Paystack as my payment platform because it was recommended by my tutor. That didn't stop me from doing my own research, but since I had no prior experience with any payment platform, I decided to stick with Paystack due to its popularity and extensive documentation in the Nigerian development community. However, navigating the API endpoints, handling errors, and implementing secure payment form handling proved to be a challenge. I spent hours debugging, searching through YouTube, doing research and going through the API's documentation. I encountered issues with:

- Understanding Paystack's documentation and API endpoints.
- Handling errors and exceptions during payment processing.
- Implementing secure payment handling.
- Basically, getting the code to run.
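The error-handling pattern involved can be sketched roughly like this. Note that the endpoint path, field names, and response shape below are hypothetical illustrations, not Paystack's actual API; the HTTP client is injected so the pattern can be exercised without a network:

```javascript
// Illustrative wrapper around a payment-gateway call: validate input, call
// the gateway, and surface one consistent error to the checkout flow.
async function initializePayment(httpPost, secretKey, { email, amount }) {
  if (!email || !Number.isInteger(amount) || amount <= 0) {
    throw new Error('invalid payment payload');
  }
  try {
    const res = await httpPost('/transaction/initialize', {
      headers: { Authorization: `Bearer ${secretKey}` }, // hypothetical auth shape
      body: { email, amount },
    });
    if (!res.status) throw new Error(res.message || 'gateway rejected request');
    return res.data; // e.g. a reference or redirect URL for the user
  } catch (err) {
    // Don't leak transport-level details into the checkout flow.
    throw new Error(`payment initialization failed: ${err.message}`);
  }
}

// Offline usage with a fake client:
const fakePost = async () => ({ status: true, data: { reference: 'ref_123' } });
initializePayment(fakePost, 'sk_test', { email: 'a@b.com', amount: 5000 })
  .then((d) => console.log(d.reference)); // ref_123
```

Injecting the client also makes the failure paths (declined transactions, timeouts) easy to simulate in tests, which was exactly the part that was hard to debug against a live gateway.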
After several tries and compatibility issues that arose from my system, I was able to remedy the issues after carefully going through the documentation over and over until I could grasp the request and error handling, which helped me test and debug my code. I also joined online communities and forums to learn from more experienced developers. I realised that integrating an API wasn't just about writing code; it was about understanding the entire system. The documentation and support resources were invaluable in helping me overcome the challenges I encountered.

**A NEW CHAPTER: HNG INTERNSHIP PROGRAM**

As I continued to build my skills and gain experience in backend development, I realised there was still much to learn, and I longed to take my skills to the next level and gain industry experience. And so, I am excited to share that I've enrolled in the HNG Internship Program. The HNG Internship Program is a remote internship program aimed at helping aspiring developers, designers, and tech enthusiasts gain practical experience and improve their skills. By participating in this program, interns can work on real projects, receive mentorship and connect with the tech community, all while working remotely. I was introduced to this program by my cousin, who is also in the industry. I am excited to be part of this year's cohort. I guess here's my chance at getting all the experience and collaborative skills that are needed. Learn more about the internship here: https://hng.tech/internship Learn about their rich talent pool here: https://hng.tech/hire

**CONCLUSION**

I've learned valuable lessons about problem-solving, perseverance and the importance of community, and as I continue on my development journey, I'm excited to explore new technologies and concepts. If you are a fellow developer facing similar challenges, I hope my story inspires you to keep pushing forward. Note that every obstacle is an opportunity to learn and grow.
tee-hope
1,905,669
After 1 Year, ASP.NET Core CodeBehind Framework
The first version of CodeBehind (1.0.0) was released on 29 June 2023 by Elanat. It has been a year...
0
2024-06-29T22:30:53
https://dev.to/elanatframework/after-1-year-aspnet-core-codebehind-framework-2e99
webdev, opensource, csharp, devops
The first version of [CodeBehind](https://github.com/elanatframework/Code_behind) (1.0.0) was released on 29 June 2023 by [Elanat](https://elanat.net). It has been a year since the release of the first version of CodeBehind (at the time of publishing this article), and with the introduction of new features, this framework has become more efficient and powerful. Elanat aims to break the hold of Microsoft's default web frameworks in ASP.NET Core (ASP.NET Core MVC and Razor Pages). In the following, we will explain the improvements to the CodeBehind framework with each new release.

## New features on new versions

### Early versions

The first version of CodeBehind is based on .NET Core version 7.0; subsequent versions prior to version 1.7 were multiple attempts to create a system that meets minimum software quality requirements.

### Version 1.7

**New features:**

- The possibility of creating a page view without having to follow the MVC pattern
- Possibility to create only view without controller and model
- Possibility to create model and view without controller

### Version 1.8

**New features:**

- Razor syntax support
- Template support
- Added return template
- External template
- Added option
- The option to specify the path of aspx files
- Possibility of rewriting the path of aspx files as a directory name
- Ability to remove additional lines, tabs, and spaces
- Namespace and dll for CodeBehind view class
- Added HtmlData classes
- Constructor method

**This version guarantees 100% Code-Behind support**

**Problems that were solved:**

- The problem of executing the path with extra characters after the slash character was solved.
- Fixed the problem of replacing the class file with failed compilation in the last successful compilation.

### Version 1.8.1

**Problems that were solved:**

- Solving the problem of removing one or two characters after Razor syntax.

### Version 1.9

**New features:**

- Ability to add layout page
- Ability to send data from the current page to the layout page
- The possibility of calling external pages from the view section
- The possibility of preventing the direct execution of some pages (such as separate header and footer)
- Ability to call aspx files in their own path, after rewriting
- Improvements in the trim operation at the beginning of the aspx file

**Problems that were solved:**

- The problem of loading the constructor model without a controller was solved.
- Fixed else detection problem for if in Razor syntax.

### Version 1.9.1

**Problems that were solved:**

- A mistake caused the arguments of the model constructor to be wrongly placed in the controller constructor; this problem has been fixed now.
- In this version, if the CodeBehind framework is activated for the first time, it will no longer give the wwwroot directory missing error and a default welcome file will be placed in it.
- The error that occurred when activating the set break for layout page (`set_break_for_layout_page=true`) option was resolved.
- The problem of not automatically moving from the wwwroot path to the view path has been solved.
- The problem of default Default.aspx files being ignored and not rewritten as a directory was solved.

### Version 1.9.2

**New features:**

- In this version and later, in the methods of the final view class, when creating a new instance of the controller class, the term controller is used instead of the term CurrentController
- In this version and later, the context inside the aspx files is added at the beginning of the aspx file

**Problems that were solved:**

- In the standard syntax, the problem of identifying template blocks that have the next line character or Tab character after the template name was solved.

### Version 1.9.3

**Problems that were solved:**

- Fixed the problem of naming templates with numbers in the standard syntax.
- Solved the problem of not ignoring two consecutive at signs (@) in conditional blocks and loop blocks.

### Version 2.0

**New features:**

- Ability to add data to ViewData in controller and model
- The addition of a download API and the possibility of downloading files from executive pages in all three sections: view, model and controller
- The addition of a global template file to support all view pages
- The possibility of adding more templates, separated by the semicolon character (;)
- New option to support cshtml files in the options file
- Added default pages (including layout) after first run

### Version 2.1

**New features:**

- Ability to change the view in the controller
- Ability to transfer template block data in ViewData
- Complete rewriting of code related to new lines and backslash of executable files
- Complete rewriting of the code related to creating files
- And a series of minor changes and improvements

**Problems that were solved:**

- Deleting unused ex variable from the final view class.

### Version 2.1.1

**Problems that were solved:**

- Resolving the problem of Razor syntax page attributes ending with the less-than (<) character.

### Version 2.1.2

**New features:**

- Complete rewrite of the code related to page attribute recognition in Razor syntax
- Adding the view file path comment above the corresponding methods in the view class

### Version 2.2

**New features:**

- Added CallerViewPath and CallerViewDirectoryPath to view, model and controller
- New option to display minor errors in the options file
- Improved debugging and improved `views_compile_error.log` error file
- The possibility of creating a controller without requiring the existence of the PageLoad method
- Added error page to default pages
- Added the path of the error page in the options file
- Added FoundPage attribute to detect page execution
- Improved detection of closed brackets related to server codes, after apostrophes
- Added PageLoad method to Controller abstract class
- Added new feature section for better route control
- The ability to create a model without the need to add an abstract
- The ability to create a CodeBehindConstructor method without the need for input arguments
- And a series of minor changes and improvements

### Version 2.3

**New features:**

- Ability to specify View along with Model from all Controllers
  - The possibility of loading pages with the model in the LoadPage method in View pages
- Support strings written from the previous Controller
- Added the possibility to prevent access to Default.aspx
  - Added the prevent access to Default.aspx option in the options file
- Added StaticObject class
- And a series of minor changes and improvements

**Problems that were solved:**

- In cases where the current View is wrongly requested from the Controller, the loop is avoided.

### Version 2.4

**New features:**

- New feature for route configuration
- The possibility of running the controller with the text name of the controller
- Applying multi-threaded processing to create the View class
- Marking the View class for when new View pages are added
- Ability to set page attributes with lowercase letters in standard syntax
- Ability to add text tag with multiple lines in Razor syntax
- And a series of minor changes and improvements

**In this version, it is possible to give preference to the controller**

**Problems that were solved:**

- Fixed the problem of finding the `Microsoft.AspNetCore.App` directory for some operating systems.
- Fixed the problem of matching projects whose name does not match the namespace name.

### Version 2.4.1

**Problems that were solved:**

- Fixing the problem of calling the pages that were requested with the query string.

### Version 2.4.2

**Problems that were solved:**

- Fixed the problem of not creating a query string after calling the pages in the `Run` method.

### Version 2.4.3

**New features:**

- Adding Name and NameCollection classes in the HtmlData namespace

**Problems that were solved:**

- Avoiding adding the same query.

### Version 2.5

**New features:**

- Adding middleware for easier configuration

### Version 2.5.1

**New features:**

- Default.aspx is no longer added in Section when `prevent access default aspx` is enabled
- And a series of minor changes and improvements

### Version 2.6

**New features:**

- Support for constructor method of Controller class and Model class
- Improved detection of View page attributes in standard syntax

### Version 2.7

**New features:**

- Adding CodeBehind roles
  - Adding role access middleware
  - The possibility of preventing the access of roles to the routes
  - Ability to define actions and give action access for roles
  - Ability to define actions for each user based on the session

### Version 2.7.1

**Problems that were solved:**

- Correcting a typo in the `UseRollAccess` middleware and changing it to `UseRoleAccess`.

### Version 2.8

**New features:**

- Adding cache
- Improved numbering of aspx file methods in the View final class

## Read more:

- [CodeBehind Framework Tutorial Series](https://dev.to/elanatframework/codebehind-framework-tutorial-series-2571)
- [ASP.NET Core MVC vs Razor Pages, Which is Faster? CodeBehind!](https://dev.to/elanatframework/aspnet-core-mvc-vs-razor-pages-which-is-faster-codebehind-2a50)
- [Elanat Brings Web-Forms Back to ASP.NET Core!](https://dev.to/elanatframework/elanat-brings-web-forms-back-to-aspnet-core-44ej)
- [Disadvantages of MVC Architecture in ASP.NET Core](https://dev.to/elanatframework/disadvantages-of-mvc-architecture-in-aspnet-core-16e7)

### Related links

CodeBehind on GitHub: https://github.com/elanatframework/Code_behind
CodeBehind in NuGet: https://www.nuget.org/packages/CodeBehind/
CodeBehind page: https://elanat.net/page_content/code_behind
elanatframework
1,906,111
Understanding Frontend Technologies by Comparing React vs. Angular
Introduction to Frontend Development Frontend development is a critical aspect of web development...
0
2024-06-29T22:29:12
https://dev.to/islot/understanding-frontend-technologies-by-comparing-react-vs-angular-4ga7
**Introduction to Frontend Development**

Frontend development is a critical aspect of web development that focuses on creating the visual and interactive parts of a website or web application. It comprises everything users see and interact with directly in their web browsers, including the layout, design, content, and user interface elements like buttons, forms, and navigation menus. The primary technologies used in frontend development include HTML, CSS, and JavaScript, alongside various frameworks and libraries that streamline the development process. With the rise of web applications and the increasing demand for responsive, dynamic user interfaces, frontend development is in high demand. Mastering frontend technologies can take anywhere from a few months to a few years, depending on the complexity of the tools and the depth of understanding required. Job prospects in frontend development are strong, with various opportunities available in multiple industries, from startups to large corporations.

**Types of Frontend Frameworks**

Several frontend frameworks are available, each offering unique features and benefits. Some of the most popular ones include:

- ReactJS: Developed by Facebook, React is a JavaScript library for building user interfaces, particularly single-page applications where you can manage the view layer for web and mobile apps (React, 2022).
- Angular: Developed by Google, Angular is a platform for building mobile and desktop web applications. It provides a comprehensive solution for creating dynamic, single-page applications (Angular, 2022).
- Vue.js: A progressive framework for building user interfaces. Vue is designed to be incrementally adoptable, with a core library focused on the view layer.
- Svelte: A compiler that generates highly optimized vanilla JavaScript at build time rather than interpreting application code at run time.
- Ember.js: An opinionated framework for building ambitious web applications.
It provides a solid application structure with a robust convention-over-configuration philosophy. **React vs. Angular** Origins and Backing ReactJS Pioneer: Jordan Walke Company: Facebook Launch Date: March 2013 Latest Version: React 18, released in March 2022 (React, 2022) Angular Pioneer: Misko Hevery Company: Google Launch Date: September 2016 (Angular 2+) Latest Version: Angular 15, released in November 2022 (Angular, 2022) Version History and Dates ReactJS React 0.3: March 2013 React 15: April 2016 React 16: September 2017 React 17: October 2020 React 18: March 2022 (React, 2022) Angular Angular 2: September 2016 Angular 4: March 2017 Angular 5: November 2017 Angular 6: May 2018 Angular 7: October 2018 Angular 8: May 2019 Angular 9: February 2020 Angular 10: June 2020 Angular 11: November 2020 Angular 12: May 2021 Angular 13: November 2021 Angular 14: June 2022 Angular 15: November 2022 (Angular, 2022) **General Comparison** **Learning Curve**: React has a relatively moderate learning curve, especially for those familiar with JavaScript. Angular, being a comprehensive framework, has a steeper learning curve due to its extensive feature set and use of TypeScript. **Performance**: Both React and Angular offer high performance, but React's virtual DOM can provide performance benefits in specific scenarios. Angular's two-way data binding can be beneficial for more complex applications. **Community and Ecosystem**: React boasts a larger community and a vast ecosystem of third-party libraries. Angular has a strong community and robust official libraries, but React's ecosystem is generally considered more extensive (React, 2022; Angular, 2022). **Flexibility vs. Convention**: React is highly flexible and allows developers to choose their tools and libraries. Conversely, Angular follows a convention-over-configuration philosophy, providing a comprehensive, out-of-the-box solution (React, 2022; Angular, 2022). 
**Why HNG Embraces ReactJS?** At HNG, ReactJS was adopted because it is a library that has revolutionized frontend development with its declarative approach and component-based architecture. React's virtual DOM and extensive ecosystem make it an excellent choice for building scalable and maintainable applications. During my time at HNG, I look forward to deepening my knowledge of ReactJS and contributing to exciting projects. As most alumni of the HNG internship have said, it creates an incredible opportunity to collaborate with talented developers, learn best practices, and apply cutting-edge technologies to real-world challenges. For more information about the [HNG Internship](https://hng.tech/internship), visit HNG Internship. If you want to hire skilled developers from the program, check out [HNG Hire](https://hng.tech/hire). **Conclusion** React and Angular offer compelling features and advantages, making them excellent choices for different project requirements. React's flexibility and vast ecosystem make it ideal for developers who prefer to choose their tools and libraries. At the same time, Angular's comprehensive framework and built-in features provide a robust solution for building dynamic, single-page applications. Ultimately, the choice between React and Angular depends on your needs and preferences. By exploring these technologies, you can make informed decisions and stay ahead in the ever-evolving world of frontend development. **References** React. (2022). A JavaScript library for building user interfaces. Retrieved from https://reactjs.org/ React. (2022). React 18 Announcement. Retrieved from https://reactjs.org/blog/2022/03/29/react-v18.html Angular. (2022). The modern web developer's platform. Retrieved from https://angular.io/ Angular. (2022). Angular 15 Announcement. Retrieved from https://blog.angular.io/angular-v15-is-now-available-df7be7f2f4d8 React. (2022). React Release Notes. Retrieved from https://reactjs.org/versions/ Angular. (2022). Angular Versions. Retrieved from https://angular.io/guide/releases
islot
1,906,110
Mobile Development Platforms and Architecture
**Introduction** There are approximately 6.5 billion smartphone users. There are various...
0
2024-06-29T22:27:01
https://dev.to/isah_katunadam_00cd9ef1f/mobile-developement-platforms-and-architecture-43l1
react, development, mobile, android
## **Introduction** There are approximately 6.5 billion smartphone users. Various mobile application development platforms and architectures facilitate the design, development and testing of mobile applications. A platform is a combination of services, technologies and/or tools for end-to-end application development. ## **Mobile Application Development Platforms and Architectures** Here is a non-exhaustive list of mobile development platforms: 1. Android (using Android Studio) 2. Ionic 3. React Native 4. Xamarin 5. Flutter 6. Cordova (Apache) The [HNG Internship uses React Native](https://hng.tech/internship). I am exploring React Native to understand it. I believe its component-based architecture will reduce the barrier to faster, continuous learning and development. I think it is suitable for beginners and experts alike. [You can hire interns proficient in mobile development at HNG](https://hng.tech/hire) Thank you for reading.
isah_katunadam_00cd9ef1f
1,906,107
Generating a result with a context
In this video, 1.4 from the llm-zoomcamp, we start by reviewing what happens when we ask the LLM a...
0
2024-06-29T22:21:21
https://dev.to/cmcrawford2/generating-a-result-with-a-context-2cc8
llm, rag
In this video, 1.4 from the [llm-zoomcamp](https://github.com/datatalksclub/llm-zoomcamp), we start by reviewing what happens when we ask the LLM a question without context. We get a generic answer that isn't helpful.

```
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model='gpt-4o',
    messages=[{"role": "user", "content": q}]
)

response.choices[0].message.content

"Whether you can still enroll in a course that has already started typically depends on the policies of the institution offering the course. Here are a few steps you can take:\n\n1. **Check the Course Enrollment Deadline:** Look for any specific deadlines mentioned on the institution's website or contact the admissions office to see if late enrollment is allowed.\n\n2. **Contact the Instructor:** Reach out to the course instructor directly. They might allow late entries if you're able to catch up on missed material.\n\n3. **Administrative Approval:** Some institutions require approval from the department or academic advisor for late enrollment.\n\n4. **Online Courses:** If it's an online course, there may be more flexibility with start dates, so check if you can still join and catch up at your own pace.\n\n5. **Catch-Up Plan:** Be prepared to ask about what materials you've missed and how you can make up for lost time. Showing a willingness to catch up might increase your chances of being allowed to enroll.\n\nEach institution has its own policies, so it's best to inquire directly with the relevant parties at your school."
```

I created a prompt template. The prompt doesn't have to be exactly as written. Creating a prompt is sort of an art. Later, we'll learn how to refine the method using metrics to determine how good the prompt is. But for now this is what I used.

```
prompt_template = """
You're a course teaching assistant. Answer the QUESTION based on the CONTEXT from the FAQ database.
Use only the facts from the CONTEXT when answering the question.
If the CONTEXT doesn't contain the answer, output NONE

QUESTION: {question}

CONTEXT:
{context}
""".strip()
```

Now I put what I got from the search engine into the context.

```
context = ""

for doc in results:
    context = context + f"section: {doc['section']}\nquestion: {doc['question']}\nanswer: {doc['text']}\n\n"
```

Finally, I put the context and the question into the prompt_template, and ask ChatGPT the question.

```
prompt = prompt_template.format(question=q, context=context).strip()

response = client.chat.completions.create(
    model='gpt-4o',
    messages=[{"role": "user", "content": prompt}]
)

response.choices[0].message.content

"Yes, even if you don't register, you're still eligible to submit the homeworks. Be aware, however, that there will be deadlines for turning in the final projects. So don't leave everything for the last minute."
```

Now the answer is relevant to the course. This is Retrieval-Augmented Generation, or RAG. Previous post: [Setting up the database and search for RAG](https://dev.to/cmcrawford2/setting-up-the-database-and-search-for-rag-45io) Next post: [Swapping in elasticsearch to the proto-OLIVER](https://dev.to/cmcrawford2/swapping-in-elasticsearch-to-the-proto-oliver-2ml1)
cmcrawford2
1,906,083
Basic Data Analysis on the Iris Flower Dataset (HNG 11)
This task was part of my data analysis internship with HNG11. It is a requirement for all interns in...
0
2024-06-29T22:19:02
https://dev.to/tquillz/basic-data-analysis-on-the-iris-flower-dataset-hng-11-404f
datascience, intern, analyst
This task was part of my data analysis internship with HNG11. It is a requirement for all interns in stage zero to proceed to the next stage. The task was relatively simple. I only had to review a dataset from a list of given options. The objectives are to identify initial insights from the dataset at first glance and to discover patterns, trends, or anomalies. I chose the Dataset on Iris Flowers and performed Basic Exploratory Data Analysis using Python and its libraries in a notebook file. At an initial glance, the file containing the dataset (.data) has 150 rows of 5 values each (5 columns), with each value on a row separated by a comma (comma-delimited). The first four values are numerical variables, while the last one is a categorical variable which could immediately be identified as the label of the dataset. However, there was no description in the original file for any of the variables. Accompanying the data file was another text file giving a clearer description of the variables represented in the data file. With this information, I imported the data into the notebook and read it into a DataFrame object using the pandas library, assigning appropriate names for the columns of the dataset. In order, the columns are ‘sepal length (cm),’ ‘sepal width (cm),’ ‘petal length (cm),’ ‘petal width (cm),’ and ‘class.’ Using appropriate methods in pandas, I discovered the mean of each of the numerical variables ‘sepal length (cm),’ ‘sepal width (cm),’ ‘petal length (cm),’ ‘petal width (cm)’ to be 5.84, 3.05, 3.76 and 1.20 respectively (to 2 d.p.). Also, I observed that the categorical variable ‘class’ had only three unique values for three kinds of Iris flowers: ‘Iris-setosa,’ ‘Iris-virginica’ and ‘Iris-Versicolour.’ All of this information was also pointed out in the text description file. 
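A minimal sketch of the steps described above with pandas. The `iris.data` path is the usual name of the raw file; the three-row inline sample is only a stand-in for the full 150-row dataset so the snippet is self-contained:

```python
import pandas as pd

# Column names assigned when reading the comma-delimited, header-less iris.data file
columns = ["sepal length (cm)", "sepal width (cm)",
           "petal length (cm)", "petal width (cm)", "class"]
# df = pd.read_csv("iris.data", header=None, names=columns)

# Tiny inline sample standing in for the full 150-row dataset
df = pd.DataFrame([
    [5.1, 3.5, 1.4, 0.2, "Iris-setosa"],
    [7.0, 3.2, 4.7, 1.4, "Iris-versicolour"],
    [6.3, 3.3, 6.0, 2.5, "Iris-virginica"],
], columns=columns)

print(df[columns[:4]].mean().round(2))  # mean of each numerical variable
print(df["class"].value_counts())       # rows per class
```

On the full dataset, `mean()` yields the 5.84, 3.05, 3.76 and 1.20 values quoted above, and `value_counts()` shows how many rows each class has.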
Another observation was that each of the three values for the categorical variable was represented the same number of times in the dataset, which means there were 50 Iris-Setosa flowers, 50 Iris-Virginica flowers and 50 Iris-Versicolour flowers. With the aid of plotting and graphing tools, it was clear that a linear relationship exists between the petal width and the petal length, as well as between the petal length and sepal length of the flowers. The Iris-Virginica flowers had the longest petals and sepals, with the Iris-setosa flowers having the shortest ones. This can be seen in the graph below. There is a clear correlation between the measurements of the sepals and petals of the flowers and their respective class. Meanwhile, the graph would suggest that petal length and width have a higher influence on determining the flower class than the sepal width. This could be considered in making inferences from a new dataset without the label. [HNG Internship](https://hng.tech/internship) [HNG Premium](https://hng.tech/premium) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/98o4dyqls6viwdyvs6mi.jpg) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/30aq4o6fay5m6qsd1dr9.jpg)
tquillz
1,906,106
The Future of Frontend Development: A Look at Emerging Trends and Technologies
Introduction The year was 2003, I was a young kid only 13 of age. I've had my first...
0
2024-06-29T22:17:31
https://dayvster.com/blog/the-future-of-frontend-development/
webdev, programming, frontend, javascript
## Introduction The year was 2003; I was a young kid, only 13 years of age. I'd had my first computer for about 7 years at that point, but this was the year when my family switched from dial-up to DSL broadband internet. Which meant my allotted 1-2h of internet time per day just turned into non-stop internet access. This was the first time I was able to explore the internet without any restrictions. I was able to watch videos, play games, chat with friends, and most importantly I was able to access forums, blogs and websites that would teach me how to code. My first language was C; in my youthful naivete I thought it would be super cool to learn how to become a game dev (like most kids). On one of the IRC channels I frequented I was told that C++ was what all the cool kids were using. So I switched over to that. Through C++ I got into network programming and through that I slowly but steadily got into web development. At this point I was an angsty 16-year-old teenager, and my language of choice was PHP. I built my first few websites for family and friends and I was hooked on web development. I was able to build something that was accessible to everyone, something that was easy to share and something that was easy to maintain. But through the years the web has evolved the quickest and gone through some of the most drastic changes. We went from simple XHTML + PHP dynamic websites to interactive web applications built with jQuery and AJAX, to full-blown SPAs (single-page applications) built with React, Angular, and Vue, to PWAs and WASM. It has been a wild ride and I'm excited to see where the future of frontend development will take us. In this article I will be discussing some of the emerging trends and technologies that I believe will shape the future of frontend development. **Note** These are not predictions; I don't have a crystal ball in my possession, my smoke machine is broken and I'm all out of tarot cards. 
These are just my thoughts and opinions on things that, in the current year of 2024, are gaining traction and can potentially shape the future of frontend development. A lot of these are old news or old ideas that are being revisited and reimagined, and some have been around for a while but are just now gaining traction. ## Emerging Trends and Technologies ### The Rise of the Headless CMS Back in the ol' days of the web we simply defaulted to using WordPress for all our CMS needs. It was easy, user-friendly and could be molded into anything you wanted it to be with some moderate effort, and best of all it did both the "frontend and backend" for you. That last part was ironically also its biggest weakness. It limited your choices, and because it had to accommodate more and more user needs it became bloated and slow. #### What is a Headless CMS? A headless CMS is essentially a content management system stripped to its essentials: it only serves as an interface between your content and your frontend. It doesn't care about how you display your content; it only cares about how you store and manage your content. This means that you are free to use any frontend technology you want to display your content; heck, you can even go full static site generator, or have your mobile app or desktop app consume the same content from the same CMS as your website. It follows the UNIX principle of do one thing and do it well, in this case it's managing your content. I believe that the rise of the headless CMS cannot and will not be stopped and will only continue to grow in popularity. The benefits far outweigh the negatives. In fact some headless CMSs out there let you build your own bespoke CMS dashboard with minimal hassle, independent of your data source, for example: Directus, Strapi, and Sanity. What's more, plenty of these headless CMSs are offered as a service that your end clients can simply access and manage via their browser. 
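To make the decoupling concrete, here's a minimal sketch of the idea; the endpoint and field names are hypothetical, not any particular CMS product's API. The CMS hands back plain content data, and whatever client consumes it decides how to render it:

```python
import json

def render_article(entry: dict) -> str:
    # The CMS only stores and serves content; presentation is entirely up to us.
    return f"<h1>{entry['title']}</h1>\n<p>{entry['body']}</p>"

# In a real app this JSON would come from something like
# GET https://cms.example.com/api/articles/42 (a made-up endpoint).
cms_response = json.loads('{"title": "Hello", "body": "Content lives in the CMS."}')

print(render_article(cms_response))
```

A native mobile app or a static site generator could consume the exact same payload and render it completely differently.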
This means you can build a website for a client and they can manage their content without having to worry about the technical details of how their website works or how to deploy it. ### The Continued Dominance of JavaScript Frameworks Every second a new JavaScript framework is born, every minute a new JavaScript framework dies. There's no denying that JavaScript frameworks have had a huge impact on frontend development. They have made it incredibly easy to build complex web applications with very minimal effort, for a small trade-off in maintainability, occasional performance issues and production-breaking bugs due to some obscure JavaScript quirk. I don't think JavaScript frameworks are going anywhere anytime soon. In fact I believe that they will continue to dominate the frontend development landscape for the foreseeable future. They have become the de facto standard for building web applications and they have a huge ecosystem of libraries and tools that make building web applications a breeze. I believe they will evolve further and adopt more features from other languages and frameworks. For example, React has adopted server components, which allow you to essentially run NodeJS-style backend code in your frontend. This means you can do things like database queries, file system operations, and other backend operations in your frontend code that will then be executed on the server, with the result sent to the client. Is that good, bad or ugly? Yes, all of the above. But it's a trend that I believe will continue to grow and evolve. ### The Integration of WebAssembly In recent years WebAssembly has gained a lot of traction and has become a popular choice for building high-performance web applications. #### What is WebAssembly? WebAssembly is a binary instruction format for a stack-based virtual machine. It is designed as a portable target for compilation of high-level languages like C/C++/Rust, enabling deployment on the web for client and server applications. 
Basically it allows you to run code written in other languages like C, C++, and Rust in the browser. This means you can build high-performance web applications that can run at near-native speeds in the browser, which is great for performance-intensive applications that would traditionally be desktop applications. Figma is a great example of a web application that uses WebAssembly to run incredibly fast and smooth in the browser. I believe that downloading and installing desktop applications will become more and more of a niche thing for most common users of the web. WebAssembly will allow developers to build high-performance web applications that can run on any device with a modern browser. This removes a pretty big chunk of friction, and humans will always take the path of least resistance. That friction is gone: why would you download and install an application when you can just open a browser and use it right away? Now I am aware of all the security implications of that, and also that this means that we won't ever truly own software anymore, which I personally don't think is the best thing for the future of software, but that's a topic for another article. ### The Growing Importance of Web Accessibility Web accessibility has been a hot topic for a while now and it's only going to get hotter (rawr). The web is for everyone and it should be accessible to everyone. This means that we as developers have a responsibility to make sure that our websites are accessible to everyone. If I roll the dice on this sentence I can assume that you, the reader, are a healthy individual, but many users out there are not, and they should be able to have the same access to the wonderful place that is the web as you or I do. I believe that web accessibility will become a standard requirement for all websites in the future. Now whether that is because of legislation or because of a shift in mindset is to be seen. But I believe that it will happen. 
Or at least I hope it will happen. ### The Evolution of Progressive Web Apps (PWAs) Progressive Web Apps, or PWAs for short, have been around for a while now but I feel like they've only just started to gain some traction; it's been slow for sure. #### What is a Progressive Web App? A progressive web app is basically just a web app with a manifest file that makes it installable on your device, typically mobile but also desktop. It also has a service worker that allows it to work offline and load faster. It has an interesting dynamic with WASM as well: you could build your PWA with WASM and have it run in the browser but also be easily installable on any device. This means you could build a decently performing mobile app with just web technologies and have it be installable on any device. I'm personally not the biggest fan of PWAs but I can see the appeal for some use cases. I believe that PWAs will become more and more popular in the future as they offer a way to build cross-platform applications with web technologies without relying on Electron or having to rewrite your application for every platform or device. They can also bypass phone app stores and desktop stores, which has caused some controversy with Apple in the past. ### The Ascendancy of Artificial Intelligence (AI) and Machine Learning (ML) AI and ML have been around for a while now. AI has kinda exploded in recent years with ChatGPT, Copilot, and other AI-powered tools; in fact it seems like 2024 is the year where people try to put AI into everything. Only a matter of time till they try to put themselves in AI, but that's a different story. I believe that AI and ML will become really useful tools for a lot of people and for a lot of technologies and use cases. Personally I'm not a big fan of using AI for creative work; I find that humans are a lot better at actually creating creative things than AI is. 
In fact AI cannot really create as of yet; it can just kinda assume and predict based on data it has been fed. So there's a point of diminishing returns when it comes to AI and creative work. But plenty of companies will use AI in creative ways, such as meeting and call assistants that will be able to take notes, do sentiment analysis and recall the entire meeting in great detail for you. To put it simply, imagine if you had an AI assistant that you could ask about that one point your manager was making during that big meeting he wanted everyone to attend but you were too busy browsing social media on your phone. Asking your manager to repeat himself long after the meeting has ended would be embarrassing or income-changing depending on your manager. But if you had an AI assistant you could simply ask that AI all your questions. ### The Rise of Streaming and Real-Time Communication At the moment the web operates on a request-response model. You make a request to a server, wait for a bit while the server processes your request, does its work to prepare a response, and then sends it back to you. This is fine but fairly slow. I believe there will be a point at which this type of communication between server and client is no longer the most efficient way to communicate. Something that will make the cost of this a lot lower. Basically, socket communication is superior to request-response communication in almost every single way; its main drawbacks are that it's more expensive and more difficult to maintain and debug. But basically I think even simple websites will simply stream their content over to your browser in the future. The only interesting part with this will be how stuff like SEO will look when you don't really have a static page to index anymore. 
I personally think that SEO will become less and less important in the future as the web becomes more and more of a real-time experience, plus the fact that many of the use cases of search engines are already covered by AI assistants at the moment. ### The Power of Web Workers and Service Workers Misko Hevery has a tendency to be ahead of his time but not popular. He has recently created Qwik, which is a new-ish JavaScript framework that relies heavily on web workers and service workers to deliver a fast and smooth experience to the user. Now you may know Misko as the creator of Angular, which was the first hugely popular and widely used JavaScript frontend framework. Web workers and service workers are a way to run JavaScript code in the background of your application. This means you can do things like offload heavy computations to a web worker and have it run in the background while your main thread is free to do other things. This can greatly improve the performance of your web application and make it feel more responsive and smooth. I think Misko is on the right track with this: workers will most likely be a very important building block of modern JavaScript frameworks in the future. However I believe that it won't be Qwik that popularizes this but rather most likely React or Svelte or some other completely new framework. ### The Rise of Low-Code/No-Code Development Let me preface this by saying that I hate that this is becoming so popular; in fact the only thing I hate more is the fact that there are thought leaders and guides out there that recommend building your solo startup by heavily utilising no-code tools to bridge something quick and dirty together as a proof of concept. No-Code and Low-Code tools are basically tools that let you create with minimal to no coding knowledge or experience required. They are great for prototyping and building simple applications. 
You can create a website, a mobile app, or a web application without writing a single line of code. I believe that due to the entrepreneurial spirit of the current generation and the rise of the POC and MVP culture, no-code and low-code tools will become increasingly popular in the future. They offer folks the ability to scaffold their idea out into a working prototype to showcase to investors or even to launch to the general public. I don't like it, but that's what I think will happen. ### The Continued Growth of Static Site Generators What is old will be new again. Static sites once ruled the web, then dynamic sites took over, now static sites are making a comeback. We're kinda forming two camps in web development recently. On one side you've got the "minimal effort, wire a lot of different tools together to get a working web app or website" camp, and on the other side you have the "let's keep it simple and just build as much stuff as simply as possible" gang **cough HTMX cough**. Static site generators kinda bridge that gap in a way, because with tools like Astro and Svelte you can build a static site that is also a web application, with all the complexity and interplay of different tools and libraries that you want. I believe that static site generators will continue to grow in popularity in the future as they offer a way to build fast and performant websites that can also possibly scale into more complex and feature-rich web applications within the same codebase. ## Conclusion The web still remains one of the quickest-evolving and most exciting platforms to develop for. In just 20 years we went from simple static websites filled with GIFs and auto-playing music to fully fledged applications that run in your browser and let you do work that was previously only possible by going through a very long desktop application installation process. It hasn't been that long but the web has come a long way and I'm excited to see where it will go in the future. 
This blog post outlined a couple of my thoughts on the trends and technologies that I see emerging or growing in popularity currently. Whether or not I'm right remains to be seen, but I'm excited to see where the future of frontend development will take us.
dayvster
1,906,105
DSA Essentials Course Review
In...
0
2024-06-29T22:14:59
https://dev.to/nelson_bermeo/dsa-essentials-course-review-30hc
In Progress... https://www.udemy.com/course/cpp-data-structures-algorithms-prateek-narang/?couponCode=LETSLEARNNOWPP
nelson_bermeo
1,906,104
ML Practical Workout Course Review
In...
0
2024-06-29T22:13:25
https://dev.to/nelson_bermeo/ml-practical-workout-course-review-457h
In Progress... https://www.udemy.com/course/deep-learning-machine-learning-practical/?couponCode=LETSLEARNNOWPP
nelson_bermeo
1,906,103
Analyzing Frontend Technologies: React vs. Vue.js
Web developers use front-end technologies to create dynamic and interactive user interfaces. React...
0
2024-06-29T22:13:11
https://dev.to/ogedi001/analyzing-frontend-technologies-react-vs-vuejs-1n6
Web developers use front-end technologies to create dynamic and interactive user interfaces. React and Vue.js are two of the most popular choices in this area. While React has been around for years, Vue.js is a newer option. This post will compare the differences, advantages, and features of these two frameworks. React: The Experienced Framework Released by Facebook in 2013, React has become a major player in the front-end development world. This JavaScript library is specifically designed for creating single-page applications and user interfaces. React helps developers build reusable UI components, speeding up development and making code easier to manage. Key Features of React 1. Component-Based Architecture: React's structure promotes modularity and reusability, allowing developers to build complex UIs from simple, self-contained components. 2. Virtual DOM: React uses a virtual DOM to boost performance by updating only the parts of the DOM that have changed. This makes React apps fast and responsive. 3. One-Way Data Binding: React's one-way data flow ensures predictable data changes, making testing and debugging easier. 4. Rich Ecosystem: React has a vast ecosystem with many libraries and tools that enhance its functionality. This flexibility makes React suitable for various types of projects. Vue.js: The Progressive Framework Created by Evan You, Vue.js is a progressive JavaScript framework for building user interfaces. Since its release in 2014, Vue has gained popularity due to its simplicity and flexibility. It's designed for gradual adoption, making it ideal for both small and large projects. Key Features of Vue.js 1. Reactive Data Binding: Vue's two-way data binding keeps data and the UI synchronized, which is crucial for forms and user input scenarios. 2. Component-Based Architecture: Like React, Vue.js promotes the use of reusable components, making it easier to manage complex applications. 
3. Directives: Vue enhances HTML with directives such as v-bind and v-model, making the template syntax simpler and easier to understand. 4. Vue CLI: The Vue Command Line Interface (CLI) is a robust toolkit for quickly setting up and developing projects, with options for installing plugins and configuring the project. Comparing React and Vue.js 1. Learning Curve: Vue.js is generally easier to learn, especially for beginners, with its intuitive syntax and comprehensive documentation. React, though powerful, has a steeper learning curve due to concepts like JSX, state management, and hooks. 2. Performance: Both React and Vue.js offer excellent performance. React's virtual DOM and Vue's reactivity system ensure efficient and fast UI updates. Performance differences are often negligible and depend on specific use cases and implementation. 3. Community and Ecosystem: React has a larger community and a more extensive ecosystem, offering a wealth of libraries and tools. This makes it easier to find solutions and integrate third-party services. Vue.js, while having a smaller community, is growing rapidly and provides a well-maintained ecosystem with plenty of plugins and extensions. 4. Flexibility and Complexity: React offers high flexibility, giving developers the freedom to structure their projects as they see fit, which can lead to inconsistency across projects. Vue.js, on the other hand, provides more guidance on how things should be done, leading to more consistent and maintainable codebases. Conclusion Choosing between React and Vue.js depends on your specific needs and preferences. React is a robust, battle-tested library with a vast ecosystem, ideal for large-scale applications. Vue.js, with its simplicity and ease of use, is perfect for developers seeking a more straightforward and approachable framework. As I embark on my journey with the HNG Internship, I'm excited to delve deeper into React and harness its capabilities to build impressive applications. 
This experience will undoubtedly enhance my skillset, preparing me for the diverse challenges of front-end development. For those interested in learning more about the HNG Internship and the opportunities it offers, check out the [HNG Internship page](https://hng.tech/internship). If you're a company looking to hire top talent, the [HNG Hire platform]( https://hng.tech/hire) is the perfect place to start.
ogedi001
1,906,043
a subjective evaluation of a few open LLMs
Hey there ! Amazing what you can do with small language models like llama3! I've been using local...
0
2024-06-29T22:11:45
https://dev.to/yactouat/a-subjective-evaluation-of-a-few-open-llms-ke3
slm, llm, opensource, ai
Hey there! Amazing what you can do with small language models like `llama3`! I've been using local models a lot; the best consistent devX I've personally had so far was with `llama3` or with `phi3`. Both never cease to excel at following precise instructions, formatting data, labeling sentiment, etc. All simple tasks that can be easily implemented in any app.

The best part is that these models are free and can be totally private (they can run on-premise); so if you need to transition to more efficient solutions than throwing money at OpenAI's, let me submit to you this little test I did lately =>

```python
import pytest

from llm_lib import classify_sentiment


@pytest.mark.parametrize("headline_input,expected", [
    (
        {'headline_text': 'Asure Partners with Key Benefit Administrators '
                          'to Offer Proactive Health Management Plan (PHMP) to Clients'},
        {'possible_sentiments': ['bullish', 'neutral', 'slightly bullish', 'very bullish']}
    ),
    (
        {'headline_text': 'Everbridge Cancels Fourth Quarter '
                          'and Full Year 2023 Financial Results Conference Call'},
        {'possible_sentiments': ['bearish', 'neutral', 'slightly bearish', 'uncertain', 'very bearish']}
    ),
    (
        {'headline_text': "This Analyst With 87% Accuracy Rate Sees Around 12% Upside In Masco - "
                          "Here Are 5 Stock Picks For Last Week From Wall Street's Most Accurate Analysts "
                          "- Masco (NYSE:MAS)"},
        {'possible_sentiments': ['bullish', 'slightly bullish', 'very bullish']}
    ),
    (
        {'headline_text': 'Tesla leads 11% annual drop in EV prices as demand slowdown continues'},
        {'possible_sentiments': ['bearish', 'slightly bearish', 'very bearish']}
    ),
    (
        {'headline_text': "Elon Musk Dispatches Tesla's 'Fireman' to China Amid Slowing Sales"},
        {'possible_sentiments': ['bearish', 'slightly bearish']}
    ),
    (
        {'headline_text': "OpenAI co-founder Ilya Sutskever says he will leave the startup"},
        {'possible_sentiments': ['bearish', 'neutral', 'slightly bearish', 'uncertain']}
    ),
    (
        {'headline_text': "Hedge funds cut stakes in Magnificent Seven to invest in broader AI boom"},
        # the "broader AI boom" part can be seen as bullish
        {'possible_sentiments': ['bearish', 'bullish', 'neutral', 'slightly bearish', 'slightly bullish']}
    )
])
def test_classify_sentiment(headline_input, expected):
    assert classify_sentiment(**headline_input) in expected['possible_sentiments']
```

... the goal here is to get one of the expected sentiments for headlines about the stock market. TL;DR: open LLMs nailed it! I've become so used to their performance that I can even put these kinds of tests in a CI without having to worry too much that they would randomly fail 😏

Here is the prompt and its wrapping function (I'm using LangChain for this) =>

```python
def classify_sentiment(headline_text: str, model_name: str = current_model):
    template = """You are a stocks market professional. Your job is to label a headline with a sentiment IN ENGLISH.

Headlines that mention upside should be considered bullish. Any headline that mentions a sales decline, a drop in stock prices, a factory glut, an economic slowdown, increased selling pressure, or other negative economic indicators should be considered bearish instead of neutral. Only label a headline as neutral if it does not have any clear positive or negative sentiment or business implication.

You'll prefix a bullish or bearish sentiment with "very" if the headline is particularly positive or negative in its implications. On the other hand, you'll prefix a bullish or bearish sentiment with "slightly" if the headline is only slightly positive or negative in its implications.

Here is the headline text you need to label, delimited by dashes:
--------------------------------------------------
{headline_text}
--------------------------------------------------

Here is the list of the possible sentiments, delimited by commas:
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,
very bullish
bullish
slightly bullish
neutral
slightly bearish
bearish
very bearish
uncertain
volatile
,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,

You are to output ONLY ONE SENTIMENT WITH THE EXACT WORDING, from the provided list of sentiments. DO NOT add additional content, punctuation, explanation, characters, or any formatting in your output."""
    sentiment_prompt = PromptTemplate.from_template(template)
    chain = sentiment_prompt | get_model(model_name)
    output = chain.invoke({"headline_text": headline_text})
    return output.content.strip().lower()
```

I've spent a little time downloading a handful of models of various sizes from `ollama`, with a focus on the smallest ones since I want to perform recurrent scraping and can't afford to wait too long before inference is done. Here are the results of running the tests on an Intel® Xeon® Gold 5412U server with 256 GB DDR5 ECC and no GPU.

```
| Model              | Status | Time (s) |
|--------------------|--------|----------|
| llama3             | OK     | 17.68    |
| phi3               | OK     | 17.84    |
| aya                | OK     | 21.68    |
| mistral            | OK     | 21.76    |
| mistral-openorca   | OK     | 22.20    |
| gemma2             | OK     | 23.14    |
| phi3:medium-128k   | OK     | 45.87    |
| phi3:14b           | OK     | 47.36    |
| aya:35b            | OK     | 77.99    |
| llama3:70b         | OK     | 144.62   |
| qwen2:72b          | OK     | 148.25   |
| command-r-plus     | OK     | 239.20   |
| qwen2              | OKKO   | 16.11    |
```

I've set `qwen2` to `OKKO` as it systematically considers that `Hedge funds cut stakes in Magnificent Seven to invest in broader AI boom` is `very bullish`; I didn't discard the model entirely since this is open to interpretation... However, and without surprise, Llama3 leads the pack, followed closely by the small phi3.
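One thing worth hedging against in a CI setup like this: even instruction-following models occasionally wrap the label in stray casing, punctuation, or a short sentence. A small normalization guard between `output.content` and the assertion can absorb that noise. This is a hypothetical helper of my own (`normalize_sentiment` is not part of the post's `llm_lib`):

```python
# Allowed labels, mirroring the list in the prompt above.
ALLOWED_SENTIMENTS = {
    "very bullish", "bullish", "slightly bullish", "neutral",
    "slightly bearish", "bearish", "very bearish", "uncertain", "volatile",
}


def normalize_sentiment(raw_output: str) -> str:
    """Coerce a raw model completion onto one of the allowed labels."""
    cleaned = raw_output.strip().strip('."\'`').lower()
    if cleaned in ALLOWED_SENTIMENTS:
        return cleaned
    # Longest-match fallback, so "bullish" doesn't shadow "very bullish"
    # when the model returns a full sentence around the label.
    matches = [label for label in ALLOWED_SENTIMENTS if label in cleaned]
    return max(matches, key=len) if matches else "uncertain"
```

Returning `normalize_sentiment(output.content)` at the end of `classify_sentiment` would keep the parametrized tests stable even when a model gets a little chatty.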
I've also found out, while building up this test, that Cohere's aya is a really nice pick for data extraction! Lastly, I've tried a few larger models; I intend to use those for my workloads when more _intelligence_ is required.

The future is bright for us developers to build all kinds of agentic workflows with such brilliant models available to us. It's a great time to live in ✨ so keep building! ✨

---

EDIT: I like testing prompts, so let's start a package => https://pypi.org/project/yuseful-prompts/ Feel free to contribute!
yactouat
1,906,097
How to configure API Versioning in .NET 8
Implementing API Versioning in .NET 8 with Evolving Models What if you want to create a...
0
2024-06-29T22:11:44
https://dev.to/iamrule/comprehensive-guide-to-api-versioning-in-net-8-1i9j
api, dotnet, versioning
### Implementing API Versioning in .NET 8 with Evolving Models

What if you want to create a robust API and need to manage different versions to ensure backward compatibility, especially when models evolve over time? The default .NET application template doesn’t provide this out of the box, so here's a guide to make this process simpler.

### Requirements ###

- .NET 8 Web API project
- The following packages:

```xml
<ItemGroup>
  <PackageReference Include="Asp.Versioning.Http" Version="8.0.0" />
  <PackageReference Include="Asp.Versioning.Mvc.ApiExplorer" Version="8.0.0" />
</ItemGroup>
```

### Guide ###

Follow these steps to implement versioning in your .NET 8 Web API project, including handling evolving models.

1. **Add the references above to the project**

2. **Configure Services in `Program.cs`**:

```csharp
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddApiVersioning(options =>
{
    options.DefaultApiVersion = new ApiVersion(1, 0);
    options.AssumeDefaultVersionWhenUnspecified = true;
    options.ReportApiVersions = true;
    options.ApiVersionReader = ApiVersionReader.Combine(
        new UrlSegmentApiVersionReader(),
        new HeaderApiVersionReader("X-Api-Version")
    );
}).AddApiExplorer(options =>
{
    options.GroupNameFormat = "'v'VVV";
    options.SubstituteApiVersionInUrl = true;
});

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API - V1", Version = "v1.0" });
    c.SwaggerDoc("v2", new OpenApiInfo { Title = "My API - V2", Version = "v2.0" });
});

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

app.UseHttpsRedirection();
app.UseAuthorization();
app.MapControllers();
app.Run();
```
3. **Implement Versioned Controllers**: Create versioned controllers by decorating them with the `ApiVersion` attribute:

```csharp
namespace MyApp.Controllers.v1
{
    [ApiVersion("1.0")]
    [Route("api/v{version:apiVersion}/[controller]")]
    [ApiController]
    public class WorkoutsController : ControllerBase
    {
        [MapToApiVersion("1.0")]
        [HttpGet("{id}")]
        public IActionResult GetV1(int id)
        {
            return Ok(new { Message = "This is version 1.0" });
        }
    }
}

namespace MyApp.Controllers.v2
{
    [ApiVersion("2.0")]
    [Route("api/v{version:apiVersion}/[controller]")]
    [ApiController]
    public class WorkoutsController : ControllerBase
    {
        [MapToApiVersion("2.0")]
        [HttpGet("{id}")]
        public IActionResult GetV2(int id)
        {
            return Ok(new { Message = "This is version 2.0", NewField = "New data" });
        }
    }
}
```

4. **Handling Evolving Models**: When models evolve, create separate model classes for each version to maintain backward compatibility.

**Version 1 Model:**

```csharp
public class WorkoutV1
{
    public int Id { get; set; }
    public string Name { get; set; }
}
```

**Version 2 Model with Additional Fields:**

```csharp
public class WorkoutV2
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Description { get; set; } // New field
}
```

Update the controller methods to use the appropriate models:

```csharp
namespace MyApp.Controllers.v1
{
    [ApiVersion("1.0")]
    [Route("api/v{version:apiVersion}/[controller]")]
    [ApiController]
    public class WorkoutsController : ControllerBase
    {
        [MapToApiVersion("1.0")]
        [HttpGet("{id}")]
        public IActionResult GetV1(int id)
        {
            var workout = new WorkoutV1 { Id = id, Name = "Workout V1" };
            return Ok(workout);
        }
    }
}

namespace MyApp.Controllers.v2
{
    [ApiVersion("2.0")]
    [Route("api/v{version:apiVersion}/[controller]")]
    [ApiController]
    public class WorkoutsController : ControllerBase
    {
        [MapToApiVersion("2.0")]
        [HttpGet("{id}")]
        public IActionResult GetV2(int id)
        {
            var workout = new WorkoutV2
            {
                Id = id,
                Name = "Workout V2",
                Description = "This is a description."
            };
            return Ok(workout);
        }
    }
}
```

5. **Configure Swagger Documentation**: Ensure each API version has its own Swagger documentation:

```csharp
builder.Services.AddSwaggerGen(c =>
{
    c.SwaggerDoc("v1", new OpenApiInfo { Title = "My API - V1", Version = "v1.0" });
    c.SwaggerDoc("v2", new OpenApiInfo { Title = "My API - V2", Version = "v2.0" });
});
```

6. **Run Your Application**: Build and run your application to see the versioned API in action:

```sh
dotnet run
```

7. **Access Different API Versions**: Use the URL to access different versions of your API:
   - Version 1.0: `https://localhost:5001/api/v1/workouts/{id}`
   - Version 2.0: `https://localhost:5001/api/v2/workouts/{id}`

### Deprecating API Versions ###

To deprecate an old API version, set the `Deprecated` property on the `ApiVersion` attribute:

```csharp
[ApiVersion("1.0", Deprecated = true)]
[Route("api/v{version:apiVersion}/[controller]")]
[ApiController]
public class DeprecatedController : ControllerBase
{
    [HttpGet("{id}")]
    public IActionResult GetV1(int id)
    {
        return Ok(new { Message = "This version is deprecated." });
    }
}
```

### Organizing Versioned Controllers and Models in Solution Explorer ###

To keep your project organized, especially as you add more versions, follow these tips:

1. **Create a Folder Structure**:
   - Create a main folder called `Controllers` and subfolders for each version, e.g., `v1`, `v2`.
   - Place each version of your controllers in the respective folder.

   Example structure:

   ```
   - Controllers
     - v1
       - WorkoutsController.cs
     - v2
       - WorkoutsController.cs
   - Models
     - v1
       - WorkoutV1.cs
     - v2
       - WorkoutV2.cs
   ```

2. **Naming Conventions**:
   - Use clear and consistent naming conventions to differentiate between versions.
   - Include the version number in the model and controller class names if needed for clarity, e.g., `WorkoutV1`, `WorkoutV2`.

3. **Updating Namespaces**:
   - Ensure the namespaces reflect the folder structure to avoid conflicts.
   - Example:

   ```csharp
   namespace MyApp.Models.v1
   {
       public class WorkoutV1
       {
           public int Id { get; set; }
           public string Name { get; set; }
       }
   }

   namespace MyApp.Models.v2
   {
       public class WorkoutV2
       {
           public int Id { get; set; }
           public string Name { get; set; }
           public string Description { get; set; } // New field
       }
   }
   ```

4. **Consistent Routing**:
   - Ensure your routing attributes are consistent and clear to indicate the version in the URL path.

### All Done! ###

<img width="100%" style="width:100%" src="https://media.giphy.com/media/VIjf1GqRSbf0OsNG0H/giphy.gif">

Now you have a versioned API that can evolve smoothly while maintaining backward compatibility. Don’t forget to document and communicate breaking or big changes!

Feel free to experiment and make it even more advanced!
iamrule
1,906,102
Deep Learning Specialization Review
In Progress...
0
2024-06-29T22:10:43
https://dev.to/nelson_bermeo/deep-learning-specialization-review-5afi
In Progress...
nelson_bermeo
1,906,008
Back to my vomit. To be better at code.
Python was one of the first languages I learned, but I disregarded it after I realized the syntax was...
0
2024-06-29T22:09:43
https://dev.to/beautiful_orange/back-to-my-vomit-to-be-better-at-code-357a
Python was one of the first languages I learned, but I disregarded it after I realized the syntax was a bit weird for me compared to the C-based languages I am more comfortable with. But Python is a popular language, and one that is not close to declining. So whether I like it or not, I chose to return to it.

That led me to this, a challenge for me at the time: I had to use Python with the React JS application I was building. Of course, the only reason I was limiting myself to using Python was to push myself; many ecosystems had richer libraries I could have used to solve my problem, like .NET.

The application I was trying to build was a file converter, and the first feature I chose to implement was a PDF-to-Word converter. Building my own converter from scratch would require good knowledge of binaries, parsing, etc., so I chose to use a Python module called pdf2docx.

```
from pdf2docx import Converter
import os
import sys

pdfFilePath = sys.argv[1]

if os.path.exists(pdfFilePath):
    pathLength = len(pdfFilePath)
    newPath = './converted/new.docx'
    cv = Converter(pdfFilePath)
    cv.convert(newPath)
    cv.close()
else:
    print("\n----NO SUCH DIRECTORY!----")
```

## Difficulties

Here are some of the issues I faced down the line:

1. How do I use Python with the React JS app?
2. What’s the best way to send the file from the front end to the back end?

### Solving item 1

I explored various solutions, like using Python on the back end with frameworks such as Flask or Django, but that would be overkill for one simple piece of functionality. This is not a novel problem, and many of you reading this would never have considered it one, but for me, a baby programmer, it was a bit different.

The first thing to consider was a server. I am not used to back-end programming with Python, nor did I see any reason to use Python's HTTP server in this project, so I decided to use Node with Express as it was a more straightforward implementation for me.
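One small improvement worth making to the script above: it hardcodes `./converted/new.docx`, so every conversion overwrites the previous output. A sketch of a path helper using only the standard library could fix that (the name `build_output_path` is my own, not part of pdf2docx):

```python
from pathlib import Path


def build_output_path(pdf_path: str, out_dir: str = "./converted") -> Path:
    """Derive '<out_dir>/<input file stem>.docx' from the input PDF path,
    creating the output directory if it doesn't exist yet."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    return out / (Path(pdf_path).stem + ".docx")
```

The conversion call then becomes `cv.convert(str(build_output_path(pdfFilePath)))`, so `report.pdf` comes out as `converted/report.docx` instead of clobbering `new.docx`.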
So now the project had two servers: the Vite server for front-end rendering and Express for working at the back. I decided to use `child_process`, a Node module for creating sub-processes:

`const {spawn} = require("child_process");`

In `child_process` there is a function, `spawn`, which is used to run scripts the way you would in a terminal:

``const pythonProcess = spawn("py", ["./c.py", `${filePath}`]);``

### Solving item 2

The second issue was accessing the file on the back end. The solution was quite simple, but I only embarrassingly figured it out after crying and threatening my computer. Originally, I had used the regular file picker input in HTML to allow the user to select a PDF file to convert, but I changed it to a drag-and-drop system. When the user drops their file in the “drop region”, it needs to be passed as an argument to the Python script, which does the conversion. A fetch method is called to send the file path to the Node server. Using this file path and `child_process`, a Python script is executed and the PDF file is converted to a docx file. The new file is automatically saved in the root folder called “converted”.

## Finally

I am taking on more small projects like this to build my familiarity with various programming tools. The goal is to be able to confidently and efficiently solve simple to moderately difficult programming tasks: to be able to hop on the computer and automate renaming my files, or create an app that monitors my sitting posture through my camera, and to do it on a whim. That is why I decided to take on an internship program called the HNG internship (https://hng.tech/internship), which encourages language fluency and familiarity with various organizational skills and methods through individual and group projects. Although we are just at the first stage, stage 0, I can already see the detail and deliberate decisions in the tasks set.
If you would like to Join I advise you to go with the paid option at https://hng.tech/premium to get a certificate upon completing the program and access the community for up to one year after the program ends. --- Cover photo by mark glancy: https://www.pexels.com/photo/boston-terrier-wearing-unicorn-pet-costume-1564506/
beautiful_orange
1,906,094
System Design Series - Scalability
System Design Series - Scalability Introduction In this section, we are going...
0
2024-06-29T22:07:52
https://dev.to/realsteveig/system-design-series-scalability-1ln8
systemdesign, softwareengineering, webdev, node
# System Design Series - Scalability

## Introduction

In this section, we are going to discuss scalability, a critical aspect of system design that ensures your application can handle increased load gracefully. Understanding scalability is essential for building robust, high-performance systems that can grow with user demand and business needs.

## What is Scalability?

Scalability is the ability of a system to handle increased workload by adding resources. It ensures that as demand grows, the system can continue to function efficiently. Scalability can be thought of in three dimensions:

1. **Vertical Scalability (Scaling Up)**: Adding more power (CPU, RAM, etc.) to an existing machine.
2. **Horizontal Scalability (Scaling Out)**: Adding more machines to a system.
3. **Diagonal Scalability**: Combining both vertical and horizontal scaling.

## Types of Scaling

### Vertical Scaling (Scaling Up)

- **Description**: Increasing the capacity of a single machine by adding more resources (CPU, RAM, storage).
- **Pros**:
  - Easier to implement since it involves upgrading existing machines.
  - No need to modify the application architecture.
- **Cons**:
  - Limited by the maximum capacity of a single machine.
  - Single point of failure: if the machine goes down, the application becomes unavailable.
- **Use Cases**: Initial stages of a project, applications with low to moderate growth.

### Horizontal Scaling (Scaling Out)

- **Description**: Adding more machines to handle increased load.
- **Pros**:
  - Virtually limitless scalability by adding more machines.
  - Increases redundancy, reducing the risk of a single point of failure.
- **Cons**:
  - More complex to implement due to the need for distributed systems design.
  - Requires load balancing and data distribution strategies.
- **Use Cases**: High-growth applications, distributed systems, applications requiring high availability.

### Diagonal Scaling

- **Description**: A combination of vertical and horizontal scaling. Start with vertical scaling and switch to horizontal scaling when the vertical limit is reached.
- **Pros**:
  - Flexibility to adapt to different stages of growth.
  - Optimizes resource utilization.
- **Cons**:
  - Requires careful planning and monitoring to switch between scaling strategies effectively.
- **Use Cases**: Applications with varying load patterns, systems with mixed workloads.

### Auto Scaling

- **Description**: Automatically adjusting the number of running instances based on current load.
- **Pros**:
  - Dynamic scaling based on real-time demand.
  - Cost-efficient as resources are used only when needed.
- **Cons**:
  - Requires accurate load prediction and monitoring.
  - Potential for delays in scaling actions, leading to temporary performance issues.
- **Use Cases**: Cloud-based applications, unpredictable traffic patterns, cost-sensitive applications.

## Key Considerations for Scalability

### Load Balancing

Load balancing distributes incoming network traffic across multiple servers, ensuring no single server becomes a bottleneck. Popular load balancers include:

- **Hardware Load Balancers**: Physical devices designed to distribute traffic.
- **Software Load Balancers**: Tools like Nginx, HAProxy, and cloud-based solutions like AWS Elastic Load Balancing.

### Caching

Caching reduces the load on your servers by storing frequently accessed data in memory. Types of caching include:

- **Client-Side Caching**: Caching data on the user's device.
- **Server-Side Caching**: Caching data on the server side using tools like Redis or Memcached.
- **Content Delivery Networks (CDNs)**: Caching static assets (images, videos, etc.) on servers closer to the user.

### Database Scaling

Databases can be a significant bottleneck in scalable systems. Techniques for scaling databases include:

- **Read Replicas**: Distributing read requests to multiple read-only copies of the database.
- **Sharding**: Partitioning the database into smaller, more manageable pieces called shards.
- **NoSQL Databases**: Databases like MongoDB, Cassandra, and DynamoDB are designed for horizontal scaling.

### Microservices Architecture

Microservices architecture breaks down a monolithic application into smaller, independent services that can be developed, deployed, and scaled independently. This approach enhances scalability by allowing individual services to be scaled based on their specific demands.

### Auto Scaling

Auto scaling automatically adjusts the number of running instances based on current load. Cloud providers like AWS, Google Cloud, and Azure offer auto scaling features, ensuring your application scales dynamically in response to traffic changes.

### Real-World Examples

1. **Vertical Scaling Example**: A startup begins with a single server for their web application. As they gain more users, they upgrade the server’s RAM and CPU to handle the increased load. This works well initially but eventually reaches a hardware limit.
2. **Horizontal Scaling Example**: A popular e-commerce site handles millions of users during peak seasons. They use multiple servers behind a load balancer to distribute incoming traffic. If one server fails, the load balancer redirects traffic to the remaining servers, ensuring uninterrupted service.
3. **Diagonal Scaling Example**: A SaaS company starts with vertical scaling by upgrading their servers as their user base grows. When they reach the limit of vertical scaling, they transition to horizontal scaling by adding more servers and implementing load balancing.
4. **Auto Scaling Example**: A news website experiences fluctuating traffic with sudden spikes during breaking news. Using auto scaling, the website dynamically adjusts the number of servers to handle the traffic spikes, ensuring consistent performance and cost efficiency.

## Conclusion

Scalability is a fundamental aspect of system design that ensures your application can handle growth efficiently. By understanding and implementing various scaling techniques, from vertical, horizontal, and diagonal scaling to auto scaling, load balancing, caching, and database strategies, you can build systems that perform well under increased load. Remember, the right scaling strategy depends on your specific use case, and often, a combination of methods will yield the best results.

In the next section of our System Design Series, we will delve deeper into load balancing techniques and their importance in building scalable systems.
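To make the load-balancing idea discussed earlier concrete, here is a minimal round-robin balancer sketched in Python. It is illustrative only (a real deployment would sit behind Nginx, HAProxy, or a cloud load balancer, as noted above), but it shows the core rotation-plus-health-check logic:

```python
from itertools import cycle


class RoundRobinBalancer:
    """Hand out servers in rotation, skipping any marked unhealthy."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._ring = cycle(self.servers)

    def mark_down(self, server):
        # e.g. called when a health check fails
        self.healthy.discard(server)

    def mark_up(self, server):
        self.healthy.add(server)

    def next_server(self):
        # Advance the ring, but never loop more than one full rotation.
        for _ in range(len(self.servers)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy servers available")
```

With servers `["s1", "s2", "s3"]`, three calls to `next_server()` return them in order; after `mark_down("s2")`, requests simply alternate between `s1` and `s3`, which is exactly the redundancy benefit of horizontal scaling described above.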
realsteveig
1,906,100
Beyond Vanilla JS: Svelte vs. Inferno
For those looking beyond plain JavaScript, Svelte and Inferno offer intriguing front-end framework...
0
2024-06-29T22:06:46
https://dev.to/hope_odidi/beyond-vanilla-js-svelte-vs-inferno-5d0g
webdev, frontend
>For those looking beyond plain JavaScript, Svelte and Inferno offer intriguing front-end framework options:

**Svelte**: Blazingly fast due to its innovative compile-time approach, which eliminates the virtual DOM manipulation found in React and results in a smaller footprint. However, Svelte's young age translates to a less developed ecosystem and potential debugging hurdles.

**Inferno**: A lightweight powerhouse, Inferno boasts a virtual DOM claimed to be twice as fast as React's. It also offers smooth migration from existing React projects. The trade-off? A smaller feature set compared to React, missing functionalities like server-side rendering.

**Choosing Your Weapon:**

- **Peak Performance**: Svelte takes the crown for speed and minimal bundle size.
- **Swift React Migration**: Inferno shines for speed-focused React project transitions.
- **Established Powerhouse**: React remains the go-to for a mature ecosystem and extensive features.

**HNG**

HNG utilizes ReactJS for good reason. React's component-based architecture promotes code reusability, maintainability, and a clear separation of concerns. Here at HNG, I expect to leverage React's strengths to:

- Build interactive and dynamic web applications for various NLP (Natural Language Processing) tasks.
- Contribute to a collaborative coding environment where React's clean syntax fosters clear communication and code maintainability.
- Explore the vast React ecosystem of libraries and tools to streamline development and enhance functionalities.

[HNG Hire](https://hng.tech/hire) | [HNG Internship](https://hng.tech/internship)
hope_odidi
1,906,099
The Final Legal Frontier Embracing the Importance of Space Law
Exploring the critical importance of space law and the legal frameworks that govern space activities and resource utilization.
0
2024-06-29T22:04:31
https://www.elontusk.org/blog/the_final_legal_frontier_embracing_the_importance_of_space_law
space, law, innovation
# The Final Legal Frontier: Embracing the Importance of Space Law

## Introduction

Space, the final frontier. For decades, it has captivated our imaginations and fueled our ambitions. But as our ventures into the cosmos become more frequent and sophisticated, a crucial companion to our technological advancements must also evolve: space law. This dynamic and burgeoning field is essential for ensuring that space activities are conducted safely, sustainably, and equitably.

## Why Space Law Matters

### 1. Safeguarding the Outer Space Environment

The vastness of space may seem infinite, yet our activities within it carry significant environmental risks. Space debris - the remnants of defunct satellites, spent rocket stages, and other discarded material - poses a substantial threat to operational satellites and crewed spacecraft. Space law invokes international cooperation to mitigate these risks.

The United Nations Committee on the Peaceful Uses of Outer Space (COPUOS) and treaties like the Outer Space Treaty (OST) emphasize the preservation of the space environment. These legal frameworks mandate that nations avoid harmful contamination of space and celestial bodies, fostering practices that ensure long-term sustainability.

### 2. Managing Peaceful and Equitable Utilization

The core principle of space law is the peaceful use of outer space. The OST, often considered the Magna Carta of space law, explicitly prohibits the placement of nuclear weapons in space and the militarization of celestial bodies. This alignment helps maintain space as a domain for peaceful exploration and scientific advancement.

Moreover, space law facilitates equitable sharing of space benefits. For instance, the International Telecommunication Union (ITU) manages the global radio-frequency spectrum and satellite orbits, ensuring countries, irrespective of their economic and technological capacities, have fair access to these critical resources.

### 3. Facilitating Commercial Space Activities

The commercial space sector is booming, with private companies like SpaceX, Blue Origin, and others embarking on innovative missions. Legal frameworks are catching up to regulate and facilitate this burgeoning industry.

Space law sets the stage for commercial ventures by clearly defining property rights, liability, and regulatory compliance. The U.S. Commercial Space Launch Competitiveness Act and Luxembourg's Space Resources Law are exemplary legislative efforts that encourage private investment and innovation by providing companies with the legal certainty needed to operate and invest confidently.

### 4. Addressing Resource Utilization

Asteroids, the Moon, and other celestial bodies are rich in untapped resources. Mining these extraterrestrial materials could potentially revolutionize industries on Earth and serve as a catalyst for humanity's expansion into the solar system.

However, resource utilization raises questions about ownership and exploitation rights. The OST stipulates that celestial bodies are not subject to national appropriation, but this provision is ambiguous regarding private enterprise. Initiatives like The Hague International Space Resources Governance Working Group aim to bridge these gaps, proposing guidelines for the responsible and equitable extraction and use of space resources.
## Key Treaties and Policies in Space Law

### The Outer Space Treaty (1967)

- **Prohibits national sovereignty claims**
- **Mandates space activities benefit all humanity**
- **Restricts the militarization of celestial bodies**

### The Rescue Agreement (1968)

- **Ensures the return of astronauts and space objects to their country of origin**

### The Liability Convention (1972)

- **Imposes liability on launching states for damages caused by their space objects**

### The Registration Convention (1976)

- **Obligates states to register space objects with the United Nations**

### The Moon Agreement (1984)

- **Advocates for the Moon's usage for peaceful purposes and common heritage of mankind**

## Looking Forward

As humanity stands on the cusp of becoming an interplanetary species, space law is more critical than ever. It provides the legal scaffolding crucial for harmonizing our endeavors, preserving space for future generations, and preventing conflicts.

The future of space law will undoubtedly evolve alongside technological innovations. Active debris removal, space traffic management, and the legal status of space settlements are just a few of the pressing issues that will shape the next generation of space legislation.

So, as we gaze up at the stars and dream of what lies beyond, let us also appreciate the intricate legal tapestry that makes those dreams possible. The importance of space law cannot be overstated; it is the guiding hand that ensures our cosmic aspirations are met with responsibility and foresight.

## Conclusion

The emerging field of space law is indispensable to the sustained and equitable exploration and utilization of outer space. It safeguards the environment, promotes peaceful usage, regulates commercial activities, and addresses the complex issue of resource utilization. As we continue to push the frontiers of human knowledge and capability, space law will be our essential partner in this exciting journey.
For those who dare to dream and innovate, space law is not just a boundary—it is the foundation upon which the future of humanity in space will be built. 🚀 ---
quantumcybersolution
1,906,098
React.js vs Vue.js: Which would work best for your next project?
React.js and Vue.js are types of JavaScript frameworks used in modern web development. In a nutshell...
0
2024-06-29T22:04:13
https://dev.to/kelechukwufavour/reactjs-vs-vuejs-which-would-work-best-for-your-next-project--4bjf
beginners, webdev, react, javascript
React.js and Vue.js are types of JavaScript frameworks used in modern web development. In a nutshell, they are frontend technologies. Let’s rewind a bit by defining the key terms “frontend technologies” and “web development”. Frontend technologies are tools (frameworks) used to create the interactive, responsive and visual parts of web applications and websites, e.g. HTML, CSS, JavaScript. Web development has to do with building/developing websites or web applications. It involves several steps and is typically divided into two main fields: 1. Backend development: focuses mainly on database interaction, data storage and server-side logic, using languages and frameworks like Python (Django), PHP, etc. 2. Frontend development: as introduced briefly under frontend technologies, focuses mainly on UI/UX (User Interface and User Experience) by building the layout, design and interactive aspects that users interact with, using HTML, CSS and JavaScript frameworks like React, Vue and Angular. N.B.: the combination of both fields, i.e. frontend and backend, is called fullstack development. With these basics established, let’s dive fully into frontend technologies for an in-depth build-up of knowledge. HTML (Hypertext Markup Language): this has to do with the structure and content of web pages. Its files are saved with a “.html” extension. CSS (Cascading Style Sheets): this has to do with the styling and layout of the web page. Its files are saved with a “.css” extension. Case study: A BUILDING. HTML is the bricks, roofing sheets, wood, cement and pillars combined together to form the STRUCTURE. CSS is the painting, fitting of windows, lights and furniture; in essence, the beautification, styling and layout of the structure. The main CSS frameworks are Tailwind and Bootstrap. JavaScript: JavaScript is responsible for the responsiveness, interaction and dynamic character of the webpage. 
JavaScript files are stored with a “.js” extension. It is a broad technology, and several frameworks have been developed on top of it: 1. React.js 2. Vue.js 3. Angular.js 4. Svelte. To answer the billion-dollar question, “which framework would work best for my next project?”, let’s compare React.js vs Vue.js. 1. React.js: This framework was developed by Facebook and released in 2013; it uses a virtual DOM and a component-based architecture. React has been widely adopted, so it has a very large community/ecosystem, flexible and robust tooling, and extensive third-party libraries. React is extremely easy to integrate with other/existing libraries, and can also be used in large applications. Though React is extremely efficient due to virtual DOM diffing, it may sometimes require optimization in large apps. 2. Vue.js: This framework was developed by Evan You. Similar to React.js, Vue.js also uses a component-based architecture, but with its HTML-based templates it offers a more flexible template system and very reactive two-way binding. Although Vue.js’s flexibility allows it to be adopted incrementally and integrated into existing applications, it is better suited for small to medium-sized projects. In closing, I personally find React very flexible; the JSX structure makes the UI more readable and intuitive, as it combines the power of JavaScript with the simplicity of HTML. I am also hopeful that my newly embarked journey with HNG will allow me to acquire real hands-on experience in developing my skills, plus a community for accountability, mentorship, guidance, networking, collaboration, teamwork and growth in general, as all of these are key factors for everyone looking to grow in any field they find themselves in. To join this train, click on the links below: https://hng.tech/internship https://hng.tech/hire Cheers, and good luck on your next project 🥰
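The “virtual DOM diffing” that makes React (and Vue) efficient can be sketched in a few lines of plain JavaScript. This is only a toy illustration of the idea, comparing two UI trees and collecting just the changed nodes; it is not React’s actual reconciliation algorithm, and the node shape `{ tag, text, children }` is invented for the example.

```javascript
// Toy virtual-DOM diff: a node is { tag, text, children }.
// Walk the old and new trees together and record only what changed,
// instead of re-rendering everything. This is the core idea behind
// React's and Vue's efficient updates, heavily simplified.
function diff(oldNode, newNode, path = "root", patches = []) {
  if (!oldNode && newNode) {
    patches.push({ type: "CREATE", path });
  } else if (oldNode && !newNode) {
    patches.push({ type: "REMOVE", path });
  } else if (oldNode.tag !== newNode.tag) {
    patches.push({ type: "REPLACE", path });
  } else if (oldNode.text !== newNode.text) {
    patches.push({ type: "TEXT", path, text: newNode.text });
  } else {
    const oldKids = oldNode.children || [];
    const newKids = newNode.children || [];
    const len = Math.max(oldKids.length, newKids.length);
    for (let i = 0; i < len; i++) {
      diff(oldKids[i], newKids[i], `${path}/${i}`, patches);
    }
  }
  return patches;
}

const oldTree = {
  tag: "ul",
  children: [{ tag: "li", text: "Apples" }, { tag: "li", text: "Oranges" }],
};
const newTree = {
  tag: "ul",
  children: [{ tag: "li", text: "Apples" }, { tag: "li", text: "Bananas" }],
};

// Only one patch comes out: the text of the second <li>.
console.log(diff(oldTree, newTree));
```

A real framework would then apply those patches to the browser DOM; the point is that unchanged nodes are never touched.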
kelechukwufavour
1,906,252
A Deep Dive into CNCF’s Cloud-Native AI Whitepaper
During KubeCon EU 2024, CNCF launched its first Cloud-Native AI Whitepaper. This article provides...
0
2024-06-30T01:31:55
https://dev.to/huizhou92/a-deep-dive-into-cncfs-cloud-native-ai-whitepaper-3ic3
cncf, go
--- title: A Deep Dive into CNCF’s Cloud-Native AI Whitepaper published: true date: 2024-06-29 22:03:00 UTC tags: cncf,golang canonical_url: --- ![Featured image of post A Deep Dive into CNCF’s Cloud-Native AI Whitepaper](https://images.hxzhouh.com/blog-images/2024/04/3c8f677dae51c4d491a982224b6a3e0d.png) > During KubeCon EU 2024, CNCF launched its first Cloud-Native AI Whitepaper. This article provides an in-depth analysis of the content of this whitepaper. In March 2024, during KubeCon EU, the Cloud-Native Computing Foundation (CNCF) released its first detailed whitepaper on Cloud-Native Artificial Intelligence (CNAI) <sup id="fnref:1"><a href="#fn:1" role="doc-noteref">1</a></sup>. This report extensively explores the current state, challenges, and future development directions of integrating cloud-native technologies with artificial intelligence. This article will delve into the core content of this whitepaper. > This article is first published in the medium MPP plan. If you are a medium user, please follow me in [medium](https://medium.huizhou92.com/). Thank you very much. ## What is Cloud-Native AI? Cloud-Native AI refers to building and deploying artificial intelligence applications and workloads using cloud-native technology principles. This includes leveraging microservices, containerization, declarative APIs, and continuous integration/continuous deployment (CI/CD) among other cloud-native technologies to enhance AI applications’ scalability, reusability, and operability. The following diagram illustrates the architecture of Cloud-Native AI, redrawn based on the whitepaper. ![Pasted image 20240418101533](https://images.hxzhouh.com/blog-images/2024/04/40eb5be3bd0139d72f816cef9d25a51f.png) ## Relationship between Cloud-Native AI and Cloud-Native Technologies Cloud-native technologies provide a flexible, scalable platform that makes the development and operation of AI applications more efficient. 
Through containerization and microservices architecture, developers can iterate and deploy AI models quickly while ensuring high availability and scalability of the system. Kubernetes, in particular, provides capabilities such as resource scheduling, automatic scaling, and service discovery. The whitepaper provides two examples to illustrate the relationship between Cloud-Native AI and cloud-native technologies, namely running AI on cloud-native infrastructure: - Hugging Face Collaborates with Microsoft to launch Hugging Face Model Catalog on Azure<sup id="fnref:2"><a href="#fn:2" role="doc-noteref">2</a></sup> - OpenAI Scaling Kubernetes to 7,500 nodes<sup id="fnref:3"><a href="#fn:3" role="doc-noteref">3</a></sup> ## Challenges of Cloud-Native AI Despite providing a solid foundation for AI applications, there are still challenges when integrating AI workloads with cloud-native platforms. These challenges include data preparation complexity, model training resource requirements, and maintaining model security and isolation in multi-tenant environments. Additionally, resource management and scheduling in cloud-native environments are crucial for large-scale AI applications and need further optimization to support efficient model training and inference. ## Development Path of Cloud-Native AI The whitepaper proposes several development paths for Cloud-Native AI, including improving resource scheduling algorithms to better support AI workloads, developing new service mesh technologies to enhance the performance and security of AI applications, and promoting innovation and standardization of Cloud-Native AI technology through open-source projects and community collaboration. ## Cloud-Native AI Technology Landscape Cloud-Native AI involves various technologies, ranging from containers and microservices to service mesh and serverless computing. 
Kubernetes plays a central role in deploying and managing AI applications, while service mesh technologies such as Istio and Envoy provide robust traffic management and security features. Additionally, monitoring tools like Prometheus and Grafana are crucial for maintaining the performance and reliability of AI applications. Below is the Cloud-Native AI landscape provided in the whitepaper. ## General Orchestration - Kubernetes - Volcano - Armada - Kuberay - Nvidia NeMo - Yunikorn - Kueue - Flame ## Distributed Training - Kubeflow Training Operator - PyTorch DDP - TensorFlow Distributed - Open MPI - DeepSpeed - Megatron - Horovod - Apla - … ## ML Serving - KServe - Seldon - vLLM - TGT - SkyPilot - … ## CI/CD — Delivery - Kubeflow Pipelines - MLflow - TFX - BentoML - MLRun - … ## Data Science - Jupyter - Kubeflow Notebooks - PyTorch - TensorFlow - Apache Zeppelin ## Workload Observability - Prometheus - InfluxDB - Grafana - Weights and Biases (wandb) - OpenTelemetry - … ## AutoML - Hyperopt - Optuna - Kubeflow Katib - NNI - … ## Governance & Policy - Kyverno - Kyverno-JSON - OPA/Gatekeeper - StackRox Minder - … ## Data Architecture - ClickHouse - Apache Pinot - Apache Druid - Cassandra - ScyllaDB - Hadoop HDFS - Apache HBase - Presto - Trino - Apache Spark - Apache Flink - Kafka - Pulsar - Fluid - Memcached - Redis - Alluxio - Apache Superset - … ## Vector Databases - Chroma - Weaviate - Qdrant - Pinecone - Extensions - Redis - PostgreSQL - ElasticSearch - … ## Model/LLM Observability - TruLens - Langfuse - Deepchecks - OpenLLMetry - … ## Conclusion Finally, the following key points are summarized: - **Role of Open Source Community** : The whitepaper indicates the role of the open-source community in advancing Cloud-Native AI, including accelerating innovation and reducing costs through open-source projects and extensive collaboration. 
- **Importance of Cloud-Native Technologies** : Cloud-Native AI, built according to cloud-native principles, emphasizes the importance of repeatability and scalability. Cloud-native technologies provide an efficient development and operation environment for AI applications, especially in resource scheduling and service scalability. - **Existing Challenges** : Despite bringing many advantages, Cloud-Native AI still faces challenges in data preparation, model training resource requirements, and model security and isolation. - **Future Development Directions** : The whitepaper proposes development paths including optimizing resource scheduling algorithms to support AI workloads, developing new service mesh technologies to enhance performance and security, and promoting technology innovation and standardization through open-source projects and community collaboration. - **Key Technological Components** : Key technologies involved in Cloud-Native AI include containers, microservices, service mesh, and serverless computing, among others. Kubernetes plays a central role in deploying and managing AI applications, while service mesh technologies like Istio and Envoy provide necessary traffic management and security. For more details, please download the Cloud-Native AI whitepaper <sup id="fnref:4"><a href="#fn:4" role="doc-noteref">4</a></sup>. ## Reference Links * * * 1. [Whitepaper:](https://www.cncf.io/reports/cloud-native-artificial-intelligence-whitepaper/) [↩︎](#fnref:1) 2. [Hugging Face Collaborates with Microsoft to launch Hugging Face Model Catalog on Azure](https://huggingface.co/blog/hugging-face-endpoints-on-azure) [↩︎](#fnref:2) 3. [OpenAI Scaling Kubernetes to 7,500 nodes:](https://openai.com/research/scaling-kubernetes-to-7500-nodes) [↩︎](#fnref:3) 4. [Cloud-Native AI Whitepaper:](https://www.cncf.io/reports/cloud-native-artificial-intelligence-whitepaper/) [↩︎](#fnref:4)
huizhou92
1,906,091
Understanding Documentation: A Critical Skill.
I remember when I had to integrate an AI into the application I was building, and the deadline was...
0
2024-06-29T22:01:39
https://dev.to/cytochrome123/understanding-documentation-a-critical-skill-2obj
I remember when I had to integrate an AI into the application I was building, and the deadline was approaching. I was too lazy to go through the whole documentation, and it wasn't very user-friendly, which eventually cost me about two days out of the tight time I had left. So, I had to sit down, read it properly, and test it with their playground to fully understand what they were trying to convey. Then I implemented it, and it was all done without much stress. I really understood what I was doing throughout the process. I don't think I'd want to make such mistakes again. I decided to participate in the ongoing [HNG internship](https://hng.tech/internship) to improve my ability to collaborate effectively with team members. My goal is to better understand their coding intentions and project goals with ease, reducing miscommunications and enhancing productivity. This experience will help me avoid the mistakes I've made in the past and develop a more efficient approach to teamwork and problem-solving. You can also get a certificate by subscribing to their [premium](https://hng.tech/premium) offer here.
cytochrome123
1,906,251
If Google No Longer supports Golang
Last month’s hot topic in IT circles was Google laying off many developers from its Python core team...
0
2024-06-30T01:33:29
https://huizhou92.com/p/if-google-no-longer-supports-golang
google, go
--- title: If Google No Longer supports Golang published: true date: 2024-06-29 22:01:00 UTC tags: google, golang canonical_url: https://huizhou92.com/p/if-google-no-longer-supports-golang --- Last month’s hot topic in IT circles was Google laying off many developers from its Python core team and Flutter/Dart team, purportedly as part of a company-wide reorganization. [https://news.ycombinator.com/item?id=40171125](https://news.ycombinator.com/item?id=40171125) Reportedly, those laid off were mostly core members responsible for important Python maintenance. As a gopher, I pondered: will Google abandon Go? And if so, what would become of Go? ## What does Google offer to Go? Based on our past understanding, clarified by @Lance Taylor and descriptions from various sources, we can estimate what Go has likely received from Google. 1. **Job Positions**: Details regarding job positions of members of the Go core team, including compensation, benefits, and other remuneration. 2. **Software and Hardware Resources**: Information on Go-related resources such as intellectual property, servers, domain names, and module management mirrors required by the community. 3. **Offline Activities**: Possibility of reduced or scaled-down Go conferences worldwide in terms of funding and endorsement. 4. **Internal Resources of Big Corporations**: Gradual loss of exposure to advanced projects and opportunities for Go’s adoption due to the absence of resources within Google. 5. **Promotion and Feedback Channels**: Slower discovery and response to significant issues and features in Go as Google’s internal demands historically take precedence. ## Potential Scenarios What might happen if Google dissolves the Go core team and ceases all infrastructure support? - Dissolution of the Go core team; leading members may retire or seek employment elsewhere. - If Google decides to cease all investment in Go, maintenance of Go could become more complex as it relies heavily on infrastructure. 
In such a scenario, Go might transition from Google to an external foundation, resulting in noticeable maintenance fluctuations. - If Google chooses to continue investing in Go through other internal teams, the worst-case scenario could involve Google flexing its ownership of intellectual property, possibly leading to Go being rebranded. - CNCF might take over Google’s mantle, organizing the future development of Go. Among CNCF projects, the Go language enjoys the widest adoption. ## Probability of Occurrence Currently, Go belongs to Google Cloud. Considering Go’s current trend focusing on customer success, the likelihood of Google Cloud shutting down Go is low. But who knows? I consulted `gemini` on this question. ![](https://cdn-images-1.medium.com/max/800/0*4EEwSKbJ6DxddogE.png)generated by Gemini ## Conclusion Drawing from the example of Rust, which transitioned from Mozilla’s core to an independent foundation, Go could potentially thrive even more. A nonprofit organization will probably form around Go (or it may directly join CNCF), with enough support from major companies, at least for a period. ## References - [https://ajmani.net/2024/02/23/go-2019-2022-becoming-a-cloud-team/](https://ajmani.net/2024/02/23/go-2019-2022-becoming-a-cloud-team/) - [https://www.reddit.com/r/golang/comments/1cft7mc/if\_google\_decided\_to\_part\_with\_the\_core\_go\_team/](https://www.reddit.com/r/golang/comments/1cft7mc/if_google_decided_to_part_with_the_core_go_team/)
huizhou92
1,906,096
Analyzing a Dataset and Writing a “First Glance” Technical Report
HNG Internship | HNG Introduction The purpose of this technical report is to examine and...
0
2024-06-29T21:58:16
https://dev.to/isah_katunadam_00cd9ef1f/analyzing-a-dataset-and-writing-a-first-glance-technical-report-2jg7
[HNG Internship | HNG](https://hng.tech/internship) ## **Introduction** The purpose of this technical report is to examine and identify actionable insights at “first glance” from the Titanic dataset sourced from Kaggle. The Titanic dataset has been studied by data science and academic communities for some time now and continues to garner the attention of scientists. The data concerns the tragedy of the RMS Titanic, a British ocean liner that sank on 15th April 1912 as a result of striking an iceberg on her maiden voyage. There were an estimated 2,224 passengers and crew on board. The dataset contains data on children, males and females. ## **Observations** There were more males on the ocean liner than there were females. A lot of the observations are missing the age field. The average age was 32 years (from the training dataset). ## **Conclusion** The dataset is very useful for analyzing who survived and who did not. Feature engineering could be applied to fill the missing fields, or the incomplete records could be removed entirely. Further exploration of the dataset is needed to ascertain trends like the class of the passengers, and whether certain features determine the survivability of a passenger. [Find a Hire | HNG](https://hng.tech/hire)
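As a sketch, the “first glance” checks above (sex counts, missing ages, average age) can be scripted over a tiny made-up sample. The five records below are purely illustrative, not rows from the Kaggle files, and in practice this would usually be done with pandas; plain JavaScript is used here just to show the logic.

```javascript
// Hypothetical sample shaped like the Titanic training data
// (Sex, Age, Survived). These rows are invented for illustration.
const sample = [
  { sex: "male", age: 22, survived: 0 },
  { sex: "female", age: 38, survived: 1 },
  { sex: "female", age: null, survived: 1 }, // missing age field
  { sex: "male", age: 35, survived: 0 },
  { sex: "male", age: null, survived: 0 },   // missing age field
];

// Count passengers by sex.
const bySex = sample.reduce((acc, p) => {
  acc[p.sex] = (acc[p.sex] || 0) + 1;
  return acc;
}, {});

// How many observations are missing the age field?
const missingAge = sample.filter((p) => p.age == null).length;

// Average age over the rows where age is known.
const known = sample.filter((p) => p.age != null);
const avgAge = known.reduce((sum, p) => sum + p.age, 0) / known.length;

console.log(bySex, missingAge, avgAge.toFixed(1));
// { male: 3, female: 2 } 2 31.7
```

On the real training set the same three passes answer all three observations in the report (more males than females, missing ages, average age).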
isah_katunadam_00cd9ef1f
1,906,095
A Comparative Dive into Alpine.js and Stimulus.js: Niche Frontend Technologies
Hello, tech lovers! As I embark on my journey with the HNG Internship, I am thrilled to explore and...
0
2024-06-29T21:58:00
https://dev.to/nwogu_precious_52ab8ab48c/a-comparative-dive-into-alpinejs-and-stimulusjs-niche-frontend-technologies-9og
Hello, tech lovers! As I embark on my journey with the HNG Internship, I am thrilled to explore and compare different frontend technologies. While ReactJS is a dominant force in the frontend world, there are many niche frameworks and libraries that offer unique advantages. In this article, I'll delve into two lesser-known but powerful frontend technologies: Alpine.js and Stimulus.js. We will contrast their features, strengths, and use cases. Additionally, I'll share my expectations for the HNG Internship and my thoughts on working with ReactJS. ### Alpine.js: The Minimalist JavaScript Framework #### Overview Alpine.js is a lightweight JavaScript framework that brings the power of Vue-like reactivity to your HTML. It is designed to be minimal and easy to integrate into existing projects, making it ideal for enhancing static pages with dynamic behavior. #### Key Features - **Declarative Syntax**: Alpine.js allows you to add interactivity directly in your HTML using a syntax that is both simple and powerful. - **Small Footprint**: With a minimal file size, Alpine.js ensures that your pages load quickly. - **Reactivity**: Alpine.js provides reactive data binding and allows you to create dynamic user interfaces with ease. #### Pros and Cons Pros: - **Ease of Integration**: Alpine.js can be added to any project by simply including a script tag, making it very easy to start using. - **Simplicity**: The framework’s syntax is straightforward, reducing the learning curve. - **Performance**: Due to its small size, Alpine.js has minimal impact on page load times. Cons: - **Limited Ecosystem**: Alpine.js is relatively new and has a smaller ecosystem compared to more established frameworks. - **Functionality**: It is designed for small to medium-sized interactions and may not be suitable for very complex applications. ### Stimulus.js: The Modest JavaScript Framework #### Overview Stimulus.js is a modest JavaScript framework created by Basecamp. It is designed to enhance your HTML by connecting JavaScript controllers to your elements. 
Stimulus follows the "HTML over JavaScript" principle, promoting the idea that HTML should drive the structure and behavior of your applications. #### Key Features - **Controller-Based**: Stimulus uses controllers to add behavior to HTML elements, making your code more organized and modular. - **Convention over Configuration**: Stimulus promotes conventions that reduce the need for boilerplate code. - **Declarative Binding**: You can bind elements to controllers directly in your HTML, which keeps your code clean and maintainable. #### Pros and Cons Pros: - **Organized Code**: Stimulus encourages a clean separation of concerns by using controllers. - **Minimalistic**: It’s lightweight and focuses on enhancing HTML rather than replacing it. - **Maintainability**: The convention-driven approach makes it easy to maintain and scale your applications. Cons: - **Learning Curve**: While simple, it may require some time to get used to the Stimulus way of structuring code. - **Limited Use Cases**: Stimulus is best suited for applications that prioritize HTML and may not be ideal for highly dynamic interfaces. ### My Expectations for the HNG Internship Joining the HNG Internship is an incredible opportunity to enhance my skills and collaborate with other talented developers. I am particularly excited about working with ReactJS, a robust and widely-used frontend library known for its component-based architecture and extensive ecosystem. Through this internship, I hope to deepen my understanding of React, learn best practices, and contribute to real-world projects. The HNG Internship provides a platform for practical, hands-on learning, which is crucial for developing as a frontend developer. The tasks and projects simulate real-world scenarios, offering an excellent environment for growth. For more information about the HNG Internship, you can visit their [official website](https://hng.tech/internship) and learn more about hiring talented developers [here](https://hng.tech/hire). 
### Conclusion Both Alpine.js and Stimulus.js offer unique advantages for frontend development. Alpine.js is perfect for adding simple, reactive behavior to your HTML with minimal setup, while Stimulus.js excels in organizing and maintaining JavaScript code for HTML-driven applications. As I continue my journey with HNG and dive into ReactJS, I am eager to apply these insights and grow as a developer.
nwogu_precious_52ab8ab48c
1,906,093
React JS vs Angular, What’s The Difference?
In the world of front-end web development, React JS and Angular are two of the most popular...
0
2024-06-29T21:55:50
https://dev.to/zoeyahmi/react-js-and-angular-whats-the-difference-npb
In the world of front-end web development, React JS and Angular are two of the most popular frameworks. Both offer powerful tools for building modern, dynamic web applications, but they differ significantly in their approaches, philosophies, and capabilities. In this article, I will discuss the differences between these technologies and their use cases. React, developed by Facebook in 2013, is primarily a JavaScript library focused on building user interfaces. It is centered around components, which are reusable pieces of UI. One of React’s core features is the virtual DOM, which allows for efficient updates and rendering by diffing changes and updating only the necessary parts of the DOM. This results in fast rendering and efficient updates, making React suitable for high-performance applications. Angular, on the other hand, developed by Google and initially released in 2010 (AngularJS) and later in 2016 (Angular), is a comprehensive front-end framework designed for building complex, scalable web applications. It follows an MVC (Model-View-Controller) architecture and uses two-way data binding to synchronize the model and view automatically. Angular provides a wide range of built-in features, including dependency injection, routing, and form validation, making it a full-fledged framework for front-end development. **Core Differences between the technologies** - React JS is a JavaScript library focused on building user interfaces, while Angular is a comprehensive front-end framework for building complex web applications. - React JS is centered around a component-based architecture, while Angular uses an MVC (Model-View-Controller) and component-based architecture. - React JS has a gentler learning curve as it is based on JavaScript and ES6, while Angular has a steeper learning curve due to its comprehensive nature and use of TypeScript. 
- React JS is highly flexible and unopinionated, giving developers freedom to choose their tools and libraries, while Angular is more opinionated, providing a complete set of tools and best practices out of the box. - React JS is generally faster due to its virtual DOM, while Angular’s performance has improved with Ivy, its new rendering engine, but can be slower in very large applications compared to React. - React JS requires external libraries like React Router for routing, while Angular includes a built-in routing module. However, both React JS and Angular are powerful tools for front-end development, each with its own strengths and use cases. React’s flexibility, component-based architecture, and efficient virtual DOM make it an excellent choice for building dynamic user interfaces. Angular’s comprehensive framework, built-in features, and structured approach make it ideal for developing complex, scalable web applications. The choice between React and Angular ultimately depends on your project requirements, team expertise, and development preferences. Understanding their differences will help you choose the best tool for your next project. If you are interested in connecting and collaborating with other developers and like-minded individuals, look no further: register and join the HNG internship. [HNG Internship](https://hng.tech/internship) [HNG Internship Premium](https://hng.tech/premium) Let's change the world one project at a time!
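The two-way data binding difference listed above can be made concrete with a toy observable model in plain JavaScript. This is a conceptual sketch of keeping a model and a "view" in sync (the names `createModel`, `subscribe`, etc. are invented for the example), not Angular's or React's real machinery.

```javascript
// Toy sketch of two-way binding: a model that notifies subscribers
// on every change, the way Angular keeps model and view in sync.
// This is a conceptual illustration, not Angular's actual internals.
function createModel(initial) {
  let value = initial;
  const subscribers = [];
  return {
    get: () => value,
    set(next) {
      value = next;
      subscribers.forEach((fn) => fn(value)); // push the change to the "view"
    },
    subscribe(fn) {
      subscribers.push(fn);
    },
  };
}

const name = createModel("Ada");
let rendered = "";

// The "view" re-renders whenever the model changes...
name.subscribe((v) => { rendered = `Hello, ${v}!`; });

// ...and "user input" writes back into the model (the other direction).
name.set("Grace");
console.log(rendered); // Hello, Grace!
```

In React you would instead flow data one way (state down, events up) and re-render through the virtual DOM; Angular wires this kind of synchronization up for you with `[(ngModel)]`-style bindings.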
zoeyahmi
1,906,092
Quick Hitting Post Moves: Speed and Efficiency
Analyze a variety of quick post moves that rely on speed and efficiency to catch defenders off guard.
0
2024-06-29T21:54:11
https://www.sportstips.org/blog/Basketball/Center/quick_hitting_post_moves_speed_and_efficiency
basketball, postmoves, offense, speed
# Quick Hitting Post Moves: Speed and Efficiency In today's fast-paced game of basketball, mastering quick post moves can be a game-changer. Whether you're a power forward or a center, having a repertoire of speedy and efficient post moves can help you stay ahead of defenders. Let's dive into some essential techniques that combine player knowledge and coaching wisdom for quick post play. ## Key Quick Post Moves ### 1. Drop Step The drop step is a fundamental post move designed to gain immediate space and position against the defender. **Execution:** 1. **Receive the Ball:** Secure the pass from your teammate with your back to the basket. 2. **Pivot:** Use your baseline foot to pivot around the defender. 3. **Drop:** Drop your lead foot towards the basket, sealing off the defender with your body. 4. **Finish:** Use a power dribble and go up strong for a layup or dunk. **Coaching Tip:** Emphasize the importance of body positioning and staying low during the pivot to maintain balance. ### 2. Up-and-Under This move is perfect for faking out your defender and creating space for a high-percentage shot. **Execution:** 1. **Receive the Ball:** Establish position and secure the pass. 2. **Fake Shot:** Perform a shot fake to get the defender off their feet. 3. **Step Through:** Step through with your non-pivot foot while keeping the ball protected. 4. **Finish:** Go up for a controlled layup or short hook shot. **Coaching Tip:** Ensure players sell the fake convincingly and keep their footwork clean to avoid traveling violations. ### 3. The Spin Move A highly effective move to catch defenders off guard and open up an easy path to the basket. **Execution:** 1. **Back Down Defender:** Use a couple of power dribbles to back down your defender. 2. **Spin:** Plant your inside foot and spin quickly towards the baseline or middle of the lane. 3. **Protect the Ball:** Keep the ball high and close to your body to avoid turnovers. 4. **Finish:** Complete the spin with a strong layup or hook shot. **Coaching Tip:** Encourage players to practice the spin move at game speed to perfect timing and balance. ## Combining Moves for Maximum Effectiveness ### The "Quick Combo" By combining different moves, players can become even more unpredictable. Here’s how to integrate some of the moves discussed: **Example Combo:** 1. Receive the ball and use a **Drop Step**. 2. If the defender recovers, pivot into an **Up-and-Under**. 3. If still covered, execute a **Spin Move** to finish. **Coaching Tip:** Drills that incorporate combinations of these moves will help players develop the instinctual feel needed during live game conditions. ## Post Move Efficiency: Key Metrics Ensuring that your quick hitting post moves are effective isn't just about practice but also understanding key efficiency metrics. Here's what to watch for: | Metric | Objective | How to Measure | |--------|-----------|----------------| | **Shot Accuracy** | Maximize scoring with quick post moves | Track field goal percentage in practice and games | | **Turnover Rate** | Minimize lost possessions while executing moves | Count turnovers during scrimmages and provide feedback | | **Footwork** | Maintain balance and speed | Review game footage to assess proper footwork execution | | **Defender Reaction** | Gauge the effectiveness of fakes and moves | Monitor how often defenders are caught off guard or commit fouls | ## Conclusion Mastering quick hitting post moves can transform a player into an unstoppable force in the paint. With attention to speed, efficiency, and precise execution, players can outmaneuver defenders and capitalize on scoring opportunities. Coaches should emphasize these techniques during practice to build a strong foundation for game-day success. Remember, basketball is as much about agility and strategy as it is about power and skill. 
Keep honing these moves, and you'll find yourself catching defenders off guard time and time again.
quantumcybersolution
1,906,090
Comparing Frontend Technologies: ReactJS vs. Pure HTML, CSS, and JavaScript
Introduction Hey there! If you're diving into frontend development, you’ve probably come...
0
2024-06-29T21:51:27
https://dev.to/jomagene/comparing-frontend-technologies-reactjs-vs-pure-html-css-and-javascript-3ofb
react, javascript, webdev, frontend
### Introduction Hey there! If you're diving into **frontend development**, you’ve probably come across a bunch of different tools and frameworks. Today, I want to chat about two different approaches: using **ReactJS** and sticking with the basics—pure **HTML, CSS, and JavaScript**. Recently, while working on a project with a friend, she asked me, “Why bother with React when we can quickly get the job done using just HTML, CSS, and JavaScript?” I laughed but realized it’s a great question. Every developer faces this dilemma at some point: why choose one technology over another? So, let’s dive into the comparison and see what each approach offers, with a sprinkle of humor along the way. ### What is ReactJS? ReactJS is like the cool kid in the JavaScript world. Created by Facebook, it helps developers build user interfaces, especially for single-page applications where data changes more often than the weather. #### Key Features of ReactJS: **Component-Based Architecture**: Imagine building your UI like a LEGO set. React lets you snap together reusable components, making your code more organized and easier to manage. **Virtual DOM**: Instead of updating the actual DOM (which can be as slow as a Monday morning), React updates a virtual version first, speeding things up. **Declarative Syntax**: You just describe what you want your UI to look like, and React takes care of the nitty-gritty details. **Rich Ecosystem**: There’s a ton of libraries and tools that work with React, so you can find a solution for almost any problem. It’s like having a Swiss Army knife for web development. ### Pure HTML, CSS, and JavaScript On the flip side, developing with pure HTML, CSS, and JavaScript means you’re writing everything from scratch. It’s like cooking a gourmet meal without pre-made ingredients—you have total control, but it’s a lot of work! #### Key Features of Pure HTML, CSS, and JavaScript: **Direct Control**: You have complete control over every aspect of your web application. 
It's like being the king or queen of your code kingdom. **Simplicity**: This approach is perfect for small projects or static websites where you don’t need the extra weight of a framework. **Performance**: With no extra layers, your site can be as fast as a cheetah on espresso. **Fundamental Knowledge**: Mastering these core technologies is essential for any web developer. It's like learning to ride a bike before driving a car. ### Comparing ReactJS and Pure HTML, CSS, and JavaScript **Development Speed**: React’s reusable components can speed up development, especially for bigger projects. Pure HTML, CSS, and JavaScript might take more time to manage as your project grows, like trying to juggle flaming torches. **Scalability**: React is built to handle large, dynamic applications easily. Pure HTML, CSS, and JavaScript can get tricky to maintain as your project scales, like trying to keep a house of cards from falling. **Learning Curve**: React has a bit of a learning curve because of its ecosystem and concepts like JSX and hooks. Pure HTML, CSS, and JavaScript are the basics every web developer should know—like the ABCs of web development. **Maintenance**: React’s modular approach makes it easier to update and maintain your code. Pure HTML, CSS, and JavaScript can become more error-prone and harder to manage in larger codebases, like trying to herd cats. ### My Experience with ReactJS I’ve spent a lot of time working with ReactJS, and I’ve really come to appreciate its power and efficiency. The ability to create reusable components and manage state effectively has changed the way I build web applications. Plus, the React community is super active, which means there’s always a new tool or resource to help you out. It’s like being part of a massive, nerdy family. ### Expectations for HNG Internship As part of the HNG Internship, I’m excited to sharpen my ReactJS skills, work with other talented developers, and get involved in some cool projects.
This internship is a fantastic opportunity to learn best practices from industry experts and apply them to real-world scenarios. I can’t wait to see what we’ll build together! ### Conclusion Both ReactJS and pure HTML, CSS, and JavaScript have their strengths. ReactJS is great for building complex, dynamic applications, while pure HTML, CSS, and JavaScript offer simplicity and control, perfect for smaller projects. If you’re interested in learning more about the HNG Internship, check out these links: - [HNG Internship](https://hng.tech/internship) - [HNG Hire](https://hng.tech/hire) ### Call to Action I’d love to hear your thoughts on ReactJS versus pure HTML, CSS, and JavaScript. Which do you prefer and why? Drop a comment below and let’s chat! Let’s make this a fun conversation—after all, we’re all here to learn and grow together.
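To make the "reusable component" idea from the comparison above concrete, here is a minimal, hypothetical sketch of a hand-rolled component in pure JavaScript; the names (`Card`, `renderList`) are illustrative and not from any framework:

```javascript
// A hand-rolled "component" in plain JavaScript: just a function that
// returns an HTML string. Reuse comes for free, but unlike React there
// is no virtual DOM: after any data change you must rebuild the markup
// and re-insert it (e.g. via innerHTML) yourself.
function Card({ title, body }) {
  return `<div class="card"><h2>${title}</h2><p>${body}</p></div>`;
}

// Composing components is just composing functions.
function renderList(items) {
  return items.map(Card).join("\n");
}

const html = renderList([
  { title: "Hello", body: "Plain JS" },
  { title: "World", body: "No framework needed" },
]);
console.log(html);
```

In React the same `Card` would be a component that re-renders automatically when its props change; here every update is a manual re-render and DOM swap, which is exactly the maintenance trade-off discussed in the comparison.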
jomagene
1,906,089
A Simple Fix to A Major Banking Blunder
As a developer, I've come to understand that even minor oversights can lead to significant outcomes....
0
2024-06-29T21:49:09
https://dev.to/hephzy/a-simple-fix-to-a-major-banking-blunder-2dbp
webdev, javascript, backend, programming
As a developer, I've come to understand that even minor oversights can lead to significant outcomes. I recently discovered a flaw in an API used for banking operations that might have let users take out more money than they had in their accounts. Upon finding the defect, I promptly set out to work on a remedy to block any unauthorized withdrawals. ## Overview of the Banking Blunder Upon examining the API code, it was evident that the withdrawal function lacked a critical validation step. In the absence of this check, users could potentially withdraw amounts exceeding their account balances, leading to overdrafts and severe financial implications. Such a vulnerability poses a considerable risk to the banking sector, eroding the trust and security customers anticipate in financial dealings. Banks must swiftly rectify these issues to preserve their credibility and safeguard client assets. ## The Simple Fix To tackle this problem, I implemented a straightforward validation check that compares the withdrawal amount to the account balance. If the withdrawal amount is greater than the balance, the system issues an error message for insufficient funds. This measure helps to prevent customers from overdrawing their accounts and incurring hefty fees. By instituting this validation check, banks can markedly decrease the incidence of overdrafts and the fees customers accrue. This enhancement not only boosts customer satisfaction but also economizes bank resources by reducing the necessity to reverse transactions or manage overdrafts. ## Steps to Implement the Fix In tackling the problem, I devised an asynchronous function tasked with verifying the user's current balance. ``` javascript async function checkBalance(accountNumber) { const query = ` SELECT Balance FROM Accounts WHERE Account_No = ? 
`; const values = [accountNumber]; const result = await (await dB).query(query, values); console.log(result); return result; // You have to extract the balance from the result } ``` Then, I compare that with the amount the user wishes to withdraw. ``` javascript const hasAmount = await checkBalance(Source_account); console.log(hasAmount[0][0].Balance); if (hasAmount[0][0].Balance < Amount) { throw new Error(`Not Enough Balance`); } ``` ## Conclusion In software development, attention to detail is crucial. By catching this defect and implementing a simple validation check, I prevented a potential banking blunder. This experience serves as a reminder that even the smallest fixes can have a significant impact on the reliability and security of our systems. Banks need to emphasize the potential risks and consequences of overlooking such issues. It is imperative for banks to prioritize implementing these simple fixes to maintain trust and credibility with their customers. **NOTE** This article serves as Assignment - Task 0 for the [HNG Internship programme](https://hng.tech/internship), a rapid programme designed to provide beginners in technology with fundamental training in various fields. It offers the opportunity to gain experience by working as interns, collaborating with peers to accomplish tasks, or submitting projects before deadlines. Additionally, there is a [HNG premium space](https://hng.tech/premium) where tech enthusiasts can connect, participate in mock interviews, have their CVs reviewed, and explore opportunities.
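One caveat to the check-then-withdraw flow described above: between `checkBalance` and the actual debit, a concurrent request could drain the account. A hedged sketch of an alternative, in which the database enforces the check atomically, is shown below; the `runQuery` parameter, table, and column names are assumptions modeled on the snippets above, not the article's actual API:

```javascript
// Hypothetical atomic withdrawal: a single conditional UPDATE debits the
// account only when the balance covers the amount, so there is no window
// between "check" and "withdraw" for another request to sneak through.
// `runQuery` stands in for a mysql2-style promise query function.
async function withdraw(runQuery, accountNumber, amount) {
  const [result] = await runQuery(
    `UPDATE Accounts
        SET Balance = Balance - ?
      WHERE Account_No = ? AND Balance >= ?`,
    [amount, accountNumber, amount]
  );
  // affectedRows === 0 means the balance was insufficient (or the
  // account does not exist); nothing was debited.
  if (result.affectedRows === 0) {
    throw new Error("Not Enough Balance");
  }
  return result;
}

// Tiny in-memory stand-in for the database, just to exercise the logic.
function makeFakeDb(balance) {
  return async (_sql, [amount]) => {
    if (balance >= amount) {
      balance -= amount;
      return [{ affectedRows: 1 }];
    }
    return [{ affectedRows: 0 }];
  };
}
```

Whether to fold the check into the UPDATE or keep a separate `checkBalance` inside a transaction is a design choice; the conditional UPDATE simply moves the race-sensitive comparison into the database engine.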
hephzy
1,906,087
[Game of Purpose] Day 42
Today I started watching a tutorial about creating UI overlays. I want to indicate the engine status,...
27,434
2024-06-29T21:49:07
https://dev.to/humberd/game-of-purpose-day-42-22c0
gamedev
Today I started watching a tutorial about creating UI overlays. I want to indicate the engine status, camera status (3rd person or 1st person), speed, etc.
humberd
1,906,086
The Final Frontier: Challenges and Opportunities in Space-Based Medical Technologies for Long-Duration Missions
Delving into the intricacies of developing medical technologies for long-duration space missions, from combating zero-gravity ailments to pioneering telemedicine solutions.
0
2024-06-29T21:48:33
https://www.elontusk.org/blog/the_final_frontier_challenges_and_opportunities_in_space_based_medical_technologies_for_long_duration_missions
space, medicaltechnology, innovation
# The Final Frontier: Challenges and Opportunities in Space-Based Medical Technologies for Long-Duration Missions Space exploration has always been the epitome of human ambition. But as we set our sights beyond the moon and toward Mars, an intriguing question arises: How do we keep astronauts healthy during these long-duration missions? The development of space-based medical technologies is not just an engineering challenge but a multifaceted endeavor that unites biology, medicine, and cutting-edge technology. ## The Gravity of the Situation: Health Challenges in Space The human body is finely tuned to Earth’s environment, particularly to its gravity. Take that away, and you’ve got a cosmic conundrum on your hands. ### Muscle Atrophy In microgravity, muscles don't need to work as hard to support the body. This leads to muscle atrophy, weakening, and loss of muscle mass. Imagine training for years only to lose your strength halfway through the mission. ### Bone Density Loss Like muscles, bones suffer in space. Astronauts can lose up to 1% of their bone mass per month, making them susceptible to fractures. This is a critical issue for missions that might last years rather than months. ### Fluid Redistribution In zero-gravity conditions, bodily fluids shift towards the head. This can cause vision problems, congestion, and increased intracranial pressure. It's a headache—literally and figuratively! ### Radiation Exposure Space is filled with high-energy cosmic rays and solar particles. Without Earth’s atmosphere and magnetic field to protect them, astronauts are at a higher risk of radiation exposure, which can lead to cancer and other severe health issues. ## Innovative Solutions on the Horizon While these challenges are daunting, they also present a wealth of opportunities for innovation. Here are some of the groundbreaking solutions being explored: ### Artificial Gravity One of the most promising areas is the development of artificial gravity.
Rotating spacecraft or habitation modules could simulate Earth-like gravity, helping mitigate muscle and bone loss. ### Advanced Robotics and AI Telemedicine is fantastic, but it has latency issues. Enter advanced robotics and AI-driven medical devices. Imagine a robot performing surgery under the guidance of a distant Earth-based surgeon. This is closer to reality than you might think. ### Bioregenerative Life Support Systems These systems would not only provide food, water, and oxygen but also support waste recycling and health monitoring. Using plants and microorganisms to create a self-sustaining ecosystem is both an enchanting and practical solution. ### Wearable Health Monitors Smart, wearable devices are set to become the personal health aides of the future. These gadgets will continuously monitor vital signs, allowing for real-time data analysis and early detection of potential health issues. ## Collaborations and Funding: The Twin Pillars Supporting Innovation Developing space-based medical technologies requires not just brilliant minds but also substantial funding and collaboration. ### Public-Private Partnerships Organizations like NASA, ESA, and private space companies like SpaceX and Blue Origin are pooling resources and expertise. This collaborative environment accelerates innovation and brings us closer to our goals. ### International Cooperation Space exploration is a global endeavor. International collaborations ensure that resources are used efficiently and that solutions are universally applicable. The International Space Station (ISS) serves as a perfect example of such cooperation. ### Grants and Investments Governments and private investors are increasingly willing to fund space-related medical research. From grants to startup investments, the financial landscape is fertile for innovation. 
## The Ripple Effect: Terrestrial Benefits What's exciting is that the innovations necessary for space missions have enormous potential to benefit life on Earth. Technologies developed for remote monitoring and robotic surgery could revolutionize healthcare accessibility in underserved areas. Advanced life support systems could lead to more sustainable living practices. ## A Glimpse into the Future As we inch closer to sending humans to Mars, the development of space-based medical technologies will continue to evolve at a rapid pace. The stakes are high, but so are the rewards. It's not just about surviving in space; it's about thriving. So here we are, on the cusp of monumental advancements, ready to unravel the mysteries of human health in the cosmos. As we continue to push the boundaries of what's possible, each step taken in the vacuum of space echoes with the promise of a healthier future for all of humanity. Stay tuned, star-gazers. The best is yet to come. --- Let's keep pushing the frontiers of human knowledge and capability, together. 🌌🚀
quantumcybersolution
1,906,085
Fooling Port Scanners: Simulating Open Ports with eBPF and Rust
In our previous article, we explored the SYN and accept queues and their crucial role in the TCP...
25,052
2024-06-29T21:47:09
https://www.kungfudev.com/blog/2024/06/29/fooling-port-scanners-simulating-open-ports-rust-and-ebpf
ebpf, rust, linux, networking
In our previous [article](https://www.kungfudev.com/blog/2024/06/14/network-sockets-syn-and-accept-queue), we explored the `SYN and accept queues` and their crucial role in the TCP `three-way handshake`. We learned that for a TCP connection to be fully established, the three-way handshake must be successfully completed. Let's recap this process: - The client initiates the connection by sending a `SYN` packet. - The server responds with a `SYN-ACK` packet. - The client then sends an `ACK` packet back to the server. At this point, the client considers the connection established. However, it's important to note that from the server's perspective, the connection is only fully established when it receives and processes the final `ACK` from the client. In this article, we will review the three-way handshake behavior and a related port scanning technique. We will also explore how to use `Rust` and `eBPF` to thwart curious individuals attempting to scan our machine using this technique. ## Understanding the TCP Three-Way Handshake in Port Scanning As we delve deeper into TCP connection management, it's crucial to understand the server's behavior when it receives connection requests. Let's break this down: When a server has a socket listening on a specific port, it's ready to handle incoming connection requests. Upon receiving a `SYN` packet, the server begins tracking potential connections by allocating resources, such as space in the `SYN queue`. This behavior, while necessary for normal operation, can be exploited by malicious actors. For instance, the `SYN flood attack` takes advantage of this resource allocation to overwhelm the server. > If you're interested in learning more about the `SYN flood attack`, feel free to leave a comment. This attack exploits the TCP handshake process to overwhelm a server with incomplete connection requests. It works by sending a large number of `SYN` packets to a server. 
The server allocates resources for each connection request, occupying significant kernel memory, while the cost for the client is just one `SYN` packet per request, with no memory allocation on their end. It's important to note the phrase **"When a server has a socket listening on a ..."** from our previous discussion. This condition is critical because it determines how the server responds to incoming `SYN` packets. Let's clarify the two scenarios: ### Server with a listening socket on the targeted port Flow: - Receives `SYN` packet - Responds with `SYN-ACK` - Allocates resources to track the potential connection ```txt +--------+ +-----------------------+ | Client | | Server with Listening | | | | Socket on Target Port | +--------+ +-----------------------+ | | |--- SYN --------------------------> | | | | | | <--- SYN-ACK --------------------- | | | | | | | | | +--------+ +-----------------------+ | Client | | Allocates Resources to| | | | Track Potential Conn. | +--------+ +-----------------------+ ``` ### Server without a listening socket on the targeted port Flow: - Receives `SYN` packet - Responds with `RST-ACK` (Reset-Acknowledge) - No resources are allocated for connection tracking ```txt +--------+ +-----------------------+ | | | Server without | | Client | | Listening Socket on | | | | Target Port | +--------+ +-----------------------+ | | |--- SYN --------------------------> | | | | | | <--- RST-ACK --------------------- | | | | | | | | | +--------+ +------------------------+ | Client | | No Resources Allocated | | | | for Connection Tracking| +--------+ +------------------------+ ``` The `RST-ACK` response in the second scenario is the server's way of saying, **"There's no service listening on this port, so don't attempt to establish a connection."** This behavior is a fundamental aspect of TCP/IP networking and plays a crucial role in network security and resource management. 
As we can see, by sending a simple `SYN` packet to a server on a specific port, we can determine if the port is open or not. This is the basis for one of the most popular techniques for port scanning: the Stealth SYN Scan. ## The Stealth SYN Scan: A Popular Port Scanning Technique Indeed, the behavior we've discussed forms the basis for one of the most popular port scanning techniques: the Stealth SYN Scan, also known as a `half-open scan`. This technique is called a half-open scan because it doesn’t actually open a full TCP connection. Instead, a SYN scan only sends the initial SYN packet and examines the response. If a `SYN/ACK` packet is received, it indicates that the port is open and accepting connections. This is recorded, and an `RST` packet is sent to tear down the connection. To recap: **Stealth SYN Scan Process:** - The scanner sends a `SYN` packet to a target port. - If the port is open (i.e., a service is listening): - The target responds with a `SYN-ACK`. - The scanner immediately sends an `RST` to terminate the connection. - If the port is closed: - The target responds with an `RST-ACK`. **Why it's called "Stealth":** - The scan doesn't complete the full TCP handshake. - It's less likely to be logged by basic firewall configurations. - It can potentially bypass certain intrusion detection systems (IDS). The SYN scan is popular for its speed, capable of scanning thousands of ports per second on a fast network. It's unobtrusive and stealthy, as it never completes TCP connections. ### Nmap in Action: Demonstrating SYN Scans As you may know, `Nmap` is a powerful network scanning and discovery tool widely used by security professionals and system administrators. Using Nmap, a `SYN scan` can be performed with the command-line option `-sS`. The program must be run as root since it requires raw-packet privileges. This is the default TCP scan when such privileges are available. 
Therefore, if you run Nmap as root, this technique will be used by default, and you don't need to specify the `-sS` option. So these commands are equivalent: ```sh $ sudo nmap -p- 192.168.2.107 $ sudo nmap -sS -p- 192.168.2.107 ``` Here, we are simply instructing Nmap to perform a SYN scan on the target `192.168.2.107` using `-p-` to scan all ports from 1 through 65535. However, we can also specify individual ports or a range of ports. ```bash sudo nmap -sS -p9000-9500 192.168.2.107 Starting Nmap 7.95 ( https://nmap.org ) at 2024-06-29 14:56 EDT Nmap scan report for dlm (192.168.2.107) Host is up (0.0085s latency). Not shown: 499 closed tcp ports (reset) PORT STATE SERVICE 9090/tcp open ... 9100/tcp open ... ... ``` From the output above, we can see that the target machine has two open ports in the specific range from `9000 to 9500`. The remaining 499 ports are closed. This is because, as we know, Nmap received `RST` packets when it sent the initial `SYN` to those ports. ## Crafting Our Defense: eBPF and Rust Implementation In previous articles, I explained how to start projects with Rust-Aya, including using their scaffolding generator. If you need a refresher, feel free to revisit [Harnessing eBPF and XDP for DDoS Mitigation](https://www.kungfudev.com/blog/2023/11/21/ddos-mitigation-with-rust-and-aya) and [Uprobes Siblings - Capturing HTTPS Traffic](https://www.kungfudev.com/blog/2023/12/07/https-sniffer-with-rust-aya), or check the [Rust-Aya documentation](https://aya-rs.dev/book/). ### Setting Up the eBPF Program As we are going to use `XDP` to accomplish this trick, the initial part of the code is very similar to the example explained in the article [Harnessing eBPF and XDP for DDoS Mitigation](https://www.kungfudev.com/blog/2023/11/21/ddos-mitigation-with-rust-and-aya). In this code, we first validate if the packet is an `IPv4 packet` by examining the `ether_type` in the Ethernet header. 
If it's not IPv4, the packet is passed through without further processing. Then, we look at the IPv4 header to check if it's a `TCP packet`. Non-TCP packets are also allowed to pass. This way, our program focuses only on IPv4 TCP packets and then we store the TCP header in `tcp_hdr`. ```rust fn try_syn_ack(ctx: XdpContext) -> Result<u32, ExecutionError> { // Use pointer arithmetic to obtain a raw pointer to the Ethernet header at the start of the XdpContext data. let eth_hdr: *mut EthHdr = get_mut_ptr_at(&ctx, 0)?; // Check the EtherType of the packet. If it's not an IPv4 packet, pass it along without further processing // We have to use unsafe here because we're dereferencing a raw pointer match unsafe { (*eth_hdr).ether_type } { EtherType::Ipv4 => {} _ => return Ok(xdp_action::XDP_PASS), } // Using Ethernet header length, obtain a pointer to the IPv4 header which immediately follows the Ethernet header let ip_hdr: *mut Ipv4Hdr = get_mut_ptr_at(&ctx, EthHdr::LEN)?; // Check the protocol of the IPv4 packet. If it's not TCP, pass it along without further processing match unsafe { (*ip_hdr).proto } { IpProto::Tcp => {} _ => return Ok(xdp_action::XDP_PASS), } // Using the IPv4 header length, obtain a pointer to the TCP header which immediately follows the IPv4 header let tcp_hdr: *mut TcpHdr = get_mut_ptr_at(&ctx, EthHdr::LEN + Ipv4Hdr::LEN)?; ... } ``` ### Filtering Packets: Targeting Specific Ports Now that we've confirmed the received packet is a TCP packet, let's implement a simple filter in our eBPF program. This filter will instruct the program to interact only with packets whose destination port falls within the range of `9000 to 9500`. It's important to note that this is a basic implementation. In a production environment, we'd want to implement a more robust and flexible solution. For instance, we could: - Use eBPF to listen for the `bind` syscall in our server. 
- Utilize eBPF maps to track all open ports on our server, avoiding interference with legitimate services. - Alternatively, use eBPF maps to maintain a list of ports we want to simulate as open. > Again, we're using `unsafe` throughout the code, as we're directly manipulating memory through raw pointers. This allows us to alter the packets or inspect them as needed. For now, let's keep it simple and focus on our port range filter: ```rust // Check the destination port of the TCP packet. If it's not in the range 9000-9500, pass it along without further processing let port = unsafe { u16::from_be((*tcp_hdr).dest) }; match port { 9000..=9500 => {} _ => return Ok(xdp_action::XDP_PASS), } ``` This code snippet does the following: 1. We extract the destination port from the TCP header, converting it from network byte order (big-endian) to host byte order. 2. We use a `match` statement to check if the port falls within our target range. 3. If the port is in the range 9000-9500, we continue processing. 4. For any other port, we immediately return `XDP_PASS`, allowing the packet to continue its normal journey through the network stack. This filter allows us to focus our eBPF program's attention on a specific range of ports, which could be useful for various network management and security tasks. For example, we could use this to implement a `port knocking sequence`, set up a honeypot, or monitor for potential port scan attempts within a specific range. ### Identifying SYN Packets Now that we've confirmed the packet is destined for one of our target ports, our next step is to determine if it's a `SYN` packet. Here's how we can check for a SYN packet: ```rust // Check if it's a SYN packet let is_syn_packet = unsafe { match ((*tcp_hdr).syn() != 0, (*tcp_hdr).ack() == 0) { (true, true) => true, _ => false, } }; if !is_syn_packet { return Ok(xdp_action::XDP_PASS); } ``` Let's break down this code: 1. 
We define `is_syn_packet` by checking two conditions: - The SYN flag is set (`(*tcp_hdr).syn() != 0`) - The ACK flag is not set (`(*tcp_hdr).ack() == 0`) 2. A valid SYN packet in the initial TCP handshake should have the SYN flag set and the ACK flag unset. 3. If the packet is not a SYN packet, we immediately return `XDP_PASS`, allowing non-SYN packets to continue their normal path through the network stack. ### Crafting the SYN-ACK Response Now that we've identified a `SYN` packet targeting one of our ports of interest, we'll craft a `SYN-ACK` response. To do this efficiently, we'll modify the incoming packet in-place, transforming it into our response. ```rust // Swap Ethernet addresses unsafe { core::mem::swap(&mut (*eth_hdr).src_addr, &mut (*eth_hdr).dst_addr) } // Swap IP addresses unsafe { core::mem::swap(&mut (*ip_hdr).src_addr, &mut (*ip_hdr).dst_addr); } ``` Here's what this code accomplishes: 1. Ethernet Address Swap: - We exchange the source and destination MAC addresses in the Ethernet header. - This ensures our response packet will be routed back to the sender at the link layer. 2. IP Address Swap: - Similarly, we swap the source and destination IP addresses in the IP header. - This directs our response to the original sender at the network layer. 3. `core::mem::swap`: - This function efficiently exchanges the values of two mutable references without requiring a temporary variable. - It's particularly useful here as it keeps our code concise and performant. By modifying the existing packet, we're essentially "reflecting" it back to the sender, but with crucial changes that we'll make in the next steps to transform it into a valid `SYN-ACK` response. After swapping the Ethernet and IP addresses, we now need to modify the TCP header to transform our packet into a valid `SYN-ACK` response. This step is critical in simulating an open port and continuing the TCP handshake process. 
Here's how we accomplish this: ```rust // Modify TCP header for SYN-ACK unsafe { core::mem::swap(&mut (*tcp_hdr).source, &mut (*tcp_hdr).dest); (*tcp_hdr).set_ack(1); (*tcp_hdr).ack_seq = (u32::from_be((*tcp_hdr).seq).wrapping_add(1)).to_be(); (*tcp_hdr).seq = 1u32.to_be(); } ``` Let's break down these modifications: 1. Port Swap: - We exchange the source and destination ports, ensuring our response goes back to the correct client port. 2. Setting the ACK Flag: - We set the ACK flag using `set_ack(1)`. This, combined with the existing SYN flag, creates a `SYN-ACK` packet. 3. Acknowledgment Number: - We set the acknowledgment number to the incoming sequence number plus one. - This acknowledges the client's SYN and informs it of the next sequence number we expect. - Note the use of `from_be()` and `to_be()`: the addition happens in host byte order, and the result is stored back in network byte order. 4. Sequence Number: - We set our initial sequence number to 1 (converted to network byte order). - In a real TCP stack, this would typically be a random number for security reasons. > It's important to note that in our eBPF program, we're modifying packet headers without recalculating the checksums. In a production environment, this approach could lead to issues as the modified packets might be dropped by network stacks that verify checksums. For the sake of simplicity and focusing on the core concepts, we've omitted checksum recalculation in this demonstration. > > `Nmap` typically uses raw sockets for its SYN scans. Raw sockets bypass much of the normal network stack processing, including checksum verification in many cases. > > Also, in a production environment, you might want to consider additional factors. This is just an oversimplified version that demonstrates the basic concept of tricking a simple SYN scan. These modifications transform our incoming `SYN` packet into a `SYN-ACK` response, effectively simulating the behavior of an open port. This is a key step in our port scanning detection or simulation logic.
### Sending the Modified Packet After modifying our packet to create a valid `SYN-ACK` response, the final step is to send this packet back to the client. We accomplish this using the `XDP_TX` action. This action instructs the XDP framework to transmit the modified packet back out through the same network interface it arrived on. This action is particularly useful for applications like our port scan simulation, load balancers, firewalls, and other scenarios where rapid packet inspection and modification are crucial. ```rust Ok(xdp_action::XDP_TX) ``` By using `XDP_TX`, we're completing our port scanning response simulation in a highly efficient manner. This approach allows us to respond to SYN packets almost instantaneously, making our simulated open ports "virtually indistinguishable" from real ones in terms of response time. It's worth noting that this approach, while effective for the SYN scan scenario, doesn't account for more sophisticated scanning techniques that might use non-standard flag combinations. In a production environment, you might want to implement more comprehensive checks to detect various types of port scans. ### Putting It All Together: Running Our eBPF Program Full code: ```rust fn try_syn_ack(ctx: XdpContext) -> Result<u32, ExecutionError> { // Use pointer arithmetic to obtain a raw pointer to the Ethernet header at the start of the XdpContext data. let eth_hdr: *mut EthHdr = get_mut_ptr_at(&ctx, 0)?; // Check the EtherType of the packet. If it's not an IPv4 packet, pass it along without further processing // We have to use unsafe here because we're dereferencing a raw pointer match unsafe { (*eth_hdr).ether_type } { EtherType::Ipv4 => {} _ => return Ok(xdp_action::XDP_PASS), } // Using Ethernet header length, obtain a pointer to the IPv4 header which immediately follows the Ethernet header let ip_hdr: *mut Ipv4Hdr = get_mut_ptr_at(&ctx, EthHdr::LEN)?; // Check the protocol of the IPv4 packet.
If it's not TCP, pass it along without further processing match unsafe { (*ip_hdr).proto } { IpProto::Tcp => {} _ => return Ok(xdp_action::XDP_PASS), } // Using the IPv4 header length, obtain a pointer to the TCP header which immediately follows the IPv4 header let tcp_hdr: *mut TcpHdr = get_mut_ptr_at(&ctx, EthHdr::LEN + Ipv4Hdr::LEN)?; // Check the destination port of the TCP packet. If it's not in the range 9000-9500, pass it along without further processing let port = unsafe { u16::from_be((*tcp_hdr).dest) }; match port { 9000..=9500 => {} _ => return Ok(xdp_action::XDP_PASS), } // Check if it's a SYN packet let is_syn_packet = unsafe { match ((*tcp_hdr).syn() != 0, (*tcp_hdr).ack() == 0) { (true, true) => true, _ => false, } }; if !is_syn_packet { return Ok(xdp_action::XDP_PASS); } // Swap Ethernet addresses unsafe { core::mem::swap(&mut (*eth_hdr).src_addr, &mut (*eth_hdr).dst_addr) } // Swap IP addresses unsafe { core::mem::swap(&mut (*ip_hdr).src_addr, &mut (*ip_hdr).dst_addr); } // Modify TCP header for SYN-ACK unsafe { core::mem::swap(&mut (*tcp_hdr).source, &mut (*tcp_hdr).dest); (*tcp_hdr).set_ack(1); (*tcp_hdr).ack_seq = (u32::from_be((*tcp_hdr).seq).wrapping_add(1)).to_be(); (*tcp_hdr).seq = 1u32.to_be(); } Ok(xdp_action::XDP_TX) } ``` Now that we've implemented our eBPF program to simulate open ports within our chosen range, it's time to see it in action. This demonstration will showcase the power of eBPF in manipulating network behavior at a low level. First, let's run our eBPF program. Open a terminal and execute the following command: ```bash $ RUST_LOG=info cargo xtask run -- -i wlp5s0 [2024-06-29T19:57:33Z INFO syn_ack] Waiting for Ctrl-C... ``` With our eBPF program running, let's perform a port scan using `nmap` to see the effects: ```bash $ sudo nmap -sS -p9000-9500 192.168.2.107 Starting Nmap 7.95 ( https://nmap.org ) at 2024-06-29 15:57 EDT Nmap scan report for dlm (192.168.2.107) Host is up (0.0084s latency).
PORT STATE SERVICE 9000/tcp open cslistener 9001/tcp open tor-orport 9002/tcp open dynamid 9003/tcp open unknown 9004/tcp open unknown 9005/tcp open golem 9006/tcp open unknown 9007/tcp open ogs-client 9008/tcp open ogs-server ... 9491/tcp open unknown 9492/tcp open unknown 9493/tcp open unknown 9494/tcp open unknown 9495/tcp open unknown 9496/tcp open unknown 9497/tcp open unknown 9498/tcp open unknown 9499/tcp open unknown 9500/tcp open ismserver ``` As we can see, Nmap reports all ports in the range `9000-9500` as open. This is exactly what our eBPF program is designed to do: respond to `SYN` packets on these ports with `SYN-ACK`, simulating open ports. In a production environment, you would want to implement additional features such as logging, more sophisticated decision-making logic, and possibly integration with other security systems. This example serves as a foundation for understanding how eBPF can be used to manipulate network traffic at a fundamental level. For a deeper dive and hands-on experience, all the code discussed is available in my [repository](https://github.com/douglasmakey/syn-ack/tree/main). Feel free to explore, experiment, and comment! ## To conclude In this article, we've explored the intricacies of TCP handshakes and port scanning techniques, culminating in a practical demonstration of how eBPF and Rust can be used to manipulate network behavior at a fundamental level. By implementing a program that simulates open ports, we've showcased the power and flexibility of eBPF in network security applications. This approach not only provides a means to confuse potential attackers but also opens up possibilities for more advanced network management and security tools. Thank you for reading along. This blog is a part of my learning journey and your feedback is highly valued. There's more to explore and share regarding eBPF, so stay tuned for upcoming posts.
Your insights and experiences are welcome as we learn and grow together in this domain. **Happy coding!**
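As an appendix: the header rewrite at the heart of the XDP program can be mirrored in plain JavaScript to make the field manipulation easier to follow. This is an illustrative sketch only — the field names below are invented for the example and are not the aya/eBPF structs used above:

```javascript
// Illustrative sketch (hypothetical field names): given the relevant fields of
// an incoming SYN packet, compute the fields of the spoofed SYN-ACK reply.
function buildSynAck(syn) {
  return {
    // Ethernet and IP addresses are swapped so the reply goes back to the scanner
    ethSrc: syn.ethDst,
    ethDst: syn.ethSrc,
    ipSrc: syn.ipDst,
    ipDst: syn.ipSrc,
    // TCP ports are swapped as well
    srcPort: syn.dstPort,
    dstPort: syn.srcPort,
    // A SYN-ACK has both the SYN and ACK flags set
    syn: true,
    ack: true,
    // Acknowledge the client's initial sequence number + 1 (mod 2^32)
    ackSeq: (syn.seq + 1) >>> 0,
    // Our own initial sequence number is arbitrary; the eBPF program uses 1
    seq: 1,
  };
}

// e.g. buildSynAck({ srcPort: 51515, dstPort: 9000, seq: 41 }).ackSeq === 42
```

A scanner that receives such a reply marks the probed port as open, which is exactly what the `nmap` run above shows.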
douglasmakey
1,906,084
Add Art to your Agile Retrospectives 🧑‍🎨🎨
Hey there 👋 Are your retrospectives starting to feel a bit stale? Well, it's time to grab your...
0
2024-06-29T21:44:58
https://dev.to/mattlewandowski93/add-art-to-your-agile-retrospectives-2noi
agile, scrum, productivity, management
Hey there 👋 Are your retrospectives starting to feel a bit stale? Well, it's time to grab your digital crayons and get ready to add some fun into your team meetings! ## Why Draw in Retros? Incorporating drawings into your retrospectives isn't just about showcasing your (or lack of) artistic skills. It's about: 1. Breaking the ice and lightening the mood 2. Encouraging creative thinking 3. Boosting team engagement 4. Creating memorable experiences ## Quick and Quirky Drawing Ideas 1. **Sprint Snapshot**: 60 seconds to draw your sprint summary. Go! 2. **Agile Pictionary**: Can you draw "technical debt" without words? 3. **Team Spirit Animal**: What creature embodies your sprint? ![GIF of a team Drawing](https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExOTMwZzZtcDF0a2F4bmc1dmhnZms0a2hraGMweDdkeG5tc2UyM2FwcSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/Do2QvSFIYujw0IbVRz/giphy.gif) These simple drawing activities can transform your retros from mundane meetings to laughter-filled brainstorming sessions. They're not about artistic perfection – they're about creativity, encouraging participation, and building stronger team bonds. But why stop at drawings? Gifs, stickers, and more can all play a part in making your retrospectives more engaging and productive. Curious about more fun retrospective ideas and how to implement them? [Read the full article here](https://kollabe.com/posts/agile-retrospective-with-drawings) to discover: - More hilarious drawing game ideas - The unexpected benefits of adding fun to your retros - Tools that can help you bring drawings (and more) into your retrospectives Remember, in Agile, it's not about being the best artist – it's about continuous improvement and having fun along the way. So, are you ready to doodle your way to better retros? 🎨😄
mattlewandowski93
1,906,081
Chrome removing third party cookies
I was working on authentication and when I inspected my cookie in my client and I saw this warning...
0
2024-06-29T21:41:08
https://dev.to/ayobami/chrome-removing-third-party-cookies-42g0
node, webdev, beginners, tutorial
I was working on authentication, and when I inspected my cookie in the client I saw this warning from Google Chrome [insert image here] that third-party cookies are being deprecated and will be blocked in the future. Google is working to stop third-party tracking via cookies ([google blog](https://blog.google/products/chrome/privacy-sandbox-tracking-protection/)), a feature it calls Tracking Protection.

In my case I was hosting the backend and the frontend on separate servers: the frontend on Vercel and the backend on Render. When the backend sends the cookie to the client, since the two are not on the same server and hence have different domains ([an article explaining](https://web.dev/articles/same-site-same-origin)), the cookie is treated as a third-party cookie.

##### Cookie prefix

The solution I implemented came from reading the MDN documentation ([mdn_doc](https://developer.mozilla.org/en-US/docs/Web/HTTP/Cookies)). I added the prefix `__Host-` to the cookie name. `__Host-` is called a cookie prefix; there are two types, `__Host-` and `__Secure-`.

`__Host-` means the cookie is domain-locked: a cookie set with this prefix is only accepted if it is marked with the `Secure` attribute, was sent from a secure origin (`https`), does not include a `Domain` attribute, and has its `Path` set to `/`. So my code looks like this now. After modifying my cookie options, the warning did not show again.

```js
const cookieOptions = {
  maxAge: 60 * 60 * 24 * 30 * 1000, // 30 days
  httpOnly: true, // prevents client scripts from accessing the cookie
  secure: process.env.NODE_ENV === "production", // the cookie should only be sent over a secure https connection
  sameSite: process.env.NODE_ENV === "development" ? "Lax" : "None",
  path: "/",
};

res
  .cookie(
    process.env.NODE_ENV === "development"
      ? "sage_warehouse_token"
      : "__Host-sage_warehouse_token",
    'refreshToken',
    cookieOptions
  )
  .header("Authorization", 'accessToken');
```

If anyone else has encountered this same issue, I would love to know how you resolved it. I am learning backend development with `express.js`, which is fun. I am currently working on an e-commerce application to solidify my knowledge as I learn new concepts. If you are interested in learning backend development and you don't want to do it alone, you can check out [HNG](https://hng.tech/internship), where you will work on tasks with peers and can ask mentors for help as you go along in your tech journey. They also have a [premium version](https://hng.tech/premium) where you can earn a certificate, and it comes with some perks.
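As a quick illustration of the prefix rules described above, here is a small helper (a hypothetical function of my own, not part of Express or MDN) that checks whether a cookie name and its options satisfy the `__Host-` constraints:

```javascript
// Hypothetical helper: returns true only if the cookie name carries the
// `__Host-` prefix and the options satisfy its rules: the Secure attribute
// is set, Path is "/", and no Domain attribute is present.
function isValidHostPrefixCookie(name, options) {
  return (
    name.startsWith("__Host-") &&
    options.secure === true &&
    options.path === "/" &&
    options.domain === undefined
  );
}

isValidHostPrefixCookie("__Host-token", { secure: true, path: "/" }); // true
isValidHostPrefixCookie("__Host-token", { secure: true, path: "/", domain: "example.com" }); // false: Domain set
```

Running a check like this against your `cookieOptions` before calling `res.cookie` can catch a misconfigured prefix cookie early, since browsers silently reject `__Host-` cookies that break these rules.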
ayobami
1,906,080
Perfecting the Hook Shot A Classic Scoring Technique
Analyze the mechanics of the hook shot, focusing on its use in the post and advantages over defenders.
0
2024-06-29T21:38:14
https://www.sportstips.org/blog/Basketball/Center/perfecting_the_hook_shot_a_classic_scoring_technique
basketball, hookshot, postmoves, scoringtechniques
## Perfecting the Hook Shot: A Classic Scoring Technique

When it comes to basketball, evolving your arsenal of scoring techniques can set you apart from the average player. One of the most underutilized yet effective moves in today's game is the **hook shot**. Though it might remind you of basketball legends like Kareem Abdul-Jabbar or Hakeem Olajuwon, mastering this classic technique can give modern players a significant edge, particularly in the post.

### The Mechanics of the Hook Shot

Perfecting the hook shot involves a combination of footwork, body positioning, and arm movement. Here's a detailed breakdown of the mechanics:

| Mechanic | Description |
|----------|-------------|
| **Footwork** | Begin with a strong pivot foot. Ideally, your inside foot remains planted while you use the outside leg to create space. |
| **Body Positioning** | Position your body between the defender and the ball. Keep the ball high to shield it from defenders. |
| **Arm Movement** | Your shooting arm should form a hook shape. Use your wrist to flick the ball, aiming for a high arc. |
| **Balance** | Maintain a stable base with knees slightly bent. Use your non-shooting arm to balance and ward off defenders. |

### Execution Steps

1. **Establish Position**: Get the ball in the low post with your back to the basket.
2. **Pivot and Create Space**: Use your pivot foot to turn towards the basket while keeping the ball high and protected.
3. **Initiate the Hook**: Extend your shooting arm, forming a 'hook'. Ensure your non-shooting arm is extended outward to keep the defender at bay.
4. **Release with a Flick**: At the peak of your jump, flick your wrist to give the ball a high arc, aiming for a soft touch off the backboard or swish.
### Advantages Over Defenders

The hook shot carries several advantages that make it an invaluable addition to any player's skill set:

- **Height Advantage**: The upward motion and high release point make it difficult for defenders to block, even for taller opponents.
- **Versatility**: It can be executed with either hand, making you unpredictable and versatile in the post.
- **Consistent Scoring**: The hook shot is reliable due to its close proximity to the basket and the control you maintain over the ball.

### Coaching Tips

For coaches aiming to develop their players' hook shots, consider these drills and pointers:

1. **Footwork Drills**: Practice pivoting and creating space using cones or defenders to simulate game situations.
2. **Shooting Drills**: Teach players to shoot over obstacles, improving their ability to maintain a high release point.
3. **Repetition is Key**: Consistent practice with both hands helps embed muscle memory and increases confidence.
4. **Video Analysis**: Use film to show players the correct form and various scenarios where the hook shot is effectively utilized.

### Player Insights

Players who have successfully integrated the hook shot into their repertoire often share that it increases their scoring options and makes them a more formidable force in the paint. Here are some quotes and tips from seasoned veterans:

> "Mastering the hook shot gave me an edge. It's like having a secret weapon in my back pocket." - [Player Name]

> "Focus on keeping your movements fluid and your release high. Consistency will follow." - [Player Name]

By incorporating these insights and focusing on the core mechanics, both players and coaches can elevate their game through the timeless and effective hook shot.
quantumcybersolution
1,906,079
Don't Pray To The Duck
I think every programmer out there has heard of rubber duck debugging. If you have not heard of it, I...
0
2024-06-29T21:37:08
https://dev.to/thesimpledev/dont-pray-to-the-duck-2kbj
rubberduck, debugging, educational
I think every programmer out there has heard of rubber duck debugging. If you have not heard of it, I will cover the basics here. I want to share with you some very important advice a co-worker gave to me when I pointed out that talking to him seemed to help more than rubber duck debugging, even though I realized what I was doing wrong while talking to him before he provided me any feedback. That's when he introduced me to the concept: Talk to the Duck, Don't Pray to the Duck.

Something about walking through the problem verbally makes it easier to spot errors.

## How to properly utilize a rubber duck

1. Acquire a rubber duck. ([Amazon Has a Great Selection](https://amzn.to/3WgL9VK))
2. Place said rubber duck on the desk in front of you.
3. Start coding.
4. When you reach a bug, explain to your rubber duck what the code is supposed to do, then go into detail and explain your code line by line.
   1. Here is the important part: **You must do so out loud**.
   2. Remember, we are talking to the rubber duck, not praying to the rubber duck.
5. As you tell your rubber duck all about your problems, you may suddenly realize what the problem was.
   1. Remember, code is 10 times harder to debug than to write.
   2. Since you wrote the most clever code possible, you are by definition not smart enough to debug it.
   3. The duck is beaming this information straight into your brain.

Author's Note: It HAS to be a rubber duck. No other type of object will work. I swear a rubber duck is not making me write this.

Remember, respect your rubber duck. The Duck has a PhD.

Originally published at [TheSimpleDev.com](https://thesimpledev.com/blog/dont-pray-to-the-duck/)
thesimpledev
1,906,078
Secure Application Software Development
Intro to Application Security A developer-focused series about the fundamentals...
0
2024-06-29T21:32:37
https://dev.to/owasp/secure-application-software-development-59ad
sdlc, softwaredevelopment, cybersecurity, beginners
# Intro to Application Security

## A developer-focused series about the fundamentals of cybersecurity

In the face of increasing cyberattacks, application security is becoming critical, requiring developers to integrate robust measures and best practices to build secure applications. But what exactly does the term "secure application" mean? Let's take a brief look at some notable security incidents in history:

#### T-Mobile data leak

In January 2023, T-Mobile was attacked via a vulnerability in an API, compromising the data of roughly 37 million customers. The breach allowed attackers to access users' **confidential** information, such as names, emails, and phone numbers.

#### Industrial Control Systems Attack

In 2019, a Russian espionage group named "Turla" attacked an industrial facility in Europe. After gaining access to the industrial control systems, the group started manipulating sensor data, such as temperature and pressure. The attackers' main goal was to break the **integrity** of the data in order to cause incorrect operational decisions and lead to incidents.

#### Attack on Bandwidth.com

Bandwidth.com suffered a Distributed Denial of Service (DDoS) attack in October 2021. The attack compromised the **availability** of the service, making its services inaccessible to users. Due to the interruption of services, the company suffered a significant financial impact, losing an estimated $9-12 million.

___

Each of these security incidents broke one of the core principles of information security: **confidentiality**, **integrity**, and **availability**. These three principles are called the **CIA Triad**:

**C** - Confidentiality: Only authorized entities have access to a given resource or piece of information, and no one else.

**I** - Integrity: Data retains its accuracy and consistency during its entire lifecycle, being protected from unauthorized alteration or destruction.
**A** - Availability: Even in the event of failures or attacks, data and services remain continuously available to authorized users.

Upholding these principles is what makes an application secure. This is an ongoing process that begins with planning and continues through maintenance. The goal of **AppSec** is to **ensure security at every stage of the software development lifecycle (SDLC)**.

## Software Development Lifecycle (SDLC)

The software development lifecycle is a step-by-step process used to create software in a systematic and efficient way. It consists of 6 phases:

![sdlc](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qr2d7pzx2jqmzc9ldlk8.png)

**Requirements**: Setting goals, defining the project's scope, and understanding what users need from the software.

**Design**: Planning the structure and layout of the system, ensuring it meets all requirements.

**Development**: Writing the actual code to build the software.

**Testing**: Checking the software to ensure it works correctly and is free of bugs.

**Deployment**: Releasing the software for users to access and use.

**Maintenance**: Updating and fixing the software as needed after it is in use.

We aim to implement security at each phase of the SDLC because the earlier vulnerabilities are detected, the lower the cost and effort required to fix them, preventing costly and complex issues later. The approximate cost of mitigating a security issue at each phase can be illustrated as follows:

![sdlc_cost](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ad27zlwyun3s088axwv.png)

## The Role of AppSec Engineers

An AppSec engineer is one of the most important stakeholders responsible for security. They should know application-layer methodologies for detecting and mitigating malicious traffic, in order to build systems where potential threats are recognized and remediated before they can cause harm.
In addition to preventive measures, AppSec engineers play a big role in incident response. They collaborate with incident response teams and provide expertise on application-specific security concerns. An AppSec engineer's involvement is essential for detection, mitigation, and post-incident analysis, helping to develop strategies to prevent incidents in the future.

In this series of articles we will focus on best security practices at each phase of the SDLC, explore techniques such as JA3, JA4+, and HTTP/2 fingerprinting, and cover the fundamentals of incident response.

## Series Roadmap

Please note the roadmap is subject to change.

- Introduction to Application Security
- Security in Building Requirements
- Secure Design Principles
- Secure Coding Principles
- Security in Testing
- Secure Deployment & Maintenance
- Application Layer Fingerprinting
- Fundamentals of Incident Response
mamicidal
1,906,076
The Fermi Paradox Where is Everybody
Dive into the enigmatic Fermi Paradox and explore the various theories that attempt to explain why, despite the vastness of the cosmos, we have yet to encounter extraterrestrial civilizations.
0
2024-06-29T21:32:36
https://www.elontusk.org/blog/the_fermi_paradox_where_is_everybody
astrophysics, spaceexploration, fermiparadox
# The Fermi Paradox: Where is Everybody?

## Introduction

Have you ever looked up at the night sky and wondered if we're truly alone in the universe? It turns out, scientists and researchers have been asking the same question for decades. This curiosity is encapsulated in the Fermi Paradox—a term coined to articulate the contradiction between the high probability of extraterrestrial life and the conspicuous lack of evidence for, or contact with, such civilizations. Named after physicist Enrico Fermi, this paradox poses the simple yet profound question: "Where is everybody?"

## The Scale of the Universe

Before diving into the Fermi Paradox itself, let's frame the discussion with some mind-boggling statistics:

- **The Milky Way Galaxy** alone contains between 100 to 400 billion stars.
- Recent **exoplanet discoveries** suggest that many of these stars have planetary systems.
- **Life-sustaining conditions** are not unique to Earth—numerous exoplanets lie within the "habitable zone" of their stars.

Given these factors, it seems probabilistically inevitable that life should exist elsewhere. This brings us to the crux of the Fermi Paradox.

## The Paradox Explained

Enrico Fermi posed his paradox during a casual lunchtime conversation in 1950. Given the estimated number of stars, planets, and the age of the cosmos, he concluded that an advanced civilization could theoretically colonize the entire galaxy in a relatively short period—cosmically speaking. So, why haven't we seen any evidence of such civilizations?

Several theories attempt to resolve the Fermi Paradox, each fascinating in its own right.

## Theories to Resolve the Fermi Paradox

### 1. **The Rare Earth Hypothesis**

This theory suggests that Earth-like planets with all the necessary conditions for life are extremely rare.
While microbial life might be common, advanced civilizations could be exceedingly rare due to a series of highly improbable events that led to intelligent life on Earth.

### 2. **The Great Filter**

The Great Filter hypothesis posits that there is some stage in the evolutionary process that is extraordinarily unlikely or impossible for most life forms to surpass. This "filter" might lie behind us (e.g., the development of complex cells) or ahead of us (e.g., avoiding self-destruction through technological means). The theory suggests we may be one of the few species to have ever passed this filter.

### 3. **We're Among the First**

Perhaps we are among the earliest intelligent civilizations to arise. The universe is around 13.8 billion years old, but conditions suitable for life may have only existed for a fraction of that time. We might simply be pioneers on the cosmic stage.

### 4. **The Zoo Hypothesis**

This intriguing theory suggests that extraterrestrial civilizations are aware of us but have chosen not to make contact. According to this hypothesis, Earth might be part of a galactic "zoo" or nature reserve where alien observers watch us from a distance, much like humans observe animals in a wildlife preserve.

### 5. **Technological Singularity**

Another theory speculates that advanced civilizations may reach a point of technological singularity, where their development is so advanced that their presence becomes indistinguishable from natural cosmic phenomena. Essentially, they might be here, but they are beyond our current understanding and detection capabilities.

### 6. **Self-Destruction**

Human history is rife with conflicts and existential threats, such as nuclear war or ecological collapse. This theory suggests that advanced civilizations might commonly self-destruct before or shortly after developing the capability for interstellar communication and travel.

### 7.
**They are Using Technology We Can't Detect**

Advanced civilizations might use forms of communication or technology that are beyond our current scientific comprehension. Just as early humans couldn't fathom radio waves, we might be missing the signals due to our technological limitations.

### 8. **Dark Forest Theory**

Inspired by Liu Cixin's *The Dark Forest*, this theory outlines a cosmos where every civilization is both hunter and hunted. To avoid extermination, civilizations remain silent, fearing that revealing their location might draw the attention of potential predators.

## Conclusion

The Fermi Paradox remains one of the most profound questions in modern science, bridging astrophysics, philosophy, and even sociology. Each proposed theory offers a tantalizing glimpse into the mysteries of the cosmos and our place within it. As our technology advances and our understanding deepens, perhaps one day, we'll uncover the answer to Fermi's enduring question: **"Where is everybody?"**

Whether we are destined to find other intelligent beings or to remain solitary observers, the quest to resolve the Fermi Paradox continues to inspire wonder and curiosity about the universe we inhabit. So, the next time you gaze up at the night sky, remember the vastness of the cosmos and the myriad possibilities it holds.🚀🌌
quantumcybersolution