Dataset schema:

- id: int64 (5 to 1.93M)
- title: string (0-128 chars)
- description: string (0-25.5k chars)
- collection_id: int64 (0 to 28.1k)
- published_timestamp: timestamp[s]
- canonical_url: string (14-581 chars)
- tag_list: string (0-120 chars)
- body_markdown: string (0-716k chars)
- user_username: string (2-30 chars)
1,877,012
T-Shirt Tuesday
Welcome back to T-Shirt Tuesday, our weekly post inspired by the creative ideas in our Discord...
0
2024-06-04T19:04:58
https://dev.to/buildwebcrumbs/t-shirt-tuesday-44g1
Welcome back to T-Shirt Tuesday, our weekly post inspired by the creative ideas in [our Discord Community](https://discord.gg/4PWXpPd8HQ). Each week, we bring fun and unique phrases that you'd love to see on a t-shirt. This week's cover image features a piece of code that many developers live by!

```js
if (!brain.empty) {
  keepCoding();
} else {
  orderCoffee();
}
```

Now, it's your turn: **What witty or inspirational phrase would you proudly wear on a t-shirt?** Drop your suggestions in the comments below!

[And give us a ⭐ on GitHub if you haven't yet!](https://github.com/webcrumbs-community/webcrumbs)
opensourcee
1,877,011
How to Capitalize String Python Dataframe Pandas
There are two main ways to capitalize strings in a pandas DataFrame: 1....
0
2024-06-04T19:02:38
https://pleypot.com/blog/how-to-capitalize-string-python-dataframe-pandas/
python, datascience, webdev
There are two main ways to capitalize strings in a pandas DataFrame:

### 1. str.capitalize():

This method capitalizes the first character of each string element and lowercases the rest. (If you want every word capitalized instead, use `.str.title()`.)

```python
import pandas as pd

# Create a DataFrame
df = pd.DataFrame({'text': ['hello world', 'python programming', 'pandas']})

# Capitalize the first character of each string
df['text'] = df['text'].str.capitalize()
print(df)
```

![Capitalize Dataframe Pandas](https://pleypot.com/wp-content/uploads/2024/05/Python-Replit-4-1.png)

### 2. Vectorized string methods:

You can achieve the same capitalization using vectorized string methods: `.str.upper()` on the first character (selected with string slicing) combined with `.str.lower()` on the remainder.

```python
import pandas as pd

# Create a DataFrame
df = pd.DataFrame({'text': ['hello world', 'python programming', 'pandas']})

# Capitalize the first character and lowercase the rest
df['text'] = df['text'].str[0].str.upper() + df['text'].str[1:].str.lower()
print(df)
```

This will be the **output**:

```
                 text
0         Hello world
1  Python programming
2              Pandas
```
saim_ansari
1,876,662
BST (Binary Search Tree) and AVL Tree, Data Structures: (Trees, Part II)
Binary Search Tree (BST) A Binary Search Tree (BST) is a binary tree data structure where...
0
2024-06-04T19:01:10
https://dev.to/harshm03/bst-binary-search-tree-and-avl-tree-data-structures-trees-part-ii-2lp8
dsa, datastructures
## Binary Search Tree (BST)

A Binary Search Tree (BST) is a binary tree data structure where each node has at most two children, referred to as the left child and the right child. The key property of a BST is that for every node, all elements in its left subtree are less than or equal to the node's value, and all elements in its right subtree are greater than the node's value. This property allows for efficient search, insertion, and deletion operations.

### Key Properties of BST:

1. **Ordering Property**: For any node `N`, all elements in the left subtree of `N` are less than or equal to the value of `N`, and all elements in the right subtree of `N` are greater than the value of `N`.
2. **No Duplicates**: BSTs typically do not allow duplicate elements. If a duplicate element is encountered during insertion, it can either be ignored or handled in a specific manner based on the implementation.
3. **Efficient Operations**: BSTs support efficient search, insertion, and deletion operations. Searching for an element in a BST has a time complexity of O(log n) on average, where `n` is the number of nodes in the tree. This efficiency is due to the binary search property, which allows us to eliminate half of the nodes at each step of the search.
4. **In-order Traversal**: Performing an in-order traversal of a BST results in a sorted sequence of elements. This property is useful for tasks such as finding the k-th smallest/largest element in the tree.

### Advantages of BST:

- **Efficient Searching**: BSTs provide efficient searching, making them suitable for applications where fast retrieval of data is required, such as databases and search algorithms.
- **Ordered Structure**: The ordering property of BSTs allows for easy traversal of elements in sorted order, facilitating tasks like range queries and finding minimum and maximum elements.
- **Dynamic Structure**: BSTs can dynamically grow and shrink based on the number of elements inserted or deleted, making them adaptable to changing data sets.

### Limitations of BST:

- **Imbalanced Trees**: In certain scenarios, such as when elements are inserted in sorted order, BSTs can become highly imbalanced, leading to degraded performance. This issue can be mitigated through techniques like AVL trees and Red-Black trees, which ensure balanced tree structures.
- **Performance Degradation**: Although BSTs offer efficient operations on average, their performance can degrade to O(n) in the worst case, particularly for skewed trees. This can occur if elements are inserted in sorted order, resulting in a linear chain-like structure.

Binary Search Trees are fundamental data structures with a wide range of applications due to their efficiency and versatility in storing and retrieving data. Understanding their properties and operations is essential for designing and implementing efficient algorithms and applications.

### Implementation of Binary Search Tree in C++

Binary Search Trees (BSTs) are a fundamental data structure used for efficient storage and retrieval of data. They maintain a sorted order of elements, allowing for fast search, insertion, and deletion operations. Below is a class-based implementation of a Binary Search Tree in C++, including attributes, constructor, destructor, and a method for inserting elements into the tree.
```cpp
#include <iostream>
using namespace std;

class BST {
private:
    struct Node {
        int data;
        Node* left;
        Node* right;
        Node(int val) : data(val), left(nullptr), right(nullptr) {}
    };

    Node* root;

public:
    BST() : root(nullptr) {}

    ~BST() {
        clear(root);
    }

    void insert(int value) {
        root = insertNode(root, value);
    }

private:
    Node* insertNode(Node* node, int value) {
        if (node == nullptr) {
            return new Node(value);
        }
        if (value < node->data) {
            node->left = insertNode(node->left, value);
        } else {
            node->right = insertNode(node->right, value);
        }
        return node;
    }

    void clear(Node* node) {
        if (node != nullptr) {
            clear(node->left);
            clear(node->right);
            delete node;
        }
    }
};
```

This implementation provides the basic framework for a Binary Search Tree. The `BST` class contains a private nested `Node` struct to represent individual nodes of the tree. It includes a constructor to initialize the root node to `nullptr` and a destructor to deallocate memory by recursively deleting all nodes.

The `insert()` method allows for the insertion of elements into the BST while maintaining its binary search property. It recursively traverses the tree to find the appropriate position for the new element and creates a new node with the given value.

This code serves as a foundation for building more advanced BST functionalities, such as search, deletion, traversal, and other operations. Further enhancements and optimizations can be made based on specific requirements and use cases.

### Operations on Binary Search Tree (BST)

Binary Search Trees (BSTs) are versatile data structures that facilitate efficient manipulation and retrieval of data. By maintaining the binary search property, BSTs enable essential operations such as searching, insertion, and deletion.

#### Search Operation

Searching in a BST involves traversing the tree from the root node to find the target element. The algorithm compares the target value with the value of each node, guiding the search path towards the desired element.
```cpp
bool search(Node* root, int target) {
    if (root == nullptr) {
        return false;
    }
    if (root->data == target) {
        return true;
    }
    if (target < root->data) {
        return search(root->left, target);
    } else {
        return search(root->right, target);
    }
}
```

`Time Complexity: O(h) (worst case: O(n))`

#### Insertion Operation

Inserting a new element into a BST involves finding the appropriate position for the new node based on its value. The algorithm recursively traverses the tree, adjusting the structure to maintain the binary search property.

```cpp
Node* insert(Node* root, int value) {
    if (root == nullptr) {
        return new Node(value);
    }
    if (value < root->data) {
        root->left = insert(root->left, value);
    } else if (value > root->data) {
        root->right = insert(root->right, value);
    }
    return root;
}
```

`Time Complexity: O(h) (worst case: O(n))`

#### Deletion Operation

Deleting an element from a BST requires locating the node containing the target value and adjusting the tree structure while preserving the binary search property.

```cpp
Node* deleteNode(Node* root, int key) {
    if (root == nullptr) {
        return root;
    }
    if (key < root->data) {
        root->left = deleteNode(root->left, key);
    } else if (key > root->data) {
        root->right = deleteNode(root->right, key);
    } else {
        if (root->left == nullptr) {
            Node* temp = root->right;
            delete root;
            return temp;
        } else if (root->right == nullptr) {
            Node* temp = root->left;
            delete root;
            return temp;
        }
        Node* temp = minValueNode(root->right);
        root->data = temp->data;
        root->right = deleteNode(root->right, temp->data);
    }
    return root;
}
```

`Time Complexity: O(h) (worst case: O(n))`

These operations are fundamental for maintaining the integrity of a BST and ensuring efficient data management. Understanding their complexities aids in designing optimal algorithms and applications utilizing BSTs.
### Full Code Implementation of BST

A Binary Search Tree (BST) is a fundamental data structure that maintains a sorted order of elements, allowing for efficient search, insertion, and deletion operations. This implementation provides a comprehensive class-based approach to constructing and manipulating BSTs in C++.

```cpp
#include <iostream>
using namespace std;

class BST {
private:
    struct Node {
        int data;
        Node* left;
        Node* right;
        Node(int val) : data(val), left(nullptr), right(nullptr) {}
    };

    Node* root;

public:
    BST() : root(nullptr) {}

    ~BST() {
        clear(root);
    }

    void insert(int value) {
        root = insertNode(root, value);
    }

    bool search(int target) {
        return searchNode(root, target);
    }

    void remove(int key) {
        root = deleteNode(root, key);
    }

private:
    Node* insertNode(Node* node, int value) {
        if (node == nullptr) {
            return new Node(value);
        }
        if (value < node->data) {
            node->left = insertNode(node->left, value);
        } else if (value > node->data) {
            node->right = insertNode(node->right, value);
        }
        return node;
    }

    bool searchNode(Node* node, int target) {
        if (node == nullptr) {
            return false;
        }
        if (node->data == target) {
            return true;
        }
        if (target < node->data) {
            return searchNode(node->left, target);
        } else {
            return searchNode(node->right, target);
        }
    }

    Node* deleteNode(Node* node, int key) {
        if (node == nullptr) {
            return node;
        }
        if (key < node->data) {
            node->left = deleteNode(node->left, key);
        } else if (key > node->data) {
            node->right = deleteNode(node->right, key);
        } else {
            if (node->left == nullptr) {
                Node* temp = node->right;
                delete node;
                return temp;
            } else if (node->right == nullptr) {
                Node* temp = node->left;
                delete node;
                return temp;
            }
            Node* temp = minValueNode(node->right);
            node->data = temp->data;
            node->right = deleteNode(node->right, temp->data);
        }
        return node;
    }

    Node* minValueNode(Node* node) {
        Node* current = node;
        while (current && current->left != nullptr) {
            current = current->left;
        }
        return current;
    }

    void clear(Node* node) {
        if (node != nullptr) {
            clear(node->left);
            clear(node->right);
            delete node;
        }
    }
};

int main() {
    BST bst;

    // Insert some elements
    bst.insert(50);
    bst.insert(30);
    bst.insert(20);
    bst.insert(40);
    bst.insert(70);
    bst.insert(60);
    bst.insert(80);

    // Search for elements
    cout << "Searching for 30: " << (bst.search(30) ? "Found" : "Not Found") << endl;
    cout << "Searching for 45: " << (bst.search(45) ? "Found" : "Not Found") << endl;

    // Remove an element
    bst.remove(30);
    cout << "After removing 30, searching for 30: " << (bst.search(30) ? "Found" : "Not Found") << endl;

    return 0;
}
```

This code demonstrates the class-based implementation of a Binary Search Tree (BST) in C++. It includes methods for insertion, searching, and deletion of elements, along with a main function showcasing the usage of these methods.

Binary Search Trees are versatile data structures with various applications in computer science. Understanding their implementation and operations is essential for developing efficient algorithms and applications.

## AVL Tree

An AVL tree is a type of self-balancing binary search tree named after its inventors Adelson-Velsky and Landis. It maintains the binary search tree property while ensuring that the tree remains balanced, which guarantees better performance for various operations. Here's everything important to know about AVL trees:

### Key Properties of AVL Tree:

1. **Balanced Tree**: In an AVL tree, the heights of the two child subtrees of any node differ by at most one. If at any time they differ by more than one, rebalancing is performed to restore this property.
2. **Height-Balanced**: An AVL tree ensures that the height difference (balance factor) between the left and right subtrees for any node is no more than one. This balance factor is crucial for maintaining the tree's efficiency.
3. **Self-Balancing**: The AVL tree automatically balances itself with each insertion and deletion operation. This ensures that the tree remains balanced and the operations remain efficient.
### Balancing Factor:

- The balancing factor of a node in an AVL tree is the difference in heights between its left and right subtrees. It is calculated as:

```
BalanceFactor = Height(Left Subtree) - Height(Right Subtree)
```

- The balance factor can be -1, 0, or +1. If it goes outside this range, rebalancing is required.

### Rotations:

To maintain balance, AVL trees perform rotations. There are four types of rotations:

1. **Left Rotation (RR Case)**: Performed when a node is inserted into the right subtree of a right subtree, causing an imbalance.
2. **Right Rotation (LL Case)**: Performed when a node is inserted into the left subtree of a left subtree, causing an imbalance.
3. **Left-Right Rotation (LR Case)**: Performed when a node is inserted into the right subtree of a left subtree, causing an imbalance.
4. **Right-Left Rotation (RL Case)**: Performed when a node is inserted into the left subtree of a right subtree, causing an imbalance.

### Advantages of AVL Tree:

- **Balanced Structure**: Ensures that the tree remains balanced after every insertion and deletion, guaranteeing O(log n) time complexity for search, insert, and delete operations.
- **Efficient Lookups**: Due to the balanced nature, lookups are efficient and fast.

### Limitations of AVL Tree:

- **Complex Implementation**: AVL trees are more complex to implement than simple binary search trees due to the need for balancing operations.
- **Higher Maintenance Cost**: The need to perform rotations can add overhead during insertion and deletion.

Understanding AVL Trees is crucial for applications requiring balanced tree structures, such as databases and search algorithms. AVL Trees provide efficient operations while maintaining balance, ensuring optimal performance.

## Rotations

In AVL trees, rotations are fundamental operations used to maintain the balance of the tree after insertions and deletions.
Rotations help to rebalance the tree by adjusting the positions of the nodes while preserving the binary search tree property. There are four types of rotations: left rotation, right rotation, left-right rotation, and right-left rotation. Each rotation addresses a specific imbalance scenario in the tree.

### Left Rotation (RR Case)

A left rotation is performed when a node becomes unbalanced due to an insertion in the right subtree of its right child. This rotation involves moving the right child up to become the new root of the subtree, with the original root becoming the left child of the new root.

```cpp
Node* leftRotate(Node* root) {
    Node* newRoot = root->right;
    Node* leftSubtreeOfNewRoot = newRoot->left;

    // Perform rotation
    newRoot->left = root;
    root->right = leftSubtreeOfNewRoot;

    // Update heights
    root->height = max(height(root->left), height(root->right)) + 1;
    newRoot->height = max(height(newRoot->left), height(newRoot->right)) + 1;

    // Return new root
    return newRoot;
}
```

**Explanation:**

1. Store the right child (`newRoot`) of the node to be rotated (`root`) and the left subtree (`leftSubtreeOfNewRoot`) of `newRoot`.
2. Perform the rotation by making `newRoot` the new root of the subtree, with `root` as its left child and `leftSubtreeOfNewRoot` as the right child of `root`.
3. Update the heights of `root` and `newRoot`.
4. Return the new root of the subtree (`newRoot`).

### Right Rotation (LL Case)

A right rotation is performed when a node becomes unbalanced due to an insertion in the left subtree of its left child. This rotation involves moving the left child up to become the new root of the subtree, with the original root becoming the right child of the new root.

```cpp
Node* rightRotate(Node* root) {
    Node* newRoot = root->left;
    Node* rightSubtreeOfNewRoot = newRoot->right;

    // Perform rotation
    newRoot->right = root;
    root->left = rightSubtreeOfNewRoot;

    // Update heights
    root->height = max(height(root->left), height(root->right)) + 1;
    newRoot->height = max(height(newRoot->left), height(newRoot->right)) + 1;

    // Return new root
    return newRoot;
}
```

**Explanation:**

1. Store the left child (`newRoot`) of the node to be rotated (`root`) and the right subtree (`rightSubtreeOfNewRoot`) of `newRoot`.
2. Perform the rotation by making `newRoot` the new root of the subtree, with `root` as its right child and `rightSubtreeOfNewRoot` as the left child of `root`.
3. Update the heights of `root` and `newRoot`.
4. Return the new root of the subtree (`newRoot`).

### Left-Right Rotation (LR Case)

A left-right rotation is a combination of a left rotation followed by a right rotation. It is performed when a node becomes unbalanced due to an insertion in the right subtree of its left child.

```cpp
Node* leftRightRotate(Node* root) {
    root->left = leftRotate(root->left);
    return rightRotate(root);
}
```

**Explanation:**

1. First, perform a left rotation on the left child of the unbalanced node (`root`).
2. Then, perform a right rotation on the unbalanced node itself (`root`).
3. This two-step process balances the tree.

### Right-Left Rotation (RL Case)

A right-left rotation is a combination of a right rotation followed by a left rotation. It is performed when a node becomes unbalanced due to an insertion in the left subtree of its right child.

```cpp
Node* rightLeftRotate(Node* root) {
    root->right = rightRotate(root->right);
    return leftRotate(root);
}
```

**Explanation:**

1. First, perform a right rotation on the right child of the unbalanced node (`root`).
2. Then, perform a left rotation on the unbalanced node itself (`root`).
3. This two-step process balances the tree.
Rotations in AVL trees are essential for maintaining balance after insertions and deletions. Each type of rotation addresses a specific imbalance scenario, ensuring that the tree remains height-balanced and operations such as search, insertion, and deletion remain efficient with a time complexity of O(log n).

## Implementation of AVL Tree in C++

AVL trees are self-balancing binary search trees that ensure logarithmic time complexity for essential operations like insertion, deletion, and search. This makes them suitable for scenarios where dynamic data storage with efficient lookup operations is required. Here's a class-based implementation of an AVL tree in C++, including a method for insertion.

```cpp
#include <iostream>
using namespace std;

class AVLTree {
private:
    struct Node {
        int data;
        Node* left;
        Node* right;
        int height;
        Node(int val) : data(val), left(nullptr), right(nullptr), height(1) {}
    };

    Node* root;

public:
    AVLTree() : root(nullptr) {}

    ~AVLTree() {
        clear(root);
    }

    void insert(int value) {
        root = insertNode(root, value);
    }

private:
    int height(Node* node) {
        if (node == nullptr) {
            return 0;
        }
        return node->height;
    }

    int getBalance(Node* node) {
        if (node == nullptr) {
            return 0;
        }
        return height(node->left) - height(node->right);
    }

    Node* rightRotate(Node* root) {
        Node* newRoot = root->left;
        Node* rightSubtreeOfNewRoot = newRoot->right;

        // Perform rotation
        newRoot->right = root;
        root->left = rightSubtreeOfNewRoot;

        // Update heights
        root->height = max(height(root->left), height(root->right)) + 1;
        newRoot->height = max(height(newRoot->left), height(newRoot->right)) + 1;

        // Return new root
        return newRoot;
    }

    Node* leftRotate(Node* root) {
        Node* newRoot = root->right;
        Node* leftSubtreeOfNewRoot = newRoot->left;

        // Perform rotation
        newRoot->left = root;
        root->right = leftSubtreeOfNewRoot;

        // Update heights
        root->height = max(height(root->left), height(root->right)) + 1;
        newRoot->height = max(height(newRoot->left), height(newRoot->right)) + 1;

        // Return new root
        return newRoot;
    }

    Node* insertNode(Node* node, int value) {
        if (node == nullptr) {
            return new Node(value);
        }
        if (value < node->data) {
            node->left = insertNode(node->left, value);
        } else if (value > node->data) {
            node->right = insertNode(node->right, value);
        } else {
            return node;
        }

        node->height = 1 + max(height(node->left), height(node->right));

        int balance = getBalance(node);

        if (balance > 1 && value < node->left->data) {
            return rightRotate(node);
        }
        if (balance < -1 && value > node->right->data) {
            return leftRotate(node);
        }
        if (balance > 1 && value > node->left->data) {
            node->left = leftRotate(node->left);
            return rightRotate(node);
        }
        if (balance < -1 && value < node->right->data) {
            node->right = rightRotate(node->right);
            return leftRotate(node);
        }
        return node;
    }

    void clear(Node* node) {
        if (node != nullptr) {
            clear(node->left);
            clear(node->right);
            delete node;
        }
    }
};
```

AVL trees are efficient data structures that provide fast insertion, deletion, and search operations while maintaining balance. This implementation of an AVL tree in C++ offers a foundation for building more advanced functionality, such as deletion, traversal, and other operations. Understanding AVL trees and their properties is essential for designing and implementing efficient algorithms and applications that require dynamic data storage.
harshm03
1,877,009
How to Select the Right Data Discovery Tool for Your Requirements
Sensitive data, like customer information and internal processes, often lurks hidden in employee...
0
2024-06-04T19:00:40
https://spectralops.io/blog/how-to-select-the-right-data-discovery-tool-for-your-requirements/
cybersecurity, devops, database
Sensitive data, like customer information and internal processes, often lurks hidden in employee devices or in unmanaged spreadsheets. This "shadow data" poses a security risk because it's difficult for IT teams to monitor and protect. Without visibility into this hidden data, organizations can't effectively enforce security policies, putting them at risk of data breaches. Mishandling sensitive data can have severe consequences. For example, [a data breach exposed](https://www.cpomagazine.com/cyber-security/a-third-party-data-breach-exposed-the-personal-information-of-18000-nissan-customers/) the information of 18,000 Nissan customers, highlighting the dangers of unsecured data. This type of incident can lead to hefty fines, damage a company's reputation, and even lead to legal trouble. However, data discovery tools offer a solution. These tools scan extensively to uncover hidden data. This visibility allows DevSecOps to secure every piece of data throughout the organization, improving compliance and overall security. ![You can't protect data if you don't know where it is meme](https://spectralops.io/wp-content/uploads/2024/04/You-cant-protect-data-if-you-dont-know-where-it-is-meme.jpg) Why You Need Data Discovery (Hint: Shadow Data) ----------------------------------------------- Hidden data poses risks far beyond unnoticed PII on personal devices. Internal [configurations](https://spectralops.io/blog/common-cloud-misconfigurations/), intellectual property, strategic plans, and other sensitive corporate information are as vulnerable. Mishandling of sensitive data can lead to severe consequences, including hefty fines, reputational damage, and potential legal action This overlooked data is a ticking time bomb, threatening more than just privacy: - Organizations risk incurring significant fines for non-compliance with data protection laws such as GDPR or CCPA. 
Depending on the severity of the breach, these fines can amount to millions of dollars or a percentage of global annual turnover. [Data discovery tools](https://spectralops.io/blog/top-10-data-discovery-tools-that-get-results/) can help mitigate these risks. - Operational disruptions are another significant consequence. Critical business processes suffer due to reliance on incorrect or outdated hidden data, potentially leading to financial losses and project delays. - Reputational damage is a particularly insidious outcome. News of data mishandling can spread rapidly, causing a loss of consumer confidence and, consequently, loyalty.  What is Data Discovery? Your Tool for Compliance and Data-Driven Insights ------------------------------------------------------------------------- Data discovery is all about getting to know your data better by indexing, profiling, and categorizing data across various sources to create a structured map of all your data assets. It reveals exactly what you have and where it's stored and organized.  ![Causes of shadow data](https://spectralops.io/wp-content/uploads/2024/04/Causes-of-shadow-data.png) The perks of data discovery are immense. Data discovery tools help uncover hidden datasets, which could lead to the development of new features, optimization of existing processes, or even the discovery of new revenue streams. Data discovery tools are indispensable for proactive compliance, helping identify and manage personally identifiable information (PII) as required under GDPR, CCPA, and other [data protection regulations](https://spectralops.io/blog/pci-compliance-levels-a-developers-guide-to-pci-compliance/). Data Discovery Tools -- Selection Criteria ----------------------------------------- Selecting the right data discovery tool is an investment that goes beyond just finding data. Here's what you need to consider to make an informed decision: - Data Source Compatibility: Does the tool seamlessly connect to all your data sources? 
Include databases (SQL, NoSQL), cloud storage (AWS, [Azure](https://learn.microsoft.com/en-us/azure/defender-for-cloud/concept-data-security-posture-prepare), Google Cloud), SaaS platforms (Salesforce, Workday), and file systems. Consider both structured and unstructured data compatibility. - Scalability: Can the tool handle your current data volume and anticipated growth comfortably? If your data landscape constantly expands, ensure the tool can scale reliably without performance hiccups. - Sensitivity Levels:  Does the tool allow for granular classification and tagging of data based on sensitivity? You should be able to categorize data with varying risk levels for appropriate security and compliance measures. - Automation: How much of the discovery process can be [automated](https://docs.aws.amazon.com/macie/latest/user/discovery-asdd-account-manage.html)? To streamline your workflows, look for tools that offer customizable scheduling, pattern recognition, and auto-tagging features. - ;Reporting and Visualization: Can the tool generate clear, insightful reports for audits, analytics, and executive summaries? Does it have visualization features to turn data patterns into easy-to-understand graphs and charts? - Integration Capabilities: Look for seamless integration with data catalogs, security tools, and business intelligence platforms, and consider specialized solutions like SaaS security posture management platforms ([SSPM](https://www.suridata.ai/blog/what-is-sspm/)) for comprehensive cloud security assessment. - Cost-effectiveness: Evaluate the total cost of ownership, including licensing, support, deployment, and training. Balance your budget with the long-term value and ROI the tool provides. Implementing Data Discovery -- A Strategic Approach -------------------------------------------------- Kicking off data discovery is strategically mapping out where and how to look for hidden data. 
You want to ensure your efforts align with your organization's priorities and security needs.  ### Connecting to Data Sources First, get your data discovery tool to talk about where your data lives.  Data could reside within databases, be stored in cloud solutions, or be distributed among various SaaS platforms. The crucial factor is selecting a tool with extensive compatibility and integration capabilities, which will facilitate a thorough and frictionless discovery process. ![Shadow data diagram](https://spectralops.io/wp-content/uploads/2024/04/Shadow-data-diagram-1024x958.png) ### Defining What to Discover With your connections ready, it's time to get selective. Understand that not all data holds the same value or risk, so focus on the juicy bits: sensitive customer information, data regulators monitor, and anything directly impacting your business goals.  This step is about filtering the noise to spotlight the data that's either a potential risk or a potential win. ### Configuration Now, onto the nuts and bolts---configuring your scans. Decide on the frequency and depth of your scans. Deep scans are more time-consuming but essential for sensitive or critical data areas. A lighter scan might be adequate for general oversight. Decide if you'll lean more on scheduled scans, which run automatically at set intervals, or on-demand scans, which you can launch manually in response to specific concerns or events.  Data Discovery in Action -- A Compliance Use Case ------------------------------------------------ Consider a scenario where an analyst is preparing for an upcoming audit focusing on personally identifiable information (PII). Aware of the impracticality of manual searches across their extensive data repositories, they turn to their data discovery tool for assistance. Here's how that process might go down: 1. 
Configure Data Discovery Tool -- The IT team configures the data discovery tool to target specific data types relevant to PII, such as names, social security numbers, and email addresses.  1. Schedule Scans---The analyst schedules the data discovery tool to scan all relevant databases, including customer databases, human resources systems, and cloud storage platforms. They verify that the tool can handle structured and unstructured data sources, such as documents and spreadsheets. 1. Execute Scans---The data discovery tool now carefully searches the designated data sources for instances of PII. It uses advanced algorithms and pattern recognition techniques to identify quickly and index data. 1. Index and Tag Data---Any discovered PII is automatically organized and tagged with its source. This indexing makes finding and managing the identified data easy for further analysis or action. 1. Generate Audit Report---The analyst uses reporting features to create a detailed audit report once the scans are complete. This report lays out all the PII found, its compliance status, and any areas that need fixing before the audit. Data Discovery is The Foundation for Data Governance and Analytics ------------------------------------------------------------------ A well-executed data discovery and governance strategy maximizes the value of your organization's data assets. It's more than just finding and organizing your data---it's also about integrating that discovered data into your systems for deep analysis and smart, strategic use. ### Data Governance Integration Feeding discovered data into [data catalogs](https://atlan.com/data-catalog-vs-data-lineage) is critical in stitching together solid data governance. This process takes the raw, discovered data and organizes it into a detailed, easily navigable inventory. 
These catalogs become a vital tool in crafting clear data access policies, laying out who gets to see and use what data, and ensuring it's done safely and in compliance with regulations.

![5 components of data governance](https://spectralops.io/wp-content/uploads/2024/04/5-components-of-data-governance-1024x675.png)

It's about smoothing data management and building trust and security around your data assets. It will allow for more intelligent, more controlled data usage throughout the organization.

### Enhancing Reporting and Analytics

Structured data revealed through the discovery process forms the bedrock for improved reporting and analytics. By integrating this structured data into business intelligence (BI) dashboards, your organization gains granular insights into operational metrics and customer behavior.

![Data discovery](https://spectralops.io/wp-content/uploads/2024/04/Data-discovery-1024x736.png)

### Data Discovery for a Comprehensive Security Strategy

Pinpointing sensitive and high-risk data is the beginning of a truly effective security strategy. Once you've identified this critical information, it's time to dive deeper. Left unchecked, secrets sprawl (like misconfigured API keys or exposed credentials) poses a significant risk and increases the attack surface. Implement [continuous data monitoring](https://spectralops.io/blog/the-essential-guide-to-data-monitoring/) alongside user training to receive real-time alerts on anomalies, suspicious activity, or [unauthorized third-party access](https://www.apono.io/blog/permission-control-for-third-parties/) attempts.

Data Discovery Best Practices for DevSecOps
-------------------------------------------

Adopting solid data discovery practices boosts your security and how your teams work together. Let's explore practical strategies to make your data discovery work harder and smarter, aligning with DevSecOps principles.
### User Training and Enablement

Getting the most out of your tools means ensuring everyone who needs them knows how to use them. This includes bringing all relevant team members up to speed, from developers to security folks and operations staff. This training should cover the basics of tool operation, advanced features, best practices for data analysis and threat detection, and the importance of integrating with [security orchestration platforms](https://www.jit.io/blog/unlocking-the-power-of-security-orchestration) to streamline incident response.

### The Evolving Data Landscape

Your data is constantly changing and growing in size and complexity. That means your approach to data discovery must change with it. Revisit and regularly update your [data discovery configurations](https://docs.aws.amazon.com/wellarchitected/latest/analytics-lens/data-discovery.html) and goals to maintain relevance and effectiveness against new data patterns and emerging threats. Adjust your data classification schemas and access controls as your data evolves to guarantee accurate detection and safeguard critical information.

Data Discovery is Your DevSecOps Advantage
------------------------------------------

In the fast-moving DevSecOps world, the cost of reactivity can be devastating. Data discovery is the proactive advantage you need. It reduces code vulnerabilities, smooths compliance, and speeds up your response to incidents.

As you consider data discovery tools, prioritize those that align with your unique requirements. Whether your focus is on comprehensive scanning capabilities, integration ease, or specific compliance needs, the right tool does more than just the job -- it sets you up for solid data governance and strengthens your security.

Ready to take control of your data and enhance your security posture? Discover hidden data, streamline compliance, and proactively defend against threats.
[Get started ](https://spectralops.io/onboarding)today with a free SpectralOps account and see the difference firsthand.
yayabobi
1,877,008
Upstream preview: Welcome to Upstream 2024
Upstream is this week on Wednesday (June 5!), and wow, our schedule is shaping up brilliantly. For...
0
2024-06-04T19:00:13
https://dev.to/tidelift/upstream-preview-welcome-to-upstream-2024-2p06
upstream, opensource, cybersecurity, maintenance
<p><em>Upstream is this week on Wednesday (June 5!), and wow, our schedule is shaping up brilliantly. For the rest of this week, we’ll be giving you a sneak preview into some of the talks and the speakers giving them via posts like these. </em><a href="https://upstream.live/" style="font-style: normal;">RSVP now</a>!</p> <p><a href="https://explore.tidelift.com/upstream/main-2023/upstream-23-session-luis-villa-keynote?__hstc=151926246.97feb0f0180d1c08e142395baed37855.1710361664533.1717177892477.1717183981584.37&amp;__hssc=151926246.1.1717183981584&amp;__hsfp=3019692598" style="font-style: normal;"><span>Last year at Upstream</span></a><span style="font-style: normal;">, we talked about the accidental</span> “open source software supply chain,” a term coined to help others understand the breadth of open source, but with it came a problematic one-to-one comparison of open source and other supply chains.</p> <p>And clearly, open source maintainers weren't given the memo that they are part of this "supply chain." One poignant example:&nbsp; the viral blog post “<a href="https://www.softwaremaxims.com/blog/not-a-supplier"><span>I am not a supplier</span></a>” by Thomas Depierre.</p> <p>What our discussions last year showed was that open source isn’t one-size-fits-all, and it’s not meant to be compartmentalized in a phrase—it’s vast, it’s complex, and with that comes problems, some of which have existed from the start.&nbsp;</p> <p>At this year’s Upstream, Tidelift co-founder and Upstream host, <a href="https://upstream.live/speaker-2024/luis-villa"><span>Luis Villa</span></a>, will be welcoming attendees with an introduction to the <a href="https://upstream.live/"><span>Upstream 2024</span></a> theme: unusual ideas to solve the usual problems. 
Problems such as the rising consumption of open source that put stress on an already fatigued system—a system that sees big enterprise users relying heavily on open source projects created and maintained by unpaid volunteers.&nbsp;</p> <p>Open source’s popularity has made it an even more tempting target for those who seek to exploit it, and highly visible vulnerability incidents like the recent <a href="https://tidelift.com/resources/xz-backdoor-hack" rel="noopener">xz utils backdoor hack</a> have only added to the pressure. In the Upstream welcome, Luis will preview the day’s talks, including ones that illuminate these usual problems, like the panel “<a href="https://blog.tidelift.com/upstream-session-spotlight-life-after-the-xz-utils-backdoor-hack" rel="noopener">life after the xz utils backdoor hack</a>,” where we’ll hear maintainer and industry perspectives on the xz utils incident and what it means for the future of open source.</p> <p>However, it’s important to note that <em>solutions</em> play a big role in this day, too. When faced with a massively complex issue with its history of failing solutions, it’s hard not to tap out and turn to cynicism. In spite of this, in his opening talk, Luis provides an optimistic take that aims to circumvent the cynics and highlights how we’re already tackling the heavily nested problems of open source with innovative and thoughtful solutions. This year’s Upstream is exciting because it’s asking its speakers and attendees to embrace the challenge of finding different solutions to everyday problems, and we hope to see you there.</p> <p><a href="https://upstream.live/"><span>Register now</span></a> for our one-day, free virtual event, Upstream, on Wednesday, June 5th.&nbsp;</p>
caitbixby
1,876,359
Introduction to Trees and Binary Tree, Data Structures: (Trees, Part I)
Tree A tree is a hierarchical data structure used in computer science to represent data in...
0
2024-06-04T18:59:22
https://dev.to/harshm03/introduction-to-trees-and-binary-tree-data-structures-trees-part-i-4cmf
datastructures, dsa
## Tree

A tree is a hierarchical data structure used in computer science to represent data in a parent-child relationship. Starting from a root node, each node can branch out to multiple child nodes, forming a structure that resembles a tree. This makes trees ideal for representing hierarchical data like organizational charts, file systems, and more. Trees allow for efficient searching, insertion, and deletion operations, making them fundamental in various algorithms and applications.

### Tree Terminologies

1. **Node**: The fundamental element of a tree that contains data and links to other nodes. Each node can have multiple children.
2. **Root**: The topmost node of a tree, which has no parent. It is the starting point of the tree structure.
3. **Edge**: A connection between two nodes, representing the parent-child relationship.
4. **Leaf**: A node that does not have any children. Leaves are the end nodes of the tree.
5. **Internal Node**: A node that has at least one child. These nodes are neither the root nor leaves and form the intermediate structure of the tree.
6. **Height of a Node**: The length of the longest path from the node to a leaf. The height of a tree is the height of its root.
7. **Depth of a Node**: The length of the path from the root to the node. It indicates the level at which the node is located in the tree.
8. **Level**: All nodes at the same depth. For example, all nodes at depth 2 form level 2.
9. **Degree of a Node**: The number of children a node has. The degree of the tree is the maximum degree of any node in the tree.
10. **Parent Node**: A node that has one or more children, establishing the parent-child relationship.
11. **Child Node**: A node that is connected to a parent node.
12. **Sibling**: Nodes that share the same parent, making them peers within the tree structure.
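To make terms like height, depth, and degree concrete, here is a tiny illustrative sketch (written in Python for brevity; the implementation later in this article is in C++). The tree layout and helper functions are hypothetical examples, not part of that implementation:

```python
# Illustrative only: a tiny tree to ground the terminology above.
#        A          <- root (depth 0)
#       / \
#      B   C        <- B is an internal node, C is a leaf
#     / \
#    D   E          <- leaves at depth 2
tree = {"A": ["B", "C"], "B": ["D", "E"], "C": [], "D": [], "E": []}

def height(node):
    # Height = length (in edges) of the longest path from the node to a leaf.
    children = tree[node]
    return 0 if not children else 1 + max(height(c) for c in children)

def depth(node, root="A", d=0):
    # Depth = length (in edges) of the path from the root down to the node.
    if node == root:
        return d
    for c in tree[root]:
        found = depth(node, c, d + 1)
        if found is not None:
            return found
    return None

print(height("A"))     # 2: the root's height is the height of the tree
print(depth("E"))      # 2: E sits two edges below the root
print(len(tree["A"]))  # 2: degree of A (its number of children)
```

Note how the height of the root equals the height of the whole tree, matching terminology item 6 above.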
## Binary Tree

A binary tree is a specific type of tree data structure in which each node has at most two children, referred to as the left child and the right child. This restriction makes binary trees simpler and more efficient for many algorithms, allowing for clear and efficient traversal, insertion, and deletion processes.

When discussing trees in data structures, binary trees often take center stage due to the following reasons:

1. **Simplicity and Efficiency**: Binary trees are simple to implement and provide efficient means for data operations. Their structure allows for straightforward recursive algorithms for traversal, insertion, and deletion.
2. **Binary Search Trees (BSTs)**: BSTs are a type of binary tree where the left child contains values less than the parent node, and the right child contains values greater than the parent node. This property enables efficient searching, insertion, and deletion operations, with an average time complexity of O(log n).
3. **Heaps**: Binary heaps are binary trees that maintain a specific order property, such as the min-heap or max-heap property, which makes them ideal for implementing priority queues and efficient sorting algorithms like heapsort.
4. **Balanced Trees**: Various balanced binary trees, like AVL trees and Red-Black trees, ensure that the tree remains balanced, providing O(log n) time complexity for operations and maintaining efficiency even in the worst case.

Overall, the binary tree's restricted structure, along with its adaptability to various forms and algorithms, makes it a central topic in the study of data structures.

### Types of Binary Trees

1. **Full Binary Tree**: Every node has either 0 or 2 children.
2. **Complete Binary Tree**: All levels are completely filled except possibly the last level, which is filled from left to right.
3. **Perfect Binary Tree**: All internal nodes have exactly two children, and all leaf nodes are at the same level.
4. **Balanced Binary Tree**: The height of the left and right subtrees of any node differ by at most one.

### Implementation of Binary Tree

Binary trees can be implemented using linked lists. This method offers flexibility and is well-suited for trees that frequently change in size or structure. Linked lists allow dynamic memory allocation, making them ideal for trees where the number of nodes is not known in advance or changes frequently.

#### Creation of Binary Tree using Linked List

To create a binary tree using linked lists in C++, we define a class that includes a nested node class for representing individual nodes and the tree itself.

```cpp
#include <iostream>
using namespace std;

class BinaryTreeLinkedList {
private:
    struct Node {
        int data;
        Node* left;
        Node* right;
        Node(int val) : data(val), left(nullptr), right(nullptr) {}
        ~Node() { delete left; delete right; }
    };
    Node* root;

public:
    BinaryTreeLinkedList() : root(nullptr) {}
    ~BinaryTreeLinkedList() { delete root; }
};
```

**Attributes Explanation**

- `Node`: A nested struct representing each node of the tree, containing:
  - `data`: The value stored in the node.
  - `left`: Pointer to the left child.
  - `right`: Pointer to the right child.
- `root`: Pointer to the root node of the tree.

**Constructor Explanation**

- `Node(int val)`: Initializes a node with a given value, setting `left` and `right` to `nullptr`.
- `BinaryTreeLinkedList()`: Initializes the tree with `root` set to `nullptr`, indicating an empty tree.
- `~Node()`: Recursively deletes the node and its children to prevent memory leaks.
- `~BinaryTreeLinkedList()`: Deletes the root node, which in turn deletes the entire tree through the recursive destructor of `Node`.

This setup provides the foundational framework for a binary tree using linked list implementations. The constructors ensure the tree is properly initialized and ready for further operations like insertion, deletion, and traversal.
### Building Binary Tree

Building a binary tree involves creating nodes and connecting them in a hierarchical manner, ensuring each node has at most two children. The process can be dynamic, where elements are added one by one based on certain conditions or inputs. This allows for a flexible and interactive way of constructing the tree, especially when the structure is not known beforehand.

#### Inserting Elements One by One (Manually Making a Tree)

One way to build a binary tree is to insert elements one by one. During this process, the user specifies the value of each node. If a node should not have a left or right child, the user inputs `-1` to indicate that the position is empty.

Here is a C++ function to build a binary tree interactively by inserting elements:

```cpp
private:
    Node* buildBinaryTree(Node* root) {
        int value;
        cin >> value;
        if (value == -1) {
            return nullptr;
        }
        root = new Node(value);
        root->left = buildBinaryTree(root->left);
        root->right = buildBinaryTree(root->right);
        return root;
    }

public:
    void buildTree() {
        root = buildBinaryTree(root);
    }
```

By using these functions, you can construct a binary tree dynamically, allowing for user input at each step to determine the structure of the tree. The `buildBinaryTree` function is private, as it is a helper function that should only be called internally by the class. The `buildTree` function is public, providing an interface for initiating the tree-building process.

#### Inserting Element Using Array (Level Order Insertion)

Another way to build a binary tree is by using an array for level order insertion. This method is useful when we have a complete binary tree, where all levels are fully filled except possibly for the last level. The array representation simplifies the process, mapping array indices to tree positions.
Here is a C++ function to insert elements in level order using an array:

```cpp
private:
    Node* insertLevelOrder(vector<int> &array, int index) {
        Node *root = nullptr;
        if (index < array.size()) {
            root = new Node(array[index]);
            root->left = insertLevelOrder(array, 2 * index + 1);
            root->right = insertLevelOrder(array, 2 * index + 2);
        }
        return root;
    }

public:
    void buildTreeFromArray(vector<int> &array) {
        root = insertLevelOrder(array, 0);
    }
```

Using this approach, the `insertLevelOrder` function recursively constructs the binary tree from the array. The `buildTreeFromArray` function is a public interface that initiates the tree-building process by calling `insertLevelOrder` with the initial index set to 0. This method ensures the tree is built in level order, making it well-suited for complete binary trees.

### Tree Traversal

Tree traversal is the process of visiting all the nodes of a tree in a systematic way. The three main depth-first traversal methods are preorder, inorder, and postorder; a breadth-first alternative, level order traversal, is also covered below. Each method defines a specific order in which nodes are visited, allowing us to perform different operations at each node.
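Before looking at each method, here is a quick sketch of how the three depth-first orders visit the same small tree (illustrated in Python for brevity; the C++ member functions follow below):

```python
# A small binary tree, stored as node -> (left child, right child):
#        1
#       / \
#      2   3
#     / \
#    4   5
tree = {1: (2, 3), 2: (4, 5), 3: (None, None), 4: (None, None), 5: (None, None)}

def preorder(n):   # visit root, then left subtree, then right subtree
    if n is None:
        return []
    left, right = tree[n]
    return [n] + preorder(left) + preorder(right)

def inorder(n):    # left subtree, then root, then right subtree
    if n is None:
        return []
    left, right = tree[n]
    return inorder(left) + [n] + inorder(right)

def postorder(n):  # left subtree, then right subtree, then root
    if n is None:
        return []
    left, right = tree[n]
    return postorder(left) + postorder(right) + [n]

print(preorder(1))   # [1, 2, 4, 5, 3]
print(inorder(1))    # [4, 2, 5, 1, 3]
print(postorder(1))  # [4, 5, 2, 3, 1]
```

The same visiting orders are what the recursive C++ functions below produce, just printed instead of collected into lists.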
#### Preorder Traversal

```cpp
private:
    void preorderTraversal(Node* root) {
        if (root == nullptr) {
            return;
        }
        cout << root->data << " ";
        preorderTraversal(root->left);
        preorderTraversal(root->right);
    }

public:
    void preorder() {
        preorderTraversal(root);
    }
```

#### Inorder Traversal

```cpp
private:
    void inorderTraversal(Node* root) {
        if (root == nullptr) {
            return;
        }
        inorderTraversal(root->left);
        cout << root->data << " ";
        inorderTraversal(root->right);
    }

public:
    void inorder() {
        inorderTraversal(root);
    }
```

#### Postorder Traversal

```cpp
private:
    void postorderTraversal(Node* root) {
        if (root == nullptr) {
            return;
        }
        postorderTraversal(root->left);
        postorderTraversal(root->right);
        cout << root->data << " ";
    }

public:
    void postorder() {
        postorderTraversal(root);
    }
```

#### Level Order Traversal

```cpp
public:
    void levelOrder() {
        if (root == nullptr) {
            return;
        }
        queue<Node*> q;
        q.push(root);
        while (!q.empty()) {
            Node* current = q.front();
            q.pop();
            cout << current->data << " ";
            if (current->left != nullptr) {
                q.push(current->left);
            }
            if (current->right != nullptr) {
                q.push(current->right);
            }
        }
    }
```

These traversal methods allow you to explore the nodes of the tree in different orders, performing various operations as needed. Each traversal method has its own applications and is useful for solving different types of problems.

### Operations on Trees

Trees offer a variety of operations that allow us to explore and manipulate their structure. Here are some common operations along with brief explanations and code snippets:

#### Height of the Tree

The height of a tree is the length of the longest path from the root node to a leaf node. It represents the maximum number of edges in any path from the root to a leaf.
```cpp
public:
    int heightOfTree() {
        return calculateHeight(root);
    }

private:
    int calculateHeight(Node* node) {
        if (node == nullptr) {
            return -1; // Height of an empty tree is -1
        }
        int leftHeight = calculateHeight(node->left);
        int rightHeight = calculateHeight(node->right);
        return 1 + max(leftHeight, rightHeight);
    }
```

#### Height of the Given Node

The height of a given node in a tree is the length of the longest path from that node to a leaf node.

```cpp
public:
    int heightOfNode(Node* node) {
        if (node == nullptr) {
            return -1; // Height of a null node is -1
        }
        return calculateHeight(node);
    }

private:
    int calculateHeight(Node* node) {
        if (node == nullptr) {
            return -1;
        }
        int leftHeight = calculateHeight(node->left);
        int rightHeight = calculateHeight(node->right);
        return 1 + max(leftHeight, rightHeight);
    }
```

#### Depth of the Tree

The depth of a tree is the length of the longest path from the root node to any leaf node.

```cpp
public:
    int depthOfTree() {
        return calculateDepth(root);
    }

private:
    int calculateDepth(Node* node) {
        if (node == nullptr) {
            return 0; // Depth of an empty tree is 0
        }
        int leftDepth = calculateDepth(node->left);
        int rightDepth = calculateDepth(node->right);
        return 1 + max(leftDepth, rightDepth);
    }
```

#### Depth of the Given Node

The depth of a given node in a tree is the length of the path from the root node to that node.
```cpp
public:
    int depthOfNode(Node* node) {
        return calculateDepth(root, node, 0);
    }

private:
    int calculateDepth(Node* currentNode, Node* targetNode, int depth) {
        if (currentNode == nullptr) {
            return -1; // Node not found
        }
        if (currentNode == targetNode) {
            return depth; // Node found, return depth
        }
        // Recur on left and right subtrees
        int leftDepth = calculateDepth(currentNode->left, targetNode, depth + 1);
        if (leftDepth != -1) {
            return leftDepth;
        }
        return calculateDepth(currentNode->right, targetNode, depth + 1);
    }
```

These operations allow us to gain insights into the structure of a tree, such as its height and depth, and to navigate to specific nodes within the tree. They are essential for various tree-based algorithms and analyses.

### Full Code Implementation of Binary Tree

Binary trees can be implemented using linked lists, offering flexibility and dynamic memory allocation. Below is the full C++ implementation of a binary tree, including the necessary functions for building the tree, performing preorder traversal, and calculating various properties such as height and depth.
```cpp
#include <iostream>
#include <queue>
#include <vector>
using namespace std;

class BinaryTree {
private:
    struct Node {
        int data;
        Node* left;
        Node* right;
        Node(int val) : data(val), left(nullptr), right(nullptr) {}
        ~Node() { delete left; delete right; }
    };
    Node* root;

public:
    BinaryTree() : root(nullptr) {}
    ~BinaryTree() { delete root; }

    void buildTree() { root = buildBinaryTree(root); }

    void buildTreeFromArray(vector<int>& array) { root = insertLevelOrder(array, 0); }

    void preorder() { preorderTraversal(root); }

    int heightOfTree() { return calculateHeight(root); }

    int depthOfTree() { return calculateDepth(root); }

    int depthOfNode(Node* node) { return calculateDepth(root, node, 0); }

private:
    Node* buildBinaryTree(Node* root) {
        int value;
        cin >> value;
        if (value == -1) {
            return nullptr;
        }
        root = new Node(value);
        root->left = buildBinaryTree(root->left);
        root->right = buildBinaryTree(root->right);
        return root;
    }

    Node* insertLevelOrder(vector<int>& array, int index) {
        Node* root = nullptr;
        if (index < array.size()) {
            root = new Node(array[index]);
            root->left = insertLevelOrder(array, 2 * index + 1);
            root->right = insertLevelOrder(array, 2 * index + 2);
        }
        return root;
    }

    void preorderTraversal(Node* root) {
        if (root == nullptr) {
            return;
        }
        cout << root->data << " ";
        preorderTraversal(root->left);
        preorderTraversal(root->right);
    }

    int calculateHeight(Node* node) {
        if (node == nullptr) {
            return -1; // Height of an empty tree is -1
        }
        int leftHeight = calculateHeight(node->left);
        int rightHeight = calculateHeight(node->right);
        return 1 + max(leftHeight, rightHeight);
    }

    int calculateDepth(Node* currentNode, Node* targetNode, int depth) {
        if (currentNode == nullptr) {
            return -1; // Node not found
        }
        if (currentNode == targetNode) {
            return depth; // Node found, return depth
        }
        // Recur on left and right subtrees
        int leftDepth = calculateDepth(currentNode->left, targetNode, depth + 1);
        if (leftDepth != -1) {
            return leftDepth;
        }
        return calculateDepth(currentNode->right, targetNode, depth + 1);
    }

    int calculateDepth(Node* node) {
        if (node == nullptr) {
            return 0; // Depth of an empty tree is 0
        }
        int leftDepth = calculateDepth(node->left);
        int rightDepth = calculateDepth(node->right);
        return 1 + max(leftDepth, rightDepth);
    }
};

int main() {
    BinaryTree tree;
    cout << "Enter the elements of the binary tree (enter -1 for empty nodes):" << endl;
    tree.buildTree();

    cout << "Preorder Traversal: ";
    tree.preorder();
    cout << endl;

    cout << "Height of the tree: " << tree.heightOfTree() << endl;
    cout << "Depth of the tree: " << tree.depthOfTree() << endl;

    // Example of calculating depth of a node (assuming node exists)
    // Node* targetNode = tree.findNode(5);
    // cout << "Depth of the node with value 5: " << tree.depthOfNode(targetNode) << endl;

    return 0;
}
```

This code implements a binary tree using a linked list structure, allowing for dynamic creation and traversal of the tree. It includes functions for building the tree interactively or from an array, performing preorder traversal, and calculating the height and depth of the tree. You can customize and extend this code to include additional functionality as needed.
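As a worked example of the level-order index mapping used above (the children of index `i` live at `2*i+1` and `2*i+2`), here is a small illustrative sketch in Python that computes the height of a complete tree directly from its array form, using the same empty-tree convention as the C++ `calculateHeight`:

```python
# Level-order array representation: children of index i are at 2*i+1 and 2*i+2.
def height(array, i=0):
    # Height in edges, matching the article's convention (empty tree -> -1).
    if i >= len(array):
        return -1
    return 1 + max(height(array, 2 * i + 1), height(array, 2 * i + 2))

print(height([1, 2, 3, 4, 5, 6, 7]))  # 2: three full levels
print(height([1]))                    # 0: a single root node
print(height([]))                     # -1: empty tree
```

This mirrors what `buildTreeFromArray` followed by `heightOfTree` would report, without building any nodes.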
harshm03
1,877,002
Introduction to "Accel Record": A TypeScript ORM Using the Active Record Pattern
In this article, we'll briefly introduce Accel Record, an ORM for TypeScript that we're...
27,598
2024-06-04T18:57:50
https://dev.to/koyopro/introduction-to-accel-record-a-typescript-orm-using-the-active-record-pattern-2oeh
typescript, activerecord, orm, database
In this article, we'll briefly introduce [Accel Record](https://www.npmjs.com/package/accel-record), an ORM for TypeScript that we're developing.

## Overview of Accel Record

[accel-record - npm](https://www.npmjs.com/package/accel-record)

Accel Record is a type-safe, synchronous ORM for TypeScript. It adopts the Active Record pattern, with an interface heavily influenced by Ruby on Rails' Active Record.

It uses Prisma for schema management and migration, allowing you to use your existing Prisma schema directly. As of June 2024, it supports MySQL and SQLite, with plans to support PostgreSQL in the future.

## Features

- Active Record pattern
- Type-safe classes
- Synchronous API
- Validation
- Native ESM
- Support for MySQL and SQLite

We will introduce some of these features in more detail below.

## Usage Example

For example, if you define a User model as follows,

```prisma
// prisma/schema.prisma
model User {
  id        Int     @id @default(autoincrement())
  firstName String
  lastName  String
  age       Int?
}
```

you can use it like this:

```ts
import { User } from "./models/index.js";

const user: User = User.create({
  firstName: "John",
  lastName: "Doe",
});

user.update({
  age: 26,
});

for (const user of User.all()) {
  console.log(user.firstName);
}

const john: User | undefined = User.findBy({
  firstName: "John",
  lastName: "Doe",
});

john?.delete();
```

You can also extend models to define custom methods.

```ts
// src/models/user.ts
import { ApplicationRecord } from "./applicationRecord.js";

export class UserModel extends ApplicationRecord {
  // Define a method to get the full name
  get fullName(): string {
    return `${this.firstName} ${this.lastName}`;
  }
}
```

```ts
import { User } from "./models/index.js";

const user = User.create({
  firstName: "John",
  lastName: "Doe",
});

console.log(user.fullName); // => "John Doe"
```

For more detailed usage, see the [README](https://github.com/koyopro/accella/blob/main/packages/accel-record/README-ja.md).
## Active Record Pattern

Accel Record adopts the Active Record pattern. Its interface is heavily influenced by Ruby on Rails' Active Record. Those with experience in Rails should find it easy to understand how to use it.

### Example of Creating and Saving Data

```ts
import { NewUser, User } from "./models/index.js";

// Create a User
const user: User = User.create({
  firstName: "John",
  lastName: "Doe",
});
console.log(user.id); // => 1

// You can also write it like this
const user: NewUser = User.build({});
user.firstName = "Alice";
user.lastName = "Smith";
user.save();
console.log(user.id); // => 2
```

### Example of Retrieving Data

```ts
import { User } from "./models/index.js";

const allUsers = User.all();
console.log(`IDs of all users: ${allUsers.map((u) => u.id).join(", ")}`);

const firstUser = User.first();
console.log(`Name of the first user: ${firstUser?.firstName}`);

const john = User.findBy({ firstName: "John" });
console.log(`ID of the user with the name John: ${john?.id}`);

const does = User.where({ lastName: "Doe" });
console.log(`Number of users with the last name Doe: ${does.count()}`);
```

## Type-safe Classes

Accel Record provides type-safe classes. The query API also includes type information, allowing you to leverage TypeScript's type system. Effective editor autocompletion and type checking help maintain high development efficiency.

A notable feature is that the type changes based on the model's state, so we'll introduce it here. Accel Record provides types called `NewModel` and `PersistedModel` to distinguish between new and saved models. Depending on the schema definition, some properties will allow undefined in `NewModel` but not in `PersistedModel`. This allows you to handle both new and saved models in a type-safe manner.

```ts
import { User, NewUser } from "./models/index.js";

/*
Example of NewModel:
The NewUser type represents a model before saving and has the following type.

interface NewUser {
  id: number | undefined;
  firstName: string | undefined;
  lastName: string | undefined;
  age: number | undefined;
}
*/
const newUser: NewUser = User.build({});

/*
Example of PersistedModel:
The User type represents a saved model and has the following type.

interface User {
  id: number;
  firstName: string;
  lastName: string;
  age: number | undefined;
}
*/
const persistedUser: User = User.first()!;
```

By using methods like `save()`, you can convert a NewModel type to a PersistedModel type.

```ts
import { User, NewUser } from "./models/index.js";

// Prepare a user of the NewModel type
const user: NewUser = User.build({
  firstName: "John",
  lastName: "Doe",
});

if (user.save()) {
  // If save is successful, the NewModel is converted to a PersistedModel.
  // In this block, user is treated as a User type.
  console.log(user.id); // user.id is of type number
} else {
  // If save fails, the NewModel remains the same type.
  // In this block, user remains of type NewUser.
  console.log(user.id); // user.id is of type number | undefined
}
```

## Synchronous API

Accel Record provides a synchronous API that does not use Promises or callbacks, even for database access. This allows you to write simpler code without using await, etc. This was mainly adopted to enhance application development efficiency.

By adopting a synchronous API, you can perform related operations intuitively, as shown below.

```ts
import { User, Setting, Post } from "./models/index.js";

const user = User.first()!;
const setting = Setting.build({ theme: "dark" });
const post = Post.build({ title: "Hello, World!" });

// Operations on hasOne associations are automatically saved
user.setting = setting;

// Operations on hasMany associations are also automatically saved
user.posts.push(post);
```

```ts
import { User } from "./models/index.js";

// Related entities are lazily loaded and cached.
// You don't need to explicitly instruct to load related entities when fetching a user.
const user = User.first()!;
console.log(user.setting.theme); // setting is loaded and cached
console.log(user.posts.map((post) => post.title)); // posts are loaded and cached
```

Synchronous APIs have some drawbacks compared to implementations using asynchronous APIs, primarily related to performance. We will discuss these trade-offs in a separate article.

## Validation

Like Ruby on Rails' Active Record, Accel Record also provides validation features. You can define validations by overriding the `validateAttributes` method of the BaseModel.

```ts
// src/models/user.ts
import { ApplicationRecord } from "./applicationRecord.js";

export class UserModel extends ApplicationRecord {
  override validateAttributes() {
    // Validate that firstName is not empty
    this.validates("firstName", { presence: true });
  }
}
```

When using methods like save, validations are automatically executed, and save processing only occurs if there are no errors.

```ts
import { User } from "./models/index.js";

const newUser = User.build({ firstName: "" });

// If validation errors occur, save returns false.
if (newUser.save()) {
  // If validation errors do not occur, saving succeeds
} else {
  // If validation errors occur, saving fails
}
```

## Conclusion

This concludes our brief introduction to Accel Record. If you are interested, please check the links below for more details.

accel-record - npm
https://www.npmjs.com/package/accel-record
koyopro
1,877,007
How to Implement Dropdown on Hover in React with Ant Design 5
In this tutorial, we'll create a dropdown-on-hover feature in React using Ant Design 5. First, you'll...
0
2024-06-04T18:57:02
https://frontendshape.com/post/how-to-use-dropdown-on-hover-in-react-ant-design-5
react, antdesign, webdev
In this tutorial, we'll create a dropdown-on-hover feature in React using Ant Design 5. First, you'll need to set up a React project with Ant Design 5.

[install & setup vite + react + typescript + ant design 5](https://frontendshape.com/post/install-setup-vite-react-typescript-ant-design-5)

1. Create a simple dropdown-on-hover in React using Ant Design 5 components such as `MenuProps`, `Dropdown`, `Space`, and the icons `DownOutlined` and `SmileOutlined`.

```jsx
import React from 'react';
import { DownOutlined, SmileOutlined } from '@ant-design/icons';
import type { MenuProps } from 'antd';
import { Dropdown, Space } from 'antd';

const items: MenuProps['items'] = [
  {
    key: '1',
    label: (
      <a target="_blank" rel="noopener noreferrer" href="https://www.antgroup.com">
        1st menu item
      </a>
    ),
  },
  {
    key: '2',
    label: (
      <a target="_blank" rel="noopener noreferrer" href="https://www.aliyun.com">
        2nd menu item (disabled)
      </a>
    ),
    icon: <SmileOutlined />,
    disabled: true,
  },
  {
    key: '3',
    label: (
      <a target="_blank" rel="noopener noreferrer" href="https://www.luohanacademy.com">
        3rd menu item (disabled)
      </a>
    ),
    disabled: true,
  },
  {
    key: '4',
    danger: true,
    label: 'a danger item',
  },
];

const App: React.FC = () => (
  <Dropdown menu={{ items }}>
    <a onClick={(e) => e.preventDefault()}>
      <Space>
        Hover me
        <DownOutlined />
      </Space>
    </a>
  </Dropdown>
);

export default App;
```

![dropdown on hover items list](https://frontendshape.com/wp-content/uploads/2024/05/XLPqVnMKfArGHkQJ68rBtIgLakfpIJJG3LE2lXcZ.png)

2. Implement a dropdown-on-hover feature in React with Ant Design 5, showcasing placements at the top, left, right, bottom, bottom left, and bottom right.
```jsx
import React from 'react';
import type { MenuProps } from 'antd';
import { Button, Dropdown, Space } from 'antd';

const items: MenuProps['items'] = [
  {
    key: '1',
    label: (
      <a target="_blank" rel="noopener noreferrer" href="https://www.antgroup.com">
        1st menu item
      </a>
    ),
  },
  {
    key: '2',
    label: (
      <a target="_blank" rel="noopener noreferrer" href="https://www.aliyun.com">
        2nd menu item
      </a>
    ),
  },
  {
    key: '3',
    label: (
      <a target="_blank" rel="noopener noreferrer" href="https://www.luohanacademy.com">
        3rd menu item
      </a>
    ),
  },
];

const App: React.FC = () => (
  <Space direction="vertical">
    <Space wrap>
      <Dropdown menu={{ items }} placement="bottomLeft">
        <Button>bottomLeft</Button>
      </Dropdown>
      <Dropdown menu={{ items }} placement="bottom">
        <Button>bottom</Button>
      </Dropdown>
      <Dropdown menu={{ items }} placement="bottomRight">
        <Button>bottomRight</Button>
      </Dropdown>
    </Space>
    <Space wrap>
      <Dropdown menu={{ items }} placement="topLeft">
        <Button>topLeft</Button>
      </Dropdown>
      <Dropdown menu={{ items }} placement="top">
        <Button>top</Button>
      </Dropdown>
      <Dropdown menu={{ items }} placement="topRight">
        <Button>topRight</Button>
      </Dropdown>
    </Space>
  </Space>
);

export default App;
```

3. Create a dropdown-on-hover feature in React using Ant Design 5 with a menu.
```jsx
import React from 'react';
import { DownOutlined, UserOutlined } from '@ant-design/icons';
import type { MenuProps } from 'antd';
import { Button, Dropdown, message, Space, Tooltip } from 'antd';

const handleButtonClick = (e: React.MouseEvent<HTMLButtonElement>) => {
  message.info('Click on left button.');
  console.log('click left button', e);
};

const handleMenuClick: MenuProps['onClick'] = (e) => {
  message.info('Click on menu item.');
  console.log('click', e);
};

const items: MenuProps['items'] = [
  {
    label: '1st menu item',
    key: '1',
    icon: <UserOutlined />,
  },
  {
    label: '2nd menu item',
    key: '2',
    icon: <UserOutlined />,
  },
  {
    label: '3rd menu item',
    key: '3',
    icon: <UserOutlined />,
    danger: true,
  },
  {
    label: '4th menu item',
    key: '4',
    icon: <UserOutlined />,
    danger: true,
    disabled: true,
  },
];

const menuProps = {
  items,
  onClick: handleMenuClick,
};

const App: React.FC = () => (
  <Space wrap>
    <Dropdown.Button menu={menuProps} onClick={handleButtonClick}>
      Dropdown
    </Dropdown.Button>
    <Dropdown.Button menu={menuProps} placement="bottom" icon={<UserOutlined />}>
      Dropdown
    </Dropdown.Button>
    <Dropdown.Button menu={menuProps} onClick={handleButtonClick} disabled>
      Dropdown
    </Dropdown.Button>
    <Dropdown.Button
      menu={menuProps}
      buttonsRender={([leftButton, rightButton]) => [
        <Tooltip title="tooltip" key="leftButton">
          {leftButton}
        </Tooltip>,
        React.cloneElement(rightButton as React.ReactElement<any, string>, { loading: true }),
      ]}
    >
      With Tooltip
    </Dropdown.Button>
    <Dropdown menu={menuProps}>
      <Button>
        <Space>
          Button
          <DownOutlined />
        </Space>
      </Button>
    </Dropdown>
    <Dropdown.Button menu={menuProps} onClick={handleButtonClick} danger>
      Danger
    </Dropdown.Button>
  </Space>
);

export default App;
```

![dropdown on hover with menu](https://frontendshape.com/wp-content/uploads/2024/05/rqeaV5CBU6Hb6XgsEKt42oFbjQtWF9CNP1hGPnYm.png)
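When the menu entries come from data rather than being written out by hand, the `items` array can be generated. A minimal sketch; the `Item` interface below is a simplified, locally defined stand-in for the entries of antd's `MenuProps['items']`, so the example stays self-contained:

```typescript
// Simplified stand-in for an antd menu item (label kept as a plain string here).
interface Item {
  key: string;
  label: string;
  danger?: boolean;
  disabled?: boolean;
}

// Build dropdown items from plain data, mirroring the hand-written arrays above.
function buildItems(labels: string[]): Item[] {
  return labels.map((label, i) => ({ key: String(i + 1), label }));
}

const generated = buildItems(['1st menu item', '2nd menu item', '3rd menu item']);
console.log(generated);
```

An array built this way can be passed straight to `<Dropdown menu={{ items }}>` once the `label` values are JSX or strings antd accepts.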
aaronnfs
1,868,053
Using Pieces: A Technical Writer's Perspective
Hello! I’m super excited to share a new tool I've been using recently that has significantly boosted...
0
2024-06-04T18:56:23
https://dev.to/elliezub/using-pieces-a-technical-writers-perspective-39ip
productivity, beginners, webdev, ai
Hello! I’m super excited to share a new tool I've been using recently that has significantly boosted my productivity and made my writing process a lot smoother. As you might have guessed from the title, I’ll be talking about [Pieces](https://pieces.app/?utm_source=youtube&utm_medium=cpc&utm_campaign=ellie-partner-twitter) (and I'm not talking about puzzle pieces).

To give you a bit of background on why I'm writing this article, I'll tell you how I found out about Pieces. The first time I came across Pieces was a few weeks ago in one of their Twitter spaces. During the space, I got the impression that the team working on Pieces is **really passionate** about what they are doing, and that stuck in my mind.

Later on, I got even more interested after a coffee chat with the amazing [Sophia Iroegbu](https://x.com/sophiairoegbu_). She talked about her time **writing articles for Pieces** and also her experience **working as a DevRel intern**. After hearing about her positive experiences, I got really curious, so I decided to download Pieces and try it out for myself.

That said, let's first learn a little bit about Pieces.

## What even is Pieces?

Well, basically, **Pieces is an AI workflow companion** (think an early version of Jarvis from Iron Man). To give you a better idea of what Pieces can do, check out this 1-minute intro video:

{% youtube aP8u95RTCGE %}

Now that you understand a bit about Pieces, I'll give you a rundown of how I normally use it.😄

<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExZG54ZDQ0N3pveWt5NjhhOHczZmkyYnU5YW1icm1raGx5cjNjbDhtayZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/EjZCMlsD3TtSOrST2X/giphy.gif">

## How I use Pieces

Over the past few weeks, I've had the chance to test out Pieces quite a bit. Mainly, I use it while writing articles and working on projects. What’s great is how well it integrates into my workflow, making me more productive and helping me stay on task.
There are several ways to use Pieces, but in this article, I'll focus on my experience with the **Pieces Copilot** (in VS Code), the **Obsidian Extension**, and the **Desktop App**. I'll walk you through how I use it, what I like about it, and what I wish it could do.

### Pieces Obsidian Extension

Recently, I've been working on an article about **the benefits of coffee chats for developers**. The [Pieces Obsidian Extension](https://docs.pieces.app/extensions-plugins/obsidian) has been super useful during the writing process.

To put things in perspective, my initial outlines for articles usually look something like this:

```
Explain Coffee Chats

Benefits of Coffee Chats for devs
- Communication
- Community
- Career

Conclusion/Tips
```

Pretty brief right? Ultimately, I need a much more thorough outline including headings and subheadings. So, to expand on what I have and brainstorm new ideas, I ask Pieces:

> Give me a better outline, and create some title ideas for the blog

Here, you can see the beginning of Pieces' response:

![Screenshot of the new outline Pieces generated](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ihtzkfca8ndhykrostmc.png)

As you can see in the screenshot, Pieces responds with a WAY better outline & even adds some suggestions for subtopics. One interesting topic that it suggested including in the article was the "origin of coffee chats". I hadn't thought about mentioning that.

Other than that, Pieces suggested some really good tips when doing coffee chats:

- Prepare questions
- Be respectful of time
- Follow up with a thank-you note

Without getting too off-topic, I really liked the new version of my outline and the fact that I didn't even need to leave the Obsidian app to create it.

### Pieces Copilot

Another thing that I've been working on lately is my new portfolio site.
It's been really interesting to go through my code and ask [Pieces Copilot](https://docs.pieces.app/extensions-plugins/vscode) how to make it more accessible:

![Screenshot using the Pieces Copilot, asking how to make the code more accessible](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0hzwkxp4h93f13cpfzr1.png)

Here, Pieces gives me a list of about 10 tips, such as adding alt text, using semantic HTML, and ensuring good color contrast (for context, my entire website is purple). Then, it provides specific code snippets and suggested improvements.

Honestly, I sometimes overlook making things accessible from the start and end up addressing it as one of my final tasks (I know, I'm terrible). So, for me, using Pieces in this way is extremely helpful.

### The Desktop App

Last but not least, let's talk about the [Pieces Desktop App](https://docs.pieces.app/installation-getting-started/what-am-i-installing). The really cool thing about the desktop app is the **Live Context** feature.

[Live Context](https://docs.pieces.app/product-highlights-and-benefits/live-context) essentially gives Pieces the ability to look at the other things that you are working on. This comes in handy when you are working on a larger project with a bunch of files. Instead of repeatedly copying & pasting code, you could just ask Pieces directly about whatever repo or file you need help with.

<img width="100%" style="width:100%" src="https://i.giphy.com/media/v1.Y2lkPTc5MGI3NjExeTV1bGlhNmxrMjV2ZWg3eTA2d3JsOGRlb3YwZTFxeDFpbjQzbnBvcyZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/3oEjHYibHwRL7mrNyo/giphy.gif">

That said, let's see how **Live Context** works:

![Screenshot of turning the live context on](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tmzzok4rjx9a33mvaxxn.png)

First, we need to turn on Live Context. Then, to give you an idea of what it can access, I ask it where I left off with my work.
![Screenshot of a message asking Pieces where I left off with my work](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dy2wibe5e3jh4r4m5dz8.png)

As you can see, recently I've been working on my new portfolio site, which I haphazardly named Ellies-blog-links. Normally I wouldn't ask Pieces about where I left off, but I think this is a good way to demonstrate that Pieces is actually keeping tabs on what I do in VS Code.

Now that I know that Pieces is aware of what I'm working on, I ask it to review my entire repo and check for accessibility issues:

![Screenshot of accessibility recommendations from Pieces desktop app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xcs301b12hbsa8mdsm2e.png)

Then, Pieces responds with some accessibility tips, links to helpful resources, & specific instances where I could improve my code. This is similar to the previous example using the Copilot, but this time Pieces considers the accessibility of my _entire_ repo.

## Final Thoughts

Overall, Pieces has been a great addition to my workflow. I've also had a lot of fun learning about its capabilities and interacting with the Pieces community. Now, I'd like to share a bit about the things I like & the things that could use some improvement.

### Ways to Improve

- I really wish the Live Context feature existed within the Obsidian extension & VS Code (although I heard they are working on getting this integrated🤞).
- Occasionally the responses have some formatting issues, like too much space between bullet points, or suddenly switching from markdown to regular text.
- Sometimes Pieces (Copilot & Desktop App) changes my code a bit _too_ much (for example, it might make quite a few changes outside of my original request, like variable name changes) and sometimes it doesn't make enough changes. But to be fair, ChatGPT has the same issue. I assume that once Pieces has more time to learn about me and how I work, it will get more intuitive.
### Things I Love

- Love the fact I don't need to exit whatever app I'm using to ask it questions (Obsidian, VS Code).
- It really picks up on your personal writing style due to the Live Context, so I would say it's much better at editing than using ChatGPT directly.
- The **Live Context** feature is very useful; I love how Pieces can look at ALL the files in my repo at once.
- The Pieces Team is SUPER responsive to feedback of any kind. They respond really quickly if a user is facing any kind of issue & then they try their best to fix it.

---

Well, that's all for this post. If you would like to try Pieces for yourself, you can download it for free [here](https://pieces.app/?utm_source=youtube&utm_medium=cpc&utm_campaign=ellie-partner-twitter). I would love to know what you think!

If you have any questions about Pieces or want to connect with me, the best place to reach me is on [Twitter/X.](https://x.com/elliezub)

Happy coding!
elliezub
1,876,993
Effortless Django & React: Introducing Reactivated
Django, a powerful Python web framework, excels in backend development. But for interactive user...
0
2024-06-04T18:53:00
https://dev.to/topunix/effortless-django-react-introducing-reactivated-218f
react, django, webdev, python
Django, a powerful Python web framework, excels in backend development. But for interactive user interfaces, you might find yourself reaching for a separate frontend framework like React. Here's where Reactivated comes in: a game-changer for building Django applications with React.

**Reactivated: Zero-Configuration Django and React**

Imagine using Django for the robust backend you love, and React for the dynamic frontend you crave, all without the usual configuration headaches. That's the magic of Reactivated. It seamlessly integrates Django and React, eliminating the need for complex tooling and setup.

**Key Benefits of Reactivated:**

* **Seamless Integration:** Reactivated makes it easier to incorporate React components directly into Django templates.
* **Server-Side Rendering:** It supports SSR, improving performance and SEO by rendering React components on the server before sending them to the client.
* **Simplified Development:** By combining Django and React more tightly, you can avoid the complexity of managing a separate API and front-end framework.
* **Leveraging Django Features:** One of the key advantages of using Reactivated is the ability to utilize Django's powerful features such as forms, formsets, and views directly with React components. This provides a unified development experience and reduces the need for redundant code.

**Who Should Consider Reactivated?**

* **Django Developers:** If you're comfortable with Django but want to explore React for a more engaging frontend, Reactivated provides a smooth learning curve.
* **React Developers:** You can leverage your React expertise and seamlessly integrate it with the power of Django's backend.
* **Newcomers to Both:** Reactivated offers a gentle introduction to both frameworks, making it a great starting point for full-stack development.

**Traditional Approach: Django REST Framework (DRF)**

The prevalent method for using React with Django involves Django REST Framework (DRF).
DRF provides tools for building RESTful APIs, which serve as the communication layer between your Django backend and React frontend. Here's a breakdown of this approach:

* **Separate Codebases:** You'll maintain separate codebases for your Django backend and React frontend, requiring more coordination and potentially duplicating logic.
* **API Design:** You'll need to meticulously design your API endpoints to provide the data your React application needs.
* **Flexibility and Customization:** DRF offers extensive control over API functionality, allowing for complex use cases.

**Reactivated vs. Django REST Framework (DRF):**

| Feature              | Reactivated                       | Django REST Framework (DRF)        |
|----------------------|-----------------------------------|------------------------------------|
| Setup Complexity     | Minimal configuration             | More configuration required        |
| Code Maintainability | Focuses on single codebase        | Maintains separate codebases       |
| Type Safety          | Supports TypeScript or Mypy       | Requires additional libraries      |
| Learning Curve       | Easier for beginners              | Steeper learning curve             |
| Ideal Use Cases      | Simpler applications, faster MVPs | Complex APIs, fine-grained control |

**Choosing the Right Approach**

Both Reactivated and DRF have their strengths. Here's a quick guide to help you decide:

* **Choose Reactivated for:** Simpler projects, rapid prototyping, focus on developer experience, and preference for a unified codebase.
* **Choose DRF for:** Complex applications requiring fine-grained control over API interactions, and existing experience with building RESTful APIs.

**Getting Started**

* **Reactivated:** Head over to the Reactivated website [https://www.reactivated.io/](https://www.reactivated.io/) for detailed documentation and tutorials.
* **Django REST Framework:** Refer to the official DRF documentation [https://www.django-rest-framework.org/tutorial/quickstart/](https://www.django-rest-framework.org/tutorial/quickstart/) for comprehensive guides and examples.
**Reactivated offers a refreshing approach to building Django applications with React. Its focus on simplicity, type safety, and developer productivity makes it a valuable tool for both experienced and aspiring full-stack developers. However, DRF remains a powerful option for more intricate API requirements.**
topunix
1,877,005
Building An E-Commerce Store With NextJS
In this tutorial, you'll learn how to build an e-commerce store where customers can purchase products...
0
2024-06-04T18:52:25
https://novu.co/blog/building-an-e-commerce-store-with-nextjs/
webdev, javascript, tutorial, programming
In this tutorial, you'll learn how to build an e-commerce store where customers can purchase products and make payments via Stripe. After a successful payment, an email notification is sent to the customer, and an in-app notification to the Admin user. The Admin user can also create and delete products within the application.

To build this application, we'll use the following tools:

- [Appwrite](https://appwrite.io/) - for authenticating users, as well as saving and retrieving product details.
- [Next.js](https://nextjs.org/) - for creating the application’s user interface and backend.
- [Novu](https://docs.novu.co/getting-started/introduction) - for sending email and in-app notifications.
- [React Email](https://react.email/docs/introduction) - for creating email templates.
- [Stripe](https://docs.stripe.com/) - for integrating a payment checkout to the application.

![app](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t02iyysrqjfqw8imuqxn.png)

---

## Building the application interface with Next.js

The application pages are divided into two parts based on the roles assigned to the users. Customers can access the Home page and sign in to the application before making payments. Admin users can access all pages, including a sign-in page and a dashboard page where they can add and remove products.

Now, let's build the application.

![https://media1.giphy.com/media/iopxsZtW2QVRs4poEC/giphy.gif?cid=7941fdc6aot3qt7vvq4voh5c1iagyusdpuga713m8ljqcqmd&ep=v1_gifs_search&rid=giphy.gif&ct=g](https://media1.giphy.com/media/iopxsZtW2QVRs4poEC/giphy.gif?cid=7941fdc6aot3qt7vvq4voh5c1iagyusdpuga713m8ljqcqmd&ep=v1_gifs_search&rid=giphy.gif&ct=g)

Create a new Next.js TypeScript project by running the code snippet below:

```bash
npx create-next-app novu-store
```

Next, install [React Icons](https://react-icons.github.io/react-icons) and the [Headless UI](https://headlessui.com/) package.
React Icons allows us to use various icons within the application, while Headless UI provides easy-to-use modern UI components.

```bash
npm install react-icons @headlessui/react
```

Copy this code snippet from the [GitHub repository](https://github.com/dha-stix/ecom-store-with-nextjs-appwrite-novu-and-stripe/blob/main/src/app/page.tsx) into the `app/page.tsx` file. It renders a list of products on the screen and allows users to select items in a cart, similar to the image below.

![1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dj69givzhqfapgsg12rk.gif)

Create a login route that enables users to sign in using their GitHub account. Copy the code snippet below into the `app/login/page.tsx` file.

```tsx
//👉🏻 create a login folder containing a page.tsx file

export default function Home() {
  const handleGoogleSignIn = async () => {};

  return (
    <main className='w-full min-h-screen flex flex-col items-center justify-center'>
      <h2 className='font-semibold text-3xl mb-2'>Customer Sign in</h2>
      <p className='mb-4 text-sm text-red-500'>
        You need to sign in before you can make a purchase
      </p>
      <button
        className='p-4 border-[2px] border-gray-500 rounded-md hover:bg-black hover:text-white w-2/3'
        onClick={() => handleGoogleSignIn()}
      >
        Sign in with GitHub
      </button>
    </main>
  );
}
```

![Customer Sign In](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3nh2rowpfg4hgksj5diy.png)

When users click the Sign in button, it redirects them to the GitHub authentication page and prompts them to sign in to the application. You'll learn how to do this with [Appwrite](https://appwrite.io/) shortly.

Next, let's create the admin pages. Add an `admin` folder containing a `login` and `dashboard` route within the `app` folder.

```bash
cd app
mkdir admin && cd admin
mkdir dashboard login
```

Add a `page.tsx` file within the `dashboard` and `login` folders, and copy the code snippet below into the `login/page.tsx` file.
```tsx
"use client";
import Link from "next/link";
import { useState } from "react";

export default function Login() {
  const [email, setEmail] = useState<string>("");
  const [password, setPassword] = useState<string>("");

  const handleLogin = async (e: React.FormEvent) => {
    e.preventDefault();
    console.log({ email, password });
  };

  return (
    <main className='w-full min-h-screen flex flex-col items-center justify-center'>
      <h2 className='font-semibold text-3xl mb-4'>Admin Sign in</h2>
      <form className='w-2/3' onSubmit={handleLogin}>
        <label htmlFor='email' className='block'>
          Email
        </label>
        <input
          type='email'
          id='email'
          className='w-full px-4 py-3 border border-gray-400 rounded-sm mb-4'
          required
          value={email}
          placeholder='admin@admin.com'
          onChange={(e) => setEmail(e.target.value)}
        />
        <label htmlFor='password' className='block'>
          Password
        </label>
        <input
          type='password'
          id='password'
          className='w-full px-4 py-3 border border-gray-400 rounded-sm mb-4'
          required
          value={password}
          placeholder='admin123'
          onChange={(e) => setPassword(e.target.value)}
        />
        <button className='p-4 text-lg mb-3 bg-blue-600 text-white w-full rounded-md'>
          Sign in
        </button>
        <p className='text-sm text-center'>
          Not an Admin?{" "}
          <Link href='/login' className='text-blue-500'>
            Sign in as a Customer
          </Link>
        </p>
      </form>
    </main>
  );
}
```

The code snippet above renders a form that accepts the Admin's email and password, validates the credentials, and then logs the user into the application.

![Admin Sign In](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gjd9wsi63t96d5cls9om.png)

The Admin dashboard page renders the available products and allows the Admin user to add and delete products from the application. Copy [this code snippet](https://github.com/dha-stix/ecom-store-with-nextjs-appwrite-novu-and-stripe/blob/main/src/app/admin/dashboard/page.tsx) into the `dashboard/page.tsx` file to create the user interface.
![2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p1gd1uq1eq6n76fesjxu.gif)

Congratulations! You've built the application interface. In the upcoming sections, you'll learn how to connect the application to an Appwrite backend and send data between the client and the server.

---

## How to add Appwrite to a Next.js application

Appwrite is an open-source backend service that enables you to create secure and scalable software applications. It offers features such as multiple authentication methods, a secure database, file storage, cloud messaging, and more, which are essential for building full-stack applications.

In this section, you'll learn how to set up an Appwrite project, including features such as authentication, database, and file storage.

First, visit [Appwrite Cloud](https://cloud.appwrite.io/register), and create an account and organization for your projects. Next, create a new project and select your preferred region for hosting the project.

![Appwrite 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/as6302olk60oklfo70x5.png)

Select `Web` as the platform SDK for the application.

![Appwrite 2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bb5ae82i9fyoyrowsy96.png)

Follow the steps displayed on the screen. Since you're currently building in development mode, you can use the wildcard (`*`) as your hostname and change it to your domain name after deploying the application.

![Appwrite 3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y5ccs0hzgs9ujf5lzh83.png)

Install the Appwrite client SDK within your Next.js project.

```bash
npm install appwrite
```

Finally, create an `appwrite.ts` file within your Next.js app folder and copy the code snippet below into the file to initialize Appwrite.
```tsx
import { Client, Account, Databases, Storage } from "appwrite";

const client = new Client();

client
  .setEndpoint("https://cloud.appwrite.io/v1")
  .setProject(<YOUR_PROJECT_ID>);

export const account = new Account(client);
export const db = new Databases(client);
export const storage = new Storage(client);
```

### Setting up GitHub Authentication with Appwrite

Here, you'll learn how to set up GitHub and Email/Password authentication with Appwrite. Email/Password authentication is already configured by default, so let's focus on setting up GitHub authentication.

Before we proceed, you need to create a [GitHub OAuth application](https://github.com/settings/developers) using your GitHub account. Appwrite will require the client ID and secret to set up GitHub authentication.

![GitHub 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9znk1yr7tffus7soitq2.png)

Enable Appwrite's GitHub authentication method by selecting `Auth` from the sidebar menu and navigating to the `Settings` tab.

![Appwrite 4](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/43uo6nho1bz9su14zsno.png)

Copy your GitHub client ID and secret into Appwrite's GitHub OAuth settings.

Finally, ensure you copy the URI generated by Appwrite into your GitHub app settings.

![GitHub 2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g75q5r5hc6l5pi09k88m.png)

### Setting up Appwrite Database

Select Databases from the sidebar menu and create a new database. You can name it `novu store`.

![Appwrite 5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y7kn1llmu7olqirfcrpa.png)

Next, create a `products` collection. It will contain the lists of products within the application.
![Appwrite 6](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7p8laty6z37x0q1g6az4.png)

Add name, price, and image attributes to the collection.

![Appwrite 7](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nzom3ptlz8t1rh9dtt1k.png)

Under the Settings tab, update the permissions to allow every user to perform CRUD operations. However, you can change this after deploying the application to ensure that only authenticated users can perform various actions.

![Appwrite 8](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/37cqr8s0crtcttocjagk.png)

Finally, copy your project, database, and collection IDs into an `.env.local` file. This keeps your credentials safe and allows you to reference each value from its environment variables.

```
NEXT_PUBLIC_PROJECT_ID=<YOUR_PROJECT_ID>
NEXT_PUBLIC_DB_ID=<YOUR_DATABASE_ID>
NEXT_PUBLIC_PRODUCTS_COLLECTION_ID=<YOUR_DB_COLLECTION_ID>
```

### Setting up Appwrite Storage

Select `Storage` from the sidebar menu and create a new bucket that will hold all the product images.

![Appwrite 9](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b84t9mk3k0wrkgiy4uca.png)

Under the `Settings` tab, update the permissions to allow any user for now.

![Appwrite 10](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zi3iozkaera7fohkwanm.png)

Set the acceptable file formats. Since we are uploading images, you can select the `.jpg` and `.png` file formats.

![Appwrite 11](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zzxqpoq5dcpokkvdcsce.png)

Finally, copy your bucket ID into the `.env.local` file.

```
NEXT_PUBLIC_BUCKET_ID=<YOUR_BUCKET_ID>
```

Congratulations! You've successfully configured Appwrite. We can now start interacting with its various features.

---

## How to perform CRUD operations with Appwrite

In this section, you'll learn how to create, retrieve, and delete products from Appwrite.
Users need to be able to view existing products before making a purchase, while Admin users should have permission to add and delete products from the application.

First, create a `utils.ts` file within the Next.js `app` folder. This file will contain all Appwrite database interactions, which you can then import into the necessary pages.

```bash
cd app
touch utils.ts
```

### Saving products to Appwrite

Recall that the `products` collection has three attributes: name, image, and price. Therefore, when adding products to the database, you need to first upload the product's image, retrieve its URL and ID from the response, and then save the URL as the product's image attribute, using the image's storage ID as the product document's ID.

Here is the code snippet that explains this:

```tsx
import { db, storage } from "@/app/appwrite";
import { ID } from "appwrite";

export const createProduct = async (
  productTitle: string,
  productPrice: number,
  productImage: any
) => {
  try {
    //👇🏻 upload the image
    const response = await storage.createFile(
      process.env.NEXT_PUBLIC_BUCKET_ID!,
      ID.unique(),
      productImage
    );

    //👇🏻 get the image's URL
    const file_url = `https://cloud.appwrite.io/v1/storage/buckets/${process.env.NEXT_PUBLIC_BUCKET_ID}/files/${response.$id}/view?project=${process.env.NEXT_PUBLIC_PROJECT_ID}&mode=admin`;

    //👇🏻 add the product to the database
    await db.createDocument(
      process.env.NEXT_PUBLIC_DB_ID!,
      process.env.NEXT_PUBLIC_PRODUCTS_COLLECTION_ID!,
      response.$id, //👉🏻 use the image's ID
      {
        name: productTitle,
        price: productPrice,
        image: file_url,
      }
    );

    alert("Product created successfully");
  } catch (err) {
    console.error(err);
  }
};
```

The code snippet above uploads the image to Appwrite's cloud storage and retrieves the exact image URL using the bucket ID, image ID, and project ID. Once the image is successfully uploaded, its ID is used in the product's data to enable easy retrieval and reference.
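Since the file-view URL used in `createProduct` is just string interpolation over three IDs, it can be factored into a small helper to avoid repeating the template. This is an illustrative sketch; the `buildFileViewUrl` name is mine and is not part of the tutorial's code:

```typescript
// Hypothetical helper that reproduces the Appwrite file-view URL
// pattern built inline in createProduct above.
function buildFileViewUrl(bucketId: string, fileId: string, projectId: string): string {
  return `https://cloud.appwrite.io/v1/storage/buckets/${bucketId}/files/${fileId}/view?project=${projectId}&mode=admin`;
}

// Example usage with placeholder IDs:
console.log(buildFileViewUrl("bucket123", "file456", "proj789"));
```

Centralizing the template like this means a change to the URL scheme (or dropping `mode=admin` in production) only needs to happen in one place.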
### Retrieving products from Appwrite

To fetch the products from Appwrite, you can execute the function below within the React `useEffect` hook when the page loads.

```tsx
export const fetchProducts = async () => {
  try {
    const products = await db.listDocuments(
      process.env.NEXT_PUBLIC_DB_ID!,
      process.env.NEXT_PUBLIC_PRODUCTS_COLLECTION_ID!
    );
    if (products.documents) {
      return products.documents;
    }
  } catch (err) {
    console.error(err);
  }
};
```

The `fetchProducts` function returns all the data within the `products` collection.

### Deleting products from Appwrite

Admin users can also delete a product via its ID. The `deleteProduct` function accepts the product's ID as a parameter and deletes the selected product from the database, including its image, since they use the same ID attribute.

```tsx
export const deleteProduct = async (id: string) => {
  try {
    await db.deleteDocument(
      process.env.NEXT_PUBLIC_DB_ID!,
      process.env.NEXT_PUBLIC_PRODUCTS_COLLECTION_ID!,
      id
    );
    await storage.deleteFile(process.env.NEXT_PUBLIC_BUCKET_ID!, id);
    alert("Product deleted successfully");
  } catch (err) {
    console.error(err);
  }
};
```

---

## How to authenticate users with Appwrite

In the previous sections, we configured the GitHub authentication method. Here, you'll learn how to handle user sign-ins into the application.

To enable customers to sign in to the application using their GitHub account, execute the function below when they click the `Sign in` button. The function redirects the user to GitHub, where they can authorize or grant permission to the application and then sign in to the application:

```tsx
import { account } from "../appwrite";
import { OAuthProvider } from "appwrite";

const handleGoogleSignIn = async () => {
  try {
    account.createOAuth2Session(
      OAuthProvider.Github,
      "http://localhost:3000",
      "http://localhost:3000/login"
    );
  } catch (err) {
    console.error(err);
  }
};
```

Admin users can sign into the application using an email and password.
Appwrite validates the credentials before granting access to the application's dashboard. ```tsx import { account } from "@/app/appwrite"; const handleLogin = async (e: React.FormEvent) => { e.preventDefault(); try { await account.createEmailPasswordSession(email, password); alert(`Welcome back 🎉`); router.push("/admin/dashboard"); } catch (err) { console.error(err); alert("Invalid credentials ❌"); } }; ``` Appwrite also allows you to fetch the current user's data. For instance, if only authenticated users can make payments, you can do this by running the code snippet below. It retrieves the current user's data or returns null if the user is not logged in. ```tsx import { account } from "@/app/appwrite"; useEffect(() => { const checkAuthStatus = async () => { try { const request = await account.get(); setUser(request); } catch (err) { console.log(err); } }; checkAuthStatus(); }, []); ``` --- ## How to add Stripe payment checkout to Next.js In this section, you'll learn how to implement a Stripe payment checkout in the application. Stripe is a popular online payment processing platform that enables you to create products and integrate both one-time and recurring payment methods into your application. First, you need to [create a Stripe account](https://dashboard.stripe.com/login). You can use a test mode account for this tutorial. ![Stripe 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nibs7bxb09i167mxm918.png) Click on `Developers` from the top menu and copy your secret key from the API keys menu. ![Stripe 2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/up8757knbquc3k0577ps.png) Paste your Stripe secret key into the `.env.local` file. ```bash STRIPE_SECRET_KEY=<your_secret_key> ``` Install the [Stripe Node.js SDK](https://docs.stripe.com/libraries). ```bash npm install stripe ``` Next, create an `api` folder within the Next.js `app` folder. The `api` folder will contain all the API routes and endpoints for the application. 
```bash cd app mkdir api ``` Create a `checkout` endpoint by adding a `checkout` folder within the `api` folder. ```bash cd api mkdir checkout && cd checkout touch route.ts ``` Copy the code snippet below into the `route.ts` file. ```tsx import { NextRequest, NextResponse } from "next/server"; import Stripe from "stripe"; const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!); export async function POST(req: NextRequest) { //👇🏻 accepts the customer's cart const cart = await req.json(); try { //👇🏻 creates a checkout session const session = await stripe.checkout.sessions.create({ payment_method_types: ["card"], line_items: cart.map((product: Product) => ({ price_data: { currency: "usd", product_data: { name: product.name, }, unit_amount: product.price * 100, }, quantity: 1, })), mode: "payment", cancel_url: `http://localhost:3000/?canceled=true`, success_url: `http://localhost:3000?success=true&session_id={CHECKOUT_SESSION_ID}`, }); //👇🏻 return the session URL return NextResponse.json({ session: session.url }, { status: 200 }); } catch (err) { return NextResponse.json({ err }, { status: 500 }); } } ``` The code snippet above creates a checkout endpoint that accepts POST requests. It creates a checkout session for the customer and returns the session URL. The **`cancel_url`** and **`success_url`** determine where to redirect the user after completing or canceling a payment. 
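One detail in the handler worth calling out is the `unit_amount` conversion: Stripe expects amounts in the smallest currency unit (cents for USD), and the value must be an integer. Here is a minimal sketch of the cart-to-`line_items` mapping as a standalone function — the helper name is my own and it is not part of the Stripe SDK:

```javascript
// Hypothetical helper: turns a cart of { name, price } products (price in
// dollars) into Stripe line items with an integer unit_amount in cents.
function toLineItems(cart) {
  return cart.map((product) => ({
    price_data: {
      currency: "usd",
      product_data: { name: product.name },
      // Math.round guards against float artifacts: 19.99 * 100 === 1998.9999...
      unit_amount: Math.round(product.price * 100),
    },
    quantity: 1,
  }));
}
```

The rounding matters because a fractional `unit_amount` would be rejected when the checkout session is created.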
Finally, you can send a customer's cart to the `/checkout` endpoint when a user decides to make payment for products by running the code snippet below: ```tsx const processPayment = async (cart: Product[]) => { try { if (user !== null) { //👇🏻 saves cart to local storage localStorage.setItem("cart", JSON.stringify(cart)); //👇🏻 sends cart to /checkout route const request = await fetch("/api/checkout", { method: "POST", body: JSON.stringify(cart), headers: { "Content-Type": "application/json" }, }); //👇🏻 retrieves the session URL const { session } = await request.json(); //👇🏻 redirects the user to the checkout page window.location.assign(session); } else { //👇🏻 redirects unauthenticated users router.push("/login"); } } catch (err) { console.error(err); } }; ``` The code snippet above saves the cart to the browser's local storage and sends it to the API endpoint, then retrieves the response (session URL) from the backend server and redirects the user to the Stripe checkout page. ![Stripe 3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/i5hokf2qyqyey3kwsg9x.gif) --- ## Sending in-app and email notifications with Novu [Novu](https://github.com/novuhq/novu) is the first notification infrastructure that provides a unified API for sending notifications through multiple channels, including In-App, Push, Email, SMS, and Chat. In this section, you'll learn how to add Novu to your application to enable you to send email and in-app messages. First, install the required Novu packages: ```bash npm install @novu/node @novu/echo @novu/notification-center ``` When users make a purchase, they will receive a payment confirmation email, and the admin user also receives an in-app notification. To do this, you need to [create an account on Novu](https://web.novu.co/auth/login) and set up a primary email provider. We'll use [Resend](https://resend.com/docs/introduction) for this tutorial. 
After creating an account on Novu, create a [Resend account](https://resend.com/docs/introduction), and select `API Keys` from the sidebar menu on your dashboard to create one.

![Resend 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jhehx7s45x180zpir1ti.png)

Next, return to your Novu dashboard, select `Integrations Store` from the sidebar menu, and add Resend as an email provider. You'll need to paste your Resend API key and email address into the required fields.

![Novu 1](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f03vb6nftyi8g790vg7m.png)

Select **Settings** from the sidebar menu and copy your `Novu API` key and `App ID` into the **`.env.local`** file as shown below. Also, copy your `subscriber ID` into its field - you can get this from the `Subscribers` section.

```bash
NOVU_API_KEY=<YOUR_API_FOR_NEXT_SERVER>
NEXT_PUBLIC_NOVU_API_KEY=<YOUR_API_FOR_NEXT_CLIENT>
NEXT_PUBLIC_NOVU_APP_ID=<YOUR_API_ID>
NOVU_SUBSCRIBER_ID=<YOUR_API_FOR_NEXT_SERVER>
NEXT_PUBLIC_NOVU_SUBSCRIBER_ID=<YOUR_API_FOR_CLIENT>
```

![Novu 2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/voeofvvtv88pex9rpr1s.png)

Finally, add the Novu notification bell to the Admin dashboard to enable admin users to receive notifications within the application.
```tsx import { NovuProvider, PopoverNotificationCenter, NotificationBell, } from "@novu/notification-center"; export default function AdminNav() { return ( <NovuProvider subscriberId={process.env.NEXT_PUBLIC_NOVU_SUBSCRIBER_ID!} applicationIdentifier={process.env.NEXT_PUBLIC_NOVU_APP_ID!} > <PopoverNotificationCenter colorScheme='light'> {({ unseenCount }) => <NotificationBell unseenCount={unseenCount} />} </PopoverNotificationCenter> </NovuProvider> ); } ``` ![Dashboard](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m62ft87ue9orse2yww9z.png) --- ## How to create notification workflows with Novu Echo [Novu](https://docs.novu.co/echo/quickstart) offers a code-first workflow engine that enables you to create notification workflows within your codebase. It allows you to integrate email, SMS, and chat template and content generators, such as [React Email](https://react.email/docs/introduction) and [MJML](https://mjml.io/), into Novu to create advanced and powerful notifications. In this section, you'll learn how to create notification workflows within your application, use email notification templates with Novu, and send in-app and email notifications with Novu. Install [React Email](https://react.email/docs/introduction) by running the following command: ```bash npm install react-email @react-email/components -E ``` Include the following script in your package.json file. The `--dir` flag gives React Email access to the email templates located within the project. In this case, the email templates are located in the `src/emails` folder. 
```json { "scripts": { "email": "email dev --dir src/emails" } } ``` Next, create an `emails` folder containing an `email.tsx` within the Next.js `app` folder and copy the code snippet below into the file: ```tsx import { Body, Column, Container, Head, Heading, Hr, Html, Link, Preview, Section, Text, Row, render, } from "@react-email/components"; import * as React from "react"; const EmailTemplate = ({ message, subject, name, }: { message: string; subject: string; name: string; }) => ( <Html> <Head /> <Preview>{subject}</Preview> <Body style={main}> <Container style={container}> <Section style={header}> <Row> <Column style={headerContent}> <Heading style={headerContentTitle}>{subject}</Heading> </Column> </Row> </Section> <Section style={content}> <Text style={paragraph}>Hey {name},</Text> <Text style={paragraph}>{message}</Text> </Section> </Container> <Section style={footer}> <Text style={footerText}> You&apos;re receiving this email because your subscribed to Newsletter App </Text> <Hr style={footerDivider} /> <Text style={footerAddress}> <strong>Novu Store</strong>, &copy;{" "} <Link href='https://novu.co'>Novu</Link> </Text> </Section> </Body> </Html> ); export function renderEmail(inputs: { message: string; subject: string; name: string; }) { return render(<EmailTemplate {...inputs} />); } const main = { backgroundColor: "#f3f3f5", fontFamily: "HelveticaNeue,Helvetica,Arial,sans-serif", }; const headerContent = { padding: "20px 30px 15px" }; const headerContentTitle = { color: "#fff", fontSize: "27px", fontWeight: "bold", lineHeight: "27px", }; const paragraph = { fontSize: "15px", lineHeight: "21px", color: "#3c3f44", }; const divider = { margin: "30px 0", }; const container = { width: "680px", maxWidth: "100%", margin: "0 auto", backgroundColor: "#ffffff", }; const footer = { width: "680px", maxWidth: "100%", margin: "32px auto 0 auto", padding: "0 30px", }; const content = { padding: "30px 30px 40px 30px", }; const header = { borderRadius: "5px 5px 0 0", 
  display: "flex",
  flexDirection: "column",
  backgroundColor: "#2b2d6e",
};

const footerDivider = {
  ...divider,
  borderColor: "#d6d8db",
};

const footerText = {
  fontSize: "12px",
  lineHeight: "15px",
  color: "#9199a1",
  margin: "0",
};

const footerLink = {
  display: "inline-block",
  color: "#9199a1",
  textDecoration: "underline",
  fontSize: "12px",
  marginRight: "10px",
  marginBottom: "0",
  marginTop: "8px",
};

const footerAddress = {
  margin: "4px 0",
  fontSize: "12px",
  lineHeight: "15px",
  color: "#9199a1",
};
```

The code snippet above creates a customizable email template using React Email. You can find more [easy-to-edit inspirations or templates](https://demo.react.email/preview/notifications/vercel-invite-user). The component also accepts a message, subject, and name as props, and fills them into the elements.

Finally, you can run `npm run email` in your terminal to preview the template.

Next, let's integrate the email template with Novu Echo. First, close the React Email server, and run the code snippet below. It opens the [Novu Dev Studio](https://docs.novu.co/echo/concepts/studio) in your browser.

```bash
npx novu-labs@latest echo
```

Create an `echo` folder containing a `client.ts` file within the Next.js app folder and copy this code snippet into the file.

```tsx
import { Echo } from "@novu/echo";
import { renderEmail } from "@/app/emails/email";

interface EchoProps {
  step: any;
  payload: {
    subject: string;
    message: string;
    name: string;
    totalAmount: string;
  };
}

export const echo = new Echo({
  apiKey: process.env.NEXT_PUBLIC_NOVU_API_KEY!,
  devModeBypassAuthentication: process.env.NODE_ENV === "development",
});

echo.workflow(
  "novu-store",
  async ({ step, payload }: EchoProps) => {
    //👇🏻 in-app notification step
    await step.inApp("notify-admin", async () => {
      return {
        body: `${payload.name} just made a new purchase of ${payload.totalAmount} 🎉`,
      };
    });
    //👇🏻 email notification step
    await step.email(
      "email-customer",
      async () => {
        return {
          subject: `${payload ?
payload?.subject : "No Subject"}`, body: renderEmail(payload), }; }, { inputSchema: { type: "object", properties: {}, }, } ); }, { payloadSchema: { type: "object", properties: { message: { type: "string", default: "Congratulations! Your purchase was successful! 🎉", }, subject: { type: "string", default: "Message from Novu Store" }, name: { type: "string", default: "User" }, totalAmount: { type: "string", default: "0" }, }, required: ["message", "subject", "name", "totalAmount"], additionalProperties: false, }, } ); ``` The code snippet defines a Novu notification workflow named `novu-store`, which accepts a payload containing the email subject, message, the customer's name and the total amount. The workflow has two steps: in-app and email notification. The in-app notification sends a message to the Admin using the notification bell and the email sends a message to the customer’s email. Next, you need to create an API route for Novu Echo. Within the `api` folder, create an `email` folder containing a `route.ts` file and copy the provided code snippet below into the file. ```tsx import { serve } from "@novu/echo/next"; import { echo } from "@/app/echo/client"; export const { GET, POST, PUT } = serve({ client: echo }); ``` Run `npx novu-labs@latest echo` in your terminal. It will automatically open the Novu Dev Studio where you can preview your workflow and Sync it with the Cloud. ![Novu 3](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ed2sl38m7zrlgjoj4a6y.gif) The `Sync to Cloud` button triggers a pop-up that provides instructions on how to push your workflow to the Novu Cloud. ![Novu 4](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8ch8ba7y9klyudmmv9jz.png) To proceed, run the following code snippet in your terminal. This will generate a unique URL representing a local tunnel between your development environment and the cloud environment. 
```bash npx localtunnel --port 3000 ``` Copy the generated link along with your Echo API endpoint into the Echo Endpoint field, click the `Create Diff` button, and deploy the changes. ```bash https://<LOCAL_TUNNEL_URL>/<ECHO_API_ENDPOINT (/api/email)> ``` Congratulations! You've just created a Novu workflow from your codebase. ![Novu 5](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6bdugs6g15y1e7xeixux.png) Finally, let's create the endpoint that sends the email and in-app notifications when a user makes a payment. Create an `api/send` route and copy the code snippet below into the file: ```tsx import { NextRequest, NextResponse } from "next/server"; import { Novu } from "@novu/node"; const novu = new Novu(process.env.NOVU_API_KEY!); export async function POST(req: NextRequest) { const { email, name, totalAmount } = await req.json(); const { data } = await novu.trigger("novu-store", { to: { subscriberId: process.env.NOVU_SUBSCRIBER_ID!, email, firstName: name, }, payload: { name, totalAmount, subject: `Purchase Notification from Novu Store`, message: `Your purchase of ${totalAmount} was successful! 🎉`, }, }); console.log(data.data); return NextResponse.json( { message: "Purchase Completed!", data: { novu: data.data }, success: true, }, { status: 200 } ); } ``` The endpoint accepts the customer's email, name, and total amount paid, and triggers the Novu notification workflow to send the required notifications after a payment is successful. --- ## Conclusion So far, you've learned how to do the following: - Implement multiple authentication methods, store, and retrieve data and files from Appwrite. - Create email templates with React Email, and send in-app and email notifications with Novu. If you are looking forward to sending notifications within your applications, Novu is your best choice. With Novu, you can add multiple notification channels to your applications, including chat, SMS, email, push, and in-app notifications. 
The source code for this tutorial is available here: [https://github.com/novuhq/ecom-store-with-nextjs-appwrite-novu-and-stripe](https://github.com/novuhq/ecom-store-with-nextjs-appwrite-novu-and-stripe) Thank you for reading!
empe
1,877,003
Professional Ants Pest Removal Services in Sydney
Do you have ants intruding on your personal space within your house or business premises? If you are...
0
2024-06-04T18:48:47
https://dev.to/most_mimakther_e98efd0a3/professional-ants-pest-removal-services-in-sydney-54mo
Do you have ants intruding on your personal space within your house or business premises? If you are in search of excellent ant removal, then [Elegance Pest Control](https://elegancepestcontrol.au/) will meet your expectations. We have developed our ant pest control to be efficient, with a focus on ensuring that it is long-lasting, so that your space does not host ants again.

Ant Pest Control Solutions
Ants are a particular pain; they can disrupt activities around the house and cause damage to property. There is no arguing that ants are invasive and unwelcome in our homes and businesses, which is why our ant pest control services at Elegance Pest Control help to eradicate them quickly. Eliminating ants involves certain procedures, carried out by a team of professionals trained for the task.

Combat Ant Problems with the Best Ant Control
Are you looking for the best ant killer available in Australia? Ant control is among the various services offered by Elegance Pest Control to eliminate ants successfully. Our ant control approaches are individually assembled to ensure the methods and treatments are specific to your case.

Trustworthy Ant Control Service in Sydney
Located in Sydney? The ant pest control Sydney services provided by Elegance Pest Control are among the best in this hemisphere. We operate a service guarantee when it comes to ant pest control near you, ensuring that we offer premium services to eradicate the ants and solve any problems they may cause within the shortest time possible.

Concerning professional ant killer products, Elegance Pest Control presents the best products, suited for both indoor and outdoor use. Eliminate ants effortlessly with our guaranteed solutions and enjoy a hassle-free home, free from ants.
most_mimakther_e98efd0a3
1,875,909
How to use fetch in JavaScript
Hello, everyone. In this article, I would explain fetch in JavaScript. It is recommended to read the...
0
2024-06-04T18:40:06
https://dev.to/makoto0825/how-to-use-fetch-in-javascript-9on
javascript, webdev
Hello, everyone. In this article, I will explain fetch in JavaScript. It is recommended to read the following articles and understand the concepts of asynchronous/synchronous processing and Promises before reading this one, because these concepts are connected with fetch.

[➡async/sync in Javascript ](https://dev.to/makoto0825/asyncsync-in-javascript-438e)
[➡How to use Async/Await in Promise.](https://dev.to/makoto0825/how-to-use-asyncawait-in-promise-38hc)
[➡How to use chaining in Promise.](https://dev.to/makoto0825/how-to-use-promise-chaining-in-javascript-391c)

## What is fetch?

Fetch is a function that can retrieve local or external data. It executes asynchronously. Using this function, you can get many types of data, such as text, JSON, and images. Please look at the following example: in this case, the code gets the information from the JSON file named "sample.json".

**sample.json**
```json
{
    "name": "makoto",
    "gender": "male"
}
```

**test.html**
```html
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Document</title>
</head>
<body>
</body>
</html>
<script>
    const test = async ()=>{
       const response = await fetch('sample.json')
       const data = await response.json()
       console.log(data)
    }
    test();
</script>
```

**console**
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uv2fio0paowcko9nml9h.png)

Thus, fetch can be used to retrieve data from other files and other websites.

## How to use it

This is the basic format for fetch. `fetch('sample.json')` requests data from the file 'sample.json' and returns a Promise object representing the result of the request. And `await response.json()` extracts the JSON object (the content we want to obtain) from the response obtained from the `fetch()` request.
```html
<script>
    const test = async ()=>{
       const response = await fetch('sample.json')//get the response object
       const data = await response.json()//get the json object
       console.log(data)
    }
    test();
</script>
```

- Specify the path or URL of the file from which you want to fetch data as an argument to fetch.
- Since fetch is an asynchronous operation, you should use await on the fetch call.
- Since fetch is an asynchronous operation, add async to the calling function.
- The fetch() method returns a Promise object.

## fetch can retrieve data in a variety of formats

fetch can retrieve not only JSON but also other types of data, such as text, binary data, Blobs (images), and FormData.

```javascript
const data = await response.json();//json data
const text = await response.text();//text data
const arrayBuffer = await response.arrayBuffer();//array buffer data
const blob = await response.blob();//blob data
const formData = await response.formData();//form data
```

We have to use different methods depending on the type of data we want to retrieve. In this article, we assume that we are retrieving JSON data, so we will only use response.json().

## Example of usage when a URL is specified for fetch

By specifying an external site's URL in fetch, you can retrieve information from that site. Here, I will use the weather Web API [Open-Meteo](https://open-meteo.com/en/docs) as an example. Open-Meteo is a weather API that provides weather information. By including latitude, longitude, and other parameters in the URL, you can retrieve weather information for that location, such as precipitation probability and temperature.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/p9fhwzl5yi6ghcnhbf4z.png)

Since you need to customize the URL based on the information you want, it's important to read the site's documentation and deepen your understanding of the API.
The rules for writing URLs differ between Web APIs, so first visit the site and learn how to use the Web API. In this case, to obtain the maximum temperature for the next 7 days, I created the following URL. The location is specified as Toronto, Canada (latitude: 43.67, longitude: -79.4):

```
https://api.open-meteo.com/v1/forecast?latitude=43.67&longitude=-79.4&daily=temperature_2m_max&timezone=America/Toronto
```

- For obtaining the maximum temperature, temperature_2m_max was specified in the URL.
- 43.67 was designated for the latitude parameter.
- -79.4 was assigned to the longitude parameter.
- America/Toronto was set for the timezone parameter.

<u>This is the JavaScript</u>

```javascript
<script>
    const test = async ()=>{
        // parameters for the url
        const lat = 43.67;
        const long = -79.4;
        const tzone = 'America/Toronto';
        const response = await fetch(`https://api.open-meteo.com/v1/forecast?latitude=${lat}&longitude=${long}&daily=temperature_2m_max&timezone=${tzone}`);
        const data = await response.json()// get the json object
        console.log(data.daily.temperature_2m_max)
    }
    test();
</script>
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekn6oez0uozt4jnioj5y.png)

Using fetch with the previously mentioned URL, I was able to output the maximum temperatures to the console. By utilizing an external website's web API and fetch in this manner, it becomes feasible to retrieve external data. When specifying a URL, it's important to understand the specifications of each web API and include parameters accordingly in the URL.
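One caveat the snippets above don't cover: fetch() only rejects its Promise on network failures. An HTTP error such as 404 or 500 still resolves, so you should check response.ok yourself. Below is a minimal wrapper sketching this; the function name and the injectable fetchImpl parameter are my own additions so the logic can be exercised without a network:

```javascript
// Wraps fetch so HTTP error statuses (404, 500, ...) become thrown errors.
// fetchImpl defaults to the global fetch; it is injectable for testing.
async function fetchJson(url, fetchImpl = fetch) {
  const response = await fetchImpl(url);
  if (!response.ok) {
    throw new Error(`HTTP ${response.status} for ${url}`);
  }
  return response.json();
}
```

With a wrapper like this, a missing sample.json surfaces as a catchable error instead of a confusing JSON parse failure.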
makoto0825
1,877,001
Exception In java and hierarchy
While trying to understand the concept of exception in java , it becomes necessary to understand the...
0
2024-06-04T18:37:57
https://dev.to/rajubora/exception-in-java-and-hierarchy-53f5
While trying to understand the concept of exceptions in Java, it becomes necessary to understand the hierarchy of exceptions, because a famous interview question about the exception hierarchy comes up in Java developer interviews:

Suppose we have a parent class with a method that throws **FileNotFoundException**, and a subclass that overrides the method of the parent class but throws **IOException**. Will this code work or not?

The answer is no, because in Java an overriding method in a subclass cannot throw a broader, more general checked exception than the method in the parent class. In our scenario **IOException** is a broader exception than **FileNotFoundException** (which is a subclass of IOException), so the overriding method in the subclass is not allowed to throw **IOException**.

Let us understand with an example:

**We have a parent class:**

```
import java.io.FileNotFoundException;
import java.io.IOException;

class ParentClass {
    // Method that throws a narrower exception
    public void Method() throws FileNotFoundException {
        System.out.println("IncorrectParentClass: Method");
        if (true) {
            throw new FileNotFoundException("FileNotFoundException from ParentClass");
        }
    }
}
```

**Now we inherit all the features of the superclass in our subclass:**

```
class ChildClass extends ParentClass {
    // Overriding method that throws a broader exception
    @Override
    public void Method() throws IOException {
        System.out.println("IncorrectChildClass: Method");
        if (true) {
            throw new IOException("IOException from ChildClass");
        }
    }
}
```

**Now declaring the Main class:**

```
import java.io.FileNotFoundException;
import java.io.IOException;

public class Main {
    public static void main(String[] args) {
        ParentClass parent = new ParentClass();
        ChildClass child = new ChildClass();

        try {
            parent.Method();
        } catch (FileNotFoundException e) {
            System.out.println("Caught FileNotFoundException from parent: " + e.getMessage());
        }

        try {
            child.Method();
        } catch (IOException e) {
            System.out.println("Caught IOException from child: " + e.getMessage());
        }
    }
}
```

When you try to compile `Main.java` along with **ParentClass.java** and **ChildClass.java**, you will get a compilation error indicating that IOException is broader than FileNotFoundException. `ChildClass` will not compile because it violates the rule that an overriding method cannot throw a broader exception than the method it overrides.
rajubora
1,877,000
Realtime Pub/Sub meets Amazon SQS Elasticity
In this blog post, we will explore one standout feature: real-time message forwarding to Amazon SQS...
0
2024-06-04T18:37:49
https://dev.to/kyberneees/realtime-pubsub-meets-amazon-sqs-elasticity-480k
aws, sqs, messaging, javascript
In this blog post, we will explore one standout feature: real-time message forwarding to Amazon SQS queues. We'll delve into the details and provide a hands-on example of how to process and respond to your WebSocket clients' requests using an Amazon SQS queue as a proxy layer between your application clients and the backend services.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jvy3xmjy4li24ubg4qr8.png)

https://medium.com/@kyberneees/realtime-pub-sub-meets-amazon-sqs-elasticity-150cb2ac1df6
kyberneees
1,876,999
Authenticate Realtime Pub/Sub WebSocket clients with Supabase
In this blog post, we describe how to authenticate your Realtime Pub/Sub Application WebSocket...
0
2024-06-04T18:34:41
https://dev.to/kyberneees/authenticate-realtime-pubsub-websocket-clients-with-supabase-59ko
supabase, realtime, messaging, authentication
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x5d7a45qd3u6euc1vbp8.png) In this blog post, we describe how to authenticate your Realtime Pub/Sub Application WebSocket clients using Supabase users. https://medium.com/@kyberneees/authenticate-realtime-pub-sub-websocket-clients-with-supabase-ed538a5eb47d
kyberneees
1,876,998
calculator program
In gpt chat I asked how to program a calculator in htmo, css and java script and it looks beautiful...
0
2024-06-04T18:34:10
https://dev.to/mokko/calculator-program-41jh
In ChatGPT I asked how to program a calculator in HTML, CSS and JavaScript, and it looks beautiful in CodePen, but in the Android Studio emulator it looks like a disaster. I'm new; what am I doing wrong?
mokko
1,871,874
Ensuring Scalable Application Design and Development: A Comprehensive Guide
In today's digital age, applications are vital in driving business success. However, as the user base...
0
2024-06-04T18:30:08
https://dev.to/norbybaru/ensuring-scalable-application-design-and-development-a-comprehensive-guide-4mn6
scaling, webapp, devops, development
In today's digital age, applications are vital in driving business success. However, as the user base and data volume increase, many applications struggle to meet the demand, leading to performance and scalability problems. Ensuring scalable application design and development is crucial to handle growing traffic, data, and user engagement. ## Why Scalability Matters Scalability is an application's ability to handle increased load and demand without compromising performance, security, or user experience. ## Types of Scalability 1. **Horizontal Scalability (Scale-Out):** - Involves adding more machines or instances to your application's infrastructure. - Utilizes techniques like load balancing and distributed computing to spread workloads. - Offers flexibility and cost-effectiveness by adding inexpensive hardware as needed. 2. **Vertical Scalability (Scale-Up):** - Involves enhancing the resources of existing servers (e.g., increasing CPU, RAM). - Provides a straightforward solution but can be limited by hardware capacity and higher costs. ## Benefits of Scalability A scalable application design ensures: - **High availability:** The application remains accessible and responsive even during peak usage or unexpected traffic spikes. - **Improved user experience:** Scalable applications ensure fast load times, reduced latency, and seamless navigation, leading to increased user satisfaction and engagement. - **Cost savings:** Scalable designs reduce the need for costly hardware upgrades or infrastructure changes as the application grows. - **Competitive advantage:** Scalable applications can handle sudden surges in traffic, giving businesses a competitive edge in the market. ## Designing for Scalability To ensure scalable application design and development, follow these best practices: 1. **Modular Architecture** Break down the application into smaller, independent modules or microservices, each responsible for a specific function. 
This allows for easier maintenance, updates, and scalability.

2. **Cloud-Native Approach**
Leverage cloud-native technologies and services, such as serverless computing, containers, and Kubernetes, to take advantage of on-demand scalability and reduced infrastructure costs.

3. **Database Optimization & Scalability**
Design databases for scalability by using NoSQL databases, sharding, and data replication to ensure efficient data storage and retrieval.

4. **Caching and Content Delivery Network**
Implement caching mechanisms that store frequently accessed data to improve response times, and use a content delivery network to distribute content across global servers, reducing latency for geographically dispersed users.

5. **Load Balancing and Autoscaling**
Use load balancers to distribute traffic efficiently and autoscaling to adjust resources automatically based on demand.

6. **Microservices Architecture**
Breaking down the application into smaller, independent services can enhance scalability and fault tolerance.

7. **APIs and Integration**
Efficiently designed APIs and third-party integrations can improve performance and scalability.

## Development Best Practices

In addition to design considerations, follow these development best practices to ensure scalability:

1. **Future-Proofing Strategies**
Ensure the app architecture can adapt to technological advancements.

2. **Containerization and Orchestration**
Using containers (e.g., Docker) and orchestration tools (e.g., Kubernetes) can streamline application components' deployment, management, and scaling.

3. **Capacity Planning and Performance Benchmarking**
Regular capacity planning and performance benchmarking help identify potential bottlenecks and inform scaling decisions.

4. **Monitoring and Observability**
Use tools to track performance metrics and detect issues in real time.

5. **Scalability Testing & Planning**
Conduct performance, load, and stress testing. Develop a comprehensive plan outlining scaling strategies.
6. **Optimization Techniques** Implement caching, asynchronous processing, and distributed computing. 7. **Ongoing Support** Ensure continuous scalability consulting and maintenance. 8. **Budget Allocation** Allocate budget for scalability testing and optimization. 9. **Collaboration and Communication** Maintain transparent communication regarding scalability initiatives. 10. **User Feedback Integration** Incorporate user feedback to guide scalability decisions. ## Conclusion Ensuring scalable application design and development is crucial for businesses to stay competitive in today's fast-paced digital landscape. By following the design principles and development best practices in this article, you can build applications that easily handle growth, provide an excellent user experience, and drive business success.
norbybaru
1,876,996
From Blind Spots to Brilliance: Achieve API Excellence with Observability
In the intricate world of modern applications, APIs (Application Programming Interfaces) act as the...
0
2024-06-04T18:23:27
https://dev.to/syncloop_dev/from-blind-spots-to-brilliance-achieve-api-excellence-with-observability-m99
webdev, javascript, programming, api
In the intricate world of modern applications, APIs (Application Programming Interfaces) act as the invisible workhorses, facilitating seamless communication between various components. But ensuring their smooth operation and identifying potential issues can be a challenge. This is where API observability and monitoring come into play. ## Understanding the Nuance: While often used interchangeably, there's a subtle distinction between these two practices: **API Monitoring**: This is the traditional approach, focusing on collecting specific metrics and logs related to API health and performance. It involves tools that track key indicators like response times, error rates, and request volumes. **API Observability**: It's a more comprehensive approach, going beyond basic metrics. It encompasses monitoring, but also empowers you to delve deeper into the inner workings of your APIs. By collecting rich telemetry data, including traces, logs, and distributed tracing information, API observability provides a holistic view of API behavior. This allows for proactive problem identification and root cause analysis. ## Why is API Observability Important? The benefits of implementing API observability extend far beyond basic monitoring: **Improved User Experience**: By pinpointing performance bottlenecks and identifying anomalies in user interactions, you can proactively address issues that could impact user experience. **Faster Debugging and Troubleshooting**: Rich telemetry data facilitates faster pinpointing of root causes for API errors and malfunctions. This expedites troubleshooting and reduces downtime. **Enhanced Security**: Improved visibility into API activity helps to detect and prevent potential security threats like unauthorized access or malicious attacks. 
**Data-Driven Decision Making**: Observability data provides valuable insights into API usage patterns, enabling you to optimize resource allocation and make informed decisions about API development and deployment strategies. ## Statistics Highlighting the Need: A study by Honeycomb reveals that 70% of IT professionals struggle to identify the root cause of production incidents within an hour. API observability can significantly reduce this time by offering comprehensive insights. A report by Dynatrace indicates that organizations with strong API observability practices experience 28% fewer application outages. This translates to improved service uptime and reliability. ## Examples and Use Cases: **E-commerce Platform**: Imagine a sudden spike in error rates for the "Add to Cart" API during a promotional sale. Traditional monitoring might just alert you to the increase in errors. However, API observability would enable you to analyze the request traces, identify the specific product or API endpoint causing the issue, and take swift action to resolve it, minimizing customer frustration. **Financial Services Provider**: In a scenario where a fraudulent login attempt is detected, API observability provides valuable data on the origin of the attempt, the specific API endpoint targeted, and potentially other suspicious user behavior. This allows for a more informed response to potential security threats. ## Latest Tools and Technologies: **Distributed Tracing Platforms**: Tools like Zipkin and Jaeger offer detailed tracing capabilities, allowing you to visualize the entire request flow across various microservices involved in an API call. **Prometheus and Grafana**: This popular combination provides a robust monitoring and visualization platform, enabling you to collect, analyze, and visualize API metrics in real-time. 
**API Gateways**: Many API gateways like Azure API Management and AWS API Gateway offer built-in observability features, simplifying data collection and analysis. **Commercial Platforms**: Services like Datadog and New Relic provide comprehensive API monitoring and observability capabilities. ## Integration Process: The integration process for API observability tools and technologies varies depending on your chosen solution. Here's a general outline: **Selection**: Choose the tools and technologies that best align with your specific needs, technical stack, and budget. **Configuration**: Configure the chosen tools with your APIs and services, specifying the data points you want to collect. **Instrumentation**: Depending on the chosen solution, you might need to instrument your API code to capture additional telemetry data. **Visualization and Alerting**: Set up dashboards and alerts to visualize and monitor API health and performance metrics. ## Benefits and Considerations: **Benefits**: **Proactive Problem Identification**: Enables you to identify potential issues before they impact users. **Faster Root Cause Analysis**: Rich telemetry data streamlines troubleshooting and pinpointing the root cause of problems. **Improved Collaboration**: Provides a unified view of API health for all stakeholders, facilitating better collaboration between development and operations teams. **Considerations**: **Cost**: Implementing robust API observability solutions can incur initial setup and ongoing operational costs. **Data Overload**: Careful planning is necessary to avoid being overwhelmed by the sheer volume of data generated by API observability tools. **Technical Expertise**: Implementing and managing some advanced observability solutions might require specialized technical expertise. Consider upskilling your team or enlisting the help of managed service providers for more complex setups. 
## Advanced Techniques: **Distributed Tracing with Correlation IDs**: Assigning unique correlation IDs to each request allows you to trace its entire journey across multiple services and identify bottlenecks or errors at any point in the flow. **API Schema Validation**: Implementing schema validation at the API gateway ensures that incoming requests adhere to the defined format, preventing invalid data from causing API malfunctions. **Real-time Anomaly Detection**: Utilizing machine learning algorithms can help automatically detect deviations from normal API behavior, enabling proactive identification of potential issues. ## Choosing the Right Approach: The optimal approach to API observability depends on several factors: **API Complexity**: For simple APIs with limited traffic, basic monitoring tools might suffice. However, for complex APIs with high volumes of requests, a comprehensive observability solution is necessary. **Technical Expertise**: Consider your team's capabilities when choosing a solution. User-friendly platforms with pre-built features require less technical expertise to implement. **Budget**: Factor in the upfront and ongoing costs associated with different tools and technologies. ## Conclusion: API observability is no longer an option, but a necessity in today's API-driven landscape. By implementing a robust observability strategy and leveraging the right tools and techniques, you can gain deep insights into your APIs' health, performance, and security posture. This empowers you to deliver a superior user experience, ensure service uptime, and proactively address potential issues before they escalate. Remember, API observability is an ongoing journey, requiring continuous refinement and adaptation as your APIs evolve and your needs change.
syncloop_dev
1,876,992
HOW I RECOVER MY LOST CRYPTO'S FROM FAKE BROKER ONLINE
I was the subject of a cryptocurrency heist. My wallet was broken into, and some bitcoin was taken...
0
2024-06-04T18:16:39
https://dev.to/abigail_697d5af1af245afed/how-i-recover-my-lost-cryptos-from-fake-broker-online-4kig
I was the subject of a cryptocurrency heist. My wallet was broken into, and some bitcoin was taken and transferred to an unauthorized account. They notified my wallet management, but they had me wait a week to get back to me by email. Instead of helping, they suggested that my phone might have been compromised and advised me to contact the authorities, who were unable to give any aid in getting my bitcoin back. While searching the internet for a way to recover my stolen bitcoin, I came across Earth Hackers, a group of bitcoin recovery specialists. I made the decision to get in touch with them to ask for help recovering the money I had lost. Earth Hackers helped me get the money back that had been taken, thus I have nothing but appreciation for them. Earth Hackers was able to ascertain how the transaction was completed by using their advanced equipment. I am really appreciative of Earth Hackers outstanding service and have received my money back. If you are experiencing any problems like mine. Kindly contact them via:EarthHackers@proton.me Telegram.@EarthHackers Whatsapp.+16625189219
abigail_697d5af1af245afed
1,876,991
Populating Select Input with data from an API using Nextjs and typescript
After a series of nerve racking attempts and failures including combing through the entire internet I...
0
2024-06-04T18:16:28
https://dev.to/romkev/populating-select-input-with-data-from-an-api-using-nextjs-and-typescript-4jcn
nextjs, nestjs, node, api
After a series of nerve-wracking attempts and failures, including combing through the entire internet, I was finally able to achieve this feat. I shall share with you the function I currently use to populate a select input in Next.js from a Nest.js API backend. Working with asynchronous functions can be a real pain, and some of the errors I encountered while attempting this included: - Property 'data' does not exist on type 'Promise<any>' - Type {} is not assignable to type ReactNode - And the biggest pain of them all.... “Invalid Hook Call” Error in React: Hooks Can Only Be Called Inside of a Function Component After literally taking days to achieve such a menial task, I finally stumbled upon a response on StackOverflow by Alarid (shoutout to [Alarid](https://stackoverflow.com/users/1379837/alarid)) that pointed me in the right direction. This is a front-end implementation only, and I am using the following libraries: - Next.js - Axios - NextUI Of course, this is just a proof of concept and can be edited to fit the libraries you are using in your Next.js project. For example, you could be using the default 'fetch' method to pull data from an API endpoint, or any other third-party library like SWR or TanStack Query. - To begin, make sure the libraries are installed - Create a file "apiSelect" at the filepath "@/app/lib/apiSelect.tsx" - Use the code below and edit it according to your needs. 
```
import axios from "axios";
import { useState, useEffect } from "react";
import { Select, SelectItem } from "@nextui-org/react";

function apiSelect(url: string, label: string, selected: string = "") {
  const [options, setOptions] = useState<any[]>([]);

  useEffect(() => {
    async function fetchData() {
      // Fetch data
      const { data } = await axios.get(url);
      const results: any[] = [];

      // Store results in the results array
      data.forEach((value: any) => {
        results.push({
          key: value.id,
          value: value.name,
        });
      });

      // Update the options state
      setOptions([...results]);
    }

    // Trigger the fetch
    fetchData();
  }, [url]);

  return (
    <Select label={label} className="max-w-xs">
      {options.map((option: any) => (
        <SelectItem key={option.key}>{option.value}</SelectItem>
      ))}
    </Select>
  );
}

export default apiSelect;
```

In case you need to use authentication tokens, you can include the token variable as an argument and pass it to axios as shown below:

```
import axios from "axios";
import { useState, useEffect } from "react";
import { Select, SelectItem } from "@nextui-org/react";

function apiSelect(url: string, token: string, label: string, selected: string = "") {
  const [options, setOptions] = useState<any[]>([]);

  useEffect(() => {
    async function fetchData() {
      // Fetch data, passing the token in the Authorization header
      const config = {
        headers: { Authorization: `Bearer ${token}` },
      };
      const { data } = await axios.get(url, config);
      const results: any[] = [];

      // Store results in the results array
      data.forEach((value: any) => {
        results.push({
          key: value.id,
          value: value.name,
        });
      });

      // Update the options state
      setOptions([...results]);
    }

    // Trigger the fetch
    fetchData();
  }, [url, token]);

  return (
    <Select label={label} className="max-w-xs">
      {options.map((option: any) => (
        <SelectItem key={option.key}>{option.value}</SelectItem>
      ))}
    </Select>
  );
}

export default apiSelect;
```

- Import the function into your page.tsx file so that you can call it. 
`import apiSelect from "@/app/lib/apiSelect";` - Call the function You can use the examples below to call the function. With token: ` {apiSelect(url, token, "Select a Role")}` Without token: ` {apiSelect(url, "Select a Role")}` Hope this helps save you lots of hours trying to figure this out... Peace
romkev
1,876,990
How Odoo ERP Revolutionizes Retail Business Management?
Odoo ERP (Enterprise Resource Planning) has been gaining popularity in the retail sector due to its...
0
2024-06-04T18:13:10
https://dev.to/serpent2024/how-odoo-erp-revolutionizes-retail-business-management-9ei
odooerp, erpdevelopment, softwaredevelopment, software
Odoo ERP (Enterprise Resource Planning) has been gaining popularity in the retail sector due to its comprehensive suite of integrated applications that streamline and optimize various business processes. ## **How Odoo ERP revolutionizes retail business management** ### 1. **Integrated Point of Sale (POS) System** Odoo’s POS module is fully integrated with the rest of the ERP system, allowing seamless synchronization of sales, inventory, and customer data. This integration ensures real-time updates and helps in managing multiple sales channels efficiently. ### 2. **Inventory Management** Odoo provides robust inventory management capabilities, including real-time tracking, automated stock replenishment, and multi-warehouse support. These features help retailers maintain optimal stock levels, reduce carrying costs, and avoid stockouts or overstock situations. ### 3. **Customer Relationship Management (CRM)** The CRM module helps retailers manage customer interactions, track sales leads, and nurture relationships through personalized marketing campaigns. It provides valuable insights into customer behavior and preferences, enabling better customer service and targeted marketing efforts. ### 4. **E-commerce Integration** Odoo offers seamless integration with e-commerce platforms, allowing retailers to manage their online and offline sales channels from a single system. This integration ensures consistent product information, pricing, and inventory levels across all channels, enhancing the overall customer experience. ### 5. **Accounting and Financial Management** Odoo’s accounting module automates financial transactions, supports multiple currencies, and provides real-time financial reporting. Retailers can track their financial performance, manage invoices, and ensure compliance with regulatory requirements more efficiently. ### 6. 
**Human Resources Management** [Odoo’s HR module](https://apps.odoo.com/apps/modules/10.0/hr_attendance_reports) helps retailers manage employee records, payroll, attendance, and performance appraisals. This centralized approach to HR management simplifies administrative tasks and improves employee management. ### 7. **Marketing Automation** The marketing automation tools in Odoo allow retailers to create, execute, and analyze marketing campaigns across multiple channels. Features like email marketing, social media integration, and customer segmentation help retailers engage with their customers more effectively. ### 8. **Reporting and Analytics** Odoo offers powerful reporting and analytics tools that provide insights into sales performance, inventory levels, customer behavior, and financial metrics. These insights help retailers make data-driven decisions, identify trends, and optimize their operations. ### 9. **Scalability and Customization** Odoo is highly customizable and scalable, making it suitable for retailers of all sizes. Businesses can start with basic modules and add more functionalities as they grow, ensuring the system evolves with their needs. ### 10. **Cost-Effective Solution** Compared to traditional ERP systems, Odoo offers a cost-effective solution with lower upfront and maintenance costs. Its open-source nature and modular pricing structure allow retailers to pay for only the features they need. ## Real-World Applications ### **Case Study Examples:** 1. **Small Boutique Shops**: A small boutique using Odoo can benefit from the POS integration, inventory management, and customer loyalty programs, enhancing the in-store customer experience and streamlining back-end operations. 2. **Large Retail Chains**: A large retail chain can use Odoo to manage multiple locations, synchronize inventory across all stores, and leverage advanced analytics to optimize supply chain management and marketing strategies. 3. 
**Online Retailers**: Online retailers can integrate their e-commerce platforms with Odoo, ensuring real-time inventory updates, streamlined order processing, and unified customer data management. ## Conclusion [Odoo ERP Developer](https://www.serpentcs.com/services/odoo-openerp-services/odoo-development) transforms retail business management by providing a unified platform that integrates all essential business functions. Its flexibility, comprehensive feature set, and user-friendly interface help retailers streamline operations, improve customer service, and drive growth. Whether for a small boutique or a large retail chain, Odoo offers the tools needed to succeed in today’s competitive retail landscape.
serpent2024
1,875,434
WinterJS vs. Bun: Comparing JavaScript runtimes
Written by Emmanuel Odioko✏️ Speed, as we know, is always a plus in a programming environment, and...
0
2024-06-04T18:10:56
https://blog.logrocket.com/winterjs-vs-bun-comparing-javascript-runtimes
javascript, webdev
**Written by [Emmanuel Odioko](https://blog.logrocket.com/author/emmanuelodioko/)✏️** Speed, as we know, is always a plus in a programming environment, and runtimes are expected to support this need. A faster runtime typically translates to more quickly executed code, which in turn directly impacts UX, as users don’t have to wait any longer than expected. That's why we’ll be introducing WinterJS in this tutorial, which theoretically is the fastest WinterCG JavaScript runtime. And yes, we do have yet another JavaScript runtime! Since there are so many to choose from, we’ll also compare WinterJS to [Bun, another runtime](https://blog.logrocket.com/bun-adoption-guide/) known for its speed. To follow along with this article, you should have a basic understanding of JavaScript concepts and some experience working with a JavaScript runtime. Some familiarity with the JavaScript Engine would also be helpful. ## Reviewing how JavaScript runtimes work [JavaScript executes in an engine](https://blog.logrocket.com/how-javascript-works-optimizing-the-v8-compiler-for-efficiency/) — for example, V8 or SpiderMonkey. This engine takes care of memory, keeping track of asynchronous tasks. However, you still need the ability to interact with an external file system and to make requests somewhere. That’s where JavaScript runtimes like WinterJS and Bun come into play. The diagram below shows how these different elements work together: ![Diagram Showing How Javascript Runtimes Work With Your Code, The Javascript Engine, Web Apis, And The User's Operating System](https://blog.logrocket.com/wp-content/uploads/2024/05/img1-Diagram-how-JavaScript-runtimes-work.png) ## A brief overview of WinterCG WinterJS is the first runtime to go all-in on the [WinterCG spec](https://wintercg.org/), ticking all of its requirements. Before we dive into WinterJS, if you're not familiar with WinterCG, it’s an important piece of the puzzle to know. 
WinterCG, or the Web-interoperable Runtimes Community Group, represents an attempt to establish standards grounds in the runtime community. This community group aims to agree on the features and functionality that server-side JavaScript should be capable of. The goal of WinterCG is to have server-side JavaScript code — from logging functions to data fetching — work seamlessly across all these different runtimes, including Node, Deno, Cloudflare Workers, Bun, LRT, and WinterJS. In other words, WinterCG aims to standardize runtimes so JavaScript code works and looks the same no matter which one you use. WinterCG's efforts become increasingly important and recognized in the developer community, so naming a runtime after WinterCG signifies a strong commitment to supporting its rules. It's worth noting that while Bun isn't directly affiliated with WinterCG, it’s built on top of things that are part of WinterCG — notably, the JavaScriptCore runtime from the WebKit framework. ## What is WinterJS? [WinterJS focuses on speed](https://wasmer.io/posts/winterjs-vs-alternatives-is-blazing-fast) and providing a safe space for developers building performant web applications. It’s a JavaScript runtime built in Rust that utilizes the SpiderMonkey engine to execute JavaScript and [Tokio to manage HTTP requests](https://blog.logrocket.com/best-rust-http-client/#tokio-curl). You can compile WinterJS to WebAssembly (Wasm) — it’s a production-ready runtime fully operable in Wasmer Edge, a great accomplishment for such a new tool. Actually, WinterJS boasts a ridiculous number of accomplishments: * Unprecedented speed, surpassing the performance of Bun, WorkerD, and Node (a highlight of its performance) * Fully adheres to the WinterCG specifications * Compatible with the Cloudflare API * Support for various web frameworks Now, let’s dive into our comparison between WinterJS and Bun. 
## The differences between WinterJS and Bun WinterJS and Bun each have different top priorities, or things that they are more focused on. For example, WinterJS is more focused on speed, WinterCG specifications, compatibility with Cloudflare API, and support for frameworks: ![Winterjs Homepage Displaying List Of Benefits It Brings To Projects](https://blog.logrocket.com/wp-content/uploads/2024/05/img2-WinterJS-homepage-list-benefits.png) Meanwhile, Bun also emphasizes speed, but otherwise focuses on providing elegant APIs and a complete toolkit for building JavaScript apps: ![Bun Homepage With Summary Of Runtime, Install Command, And Some Performance Benchmarks Showing How Bun Outperforms Deno And Node](https://blog.logrocket.com/wp-content/uploads/2024/05/img3-Bun-homepage-summary-install-command-performance-benchmarks.png) Going further, we will explore the differences in their OS support, TypeScript support, Wasm support, ecosystem and framework support, installation requirements and methods, performance, and limitations. ### Operating system support As of the time of this writing, WinterJS does not support Windows OS, but it has built-in support for Linux and macOS. Bun has support for Windows, Linux, and macOS. ### Build architecture As previously mentioned, WinterJS is built with Rust and powered by SpiderMonkey. What we didn’t mention is that it’s also powered by [Spiderfire](https://github.com/Redfire75369/spiderfire) and [hyper](https://blog.logrocket.com/a-minimal-web-service-in-rust-using-hyper/) to bring the unique strengths of all these tools to you in one convenient runtime. Meanwhile, it’s important to note that Bun is not only a JavaScript runtime, but also a package manager, bundler, and test runner. It’s built with [the Zig language](https://blog.logrocket.com/getting-started-zig-programming-language/) and uses WebKit's JavaScriptCore as the JavaScript engine. 
### TypeScript support WinterJS, at the time of this writing, does not have TypeScript support. However, Bun does have strong support for TypeScript out of the box. Bun automatically translates TypeScript files into JavaScript as you use them. Unlike some other tools, it doesn't check for typing errors, but rather just converts TypeScript code into JavaScript by removing type annotations. ### WebAssembly (Wasm) support WinterJS can be compiled to Wasm using WASIX, allowing it to run in WebAssembly environments with decent performance. It’s important to note that the compilation process is complex, and you will need to open an issue for guidance from the WinterJS team. Bun, in turn, has experimental support for Wasm. To run a `.wasm` binary with Bun, you just need to use the command `bun` followed by the name of your `.wasm` file. If the file doesn't end with `.wasm`, you can use the `bun run` command followed by the file name. ### Ecosystem and framework support WinterJS is still very young and not widely adopted, but has support for almost all major frontend frameworks. This is possible because of its compatibility with the Cloudflare Workers API, which allows it to serve static websites created by these frameworks as well as enabling SSR. Here are the frameworks WinterJS supports as of the time of writing: Next.js, Next.js with RSCs (although the server-side `fetch` cache is not yet implemented), Hono, Astro.build, Remix.run, Svelte, Gatsby, and Nuxt. In comparison, Bun is still growing, but it has been adopted much more widely than WinterJS and has a strong community. Even so, it has less compatibility with most frontend frameworks. Bun supports React, Nuxt, and Svelte, but doesn't support Next.js and Remix out of the box. You can use Bun to set up a Next.js project and install dependencies, but the Next.js App Router depends on Node.js APIs that Bun hasn't incorporated yet, so you'll still need to rely on Node.js to run the development server. 
### Performance WinterJS makes a strong case for being the fastest runtime: * Built with the Rust compiled programming language * Can utilize SpiderMonkey’s engine * Can compile JavaScript code to Wasm These factors give WinterJS certain rights to its claim to be the fastest runtime, as they’re each independently known for their speed, especially Rust. This also gives WinterJS an edge over other runtimes written in JavaScript. Bun claims to be very fast, too. It utilizes the JavaScriptCore engine, which powers Safari and is known for its faster startup times and potentially better performance compared to Node’s V8 engine, especially in certain scenarios. Bun’s speed can also be attributed to its being built with Zig, a low-level systems programming language similar to C++. Zig offers good memory management and control, allowing for efficient code generation and potentially faster execution compared to higher-level languages. Let’s look at a simple performance test. Keep in mind that this test isn’t a reliable benchmark for the actual expected performance of these runtimes in a real-world application, but provides a performance snapshot for a specific testing scenario. In the test, we will define a simple HTTP request in WinterJS and Bun, then see how well they perform independently. Create a folder named `Performance`. In this folder, create two more folders named `Bun` and `Winter`, respectively. Then, in each of these folders, create an `index.js` file in which we will create simple HTTP requests. Below is what the file tree looks like:

```
Performance/
│
├── Bun/
│   └── index.js
│
└── Winter/
    └── index.js
```

In the `index.js` file in the `Winter` folder, paste the code below:

```javascript
addEventListener('fetch', (req) => {
  req.respondWith(new Response('Logrocket is the best!'));
});
```

Navigate to the right directory and run this code with the command below:

```shell
wasmer run wasmer/winterjs --net --mapdir=./:. ./index.js
```

Here are the performance results for WinterJS with Wasmer: ![Performance Results Using Winterjs With Wasmer](https://blog.logrocket.com/wp-content/uploads/2024/05/img4-Performance-results-WinterJS-Wasmer.png) In the `index.js` file in the `Bun` folder, paste the code below:

```javascript
const server = Bun.serve({
  port: 3000,
  fetch(request) {
    return new Response('Logrocket is the best!')
  },
})

console.log(`Listening on localhost:${server.port}`)
```

Navigate to the right directory, and run this code with the command below:

```shell
bun run index.js
```

Below, we can see Bun’s performance: ![Bun Performance Results](https://blog.logrocket.com/wp-content/uploads/2024/05/img5-Bun-performance-results.png) In this specific example, you can see that Bun performs better than WinterJS on my computer. WinterJS is said to run much better natively than when it runs using Wasmer, so this explains its underperformance in this test. A [native result](https://github.com/wasmerio/winterjs/tree/main/benchmark) might look something like this instead: ![Example Native Performance Result From Winterjs Github Repo Showing Better Performance Than With Wasmer](https://blog.logrocket.com/wp-content/uploads/2024/05/img6-WinterJS-native-performance-example-result-official-GitHub-repo.png) The sample result above comes from the WinterJS GitHub and shows how it does better natively with this very simple test. ### Limitations According to the WinterJS team, despite being fully compliant with the WinterCG spec, the runtime itself is still a work in progress. Among other limitations, WinterJS currently has limited API compatibility. In comparison, Bun’s limited compatibility for some frameworks and its experimental stage for Wasm are its only known limitations. ### Installation As a last comparison point — and to help you get started with whichever runtime you ultimately choose — let’s discuss the getting-started steps for WinterJS and Bun, respectively. 
You should [have Rust and SpiderMonkey installed](https://github.com/wasmerio/winterjs/blob/main/README.md) before you try building with WinterJS. The WinterJS installation process is a bit complex. As of now, you’re likely to run into errors when installing it on Linux. For example, you may see an error like the one below right after you try installing everything and then building with WinterJS: ![Example Winterjs Error When Installing On Linux And Building With Winterjs](https://blog.logrocket.com/wp-content/uploads/2024/05/img7-WinterJS-error-installing-Linux.png) If you run into any errors while using WinterJS, you should open an issue so its maintaining team can try to help you fix it. You can also run WinterJS on Wasmer. First, [install the Wasmer CLI](https://docs.wasmer.io/install). Then, create a directory on your computer, open this directory in your code editor, and create a file. I named my file `simple.js`, but feel free to pick a name that fits your needs. Navigate to your file and paste in the code below:

```javascript
addEventListener('fetch', (req) => {
  req.respondWith(new Response('hello'));
});
```

Next, open your terminal and run the command below:

```shell
wasmer run wasmer/winterjs --net --mapdir=./:. ./simple.js
```

Finally, navigate to the browser and open [http://localhost:8080/](http://localhost:8080/). You should see a “hello” message there. Installing Bun is easy-peasy compared to WinterJS. Bun provides various options for installation, including [cURL](https://blog.logrocket.com/an-intro-to-curl-the-basics-of-the-transfer-tool/), npm, Docker, and so on. 
To [install on Linux — using cURL, assuming you have it set up](https://bun.sh/docs/installation#installing) — run the following:
```bash
curl -fsSL https://bun.sh/install | bash # for macOS, Linux, and WSL

# to install a specific version
curl -fsSL https://bun.sh/install | bash -s "bun-v1.0.0"
```
You can check the [Bun installation docs](https://bun.sh/docs/installation#installing) for more installation options. ## Conclusion WinterJS is new, so we’ll need to see more heavyweight projects built with it to get a sense of its real-world performance. However, we can see when running a simple “Hello world” server that it’s incredibly fast for lightweight tasks. This makes WinterJS a very promising runtime. In the future, when we have a clearer understanding of how WinterJS performs in practical applications compared to runtimes like Bun, your choice will likely come down to what you want from a project. If speed and compatibility with Wasm and major frontend frameworks are a priority, then you’ll probably want to use WinterJS. Otherwise, you may want to rely on Bun's proven strengths in other areas of performance. Thank you for reading this far. --- ## Are you adding new JS libraries to build new features or improve performance? What if they’re doing the opposite? There’s no doubt that frontends are getting more complex. As you add new JavaScript libraries and other dependencies to your app, you’ll need more visibility to ensure your users don’t run into unknown issues. [LogRocket](https://lp.logrocket.com/blg/javascript-signup) is a frontend application monitoring solution that lets you replay JavaScript errors as if they happened in your own browser so you can react to bugs more effectively.
[![LogRocket Signup](https://blog.logrocket.com/wp-content/uploads/2019/10/errors-screenshot.png)](https://lp.logrocket.com/blg/javascript-signup) [LogRocket](https://lp.logrocket.com/blg/javascript-signup) works perfectly with any app, regardless of framework, and has plugins to log additional context from Redux, Vuex, and @ngrx/store. Instead of guessing why problems happen, you can aggregate and report on what state your application was in when an issue occurred. LogRocket also monitors your app’s performance, reporting metrics like client CPU load, client memory usage, and more. Build confidently — [start monitoring for free](https://lp.logrocket.com/blg/javascript-signup).
leemeganj
1,876,987
Understand the difference between the conceptual model and the logical model in databases
Data modeling is the process of creating a #model of data specific to a given...
0
2024-06-04T18:07:06
https://dev.to/edsonaraujobr/entenda-a-diferenca-entre-modelo-conceitual-e-modelo-logico-em-banco-de-dados-4bg4
webdev, beginners, programming, learning
Data modeling is the process of creating a #model of #data specific to a given problem domain. A domain is a clearly defined area of the real world, with a well-defined scope. We use models to manage complexity, to communicate among the people involved, and to reduce development costs. The levels are: requirements specification, conceptual design, logical design, and finally physical design. 1. Requirements specification 🦻 This involves eliciting and analyzing requirements; the goal is to identify the scope and boundaries of the application. The final result of this stage is a concisely written requirements document. 2. Conceptual design 💭 This is the first abstraction in the process; here we produce a high-level description that is independent of any DBMS. The output can be: an entity-relationship diagram or a class diagram. 3. Logical design ✒ This maps the conceptual data model to the model of a specific DBMS. The output can be: the relational model, the object-oriented model, or the semi-structured model. 4. Physical design 🎯 This is the implementation of the logical schema following the storage structures and access methods of the DBMS. The physical schema is the description of the database schema in the Data Definition Language (DDL). References: HEUSER, Carlos Alberto. Projeto de Banco de Dados. Sagra Luzzatto. ELMASRI, R.; NAVATHE, S. B. Sistemas de Banco de Dados: Fundamentos e Aplicações. Pearson, 6th ed. Designed by Freepik
edsonaraujobr
1,876,986
Elementary Logic And Proof Techniques
1. Statements and Truth Values Definitions and Detailed...
0
2024-06-04T18:04:55
https://dev.to/niladridas/elementary-logic-and-proof-techniques-4541
startup, coding, datascience, machinelearning
## 1. Statements and Truth Values ## Definitions and Detailed Explanations **Statement**: A statement in logic is a declarative sentence that can definitively be classified as true or false. This is distinct from questions, commands, or expressions of uncertainty. Statements are the fundamental building blocks in logic and mathematical proofs. **Examples**: > "7 is a prime number." (True) "The moon is made of green cheese." (False) "New York is the capital of the USA." (False) "Water boils at 100 degrees Celsius at sea level." (True) ## Truth Values: **True**: A statement is considered true if its assertion corresponds with fact or reality. In formal logic, truth values are often denoted as 'T' or '1'. **Examples**: > "Dogs are mammals." This is true because dogs belong to the class of mammals. "2+2=4." This statement is universally true in standard arithmetic. **False**: A statement is false if its assertion contradicts fact or reality. False statements are denoted as 'F' or '0'. **Examples**: > "Cats are reptiles." This is false because cats are mammals, not reptiles. "5 is an even number." This is false because 5 cannot be evenly divided by 2. ## Usage of Statements and Truth Values in Logic Understanding the nature of statements and their truth values is crucial for the analysis and construction of logical arguments. Here’s how these concepts play out in logical reasoning: **Identifying Statements in Arguments**: When analyzing an argument, the first step is to identify all the statements being made and to evaluate their truth values. For example, in the argument "If it rains, the ground gets wet. It is raining. Therefore, the ground is wet," each sentence is a statement, and their truth values help determine the validity of the conclusion. **Practice Example**: > Premise: "All birds can fly." Premise: "Penguins are birds." Conclusion: "Penguins can fly." 
Here, although the premises are statements, the first premise is false since not all birds can fly (penguins are an example). A false premise makes the argument unsound: its form is valid, but the conclusion is not actually established. **Constructing Logical Deductions**: Once you understand the truth values of various statements, you can use logical connectives (like AND, OR, NOT) to construct new statements. The truth value of these new statements depends on the truth values of the original statements and the nature of the connective used. **Example with Logical Connectives**: > **Statement A**: "It is raining." (True) **Statement B**: "It is cold outside." (False) **Conjunction (A AND B)**: "It is raining and it is cold outside." (False, since both A and B need to be true for the conjunction to be true) **Disjunction (A OR B)**: "It is raining or it is cold outside." (True, since only one of A or B needs to be true for the disjunction to be true) ## 2. Logical Connectives Logical connectives are operators used in logic to connect statements together to form more complex statements. Each connective has a specific rule about how the truth value of the compound statement is determined based on the truth values of the component statements. Below, we detail three fundamental logical connectives: negation, conjunction, and disjunction, with ample examples. > ## Negation **Symbol**: `¬` **Meaning**: The negation of a statement `p`, denoted `¬p`, is true if `p` is false, and false if `p` is true. It essentially reverses the truth value of the original statement. **Examples**: > **Statement**: "It is raining." (Assume True) **Negation**: "It is not raining." (False) **Statement**: "The cat is asleep." (Assume False) **Negation**: "The cat is not asleep." (True) Negation is particularly useful in constructing proofs by contradiction, where one assumes `¬p` (the negation of what is to be proven) and shows that this leads to a logical impossibility, thereby establishing `p` as true.
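The reversal behavior of negation described above can be sketched with Python booleans (the variable names and example statement are illustrative):

```python
# Model a statement as a Python boolean and apply negation.
p = True           # "It is raining."
not_p = not p      # "It is not raining."

# Negation reverses the truth value; negating twice restores it.
print(not_p)       # False
print(not not_p)   # True
```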
## Conjunction **Symbol**: `∧` **Meaning**: The conjunction of statements `p` and `q`, denoted `p ∧ q`, is true only if both `p` and `q` are true. This connective represents the logical "and." **Examples**: > **Statement p**: "It is raining." (True) **Statement q**: "It is cold outside." (True) **Conjunction**: "It is raining and it is cold outside." (True) If either `p` or `q` were false, then `p ∧ q` would be false: >> **Statement p**: "It is raining." (False) **Statement q**: "It is cold outside." (True) **Conjunction**: "It is raining and it is cold outside." (False) Conjunctions are key in mathematical proofs and logical reasoning where multiple conditions or premises must be satisfied simultaneously. ## Disjunction **Symbol**: `∨` **Meaning**: The disjunction of statements `p` and `q`, denoted `p ∨ q`, is true if at least one of `p` or `q` is true. This connective represents the logical "or." **Examples**: **Statement p**: "It is raining." (True) **Statement q**: "It is cold outside." (False) **Disjunction**: "It is raining or it is cold outside." (True) Even if both are true, the disjunction remains true: **Statement p**: "It is raining." (True) **Statement q**: "It is cold outside." (True) **Disjunction**: "It is raining or it is cold outside." (True) Disjunctions are used in cases where multiple scenarios or possibilities lead to a similar outcome or conclusion, and only one needs to be true to satisfy a condition. By combining these connectives, one can build complex logical expressions and effectively analyze the logical structure of arguments and mathematical proofs. ## 3. Basic Proof Concepts > ## Direct Proof **Description**: Direct proof is a straightforward method of proving a statement by assuming the truth of the premises and logically deducing the conclusion. It follows a linear and step-by-step approach, making it one of the most commonly used proof techniques in mathematics. 
**Steps**: - **Assume the Premises**: Take the stated premises to be true. - **Apply Logical Rules and Axioms**: Use logical connectives, rules, and mathematical axioms to systematically derive the conclusion from the premises. **Example**: **Premise**: "If it is raining, then the ground is wet." **Assumption for Direct Proof**: It is raining. **Conclusion**: The ground is wet. This method is direct and avoids indirect implications or assumptions beyond the stated premises. It is particularly effective for proving implications and statements that follow from clear logical or mathematical laws. > ## Logic Symbols and Their Meanings **Universal Quantifier ( `∀`)**: **Meaning**: The universal quantifier `∀` is used to indicate that a statement holds true for all elements within a certain set. **Example**: `∀x ∈ ℝ`, `x² ≥ 0`. This reads as "For all `x` in the set of real numbers `ℝ`, the square of `x` is greater than or equal to zero." This statement is true for every real number `x`. **Existential Quantifier ( `∃` )**: **Meaning**: The existential quantifier `∃` indicates that there is at least one element in a specified set for which the statement is true. **Example**: `∃x ∈ ℝ, x² = 2`. This reads as "There exists an `x` in the set of real numbers `ℝ` such that `x²` equals 2." This is true as there are real numbers (specifically, `√2` and `-√2`) that satisfy this condition. > ## Utilizing Quantifiers in Proofs Quantifiers are crucial in the formulation of mathematical theorems and their proofs: **Using `∀` (Universal Quantifier)**: When proving statements involving `∀`, you must show that the statement holds for every possible case within the set. For instance, proving `∀x ∈ ℤ, x + 0 = x` involves demonstrating that adding zero to any integer `x` will result in `x` itself, a fundamental property of additive identity in mathematics. **Using `∃` (Existential Quantifier)**: Proofs involving `∃` typically require demonstrating the existence of at least one element that satisfies the conditions of the theorem.
This might involve constructing an example or showing that under certain conditions, such an element must logically exist. In summary, direct proofs and logical quantifiers form essential components of mathematical reasoning, enabling clear and structured problem-solving and theorem proving. ## 4. Introduction to Different Types of Proofs > ## Contrapositive Proof **Description**: A contrapositive proof involves proving the contrapositive of a given implication instead of the implication itself. If the original statement is of the form `p → q` (if `p` then `q`), its contrapositive is `¬q → ¬p` (if not `q` then not `p`), which is logically equivalent to the original statement. **Example**: **Original Statement**: "If it is not cold, then it is not winter." **Contrapositive**: "If it is winter, then it is cold." **Proof Approach**: Show that during winter, it must be cold. Thus, the contrapositive holds, confirming the original statement through its logical equivalence. **Additional Examples**: **Original Statement**: "If a number is even, then it is divisible by 2." **Contrapositive**: "If a number is not divisible by 2, then it is not even." **Proof**: Show that any number not divisible by 2 has a remainder when divided by 2, thus it cannot be even. > ## Proof by Contradiction **Description**: Proof by contradiction (also known as reductio ad absurdum) is a method where you assume the opposite of what you need to prove and show that this assumption leads to a contradiction or an impossibility. This contradiction implies that the assumption is false, thereby establishing the truth of the original statement. **Example**: **Statement to Prove**: "There is no smallest negative number." **Assumption**: Assume there is a smallest negative number, say `n`. **Contradiction**: Consider `2n`, which is also a negative number but smaller than `n`. This contradicts the assumption that `n` is the smallest negative number.
**Conclusion**: Since the assumption leads to a contradiction, the original statement is true. **Additional Examples**: **Statement to Prove**: "The square root of 2 is irrational." **Assumption**: Assume the square root of 2 is rational, meaning it can be expressed as a fraction a/b where a and b are integers with no common factors. **Contradiction**: By squaring both sides and manipulating the equation, one ends up with `2b² = a²`, implying `a²` is even, thus `a` is even. Represent `a` as `2k` and substitute back to find `b` must also be even, contradicting the assumption that `a` and `b` have no common factors. **Conclusion**: The square root of 2 cannot be expressed as a fraction of two integers, hence it is irrational. Both contrapositive and proof by contradiction are powerful methods in mathematics, particularly useful when direct proof is cumbersome or complex. They allow mathematicians to explore logical relationships from different perspectives, often leading to insightful conclusions about the properties and nature of mathematical objects. _Student, Author, Investor, ML Developer, Network Engineer — [Niladri Das](https://x.com/niladrridas)_
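The logical equivalence that makes contrapositive proofs work can be checked exhaustively; the sketch below uses an illustrative `implies` helper (not from the article) and enumerates all four truth assignments:

```python
from itertools import product

def implies(a, b):
    """Material implication: a -> b is false only when a is true and b is false."""
    return (not a) or b

# p -> q and (not q) -> (not p) agree on every assignment,
# which is why proving the contrapositive proves the original statement.
for p, q in product([True, False], repeat=2):
    assert implies(p, q) == implies(not q, not p)

print("p -> q is equivalent to (not q) -> (not p) for all truth values")
```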
niladridas
1,876,985
HeyRuu
🌺
0
2024-06-04T18:04:43
https://dev.to/heyruu/heyruu-1bne
laravel, csharp
`🌺`
heyruu
1,876,984
Choosing an online pharmacy you can trust
Choosing a trustworthy online pharmacy is crucial for your health and safety. DiRx outlines five key...
0
2024-06-04T18:04:07
https://dev.to/skyline_entertainment_843/choosing-an-online-pharmacy-you-can-trust-4fi
mentalhealth, webdev, beginners, react
Choosing a trustworthy online pharmacy is crucial for your health and safety. DiRx outlines five key factors: ensuring the pharmacy sells only FDA-approved medicines, is licensed and accredited, offers pharmacist support, monitors prescription interactions, and complies with privacy laws. By focusing on these aspects, you can confidently select an online pharmacy that prioritises your well-being. Read More: https://bit.ly/3VqMY1l
skyline_entertainment_843
1,876,280
How to Check if an Array is Sorted
There are many times we need to check if an array is sorted or not. Checking if an array is sorted...
27,580
2024-06-04T18:01:00
https://blog.masum.dev/how-to-check-if-an-array-is-sorted
algorithms, computerscience, cpp, tutorial
There are many times we need to check if an array is sorted or not. Checking if an array is sorted can be approached in multiple ways. Here, we'll discuss two solutions: a **brute force approach** and an **optimal approach**. ### Solution 1: Brute Force Approach This method involves comparing each element with every other element that comes after it in the array to ensure that the array is sorted in non-decreasing order. **Implementation**:
```cpp
// Solution-1: Brute Force Approach
// Time Complexity: O(n*n)
// Space Complexity: O(1)
bool isArraySorted(vector<int> &arr, int n) {
    for (int i = 0; i < n; i++) {
        for (int j = i + 1; j < n; j++) {
            if (arr[j] < arr[i])
                return false;
        }
    }
    return true;
}
```
**Logic**: **1. Nested Loops**: Use two nested loops to compare each element with every subsequent element in the array. **2. Check Order**: If any element is found to be greater than a subsequent element, the array is not sorted and the function returns `false`. **3. Return True**: If no such pair is found, the array is sorted and the function returns `true`. **Time Complexity**: O(n²) * **Explanation**: The outer loop runs `n` times and for each iteration, the inner loop runs up to `n-1` times, resulting in a quadratic time complexity. **Space Complexity**: O(1) * **Explanation**: The algorithm uses a constant amount of extra space. **Example**: * **Input**: `arr = [10, 20, 30, 40, 50]`, `n = 5` * **Output**: `true` * **Explanation**: All elements are in non-decreasing order. --- ### Solution 2: Optimal Approach A more efficient method involves a single pass through the array, comparing each element with its predecessor to ensure that the array is sorted. **Implementation**:
```cpp
// Solution-2: Optimal Approach
// Time Complexity: O(n)
// Space Complexity: O(1)
bool isArraySorted(vector<int> &arr, int n) {
    for (int i = 1; i < n; i++) {
        if (arr[i] < arr[i - 1]) {
            return false;
        }
    }
    return true;
}
```
**Logic**: **1.
Single Loop**: Traverse the array starting from the **second** element. **2. Compare with Predecessor**: For each element, check if it is less than its predecessor. **3. Return False**: If any element is found to be less than its predecessor, the array is not sorted and the function returns `false`. **4. Return True**: If no such element is found, the array is sorted and the function returns `true`. **Time Complexity**: O(n) * **Explanation**: The algorithm makes a single pass through the array, resulting in linear time complexity. **Space Complexity**: O(1) * **Explanation**: The algorithm uses a constant amount of extra space. **Example**: * **Input**: `arr = [10, 20, 30, 40, 50]`, `n = 5` * **Output**: `true` * **Explanation**: All elements are in non-decreasing order. --- ### Comparison - **Brute Force Method**: - **Pros**: Simple and easy to understand. - **Cons**: Inefficient due to its O(n²) time complexity. - **Use Case**: Not suitable for large arrays due to its inefficiency. - **Optimal Method**: - **Pros**: Highly efficient with O(n) time complexity. - **Cons**: None significant. - **Use Case**: Ideal for checking the sorted status of large arrays. ### Edge Cases * **Empty Array**: An empty array is considered sorted. * **Single Element Array**: An array with a single element is considered sorted. * **Array with All Identical Elements**: An array where all elements are the same is considered sorted. ### Additional Notes * **Efficiency**: The optimal approach is significantly more efficient for large datasets. * **Simplicity**: Despite its efficiency, the optimal approach is also simple to implement. * **Practicality**: The optimal method is generally preferred due to its linear time complexity and constant space complexity. ### Conclusion Checking if an array is sorted can be done efficiently using a single-pass approach. 
While the brute force method provides a simple but inefficient solution, the optimal method is both efficient and easy to implement, making it suitable for large datasets. ---
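The article's examples are C++; as a quick sketch, the same single-pass idea in Python also covers the edge cases listed above:

```python
def is_array_sorted(arr):
    # Compare each element with its predecessor in a single pass.
    for i in range(1, len(arr)):
        if arr[i] < arr[i - 1]:
            return False
    return True

print(is_array_sorted([]))           # True: an empty array is sorted
print(is_array_sorted([7]))          # True: a single element is sorted
print(is_array_sorted([3, 3, 3]))    # True: identical elements are non-decreasing
print(is_array_sorted([10, 20, 5]))  # False: 5 < 20
```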
masum-dev
1,876,982
The Future of Remote Work: Tech Innovations Shaping the Workplace
Advanced Collaboration Tools: Platforms like Slack, Microsoft Teams, and Zoom are constantly...
0
2024-06-04T18:00:45
https://dev.to/bingecoder89/the-future-of-remote-work-tech-innovations-shaping-the-workplace-1l12
remote, web3, javascript, beginners
* Advanced Collaboration Tools: Platforms like Slack, Microsoft Teams, and Zoom are constantly evolving to offer more than just video conferencing. They now feature integrated project management, file sharing, and real-time collaboration tools, creating a more seamless virtual workspace. * Virtual and Augmented Reality: VR can create immersive meeting spaces or training environments, while AR can overlay digital information onto the physical world to assist with tasks. These technologies can bridge the gap between remote and in-person work. * Enhanced Cybersecurity Measures: As remote work becomes more prevalent, robust cybersecurity measures are crucial to protect sensitive data and company information. * AI-powered Automation: AI can automate repetitive tasks, freeing up valuable time for employees to focus on more strategic work. * The Rise of the Digital Nomad: With increased connectivity and flexible work arrangements, more people will have the freedom to work remotely from anywhere in the world. * Focus on Employee Wellbeing: Companies will need to prioritize employee well-being in a remote work environment. This may include providing ergonomics training, mental health resources, and fostering a strong sense of community. * The Evolving Role of the Manager: Managers will need to adapt their leadership styles to effectively manage and motivate remote teams. * Performance Metrics and Analytics: Companies will increasingly rely on data and analytics to track employee performance and measure productivity in a remote work setting. * The Blurring of Lines Between Work and Life: With remote work, it can be challenging to disconnect from work. Companies and employees will need to establish healthy boundaries to maintain a good work-life balance. * The Importance of Reskilling and Upskilling: As technology continues to evolve, workers will need to continuously learn new skills to stay relevant in the job market. Happy Learning 🎉
bingecoder89
1,876,981
A Comprehensive Guide to Building Recommendation Systems
Recommendation systems are an integral part of our digital experience, influencing our choices on...
0
2024-06-04T18:00:34
https://dev.to/abhaysinghr1/a-comprehensive-guide-to-building-recommendation-systems-4me7
python, machinelearning, datascience
Recommendation systems are an integral part of our digital experience, influencing our choices on platforms like Netflix, Amazon, and Spotify. These systems analyze vast amounts of data to suggest products, movies, music, and even friends or jobs. In this guide, we will delve deep into the world of recommendation systems, covering various techniques, popular libraries, and real-world applications. Whether you are a data scientist, a developer, or simply curious about the technology, this comprehensive guide will equip you with the knowledge to build effective recommendation systems. ### Table of Contents 1. Introduction to Recommendation Systems 2. Types of Recommendation Systems - Collaborative Filtering - Content-Based Filtering - Hybrid Methods 3. Key Techniques and Algorithms - User-Based Collaborative Filtering - Item-Based Collaborative Filtering - Matrix Factorization - Singular Value Decomposition (SVD) - Deep Learning Approaches 4. Popular Libraries for Building Recommendation Systems - Scikit-Learn - Surprise - LightFM - TensorFlow and PyTorch 5. Step-by-Step Guide to Building a Simple Recommendation System - Data Collection and Preprocessing - Model Training and Evaluation - Implementation with Scikit-Learn 6. Advanced Topics and Techniques - Incorporating Implicit Feedback - Context-Aware Recommendations - Sequence-Aware Recommendations 7. Real-World Use Cases - E-commerce - Entertainment - Social Media - Job Portals 8. Challenges and Best Practices - Data Sparsity - Cold Start Problem - Scalability - Privacy Concerns 9. Conclusion and Future Trends --- ### 1. Introduction to Recommendation Systems Recommendation systems are algorithms designed to suggest relevant items to users based on various data inputs. These systems have become essential in many industries, driving user engagement and increasing sales. By analyzing user behavior, preferences, and historical interactions, recommendation systems can predict what users might be interested in. ### 2. 
Types of Recommendation Systems There are several types of recommendation systems, each with its unique approach and use cases. The primary types are: #### Collaborative Filtering Collaborative filtering is one of the most popular recommendation techniques. It relies on the assumption that users who have agreed in the past will agree in the future. Collaborative filtering can be further divided into: - **User-Based Collaborative Filtering**: This approach finds users similar to the target user and recommends items that those similar users liked. - **Item-Based Collaborative Filtering**: This method finds items similar to the items the target user has liked and recommends those. #### Content-Based Filtering Content-based filtering recommends items based on the features of the items and the preferences of the user. This technique uses item metadata and user profiles to find matches. For instance, a content-based recommendation system for movies might consider the genre, director, and actors to suggest films similar to those a user has enjoyed in the past. #### Hybrid Methods Hybrid recommendation systems combine collaborative filtering and content-based filtering to improve performance and overcome the limitations of each method. By leveraging the strengths of both approaches, hybrid methods can provide more accurate and diverse recommendations. ### 3. Key Techniques and Algorithms Various techniques and algorithms are used to build recommendation systems. Here, we will explore some of the key methods: #### User-Based Collaborative Filtering User-based collaborative filtering finds users who have similar preferences and recommends items that those users have liked. This method involves calculating the similarity between users using measures such as cosine similarity, Pearson correlation, or Jaccard index. #### Item-Based Collaborative Filtering Item-based collaborative filtering focuses on finding items that are similar to the items a user has interacted with. 
The similarity between items is calculated, and recommendations are made based on these similarities. This approach is often preferred in scenarios with a large number of users but fewer items. #### Matrix Factorization Matrix factorization techniques, such as Singular Value Decomposition (SVD) and Alternating Least Squares (ALS), are popular in collaborative filtering. These methods decompose the user-item interaction matrix into latent factors, capturing underlying patterns in the data. #### Singular Value Decomposition (SVD) SVD is a matrix factorization technique that decomposes the interaction matrix into three matrices, capturing the latent factors representing users and items. This technique is widely used in collaborative filtering to provide high-quality recommendations. #### Deep Learning Approaches Deep learning methods, such as neural collaborative filtering (NCF) and autoencoders, have gained popularity in recent years. These models can capture complex patterns in the data and provide highly personalized recommendations. ### 4. Popular Libraries for Building Recommendation Systems Several libraries and frameworks make it easier to build recommendation systems. Here are some of the most popular ones: #### Scikit-Learn Scikit-Learn is a versatile machine learning library in Python that provides tools for building simple recommendation systems. While it doesn't have specialized functions for recommendations, it can be used for implementing basic collaborative filtering and content-based methods. #### Surprise Surprise is a dedicated library for building and evaluating recommendation systems. It provides various algorithms for collaborative filtering, including matrix factorization techniques and tools for cross-validation and parameter tuning. #### LightFM LightFM is a Python library designed for building hybrid recommendation systems. 
It supports both collaborative filtering and content-based methods and can incorporate metadata about users and items into the recommendation process. #### TensorFlow and PyTorch TensorFlow and PyTorch are powerful deep learning frameworks that can be used to implement advanced recommendation models. They provide flexibility and scalability, making them suitable for large-scale recommendation systems. ### 5. Step-by-Step Guide to Building a Simple Recommendation System In this section, we will build a simple recommendation system using Scikit-Learn. We'll go through data collection and preprocessing, model training and evaluation, and implementation. #### Data Collection and Preprocessing The first step in building a recommendation system is collecting and preprocessing the data. We need user-item interaction data, such as ratings, purchases, or clicks. Once we have the data, we need to clean and preprocess it, handling missing values and normalizing features. #### Model Training and Evaluation Next, we train our recommendation model using the preprocessed data. We'll use collaborative filtering methods, such as user-based or item-based approaches. After training the model, we evaluate its performance using metrics like precision, recall, and mean squared error. 
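The matrix factorization technique described earlier can be sketched in a few lines with NumPy; the toy ratings matrix below is illustrative, not a real dataset:

```python
import numpy as np

# Toy 4-user x 3-item ratings matrix; 0 marks an unrated pair.
R = np.array([
    [5.0, 3.0, 0.0],
    [4.0, 0.0, 1.0],
    [1.0, 1.0, 5.0],
    [0.0, 1.0, 4.0],
])

# Decompose into latent factors, then keep k of them to approximate R.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# R_hat assigns a score to every user-item pair, including unrated ones,
# which is what rank-based recommendations are drawn from.
print(np.round(R_hat, 2))
```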
#### Implementation with Scikit-Learn
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.metrics import mean_squared_error
import numpy as np

# Load the dataset
data = pd.read_csv('ratings.csv')

# Split the data into training and testing sets
train_data, test_data = train_test_split(data, test_size=0.2)

# Create a user-item matrix for training
user_item_matrix = train_data.pivot(index='user_id', columns='item_id', values='rating').fillna(0)

# Calculate cosine similarity between users
user_similarity = cosine_similarity(user_item_matrix)
user_similarity_df = pd.DataFrame(user_similarity, index=user_item_matrix.index, columns=user_item_matrix.index)

# Function to make recommendations
def recommend(user_id, num_recommendations):
    similar_users = user_similarity_df[user_id].sort_values(ascending=False).index[1:]
    recommended_items = {}
    for similar_user in similar_users:
        items = train_data[train_data['user_id'] == similar_user]['item_id'].values
        for item in items:
            if item not in recommended_items:
                recommended_items[item] = 0
            recommended_items[item] += user_similarity_df[user_id][similar_user]
        if len(recommended_items) >= num_recommendations:
            break
    recommended_items = sorted(recommended_items.items(), key=lambda x: x[1], reverse=True)
    return [item[0] for item in recommended_items[:num_recommendations]]

# Example: Recommend 5 items for user with ID 1
recommendations = recommend(1, 5)
print(f"Recommendations for user 1: {recommendations}")
```
### 6. Advanced Topics and Techniques #### Incorporating Implicit Feedback Implicit feedback, such as clicks or views, can be used to improve recommendation systems. Unlike explicit feedback (ratings), implicit feedback is more abundant and can provide valuable insights into user preferences.
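As a hedged sketch of using implicit feedback, the snippet below turns a made-up click log (the column names are illustrative) into a binary user-item interaction matrix, where any click counts as a positive signal:

```python
import pandas as pd

# A tiny made-up click log: each row is one user-item interaction.
clicks = pd.DataFrame({
    'user_id': [1, 1, 2, 3, 3, 3],
    'item_id': ['a', 'b', 'a', 'b', 'c', 'c'],
})

# Treat any click as a positive signal; pairs never clicked stay 0.
interactions = (
    clicks.assign(clicked=1)
          .pivot_table(index='user_id', columns='item_id',
                       values='clicked', aggfunc='max', fill_value=0)
)
print(interactions)
```

A matrix like this can feed the same similarity or factorization machinery used for explicit ratings.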
#### Context-Aware Recommendations Context-aware recommendation systems take into account additional contextual information, such as time, location, or device, to provide more relevant suggestions. For example, a restaurant recommendation system might consider the time of day and the user's location to suggest nearby dining options. #### Sequence-Aware Recommendations Sequence-aware recommendations consider the order of user interactions. Techniques like Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) networks can model sequential data to capture temporal patterns in user behavior. ### 7. Real-World Use Cases #### E-commerce E-commerce platforms like Amazon use recommendation systems to suggest products based on user behavior and preferences. These systems help increase sales by showing users items they are likely to purchase. #### Entertainment Streaming services like Netflix and Spotify rely heavily on recommendation systems to suggest movies, TV shows, and music. These recommendations are tailored to individual user preferences, enhancing the overall user experience. #### Social Media Social media platforms like Facebook and Twitter use recommendation systems to suggest friends, groups, and content. By analyzing user interactions, these systems help users discover relevant connections and information. #### Job Portals Job recommendation systems on platforms like LinkedIn and Indeed suggest job postings to users based on their profiles and past interactions. These systems help users find relevant job opportunities more efficiently. ### 8. Challenges and Best Practices #### Data Sparsity Recommendation systems often deal with sparse data, where many users have interacted with only a few items. Techniques like matrix factorization and incorporating implicit feedback can help mitigate this issue. #### Cold Start Problem The cold start problem arises when a new user or item is added to the system with no prior interactions. 
Hybrid methods and leveraging metadata can help address this challenge. #### Scalability As the number of users and items grows, recommendation systems need to scale efficiently. Distributed computing and optimized algorithms can help maintain performance at scale. #### Privacy Concerns Collecting and analyzing user data raises privacy concerns. Implementing robust data anonymization and security measures is essential to protect user privacy. ### 9. Conclusion and Future Trends Recommendation systems have become a crucial component of many online platforms, enhancing user experience and driving engagement. As technology advances, we can expect to see more sophisticated recommendation systems incorporating deep learning, context-awareness, and real-time personalization. Future trends may also include explainable recommendations, where users can understand why certain items are suggested, and more emphasis on ethical considerations in recommendation systems. In conclusion, building effective recommendation systems requires a deep understanding of various techniques and algorithms, the ability to leverage popular libraries, and a keen awareness of real-world challenges and best practices. By following this comprehensive guide, you can develop recommendation systems that provide valuable insights and personalized experiences for users.
abhaysinghr1
1,876,980
Discover Top-Notch Anomaly Pool Services in the USA! 🌊🏊‍♂️
Looking for reliable and professional anomaly pool services in the USA? Look no further! Our expert...
0
2024-06-04T18:00:29
https://dev.to/nikolaprem/discover-top-notch-anomaly-pool-services-in-the-usa-1n75
Looking for reliable and professional anomaly pool services in the USA? Look no further! Our expert team specializes in identifying and resolving any unusual issues with your pool, ensuring it stays in pristine condition all year round.

🔍 What We Offer:

- Comprehensive anomaly detection and analysis
- Expert **[repair and maintenance services](https://www.anomalypoolservices.io/)**
- State-of-the-art equipment and techniques
- Tailored solutions for residential and commercial pools

💡 Why Choose Us?

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h2k20pihdsxtrsqoitg3.jpeg)

- Experienced and certified professionals
- Prompt and efficient service
- Customer satisfaction guaranteed
- Competitive pricing

Don’t let anomalies ruin **[your pool](https://www.anomalypoolservices.io/about-us)** experience! Contact us today to schedule an inspection and enjoy a flawless pool environment.

📞 Call us now: [(817) 809-3144] 🌐 Visit our website: [https://www.anomalypoolservices.io/]

#AnomalyPoolService #PoolMaintenance #USAPoolServices #ExpertPoolCare #PoolRepair #SwimmingPool

[pool service](https://www.anomalypoolservices.io/the-anomaly-experience/weekly-pool-maintenance) pool maintenance pool repairing pool refilling
nikolaprem
1,876,979
RECOVERING LOST, HACKED, OR STOLEN BITCOIN THROUGH HACK SAVVY TECHNOLOGY
HACK SAVVY TECHNOLOGY CONTACT INFO: Mail them via: contactus@hacksavvytechnology. com Mail them via:...
0
2024-06-04T17:59:47
https://dev.to/jennifer_andrew_54643684f/recovering-lost-hacked-or-stolen-bitcoin-through-hack-savvy-technology-5aad
HACK SAVVY TECHNOLOGY CONTACT INFO: Mail them via: contactus@hacksavvytechnology. com Mail them via: Support@hacksavvytechrecovery. com WhatsApp No: +7 999 829‑50‑38, website: https://hacksavvytechrecovery.com After successfully running and then selling my t-shirt printing business, I netted $800,000 in revenue after deducting taxes. With this windfall, I decided to invest the entire amount in Bitcoin. My decision was influenced by my thorough understanding of cryptocurrency trading and the substantial potential I saw in the online market. Having previously studied the concept of day trading extensively, I felt well-prepared to navigate the volatile world of cryptocurrency investments. However, along with the opportunities came risks, including those posed by cybercriminals. Shortly after making my investment, I began receiving malicious emails that infiltrated my Gmail account. These emails were expertly crafted to appear legitimate, and unfortunately, they managed to obtain my passwords. The scammers then attempted to defraud me and steal my Bitcoin holdings. Recognizing the severity of the situation, I immediately contacted my friend, who is an IT expert. He recommended HACK SAVVY TECHNOLOGY, a professional team specializing in dealing with such cyber threats. The team from Hack Savvy Technology swung into action promptly, bringing their expertise to bear on securing my digital assets. HACK SAVVY TECHNOLOGY provided several key advantages during this critical time. They responded rapidly to my distress call, understanding the urgency of the situation and beginning their work almost immediately. Their in-depth knowledge of cybersecurity and cryptocurrency transactions ensured that they could effectively address the threat. They were able to identify and neutralize the malicious emails that had infiltrated my account. 
In addition to securing my accounts, HACK SAVVY TECHNOLOGY assisted in recovering any compromised data, ensuring that my Bitcoin investments remained safe. They also provided me with valuable advice on how to enhance my cybersecurity to prevent future attacks. This included setting up two-factor authentication, using more robust passwords, and recognizing potential phishing attempts. Knowing that my digital assets were being protected by professionals gave me immense peace of mind, allowing me to focus on my investment strategy without constantly worrying about potential threats. The experience underscored the importance of cybersecurity, especially in the world of cryptocurrency trading. Thanks to HACK SAVVY TECHNOLOGY, I was able to safeguard my $800,000 Bitcoin investment and continue exploring the promising landscape of digital currencies with confidence. Their swift and effective action not only protected my assets but also educated me on maintaining better security practices going forward. Kindly reach them via the above info. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b0lh641uuq49me3p7ggc.jpeg)
jennifer_andrew_54643684f
1,876,978
How safe is your prescription information online? How safe is your prescription information online?
https://bit.ly/3yMW1Ru Ensuring the safety of your prescription information online is crucial. DiRx...
0
2024-06-04T17:58:18
https://dev.to/emily_alba_889fc8556d3dd8/how-safe-is-your-prescription-information-onlinehow-safe-is-your-prescription-information-online-4b36
https://bit.ly/3yMW1Ru Ensuring the safety of your prescription information online is crucial. DiRx emphasises the importance of understanding who has access to your health data, how it is used, and the security measures to protect it. With HIPAA compliance and secure database protocols, DiRx guarantees that your private health information remains confidential and safe. #OnlinePharmacySecurity #ProtectingHealthInformation #HIPAACompliantPharmacy #SecurePrescriptionInformation #OnlinePrescriptionPrivacy #HealthInformationSecurity #SafeOnlinePharmacy #ProtectPHIOnline #SecureHealthData #OnlinePharmacyPrivacy #PrivacyLawPharmacy #HIPAAVerifiedPharmacy #LegitScriptCertifiedPharmacy #ProtectingPrescriptionData #PharmacyDataEncryption #ConfidentialHealthInformation #PrivacyInOnlinePharmacies #SecureOnlineMedication #ProtectedHealthInformation #OnlinePharmacyCompliance
emily_alba_889fc8556d3dd8
1,876,973
Bringing up BPI-F3 - Part 3
Initramfs Initially I was hoping that it would not be needed, but since the SoC has a...
27,455
2024-06-04T17:45:32
https://dev.to/luzero/bringing-up-bpi-f3-part-3-101h
riscv, bpif3, gentoo
## Initramfs

Initially I was hoping that it would not be needed, but since the SoC has a [remote processor](https://www.kernel.org/doc/html/latest/staging/remoteproc.html) and the defconfig for it enables it, I guess it is simpler to use an initramfs.

### Remoteproc firmware

As seen [here](https://github.com/BPI-SINOVOIP/armbian-build/commit/f4d657eda0400386bb2bf6d4db8798741afae963) the remoteproc needs a firmware bit and if you happen to forget about it you'd be welcomed by:

```
dmesg
[ 4.205609] remoteproc remoteproc0: rcpu_rproc is available
[ 4.211421] remoteproc remoteproc0: Direct firmware load for esos.elf failed with error -2
[ 4.214379] riscv-pmu-sbi: SBI PMU extension is available
[ 4.219790] remoteproc remoteproc0: powering up rcpu_rproc
[ 4.225306] riscv-pmu-sbi: 16 firmware and 18 hardware counters
[ 4.230776] remoteproc remoteproc0: Direct firmware load for esos.elf failed with error -2
[ 4.245106] remoteproc remoteproc0: request_firmware failed: -2
[ 4.246235] es8326 2-0019: assuming static mclk
[ 4.256170] enter spacemit_snd_sspa_pdev_probe
[ 4.301833] usb 2-1: new high-speed USB device number 2 using xhci-hcd
```

If you'd like to use [dracut](https://wiki.gentoo.org/wiki/Dracut), all you need to add to your `/etc/dracut.conf.d/firmware.conf` is:

```sh
install_items+=" /lib/firmware/esos.elf "
```

If you use [Genkernel](https://wiki.gentoo.org/wiki/Genkernel), set in `/etc/genkernel.conf`:

```sh
# Add firmware(s) to initramfs
FIRMWARE="yes"

# Specify directory to pull from
FIRMWARE_DIR="/lib/firmware"

# Specify a comma-separated list of firmware files or directories to include,
# relative to FIRMWARE_DIR. If empty or unset, the full contents of
# FIRMWARE_DIR will be included (if FIRMWARE option above is set to YES).
FIRMWARE_FILES="esos.elf"
```

as explained [here](https://wiki.gentoo.org/wiki/Genkernel#Firmware_loading).
## Coming next Now the remaining bits I'd like to have done are having a nicer u-boot configuration and hopefully wrap everything up so we can have a Gentoo image that can be simply flashed to the SD/eMMC/NVMe.
luzero
1,876,977
What is Front End Development?
Hello! So you may have heard of the phrase "front-end" development before, but you don't quite know...
0
2024-06-04T17:44:42
https://dev.to/jockko/what-is-front-end-development-17ak
Hello! So you may have heard of the phrase "front-end" development before, but you don't quite know what it means. Let's dive into it.

Front-End Development is what I like to call the "umbrella" of our code that contains the visual side of things for our client. Essentially, everything you see on the webpage. Our role as the developer is to ensure that our websites are usable and compatible with one another.

**Skills**

Knowing how complex a working webpage is, you can imagine the different parts that all align with each other to form it. Essential skills required to learn front-end development as a whole include knowledge of HTML, CSS, JavaScript, and frameworks.

HTML is the section that includes all of the elements that create a webpage. For example, things such as creating a header tag, a title, a paragraph, and even images. Here is how we divide our webpage into sections that we could even investigate within the browser.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qy0dqnfy047mb7d7ilxa.png)

For the sake of time, here we just created a header tag for the title of our blog. However, you can imagine there also being a body tag which would then include a p tag for every paragraph of our blog. So if we were to debug our webpage, we can inspect which elements are what based on their structure in the HTML page.

**Cascading Style Sheets (CSS)**

The styling of our webpage typically goes here. From changing the color of a specific DOM element, to altering the background image of your page. CSS is a world of its own because there are so many different ways to style a page and it grows by the day! So if you're a fan of designing how things will look on the webpage, CSS may be right up your alley.

**JavaScript**

Although there are many other coding languages, JavaScript remains one of the most commonly used.
Not only that, but many people agree that learning JavaScript first can make your transition to learning other languages a lot easier.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9kx02jiglhcs36ds3kys.png)

For example, if I had an array of data that I wanted multiplied by 2, we can use JavaScript's higher-order functions to achieve certain goals faster. In this case, the map function logs [2, 4, 6, 8, 10] to the console.

**Frameworks**

Frameworks are different tools that we as developers can use to ensure that our webpages are able to "communicate" with one another. For example, if I wanted the client to add a tweet to the page, or even add a background image to their profile, we expect the state of our webpage to update to an entirely new one that includes that tweet and/or background. Frameworks allow us to do that! Just like there are many different coding languages, there are many frameworks as well. Some of them include jQuery, AngularJS, Bootstrap, Vue.js, and React; the most popular being React.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t1gyxb4phc0u4nl9iyzz.png)

There are many other layers to front-end development that can be a career focus, such as handling Asynchronous HTTP Requests (AJAX) for the client side, DOM manipulation, learning about package managers (such as npm), hosting the webpage, and more. Salaries can range from $60,000 to $120,000 depending on your experience level, and some senior devs make up to $150,000. So it's imperative that you find your niche as a developer for your job hunt. If you're one that loves to focus more on the client side of things, then you would narrow your search to anything under the front-end development umbrella.
jockko
1,876,975
How Technology and Programming Help My Hikes
I love hiking, especially in the mountains. There's something magical about being in the middle of...
0
2024-06-04T17:29:30
https://dev.to/outofyourcomfortzone/how-technology-and-programming-help-my-hikes-28ap
I love hiking, especially in the mountains. There's something magical about being in the middle of nature, tackling challenging trails, and admiring breathtaking views. But let's be real, hiking in the mountains isn't just about throwing a backpack on and heading out. Luckily, technology and programming are here to give a helping hand, making everything safer and more enjoyable. #### Navigation and Route Planning First of all, who hasn't gotten lost on a trail? I've been there, but now I use GPS and apps like Google Maps, AllTrails, and Gaia GPS. These apps are lifesavers—literally! They show me detailed maps, route options, and even monitor my progress in real-time. This way, I can plan my hikes properly and avoid ending up in the middle of nowhere. #### Health and Fitness Monitoring I'm a fan of smartwatches and wearables. I have a Garmin that's my inseparable companion on the trails. It measures my heart rate, counts my steps, and even calculates the calories I burn. It's like having a digital personal trainer on my wrist, helping me keep the pace and avoid overexertion. #### Weather Forecasting and Environmental Conditions There's nothing worse than being caught off guard by a storm in the middle of the mountains. That's why I rely on weather apps like Weather Underground and Windy. They alert me to any weather changes and help me prepare for whatever comes. This way, I can focus on the hike without worrying too much about the weather. #### Communication in Remote Areas I've been to places where the cell phone doesn't work at all. In these situations, having a satellite communication device like the Garmin inReach is essential. These devices allow me to send SOS messages and share my location, ensuring I can call for help if needed. Safety first, always! #### Smart Equipment and Drones And then there are drones! They are amazing for exploring inaccessible areas and taking spectacular photos of the landscapes. 
Additionally, boots with sensors and backpacks with solar chargers are technologies that make hiking more comfortable and safe. It's modernity at our service! ### Best Countries for Mountain Hiking Now, if there's one thing I love, it's exploring trails in different countries. My favorites are: Nepal, New Zealand, Switzerland, Peru, Japan and [South Korea](https://www.outofyourcomfortzone.net/16-top-hikes-in-south-korea/).
outofyourcomfortzone
1,876,971
FUNCTIONAL & NON-FUNCTIONAL TESTING
FUNCTIONALITY TESTING: It is a type of testing to test the feature of the product .Whether it covers...
0
2024-06-04T17:21:13
https://dev.to/samu_deva/functional-non-functional-testing-1724
FUNCTIONAL TESTING: Functional testing verifies the features of the product: whether it fulfills its purpose, covers all of the listed functionalities, and is built as per the requirements in the SRS or BRS document. It is a type of black box testing whose test cases are based on the specifications of the component under test, so testers need not concern themselves with the source code of the application. In short, functional testing checks what the system actually does: each function of the software is exercised with appropriate input to verify that the actual output matches the expected output specified in the requirement document.

1. It checks what the system actually does
2. It ensures that your product meets customer and business requirements and doesn't have any major bugs
3. It verifies the accuracy of the software against expected output
4. It is performed before non-functional testing

Functional testing types:
• Unit testing
• Smoke testing
• User acceptance testing
• Integration testing
• Regression testing
• Localization
• Globalization
• Interoperability

EXAMPLE: a functional test case is to verify the login functionality of GPay, along with:
1. Adding a bank account
2. Sending and receiving money
3. Messaging
4. Allowing contacts

NON-FUNCTIONAL TESTING: Non-functional testing is another type of software testing which checks the non-functional aspects of a system, such as performance, usability, reliability, and so on; it helps assess the readiness of the system. It checks how well the system performs, covering aspects such as performance, load, stress, scalability, security, compatibility, etc. Its main focus is improving the user experience, for example how fast the system responds to a request.
1. It checks how well the system performs
2. It ensures that the product stands up to customer expectations
3. It verifies the behavior of the software at various load conditions
4. It is performed after functional testing

Non-functional testing types:
• Performance testing
• Volume testing
• Scalability
• Usability testing
• Load testing
• Stress testing
• Compliance testing
• Portability testing
• Disaster recovery testing

EXAMPLE: a non-functional test case for the Google homepage is to check whether it loads in less than 2 seconds.
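To make the functional/non-functional distinction concrete, here is a small, self-contained sketch: a hypothetical `login` function plus assert-style functional test cases that check input-output behaviour. The function, user store, and credentials are invented for illustration; a real suite would use a framework such as unittest or pytest:

```python
# Hypothetical login function and functional test cases for it.
# Functional testing checks WHAT the system does: given valid or invalid
# input, is the output what the requirement specifies?

VALID_USERS = {"alice": "s3cret"}  # stand-in for a real user store

def login(username, password):
    """Return True when the credentials match a registered user."""
    return VALID_USERS.get(username) == password

# Functional test cases: expected output for each input
assert login("alice", "s3cret") is True      # valid credentials accepted
assert login("alice", "wrong") is False      # wrong password rejected
assert login("mallory", "s3cret") is False   # unknown user rejected
print("all functional login tests passed")
```

A non-functional test of the same feature would instead measure, for example, how long `login` takes to respond under heavy load, or how many concurrent logins the system sustains.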
samu_deva
1,876,970
Day 10 of my progress as a vue dev
About today Today was a little better than yesterday, I made progress with my DSA visualizer project...
0
2024-06-04T17:18:55
https://dev.to/zain725342/day-10-of-my-progress-as-a-vue-dev-177b
webdev, vue, typescript, tailwindcss
**About today**

Today was a little better than yesterday: I made progress with my DSA visualizer project and spent quite a bit of time learning new concepts, practicing, and brainstorming new ideas. My thought process is simple at this point: implement whatever is easy to understand and good to visualize.

**What's next?**

I will be done with my tree implementation by tomorrow and will move on to refining the whole project, smoothing out any rough edges to make it visually appealing and a good experience functionality-wise.

**Improvements required**

I still lack the touch I'm looking for to make my visuals more appealing to the user. I need to spend more time studying visual arts and human-computer interaction concepts in order to get this right and make it better. Wish me luck!
zain725342
1,876,969
409. Longest Palindrome
409. Longest Palindrome Easy Given a string s which consists of lowercase or uppercase letters,...
27,523
2024-06-04T17:18:40
https://dev.to/mdarifulhaque/409-longest-palindrome-127o
php, leetcode, algorithms, programming
409\. Longest Palindrome

Easy

Given a string `s` which consists of lowercase or uppercase letters, return the length of the **longest** palindrome[^1] that can be built with those letters.

Letters are **case sensitive**, for example, `"Aa"` is not considered a palindrome.

**Example 1:**

- **Input:** s = "abccccdd"
- **Output:** 7
- **Explanation:** One longest palindrome that can be built is "dccaccd", whose length is 7.

**Example 2:**

- **Input:** s = "a"
- **Output:** 1
- **Explanation:** The longest palindrome that can be built is "a", whose length is 1.

**Constraints:**

- <code>1 <= s.length <= 2000</code>
- `s` consists of lowercase **and/or** uppercase English letters only.

[^1]: **Palindrome** `A palindrome is a string that reads the same forward and backward.`

**Solution:**

```php
class Solution {

    /**
     * @param String $s
     * @return Integer
     */
    function longestPalindrome($s) {
        $ans = 0;
        $count = array_fill(0, 128, 0);

        foreach (str_split($s) as $c) {
            $count[ord($c)]++;
        }

        foreach ($count as $freq) {
            $ans += $freq % 2 == 0 ? $freq : $freq - 1;
        }

        $hasOddCount = false;
        foreach ($count as $c) {
            if ($c % 2 != 0) {
                $hasOddCount = true;
                break;
            }
        }

        return $ans + ($hasOddCount ? 1 : 0);
    }
}
```

**Contact Links**

- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
mdarifulhaque
1,876,348
App runner with CloudFormation AWS (json, nodejs, java )
Refer to the previous article to understand the architectural...
0
2024-06-04T17:16:19
https://dev.to/huydanggdg/app-runner-with-cloudformation-aws-json-nodejs-java--433i
apprunner, aws, cloudformation, iac
Refer to the previous article to understand the architectural model: https://dev.to/huydanggdg/migrate-heroku-to-aws-1d73 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wcwik0zq49nd4q3tplkb.png) **1.Setup connect github** ![Create Connected accounts](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/eekfp5d06kumh06r4bjw.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bq39s4jauubm85hj89pu.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/aq2xvua128locjz6m8vr.png) **2.Create file template json for Cloudformation** Example for nodejs ``` { "AWSTemplateFormatVersion": "2010-09-09", "Resources": { "AppRunnerService": { "Type": "AWS::AppRunner::Service", "Properties": { "ServiceName": "client", "SourceConfiguration": { "AuthenticationConfiguration": { "ConnectionArn": "arn:aws:apprunner:ap-northeast-1:68972488xxx:connection/app/xxxxxxxxxx" }, "CodeRepository": { "RepositoryUrl": "https://github.com/huydanggdg/client", "SourceCodeVersion": { "Type": "BRANCH", "Value": "main" }, "CodeConfiguration": { "ConfigurationSource": "API", "CodeConfigurationValues": { "Runtime": "NODEJS_14", "StartCommand": "npm run production", "BuildCommand": "npm install", "Port": "8080" } } } }, "InstanceConfiguration": { "Cpu": "1024", "Memory": "2048" } } } } } ``` Example for java ``` { "AWSTemplateFormatVersion": "2010-09-09", "Resources": { "AppRunnerService": { "Type": "AWS::AppRunner::Service", "Properties": { "ServiceName": "java-main", "SourceConfiguration": { "AuthenticationConfiguration": { "ConnectionArn": "arn:aws:apprunner:ap-northeast-1:6897248xxxx:connection/app/xxxxxxx" }, "CodeRepository": { "RepositoryUrl": "https://github.com/huydanggdg/java-main", "SourceCodeVersion": { "Type": "BRANCH", "Value": "main" }, "CodeConfiguration": { "ConfigurationSource": "API", "CodeConfigurationValues": { "Runtime": "CORRETTO_8", "StartCommand": "java -Xms256m -jar target/gms_agm.jar .", 
"BuildCommand": "mvn package", "Port": "3020" } } } }, "InstanceConfiguration": { "Cpu": "2048", "Memory": "4096" } } } } } ``` Example for Database ``` { "AWSTemplateFormatVersion": "2010-09-09", "Description": "RDS PostgreSQL with Auto-Created VPC for Singer", "Parameters": { "DBPassword": { "Type": "String", "NoEcho": true, "Description": "Password for the PostgreSQL database" } }, "Resources": { "VPC": { "Type": "AWS::EC2::VPC", "Properties": { "CidrBlock": "10.0.0.0/16", "EnableDnsSupport": true, "EnableDnsHostnames": true, "Tags": [{ "Key": "Name", "Value": "SingerVPC" }] } }, "InternetGateway": { "Type": "AWS::EC2::InternetGateway", "Properties": { "Tags": [{ "Key": "Name", "Value": "SingerIGW" }] } }, "VPCGatewayAttachment": { "Type": "AWS::EC2::VPCGatewayAttachment", "Properties": { "VpcId": { "Ref": "VPC" }, "InternetGatewayId": { "Ref": "InternetGateway" } } }, "SubnetGroup": { "Type": "AWS::RDS::DBSubnetGroup", "Properties": { "DBSubnetGroupDescription": "Subnets for Singer RDS", "SubnetIds": [ { "Ref": "PublicSubnet1" }, { "Ref": "PublicSubnet2" } ] } }, "PublicSubnet1": { "Type": "AWS::EC2::Subnet", "Properties": { "VpcId": { "Ref": "VPC" }, "CidrBlock": "10.0.0.0/24", "AvailabilityZone": { "Fn::Select": [0, { "Fn::GetAZs": "" }] }, "MapPublicIpOnLaunch": true, "Tags": [{ "Key": "Name", "Value": "SingerPublicSubnet1" }] } }, "PublicSubnet2": { "Type": "AWS::EC2::Subnet", "Properties": { "VpcId": { "Ref": "VPC" }, "CidrBlock": "10.0.1.0/24", "AvailabilityZone": { "Fn::Select": [1, { "Fn::GetAZs": "" }] }, "MapPublicIpOnLaunch": true, "Tags": [{ "Key": "Name", "Value": "SingerPublicSubnet2" }] } }, "PostgreSQLInstance": { "Type": "AWS::RDS::DBInstance", "Properties": { "AllocatedStorage": "20", "DBInstanceClass": "db.t3.micro", "Engine": "postgres", "EngineVersion": "14", "MasterUsername": "admin", "MasterUserPassword": { "Ref": "DBPassword" }, "DBName": "singer_db", "PubliclyAccessible": false, "DBSubnetGroupName": { "Ref": "SubnetGroup" } } } }, 
"Outputs": { "PostgreSQLInstanceEndpoint": { "Description": "Endpoint for the PostgreSQL instance", "Value": { "Fn::GetAtt": ["PostgreSQLInstance", "Endpoint.Address"] } } } } ``` **3.Run code** ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y919wb7t6mw8xh9freug.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bnca31er9i0x38d1un81.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c7pby2izb6p1x16e6fnb.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t0w342vmzc993l9dq682.png) Default option and review => Submit **4.Check service** **5.Delete Stack** => AWS auto clear service
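Before clicking through the console (or as part of CI), it can be useful to sanity-check a template locally. The AWS CLI offers `aws cloudformation validate-template` for a real validation; as a dependency-free illustration, the sketch below just parses the template JSON and checks its basic shape. The trimmed-down sample template is hypothetical:

```python
# Minimal local sanity check for a CloudFormation template: parse the
# JSON and confirm the basic structure. For real validation, use
# `aws cloudformation validate-template` instead.
import json

def check_template(text):
    """Parse template text and return the list of resource types."""
    template = json.loads(text)
    assert "AWSTemplateFormatVersion" in template, "missing format version"
    assert "Resources" in template and template["Resources"], "no resources"
    return [r["Type"] for r in template["Resources"].values()]

# Example with a trimmed-down template string
sample = """
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "AppRunnerService": { "Type": "AWS::AppRunner::Service", "Properties": {} }
  }
}
"""
print(check_template(sample))  # ['AWS::AppRunner::Service']
```

Catching a malformed template locally is much faster than waiting for a `ROLLBACK_COMPLETE` stack in the console.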
huydanggdg
1,876,953
Nvidia's 1000x Performance Boost Claim Verified
Nvidia's keynote at the recent Computex was full of bold marketing and messaging, bordering on...
0
2024-06-04T17:14:45
https://dev.to/maximsaplin/nvidias-1000x-performance-boost-claim-verified-j7f
ai, machinelearning, marketing, hardware
Nvidia's keynote at the recent Computex was full of bold marketing and messaging, bordering on complete BS.

![CEO Math Lesson](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmalbg9nc20p4jrks92m.png)

The "CEO Math" lesson with the "The more you buy, the more you save" conclusion reminded me of another bold claim (and play with the numbers) from earlier this year. At Blackwell's intro, one of the [slides](https://www.youtube.com/watch?t=2521) stated there's a 1000x boost in the compute power of Nvidia GPUs. Though many noticed the comparison was not apples-to-apples: FP16 data type performance for older generations was compared against the FP8 and FP4 smaller data types introduced in the newer hardware. Apparently, lower precision computation is faster. The graph would be much nicer if the FP16 line continued. Like that:

![Blackwell FP16 performance](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/8vzepoog14hp6xattd7x.png)

It is great that the new hardware has acceleration for smaller data types. It follows the trend of quantized language models - trading off slight LLM performance degradation for smaller size and faster inference. Though presenting the figures in the way they were presented:

- not explaining the difference in data types,
- hiding the baseline and breaking consistency,
- not highlighting the downside of decreased precision...

... that seems like a sketchy move worthy of the "How to Lie with Statistics" book.

![How to Lie with Statistics](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yl9sret4vqg7likyky4v.png)

Anyways... To come up with the above numbers for the FP16 performance for Hopper and Blackwell I found the specs for the products that had 4000 TFLOPS FP8 and 20000 TFLOPS FP4.
They are: - [H100 SXM](https://www.nvidia.com/en-us/data-center/h100/) FP8 3,958 teraFLOPS and FP16 1,979 teraFLOPS ![H100 SXM](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w85boiif5e1w0wbtksl5.png) - [GB200 NVL2](https://www.nvidia.com/en-us/data-center/gb200-nvl2/) dual GPU system with FP4 40 PFLOPS and FP16 10 PFLOPS (5000 FP16 teraFLOPS per GPU) ![GB200 NVL2](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pb2sh9a4q8xm8dhjeppx.png) The improvement in performance is still impressive, yet 1000x is way nicer than a mere 263x ;)
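The arithmetic behind the two ratios is easy to reproduce from the FP16 figures above, assuming a Pascal baseline of roughly 19 TFLOPS FP16 (that baseline is my reading of Nvidia's chart, not an official spec quote):

```python
# Reproduce the ratio math: the headline number compares Blackwell FP4
# against a Pascal FP16 baseline (~19 TFLOPS, assumed from the slide),
# while an apples-to-apples FP16 comparison is far less dramatic.

PASCAL_FP16_TFLOPS = 19        # assumed baseline read off the keynote chart
BLACKWELL_FP4_TFLOPS = 20_000  # GB200 NVL2, per GPU
BLACKWELL_FP16_TFLOPS = 5_000  # GB200 NVL2, per GPU
HOPPER_FP16_TFLOPS = 1_979     # H100 SXM

print(BLACKWELL_FP4_TFLOPS / PASCAL_FP16_TFLOPS)    # ~1053x: the "1000x" claim
print(BLACKWELL_FP16_TFLOPS / PASCAL_FP16_TFLOPS)   # ~263x: apples-to-apples
print(BLACKWELL_FP16_TFLOPS / HOPPER_FP16_TFLOPS)   # ~2.5x: Hopper to Blackwell
```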
maximsaplin
1,876,967
Learning AWS Day by Day — Day 80 — Amazon Cloud Directory
Exploring AWS !! Day 80 AWS Cloud Directory A directory-based store in AWS, where directories...
0
2024-06-04T17:11:44
https://dev.to/rksalo88/learning-aws-day-by-day-day-80-amazon-cloud-directory-5b5f
aws, beginners, cloud, cloudcomputing
Exploring AWS !! Day 80

AWS Cloud Directory

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rq175xqjumoz73snn91n.png)

A directory-based store in AWS, where directories can scale to millions of objects. There is no need to manage the directory infrastructure; just focus on the development and deployment of your application. You are not limited to organizing directory objects in a single fixed hierarchy: Cloud Directory can organize directory objects into multiple hierarchies, supporting many organizational pivots and relationships across directory information. For example, a directory of users may provide a hierarchical view based on reporting structure, location, and project affiliation. Similarly, a directory of devices may have multiple hierarchical views based on its manufacturer, current owner, and physical location.

We can do the following with Cloud Directory:

- Create directory-based applications easily and without having to worry about deployment, global scale, availability, and performance
- Build applications that provide user and group management, permissions or policy management, device registry, customer management, address books, and application or product catalogs
- Define new directory objects or extend existing types to meet application needs, reducing the code we need to write
- Reduce the complexity of layering applications on top of Cloud Directory
- Manage the evolution of schema information over time, ensuring future compatibility for consumers

Note that Cloud Directory is not a directory service for IT administrators who want to manage or migrate their directory infrastructure.
rksalo88
1,876,966
So I tried Rust for the first time.
My first attempt at writing a program in rust.
0
2024-06-04T17:11:42
https://dev.to/martinhaeusler/so-i-tried-rust-for-the-first-time-4jdb
rust
---
title: So I tried Rust for the first time.
published: true
description: My first attempt at writing a program in rust.
tags: rust, rustlang
cover_image: https://www.rust-lang.org/static/images/rust-social-wide.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-04 16:01 +0000
---

In my professional life, I'm at home on the Java Virtual Machine. I started out in Java over 10 years ago, but for around 4 years now I've been programming almost entirely in Kotlin. And while Kotlin serves a lot of different use cases and compilation targets, I've long felt like I need a "native tool" in my tool belt. I can deal with C if I have to, but it feels very dated at this point. And I heard all the buzz about Rust being the most loved programming language out there, so I thought I might take a look and see for myself. Here's how that went.

# Installation

The installation process was super smooth on my Linux machine. I installed `rustup` and let `cargo init` do its job; project setup done. I started Visual Studio Code, got a few plugins, and was hacking away within minutes. I especially love the fact that `cargo` is also a built-in dependency management tool. Nice!

# Advent of Code

Whenever I try a new language, the first thing I throw it at is usually the [Advent of Code](https://adventofcode.com/). It is a set of language-agnostic problems which start out very easy and gradually get more difficult; perfect for testing a new language! I picked the "Day 1" example of 2023. We're given a textual input file; each line contains digits and characters. We need to find the first and the last digit in each row, convert them into a number (e.g. 8 and 3 turn into 83) and sum them up.
After a little bit of tinkering, I ended up with this:

```rs
static INPUT: &'static str = include_str!("inputs/1.txt");

fn main() {
    let mut sum = 0;
    for line in INPUT.lines() {
        let numbers: Vec<u32> = line
            .chars()
            .filter(|c| c.is_numeric())
            .map(|c| c.to_digit(10).unwrap())
            .collect();
        let first = numbers.first().unwrap();
        let last = numbers.last().unwrap();
        sum += first * 10 + last;
    }
    println!("{}", sum);
}
```

This ended up producing the correct result. This may not be idiomatic Rust (I have yet to figure out what that is for myself) but it gets the job done. I have a lot of things to say about this little program.

## include_str!

We're already starting out with a **macro** on the first line. I'm REALLY not a fan of macros[^1], but that's just how Rust rolls I guess. If we look beyond the problems of macros overall, the `include_str!` macro is really nice. It includes the text of an input file as a string in the program, and the compiler verifies that the file path is correct and the file exists. This should raise some eyebrows: this macro is doing more than just producing regular Rust code, it talks to the compiler. This pattern of macros opening sneaky paths to compiler intrinsics is going to repeat on our journey. It allows the compiler to provide better error messages, but macros are also really good at hiding what code is actually being executed. At the very least, in Rust all macros are clearly demarcated with a trailing `!` in their name so you know where the "magic" is[^2].

## Give me a lifetime

Right in our first line we're also hit with the expression `&'static`. There are really two things going on here:

- `&` means that the type that comes next (`str`) is actually a "borrowed" reference. We will dig into this later; for now, know that a borrowed thing is immutable.
- `'static` is a "lifetime specifier". All of these start with a single quote (`'`). Rust is the only language I've ever seen that uses single quotes in a non-pair-wise fashion.
And **good lord is it ugly**! Looking past the syntax, `'static` is a special lifetime that means "lives for the entire duration of the program", which makes sense for a constant like this.

## main

The main function declaration is nothing special, aside from the nice fact that you're not forced to accept parameters, nor are you forced to return an integer (looking at you, Java!). `fn` is fine as a keyword, but I still prefer Kotlin's `fun`.

## let mut

Variables are declared with `let`; types can optionally be annotated after the name with `: SomeType`. Fairly standard, all fine. The keyword `mut` allows us to mutate the variable after we've defined it; by default, all `let`s are immutable. Kotlin's `var` and `val` approach is a little more elegant in that regard, but it's fine.

## Semicolon!!

`;` Gods, why. That's all I have to say. Why does the darn thing refuse to die? Chalk up another language with mandatory semicolons; they just keep coming.

## for

Next is our standard "foreach" loop. A noteworthy detail is that there are no round braces `( ... )`, which is also true for `if` statements. Takes a little time to get used to, but works for me. `lines()` splits the text into individual lines for us.

## Iterators

Next we see `chars()`, which gives us an iterator over the characters in the current line. Rust actually doesn't differentiate between a lazy `Sequence` and a lazy `Iterator`, like Kotlin does. So we have all the functional good stuff directly on the `Iterator`. We `filter` for the numeric characters, we `map` them to integers, and we `collect` the result in a `Vec<u32>` (which is roughly comparable to a `List<Int>` in Kotlin).

## Lambdas

The lambda syntax in Rust is... iffy at best, and a monstrosity at worst. The examples in the `filter` and `map` calls are very basic ones; things can get much worse. Rust actually has lambdas with different semantics (kind of like Kotlin's `callsInPlace` contract).
Sure, these things are there to aid the borrow checker, but I really miss my `it` from Kotlin. `filter { it.isDigit() }` is hard to beat in terms of readability.

## Type Inference... sometimes.

Much like the Kotlin compiler, the Rust compiler is usually pretty clever at figuring out the types of local variables and such, so you don't have to type them all the time. Except that my `let numbers` absolutely required a manual type annotation for some reason. It's not like `collect()` could produce anything else in this example, so I don't really get why.

> EDIT: Esfo pointed out in the comments (thanks!) that `collect()` can be used to produce more than just `Vec<T>` as output. Rust in this case determines the overload of `collect()` based on the **expected return type** - which is wild! I'm not sure if I like it or not.

## Unwrap all the things!

You may have noticed the calls to `unwrap()`. The method `unwrap()` is defined on Rust's `Result<T, E>` type, which can be either an `Ok<T>` (success) or an `Err<E>` (error). `unwrap()` on `Ok` simply returns the value; `unwrap()` on `Err` causes a panic and terminates the whole program with an error message. And no, there is no `try { ... } catch { ... }` in Rust. Once your program panics, there's no turning back. So `unwrap()` shouldn't be used in production code. For this simple toy example, I think it's good enough. Later I learned that if you're inside a method that returns a `Result`, you can also use `?` to propagate the error immediately.[^3]

## print it!

Rust wouldn't be Rust if `println` was straightforward, because it really isn't. Right off the bat we see that `println!` has an exclamation mark, so it's a macro. And oh boy does it do a lot of magic. Rust doesn't **really** have string interpolation, so the standard way to print something is `println!("foo {} bar", thing)`, which will first print `foo`, then the `thing`, then `bar`. That is, provided that `thing` is printable, but that's another story.
You **could** also write `println!("foo {thing} bar")`, which **almost** looks like string interpolation, but it's really just a feature of the `println!` macro. It breaks down quickly, for example when you try to access a field inside the curly brackets. This won't compile: `println!("foo {thing.field} bar")`. So while it may **seem** simple at first glance, it really isn't, and it does a whole lot of magic. Kotlin string templates are not without issues (especially if you want a literal `$` character in the string), but they are more universal because they allow for arbitrary logic in their placeholders.

# The Pros and Cons

At this point, it's time for a first verdict.

- Pro: Tooling is good
- Pro: Compiles natively
- Pro: Memory safe without manual memory management or garbage collector
- Pro: Small executables, excellent runtime performance
- Pro: Feels "modern"
- Con: Syntax. Good lord, the syntax.
- Con: Macros. No need to explain that.
- Con: Error handling, or rather the absence thereof
- Con: Enormously steep learning curve with borrowing and lifetimes
- Con: Type inference sometimes works, sometimes doesn't

... and that's just for the first example. I've since performed more extensive experiments with the language, and I'll report on these in another article. I can see why system-level programmers enjoy Rust. It's certainly a big step up from C. But on a grand scale, compared to the effectiveness and productivity of Kotlin, I can't yet see where Rust's popularity is really coming from.

[^1]: As programmers, we already have enough tools at our disposal to shoot ourselves in the foot; macros just add to that danger.
[^2]: In C, that's not the case. You can make a macro look like a totally normal function call, but it does arbitrary other things when compiled.
[^3]: The Go language should take notes here. `if (err != nil) return err` is only funny so many times before it gets really old really fast.
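To make the `?` propagation from the "Unwrap all the things!" section concrete, here is a minimal sketch. This is my own toy example, not code from the post; `parse_sum` is a made-up helper:

```rs
use std::num::ParseIntError;

// Hypothetical helper: parses whitespace-separated numbers and sums them.
// Because it returns a `Result`, we can use `?` instead of `unwrap()`.
fn parse_sum(input: &str) -> Result<u32, ParseIntError> {
    let mut sum = 0;
    for token in input.split_whitespace() {
        // On a parse error, `?` returns the `Err` to the caller immediately.
        sum += token.parse::<u32>()?;
    }
    Ok(sum)
}

fn main() {
    assert_eq!(parse_sum("1 2 3"), Ok(6));
    assert!(parse_sum("1 x 3").is_err()); // no panic, just an `Err` value
}
```

Compared to `unwrap()`, the failure case becomes a value the caller can handle instead of a program-terminating panic.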
martinhaeusler
1,876,965
Lost your README?
When this happens, this is kind of a loss of a mojo as well and this has happened to me a lot. I am...
0
2024-06-04T17:10:44
https://dev.to/fion21/lost-your-readme--129f
git, push, github, origin
When this happens, it's kind of a loss of mojo as well, and this has happened to me a lot. I had to do a `git push -u origin master` and then the commit was rejected. Has this happened to you too? Fret not.

Image courtesy of [Studytonight.com](http://Studytonight.com).

I stumbled upon a post on Stack Overflow on how this should be done: [https://stackoverflow.com/a/59790434/11083275](https://stackoverflow.com/a/59790434/11083275), which irons out the problem entirely.

1. Can't push.
2. What did I do? `git push -f origin master`
3. Why did I do that? To upload my last commit!
4. What was the consequence? Well, everything was uploaded __but for__ every README I did from my last lot of commits!
5. How to put that right? As per the article above, the next steps would be...

```
git fetch origin master
git pull origin master
git add .
git commit -m 'your commit message'
git push origin master
```

If you now check your GitHub, you should find that master to master was successful. ✅

Ok everyone, have fun 😂!!
fion21
1,876,954
The Best Integrated Development Environments (IDEs), Explained with the Magic of the Escandalosos (We Bare Bears) 🐻‍❄️🐼🐻
Hello Chiquis! 👋🏻 Join me on this journey where the Escandalosos (We Bare Bears) become developers and...
0
2024-06-04T16:59:42
https://dev.to/orlidev/-los-mejores-entornos-de-desarrollo-integrado-ide-explicados-con-la-magia-de-escandalosos--m07
webdev, tutorial, programming, beginners
Hello Chiquis! 👋🏻 Join me on this journey where the Escandalosos (We Bare Bears) become developers and teach us everything about IDEs, those incredible integrated development environments.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/geqgi01zgho7tadahzrr.jpg)

What are IDEs? 🌳
Imagine IDEs as the bears' secret hideouts, but instead of crazy plans, they're full of tools for building incredible software! An IDE is like a magic toolbox that helps you write code, compile it, debug it, and much more.

Why use an IDE? ❄️
Using an IDE is like having the bears as your personal development team. They help you with everything:

- Writing code: An IDE offers a text editor with special features for programming, such as syntax highlighting, autocompletion, and refactoring. It's like having Polar guiding you through every line of code!
- Compiling code: The IDE turns your code into a language the computer understands, as if Panda were deciphering a secret code for the machine to execute.
- Debugging code: When there are errors in your code, the IDE helps you find and fix them, as if Ice Bear were checking every detail so everything works perfectly!
- Testing code: The IDE lets you run your code and see how it works, as if the bears were testing a new invention to make sure it's great!

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ywrvvkfjc60dzvip3pc7.png)

The bears and their roles in the development world 🎈

+ Panda: The born leader of the group, always looking for the best tool for the job. Panda would be the perfect IDE: powerful, versatile, and with a friendly interface.
+ Pardo (Grizzly): The creative one of the team, always looking for new ways of doing things. Grizzly would be the ideal IDE for the most innovative: full of plugins and extensions to customize your experience.
+ Polar (Ice Bear): The quiet but effective member of the group, always working hard and without rest. Ice Bear would be the ideal IDE for programming marathoners: fast, stable, and reliable.

Pardo (Grizzly) - The Visual Studio Code enthusiast 🐻
The most popular IDE in the world, free and open source. It's like having Grizzly leading a diverse and talented development team. Grizzly is the most outgoing and sociable bear, always full of energy and ready for any challenge. That's why his ideal IDE is Visual Studio Code. Just like Grizzly, Visual Studio Code is versatile and customizable, which makes it perfect for any type of project.

- Extensions: Visual Studio Code has a wide range of extensions, just as Grizzly has friends everywhere. You can find tools for Python, JavaScript, C++, and more.
- Git integration: Just as Grizzly is always connecting with new people, Visual Studio Code integrates seamlessly with Git and GitHub to make teamwork easier.
- Cross-platform: Available on Windows, macOS, and Linux, so you can use it wherever you go, like Grizzly exploring the world!

Panda - The PyCharm lover 🐼
An ideal IDE for programming in Python, with many features for data analysis and data science. It's like having Panda running scientific experiments with code! Panda is the most sensitive and detail-oriented of the group, always concerned with details and aesthetics. That's why PyCharm is his perfect IDE. PyCharm, developed by JetBrains, is designed specifically for Python developers, offering a clean and efficient working environment.

- Smart autocompletion: Just as Panda knows exactly how to take care of his image on social media, PyCharm offers smart autocompletion to write code more efficiently.
- Framework support: With support for Django, Flask, and other frameworks, PyCharm is ideal for developers who, like Panda, look for the best tools for their work.
- Refactoring: PyCharm's ability to refactor code ensures everything stays organized and clean, like Panda arranging his food photos for Instagram.

Polar (Ice Bear) - The coding genius with IntelliJ IDEA 🐻‍❄️
Ice Bear is the most mysterious and tech-savvy bear. He always finds the most efficient way to do things, and that's why his IDE of choice is IntelliJ IDEA. This powerful development environment, also from JetBrains, is perfect for Java developers and beyond.

- Code analysis: IntelliJ IDEA offers deep code inspection, helping Ice Bear detect and fix errors before they become problems.
- Multi-language support: It isn't limited to Java; IntelliJ IDEA is also compatible with Kotlin, Groovy, Scala, and other languages, making Ice Bear even more versatile.
- Performance: Just like Ice Bear, IntelliJ IDEA is fast and efficient, optimizing resource usage to maintain top performance.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/g3gu13wpvcvjt6q2hhjc.jpg)

Other IDEs that have their value 🐾

Extra battery - The versatile Eclipse: What happens when all the bears need to work together on a big project? This is where Eclipse comes in! This IDE is known for its ability to handle large, complex projects, which makes it ideal for collaborative work.

- Plug-ins: Eclipse has a vast collection of plug-ins that let each bear customize their development environment to their specific needs.
- Multi-purpose IDE: It supports multiple programming languages such as Java, C++, and PHP, among others, ensuring every bear has what they need.
- Large community: Just as the bears have their friends and community, Eclipse has a huge base of users and developers who contribute to its continuous growth.

We can't forget: 👨‍💻

+ Notepad++: This free, lightweight code editor is like the silent ninja of IDEs. It supports more than 50 programming languages and has impressive features, such as syntax highlighting and advanced search.
+ NetBeans: Although not as popular as the others, NetBeans is ideal for developing enterprise applications with Java EE. It's like Pardo, always ready to take care of the important details.
+ Visual Studio: This IDE lets you develop applications for Windows, Linux, macOS, Android, and iOS.
+ WebStorm: A perfect IDE for developing web applications with JavaScript, HTML, and CSS. It's like having Ice Bear building an incredibly complex secret lair!
+ Vim: An improved version of the Vi code editor, available for the major operating systems and particularly popular among Linux users.
+ Sublime Text: Proprietary software for writing code, available for Windows, Mac, and Linux. It stands out for its toolkit, its user interface, its powerful features, and its incredible performance.
+ Android Studio: Based on the IntelliJ IDEA integrated development environment, it is the official IDE for Android app development. It stands out for its visual layout editor, APK analyzer, fast emulator, and intelligent code editor.
+ Atom: A cross-platform code editor (formerly an IDE), with Git and GitHub integrations.
+ Jupyter Notebook: The most popular data science tool for creating and sharing documents that contain live code, equations, visualizations, and text.
+ Xcode: Apple's integrated development environment for creating apps for iOS, macOS, tvOS, and watchOS.
+ PHPStorm: A commercial, cross-platform PHP IDE, designed to work with Laravel, Symfony, Drupal, WordPress, Joomla!, Magento, and other frameworks.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s3xnvs5139zcgfx58pqe.jpg)

+ Emacs: A popular, free, extensible, customizable text editor available for GNU/Linux, Windows, and macOS.
+ RStudio: One of the most popular free, open source development environments for the R programming language.
+ RubyMine: A smart, cross-platform JetBrains IDE for developing applications in Ruby and Rails.
+ TextMate: A powerful text editor for macOS with support for a large number of programming languages.
+ Coda: A paid text editor for web developers, available for macOS.
+ Komodo IDE: Cross-platform software for the Python, PHP, Golang, Perl, and Ruby languages, among others.
+ Zend Studio: A smart PHP IDE for coding faster and debugging code easily.
+ Light Table: With an elegant, lightweight design, Light Table is customizable, from keybinds to extensions, so it adapts completely to each specific project.

Most popular development environments 🐻‍❄️
According to a survey by Stack Overflow, the developer community, these are the most popular code editors and development environments:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gl6epp81tmuc94h49f70.png)

Online IDEs with artificial intelligence (AI) 🐻🐼🐾
IDEs as popular in 2024 as the characters of the "We Bare Bears" cartoon.

1. Repl.it 🚀
Repl.it is like Grizz, the leader bear of the group. It's friendly, versatile, and always willing to help you. Just like Grizz, Repl.it lets you code in more than 30 languages with no prior setup. Just pick a language and start programming! Also, its Repl Run feature is like sharing a box of donuts with your friends: you can send a link to your code and it will run automatically on another website. 🍩

2. Visual Studio Code (VS Code) 💻
VS Code is like Panda, the sensitive and clever bear. This IDE is developed by Microsoft and is one of the most loved by developers. Just like Panda, VS Code is lightweight, free, and compatible with many programming languages. Its extensive extension marketplace is like Panda's closet full of hats and scarves: you can customize it to your needs. 🎩🧣

3. Tabnine 🌟
Tabnine is like Ice Bear, the mysterious and efficient bear. This autocompletion plugin is known for its ease of use and support for multiple languages. Just like Ice Bear, Tabnine is an expert in its field and offers precise suggestions. You can also customize it to your company's preferences, like choosing your favorite ice cream flavor. 🍦

4. Stepsize AI 🤖
Stepsize AI is like Charlie, the scientist bear. It focuses on Python, JavaScript, TypeScript, and more, offering code suggestions and privacy options. Just like Charlie, Stepsize AI is smart and always researching new ways to improve your workflow. 📚🔬

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s2mvtwazbzn9gx73l0ah.jpg)

5. Codium AI 🧪
Codium AI is like Nom Nom, the hungry bear. It specializes in suggesting tests and understanding code changes at the system level. Just like Nom Nom, Codium AI is obsessed with finding errors and making sure everything works correctly. 🍔🔍

6. Mintlify Writer 📝
Mintlify Writer is like Ranger Tabes, the park ranger. This IDE automates the creation of documentation in several programming languages. Just like Ranger Tabes, Mintlify Writer is committed to preserving knowledge and precise documentation. 🌿📖

7. Grit.io 💡
Grit.io is like Darrell, the entrepreneurial bear. It provides smart coding tips and integrates with project management tools. Just like Darrell, Grit.io knows how to take things to the next level and stay organized. 💼📊

8. WhatTheDiff 🔄
WhatTheDiff is like Lucy, the artist bear. It summarizes code changes and offers smart tips for reviews. Just like Lucy, WhatTheDiff sees the world from a unique perspective and always finds beauty in the details. 🎨✨

9. Bugasura 🐞
Bugasura is like Ranger Marta, the explorer bear. It focuses on bug tracking in any major programming language. Just like Ranger Marta, Bugasura is ready to face any challenge and protect your code from bugs. 🌟🔍

10. Project IDX 🌐
Project IDX is like Chloe, the young prodigy who is always looking to innovate and explore new frontiers. Developed by Google, this online IDE is an AI-assisted workspace for full-stack, multi-platform app development in the cloud. Just like Chloe, Project IDX is smart, constantly learning, and offers support for a wide range of frameworks, languages, and services.

Conclusion 🐻
Just as the bears have their own unique personalities and skills, every IDE has its special strengths and features. Grizzly with Visual Studio Code, Panda with PyCharm, Ice Bear with IntelliJ IDEA, and the team united with Eclipse show us that, no matter your preferences or needs, there is a perfect IDE for you. Now it's your turn to choose yours and start creating wonders in the world of programming!

In short: IDEs are like having the bears as your personal development team, making programming easier, more fun, and more productive! Choose the IDE that best fits your needs and get ready to create amazing software.

Remember! 🐼

- Explore different IDEs: Just as the bears are always looking for new adventures, try different IDEs to find the one that fits you best.
- Learn the basics: You don't need to be a programming genius to use an IDE. Start with the basics and move forward little by little.
- Practice, practice, practice: The best way to learn how to use an IDE is by practicing. Don't be afraid to experiment and make mistakes!

So, dear developer, pick your favorite "bear" and start coding with these incredible IDEs. 🐾👩‍💻

Additional resources: 14 Best online IDEs as of 2024 🚀

Did you like it? Share your opinion.

Full article, visit: https://lnkd.in/ewtCN2Mn
https://lnkd.in/eAjM_Smy 👩‍💻
https://lnkd.in/eKvu-BHe
https://dev.to/orlidev

Don't miss it!

References: Images created with: Copilot (microsoft.com)

#PorUnMillonDeAmigos #LinkedIn #Hiring #DesarrolloDeSoftware #Programacion #Networking #Tecnologia #Empleo #IDEs

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tr0suly5rwsw7hmmnw1x.jpg)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yqf1j0zs2a0sqyeseawb.jpg)
orlidev
1,876,952
Upstream preview: The value of open source software
Upstream is next week on June 5, and wow, our schedule is shaping up brilliantly. For the rest of...
0
2024-06-04T16:41:29
https://dev.to/tidelift/upstream-preview-the-value-of-open-source-software-2pfm
opensource, upstream, cybersecurity, security
<p><em>Upstream is next week on June 5, and wow, our schedule is shaping up brilliantly. For the rest of this week, we’ll be giving you a sneak preview into some of the talks and the speakers giving them via posts like these. RSVP </em><a href="https://upstream.live/register"><em><span>now</span></em></a><em>!</em></p>
<p>When asked about the estimated value of open source software, it’s likely assumed to be a big number—surely in the billions. However, a team at Harvard Business School and the University of Toronto took on the task of <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4693148" rel="noopener"><span>investigating the value of open source</span></a> and found it was worth approximately 8.8 <em>trillion</em> dollars. (<a href="https://blog.tidelift.com/eight-trilion-dollars-the-new-valuation-of-open-source" rel="noopener"><span>Read Tidelift co-founder and general counsel Luis Villa’s take on the announcement</span></a>, including why he thinks it’s worth <em>even more</em>.) In comparison, the entire U.S. electrical grid is valued at 1.5 to 2 trillion dollars, and the U.S. interstate highway system is valued at 750 billion dollars.&nbsp;</p>
<p>Simply put: open source software is an exceptionally valuable resource.&nbsp;</p>
<p>How did this number come to be? What does it mean for organizations using open source? Or those creating open source, the maintainers?
Harvard Business School assistant professor <a href="https://upstream.live/speaker-2024/frank-nagle"><span>Frank Nagle</span></a> joins <a href="https://upstream.live/speaker-2024/luis-villa"><span>Luis Villa</span></a> at this year’s Upstream on Wednesday, June 5, to explain how the project came to be, how he and his team landed on the headline-worthy 8.8 trillion dollar number, and why it’s never been a more apt time to discuss the importance of open source in the software supply chain.</p> <p>If this number blows your mind (it should!) and you want to learn more, <a href="https://upstream.live/"><span>register for Upstream</span></a> and power up your calculators on Wednesday, June 5. See you there!</p> <h2 style="font-size: 24px;">About Frank Nagle</h2> <p>Frank Nagle is an assistant professor in the Strategy Unit at Harvard Business School. Professor Nagle studies how competitors can collaborate on the creation of core technologies, while still competing on the products and services built on top of them. His research falls into the broader categories of the future of work, the economics of IT, and digital transformation and considers how technology is weakening firm boundaries.</p> <h2 style="font-size: 24px;">About Luis Villa</h2> <p>Luis Villa is co-founder and general counsel at Tidelift. Previously he was a top open source lawyer advising clients, from Fortune 50 companies to leading startups, on product development, open source licensing, and other matters. Luis is also an experienced open source community leader with organizations like the Wikimedia Foundation, where he served as deputy general counsel and then led the Foundation’s community engagement team. 
Before the Wikimedia Foundation, he was with Greenberg Traurig, where he counseled clients such as Google on open source licenses and technology transactions, and Mozilla, where he led the revision of the Mozilla Public License.&nbsp;</p> <p>He has served on the boards at the Open Source Initiative and the GNOME Foundation, and been an invited expert on the Patents and Standards Interest Group of the World Wide Web Consortium and the Legal Working Group of OpenStreetMap. Recent speaking engagements include RedMonk’s Monki Gras developer event, FOSDEM, and as a faculty member at the Practicing Law Institute’s Open Source Software programs. Luis holds a JD from Columbia Law School and studied political science and computer science at Duke University.</p>
caitbixby
1,876,951
Top 10 YouTube channels to follow if you're a Programmer 🚀
PS: nothing fancy in the blog; Just listing here, some of the best youtube channels that will make...
0
2024-06-04T16:40:49
https://dev.to/prathamjagga/top-10-youtube-channels-to-follow-if-youre-a-programmer-28c8
PS: nothing fancy in the blog; just listing here some of the best YouTube channels that will make you fall in love with software development.

(i) ThePrimeTime -- This guy reacts to programming blogs, videos, etc., sharing his opinion and insights.
(ii) ByteByteGo -- Videos on System Design concepts and case studies.
(iii) Gaurav Sen -- Videos on System Design concepts and case studies.
(iv) Theo - t3.gg -- Videos on what's happening in the programming world.
(v) Perfology -- Videos on scaling systems and performance testing.
(vi) Scaler -- Podcasts with top developers.
(vii) Beyond Coding -- Podcasts with top developers.
(viii) Honeypot -- Documentaries on different technologies and their origins.
(ix) GOTO Conferences -- Sessions from software conferences.
(x) NDC Conferences -- Sessions from software conferences.
(xi) Arpit Bhayani -- Videos on System Design concepts, case studies, and podcasts.

Some honorable mentions: Just me and open source, FastAI, Deeplearning.ai.

Show some love by sharing, liking and commenting ✌️

Cheers, let's meet next time with the next cool blog 😃
prathamjagga
1,876,939
♾️All about Infinite Scrolling
Introduction Social media platforms like TikTok, Instagram, and Twitter use infinite...
0
2024-06-04T16:35:04
https://dev.to/algoorgoal/why-intersection-observer-is-better-than-scroll-event-for-infinite-scrolling-list-a99
webdev, react, ux
## Introduction Social media platforms like TikTok, Instagram, and Twitter use infinite scrolling instead of pagination. However, I wasn't sure when to use classic pagination and when to use infinite scrolling. Sometimes I implemented infinite scrolling using `IntersectionObserver`, but later I realized there are more ways to implement it, and I didn't know why exploiting the `onScroll` event listener is a bad practice. Therefore, I'll sort out the pros and cons of infinite scrolling, when to use it, and two ways you can implement it. ## Upsides of Infinite Scrolling ### Fewer interruptions Each user engages in **different activities** on the same page. They might look for a specific piece of information or an item, or they might just want to consume content and kill a few hours without a goal. In the latter case, clicking a link to load the next page interrupts the user's activity. By removing this interruption, users can keep engaging in their activity without distraction. ### Cheap interaction cost Interaction cost is closely related to interruptions, but there is a significant difference: infinite scrolling reduces the number of interactions needed to achieve the same result. It doesn't need two interaction steps, scrolling to the bottom and then clicking a link; just scrolling to the bottom works. ### Optimized for mobile devices A small viewport is in the nature of mobile devices, so mobile users keep their fingers close to the screen, ready to scroll down. Infinite scrolling therefore gives a better experience to mobile users than to desktop users. ## Downsides of Infinite Scrolling ### Hard to refind content Say a user navigates from a feed (infinite scrolling) page to a detail page, and then wants to navigate back to the feed page. With classic infinite scrolling, the user has to scroll down from the beginning of the content to where they were. This can worsen the user experience. 
You can mitigate this issue by persisting the scrolling state and the loaded content. ### Page load becomes slow The page gets slower and slower as you scroll down, because more and more content accumulates in the document. If the user navigates from the detail page back to the feed page, all the content has to be loaded from the beginning again. Even though we can mitigate this with virtual scrolling, lazy loading, and caching, the content still takes up memory and lowers performance. ### Illusion of completeness A scrollbar indicates how far users have to go until the end of the content, but an infinitely scrolling page cannot act as such an indicator by nature. This can be a huge problem if users think the current end of the page is the end of the content. They will miss a lot of information on the page. ### No Footer Layout Most websites place a footer to display their contacts, legal rights, and links to their social media accounts. Since users can never reach the end of the page, the footer layout cannot be accessed anymore, and users miss out on related content. ### Poor SEO Since search engine bots can only see the rendered result of the first content load, they cannot index the second and later pages of content. This can harm SEO. ## How to implement infinite scrolling ### Why Intersection Observer is Better than Scroll Event for Infinite Scrolling List Intersection Observer performs better than a scroll event handler. Experiment: I compared an infinite scrolling list implemented with Intersection Observer against one implemented with a scroll event handler; the scroll event handler version had throttling and caching applied. Results: the scroll event handler version showed severe frame drops, with large yellow and red areas in the performance profile. Running with the CPU throttled 6x, the frame drops were noticeable during the experiment; after scrolling to a different part of the page, painting the screen sometimes took more than 3 seconds. Since Core Web Vitals recommends an Interaction to Next Paint of 200ms or less, this can cause a bad UX. 
Cause analysis: an Intersection Observer callback runs asynchronously, and only when elements actually intersect, so it does not block the main thread; a scroll event handler blocks the main thread every time a scroll event fires. Even with throttling, the event handler is still called at moments when no element intersects, and even with caching, the function call that fetches the cached value still occurs; this appears to be the cause. Conclusions: Implementation effort: the scroll event handler approach ends up with complex caching and throttling code, so it takes more effort than Intersection Observer. Unnecessary main thread blocking: even with caching and throttling implemented, the scroll event handler inherently blocks the main thread far more often, so frame drops can still occur on extremely constrained devices; Intersection Observer, by contrast, causes no wasteful main thread blocking. Pixel-level customization: scroll events do let you determine an element's exact pixel position, so unless you need a pixel-accurate absolute position of an element, using IntersectionObserver is better for the user experience. https://itnext.io/1v1-scroll-listener-vs-intersection-observers-469a26ab9eb6 https://blog.hyeyoonjung.com/2019/01/09/intersectionobserver-tutorial/
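To make the Intersection Observer approach concrete, here is a minimal sketch. The `sentinel` element and the `loadMore` callback are hypothetical names, not from any specific library: the idea is to place an empty sentinel element after the last list item and load the next page whenever it scrolls into view.

```typescript
// Minimal sketch: observe a "sentinel" element placed after the last list item.
// The callback fires only when the sentinel crosses into the (extended)
// viewport, so nothing runs on the main thread during ordinary scrolling.
function createInfiniteScroll(
  sentinel: Element,
  loadMore: () => void,
): IntersectionObserver {
  const observer = new IntersectionObserver(
    (entries) => {
      for (const entry of entries) {
        if (entry.isIntersecting) loadMore();
      }
    },
    // Start fetching 200px before the user actually reaches the bottom.
    { rootMargin: "200px" },
  );
  observer.observe(sentinel);
  return observer;
}
```

After each page of results is appended, the sentinel moves down with the list, so the same observer keeps triggering `loadMore` for every page; call `observer.disconnect()` when there is no more content to load.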
algoorgoal
1,876,938
Getting Started with Quarkus 3 part 2.3 Renarde
Renarde is another helper for doing SSR in the MVC style, just like Spring MVC...
0
2024-06-04T16:34:08
https://dev.to/pramoth/erimtn-quarkus-3-part-23-renarde-2bgd
quarkus
Renarde is another helper for doing SSR in the MVC style, similar to Spring MVC, with tools ready to use out of the box such as form validation, CSRF protection, routing, email, HTMX support, barcodes, PDF generation, security, and more. To get started, add the dependency below to the pom.xml from the previous articles, keeping web-bundler as before; you can remove qute-web and rest-qute, because they are already transitive dependencies of renarde. ```xml <dependency> <groupId>io.quarkiverse.renarde</groupId> <artifactId>quarkus-renarde</artifactId> <version>3.0.12</version> </dependency> ``` Next, you have to disable proactive authentication first, because we haven't configured security yet, and you wouldn't be able to reach the endpoints since it is enabled by default. ``` quarkus.http.auth.proactive=false ``` The rest is like typical action-based MVC. For details, I recommend downloading the demo at https://github.com/ia3andy/quarkus-blast, which uses Renarde + HTMX.
pramoth
1,873,798
Jenkins up and running on Kubernetes 🚀
Introduction 👋 In this hands-on, we'll cover: Deploy Jenkins controller on k8s...
0
2024-06-04T16:32:02
https://dev.to/tungbq/jenkins-on-kubernetes-a-comprehensive-guide-5d6a
devops, jenkins, kubernetes, cicd
## Introduction 👋 In this hands-on, we'll cover: - Deploy Jenkins controller on k8s cluster - Configure k8s cluster as Jenkins agents - Create and run a sample pipeline on a k8s Pod Jenkins agent - Watch the Pod life cycle for a pipeline run ## Environment ☁️ This hands-on is for a PoC or pilot environment, to explore the Jenkins and Kubernetes features. ## Prerequisites 🔓 Before you start, ensure you have: - A running [Kubernetes cluster](https://kubernetes.io/docs/setup/) (I used [kind](https://kind.sigs.k8s.io/) for my k8s local environment). - [kubectl](https://kubernetes.io/docs/tasks/tools/) configured to interact with your cluster. - Basic knowledge of Kubernetes and Jenkins. ## Documentation Reference 📖 - [kubernetes.io](https://kubernetes.io/docs/home/) - [Installing Jenkins on Kubernetes](https://www.jenkins.io/doc/book/installing/kubernetes/) - [www.jenkins.io/doc](https://www.jenkins.io/doc/) - [devops-basics](https://github.com/tungbq/devops-basics) ## Deploy Jenkins on Kubernetes and run your pipeline 🔥 Let's start deploying and using Jenkins on Kubernetes by following the steps below: ### 1. Prepare K8s Manifest YAML files Before starting, we need to prepare the K8s YAML files. _NOTE_: All the YAML files to deploy the Jenkins controller on Kubernetes are available at [K8sHub](https://github.com/tungbq/K8sHub) (**hands-on/jenkins-on-k8s/yamls**). If you want to use the hands-on example and the all-in-one script from my repo (described in the next section), you do not need to create these files manually; the files below are just captured versions of the ones in the hands-on repository. 
Otherwise, create these files on your PC with the following names and content: - [volume.yaml](https://github.com/tungbq/K8sHub/blob/main/hands-on/jenkins-on-k8s/yamls/volume.yaml): To create the persistent volume for our Jenkins instance on k8s (replace `demo-jenkins-cluster-control-plane` by your node name) ```yaml ## volume.yaml kind: StorageClass apiVersion: storage.k8s.io/v1 metadata: name: local-storage provisioner: kubernetes.io/no-provisioner volumeBindingMode: WaitForFirstConsumer --- apiVersion: v1 kind: PersistentVolume metadata: name: jenkins-pv-volume labels: type: local spec: storageClassName: local-storage claimRef: name: jenkins-pv-claim namespace: devops-tools capacity: storage: 10Gi accessModes: - ReadWriteOnce local: ## Replace by your desired path path: /mnt/jenkins nodeAffinity: required: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In ## Replace by your node name values: - demo-jenkins-cluster-control-plane --- apiVersion: v1 kind: PersistentVolumeClaim metadata: name: jenkins-pv-claim namespace: devops-tools spec: storageClassName: local-storage accessModes: - ReadWriteOnce resources: requests: storage: 3Gi ``` - [service_account.yaml](https://github.com/tungbq/K8sHub/blob/main/hands-on/jenkins-on-k8s/yamls/sevice_account.yaml): To create the `jenkins-admin` service account used by the `Deployment` ```yaml ## service_account.yaml --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: jenkins-admin rules: - apiGroups: [''] resources: ['*'] verbs: ['*'] --- apiVersion: v1 kind: ServiceAccount metadata: name: jenkins-admin namespace: devops-tools --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: jenkins-admin roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: jenkins-admin subjects: - kind: ServiceAccount name: jenkins-admin namespace: devops-tools ``` - 
[deployment.yaml](https://github.com/tungbq/K8sHub/blob/main/hands-on/jenkins-on-k8s/yamls/deployment.yaml): To deploy the latest `jenkins/jenkins:lts` Jenkins version with the `jenkins-admin` service account and the `jenkins-pv-claim` persistent volume ```yaml ## deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: jenkins namespace: devops-tools spec: replicas: 1 selector: matchLabels: app: jenkins-server template: metadata: labels: app: jenkins-server spec: securityContext: fsGroup: 1000 runAsUser: 1000 serviceAccountName: jenkins-admin containers: - name: jenkins image: jenkins/jenkins:lts resources: limits: memory: '2Gi' cpu: '1000m' requests: memory: '500Mi' cpu: '500m' ports: - name: httpport containerPort: 8080 - name: jnlpport containerPort: 50000 livenessProbe: httpGet: path: '/login' port: 8080 initialDelaySeconds: 90 periodSeconds: 10 timeoutSeconds: 5 failureThreshold: 5 readinessProbe: httpGet: path: '/login' port: 8080 initialDelaySeconds: 60 periodSeconds: 10 timeoutSeconds: 5 failureThreshold: 3 volumeMounts: - name: jenkins-data mountPath: /var/jenkins_home volumes: - name: jenkins-data persistentVolumeClaim: claimName: jenkins-pv-claim ``` - [services.yaml](https://github.com/tungbq/K8sHub/blob/main/hands-on/jenkins-on-k8s/yamls/services.yaml): To expose `jenkins-server` as a k8s service ```yaml ## services.yaml apiVersion: v1 kind: Service metadata: name: jenkins-service namespace: devops-tools annotations: prometheus.io/scrape: 'true' prometheus.io/path: / prometheus.io/port: '8080' spec: selector: app: jenkins-server ports: - name: httpport port: 8080 targetPort: 8080 - name: jnlpport port: 50000 targetPort: 50000 ``` ### 2. Deploy Jenkins Controller You can deploy it manually or using my all-in-one deploy script; choose one of the following options: #### Option 1. Deploy Jenkins with the prepared script Clone the repo that contains the hands-on example. 
Then, run the deployment script: ```bash ## Checkout the source code for this hands-on git clone https://github.com/tungbq/K8sHub.git cd K8sHub/hands-on/jenkins-on-k8s ## NOTE: In `hands-on/jenkins-on-k8s/yamls/deployment.yaml` replace the `demo-jenkins-cluster-control-plane` value by your node name ## Create new `devops-tools` namespace kubectl create namespace devops-tools ## Now run the deploy script ./deploy.sh ``` #### Option 2. Deploy Jenkins manually ```bash ## Ensure you have created the 4 k8s manifest YAML files as described in the previous section kubectl create namespace devops-tools kubectl apply -f sevice_account.yaml kubectl apply -f volume.yaml kubectl apply -f deployment.yaml kubectl apply -f services.yaml ``` ### 3. Access Jenkins Controller #### 3.1. Port-Forwarding Open a new terminal and set up port forwarding to access Jenkins: ```bash ## Replace 8087 with an available port on your PC kubectl port-forward service/jenkins-service -n devops-tools 8087:8080 ``` #### 3.2. Get Initial Password Jenkins requires an initial admin password for first-time access. Retrieve it with the following commands: ```bash ## Get the Jenkins pod name kubectl get pods --namespace=devops-tools ## Retrieve the initialAdminPassword kubectl exec -it <pod_name> cat /var/jenkins_home/secrets/initialAdminPassword -n devops-tools ## Sample output: d72493ce44fb48bc8833da94b40cdd68 ``` #### 3.3. Access Jenkins Open your browser and navigate to http://localhost:8087. Log in using the initial password obtained earlier, install the suggested plugins, and create an admin user. ![login-ok](https://github.com/tungbq/K8sHub/blob/4d841a696d93a62338295169a7afb89c910036d2/hands-on/jenkins-on-k8s/assets/login-ok.png?raw=true) ### 4. Configure Jenkins Agents on Kubernetes With Jenkins up and running, the next step is to configure Kubernetes as Jenkins agents. #### 4.1. 
Install Kubernetes Plugin Navigate to Dashboard > Manage Jenkins > Plugins (http://localhost:8087/manage/pluginManager/available), search for the "Kubernetes" plugin, and install it. Restart Jenkins after installation. #### 4.2. Configure the Kubernetes Plugin Navigate to Dashboard > Manage Jenkins > Clouds (http://localhost:8087/manage/cloud/). Select "New cloud" and input the cloud name (e.g., k8s-agents), then select "Create." Configure the following fields: - Kubernetes URL: Leave empty - Kubernetes server certificate key: Leave empty - Kubernetes Namespace: devops-tools - Credentials: Leave as None - Jenkins URL: http://jenkins-service.devops-tools.svc.cluster.local:8080 Select "Test connection" to verify the connection. If you see `Connected to Kubernetes vX.Y.Z`, the configuration is successful. ![k8s-agent-ok](https://github.com/tungbq/K8sHub/blob/main/hands-on/jenkins-on-k8s/assets/k8s-agent-ok.png?raw=true) ### 5. Run a Sample Job on a Kubernetes Pod Now that the Jenkins controller and k8s agents are configured, let's run a sample job on a Pod. #### 5.1. Create a New Pipeline Go to the Jenkins homepage, select "New item," choose "Pipeline," input a name (e.g., Demo-Run-Shell-Inside-K8s-Pod), and create. #### 5.2. Define the Pipeline Script In the pipeline configuration section, input the following script: ```groovy podTemplate(containers: [ containerTemplate( name: 'jnlp', image: 'jenkins/inbound-agent:latest' ) ]) { node(POD_LABEL) { stage('Shell') { container('jnlp') { stage('Shell Execution') { sh ''' echo "Hello! Running from shell" ''' } } } } } ``` #### 5.3. Build the Pipeline Select "Build Now" to trigger the pipeline. ![start-a-run](https://raw.githubusercontent.com/tungbq/K8sHub/main/hands-on/jenkins-on-k8s/assets/start-a-run.png?raw=true) Jenkins will create a new Pod based on your template and run the pipeline inside the Pod. 
![result-demo](https://raw.githubusercontent.com/tungbq/K8sHub/main/hands-on/jenkins-on-k8s/assets/result-demo.png?raw=true) #### 5.4. Monitor Pod Lifecycle To see the Pods created and terminated with each build, use the following command: ```bash ## Check pods in devops-tools namespace kubectl get pods -n devops-tools -w ``` ![pod-stats](https://github.com/tungbq/K8sHub/blob/main/hands-on/jenkins-on-k8s/assets/pod-stats.png?raw=true) ## Cleanup To clean up your environment, delete the namespace along with its resources: ```bash ## Terminate the namespace kubectl delete namespace devops-tools ``` Alternatively, run the cleanup script: ```bash cd hands-on/jenkins-on-k8s ./cleanup.sh ``` ## Troubleshooting 🔨 Common Issues and Solutions: - Persistent volume binding issues: Delete the PV with `kubectl delete pv jenkins-pv-volume` and redeploy. - Connection issues to the Jenkins service: Verify the service port and check the agent pod logs using `kubectl logs -f <your_pod_name> -n devops-tools`. - Losing connection to Jenkins page: Re-run `kubectl port-forward service/jenkins-service -n devops-tools 8087:8080` and ensure the Jenkins pod is running. ## Conclusion ✒️ By following this guide, you should have a functional Jenkins setup on Kubernetes, allowing you to run pipelines within k8s Pods. Experiment with custom pipelines and explore more advanced Jenkins features on Kubernetes. Happy DevOps-ing! --------------------------------------------------------------- I'm building the **K8sHub** to help everyone learn and practice Kubernetes from the basics to practical hands-on examples. If you're interested in this project, please consider giving it a star ⭐️. Any star you can give me will help me grow it even more ❤️ Thank you! <table> <tr> <td> <a href="https://github.com/tungbq/K8sHub" style="text-decoration: none;"><strong>Star K8sHub repo ⭐️ on GitHub</strong></a> </td> </tr> </table>
tungbq
1,876,896
Writing an Obsidian Plugin Driven By Tests
I recently developed the initial version of Obsidian DEV Publish Plugin, a plugin that enables...
0
2024-06-04T16:31:36
https://dev.to/stroiman/writing-an-obsidian-plugin-driven-by-tests-1b35
typescript, obsidian, tdd, javascript
I recently developed the initial version of [Obsidian DEV Publish Plugin](https://github.com/stroiman/obsidian-dev-publish), a plugin that enables publishing Obsidian notes as articles on [DEV](https://dev.to). The first prototype was developed during a ~4 hour live stream. When I was ready to try the plugin for real to create the first article, that feature **worked for the first time!** It was also the very first time I loaded the plugin in Obsidian. This was accomplished because almost every line of code had been developed using TDD. In fact, only some code that _couldn't be tested automatically_ failed when trying to publish again, which should _update_ the previously posted article. This is not an uncommon effect I experience when practicing TDD, that the code works the first time. Not always, but often! When it doesn't work the first time, the issues are often relatively trivial, at least compared to non-TDD code. Unfortunately, the Obsidian API does not make it easy to actually write meaningful tests of the part of the code that communicates _with_ Obsidian, as the TypeScript types quickly bring in a web of dependencies our code really doesn't care about. In this article, I will describe some of those problems, and the TypeScript tricks I applied to solve them. ## The Closed-Source Problem Obsidian itself is closed source, which means that we cannot write tests that call into Obsidian code, unless we actually run the tests from within Obsidian. The only thing we have access to as plugin developers are TypeScript type definitions describing the classes and functions available at run-time. Running the test suite inside Obsidian does defy one of the primary principles of TDD, that we should get the fastest possible feedback. If we discard that option, we are left with no choice but to mock out everything handled by Obsidian, even helper functions that we would have liked to actually call from tests. 
To clarify, I am not arguing against running tests inside Obsidian. Some complex plugins do just that. But the essence of TDD is about setting up a fast feedback cycle to achieve an efficient development process. So for a TDD process, it's not the right choice. You may want to write tests for other reasons than the feedback loop. E.g. for a UI-heavy plugin, TDD may not provide the right feedback cycle for a large part of the code base. In that case it would make perfect sense to add tests after a feature was implemented to prevent a regression. ## Reading/Updating frontmatter A feature of the plugin is that the first time you run it for a given note, a new article should be created on DEV. If you run it again on the same note, the article should be updated to reflect the latest note contents. To handle this, the plugin stores the DEV article `id` in the frontmatter. Reading and updating frontmatter is performed by the `FileManager` class, which has the following declaration: ```typescript export class FileManager { getNewFileParent(sourcePath: string, newFilePath?: string): TFolder; renameFile(file: TAbstractFile, newPath: string): Promise<void>; generateMarkdownLink(file: TFile, sourcePath: string, subpath?: string, alias?: string): string; processFrontMatter(file: TFile, fn: (frontmatter: any) => void, options?: DataWriteOptions): Promise<void>; getAvailablePathForAttachment(filename: string, sourcePath?: string): Promise<string>; } ``` As Obsidian code is not available, we must provide _some_ alternate implementation. If you're familiar with [sinon](https://sinonjs.org/), you might think we can create a stubbed instance like this: ```typescript const fileManagerStub = sinon.createStubInstance(FileManager); fileManagerStub.processFrontMatter.callsFake(/* ... */); ``` But that is not possible when we don't have access to the class! 
But we can create a new class that conforms to the same _interface_, because a `class` in TypeScript creates both a _value_ at runtime (the actual class), and an _interface_ at design time. The interface represents the functions and properties available on _instances of the class_. It does _not_ imply any inheritance relationship. When a piece of code depends on a "class", the TypeScript compiler verifies that the argument is compatible with the interface, but not that it inherits from the class. ```typescript // FileManager in this scope refers to the *interface*, meaning that we can // provide any argument that conforms to that interface. class Publisher { constructor(fileManager: FileManager) { /* ... */} } ``` We can simply write a `FakeFileManager` class that implements the `FileManager` interface, and use that in test code, allowing us to express the desired behaviour: ```typescript let fileManager: FakeFileManager; let publisher: Publisher; beforeEach(() => { fileManager = new FakeFileManager(); publisher = new Publisher(fileManager); }); it("Should create a new article if the frontmatter has no article id", async () => { const file = fileManager.createFakeFile({ frontMatter: { "dev-article-id": undefined } }); // Returns a TFile await publisher.publish(file); devGateway.create.should.have.been.calledOnce; devGateway.update.should.not.have.been.called; }); it("Should update an existing article if the frontmatter has an article id", async () => { const file = fileManager.createFakeFile({ frontMatter: { "dev-article-id": 42 } }); // Returns a TFile await publisher.publish(file); devGateway.update.should.have.been.calledOnceWith(match({ articleId: 42 })); devGateway.create.should.not.have.been.called; }); ``` One slightly annoying issue in this approach is that the plugin only depends on _one_ function in the interface, `processFrontMatter`, but the constructor still expresses a dependency on the full `FileManager` interface. 
To be compatible, the fake implementation must provide dummy implementations of all remaining functions, if the goal is to have strong types in test code too (it is!). It's not a big issue (the biggest is still to come), but it is noise in test code. Fortunately, there is a solution to that. TypeScript supports duck-typing. I.e., if a class has the methods and properties defined in an interface, then the class is _compatible with_ that interface. The `implements` keyword can indicate that a class _should_ implement an interface, but is not required. It merely helps when it is _the intent_ that a class should conform to a specific interface, as you get more helpful compiler errors when it doesn't. I use this to my advantage and turn the problem upside down. Rather than writing a class that conforms to Obsidian's interface; I can make Obsidian's `FileManager` class conform to _my interface!_ ```typescript interface GenericFileManager { processFrontMatter(file: TFile, fn: (frontmatter: any) => void, options?: DataWriteOptions): Promise<void>; } export class Publisher { fileManager: GenericFileManager; constructor(fileManager: GenericFileManager) { this.fileManager = fileManager; } async publish(file: TFile) { /* ... */ } } export class MainPlugin /* ... */ { async publish(file: TFile) { new Publisher(this.app.fileManager).publish(file) } } ``` I have now organised the code such that the closed-source `FileManager` class is compatible with my own `GenericFileManager` interface. I can now write the test that express the desired behaviour, and the compiler does not force me to write dummy method implementations I never call from the plugin. It is actually more profound than that. By turning the problem upside down, I have made the code _more conformant with the interface segregation principle_, which states that "no code should be forced to depend on methods it does not use". When the plugin depended on the `FileManager` interface, it was forced to depend on 5 functions. 
Now it only depends on the one being used. The fact that the _real_ implementation provides more capabilities than needed is of no concern; nor does it affect the maintainability of the code. But there is a worse offender of ISP, the `TFile` representing a file in Obsidian. ## Generalising on `TFile` Despite being able to remove the unneeded methods from our _direct_ dependencies, we cannot ignore that `processFrontMatter` accepts an input of type `TFile`, which is now an _indirect dependency_ of the plugin. This is unfortunate, as it brings a cascade of indirect dependencies. ```typescript export class TFile extends TAbstractFile { stat: FileStats; basename: string; extension: string; } export abstract class TAbstractFile { vault: Vault; path: string; name: string; parent: TFolder | null; } ``` Arg! 😱 `TFile` has a dependency on `Vault`, which is everything in Obsidian. The plugin code does not depend on _any_ of the properties of `TFile`; it just passes the value around to other Obsidian functions, such as `FileManager.processFrontMatter`. How can we get rid of the dependencies on the `Vault`? By making the `GenericFileManager` ... _generic_. ```typescript interface GenericFileManager<TFile> { processFrontMatter(file: TFile, fn: (frontmatter: any) => void): Promise<void>; } ``` With this declaration, implementing the `FakeFileManager` is extremely simple (and also a `FakeVault`, as the `Vault` provides the functionality to read the contents of the file, but I will ignore that in all other parts of this article to keep things simple. 
The problem and solution are identical to the `FileManager`.) ```typescript type FakeFile = { frontmatter: any; contents: string; } class FakeFileManager implements GenericFileManager<FakeFile> { processFrontMatter(file: FakeFile, fn: (frontmatter: any) => void) { fn(file.frontmatter); return Promise.resolve(); } } ``` The `Publisher` class, which depended on this interface, is now also forced to be generic, but that's simple enough: ```typescript class Publisher<TFile> { fileManager: GenericFileManager<TFile>; constructor(fileManager: GenericFileManager<TFile>) { this.fileManager = fileManager; } async getFrontMatter(file: TFile) { return new Promise((resolve, reject) => { this.fileManager.processFrontMatter(file, resolve).catch(reject); }); } async publish(file: TFile) { const frontMatter = await this.getFrontMatter(file); const articleId = frontMatter['dev-article-id']; if (typeof articleId === 'number') { await update(/* ... */) } else { await create(/* ... */) } } } ``` The code above basically says: the `Publisher` needs to be constructed with some `fileManager`. It also has a function, `publish`, that must receive a `file` as input. The `Publisher` itself neither knows, nor cares, _what_ a file is; it just cares that whatever it receives is _something_ that the `fileManager` knows how to deal with. In test code the plugin is constructed with fake implementations: ```typescript const fakeFileManager = new FakeFileManager() const publisher = new Publisher(fakeFileManager) ``` In the main plugin file, the plugin is constructed with the real implementation: ```typescript const publisher = new Publisher(this.app.fileManager) ``` I didn't even need to specify the generic type argument for the `Publisher` constructor; it was just inferred from the passed arguments. Now ISP is followed; no plugin code has any dependency on any function or property it doesn't need. 
I can replace closed source classes in tests with minimal fakes that just simulate the behaviour I care about for the feature I am testing. That is how I was able to write a plugin where 50% of the functionality worked the first time it was loaded in Obsidian, and the other 50% required only two or three easily identified lines of code to change before it worked as well. ### Optionally Including Some Properties of `TFile`. I will add a hypothetical case which is not relevant for this plugin, but could be for readers wanting to adapt this approach. If your plugin depends on _some_ properties of `TFile`, for example `basename` and `extension`, you can add a _type constraint_ to the generic type. ```typescript /** * Represents properties of a TFile that _our_ plugin depends on */ type GenericFile = { basename: string; extension: string; } /** * A valid implementation of GenericFile that is used just for tests */ type FakeFile = GenericFile & { // Add whatever properties are relevant for the test doubles frontmatter: any; } class MyPluginLogic<TFile extends GenericFile> { processFile(file: TFile) { // in this scope, we can rely on the file having a basename and extension property. } } ``` This tells the compiler that only types that have a `basename: string` and an `extension: string` property can be used as generic type arguments, so now the code can safely use these two properties. As the real `TFile` has these properties, that is not a problem. 
The compiler will force us to add them to the `FakeFile` implementation, but they would already exist, because the entire point was to enable the practice of TDD, which means you would add them to the fake implementation _before_ actually writing the code that depends on them ;) ## Coming Up: Dealing with Inheritance To create an Obsidian plugin, you cannot avoid creating classes that _inherit from classes in the closed-source part of Obsidian_; at a minimum, the main plugin class _must_ inherit from Obsidian's `Plugin` class. In my case, I didn't test the main plugin class as there is _virtually no complexity_. But there are other cases where it _may_ be necessary. This makes it seemingly impossible to actually create this instance in test code; how can you create an instance of a class if the base class is not accessible? But it is actually possible; a topic I will be covering in an upcoming article. The gist of it is: a _class_ in JavaScript is just an identifier in the current scope. It is a reference to a function (the constructor). Functions can receive functions as arguments, and they can return functions, which implies they can also receive and return classes. This makes it possible to write a function that creates a class (not an instance, a class) - and this function could potentially use a parameter as the base class for the constructed class. The JavaScript `extends` keyword simply operates on an identifier in the current scope. I will write a proper article with code examples, including the TypeScript types necessary for this to compile correctly. ## p.s. This article was of course written in Obsidian, and published using the Obsidian DEV publish plugin (and then fixing some things that the plugin doesn't yet handle). The plugin is not yet available in the official community plugin list - but it can be installed using BRAT. Be aware of how your DEV api keys are stored before using it. 
The live stream is _currently_ available on [twitch.tv/stroiman](https://twitch.tv/stroiman). I will eventually publish it to [YouTube/@stroiman.development](https://www.youtube.com/@stroiman.development), with pauses removed. Have mercy on me, I am still a total noob in regards to streaming and video content.
stroiman
1,876,936
#117 Introduction to Natural Language Processing with Python
93 ReALM: Apple's AI Revolution for Seamless Siri Conversations Figure 2: AI Visual...
0
2024-06-04T16:30:05
https://dev.to/genedarocha/117-introduction-to-natural-language-processing-with-python-5847
# 93 ReALM: Apple's AI Revolution for Seamless Siri Conversations

![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5ad8ac41-6fe0-43d1-80fc-670388a65712_694x665.png)

**Figure 2: AI Visual Representation of the Apple ReALM AI system concept**

Apple AI Research focuses on how LLMs can resolve references not only within conversational text but also to on-screen entities (such as buttons or text in an app) and background information (like an app running on a device). Traditionally, this problem has been approached by separating the tasks into different modules or using models specific to each type of reference. However, the authors propose a unified model that treats reference resolution as a language modeling problem, capable of handling various reference types effectively. The link to the research paper is [https://arxiv.org/pdf/2403.20329.pdf](https://arxiv.org/pdf/2403.20329.pdf)

Voxstar's Substack is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.

Apple researchers have unveiled a breakthrough AI system named ReALM, designed to enhance how technology interprets on-screen content, conversational cues, and active background tasks. This innovative system translates on-screen information into text, streamlining the process by eliminating the need for complex image recognition technology. This advancement allows for more efficient AI operations directly on devices.
ReALM's capabilities enable it to understand the context of what a user is viewing on their screen along with any active tasks. The research highlights that advanced versions of ReALM have achieved superior performance levels compared to established models like GPT-4, albeit with a more compact set of parameters. An illustrative scenario demonstrates ReALM's practicality: a user browsing a website wishing to contact a business listed on the page can simply instruct Siri to initiate the call. The system intelligently identifies and dials the number directly from the website. This development signifies a significant leap towards creating voice assistants that are more attuned to context, potentially revolutionizing user interactions with devices by offering a more intuitive and hands-free experience.

Here are the main points and contributions of the paper, simplified for easier understanding:

### **Introduction and Motivation**

- **Problem Definition:** Understanding references within conversations and to on-screen or background entities is vital for interactive systems, like voice assistants, to function effectively.
- **Challenge:** Traditional models and large language models (LLMs) have struggled with this task, especially when it comes to non-conversational entities.
- **Solution:** The authors present a method using LLMs that significantly improves reference resolution by transforming it into a language modeling problem.

### **Approach**

- **Encoding Entities:** A novel approach is used to encode on-screen and conversational entities as natural text, making them understandable by LLMs.
- **Model Comparison:** The paper compares the proposed method, ReALM, against other models, including GPT-3.5 and GPT-4, demonstrating superior performance across various types of references.

### **Datasets and Models**

- The study utilizes datasets created for this specific task, including conversational data, synthetic data, and on-screen data.
- The models evaluated include a reimplementation of a previous system called MARRS, ChatGPT variants (GPT-3.5 and GPT-4), and the authors' own models of varying sizes (ReALM-80M, ReALM-250M, ReALM-1B, and ReALM-3B). ### **Results and Analysis** - **Performance:** ReALM models outperform both the baseline (MARRS) and ChatGPT variants, with the largest ReALM models showing significant improvements in resolving on-screen references. - **Practical Implications:** The research suggests that ReALM models could be used in practical applications, providing accurate reference resolution with fewer parameters and computational requirements than models like GPT-4. ### **Figures and Model Comparisons** The paper includes comparative figures illustrating the performance of the proposed ReALM models against traditional models and ChatGPT variants (GPT-3.5 and GPT-4). These figures are critical in demonstrating the substantial improvements in accuracy and efficiency the ReALM models offer across different datasets: conversational data, synthetic data, and on-screen data. The figures likely show metrics such as precision, recall, and F1 scores, which are standard for evaluating the performance of models in tasks involving natural language understanding and reference resolution. One significant aspect that the figures highlight is the absolute gains in performance over existing systems, especially in resolving on-screen references. The smallest ReALM model achieves absolute gains of over 5% for on-screen references compared to the baseline, indicating a notable improvement in handling non-conversational entities. This enhancement is crucial for developing more intuitive and responsive conversational agents that can interact with users in a more natural and context-aware manner. Furthermore, the comparison with GPT-3.5 and GPT-4 underlines the efficiency of ReALM models. 
Despite being significantly smaller and faster, ReALM models perform comparably to or even outperform GPT-4 in specific scenarios. This efficiency is particularly relevant for applications running on devices with limited computing power, such as smartphones and smart home devices, where delivering real-time responses is essential. ### **Detailed Analysis and Implications** The paper's approach to encoding entities as a natural text for processing by LLMs is both novel and practical. By reconstructing on-screen content into a textually representative format, the authors tackle the challenge of reference resolution in a domain traditionally dominated by visual and spatial understanding. This method's success, as evidenced by the performance figures, suggests a promising direction for integrating LLMs into a wider range of applications beyond purely textual tasks. Moreover, the ReALM models' ability to handle complex reference resolution tasks with fewer parameters is a significant technical achievement. This efficiency opens up new possibilities for deploying advanced natural language processing (NLP) capabilities on a broader spectrum of devices and platforms, potentially making sophisticated conversational interfaces more accessible to users worldwide. The comparative analysis also sheds light on the importance of domain-specific fine-tuning. By training ReALM models on user-specific data, the models gain a deeper understanding of domain-specific queries and contexts. This fine-tuning allows ReALM to surpass even the latest version of ChatGPT in understanding nuanced references, demonstrating the value of targeted model optimization in achieving high performance in specialized tasks. 
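As a rough, purely illustrative sketch of the paper's central idea (my own toy code, not the authors'), encoding on-screen entities as plain text that a language model can reason over might look like this:

```python
# Toy illustration: flatten on-screen UI entities into a textual list that a
# language model could resolve references against ("call the number on screen").
def encode_screen(entities):
    # Sort top-to-bottom, left-to-right to roughly preserve reading order.
    ordered = sorted(entities, key=lambda e: (e["top"], e["left"]))
    lines = [f'[{i}] {e["type"]}: "{e["text"]}"' for i, e in enumerate(ordered)]
    return "\n".join(lines)

screen = [
    {"type": "button", "text": "Call 555-0123", "top": 40, "left": 10},
    {"type": "text", "text": "Best Pizza in Town", "top": 10, "left": 10},
]
print(encode_screen(screen))
# [0] text: "Best Pizza in Town"
# [1] button: "Call 555-0123"
```

A prompt could then embed this listing and ask the model which entry a request like "call them" refers to, which is what makes reference resolution a pure language modeling problem.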
[![](https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F581515d3-6419-44c1-885b-0caafad40b08_1606x903.png)](https://substackcdn.com/image/fetch/f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F581515d3-6419-44c1-885b-0caafad40b08_1606x903.png)

**Figure 2: LMSYS [Chatbot Arena](https://lmsys.org/blog/2023-05-03-arena/) is a crowdsourced open platform for LLM evals**

### **Google**

Google, through its parent company Alphabet, is a powerhouse in AI research and application, known for its open approach to research and contributions to foundational AI technologies. Google's DeepMind subsidiary made headlines with AlphaGo, the first computer program to defeat a world champion in Go, a complex board game. Google's AI prowess extends into practical applications, from its search algorithms to autonomous driving ventures with Waymo. According to "The State of AI 2023" report, Google continues to lead in publishing cutting-edge AI research, contributing significantly to the field's advancement.

### **Meta**

Meta has shifted its focus towards building AI that supports large-scale social networks and its ambitious metaverse project. Meta AI Research Lab is known for its work on machine learning models that process natural language and understand social media content. Meta has also made strides in creating AI models that generate realistic virtual environments, which is crucial for its vision of the metaverse.
Despite facing criticism over data privacy concerns, Meta's investments in AI are substantial, as evidenced by their continuous release of open-source AI models and tools. ### **Amazon** Amazon leverages AI across its vast ecosystem, from enhancing customer recommendations to optimizing logistics in its fulfillment centers. Amazon Web Services (AWS) offers a range of AI and machine learning services to businesses, making sophisticated AI tools accessible to a wide audience. In the consumer space, Amazon's Alexa is a prime example of AI integration into everyday life, offering voice-activated assistance. While Amazon may not publish as much research as Google or Meta, its AI applications in retail, cloud computing, and consumer electronics are extensive and deeply integrated into its operations. ### **OpenAI** Initially founded as a non-profit to ensure AI benefits all of humanity, OpenAI has transitioned into a capped-profit entity. It has made headlines with groundbreaking models like the GPT (Generative Pre-trained Transformer) series, culminating in GPT-4. OpenAI's approach to AI is both ambitious and cautious, emphasizing safe and ethical AI development. OpenAI's collaboration with Microsoft has provided it with significant computational resources, enabling large-scale models that have set new standards for natural language processing and generation. ### **Grok / X** Grok X AI, although not as widely recognized as the giants like Google or Meta, plays a crucial role in the AI domain by focusing on the infrastructure that powers these advanced systems. Grok AI specializes in developing cutting-edge solutions optimized for AI and machine learning computations. Their work is essential for supporting the computational demands of large-scale AI models, making Grok AI a key player in enabling the next wave of AI innovations. 
While Grok AI's contributions might not be in direct AI research or application development, their technology is foundational in providing the necessary horsepower for AI models to run efficiently and effectively.

### **Apple**

Apple's approach to AI is somewhat different, prioritizing user privacy and on-device processing. Apple integrates AI across its product lineup, enhancing user experiences with features like Face ID, Siri voice recognition, and Proactive Suggestions. Unlike its counterparts, Apple tends to be more reserved about its AI research, focusing on applying AI in ways that enhance product functionality while safeguarding user data. Despite this, Apple has made significant hires in the AI space and acquired startups to bolster its AI capabilities, signaling a strong but understated presence in AI.

### **Conclusion and Future Direction**

In conclusion, the paper "ReALM: Reference Resolution As Language Modeling" makes a significant contribution to the field of NLP by demonstrating the feasibility and effectiveness of treating reference resolution as a language modeling problem. The comparative figures and analyses provided in the paper underscore the potential of ReALM models to revolutionize how conversational agents understand and respond to human language. As research in this area continues to evolve, we can look forward to more intuitive, efficient, and intelligent systems that bridge the gap between human communication and machine understanding.

#ArtificialIntelligence #MachineLearning #DeepLearning #NeuralNetworks #ComputerVision #AI #DataScience #NaturalLanguageProcessing #BigData #Robotics #Automation #IntelligentSystems #CognitiveComputing #SmartTechnology #Analytics #Innovation #Industry40 #FutureTech #QuantumComputing #Iot #blog #x #twitter #genedarocha #voxstar
genedarocha
1,876,935
Recursion 遞迴
接下來想要緩慢的把自己的 CS 相關基礎補上,多多接觸寬廣的技術知識,就先從遞迴開始吧! 🐳 遞迴的種類 如果一個 function 裡面有 self-calling...
0
2024-06-04T16:27:31
https://simonecheng.github.io/recursion/
dsa
Next, I want to slowly fill in my CS fundamentals and expose myself to a broader range of technical knowledge, so let's start with recursion!

## 🐳 Types of Recursion

If a function contains a self-calling statement, it is called recursion. Recursion can roughly be divided into three types:

- Direct Recursion
- Indirect Recursion
- Tail Recursion

Below are some simple examples to illustrate each of the three.

### 🦀 Direct Recursion

Direct recursion should be fairly easy to understand: if a function calls itself inside its own body, it is called direct recursion. See the pseudocode below:

```c
void directRecursionFunction() {
  // some code...
  directRecursionFunction();
  // some code...
}
```

### 🦀 Indirect Recursion

Indirect recursion means several modules call one another, forming a calling cycle. For example, suppose we have three functions, `module A`, `module B`, and `module C`; if these three functions call one another, we get indirect recursion, as in the diagram below:

![circular-dependency-example](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oo4tszrtrsr7t85q6tyg.png)

> Try to avoid writing functions that call each other back and forth like this in real development; that kind of high mutual dependency (tight coupling) gets scary.

### 🦀 Tail Recursion

Tail recursion is actually a kind of direct recursion, except that the next executable statement after the recursive call is the END statement. This type is singled out because such recursion can be optimized by the compiler. (Here, "optimization" can to some extent be understood as "converting the recursion into non-recursion".)

## 🐳 Recursion vs. Iteration (Non-recursion)

- The solution to any problem can be expressed with two kinds of algorithms: recursive and non-recursive.
- Recursive and non-recursive algorithms can be converted into each other.
- Converting recursion into non-recursion follows a standard procedure.
- Converting non-recursion back into recursion has no standard procedure (it takes inspiration).

### Diagram

![recursion-to-for-loop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hqoyy6p0ixhjroqt2a9v.png)

### Comparison Table

||Recursion|Non-recursion|
|---|---|---|
|Code|More concise|More verbose|
|Local and temporary variables|Few or none|Many|
|Expressive power|Powerful|Weak|
|Debugging|Difficult|Easy|
|Execution time|Longer, less efficient|Shorter, more efficient|
|Memory stack space|Needs extra stack space, so more dynamic memory at runtime|No stack support needed|

## 🐳 Practice Problems

### 🦀 Factorial N!

#### Question 1: Write an Iterative function Fac(N) or pseudo code for N!

```js
function fac(n) {
  let result = 1;
  for (let i = 1; i <= n; i++) {
    result = result * i;
  }
  return result;
}
```

#### Question 2: Write a Recursive function Fac(N) or pseudo code for N!

First write out the recursive mathematical definition of the factorial:

{% katex %} n! = \begin{cases} 1,\ if\ n = 0 \\\\ (n-1)! * n,\ if\ n > 0 \end{cases} {% endkatex %}

Then write the recursive code:

```js
function fac(n) {
  if (n === 0) {
    return 1;
  } else {
    return fac(n-1) * n;
  }
}
```

> The trick for solving recursion problems: first work out the recursive mathematical definition, then translate that definition into code!
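As a side note not in the original outline, the recursive factorial above can also be written in tail-recursive form, the variant discussed earlier that compilers can optimize:

```js
// Tail-recursive factorial: the recursive call is the very last operation,
// so an engine with tail-call optimization can run it in constant stack space.
function facTail(n, acc = 1) {
  if (n === 0) return acc;
  return facTail(n - 1, acc * n);
}

console.log(facTail(5)); // 120
```

The accumulator `acc` carries the partial product forward, which is what removes the pending multiplication after the recursive call.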
### 🦀 Fibonacci Number

#### Definition

{% katex %} \begin{cases} F_{0} = 0 \\\\ F_{1} = 1 \\\\ F_{n} = F_{n-1} + F_{n-2},\ for\ n \ge 2 \end{cases} {% endkatex %}

#### Question 1: Write a Recursive function for Fib(N)

```js
function fib(n) {
  if (n === 0) {
    return 0;
  }
  if (n === 1) {
    return 1;
  }
  return fib(n-1) + fib(n-2);
}
```

#### Question 2: Write an Iterative function for Fib(N)

```js
function fib(n) {
  if (n === 0) {
    return 0;
  } else if (n === 1) {
    return 1;
  } else {
    let a = 0;
    let b = 1;
    let c;
    for (let i = 2; i <= n; i++) {
      c = a + b;
      a = b;
      b = c;
    }
    return c;
  }
}
```

### 🦀 Greatest Common Divisor (GCD)

#### Definition

Use the Euclidean algorithm to compute the greatest common divisor of two numbers (A, B), defined as follows:

{% katex %} \begin{cases} B,\ if\ (A\mod B) = 0 \\\\ GCD(B,\ A\mod B),\ otherwise \end{cases} {% endkatex %}

#### Write the recursive code for GCD(A, B)

```js
function gcd(a, b) {
  if (a % b === 0) return b;
  return gcd(b, a % b);
}
```

### 🦀 Tower of Hanoi

#### Problem Statement

There are three pegs, called A, B, and C. Peg A holds n disks of different sizes, stacked by size with the smallest on top and the largest at the bottom. Move all the disks from peg A to peg C while obeying the following rules:

1. Only one disk may be moved at a time.
2. A larger disk may never be placed on top of a smaller disk.

Print out every move.

#### Approach

```
    A          B         C
    │          │         │
    │          │         │
    │          │         │
1 ┌─┼─┐        │         │
2 ┌┼┼┼┼┼┐      │         │
3 ┌┼┼┼┼┼┼┼┐    │         │
 ─┴───────┴─ ────┴──── ────┴────
```

Start with an example: suppose we have pegs A, B, and C, with 3 disks on peg A. The steps are:

1. move disk 1 from A to C
2. move disk 2 from A to B
3. move disk 1 from C to B
4. move disk 3 from A to C
5. move disk 1 from B to A
6. move disk 2 from B to C
7. move disk 1 from A to C

Looking at the steps in three blocks:

1. Steps 1-3 move disks 1 and 2 from peg A to peg B.
2. Step 4 moves the last disk, disk 3, directly from peg A to peg C.
3. The remaining steps move the disks on peg B over to peg C.

Special case: if there is only one disk, simply move it straight from peg A to peg C.

```js
function hanoi(n, from, to, via) {
  if (n === 1) {
    console.log(`move disk 1 from ${from} to ${to}`);
  } else {
    hanoi(n - 1, from, via, to); // first move n - 1 disks to the intermediate peg
    console.log(`move disk ${n} from ${from} to ${to}`); // move the bottom disk to the target peg
    hanoi(n - 1, via, to, from); // then move the remaining n - 1 disks to the target peg
  }
}
```

#### Recursive Definition of the Tower of Hanoi

Expressing the steps above mathematically, {% katex inline %}T(n){% endkatex %} denotes the number of operations needed to move {% katex inline %}n{% endkatex %} disks; solving for {% katex inline %}T(n){% endkatex %} gives the time complexity of this function:

{% katex %} \begin{equation*} \begin{split} T(n) &= T(n - 1) + 1 + T(n - 1),\ \text{with}\ T(1) = 1\\\\ &= 2T(n - 1) + 1 \end{split} \end{equation*} {% endkatex %}

Solve {% katex inline %}T(n){% endkatex %} by repeated expansion and substitution:

{% katex %} \begin{equation*} \begin{split} T(n) &= 2 * T(n - 1) + 1\\\\ &= 2 * [2 * T(n - 2) + 1] + 1\\\\ &= 4 * T(n - 2) + 3\\\\ &= 4 * [2 * T(n - 3) + 1] + 3\\\\ &= 8 * T(n - 3) + 7\\\\ &= 16 * T(n - 4) + 15\\\\ &= 2^{n-1} * T(n - (n - 1)) + (2^{n-1} - 1)\\\\ &= 2^{n-1} * T(1) + (2^{n-1} - 1)\\\\ &= 2^{n-1} + (2^{n-1} - 1)\\\\ &= 2^n - 1 \approx O(2^n) \end{split} \end{equation*} {% endkatex %}
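As a quick sanity check (my own addition, not part of the original write-up), the recurrence can be evaluated directly and compared against the closed form:

```js
// Evaluate the recurrence T(n) = 2T(n-1) + 1 with T(1) = 1 directly,
// and compare against the closed form 2^n - 1.
function moves(n) {
  if (n === 1) return 1;
  return 2 * moves(n - 1) + 1;
}

for (const n of [1, 3, 10]) {
  console.log(n, moves(n), 2 ** n - 1);
}
// 1 1 1
// 3 7 7
// 10 1023 1023
```

The two columns agree for every n, matching the derivation above.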
simonecheng
1,876,924
LLM Evaluations: Why They Matter
When building applications powered by large language models, it's easy to get excited about the rapid...
0
2024-06-04T16:25:58
https://dev.to/petrbrzek/you-need-llm-evaluations-to-make-your-app-stable-1j94
When building applications powered by large language models, it's easy to get excited about the rapid prototyping capabilities. However, as you move beyond the initial prototype phase, you'll encounter various challenges that can impact the stability and reliability of your app. To address these issues and ensure a robust LLM-based application, implementing a comprehensive evaluation and testing strategy is crucial. ## The Challenges of LLM-based Apps: 1. Hallucinations: LLMs can generate outputs that seem plausible but are factually incorrect or inconsistent with reality. 2. Factuality problems: LLMs may provide inaccurate information or make mistakes in their responses. 3. Steering to weird directions: LLMs can sometimes generate inappropriate or irrelevant content. 4. Hacking attempts: Malicious users may try to exploit vulnerabilities in LLMs to manipulate their behavior. 5. Reputational and legal risks: Inaccurate or offensive outputs from LLMs can damage your brand reputation and potentially lead to legal issues. ## The Importance of LLM Evaluations: To mitigate these challenges and ensure the stability of your LLM-based app, implementing a robust evaluation and testing process is essential. Here's how you can approach it: 1. Record all data: Start by logging all interactions with your LLM-based app. This includes user inputs, generated outputs, and any relevant metadata. 2. Flag bad answers: Manually review the logged data and flag any instances of hallucinations, factual errors, inappropriate content, or other problematic outputs. 3. Create test datasets: Use the flagged bad answers to create test datasets that cover a wide range of potential issues. These datasets will serve as a reference for evaluating the performance of your LLM. 4. Implement automated tests: Develop automated tests that compare the LLM's outputs against the expected results defined in your test datasets. 
This allows you to quickly identify regressions and ensure the stability of your app as you iterate on the LLM's prompts and configurations. 5. Leverage LLMs as judges: Utilize separate LLMs as "judges" to evaluate the quality and appropriateness of the outputs generated by your primary LLM. This adds an extra layer of validation and helps catch issues that may be missed by automated tests. 6. Perform post-processing checks: Implement post-processing checks on the LLM's outputs to detect and handle problematic content, such as prompt injection attempts, profanity, or outputs that violate predefined constraints. 7. Continuously iterate and expand: As you discover new issues or edge cases, update your test datasets and automated tests accordingly. Continuously monitor the performance of your LLM-based app and iterate on the evaluation process to ensure ongoing stability and reliability. Building stable and reliable LLM-based applications requires a proactive approach to evaluation and testing. By recording data, flagging bad answers, creating test datasets, implementing automated tests, leveraging LLMs as judges, performing post-processing checks, and continuously iterating, you can effectively identify and address the challenges associated with LLMs. This comprehensive evaluation strategy will help you deliver a high-quality and trustworthy application to your users. ## Do you want to know how to implement these LLM evaluation techniques in your own projects? Let me know in the comments below, and I'll be happy to provide more detailed guidance and share some practical examples to help you get started!
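To make points 4 and 6 above concrete, here is a minimal sketch. All names such as `fake_llm` are hypothetical; a real setup would call your actual model and use a much richer dataset:

```python
# Hypothetical sketch: an automated regression test (point 4) plus a simple
# post-processing check (point 6). fake_llm() stands in for a real model call.
BLOCKLIST = {"ignore previous instructions"}

def post_process(output: str) -> str:
    # Replace outputs containing blocked phrases with a safe placeholder.
    lowered = output.lower()
    for phrase in BLOCKLIST:
        if phrase in lowered:
            return "[filtered]"
    return output

def run_eval(llm, dataset):
    """dataset: list of (prompt, expected_substring) pairs. Returns failures."""
    failures = []
    for prompt, expected in dataset:
        answer = post_process(llm(prompt))
        if expected.lower() not in answer.lower():
            failures.append((prompt, answer))
    return failures

def fake_llm(prompt):
    return "Paris is the capital of France."

dataset = [("What is the capital of France?", "Paris")]
print(run_eval(fake_llm, dataset))  # [] -> no regressions
```

Run this on every prompt change: a non-empty failure list flags a regression before it reaches users.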
petrbrzek
1,876,923
Day 5 of 30
This is one of those days I was very overwhelmed and decided I needed to set one thing straight and...
0
2024-06-04T16:24:05
https://dev.to/francis_ngugi/day-5-of-30-3bh3
This is one of those days when I was very overwhelmed and decided I needed to set one thing straight and come up with a friendly schedule. I actually got a nice schedule from Monica AI:

## **Phase 1 (Weeks 1-3):**

- Focus on learning React.js fundamentals, building various React projects, and becoming proficient in React.

## **Phase 2 (Weeks 4-6):**

- Shift your focus to learning Flask, build a few Flask projects, and work on integrating your React and Flask skills to create full-stack web applications.

## **Phase 3 (Weeks 7-10):**

- Once you have a good grasp of React and Flask, and can build functional full-stack projects, then start learning data structures and algorithms.
- Allocate time to practice DS&A concepts and solve coding challenges.

## **Phase 4 (Weeks 11-14):**

- Introduce the TryHackMe platform and start learning about cybersecurity and ethical hacking.
- Spend time working through TryHackMe modules and challenges, ensuring you maintain a strong ethical and legal approach.

**I also managed to learn some new concepts, but I will post them tomorrow together with more of what I will be learning next.**
francis_ngugi
1,876,922
Recreating Stripe’s Roles in PropelAuth
Stripe is a platform that allows companies of all sizes to accept payments, issue invoices and...
0
2024-06-04T16:22:24
https://www.propelauth.com/post/recreating-stripes-roles-propelauth
webdev, product, tutorial, security
[Stripe](https://stripe.com/?ref=propelauth.com) is a platform that allows companies of all sizes to accept payments, issue invoices and generally manage their billing. Their offering is necessarily a bit complex, and they have a roles structure to match. Every user can have a combination of roles that allows them access to certain features and functionality within the product. When you invite a user you’re presented with a list that looks like this: ![Stripe's roles UI](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ue4o3zifpcyg82frqw4z.png) All in all, there are 14 roles to choose from, plus a default 15th role (Owner) that is automatically assigned to whoever creates the account and cannot be assigned to another person. In this article, we’ll walk you through how to recreate Stripe’s Roles model using [PropelAuth](https://www.propelauth.com/?ref=propelauth.com). In order to follow this tutorial, first create a PropelAuth account, and then navigate to the Roles & Permissions page within the dashboard. ## **Single Role vs Multi-Role** PropelAuth allows for two separate role models — single, where a user can only be assigned one role at a time, and multi, where a user can have multiple roles. The default in PropelAuth is single role, so before we begin, we’ll need to switch to multi-role. This can be done via the Configuration button on the Roles & Permissions page. ![How to switch to multi-role](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5ouqb03l6xs71ja0mahl.png) ## **Adding Roles** Next, we’re going to want to add in all of our Stripe roles via the Add Role button. The Add Role modal lets you choose the name of your role, as well as select which other roles it can manage (basically: which other users can this role modify access for). 
The majority of Stripe’s roles don’t interact with other roles, so we’ll go ahead and leave that blank: ![Adding the View Only role](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lixkqnvadguzjn920v8a.png) And once we click Add, we’ll see it appear in the list: ![Showing View Only in the list](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zxi7qj8fj2kvebx9kc9m.png) PropelAuth has the concept of a Default role — a role that is automatically assigned to a user when they join the org if no other role has been given (this can happen if they join via a domain match or SAML login). Let’s go ahead and make View Only the default by expanding it, clicking the Gear icon and choosing “Set as Default Role.” ![Showing how to set the default role](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cy2fwhbes0hlkurifyi2.png) After we’ve done that, we can see that our list is now updated. ![List again - View Only is now marked as Default](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9ce0pcgm7mqvoxvtzb3j.png) Next, we’ll go ahead and keep adding roles the same way. The Administrator and IAM Administrator roles will be a bit different — these roles have access to manage other users. ![Adding Administrator, which can manage other roles](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/00u2w5h7up5omqc1q17n.png) Once we’ve done all of that, we’re going to want to make one last change — updating the pre-existing Owner role to be able to manage users as well. Since it was created before we added all of these additional roles, it only has access to manage itself. To make the changes we need, we’ll expand it, click the gear icon and select Edit Role. ![Selecting Edit Role](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6kihcvrqe8uihxapl6js.png) And then we’ll be able to give it access to manage all of the other roles. 
![Owner role with all the other roles it can manage](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/o6aymvfawynkoyvvma5y.png) Once we do that, we can easily see the change in the expanded role view: ![Expanded Role View for Owner](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2g6x0usls2zn6bte3ffc.png) ## **User-Facing Interactions** Okay — so we’re all set up. Let’s see how this impacts our users. I’ve created an account, [jane@doe.com](mailto:jane@doe.com), and logged in. She is the first person in her organization, so she has been automatically assigned the Owner role, which can easily be seen from her Users page on the PropelAuth hosted pages: ![Jane Doe's users page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yfa5h00bad0zt9o8grad.png) Now, let’s invite another user to her organization — [john@doe.com](mailto:john@doe.com). Because [jane@doe.com](mailto:jane@doe.com) has the power to manage all of the roles, she can see and choose from the entire list. Let’s give John the View Only role: ![Invitation options for john@doe.com](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rnsycdi4j392im2kdyua.png) Note: PropelAuth does not enforce that there can only be one Owner role, but does enforce that at least one person must be designated as an Owner within an organization. Now that John has an account, let’s log in and see what he sees: ![John's user page](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7x3ypbxdyi2tsg2ms5lx.png) His view is a bit different to reflect his permissions — he can only see the members of his organization, and he cannot edit them, or invite others. ## **Custom Permissions** By default, PropelAuth provides a set of permissions related to PropelAuth itself — things like the ability to invite users, manage SAML connections, etc. You can also create custom permissions to manage access within your own application. 
Let’s use Stripe’s Developer role as an example: ![Description of Stripe's developer role](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jfspenmsjzlgh5pd4lu1.png) Looks like we need to add a permission around accessing a secret key. To do so, we return to the Roles & Permissions page in PropelAuth, select Configuration → Configure Permissions. ![Selecting Configure Permissions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/bqqfhr95rc6m96w71utl.png) A side panel will open, where we can add a new permission and assign it to a role: ![Adding a new permission](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jfnzgqq4r7frap6okjnt.png) And now, when we expand the Developer role within the list, we’ll be able to see that it has access to the canAccessSecretKey permission: ![Developer role, with the new permission visible](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yzt847afxo2l0xna14z2.png) When an engineer goes to write code, they can write: ```typescript // This will be different in each language user.hasPermission(organization, "canAccessSecretKey") ``` and this will take into account the role the user has to determine whether they can access the secret key. ## **More Reading** If you’d like to learn more about roles & permissions in PropelAuth, check out our documentation [here](https://docs.propelauth.com/features/rbac?ref=propelauth.com). If you’re interested in reading more about how to design your product’s RBAC, check our our guide [here](https://www.propelauth.com/post/guide-to-rbac-for-b2b-saas?ref=propelauth.com).
victoria_propel
1,876,921
Learn Rust by building a real web application.
I heard about Rust few years back but was hesitant to learn it mainly due to its unique syntax. But i...
0
2024-06-04T16:22:09
https://dev.to/phunsukh_wangdu_2c0990752/learn-rust-by-building-a-real-web-application-4m99
rust, learning, webapplication
I heard about Rust a few years back but was hesitant to learn it, mainly due to its unique syntax. But I kept stumbling on it ever since, whether on podcasts, YouTube channels, or talks about how major companies are embracing it over C++. I finally decided I wanted to learn the language. I started reading the book, but in no time got lost in the new terminology and syntax. So I reset my learning and decided I would build a real-world application.

This is not a simple hello world or a todo list. I am going to be building a real-world application which has all the features. I am building a blog website which will have all of these features:

1. Authentication
2. Email verification
3. OAuth sign-in
4. Image upload to the server
5. Uploading images to S3
6. Static files for HTML rendering

Let's get to it.
phunsukh_wangdu_2c0990752
1,876,919
Supercharge Your React App's SEO: Tips and Tricks
Introduction: In today's digital landscape, having a website or web application with excellent search...
0
2024-06-04T16:19:21
https://dev.to/vyan/supercharge-your-react-apps-seo-tips-and-tricks-34pm
webdev, javascript, beginners, react
**Introduction:** In today's digital landscape, having a website or web application with excellent search engine optimization (SEO) is crucial for success. However, when it comes to Single Page Applications (SPAs) built with React, SEO can be a bit tricky. React's client-side rendering approach can pose challenges for search engine crawlers, potentially hindering your app's discoverability. Fear not, though! With the right strategies, you can enhance your React app's SEO and ensure that it ranks well in search engine results.

**1. Server-Side Rendering (SSR)**

One of the most effective ways to improve your React app's SEO is to implement server-side rendering (SSR). This approach renders the initial page content on the server and sends the fully rendered HTML to the client's browser. This ensures that search engine crawlers can easily access and index your app's content. To implement SSR in your React app, you can use frameworks like Next.js or Remix. These frameworks provide built-in support for SSR, making the process more streamlined.

**2. Optimize Your Meta Tags**

Meta tags play a crucial role in providing search engines with information about your app's content. Make sure to include relevant and descriptive meta tags, such as the title, description, and keywords, for each page or component in your React app. Here's an example of how you can set meta tags in your React app:

```jsx
import React from 'react';
import { Helmet } from 'react-helmet';

const HomePage = () => {
  return (
    <div>
      <Helmet>
        <title>My React App - Home</title>
        <meta name="description" content="This is the home page of my awesome React app." />
        <meta name="keywords" content="react, seo, web development" />
      </Helmet>
      {/* Your home page content */}
    </div>
  );
};

export default HomePage;
```

**3. Optimize Your URLs**

Search engine crawlers and users alike prefer clean and descriptive URLs. In your React app, ensure that your URLs are human-readable and reflect the content they represent.
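To illustrate the point about clean URLs, here is a small slug helper (a hypothetical example of my own, not from any particular library) that turns a post title into a readable path segment you could use in a route like `/posts/:slug`:

```javascript
// Turn a human-readable title into a URL-friendly slug,
// e.g. for use in a React Router path like /posts/:slug
function slugify(title) {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, "") // drop punctuation
    .replace(/\s+/g, "-")         // spaces -> hyphens
    .replace(/-+/g, "-");         // collapse repeated hyphens
}

console.log(slugify("Supercharge Your React App's SEO!"));
// "supercharge-your-react-apps-seo"
```

Generating paths this way keeps your URLs consistent with page content, which both crawlers and users prefer.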
You can achieve this by using React Router's `Link` component and specifying meaningful paths.

**4. Create an XML Sitemap**

An XML sitemap is a file that lists all the important pages and content of your website, making it easier for search engines to crawl and index your app's content. Generate an XML sitemap for your React app and submit it to major search engines like Google and Bing.

**5. Utilize Pre-rendering Techniques**

If implementing full-fledged server-side rendering isn't feasible for your project, you can opt for pre-rendering techniques like static site generation or pre-rendering during build time. These approaches generate static HTML files for your React app's pages, which can then be served to search engine crawlers and users alike.

**6. Optimize Images and Other Assets**

Don't forget to optimize images, videos, and other assets used in your React app. Compress them to reduce file sizes and improve load times, as faster-loading websites tend to rank higher in search engine results.

**7. Implement Progressive Web App (PWA) Features**

Enabling Progressive Web App (PWA) features in your React app can enhance the user experience and potentially improve your app's SEO. PWAs offer features like offline functionality, push notifications, and fast load times, which can contribute to better engagement and higher rankings.

**Conclusion:** Optimizing your React app's SEO is an ongoing process that requires attention to various factors. By implementing server-side rendering, optimizing meta tags and URLs, creating an XML sitemap, utilizing pre-rendering techniques, optimizing assets, and leveraging PWA features, you can significantly improve your app's visibility and discoverability in search engine results. Stay proactive, monitor your app's performance, and continuously refine your SEO strategies for maximum impact.
vyan
1,876,918
Buy verified cash app account
https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash...
0
2024-06-04T16:17:02
https://dev.to/tarrantteafo/buy-verified-cash-app-account-4dma
webdev, javascript, beginners, programming
https://dmhelpshop.com/product/buy-verified-cash-app-account/
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oi6qnpbwdgyz2jqe90ig.png)

Buy verified cash app account

Cash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.

Our commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.

Why dmhelpshop is the best place to buy USA cash app accounts?
It’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.

Clearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.

Our account verification process includes the submission of the following documents: [List of specific documents required for verification].

Genuine and activated email verified
Registered phone number (USA)
Selfie verified
SSN (social security number) verified
Driving license
BTC enable or not enable (BTC enable best)
100% replacement guaranteed
100% customer satisfaction

When it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.

Clearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.

Additionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.

How to use the Cash Card to make purchases?
To activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. How To Buy Verified Cash App Accounts.

After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.

Why we suggest to unchanged the Cash App account username?
To activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.

Alternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.

Selecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.

Buy verified cash app accounts quickly and easily for all your financial needs.
As the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. How To Buy Verified Cash App Accounts.

For entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.

When it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.

This article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.

Is it safe to buy Cash App Verified Accounts?
Cash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.

Unfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts.

Cash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.

Leveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.

Why you need to buy verified Cash App accounts personal or business?
The Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.

To address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.

If you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.

Improper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.

A Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.

This accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.

How to verify Cash App accounts
To ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.

As part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.

How cash used for international transaction?
Experience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.

No matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.

Understanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.

As we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.

Offers and advantage to buy cash app accounts cheap?
With Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.

We deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.

Enhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.

Trustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.

How Customizable are the Payment Options on Cash App for Businesses?
Discover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.

Explore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.

Discover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.

Where To Buy Verified Cash App Accounts
When considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.

Equally important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.

The Importance Of Verified Cash App Accounts
In today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.

By acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.

Conclusion
Enhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.

Choose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.

Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com
tarrantteafo
1,876,917
Buy verified cash app account
Buy verified cash app account Cash app has emerged as a dominant force in the realm of mobile banking...
0
2024-06-04T16:16:48
https://dev.to/whitemartin9875/buy-verified-cash-app-account-381a
Buy verified cash app account Cash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security. Our commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer. Why dmhelpshop is the best place to buy USA cash app accounts? It’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service. Clearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents. Our account verification process includes the submission of the following documents: [List of specific documents required for verification]. Genuine and activated email verified Registered phone number (USA) Selfie verified SSN (social security number) verified Driving license BTC enable or not enable (BTC enable best) 100% replacement guaranteed 100% customer satisfaction When it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. 
If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential. Clearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license. Additionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process. How to use the Cash Card to make purchases? To activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. How To Buy Verified Cash App Accounts. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Why we suggest to unchanged the Cash App account username? To activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts. Selecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.   Buy verified cash app accounts quickly and easily for all your financial needs. As the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. How To Buy Verified Cash App Accounts. For entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale. When it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts.  With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source. 
This article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.   Is it safe to buy Cash App Verified Accounts? Cash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process. Unfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts. Cash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers. Leveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.   Why you need to buy verified Cash App accounts personal or business? The Cash App is a versatile digital wallet enabling seamless money transfers among its users. 
However, it presents a concern as it facilitates transfer to both verified and unverified individuals. To address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all. If you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts. https://dmhelpshop.com/product/buy-verified-cash-app-account/ Improper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts. A Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account. This accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. 
As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.   How to verify Cash App accounts To ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account. https://dmhelpshop.com/product/buy-verified-cash-app-account/ As part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.  https://dmhelpshop.com/product/buy-verified-cash-app-account/ How cash used for international transaction? Experience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom. No matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account. https://dmhelpshop.com/product/buy-verified-cash-app-account/ Understanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. 
In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial. https://dmhelpshop.com/product/buy-verified-cash-app-account/ As we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account. Offers and advantage to buy cash app accounts cheap? With Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform. We deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else. Enhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account. Trustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential. https://dmhelpshop.com/product/buy-verified-cash-app-account/ How Customizable are the Payment Options on Cash App for Businesses? Discover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. 
Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management. https://dmhelpshop.com/product/buy-verified-cash-app-account/ Explore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account. Discover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all. Where To Buy Verified Cash App Accounts When considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account. Equally important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise. The Importance Of Verified Cash App Accounts In today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions. By acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace. 
https://dmhelpshop.com/product/buy-verified-cash-app-account/ Conclusion Enhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts. https://dmhelpshop.com/product/buy-verified-cash-app-account/ Choose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively. https://dmhelpshop.com/product/buy-verified-cash-app-account/ Contact Us / 24 Hours Reply Telegram:dmhelpshop WhatsApp: +1 ‪(980) 277-2786 Skype:dmhelpshop Email:dmhelpshop@gmail.com
whitemartin9875
1,825,792
AND / OR operators, Short-Circuiting and Nullish Coalescing in Javascript
Introduction Short Circuiting and Nullish Coalescing are mechanisms available in...
0
2024-06-04T16:16:26
https://dev.to/hatemtemimi/and-or-operators-short-circuiting-and-nullish-coalescing-in-javascript-1292
webdev, javascript, logicalgates, operators
## Introduction

Short-circuiting and nullish coalescing are mechanisms available in JavaScript to enhance code efficiency and readability by providing a special way of evaluating expressions. Short-circuiting relies on the logical operators `&&` and `||`, while nullish coalescing relies on the operator `??`. Both tools are powerful, but they have to be used in the right context to shine.

## A detailed comparison

Get some coffee and buckle up, we're going full technical.

## Short-Circuiting

Short-circuiting applies to the logical operators `&& (AND)` and `|| (OR)` in JavaScript, but these operators originate from a neighbouring field of science: logic gates in electronics.

## Logic Gates

### Logical Gate AND

In electronics engineering, the AND logic gate is nothing more than a circuit that implements the Boolean logic `A.B`, meaning `A AND B`, which is written in programming as `A && B`.

Let's imagine two switches connected in series with a light bulb. For the bulb (output) to light up `1 / ON / Truthy`, both switches (inputs) need to be closed `1 / ON / Truthy`. If either switch is open `0 / OFF / Falsy`, the circuit is broken, and the bulb stays `off (0)`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/m4rvmd0cu5e0w4gszi34.png)

Now if we consider the light bulb to be Q, Q will only evaluate to true if both A and B allow the current to pass through; otherwise Q will be falsy. Therefore, the expression `A.B = Q` only evaluates to 1 (true) if A=1 and B=1, which is illustrated by what is called in electronics a truth table.
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/962daz5p35sfxp8phlwx.png)

### Logical Operator AND

The expression might differ in programming, but the logic is the same: Q will only be truthy if both A and B are truthy.

```javascript
// AND operator
Q = A && B; // A and B must both be truthy in order for Q to be truthy

// Similar example using only if statements
if (!A) {
  Q = false;
} else if (!B) {
  Q = false;
} else {
  Q = true;
}
```

### Logical Gate OR

Similarly to the AND gate, the OR gate represents a Boolean logic. The Boolean logic for an OR circuit is represented by the plus symbol `+ (OR)`. So, for an OR circuit, the Boolean expression would be `A + B`, which means `A OR B` and is written in programming as `A || B`.

Let's revisit our previous light bulb example. If either switch (input) is `closed (1)`, the light (output) turns `on (1)`. Only when both switches are `open (0)` will the light stay `off (0)`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h0hf6yqp1639963cyn8d.png)

The logic or Boolean expression for an OR gate is `A+B = Q`, which means: if `A or B` is true, then Q is true. Below is the truth table for an `OR gate`.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0kl3467z26c2omli0gkd.png)

### Logical Operator OR

```javascript
// OR operator
Q = A || B; // A or B must be truthy in order for Q to be truthy

// Similar example using only if statements
if (A) {
  Q = true;
} else if (B) {
  Q = true;
} else {
  Q = false;
}
```

### The Why to Logical Operators

As seen in the previous examples, logical operators are cleaner and shorter to write, but that's not convincing enough, right?
When evaluating an expression involving these operators, JavaScript only evaluates one side of the expression if necessary, which is a big win in performance when evaluating conditions that involve costly operations:

- For `&&` (AND):
  - If the left side is falsy, the entire expression is falsy without evaluating the right side.
- For `||` (OR):
  - If the left side is truthy, the entire expression is truthy without evaluating the right side.

*Example:*

```javascript
const isLoggedIn = true;
const hasPremium = true;
const isAdmin = false;
const isGuest = false;

const canViewPremium = isLoggedIn && hasPremium; // canViewPremium = true
const canViewAdmin = isLoggedIn && isAdmin; // canViewAdmin = false
const canNavigate = isLoggedIn || isGuest; // true because isLoggedIn is true, so isGuest is never evaluated
```

- We can also use logical operators for conditional execution of functions:

```javascript
const isLoggedIn = true;
const isAdmin = false;
const hasPremium = true;

isLoggedIn && RedirectToHome(); // the function will be executed
isAdmin && isLoggedIn && RedirectToAdminPanel(); // the function will not be executed
(isLoggedIn && (isAdmin || hasPremium)) && RedirectToPremium(); // the function will be executed
```

- One more use case for logical operators is fallback or default values using the OR operator.
- These come in handy when dealing with values that could be falsy: `undefined`, `null`, `false`, `0`, empty string `""` (pay attention to what we are considering falsy here).

```javascript
const user = {};
const isAdmin = user.isAdmin || false; // if user.isAdmin is not defined, isAdmin defaults to false
```

## Nullish Coalescing Operator (??) (Introduced in ES2020)

The nullish coalescing operator was introduced to complement the logical OR operator by providing a way to assign a default value ONLY if the left operand is `null` or `undefined`.
It obviously differs from `|| (OR)`, because `|| (OR)` treats all falsy values (including `false`, `0`, and the empty string `""`) the same as `null` or `undefined`, while `??` falls back to the default only for `null` and `undefined`.

*Example:*

```javascript
let user = { name: '' };
let name1 = user.name || "Default User"; // name1 will be "Default User"
let name2 = user.name ?? "Default User"; // name2 will be '' (empty string is not nullish)

let maybeValue = 0;
let result = maybeValue ?? 10; // result will be 0 (falsy but not null/undefined)
```

*Key Points:*

| Feature | Behavior |
|--------------------|------------------------------------------------------|
| Short-Circuiting | Optimizes evaluation of logical expressions |
| Nullish Coalescing | Assigns a default value specifically for null/undefined |
| OR Operator | Considers all falsy values as equivalent to null/undefined |

*When to Use:*

- Use short-circuiting for conditional logic where you only need to evaluate one side of the expression based on the other side's truthiness/falsiness.
- Use nullish coalescing when you want to provide a default value only for null or undefined cases, and other falsy values should retain their meaning.

*Additional Resources*

[Short Circuiting and Nullish-Coalescing Operators](https://medium.com/@rabailzaheer/short-circuiting-and-nullish-coalescing-advanced-techniques-ad453b6a385a#:~:text=In%20summary%2C%20short%2Dcircuiting%20and,when%20dealing%20with%20falsy%20values.)
hatemtemimi
1,876,916
OpenSSF Case Study: Enhancing Open Source Security with Sigstore at Stacklok
Stacklok was founded in 2023 by Craig McLuckie (co-creator of Kubernetes) and Luke Hinds (creator of...
0
2024-06-04T16:15:57
https://dev.to/ninfriendos1/openssf-case-study-enhancing-open-source-security-with-sigstore-at-stacklok-50h0
siggstore, openssf, opensource, security
Stacklok was founded in 2023 by Craig McLuckie (co-creator of [Kubernetes](https://kubernetes.io/)) and Luke Hinds (creator of the OpenSSF project [Sigstore](https://www.sigstore.dev/)), with the goal of helping developers produce and consume open source software more safely. As malicious attacks on open source software continue to grow in number and become more sophisticated (like the recent [XZ Utils incident](https://stacklok.com/blog/the-good-the-bad-and-the-ugly-of-the-xz-vulnerability)), governments and organizations are calling for increased security and protection against these attacks. Yet open source maintainers—who are often unpaid volunteers, with other full-time jobs—lack the time to stay up to speed on security best practices, and access to freely available tools that can proactively keep their software secure. To help open source communities and developers produce and consume open source software more safely, Stacklok is harnessing the power of Sigstore, highlighted in this case study. Read the rest of the case study [here](https://openssf.org/blog/2024/06/04/openssf-case-study-enhancing-open-source-security-with-sigstore-at-stacklok/) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pce7p76wimm08p36lzxk.png)
ninfriendos1
1,876,911
Google Play developer
Hello, I do not want a used Google Play developer account (verified) for sale, bypassed application...
0
2024-06-04T16:14:51
https://dev.to/taha_iraq_37650aded2ca6b5/google-play-developer-3dc
Hello, I do not want a used Google Play developer account (verified) for sale, bypassed application verification
taha_iraq_37650aded2ca6b5
1,876,906
Discover the Power of Node.js: Highlights from Zibtek's Blog
Node.js has revolutionized the way developers approach web development, offering a versatile and...
0
2024-06-04T16:09:31
https://dev.to/cachemerrill/discover-the-power-of-nodejs-highlights-from-zibteks-blog-160d
node, mean, javascript
Node.js has revolutionized the way developers approach web development, offering a versatile and efficient platform for building server-side applications. At Zibtek, we have extensively covered various aspects of Node.js, highlighting its benefits, use cases, and comparisons with other technologies. Here are some of the key articles from our blog that delve into the world of Node.js: 1 - [Node.js Development: What it is and Why You Want To Use It](https://www.zibtek.com/blog/node-js-development-what-it-is-and-why-you-want-to-use-it/) This article provides an in-depth look at Node.js, explaining its core features and why it has become a preferred choice for developers. It covers the benefits of using Node.js, such as its non-blocking I/O model, which enhances performance and scalability. 2 - [Why Choose Node.js for Your Web Development Needs?](https://www.zibtek.com/blog/why-choose-node-js-for-your-web-development-needs/) Here, we explore the reasons why Node.js is an excellent choice for web development. The article discusses its ability to handle concurrent connections efficiently, making it ideal for real-time applications. It also highlights how Node.js uses JavaScript for both frontend and backend development, streamlining the development process. 3 - [Node.js vs. Apache: Choosing the Right Tool for Web Development](https://www.zibtek.com/blog/node-vs-apache/#:~:text=If%20you%20need%20a%20server,applications%2C%20Apache%20provides%20proven%20reliability) In this comparative piece, we examine the differences between Node.js and Apache. The article outlines the advantages of Node.js, such as its lightweight and efficient nature, particularly for real-time applications, and contrasts these with the traditional web server approach of Apache. 
4 - [Types of Applications You Can Build With Node.js](https://www.zibtek.com/blog/types-of-applications-you-can-build-with-node-js/) This article showcases the versatility of Node.js by discussing various types of applications that can be built using this technology. From real-time chat applications to complex single-page applications, Node.js offers the tools and frameworks needed to develop a wide range of solutions. 5 - [The MEAN/MERN Advantage: The Stats Behind the Stack](https://www.zibtek.com/blog/the-mean-mern-advantage-the-stats-behind-the-stack/) Node.js plays a crucial role in the MEAN and MERN stacks. This article highlights the performance statistics of Node.js within these popular stacks, emphasizing its ability to handle up to 20,000 concurrent connections and its suitability for modern web applications that manage large volumes of data. 6 - [How to Choose the Right Technology Stack for Your Next Project](https://www.zibtek.com/blog/how-to-choose-the-right-technology-stack-for-your-next-project/) Choosing the right technology stack is critical for project success. This blog post discusses various factors to consider, including the high performance and resource efficiency of Node.js, which make it a strong candidate for I/O-intensive applications. 7 - [Selecting the Right Programming Language for My App Dev Project](https://www.zibtek.com/blog/backend-programming-language-project/) When deciding on a programming language for your project, Node.js offers several advantages. This article explains how Node.js, with its event-driven architecture and use of JavaScript, provides a robust and scalable solution for both frontend and backend development. 8 - [A Beginner's Guide to the Many Types of Software Development](https://www.zibtek.com/blog/a-beginners-guide-to-the-many-types-of-software-development/) For those new to software development, this guide covers various development types, including how Node.js fits into cloud development. 
It highlights the flexibility of Node.js in creating cloud storage applications and other web-based solutions. 9 - [Expert MEAN Developers Empowering Businesses](https://www.zibtek.com/mean-stack-development-company#meanStackfaq8) Node.js is a cornerstone of the MEAN stack. This article details how MEAN developers use Node.js to create modular and maintainable server-side code, enhancing overall development efficiency and application performance. These articles collectively provide a comprehensive overview of Node.js, its capabilities, and its impact on modern web development. Whether you're a developer looking to deepen your understanding or a business considering Node.js for your next project, Zibtek's blog offers valuable insights and expert guidance.
cachemerrill
1,872,115
Taming FluxCD HelmReleases: The Kustomize Way approach
HelmRelease looks like a good idea… but it has many problems. Let's see how to do better without HelmRelease for a better GitOps
0
2024-06-04T16:00:00
https://dev.to/davinkevin/taming-fluxcd-helmreleases-the-kustomize-way-approach-48l8
kubernetes, fluxcd, yaml, helm
---
title: "Taming FluxCD HelmReleases: The Kustomize Way approach"
published: true
description: HelmRelease looks like a good idea… but it has many problems. Let's see how to do better without HelmRelease for a better GitOps
tags: kubernetes, fluxcd, yaml, helm
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/74sq5hjvn4w8fths8yb1.jpg
# published_at: 2024-06-04 18:00 +0200
---

[FluxCD](https://fluxcd.io/) is a powerful tool for managing deployments in Kubernetes using GitOps principles. While it offers a wide range of features, this post will explore scenarios where a simpler approach might be preferable, aligning with the #SimplerIsBetter philosophy.

**NOTE**: This article shares insights and perspectives gained through experience in various contexts and companies. Feel free to disagree, but with respect! 😉

## `HelmRelease`, what is this?

[`HelmRelease` is a custom resource provided by FluxCD](https://fluxcd.io/flux/use-cases/helm/), which gives users a way to automatically install `helm` charts using FluxCD and its declarative system.

For example, if you want to install the [PodInfo](https://github.com/stefanprodan/podinfo) app, you have to declare the following manifest:

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
spec:
  chart:
    spec:
      chart: podinfo
      version: '6.5.*'
      sourceRef:
        kind: HelmRepository
        name: podinfo
  interval: 5m
  releaseName: podinfo
  values: # part dedicated to all the `values` the chart accepts
    replicaCount: 2
```

**NOTE**: For the sake of simplicity, I kept only relevant attributes, but I can say FluxCD offers a rich API to cover most use cases.
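The `sourceRef` in the manifest above points at a `HelmRepository` object that has to exist as well. A minimal sketch of it could look like this (assuming the upstream PodInfo chart repository URL; depending on your Flux version, the `apiVersion` may be `source.toolkit.fluxcd.io/v1beta2` instead):

```yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: podinfo
spec:
  interval: 5m
  url: https://stefanprodan.github.io/podinfo
```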
![Workflow](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lwizd0gvqtzctacvfjs5.png)

The simplified workflow can be described like this:

* The `HelmOperator` verifies and retrieves the Helm chart from a remote source like GitHub, GitLab, Artifactory or an OCI registry (recommended)
* It then renders the chart using configuration from the `HelmRelease` object, which can include values provided in different ways (inlined, using a ConfigMap or a Secret)
* Finally, the `HelmOperator` applies the generated `YAML` manifests to the Kubernetes API to install or upgrade the application.

This approach can work for simple deployments, but as an operator, I've encountered several design drawbacks that I'd like to discuss. Let's see these issues!

**NOTE** Like every GitOps solution, FluxCD requires a connection to a `GitRepository` too. To keep the diagram simple, I haven't shown this item.

## GitOps, aka "only source of truth"

The GitOps philosophy is built on top of 3 key principles:

* **Declarative System**: You define the desired state of your system (what you want) using declarative language ([4GL](https://en.wikipedia.org/wiki/Fourth-generation_programming_language)). This approach focuses on the "what" instead of the "how," making your configuration easier to understand and maintain.
* **System State Captured in a Git Repository**: The desired state of your system is stored in a Git repository. This provides a central location for managing your infrastructure configurations, enabling version control, collaboration, and easy rollbacks if needed.
* **Automatic Deployment System**: Any changes pushed to the Git repository trigger an automated deployment process. This automates the process of translating your desired state into actual changes within your system, reducing manual intervention and the risk of errors.
![3 pillars](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ilcr9vf1y9jwia24au1.jpg)

The `HelmRelease` is a **declarative** approach, where **automation** is managed by the controller. However, the principle that the "**desired state of your system is stored in a Git repository**", and all it implies, is not respected.

### External Chart Dependencies: A Potential Weak Point

A core principle of building resilient systems is ensuring the availability of all their components. When using `HelmRelease`, we introduce an external dependency: the location where the Helm chart resides.

![HelmRelease not able to fetch chart](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/rr35t0rbcs9ilcb0mdr4.png)

If communication with this external location fails (due to server downtime, network issues, etc.), you might not be able to install or update your application. This introduces a potential single point of failure (SPOF) in your deployment process.

Additionally, the desired state of your cluster is not solely defined by your `git` repository; it also depends on all the charts your system downloads.

### Limited Visibility into Helm Chart Content

Another purpose of the **System State Captured in a Git Repository** principle is auditability. If you capture the complete state of a system in `git`, you can review it before an installation or upgrade.

However, using `HelmRelease` introduces a layer of opacity. While you declare the specific chart you want to use, you might not have a complete picture of what resources the chart will actually install in your cluster. It could potentially create various resources like `ClusterRoles`, `NetworkPolicies`, or `DaemonSets`. You can't tell without… deploying it, running it locally or, worse, reading the chart's source. 😞

{% twitter 1648619974934642689 %}

### Lack of Immutability in Helm Charts!

The problem we already have with container images applies in the same way to `helm` charts.
A chart produced and published at a specific date might be un-published or re-published with different content. In those cases, you expose your system to the two previous points again…

![representation of immutability with zebras…](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wnafjo7x5jahkfhlycwb.jpg)

**NOTE** We can now store charts in OCI registries, with an immutability feature. To leverage it, you have to complexify your system with `digest` pinning, because in security, we can't blindly trust a 3rd party 😇.

### Chart customisation, another nightmare 👻

While Helm charts offer a convenient way to package deployments, maintaining them can be challenging due to their complexity. Kubernetes itself provides a wide range of configuration options, which can further complicate matters.

This complexity can lead to situations where the desired configuration isn't readily available within a chart. For example, you might want to add an annotation to a workload or modify taints and tolerations, but the chart may not offer built-in ways to do so.

{% twitter 1622925402845970433 %}

To address this challenge, FluxCD introduced the concept of `postRenderers` (see the [documentation](https://fluxcd.io/flux/components/helm/helmreleases/#post-renderers)) within `HelmRelease` resources. This feature leverages the Kustomize API to customize deployments after the initial Helm chart rendering.

```yaml
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: podinfo
spec:
  releaseName: podinfo
  chart: { … }
  values: { … }
  postRenderers:
    - kustomize:
        patches:
          - target:
              version: v1
              kind: Deployment
              name: metrics-server
            patch: |
              - op: add
                path: /metadata/labels/environment
                value: production
        images:
          - name: docker.io/bitnami/metrics-server
            newName: docker.io/bitnami/metrics-server
            newTag: 0.4.1-debian-10-r54
```

The configuration for `postRenderers` is separate from the Helm chart result.
This can make it difficult to understand the complete picture of how the final deployment will be configured, potentially leading to hidden errors. 🤯

FluxCD doesn't provide robust mechanisms to analyze the resources created after the combined rendering and post-rendering steps. This can make troubleshooting issues arising from these modifications cumbersome. 😞

## Solution is simplicity 🚀!

We've discussed the challenges associated with relying solely on Helm charts within FluxCD deployments. These challenges can compromise the visibility, maintainability, and overall health of your GitOps workflow.

So, how can we achieve the ideal balance: a declarative system, automatic deployments, and a clear picture of your system state captured entirely within your Git repository?

![human printing document with an old machine made of wood](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7zogjy9jj9snirmvk8nb.jpg)

### Render the chart and store it into `git` for better auditability

The answer lies in a simple yet powerful sub-command – `helm template` (or `helmfile template` if you use [Helmfile](https://helmfile.readthedocs.io/)). This command allows you to locally render helm charts along with your desired values, generating the final deployment manifest files.

```shell
$ helm repo add podinfo https://stefanprodan.github.io/podinfo
"podinfo" has been added to your repositories

$ helm repo update podinfo
Hang tight while we grab the latest from your chart repositories...
...Successfully got an update from the "podinfo" chart repository
Update Complete. ⎈Happy Helming!⎈

$ echo "replicaCount: 2" > values.yaml

$ helm template podinfo/podinfo -f values.yaml --version 6.6.3 > podinfo.yaml
```

And that's it! It was not so complicated 😇. We have a file called `podinfo.yaml`, located in `/k8s/podinfo` of our `git` repository. This file is now yours: it can be read, analyzed and pushed to `kubernetes`.
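A nice side effect of having the rendered file in the repository is that its full footprint can be inspected with ordinary tools. For instance, here is a sketch that lists every resource kind a rendered file would create (the two-document `podinfo.yaml` created below is only a stand-in so the snippet is self-contained; in practice you would point the `grep` at the file produced by `helm template`):

```shell
# Stand-in for a file produced by `helm template` (demo only).
cat > podinfo.yaml <<'EOF'
apiVersion: v1
kind: Service
---
apiVersion: apps/v1
kind: Deployment
EOF

# List every resource kind the rendered chart would create,
# so reviewers see the deployment footprint at a glance.
grep -E '^kind:' podinfo.yaml | sort | uniq -c
```

This answers the "Limited Visibility" concern from earlier: no more guessing whether a chart ships a `ClusterRole` or a `DaemonSet`.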
After this generation, your system is free from the external chart registry where the chart is located.

**NOTE** I recommend that you **never** modify a file "generated" by another tool, because you will lose this modification during the next rendering. tl;dr: treat them as "read-only".

### How to deploy generated files with FluxCD?

Instead of using `HelmRelease`, we're going to use the `Kustomization` resource provided by FluxCD. It is way simpler than `HelmRelease`, because it just deploys manifests located at a specific path.

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: podinfo
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: our-gitops-repository
  path: "/k8s/podinfo" # our location in our GitOps repo
  prune: true
  timeout: 1m
```

Because we use the `GitRepository` called `our-gitops-repository`, which is eventually used by `FluxCD` itself, there is no extra dependency in our system.

### Direct Customization with Kustomize

While `HelmRelease` offers `postRenderers` for some customizations, the `Kustomization` resource provides full access to the powerful Kustomize capabilities. This allows for more granular and flexible control over your manifests.

Here's how to achieve a similar customization as the previous `postRenderer` example using a `kustomization.yaml` file:

```yaml
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - podinfo.yaml # your generated file from the previous step
patches:
  - target:
      version: v1
      kind: Deployment
      name: metrics-server
    patch: |
      - op: add
        path: /metadata/labels/environment
        value: production
images:
  - name: docker.io/bitnami/metrics-server
    newName: docker.io/bitnami/metrics-server
    newTag: 0.4.1-debian-10-r54
```
You can manipulate resources using features like `namePrefix`, `labels`, `replacements`, `components`… The list is too long to be detailed here 😇. As a bonus point, you can run `kustomize build /k8s/podinfo/` and see the complete result of the generation before any interaction with FluxCD. ### Enjoy reviews and audit with rich diff! One of the significant advantages of managing your full Kubernetes state with GitOps is the ability to leverage Git's powerful version control capabilities for reviewing and auditing deployments. From an operator perspective, there is nothing better than a clear and detailed diff views during tool upgrade: ![rancher/local-path-provisioner to v0.0.27 diff view](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tl6j9n7c5uusbhp8k7r9.png) Obviously, your IDE will be your best friend to understand what happened, with clear context and details of changes: ![cert-manager modification history](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vlxqzs8koih0coeacccs.png) **NOTE** Upgrade can be automated using tools like `renovate` or `dependabot` ## Conclusion While Helm charts offer a convenient way to package deployments, maintaining them in a FluxCD workflow can introduce challenges related to **transparency**, **maintainability**, and **control**. This article explored the limitations of `HelmReleases` and presented `helm template …` as a more powerful and flexible alternative, leveraging FluxCD `Kustomize` resource. Using Kustomize directly within your `git` repository, you gain greater control, visibility, and the benefits of Git version control for reviewing and auditing changes. Ultimately, by adopting a Kustomize-based approach within your FluxCD workflows, you can achieve a more **declarative**, **transparent**, and **auditable** approach to managing your Kubernetes deployments.
davinkevin
1,876,903
From Monolith to Microservices or Enhanced SOA: A Comprehensive Guide to Modernizing Your Application Architecture
The software world is changing fast. Monolithic applications, once the go-to choice, are struggling...
0
2024-06-04T15:59:35
https://dev.to/marufhossain/from-monolith-to-microservices-or-enhanced-soa-a-comprehensive-guide-to-modernizing-your-application-architecture-26fa
The software world is changing fast. Monolithic applications, once the go-to choice, are struggling to keep up with the demands of today's dynamic business landscape. Imagine a giant, single server holding all your application code – that's a monolith. It works well at first, but as features pile on, it becomes sluggish and cumbersome to update. This is where application architecture modernization comes in. It's like giving your application a complete makeover, transforming it from a clunky monolith into a modern, agile system.

So, what's the problem with monoliths? They're simple to build initially, but as your application grows, making changes becomes a nightmare. Updating a single feature means redeploying the entire application, slowing down development and innovation. Monoliths also struggle to scale – imagine trying to add more processing power to a single server – it's not easy!

This is where the need for modernization arises. Modern software development demands fast releases, continuous updates, and the ability to handle ever-growing user bases. Monoliths just can't keep up. Here's where two exciting options emerge: microservices and enhanced SOA.

Microservices take a revolutionary approach. Imagine breaking down your application into tiny, independent teams, each working on a specific function. These are microservices – self-contained, specialized pieces that work together seamlessly. Need to update a login feature? Simply deploy the updated login microservice without affecting other parts of the application. This approach offers incredible agility, but managing a swarm of microservices can get complex.

Enhanced SOA, on the other hand, takes a more organized approach. Think of it like a well-oiled machine, where everyone follows the same rules and uses compatible tools. SOA stands for Service-Oriented Architecture, where reusable services communicate via standardized protocols.
This makes them easy to integrate with other applications and promotes a well-governed service ecosystem. But with all this structure comes a bit of extra planning needed upfront, potentially leading to slower development cycles.

So, [microservices vs SOA architecture](https://www.clickittech.com/devops/microservices-vs-soa-architecture/?utm_source=backlinks&utm_medium=referral) – which one should you choose? It depends! Here's a breakdown to help you decide:

* **Project Complexity:** For simpler applications, microservices can be a good choice due to their faster initial setup. However, for larger projects, the complexity of managing numerous services might favor SOA's structured approach.
* **Team Expertise:** If your team thrives in a fast-paced, independent development environment, microservices could be a great fit. However, if your team is comfortable with standardized service design, SOA might be a more natural choice.
* **Existing Infrastructure:** Consider your current setup. Does your existing infrastructure easily support containerization technologies often used with microservices? Or would the more centralized approach of SOA work better?

Ultimately, both microservices and enhanced SOA can achieve peak performance and agility when implemented effectively. Microservices leverage containerization for efficient deployment and scaling, while enhanced SOA benefits from service governance frameworks to ensure consistent service behavior.

The key takeaway is to understand the strengths and weaknesses of each approach. Carefully evaluate your project requirements and choose the modernization path that best aligns with your goals for a more agile, scalable, and future-proof application architecture. Remember, this guide is a starting point. Embrace the journey to a modern app architecture, and watch your business soar!
marufhossain
1,876,901
I Got Tired of the Way I Type Parentheses on a QWERTY Keyboard
I was used to AZERTY layout I was forced to use AZERTY keyboards because, they are the...
0
2024-06-04T15:57:50
https://dev.to/zinnwan/i-got-tired-of-the-way-i-type-parentheses-on-a-qwerty-keyboard-55ff
linux, keymap, qwerty
<h2 id="i-was-used-to-azerty-layout">I was used to AZERTY layout</h2> <p>I was forced to use AZERTY keyboards because they are the default layout in my region. Recently I acquired a QWERTY keyboard. I’m getting used to it, but there are some keys that shouldn’t be placed the way they are. My first and major issue was the parentheses key. To type a left parenthesis you need to hold the SHIFT_KEY and press 9. I use left parentheses quite a lot, so I got tired and tried to change it.</p> <h2 id="heres-how-i-changed-my-keymaps">Here’s how I changed my keymaps</h2> <p>I should mention that I’m on Fedora; this should work on all Linux machines (sorry Windows and Mac users).</p> <ul> <li>In your home directory, create a file and name it “.Xmodmap”.</li> <li>Open the file in your text editor. <ul> <li>Here is the basic syntax:</li> </ul> <pre><code>keycode &#39;Number&#39; = &#39;keysym&#39; [SHIFT] &#39;keysym&#39;
keycode 59 = comma parenleft</code></pre> <ul> <li>The keycode 59 corresponds to the comma and less key (, &amp; &lt;).</li> <li>The keysym is what you want it to type, e.g.: comma, f, slash, etc.</li> <li>The second argument is what is typed when holding the SHIFT_KEY (don’t write ‘[SHIFT]’).</li> </ul></li> <li>Get the keycodes however you want. I used a tool called ‘xev’. <ul> <li>In the terminal run ‘xev’.
Press a key, it’ll give you this:</li> </ul> <pre><code>KeyRelease event, serial 38, synthetic NO, window 0x2000001,
    root 0x4a4, subw 0x0, time 2789210, (905,497), root:(1006,637),
    state 0x0, keycode 32 (keysym 0x6f, o), same_screen YES,
    XKeysymToKeycode returns keycode: 19
    XLookupString gives 1 bytes: (6f) &quot;o&quot;
    XFilterEvent returns: False</code></pre> <ul> <li>I typed ‘o’ (not zero), and voila: keycode 32, and keysym ‘o’ (line 3).</li> </ul></li> <li>When you’re done, run the script in the terminal:</li> </ul> <pre><code>$ xmodmap ~/.Xmodmap</code></pre> <h2 id="but-youll-have-to-run-the-command-each-time-you-login">But you’ll have to run the command each time you login</h2> <p>I ran the command, and it was working. I was easily typing parentheses because I switched them with less-than and greater-than ‘&lt; &amp; &gt;’. But when I restarted the computer, my modifications were gone. I was back to the chore of typing parentheses. So now, every time I log in, I have to open the terminal and run the command.</p> <p>Just kidding, you can make the command run automatically upon login. There are multiple ways to do it.
I tried some, but only one worked for me.</p> <ul> <li><a href="https://www.baeldung.com/linux/run-script-on-startup">This</a> article explores 4 ways to do it.</li> <li><a href="http://xahlee.info/linux/linux_xmodmap_tutorial.html">This</a> one here explains the whole process (change the keymapping and auto-execute it) in depth.</li> <li>There is also a <a href="https://www.youtube.com/watch?v=jcE8U1lG514&amp;t=377s&amp;ab_channel=SpartanCC">video</a>.</li> </ul> <p>However, what worked for me was over on <a href="https://stackoverflow.com/questions/8247706/start-script-when-gnome-starts-up/8290652#8290652">stackoverflow</a>.</p> <ul> <li>Basically, make a bash script (‘example.sh’), put the command in it:</li> </ul> <pre><code>#!/bin/sh
xmodmap ~/.Xmodmap</code></pre> <p>and make it executable:</p> <pre><code>$ chmod +x /path/to/example.sh</code></pre> <ul> <li>Head over to this directory (/etc/xdg/autostart).</li> <li>Create a file with a ‘.desktop’ extension.</li> <li>Put this code in it:</li> </ul> <pre><code>[Desktop Entry]
Name=MyScript
GenericName=A descriptive name
Comment=Some description about your script
Exec=/path/to/example.sh
Terminal=false
Type=Application
X-GNOME-Autostart-enabled=true</code></pre> <ul> <li>Don’t forget to replace /path/to/example.sh with the actual path.</li> <li>Reboot your system, and there you have it.</li> </ul> <h2 id="conclusion">Conclusion</h2> <p>I tried multiple methods that didn’t work for me. They might work for someone else. I think that curly braces get used more than square brackets, so that might have to change. Also, backslash could switch with asterisk. However, imagine the reaction of someone using my computer, priceless I know.</p>
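<p>For illustration, here is roughly what the final swap described above (parentheses traded with ‘&lt;’ and ‘&gt;’) could look like as a complete “.Xmodmap”. The keycodes are assumptions based on a common QWERTY layout (59 for comma/less, 60 for period/greater, 18 and 19 for the 9 and 0 keys); check yours with ‘xev’ before copying this:</p> <pre><code>! hypothetical ~/.Xmodmap: trade parentheses with angle brackets
keycode 59 = comma parenleft
keycode 60 = period parenright
keycode 18 = 9 less
keycode 19 = 0 greater</code></pre> <p>Apply it with ‘xmodmap ~/.Xmodmap’ after editing, as shown earlier.</p>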
zinnwan
1,876,900
How to find FunCaptcha | By using CapSolver Extension
Understanding FunCaptcha FunCaptcha is a security solution that uses a variety of advanced...
0
2024-06-04T15:57:43
https://dev.to/trebolese/how-to-identify-funcaptcha-by-using-capsolver-extension-28
## Understanding FunCaptcha

FunCaptcha is a security solution that uses a variety of advanced technologies to enhance security and user experience, including:

- **Behavioral Analysis**: This feature tracks user behavior during the challenge, such as mouse movements and click patterns, to differentiate between humans and bots.
- **Dynamic Challenges**: The challenges are constantly updated and differ each time, making it more difficult for bots to learn and solve them.
- **Adaptive Mechanisms**: The system adjusts the challenge's difficulty based on the user's risk profile and behavior, ensuring minimal disruption for legitimate users.
- **Device Fingerprinting**: It gathers and analyzes device and browser information to identify potential threats.
- **Machine Learning**: It uses machine learning models to analyze large amounts of data and improve the accuracy of distinguishing between human and automated traffic.

## Identifying FunCaptcha Usage with CapSolver Extension

In addition to finding the FunCaptcha Site Key, it may sometimes be necessary to determine if FunCaptcha is in use.

### Steps to Detect FunCaptcha Parameters:

1. **Installation**:
   * For Chrome users, install the [Captcha Solver Auto Solve](https://chrome.google.com/webstore/detail/captcha-solver-auto-bypas/pgojnojmmhpofjgdmaebadhbocahppod) extension.
   * For Firefox users, install the [Captcha Solver Auto Solve](https://addons.mozilla.org/en-US/firefox/addon/capsolver-captcha-solver/) extension.
2. **CapSolver Setup**:
   * Go to [CapSolver](https://www.capsolver.com/).
   * Press "F12" on your keyboard to open the developer tools.
   * Go to the **CapSolver Captcha Detector** tab.
3. **Detection**:
   * Keep the CapSolver panel open and visit the website where you want to trigger the CAPTCHA.
   * Trigger the captcha.
   * Note: **Do not close** the CapSolver panel before triggering the CAPTCHA.
### Identifying FunCaptcha Parameters:

The parameters for FunCaptcha that can be detected are:

* Website URL
* Site Key
* funcaptcha
* funcaptcha api js subdomain
* data blob

Once the CAPTCHA parameters are detected, CapSolver will provide a JSON detailing how to submit the captcha parameters to their service.

### Identifying FunCaptcha Usage:

1. **Open Developer Tools**: Press `F12` to open the developer tools or right-click on the webpage and select "Inspect".
2. **Open the CapSolver Panel**: Go to the Captcha Detector Panel.
3. **Trigger the FunCaptcha**: Perform the action that triggers the FunCaptcha on the webpage.
4. **Check the CapSolver Panel**: Look at the CapSolver Captcha Detector tab in the developer tools. If the `FunCaptcha` parameter is set to `true`, then FunCaptcha is being used on the site.

By following these steps, you can easily determine if FunCaptcha is being used on a website. Always use such tools responsibly and ethically, respecting the terms of service of the websites you interact with.

For further assistance, you can contact CapSolver via email at [support@capsolver.com](mailto:support@capsolver.com).
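To give a rough idea of the JSON mentioned above, a submission payload built from the detected parameters might look something like the sketch below. This is purely illustrative: the task type name, field names, and the `data` blob format are assumptions and should be checked against CapSolver's current API documentation, and all values here are placeholders.

```json
{
  "clientKey": "YOUR_CAPSOLVER_API_KEY",
  "task": {
    "type": "FunCaptchaTaskProxyLess",
    "websiteURL": "https://example.com/login",
    "websitePublicKey": "DETECTED-SITE-KEY",
    "data": "{\"blob\":\"DETECTED-DATA-BLOB\"}"
  }
}
```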
trebolese
1,876,899
I need help in node ,anyone up?
sequelize //.sync({ force: true })it's task to overide the table. .sync() .then((result) =&gt;...
0
2024-06-04T15:57:10
https://dev.to/nishant_ba92a27bea59a6002/i-need-help-in-node-anyone-up-1434
help
```js
sequelize
  // .sync({ force: true }) // its task is to override the table
  .sync()
  .then((result) => {
    return User.findByPk(1);
    // app.listen(3000);
  })
  .then((user) => {
    if (!user) {
      console.log("ankuuser", user);
      return User.create({ name: "Max", email: "test@test.com" });
    }
    return user;
  })
  .then((user) => {
    console.log("user of nishant", user);
    app.listen(3000);
  })
  .catch((err) => {
    console.log(err);
  });
```

error:

```
original: Error: Field 'id' doesn't have a default value
```
nishant_ba92a27bea59a6002
1,876,898
How to identify the extra parameters required for solve Google ReCaptcha
Deciphering Additional Parameters reCaptcha v2: reCaptcha versions: These come in...
0
2024-06-04T15:55:17
https://dev.to/trebolese/how-to-identify-the-extra-parameters-required-for-solve-google-recaptcha-4enp
# Deciphering Additional Parameters

**reCaptcha v2:**

- **reCaptcha versions**: These come in different forms:
  - **reCaptcha v2 Normal**: The basic version where users solve a puzzle to confirm they're not a bot.
  - **reCaptcha v2 Enterprise**: A premium, paid version with enhanced features and customization options, tailored for businesses needing superior security.
  - **reCaptcha v2 Invisible**: A version that functions behind the scenes, generally not requiring user interaction. It employs a risk analysis engine to distinguish between users and bots.
- **pageAction**: This is a parameter present in some site anchor endpoints. It signifies the 'action' value and is utilized in reCaptcha v3 for risk analysis.
- **enterprise payload**: This pertains to the 's-data' in reCaptcha, which is a collection of data dispatched to reCaptcha Enterprise for examination. The exact contents are contingent on the specific requirements of reCaptcha Enterprise.
- **Anchor**: This is supplementary data procured from the CapSolver extension. In the context of reCaptcha, it could denote the part of the page where the reCaptcha widget is anchored.
- **Reload**: This is also supplementary data procured from the CapSolver extension. It could denote the action of reloading the reCaptcha widget, for example, if the user desires a new challenge.
- **ApiDomain**: This is the domain address from which to load reCAPTCHA Enterprise. Examples include 'http://www.google.com/' and 'http://www.recaptcha.net/'. This parameter should be used only if its purpose is clear. Avoid using a parameter if its necessity is not understood.

![](https://assets.capsolver.com/prod/images/post/2024-06-04/9309601c-3b34-4db3-a594-7ca073928a38.png)

The presence of these parameters is not always guaranteed and depends on the site key.

# Detecting Additional Parameters

The CapSolver extension can ascertain if a site key necessitates these additional parameters.

### Steps to Identify reCAPTCHA Parameters:

1.
**Installation**:
   - For Chrome users, install the [Captcha Solver Auto Solve](https://chrome.google.com/webstore/detail/captcha-solver-auto-bypas/pgojnojmmhpofjgdmaebadhbocahppod) extension.
   - For Firefox users, install the [Captcha Solver Auto Solve](https://addons.mozilla.org/en-US/firefox/addon/capsolver-captcha-solver/) extension.
2. **CapSolver Configuration**:
   - Navigate to [CapSolver](https://www.capsolver.com/).
   - Press "F12" on your keyboard to launch the developer tools.
   - Proceed to the **CapSolver Captcha Detector** tab.
   ![](https://assets.capsolver.com/prod/images/post/2023-10-31/2115f05d-a7eb-40b6-9693-53baa45d39a9.png)
3. **Detection**:
   - Keep the CapSolver panel open and browse the website where you wish to activate the CAPTCHA.
   - Activate the captcha.
   - Note: **Do not close** the CapSolver panel prior to activating the CAPTCHA.

### CAPTCHA Parameter Identification:

#### Parameters for reCAPTCHA that can be detected:

- Website URL
- Site Key
- isInvisible
- pageAction
- isEnterprise
- isSRequired
- isReCaptchaV3
- Api Domain
- Anchor
- Reload

Once the CAPTCHA parameters have been identified, CapSolver will provide a JSON detailing how to submit the captcha parameters to their service. The extension panel will exhibit the information like this:

![](https://assets.capsolver.com/prod/images/post/2024-06-04/22c56797-bbde-4ba5-9a2a-115fb4715449.png)
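As a rough sketch of what such a JSON submission could look like for a detected reCaptcha v2 site, consider the example below. It is illustrative only: the task type name and field names are assumptions that should be verified against CapSolver's current API documentation, and every value is a placeholder.

```json
{
  "clientKey": "YOUR_CAPSOLVER_API_KEY",
  "task": {
    "type": "ReCaptchaV2TaskProxyLess",
    "websiteURL": "https://example.com/login",
    "websiteKey": "DETECTED-SITE-KEY",
    "isInvisible": false
  }
}
```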
trebolese
1,876,897
What is zoom in Web Design? A funny story.
Almost every software engineer comes across a time where they start designing. And when you are...
0
2024-06-04T15:54:44
https://dev.to/alishgiri/what-is-zoom-in-web-design-3ddb
webdev, design, css
Almost every software engineer comes across a time where they start designing. And when you are designing your own product, you want to make sure that it is close to perfection!

But something terrible happened which made me realize the importance of setting the browser's zoom to 100% when designing on the web, lol.

Since I have a habit of zooming out of most of my frequently visited websites so that everything on the screen looks tiny, I accidentally designed my website at 85% zoom in Safari and made a huge blunder by killing the UX, as the elements and images were too huge on my clients' laptops. 🤯 And yeah, I discovered this after 4 months of development.

I also came across an interesting fact that it is better to design in the 'em' unit in CSS, but since I was using TailwindCSS, I needed to adjust the styling of my entire website, which took me two entire days.

But it was a good encounter. I love it when new things pop up and make you more experienced than ever.

Cheers guys, have a great day! 🚀
alishgiri
1,876,895
How to build a chrome extension in 2024
Happy to be here! This is a walkthrough on creating the 'Scrolling Zombie' chrome extension which...
0
2024-06-04T15:52:05
https://dev.to/kalisen/how-to-build-a-chrome-extension-in-2024-1ba3
extensions, chrome, howto
Happy to be here!

This is a walkthrough on creating the 'Scrolling Zombie' chrome extension which tracks the amount of scrolling a user does to make them aware of their browsing behavior and help them fight dark UI patterns.

In this guide I walk you through my typical approach to bringing an idea to life. I cover:

* Problem statement
* Use Cases
* Architecture
* Implementation
* A few Observations (Gotchas)

Read the full guide: https://www.jeromethibaud.com/en/blog/how-to-build-a-chrome-extension/

Link to the extension in the store: [Scrolling Zombie extension in Chrome Web Store](https://chromewebstore.google.com/detail/scrolling-zombie/jihbnbebcilifidcodopcibcaeiegmab)

Have a great day!
kalisen
1,876,894
Unlocking the Power of Expired Domains: A Comprehensive Guide to Buying Them
In the vast landscape of the internet, domain names play a crucial role in establishing a brand's...
0
2024-06-04T15:50:16
https://dev.to/seowhiz/unlocking-the-power-of-expired-domains-a-comprehensive-guide-to-buying-them-2g2b
In the vast landscape of the internet, domain names play a crucial role in establishing a brand's online identity. However, not all domains are created equal. One often-overlooked treasure trove in the world of domains is expired domains. These domains, once active websites that have lapsed, can hold significant value for savvy digital marketers and entrepreneurs looking to boost their online presence. In this comprehensive guide, we will delve into the world of expired domains, exploring their potential benefits, how to find and evaluate them, and the strategies for acquiring these hidden gems.

**What Are Expired Domains?**

Expired domains are domain names that were previously registered by individuals or businesses but were not renewed by the original owners, leading them to become available for registration once again. These domains may have been active websites with established traffic, backlinks, and search engine rankings before expiring. When these domains become available, they present an opportunity for new owners to leverage their existing authority and reputation for their own purposes.

**Benefits of Buying Expired Domains**

- **Established Authority:** Expired domains often come with a history of backlinks, traffic, and search engine rankings, giving the new owner a head start in building their online presence.
- **SEO Value:** Domains with a history of good SEO practices can provide a strong foundation for a new website, helping it rank higher in search engine results.
- **Brand Recognition:** Some expired domains may still retain brand recognition or recall value, making them valuable assets for branding purposes.
- **Traffic Potential:** If an expired domain had significant traffic before expiring, the new owner can potentially redirect this traffic to their own website.

**How to Find and Evaluate Expired Domains**

Finding high-quality expired domains requires a strategic approach.
Here are some methods to consider:

- **Domain Auctions:** Platforms like GoDaddy Auctions, NameJet, and Sedo host auctions for expired domains.
- **Domain Marketplaces:** Websites like Flippa, ExpiredDomains.net, and Dynadot offer a variety of expired domains for sale.
- **Domain Drop-Catching Services:** Services like DropCatch and SnapNames specialize in capturing domains as soon as they become available for registration.

When evaluating expired domains, consider the following factors:

- **Backlink Profile:** Check the quality and relevance of the backlinks pointing to the domain.
- **Traffic History:** Look for domains with a history of consistent traffic to maximize their potential.
- **Domain Authority:** Tools like Moz's Domain Authority can help assess the authority of an expired domain.

**Strategies for Acquiring Expired Domains**

Acquiring expired domains requires a mix of patience, research, and strategy. Here are some tips for successfully buying expired domains:

- **Set Clear Goals:** Define your objectives for buying an expired domain, whether it's for SEO purposes, branding, or traffic acquisition.
- **Research Thoroughly:** Conduct in-depth research on the domain's history, backlinks, and potential risks before making a purchase.
- **Monitor Auctions:** Keep an eye on domain auctions and set up alerts for domains that match your criteria.
- **Negotiate Wisely:** When dealing with private sellers, negotiate prices based on the domain's value and potential.

**Conclusion**

Expired domains represent a unique opportunity for digital marketers and entrepreneurs to unlock the untapped potential of established online assets. By understanding the benefits of expired domains, knowing how to find and evaluate them, and implementing smart acquisition strategies, you can leverage these domains to enhance your online presence and achieve your business goals.
Check out **[SEO Whiz](https://www.konker.io/services/2065/?affid=34fe33)** on [Konker.io](https://Konker.io) to discover killer expired domains and supercharge your online strategy today! Remember, the world of expired domains is vast and full of possibilities. With the right knowledge and approach, you can harness the power of expired domains to propel your digital endeavors to new heights.
seowhiz
1,876,882
Raising a Web Studio Without Selling Your Soul 👹
Building a business is hard. That’s a pretty obvious statement, but there’s so much nuance to it that...
0
2024-06-04T15:47:31
https://houseofgiants.com/blog/raising-a-web-studio-without-selling-your-soul
webdev, learning, career, community
Building a business is hard. That’s a pretty obvious statement, but there’s so much nuance to it that it bears repeating. How am I going to get more work? How am I going to finish the work that I have? What about taxes? Did I send out that status update on time? Are my clients properly informed about the status of their projects? Should I start scaling? Shit. It’s a lot to think about.

I’ve committed to building House of Giants in a way that ensures it will never become a soulless code farm churning out sub-par websites for profit. Here’s what it’s taken to get us this far.

## Avoiding the Scale-At-All-Costs Mind Game ⚖️

I've seen it too many times and to too many awesome companies. The critical point where businesses decide to scale aggressively to take on more work, build faster, churn churn churn. This often means rapid hiring, taking on projects that aren’t the right fit, and pushing quick, generic solutions that don’t scale. Sure, it might bring short-term gains, but it usually leads to long-term headaches. A marked reduction in quality and a noticeable uptick in employee turnover. Nobody is happy doing that work. And the voices of the passionate few get drowned out by the shrieks of account managers making promises nobody can keep.

House of Giants was built to avoid these distractions. We focus on sustainable and thoughtful growth, putting everything we have into the work we love.

## Personalized Solutions Over One-Size-Fits-All 👖

Another massive issue with traditional agencies is their tendency to use the same set of cookie-cutter tech for everyone, with little to no thought into how that tech should grow with a business. This approach might make some processes smoother, but it often fails to address the unique challenges each client faces.

When we start a new project, we dive deep into our partner’s business, objectives, hopes, dreams, and desires.
We conduct thorough discovery sessions to understand the specific challenges and goals of each project we take on. This means **asking the right questions** to get to the root of our partner’s issues and finding the best way to address them.

Every project is different. Sometimes the solution is a robust content management system; other times, it's a custom web application or a mix of several integrations. We tailor our technology recommendations to fit each of our partner’s needs, ensuring the solution we provide them can grow and evolve with their business.

## Emphasizing Collaboration and Partnership 🤝

I can’t tell you how many times I’ve seen people passed from account manager to project manager, to executive and then back down to the production team before any meaningful collaboration has even begun. It’s exhausting, and business owners shouldn’t have to go through that when embarking on a website build. This only serves to break down communication and ensure that some key feature or functionality is built poorly, or not at all.

House of Giants prides itself on maintaining a small, supremely passionate team. Maintaining the small but mighty approach allows us to work closely with our partners, fostering genuine collaboration and partnership. This enables us to truly integrate with our partners and become an extension of them. We find immense satisfaction in understanding the issues of our partners and finding creative ways to solve them.

## Building a Flexible Foundation 🧘

We build solutions that solve our partners’ immediate problems and adapt as their business evolves. This means choosing technologies that are flexible and scalable, allowing for growth and change without needing a complete overhaul.

For small business owners, having a flexible technology foundation is crucial. Markets change, customer bases shift, and businesses need to adapt quickly.
Our approach ensures that our partners are never locked into a single solution but have the tools and flexibility to pivot as needed.

## Staying Connected to the Work 🔌

One of my biggest fears in scaling House of Giants is losing touch with the production work. So many business strategists and YouTube evangelists advocate for scaling to free up the owner’s time, but this often results in a massive disconnect from the fun of the production. If we’re not enjoying at least some of what we do daily, then why are we doing it at all? Every single time, when leadership becomes disconnected from production, it results in a painful tone-deaf existence and yields massive distrust from anyone else within the production team.

To combat this, I stay involved and thrive in the production process. This helps me maintain a clear vision of how House of Giants can improve and innovate. It also ensures that I remain connected to the quality of our work and the happiness of the partners and colleagues we choose to work with.

All this is to say that building a successful web studio doesn’t mean losing touch with your values or the quality of the work you produce. By avoiding the scale-at-all-costs mentality, focusing on thoughtfully built solutions, fostering collaboration, and staying intimately familiar with the production process, you can create a studio that not only thrives but also delivers exceptional value to your partners.

Running a web studio, or any business for that matter, is hard as hell. But with the right approach, it’s possible to grow sustainably while maintaining the integrity and quality that sets you apart. Let’s build something great together, without losing sight of what actually matters. ❤️
magnificode
1,876,881
Jotai atomWithStorage
What is atomWithStorage? Basically to put it simply atomWithStorage is a cool way for you...
0
2024-06-04T15:47:17
https://dayvster.com/blog/jotai-atomwithstorage/
react, webdev, typescript, javascript
## What is atomWithStorage?

To put it simply, `atomWithStorage` is a cool way for you, the developer, to persist data in your application. It's a function that will automatically store the data of the atom in `localStorage` or `sessionStorage` for React, or `AsyncStorage` for React Native.

That means you can persist your application data even if the user refreshes, closes or crashes the page or application, and when the user comes back the data will still be there. The only way to remove the data is if the user manually clears their browser cache, local storage, session storage and cookies.

## How to use atomWithStorage

To use `atomWithStorage` you need to import it from `jotai/utils` and then you can use it like you would use `atom`:

```jsx
import { useAtom } from "jotai";
import { atomWithStorage } from "jotai/utils";

const themeAtom = atomWithStorage("theme", "dark");
const userSettingsAtom = atomWithStorage("userSettings", {
  language: "en",
  country: "us",
  accentColor: "blue",
});

export default function Page() {
  const [theme, setTheme] = useAtom(themeAtom);
  const [userSettings, setUserSettings] = useAtom(userSettingsAtom);

  return (
    <div>
      <h1>Theme: {theme}</h1>
      <h1>User Settings</h1>
      <p>Language: {userSettings.language}</p>
      <p>Country: {userSettings.country}</p>
      <p>Accent Color: {userSettings.accentColor}</p>
    </div>
  );
}
```

As you see, you can use `atomWithStorage` just like you would use `atom`; it really is as easy as that. It will default to persisting the data in `localStorage`, but you can also pass a third argument to specify where you want to store the data. Note that this argument is not the raw `sessionStorage` object: you wrap it with `createJSONStorage` from `jotai/utils`, which handles the JSON (de)serialization for you.

```jsx
import { atomWithStorage, createJSONStorage } from "jotai/utils";

const sessionJSONStorage = createJSONStorage(() => sessionStorage);
const themeAtom = atomWithStorage("theme", "dark", sessionJSONStorage);
```

### Learn more about Jotai

You can find out more about why I love Jotai in my post [Why I love Jotai](/blog/why-i-love-jotai/) and you can find the official documentation [here](https://jotai.org/docs/introduction).
dayvster
1,876,879
Buzz Off Expert Bee, Wasp, and Hornet Exterminator in Greenwich
A post by Green pest management
0
2024-06-04T15:40:27
https://dev.to/greenpest/buzz-off-expert-bee-wasp-and-hornet-exterminator-in-greenwich-8b6
pest, home, control
[Green Pest Management – Pest Control in Greenwich](https://greenpestmanagementct.com/pest-control-greenwich/)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wmmkvjgh1fuofmvjetg6.jpeg)

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/48qyyjtjdr0qbsvuvhlm.png)
greenpest
1,876,878
How to create a cloud server for many projects in AWS
There are many tools or services on the internet for making this kind of server. They are almost all...
0
2024-06-04T15:40:16
https://dev.to/cocodelacueva/how-to-create-a-cloud-server-for-many-projects-in-aws-26l6
webdev, aws, tutorial
There are many tools or services on the internet for making this kind of server. They are almost all easy to work with for front-end frameworks such as React, Angular and Astro. There are others for back-end frameworks as well. However, this tutorial will be useful to learn about how a server works or to practice becoming a devops.

Besides that, these tools are useful when you are starting or practicing. But, if you are working on many projects, they will quickly become expensive. In those cases, it's better to build your own cloud and configure it to adapt it to your own needs.

### The small server's specifications:

- A small cloud server for deploying different sites or services like APIs, websites, commands, etc.
- They can be developed using any technology and language: PHP, Python, JavaScript, and HTML; along with any framework: React, Laravel, Django, WordPress, etc.
- With or without any kind of database.
- With multiple domains or subdomains.
- Connecting to any other AWS service like S3, RDS (database), CloudFront (to make a CDN), SES (to send notifications by email), etc.

This cloud will be able to host from 5 to 10 different projects. This will depend on the projects, of course.

### Requirements:

1. An AWS account.
2. At least one domain bought.
3. At least one project developed.
4. Docker and GitHub/Bitbucket/GitLab.

## Step by step:

### First: Dockerize the project.

Docker containers can be used for any project. It doesn't matter what language or framework is used. However, each framework or project has a different method to dockerize it.

For example, a React project needs a web server to run it. Because when the project is built (`npm run build`), it becomes an HTML site. It means the whole project will be one index.html file with a bunch of images, CSS and two or three JS bundles. This is how ReactJS works. That's why you will need a web server to make this HTML run when the browser asks for it.
Therefore, for a ReactJS project, you need to add an Apache or NGINX server to the container.

To dockerize any project, only a Dockerfile is needed. This is a file named “Dockerfile” (without an extension) with a list of instructions inside. It tells Docker how to build the image and what to do when the container runs. For example: take Node.js, copy “package.json”, install dependencies, build the React app, configure the NGINX server and expose it on port 80. The Dockerfile must be placed in the root of the project, next to “package.json”.

This is an example of a Dockerfile with NGINX and ReactJS:

```
### STAGE 1: Build ###
FROM node:20 as build
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
ENV PATH /usr/src/app/node_modules/.bin:$PATH
COPY package.json /usr/src/app/package.json
RUN npm install --silent
COPY . /usr/src/app
RUN npm run build

### STAGE 2: Production Environment ###
FROM nginx:1.21.0-alpine
COPY --from=build /usr/src/app/build /usr/share/nginx/html
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx/nginx.conf /etc/nginx/conf.d
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
```

Besides this Dockerfile, NGINX requires a configuration file named “nginx.conf”. Create a folder named “nginx” and put this file inside:

```
server {
  listen 80;

  location / {
    root /usr/share/nginx/html;
    index index.html index.htm;
    # redirect all requests to index.html,
    # useful when you are using react-router
    try_files $uri /index.html;
  }

  error_page 500 502 503 504 /50x.html;

  location = /50x.html {
    root /usr/share/nginx/html;
  }
}
```

Before going to AWS or any server, it is good practice to test everything locally. You can build the image and run the container on your local machine:

```
# build the docker image
docker build -t myimage .

# run the container
docker run -t -d -i --name=mycontainer -p 8000:80 myimage
```

If everything goes well, the site will be available at http://localhost:8000 in any web browser.
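One small addition, not part of the original setup but common practice: a `.dockerignore` file next to the Dockerfile keeps heavy local folders out of the Docker build context, which makes `docker build` noticeably faster, since `npm install` runs inside the image anyway:

```
node_modules
build
.git
```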
Every time you run a container, you must specify which host port it will use. Therefore, it is possible to have many containers running, each of them on a different port: 8000, 8001, 8002, etc. This is useful because it allows the same server to run many projects.

### Secondly: Build and configure the machine.

The next step is to create an EC2 machine in AWS. AWS has many tools, which they call services. An EC2 instance is a virtual machine created with the hardware specifications you want and the software you need. AWS has many pre-built images: Ubuntu, CentOS, Alpine or even Windows.

_Note: Be careful, because the hardware and software you choose determine the price you pay. My advice is to start with the smallest one and scale up only if you need it. For example, I am going to use a small one (Nano), which will handle 5 to 10 sites without any problem and is enough for this tutorial's purpose._

I found this good step-by-step tutorial written by Abhishek Pathak which guides you on how to launch and connect to your first EC2 if you don't know how to do it: [https://dev.to/scorcism/create-ec2-instance-1d59](https://dev.to/scorcism/create-ec2-instance-1d59)

### Thirdly: Deploy the code to the EC2 machine.

There are three ways to deploy the project code to an EC2 machine:

1. Over SSH, if the EC2 machine has those connections open.
2. Pulling a Docker image that is already built.
3. Cloning and pulling a git repository on the EC2 machine.

Uploading files over SSH is the first choice, but it is not very comfortable. For pulling and pushing Docker images, a Docker hub is needed: an online repository like GitHub, but for Docker images. AWS has a hub service called ECR, which can be used to upload images and pull them onto the running EC2. However, it is also possible to clone a git repository on the EC2 and then build the Docker image and run the container there.
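In outline, the git option boils down to a pull–build–run cycle, which also works for updating a project later. Here is a sketch as a small shell helper (the function name and arguments are my own, not from the tutorial); setting RUN=echo previews the commands instead of executing them:

```shell
# redeploy: hypothetical helper for the pull-build-run cycle on the EC2.
# Set RUN=echo to preview the commands instead of executing them.
redeploy() {
  repo_dir="$1"; image="$2"; container="$3"; port="$4"
  ${RUN:-} git -C "$repo_dir" pull
  ${RUN:-} docker build -t "$image" "$repo_dir"
  ${RUN:-} docker rm -f "$container"
  ${RUN:-} docker run -t -d -i --name="$container" -p "$port":80 "$image"
}

# Preview what a redeploy would run:
RUN=echo redeploy ./myproject myimage mycontainer 8000
```

`docker rm -f` removes the old container before starting the new one, since two containers cannot share a name or a host port.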
This is the easiest way, and it is useful as well, because sometimes the developer working on the code is not the same person deploying the site. Besides, using GitHub, Bitbucket or GitLab is standard practice, so the code is usually already uploaded there.

#### It is as easy as it sounds:

- Clone the repository
- Build the Docker image
- Run the Docker container

However, the EC2 machine needs to be prepared first, because it doesn't come with Docker, Git or NGINX installed.

Connecting to your EC2:

```
# connecting to EC2
ssh -i /path/of/key/downloaded ubuntu@public.ip.address

# this command gives you full server access
sudo su
```

Installing Docker:

```
# update your repositories
apt-get update

# install Docker
apt-get install docker.io -y

# start the Docker daemon
systemctl start docker

# check that everything is fine
docker --version

# enable Docker to start automatically when the instance starts
systemctl enable docker
```

And installing Git:

```
# install git
apt install git

# check that everything is fine
git --version

# configure git
git config --global user.name "Your Name"
git config --global user.email "youremail@domain.com"
```

With Git and Docker installed, the EC2 machine is ready to receive the repositories:

```
# clone the repository
git clone myrepositoryurl
cd myrepositoryname

# build the image
docker build -t myimage .

# run the container
docker run -t -d -i --name=mycontainer -p 8000:80 myimage
```

Great!! The first container is now running, but it is possible to run more from the same repository or from any other as well. Note that each container needs a unique name and a free host port:

```
# running another container
docker run -t -d -i --name=mycontainer2 -p 8001:80 myimage

# running another container
docker run -t -d -i --name=mycontainer3 -p 8002:80 myimage

# running another container
docker run -t -d -i --name=mycontainer4 -p 8003:80 myimage
```

Notice that each container runs on its own port: 8000, 8001, 8002 and 8003. In this case, all of them run the same image.
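The pattern above can be generalized: each new project simply takes the next free host port. A quick sketch (project names here are invented) that prints the run command each project would get:

```shell
# Assign host ports sequentially, one per project, and print the
# docker run command each project would use (nothing is executed).
port=8000
for project in site-one site-two api-three; do
  echo "docker run -t -d -i --name=$project -p $port:80 $project-image"
  port=$((port + 1))
done
```

Keeping a simple project-to-port list like this also makes the NGINX configuration later easier to maintain.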
However, this is what makes it possible to have a different project on each port. Look at the run command:

```
docker run -t -d -i --name=mycontainer -p 8003:80 myimage
```

The "-p" flag maps a port of the EC2 machine to the port exposed by the container. The number on the left is the EC2 (host) port; the number on the right is the port exposed inside the Docker image.

```
# testing the container
curl localhost:8001
```

This must respond with the index.html file of the ReactJS project created in step one.

The containers are created and the projects are running. However, if anybody tries to connect from a browser, they will not see anything yet. The next step is to connect domains to each container.

### Fourthly: Create a balancer.

This EC2 will host many projects. Each project is a container running on a different port: the first one on port 8000, the next on 8001, 8002, etc. Each project could have a different domain (myproject1.com, myproject2.com, etc.) or a different subdomain (subdomain1.projects.com, subdomain2.projects.com).

In both cases, the domains or subdomains must be pointed to the EC2 public IP. Check again here [https://dev.to/scorcism/create-ec2-instance-1d59](https://dev.to/scorcism/create-ec2-instance-1d59) (point 10). It is a simple A record pointing to the IP address.

However, this doesn't work yet, because none of the containers is listening on port 80 or 443. The EC2 needs a balancer that receives the browser requests and redirects each one to the correct running container. For this purpose, it is possible to install an NGINX server and use it as a reverse proxy. This NGINX will receive all the requests for this EC2 machine and redirect them to the correct container.
```
# installing nginx
apt update
apt install nginx
```

If everything is fine, opening the public IP in a web browser shows the welcome-to-NGINX page:

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x9fbeqtx7ybm4uf8rh51.png)

Any domain or subdomain redirected to this IP must show this page as well.

Every time a new project is added, NGINX must be configured so it knows what to do with it. NGINX uses one config file per project, and the file must exist in two places: first in "/etc/nginx/sites-available", and the same file in "/etc/nginx/sites-enabled/" (the usual convention is to create it in sites-available and symlink it into sites-enabled). The name of the file is the name of the domain or subdomain, for example: "subdomain.mydomain.com".

This is a basic example of the file:

```
server {
    root /var/www/subdomain.mydomain.com/html;
    index index.html index.htm index.nginx-debian.html;
    server_name subdomain.mydomain.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_set_header X-Forwarded-Host $host;
    }
}
```

Look at the file above: the `server_name` line tells NGINX that requests coming for subdomain.mydomain.com must be proxied to http://127.0.0.1:8000, that is, to the container running on port 8000. The domain or subdomain only needs to match the port of the container running the project.

After creating the files, NGINX needs to be reloaded:

```
# test that the files are well created
nginx -t

# restart nginx
systemctl restart nginx

# or reload nginx
systemctl reload nginx
```

After restarting NGINX, the project is exposed to the public, and anybody can reach it from a web browser.

This cloud server can host the 5 to 10 projects mentioned at the beginning. If a project needs a database, it is possible to connect to AWS services such as RDS or DocumentDB. However, it is also possible to run another container with the database on the same EC2 machine.
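Since every new project needs an almost identical NGINX server block, a small generator can avoid writing it by hand. This is a hypothetical helper (the function and the simplified config it emits are my own, not from the tutorial):

```shell
# make_site_conf: print a minimal NGINX reverse-proxy server block
# for a given domain and container host port.
make_site_conf() {
  domain="$1"; port="$2"
  cat <<EOF
server {
    server_name $domain;

    location / {
        proxy_pass http://127.0.0.1:$port;
        proxy_set_header Host \$host;
        proxy_set_header X-Real-IP \$remote_addr;
        proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
    }
}
EOF
}

# Example: write the file once to sites-available and symlink it:
# make_site_conf subdomain.mydomain.com 8000 > /etc/nginx/sites-available/subdomain.mydomain.com
# ln -s /etc/nginx/sites-available/subdomain.mydomain.com /etc/nginx/sites-enabled/
```

Remember to run `nginx -t` and reload NGINX after adding a file.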
The EC2 can also connect to S3 or any other AWS service needed. Repeat the same steps to add any new project. The AWS console also includes a monitoring section for the EC2, so you can check whether the server is saturated or still has room for more projects.

I hope you liked this tutorial. The next step is adding an SSL certificate so you can use HTTPS! Ask me if you need it. :D

Thanks!
cocodelacueva
1,876,876
respinix
Discover the limitless possibilities of Respinix. At respinix.com, you'll have access to an...
0
2024-06-04T15:39:21
https://dev.to/respinix3/respinix-1a1b
Discover the limitless possibilities of Respinix. At [respinix.com](https://respinix.com/), you'll have access to an impressive collection of over 20,300 demo slot machines from 457 leading software providers. This unique platform allows you to try a wide variety of slot themes and styles for free and without any obligation, from classic fruit machines to innovative 3D machines with exciting storylines. Explore over 100 different themes to find the games that perfectly match your preferences and mood.
respinix3
1,876,874
MICROSOFT AZURE CORE SERVICES
Azure Core Services are the primary offerings of Microsoft Azure. Services refer to individual...
27,595
2024-06-04T15:38:27
https://dev.to/aizeon/microsoft-azure-core-services-24g1
beginners, azure, cloud, cloudcomputing
Azure Core Services are the primary offerings of Microsoft Azure.

_Services refer to individual offerings or capabilities provided by CSPs. Solutions are integrated packages that bring together multiple services to solve a specific need._

## **COMPUTE**

Azure Compute is a cloud computing service that offers a range of services and features to support various computing needs, from virtual machines to serverless and containerized applications, enabling users to build, deploy, and manage applications and workloads in the cloud.

With Azure Compute, users can:

- run applications and workloads in the cloud, with a range of options including:
  - Virtual Machines (VMs)
  - Container Instances
  - Azure Kubernetes Service (AKS)
  - Azure Functions
  - Azure Virtual Desktop
  - Azure App Services
  - Azure Batch
  - Azure Automanage
  - Azure CycleCloud
- migrate on-premises workloads to the cloud.
- develop and deploy cloud-native applications.
- take advantage of serverless and containerized computing.
- optimise costs and performance.

## **NETWORKING**

Azure Networking Services provide secure, scalable, and high-performance networking capabilities to support a wide range of applications and services. Some of the key services include:

- Virtual Network (VNet): allows users to create a virtual network in Azure, providing a secure and isolated environment for resources to communicate with each other, the internet and on-premises networks.
- Subnets
- Network Security Groups (NSGs): filter incoming and outgoing network traffic based on rules.
- Load Balancer: distributes incoming traffic across multiple resources to improve responsiveness and availability.
- Application Gateway: a web application firewall and load balancer for web applications.
- Azure DNS: a cloud-based Domain Name System (DNS) service.
_DNS is a critical piece of internet infrastructure that enables us to access websites and other online resources using easy-to-remember domain names instead of difficult-to-remember IP addresses._

- Azure Firewall: a managed, cloud-based network security service.
- ExpressRoute: extends on-premises networks into Azure over a private connection facilitated by a connectivity provider.
- Virtual Private Network (VPN) Gateway: used to send encrypted traffic between an Azure VNet and on-premises infrastructure over the public internet.
- Content Delivery Network (CDN)
- Azure Bastion: provides secure RDP/SSH access to VMs without public IP addresses.
  - RDP (Remote Desktop Protocol) and SSH (Secure Shell) are two popular protocols used for remote access and management of computers and servers.
  - RDP provides a graphical user interface (GUI), can provide fast performance especially over local networks, and is relatively easy to set up and use, especially for Windows users. SSH provides a command-line interface (CLI) and is considered more secure than RDP due to its encryption and flexible authentication mechanisms (including password, key-based, and multi-factor authentication).
  - RDP is native to Windows and is widely used for remote access to Windows machines for administrative tasks, user support, and remote desktop connections for employees or contractors. SSH is available on multiple platforms, including Windows, macOS, and Linux, and is ideal for secure command-line access to multiple platforms or servers, especially for administrative and development tasks and secure file transfer and management between systems.
- Azure Private Link: access Azure services without exposing them to the public internet.

These services enable users to:

- create a secure and isolated environment for resources.
- filter and control network traffic.
- improve application performance and availability.
- establish secure connections between Azure and on-premises infrastructure.
- deliver content efficiently across multiple locations.
- access resources securely without public IP addresses.

## **STORAGE**

Azure Storage Services is a cloud-based storage solution offered by Microsoft Azure, providing secure, durable, and scalable storage for various data types, making it a versatile and reliable option for many use cases. The services include:

- Blob Storage: for storing unstructured data like images, videos, audio files, documents, text and binary data.
- File Storage: for storing and sharing files in a hierarchical structure, like a file system.
- Queue Storage: for passing messages between applications and services.
- Disk Storage: for attaching data disks to VMs and other applications.

Azure has four (blob) storage access tiers, which users can switch between at any time:

- Hot tier: an online tier optimized for storing data that is accessed or modified frequently. The hot tier has the highest storage costs, but the lowest access costs.
- Cool tier: an online tier optimized for storing data that is less frequently accessed or modified. Data in the cool tier should be stored for a minimum of 30 days. The cool tier has lower storage costs and higher access costs compared to the hot tier.
- Cold tier: an online tier optimized for storing data that is rarely accessed or modified, but still requires fast retrieval. Data in the cold tier should be stored for a minimum of 90 days. The cold tier has lower storage costs and higher access costs compared to the cool tier.
- Archive tier: an offline tier optimized for storing data that is rarely accessed and has flexible latency requirements, on the order of hours. Data in the archive tier should be stored for a minimum of 180 days.

## **DATABASE**

Azure Database Services offer a range of fully managed database solutions, allowing users to focus on application development without worrying about database management.
The services include:

- Azure SQL Database
- Azure Cosmos DB
- Azure Database for PostgreSQL
- Azure Database for MySQL
- Azure Database for MariaDB
- Azure SQL Managed Instance
- Azure Synapse Analytics (formerly Azure SQL Data Warehouse)

The benefits of Azure Database Services include:

- reduced administrative burden, as Azure handles database management tasks.
- increased scalability and performance.
- built-in security features and compliance certifications.
- cost-effectiveness, as users pay only for the resources used.

Azure Database Services are applied in several use cases, such as:

- as a backend for web and mobile application development.
- Azure Synapse Analytics for data warehousing and analytics.
- migrating existing databases to Azure for improved scalability and performance.

## **AZURE MARKETPLACE**

Azure Marketplace is an online store where users can find, try, purchase and provision solutions, applications and services that are built on or built for Azure by leading service providers, all certified to run on Azure.
aizeon
1,876,875
Transitioning into an AI Career: A Step-by-Step Guide
Advancements in technology, specifically in the field of artificial intelligence (AI), have created a...
0
2024-06-04T15:36:55
https://dev.to/ganesh_p_96bc2f769a6049e1/transitioning-into-an-ai-career-a-step-by-step-guide-50b2
java, python, ai, programming
Advancements in technology, specifically in the field of artificial intelligence (AI), have created a high demand for professionals with expertise in this area. If you are looking to enter the world of AI, there are several steps you can take to successfully transition into an AI career. Here is a guide to help you get started:

**Step 1: Familiarize yourself with the basics of AI**

Before pursuing a career in AI, it is essential to have a foundational understanding of the principles, concepts, and technologies that make up this field. This includes machine learning, neural networks, natural language processing, and more. There are many online resources and courses available to help you gain this knowledge.

**Step 2: Develop your programming skills**

AI heavily relies on programming languages such as Python, C++, and Java. Therefore, it is crucial to hone your skills in these languages. You can begin by learning the basics through online tutorials and practice coding exercises.

**Step 3: Explore AI tools and platforms**

To gain hands-on experience and build AI applications, it is beneficial to familiarize yourself with popular AI tools and platforms such as TensorFlow, Keras, scikit-learn, and PyTorch. This can make you more attractive to potential employers.

**Step 4: Gain experience through internships or projects**

Having practical experience is crucial in the AI field. Consider seeking internships or volunteer opportunities at AI companies, or participate in online hackathons and coding challenges to showcase your skills and build a portfolio.

**Step 5: Network and attend AI events**

Networking is vital in any industry, including AI. Attend conferences, workshops, and other events related to AI to connect with professionals and stay updated on the latest advancements and trends. This can also help you build a strong professional network that can open up career opportunities.
**Step 6: Consider pursuing a degree or certification** While not always necessary, having a degree or certification in a related field such as computer science, data science, or AI can give you a competitive edge in the job market and provide a deeper understanding of AI concepts and techniques. **Step 7: Stay informed and continue learning** As the field of AI is constantly evolving, it is essential to stay updated with the latest advancements and techniques. Follow industry experts, read industry publications, and join online communities to stay informed. Continuing to learn and enhance your skills can make you a valuable asset in an AI career. Transitioning into an AI career may seem challenging, but by following these steps and continuously developing your skills and knowledge, you can successfully enter this exciting field. Keep an open mind, stay determined, and remain persistent in your pursuit of an AI career. **MyExamCloud Study Plans** [Java Certifications Practice Tests](https://www.myexamcloud.com/onlineexam/javacertification.courses) - MyExamCloud Study Plans [Python Certifications Practice Tests](https://www.myexamcloud.com/onlineexam/python-certification-practice-tests.courses) - MyExamCloud Study Plans [AWS Certification Practice Tests](https://www.myexamcloud.com/onlineexam/aws-certification-practice-tests.courses) - MyExamCloud Study Plans [Google Cloud Certification Practice Tests](https://www.myexamcloud.com/onlineexam/google-cloud-certifications.courses) - MyExamCloud Study Plans [MyExamCloud Aptitude Practice Tests Study Plan](https://www.myexamcloud.com/onlineexam/aptitude-practice-tests.course) [MyExamCloud AI Exam Generator](https://www.myexamcloud.com/onlineexam/testgenerator.ai)
ganesh_p_96bc2f769a6049e1
1,875,769
JavaScript30 - 4 Array Cardio Day 1
Let me just start by saying my initial thoughts about this challenge were completely wrong. I...
0
2024-06-04T15:36:31
https://dev.to/virtualsobriety/javascript30-4-array-cardio-day-1-3a0b
javascript, beginners, javascript30, learning
Let me just start by saying my initial thoughts about this challenge were completely wrong. I figured this would be the easiest challenge BY FAR and was expecting to complete it and write this post in the same day. Here I am, almost two weeks later, having just finished watching Wes Bos's video with his solutions. He was absolutely right calling this Array Cardio because it was a workout for sure. Plus it reminded me, not so gently, how little I know about arrays, how they work and what you can do with their sweet sweet information.

```js
const oldPeople = inventors.filter(function (person) {
  if (person.year < 1600) {
    return person.year > 1500;
  }
});

console.log(oldPeople);
```

I guess we should basically start at the top and take it from there... This challenge obviously had us working with arrays from start to finish in many ways I never knew possible. It covered different tasks that started with filtering names from a given list to sorting different street names from a Wikipedia page in real time.

```js
const fifteen = inventors.filter(inventor => (inventor.year >= 1500 && inventor.year < 1600));

console.table(fifteen);
```

The exercises covered four different ways to manipulate data within an array, those being: filter, sort, map and reduce. There were 8 exercises in total and I mostly struggled with number 6, which had you go to [this link right here](https://en.wikipedia.org/wiki/Category:Boulevards_in_Paris) and work solely in the console to take the information off the page, turn it from a node list into an array and then list out the necessary information. I missed the fact that the data in the array was pretty obviously an object and eventually moved on without finding a perfect solution on my own/with the help of Google. Despite being stumped on one question, I did manage to solve the other 7 exercises on my own. Very few of my solutions matched Wes's, but that's part of the fun!

I would probably have been a bit let down if I saw that I came up with the exact same answer, as by the time I completed this challenge I now had two different ways to complete each task.

![The console.table is a complete gamechanger!](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wy11bm9yc4uzi9en384y.png)

I do have to give a HUGE shoutout to one part of this challenge in particular. This was when Wes introduced the console.table() command. I knew console.log() was a huge tool in coding but never knew there were more variations to the console commands. Looking back at it now, it's pretty obvious that there would be different ways to show information in the console, but it never occurred to me that they could be so different and helpful. I definitely want to do a quick dive into different console commands just to see what else I can do, as this simple change from .log to .table completely blew my mind!

This challenge was a toughie and I am exhausted. I spent days upon days working on this. I should start leaving my expectations at the door when I think about coding and how the rest of this course will be, because I couldn't have been more wrong by assuming this would be quick and painless. I am excited to work on something a bit more tangible with the next challenge: Flex Panels Image Gallery! I'll see you then!

![the next lesson!](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/260iuiaga4ijit6928ew.png)
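P.S. If you want to warm up with the four methods on a tiny data set before tackling the real exercises, here is a quick, self-contained sketch. The sample inventors below are my own, not the challenge's data:

```javascript
// A small, invented data set for practice
const inventors = [
  { first: 'Albert', last: 'Einstein', year: 1879, passed: 1955 },
  { first: 'Isaac', last: 'Newton', year: 1643, passed: 1727 },
  { first: 'Galileo', last: 'Galilei', year: 1564, passed: 1642 },
];

// filter: keep only the inventors born in the 1500s
const fifteen = inventors.filter(inventor => inventor.year >= 1500 && inventor.year < 1600);

// sort: oldest to youngest by birth year (spread first so the original stays untouched)
const ordered = [...inventors].sort((a, b) => a.year - b.year);

// map: build display strings from each entry
const fullNames = inventors.map(inventor => `${inventor.first} ${inventor.last}`);

// reduce: total years all the inventors lived, combined (76 + 84 + 78 = 238)
const totalYears = inventors.reduce((total, inventor) => total + (inventor.passed - inventor.year), 0);

// console.table renders an array of objects as a grid -- far easier to read than console.log
console.table(ordered);
```
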
virtualsobriety
1,876,873
Picture and Wall Decoration Shop - Feeby
Welcome to Feeby - your favorite shop for pictures, wall decorations and wallpaper! Our shop with...
0
2024-06-04T15:34:49
https://dev.to/opm-med/sklep-z-obrazami-i-dekoracjami-sciennymi-feeby-1343
design
Welcome to Feeby - your favorite shop for pictures, wall decorations and wallpaper! Our wall decoration shop offers a wide selection of products that will transform any interior. Discover our collection and find the perfect pictures online to underline the character of your home.

Picture Shop - Art at Your Fingertips

Feeby is a picture shop that guarantees the highest quality and variety. Here you will find pictures that fit perfectly into modern, classic and minimalist interiors. Our pictures online are available in a range of styles, from abstraction, through landscapes, to portraits. Thanks to this, everyone will find something for themselves.

Wall Decoration Shop - Create Unique Spaces

Our [wall decoration shop](https://feeby.pl/) is a place where you can find inspiration for every room. We offer decorations that will give your walls a unique character. Choose from a variety of posters, pictures and wall panels that work well with any interior style.

Wallpaper Shop - Modern Solutions for Your Walls

Feeby is also a wallpaper shop where you will find the latest trends and classic patterns. Our wallpapers are the perfect solution for those who want to transform their interiors quickly and effectively. High-quality materials and ease of application make our wallpapers an excellent choice for any home.

Photo Wallpapers - Exceptional Decorations for Everyone

Photo wallpapers are a great way to bring a touch of magic into an interior. In our photo wallpaper shop you will find a wide range of patterns that will satisfy even the most demanding customers. From plant motifs, through landscapes, to artistic compositions - our photo wallpapers will make your interiors one of a kind.

Why Feeby?

Feeby is a shop that focuses on quality, variety and customer satisfaction. We offer fast shipping, attractive prices and professional service. Whether you are looking for pictures online, wallpaper or wall decorations - Feeby is the best choice. Visit our picture shop, wall decoration shop, wallpaper shop and photo wallpaper shop and discover how easily you can transform your interiors.

Feeby - your place for beautiful wall decorations!
opm-med
1,876,798
Creating a Tetris with JavaScript
Insertrix: a slightly different Tetris.
27,594
2024-06-04T15:30:07
https://dev.to/baltasarq/creando-un-tetris-con-javascript-15ba
spanish, gamedev, javascript, tutorial
---
title: Creating a Tetris with JavaScript
published: true
series: JavaScript Tetris
description: Insertrix: a slightly different Tetris.
tags: #spanish #gamedev #javascript #tutorial
cover_image: https://upload.wikimedia.org/wikipedia/commons/4/46/Tetris_logo.png
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-04 14:35 +0000
---

We all know the game of [Tetris](https://es.wikipedia.org/wiki/Tetris). The pieces fall from the top, and if there is no room for the next piece to "fall", then the game is over.

We are going to make the game slightly different: personally, I have never understood why the speed of the pieces should increase, beyond limiting how long games last in arcade halls. Besides, it takes much more skill to make lines at the top of the board than at the bottom. And finally, and this is not unheard of, it would be interesting to insert lines with gaps at the bottom, to make clearing lines harder.

In this series, we are going to develop this game, called *Insertrix*, to be played in the browser. That is, we are going to use [JavaScript](https://es.wikipedia.org/wiki/JavaScript) (and some HTML).

There are many options for programming in JavaScript, including [Notepad](https://es.wikipedia.org/wiki/Bloc_de_notas), the Windows text editor, or something more sophisticated such as [Codepen.io](https://codepen.io/), which shows the result updating continuously as you work, which is quite handy. In my case, I will use my usual text editor, [Geany](https://www.geany.org/). I'm a classic.

Basically, what we will do is continuously draw the Tetris board, together with the piece that is falling at that moment. To do that, we will use an HTML component called [Canvas](https://es.wikipedia.org/wiki/Canvas_(HTML)), literally, a canvas. The pieces are essentially blocks or dots that are drawn together to form the complete piece.

The board, after all, is made up of the remains of the pieces that have already fallen. So the smallest element we will draw is one of those dots or blocks. If we divide the board into a grid, we can easily decide whether or not there is a block at a given position by representing it with a 1, or a 0 if there is nothing at that position.

![Grid of the Tetris board](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5d7oaz1sj461hrtgo378.jpg)

In the image above, we have a total of 10 rows by 5 columns. If we had the "L" piece at the top of the board, it would look as shown below.

![Grid of the Tetris board with an "L" piece](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9fa04yqpb62f906dl0tp.jpg)

At the end of the day, what we have is a matrix of zeros and ones:

| |0|1|2|3|4|
|-|-|-|-|-|-|
|0|0|0|0|0|0|
|1|0|0|0|0|0|
|2|0|0|0|0|0|
|3|0|0|1|0|0|
|4|0|0|1|0|0|
|5|0|0|1|1|0|
|6|0|0|0|0|0|
|7|0|0|0|0|0|
|8|0|0|0|0|0|
|9|0|0|0|0|0|

We can therefore automate the drawing of the board if, for each cell, we paint a filled square whenever it holds a 1.

```javascript
class Piece {
    #_shape = null;
    #_color = "black";
    #_height = 1;
    #_width = 1;
    #_row = 0;
    #_col = 0;

    constructor(shape, color)
    {
        this.#_row = 0;
        this.#_col = 0;
        this.#_shape = shape;
        this.#_height = shape.length;
        this.#_width = shape[0].length;

        if ( color != null ) {
            this.#_color = color;
        }
    }
}
```

We create the class **Piece**, which contains the attributes *_shape* (the matrix of ones and zeros that represents the piece), *_row* and *_col*, which track the position of the piece on the board, and finally *_height* and *_width*, which represent its height and width. These attributes are prefixed with '#' to mark them as private; note that they must also be accessed with that prefix, as in `this.#_row`.

The height and width of the piece are computed from the *_shape* matrix. The number of rows of the matrix is its height, while the length of each row (all rows should have the same length) is its width.

How do we create a piece, then? Suppose we want to create the "L"; then:

```javascript
class PieceL extends Piece {
    constructor()
    {
        super( [ [ 1, 0 ],
                 [ 1, 0 ],
                 [ 1, 0 ],
                 [ 1, 1 ] ],
               "orange" );
    }
}
```

We are creating **PieceL** as a class that inherits from the class **Piece**, which is the generic piece. The first parameter, as we said, is the matrix with the shape of the piece, while the second one is its color. Want to see a couple more pieces?

```javascript
class PieceInverseS extends Piece {
    constructor()
    {
        super( [ [ 0, 1 ],
                 [ 1, 1 ],
                 [ 1, 0 ] ],
               "darkred" );    // color
    }
}

class PiecePodium extends Piece {
    constructor()
    {
        super( [ [ 0, 1, 0 ],
                 [ 1, 1, 1 ] ],
               "purple" );     // color
    }
}
```

The class **Piece** has a few more methods, which are basically *getters* and *setters* for the position. Below, we can see it in full.

```javascript
class Piece {
    #_shape = null;
    #_color = "black";
    #_height = 1;
    #_width = 1;
    #_row = 0;
    #_col = 0;

    constructor(shape, color)
    {
        this.#_row = 0;
        this.#_col = 0;
        this.#_height = shape.length;
        this.#_width = shape[0].length;

        if ( color != null ) {
            this.#_color = color;
        }

        // Copy shape
        this.#_shape = new Array( shape.length );

        for(let i = 0; i < shape.length; ++i) {
            this.#_shape[ i ] = [ ...shape[ i ] ];
        }
    }

    get shape()
    {
        return this.#_shape;
    }

    get width()
    {
        return this.#_width;
    }

    get height()
    {
        return this.#_height;
    }

    get row()
    {
        return this.#_row;
    }

    set row(v)
    {
        this.#_row = v;
    }

    get col()
    {
        return this.#_col;
    }

    set col(v)
    {
        this.#_col = v;
    }

    get color()
    {
        return this.#_color;
    }

    reset(board)
    {
        this.#_row = 0;
        this.#_col = parseInt( ( board.cols / 2 ) - 1 );
    }
}
```

In this first part, we have discussed how to represent the board and the pieces, and we have seen how to implement the pieces. Yes, we still need to be able to rotate them, but let's take it one step at a time.
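As a preview of the drawing step mentioned earlier, here is a minimal sketch of the loop that paints a filled square for every 1 in a matrix. It assumes a 2D canvas context (`ctx`) and a cell size in pixels; the names `drawBoard` and `CELL_SIZE` are placeholders of mine, not part of the final game:

```javascript
const CELL_SIZE = 20;   // pixels per board cell (assumed value)

// Paint a filled square for every 1 in the matrix; skip the 0s.
function drawBoard(ctx, matrix, color)
{
    for(let row = 0; row < matrix.length; ++row) {
        for(let col = 0; col < matrix[row].length; ++col) {
            if ( matrix[row][col] === 1 ) {
                ctx.fillStyle = color;
                ctx.fillRect( col * CELL_SIZE, row * CELL_SIZE,
                              CELL_SIZE, CELL_SIZE );
            }
        }
    }
}
```

The same routine works for a whole board or for a single piece's *_shape* matrix, which is what makes the 0/1 representation so convenient.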
baltasarq
1,876,871
Understanding the Different EDI Standards: ANSI X12, EDIFACT, and More
In the ever-evolving landscape of supply chain management, Electronic Data Interchange (EDI) is the...
0
2024-06-04T15:29:42
https://dev.to/actionedi/understanding-the-different-edi-standards-ansi-x12-edifact-and-more-17of
In the ever-evolving landscape of supply chain management, Electronic Data Interchange (EDI) is the backbone of efficient and error-free communication between trading partners. But with a variety of EDI standards in use globally, it can be challenging to navigate which one suits your business needs best. Today, we'll dive into the most prominent EDI standards: ANSI X12, EDIFACT, and others, to help you make an informed decision.

**The Importance of EDI Standards**

Before we explore the different EDI standards, it's crucial to understand why standards matter. EDI standards define the format and structure of electronic documents. By adhering to a standard, businesses can ensure that their data exchanges are consistent, accurate, and easily interpretable by all parties involved. This standardization is key to reducing errors, streamlining processes, and improving overall efficiency.

**ANSI X12: The North American Powerhouse**

Developed by the American National Standards Institute (ANSI), the X12 standard is widely used in North America. ANSI X12 covers various industries, including retail, finance, healthcare, and transportation. One of its strengths is its adaptability; the standard includes a broad range of document types, from purchase orders to shipping notices.

**Key Features:**
- Widely adopted in North America
- Extensive range of document types
- Versatile and adaptable to different industries

**EDIFACT: The Global Standard**

The Electronic Data Interchange for Administration, Commerce, and Transport (EDIFACT) is an international EDI standard developed by the United Nations. It is the preferred standard in Europe and many other parts of the world. EDIFACT's comprehensive structure supports a wide range of business processes and industries, making it a robust choice for companies engaged in international trade.

**Key Features:**
- Globally recognized and used
- Supports diverse business processes
- Ideal for international trade

**TRADACOMS: The UK Retail Standard**

TRADACOMS is an older EDI standard primarily used in the United Kingdom's retail sector. Though it has largely been superseded by EDIFACT and ANSI X12, some UK retailers still rely on TRADACOMS for specific transactions.

**Key Features:**
- UK-centric retail focus
- Simple and straightforward document formats

**VDA: The Automotive Specialist**

The Verband der Automobilindustrie (VDA) standard is specific to the German automotive industry. VDA is tailored to the needs of automotive manufacturers and their suppliers, facilitating seamless communication within this sector.

**Key Features:**
- Specialized for the automotive industry
- Predominantly used in Germany

**Making the Right Choice for Your Business**

Choosing the right EDI standard depends on several factors, including your geographic location, industry, and specific business needs. For businesses operating internationally, adopting multiple standards may be necessary to communicate effectively with various trading partners.

**Why Choose ActionEDI?**

At ActionEDI, we understand the complexities of EDI and offer a full suite of integrations that cater to all major standards. Whether you need ANSI X12 for North American partners or EDIFACT for global transactions, we've got you covered. Our scalable and cost-effective solutions ensure that you can seamlessly connect with your entire supply chain without breaking the bank.

**Ready to simplify your EDI processes and enhance your business efficiency?**

Seamlessly connect your entire supply chain with top-rated EDI software. ActionEDI delivers the most complete and scalable EDI software that makes it easier and more cost-effective to do business with your trading partners. Contact us today to learn more about how ActionEDI can transform your supply chain management.

ActionEDI is a full-service EDI fulfillment partner, offering fully hosted EDI fulfillment software for SMEs that provides a full suite of EDI integrations without breaking the bank - be it for suppliers, trading partners, 3PLs, or ERP integrations.
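To make the idea of a standardized document format concrete, here is a small sketch of how a delimiter-based format in the style of ANSI X12 can be split into segments and elements. The sample string below is simplified and invented for illustration (real X12 interchanges begin with ISA/GS envelopes and carry many more elements), and this toy parser ignores per-interchange delimiter overrides:

```javascript
// Split a raw X12-style string into segments and elements.
// '~' terminates a segment and '*' separates elements (typical defaults).
function parseSegments(raw) {
  return raw
    .split('~')
    .map(s => s.trim())
    .filter(s => s.length > 0)
    .map(segment => {
      const elements = segment.split('*');
      // The first element is the segment identifier, e.g. BEG or REF
      return { id: elements[0], elements: elements.slice(1) };
    });
}

// Simplified, invented sample: a purchase-order-style segment pair
const sample = 'BEG*00*SA*PO123456**20240604~REF*DP*038~';
const segments = parseSegments(sample);

console.log(segments[0].id); // → 'BEG'
```

The point is not the parser itself but what it illustrates: because the standard fixes the delimiters and the position of each element, every trading partner can extract the same fields from the same document.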
actionedi
1,876,869
Getting Started with Quarkus 3, Part 2.2: Web Bundler
After doing server-side rendering with Qute in the previous article...
0
2024-06-04T15:21:06
https://dev.to/pramoth/erimtn-quarkus-3-part-2-web-bundler-1fdk
quarkus
After we did server-side rendering with Qute in the previous article, https://dev.to/pramoth/erimtn-quarkus-3-part-2-web-4bkm, in this article we will bring JavaScript dependencies (npm) into a Qute template using a helper called Web Bundler. It uses mvnpm (which works like WebJars, except that the jars are generated automatically, so new versions arrive about as fast as they do in the npm registry). In this article we will build a Hello World with Alpine.js.

Start by adding the Web Bundler dependency to pom.xml:

```xml
<dependency>
    <groupId>io.quarkiverse.web-bundler</groupId>
    <artifactId>quarkus-web-bundler</artifactId>
    <version>1.5.2</version>
</dependency>
```

Next, find the npm package we want to use, but search for it on https://mvnpm.org/ — you search by the same name as in the npm registry. Then add it to pom.xml:

```xml
<dependency>
    <groupId>org.mvnpm</groupId>
    <artifactId>alpinejs</artifactId>
    <version>3.14.0</version>
    <scope>provided</scope>
</dependency>
```

As before, there is no need to restart Quarkus: it loads the dependencies and reloads automatically. The Web Bundler looks for a js entry point among the files in src/main/resources/web/app/**. Create an index.js file whose content imports Alpine.js the way its documentation describes:

```javascript
import Alpine from 'alpinejs'

window.Alpine = Alpine
Alpine.start()
```

Then put `{#bundle /}` into hello.html:

```html
<!DOCTYPE html>
<html lang="en">
<head>
    {#bundle /}
</head>
....
```

When we refresh /hello, we find that our js has been imported into the page together with its dependencies!! Super easy.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x8g278lz1o6ondlfa6qh.png)

Try adding some Alpine.js to the body:

```html
<div x-data="{ count: 0 }">
    <button x-on:click="count++">Increment</button>
    <span x-text="count"></span>
</div>
```

Refresh and see the result immediately. Wow!

P.S. For anyone who cannot find the mvnpm dependencies, add the repository as described in the docs: https://docs.quarkiverse.io/quarkus-web-bundler/dev/advanced-guides.html
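As an aside: if the inline `x-data` object grows, the same state can live in `index.js` and be registered as a named component with `Alpine.data()` before `Alpine.start()` is called. Below is a small sketch of that idea; the component name `counter` and its shape are my own example, not something from this tutorial:

```javascript
// In src/main/resources/web/app/index.js, before Alpine.start(), one could register:
//   Alpine.data('counter', counter)
// which lets the template say <div x-data="counter"> instead of an inline object.

// The component itself is just a factory returning plain state plus methods:
const counter = () => ({
  count: 0,
  increment() {
    this.count++
  },
})
```

Keeping component factories as plain functions like this also makes them trivial to unit-test without a browser.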
pramoth
1,876,812
10 Luxury Corporate Gifts to Give Your VIPs
You must distinguish yourself from the competition in this crowded world if you want to impress your...
0
2024-06-04T15:17:18
https://dev.to/blog-business/how-to-sell-soap-online-step-by-step-guide-541g
You must distinguish yourself from the competition in this crowded world if you want to impress your clients. One way to do that is by impressing them with your luxury corporate gifts. This approach is always effective since it allows companies to appeal to customers and capture their attention without asking for anything in return. And, more often than not, clients feel compelled to return the favor.

Luxury corporate gifts enclosed in luxury packaging are a great way to show your appreciation for your loyal business associates, employees, or clients while also standing out from the crowd. Such gifts are typically pricey and impressive, but they can also instill confidence about your company in the minds of your VIPs. So, a luxury gift is an essential gesture in your marketing strategy, whether you want to impress a new client, establish lasting connections and relationships with a loyal client, or appreciate a long-time employee.

Luxury presents appeal to a wide range of individuals, but when it comes to gift-giving, it's all about the recipient's personality. Here are 10 luxury corporate gift ideas to demonstrate your gratitude to your VIPs, whether the receiver is an employee, a traveler, a food enthusiast, a photographer, or a long-time client of the company:

## 1. Engraved Pen

A luxurious pen symbolizes class and functionality. And how can you improve an already great gift? By adding a personalized engraving to make your VIPs feel special. Furthermore, its packaging also matters a lot to create everlasting artistry and a memorable unboxing experience.

Choose a pen that combines jewelry-like artistry with the functionality of a writing instrument to make for the perfect luxury corporate gift. Also, choose a luxury [rigid box](https://www.blueboxpackaging.com/product/rigid-boxes/) and add your recipient's name and a unique message carefully engraved on the box to make this quality gift a treasured keepsake.

Such a unique executive gift will be perfect for retirements, anniversaries, and other occasions. Besides, when the VIPs are signing contracts with your luxury corporate gift, they are sure to return the favor of gratitude.

## 2. Air Drone

Drones are excellent luxury gift ideas if your VIP is into photography or simply enjoys advanced technology. A drone will allow the recipient to live their high-quality cinematography and aerial photography dreams. As a result, when VIPs use their drone to create extraordinary quality marketing content, promotional videos, and movies, they will undoubtedly think of your name with gratitude.

## 3. Business Bag

For the discerning professional with refined taste, a business bag made of the highest quality leather is the ideal luxury corporate gift. Choose a bag made by expert artisans from the finest quality leather to ensure that it will be a good addition to the VIP's wardrobe. When the receiver realizes how practical and well-made their elegant business bag is, they are bound to remember you and your company for a long time.

## 4. Bluetooth Speakers

Another great option to choose from is Bluetooth speakers. If your VIP is an audiophile, they are sure to appreciate them. These gadgets help VIPs in various ways, including playing music or plugging in a microphone to make announcements. With a variety of well-known brands and price ranges to select from, a Bluetooth speaker thus makes for an incredibly practical and portable luxury corporate gift for your VIPs.

## 5. Leather Padfolio

Nothing says understated luxury quite like a premium leather padfolio. It is one of the best and most functional options to choose for your VIPs. This stylish option will be right at home in the corner office. It's functional too, with a secure zip closure, a sophisticated internal organizer with ample storage including a zippered file pocket, and dedicated spaces for business cards and a calculator or smartphone.

## 6. Computer Backpack

Great gifts are gifts that get used every day – not left to collect dust on a shelf. This executive computer backpack is the perfect gift for modern business professionals. It comes with plenty of storage, an RFID pocket to keep devices secure, and a sleek, matte-black design that will look right at home in business class.

## 7. Camera

If the VIP prefers traditional photography, a high-end traditional camera will be a good option to choose. A high-resolution camcorder can be an adventure-friendly, professional-quality companion for your recipient to capture high-resolution footage. These devices also allow for 360-degree footage and straight-on views, ideal for stills, panoramas, and video. So, when your VIP goes on their next shoot, having your present with them will ensure that they remember your business for a long time.

## 8. Multi-Device Wireless Charger

This is one of those products that almost everyone needs in their daily routine. This multi-device wireless charger can charge two wireless devices at once, making it the perfect gift for business professionals. Your recipients can get rid of their endless charging cords and replace them with this clean, stylish charging pad complete with your business's logo.

## 9. Golf Bag

Golfers enjoy a superb game no matter what time of year it is. For your VIP executives and top-tier clients, there are a variety of stylish golf accessories and collectibles to send as luxury corporate gifts. These are also appropriate prizes to give away at corporate golf tournaments. Such luxurious sets come with professionally curated accessories and equipment to help the recipient achieve optimal performance and take their golf experience to the next level. The pleasure of this sport can only be amplified by your thoughtful present, which is sure to earn you a special place in the VIP's heart.

## 10. Headphones

A nice set of wireless headphones makes for a luxury corporate gift that serves to drown out the outside world's noise and offer your VIPs the calm they need to focus on their jobs. Furthermore, for those working from home today, the importance of a good pair of headphones cannot be overstated. When you send high-quality headphones as a luxury corporate gift, the recipient will think of you every time they use them for online meetings or to quiet the office so they can focus on their work.
blog-business
1,876,344
Why is Kubernetes Debugging so Problematic?
The Immutable Nature of Containers The Limitations of kubectl exec Avoiding Direct...
0
2024-06-04T15:15:04
https://debugagent.com/why-is-kubernetes-debugging-so-problematic
kubernetes, devops, tutorial, java
- [The Immutable Nature of Containers](#the-immutable-nature-of-containers)
- [The Limitations of `kubectl exec`](#the-limitations-of-raw-kubectl-exec-endraw-)
- [Avoiding Direct Modifications](#avoiding-direct-modifications)
- [Enter Ephemeral Containers](#enter-ephemeral-containers)
  * [Using `kubectl debug`](#using-raw-kubectl-debug-endraw-)
- [Practical Application of Ephemeral Containers](#practical-application-of-ephemeral-containers)
- [Security Considerations](#security-considerations)
- [Interlude: The Role of Observability](#interlude-the-role-of-observability)
- [Command Line Debugging](#command-line-debugging)
- [Connecting a Standard IDE for Remote Debugging](#connecting-a-standard-ide-for-remote-debugging)
- [Conclusion](#conclusion)

Debugging application issues in a Kubernetes cluster can often feel like navigating a labyrinth. Containers are ephemeral by design, intended to be immutable once deployed. This presents a unique challenge when something goes wrong and we need to dig into the issue. Before diving into the debugging tools and techniques, it's essential to grasp the core problem: why modifying container instances directly is a bad idea. This blog post will walk you through the intricacies of Kubernetes debugging, offering insights and practical tips to effectively troubleshoot your Kubernetes environment.

{% embed https://youtu.be/xkOekt02mNY %}

As a side note, if you like the content of this and the other posts in this series, check out my [Debugging book](https://www.amazon.com/dp/1484290410/) that covers this subject.
If you have friends that are learning to code, I'd appreciate a reference to my [Java Basics book.](https://www.amazon.com/Java-Basics-Practical-Introduction-Full-Stack-ebook/dp/B0CCPGZ8W1/) If you want to get back to Java after a while, check out my [Java 8 to 21 book](https://www.amazon.com/Java-21-Explore-cutting-edge-features/dp/9355513925/).

### The Immutable Nature of Containers

One of the fundamental principles of Kubernetes is the immutability of container instances. This means that once a container is running, it shouldn't be altered. Modifying containers on the fly can lead to inconsistencies and unpredictable behavior, especially as Kubernetes orchestrates the lifecycle of these containers, replacing them as needed. Imagine trying to diagnose an issue only to realize that the container you're investigating has been modified, making it difficult to reproduce the problem consistently.

The idea behind this immutability is to ensure that every instance of a container is identical to any other instance. This consistency is crucial for achieving reliable, scalable applications. If you start modifying containers, you undermine this consistency, leading to a situation where one container behaves differently from another, even though they are supposed to be identical.

### The Limitations of `kubectl exec`

We often start our journey in Kubernetes with commands such as:

```bash
$ kubectl exec -ti <pod-name> -- sh
```

This logs into a container and feels like accessing a traditional server with SSH. However, this approach has significant limitations. Containers often lack basic diagnostic tools—no `vim`, no `traceroute`, sometimes not even a shell. This can be a rude awakening for those accustomed to a full-featured Linux environment. Additionally, if a container crashes, `kubectl exec` becomes useless as there's no running instance to connect to. This tool is insufficient for thorough debugging, especially in production environments.
Consider the frustration of logging into a container only to find out that you can't even open a simple text editor to check configuration files. This lack of basic tools means that you are often left with very few options for diagnosing problems. Moreover, the minimalistic nature of many container images, designed to reduce their attack surface and footprint, exacerbates this issue.

### Avoiding Direct Modifications

While it might be tempting to install missing tools on-the-fly using commands like `apt-get install vim`, this practice violates the principle of container immutability. In production, installing packages dynamically can introduce new dependencies, potentially causing application failures. The risks are high, and it's crucial to maintain the integrity of your deployment manifests, ensuring that all configurations are predefined and reproducible.

Imagine a scenario where a quick fix in production involves installing a missing package. This might solve the immediate problem but could lead to unforeseen consequences. Dependencies introduced by the new package might conflict with existing ones, leading to application instability. Moreover, this approach makes it challenging to reproduce the exact environment, which is vital for debugging and scaling your application.

### Enter Ephemeral Containers

The solution to the aforementioned problems lies in ephemeral containers. Kubernetes allows the creation of these temporary containers within the same pod as the application container you need to debug. These ephemeral containers are isolated from the main application, ensuring that any modifications or tools installed do not impact the running application.

Ephemeral containers provide a way to bypass the limitations of `kubectl exec` without violating the principles of immutability and consistency. By launching a separate container within the same pod, you can inspect and diagnose the application container without altering its state.
This approach preserves the integrity of the production environment while giving you the tools you need to debug effectively.

#### Using `kubectl debug`

The `kubectl debug` command is a powerful tool that simplifies the creation of ephemeral containers. Unlike `kubectl exec`, which logs into the existing container, `kubectl debug` creates a new container within the same namespace. This container can run a different OS, mount the application container’s filesystem, and provide all necessary debugging tools without altering the application’s state. This method ensures you can inspect and diagnose issues even if the original container is not operational.

For example, let’s consider a scenario where we’re debugging a container using an ephemeral Ubuntu container:

```bash
kubectl debug -it <pod-name> --image=ubuntu --share-processes --copy-to=<pod-name>-debug
```

This command launches a new Ubuntu-based container alongside the application container, providing a full-fledged environment to diagnose it. Even if the original container lacks a shell or crashes, the ephemeral container remains operational, allowing you to perform necessary checks and install tools as needed. It relies on the fact that we can have multiple containers in the same pod; that way we can inspect the filesystem of the debugged container without physically entering it.

### Practical Application of Ephemeral Containers

To illustrate, let’s delve deeper into how ephemeral containers can be used in real-world scenarios. Suppose you have a container that consistently crashes due to a mysterious issue. By deploying an ephemeral container with a comprehensive set of debugging tools, you can monitor the logs, inspect the filesystem, and trace processes without worrying about the constraints of the original container environment.

For instance, you might encounter a situation where an application container crashes due to an unhandled exception.
By using `kubectl debug`, you can create an ephemeral container that shares the same network namespace as the original container. This allows you to capture network traffic and analyze it to understand if there are any issues related to connectivity or data corruption.

### Security Considerations

While ephemeral containers reduce the risk of impacting the production environment, they still pose security risks. It’s critical to restrict access to debugging tools and ensure that only authorized personnel can deploy ephemeral containers. Treat access to these systems with the same caution as handing over the keys to your infrastructure.

Ephemeral containers, by their nature, can access sensitive information within the pod. Therefore, it is essential to enforce strict access controls and audit logs to track who is deploying these containers and what actions are being taken. This ensures that the debugging process does not introduce new vulnerabilities or expose sensitive data.

### Interlude: The Role of Observability

{% embed https://youtu.be/bRnOGb7rUV4 %}

While tools like `kubectl exec` and `kubectl debug` are invaluable for troubleshooting, they are not replacements for comprehensive observability solutions. Observability allows you to monitor, trace, and log the behavior of your applications in real-time, providing deeper insights into issues without the need for intrusive debugging sessions. These tools aren't meant for everyday debugging; that role should be filled by various observability tools. I will discuss observability in more detail in an upcoming post.

### Command Line Debugging

While tools like `kubectl exec` and `kubectl debug` are invaluable, there are times when you need to dive deep into the application code itself. This is where we can use command line debuggers. Command line debuggers allow you to inspect the state of your application at a very granular level, stepping through code, setting breakpoints, and examining variable states.
Personally, I don't use them much. For instance, Java developers can use `jdb`, the Java Debugger, which is analogous to `gdb` for C/C++ programs. Here’s a basic rundown of how you might use `jdb` in a Kubernetes environment:

**Set Up Debugging**: First, you need to start your Java application with debugging enabled. This typically involves adding a debug flag to your Java command. However, as discussed in [my post here](https://debugagent.com/mastering-jhsdb-the-hidden-gem-for-debugging-jvm-issues), there's an even more powerful way that doesn't require a restart:

```bash
java -agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=*:5005 -jar myapp.jar
```

**Port Forwarding**: Since the debugger needs to connect to the application, you’ll set up port forwarding to expose the debug port of your pod to your local machine. This is important as [JDWP is dangerous](https://debugagent.com/remote-debugging-dangers-and-pitfalls):

```bash
kubectl port-forward <pod-name> 5005:5005
```

**Connecting the Debugger**: With port forwarding in place, you can now connect `jdb` to the remote application:

```bash
jdb -attach localhost:5005
```

From here, you can use `jdb` commands to set breakpoints, step through code, and inspect variables. This process allows you to debug issues within the code itself, which can be invaluable for diagnosing complex problems that aren’t immediately apparent through logs or superficial inspection.

### Connecting a Standard IDE for Remote Debugging

I prefer IDE debugging by far. I never used JDB for anything other than a demo. Modern IDEs support remote debugging, and by leveraging Kubernetes port forwarding, you can connect your IDE directly to a running application inside a pod.

To set up remote debugging, we start with the same steps as command line debugging: configuring the application and setting up port forwarding.

1. **Configure the IDE**: In your IDE (e.g., IntelliJ IDEA, Eclipse), set up a remote debugging configuration.
Specify the host as [`localhost`](http://localhost) and the port as `5005`. 2. **Start Debugging**: Launch the remote debugging session in your IDE. You can now set breakpoints, step through code, and inspect variables directly within the IDE, just as if you were debugging a local application. I show how to do it in IntelliJ/IDEA [here](https://debugagent.com/remote-debugging-dangers-and-pitfalls). ### Conclusion Debugging Kubernetes environments requires a blend of traditional techniques and modern tools designed for container orchestration. Understanding the limitations of `kubectl exec` and the benefits of ephemeral containers can significantly enhance your troubleshooting process. However, the ultimate goal should be to build robust observability into your applications, reducing the need for ad-hoc debugging and enabling proactive issue detection and resolution. By following these guidelines and leveraging the right tools, you can navigate the complexities of Kubernetes debugging with confidence and precision. In the next installment of this series, we’ll delve into common configuration issues in Kubernetes and how to address them effectively.
codenameone
1,876,811
The Future of Conferences: Navigating the Shift to Virtual
As we move further into 2024, conferences are changing because of technology advancements. The global...
0
2024-06-04T15:14:19
https://dev.to/priyanka_aich/the-future-of-conferences-navigating-the-shift-to-virtual-34ad
devops, ai
As we move further into 2024, conferences are changing because of technology advancements. The global shift towards virtual gatherings, accelerated by necessity, has evolved into a preference for many, driven by convenience, accessibility, and new technological possibilities. 89% of event planners see virtual conferences as essential for the future. This article explores the future of conferences, highlighting opportunities, and strategies that are paving the way for a whole new approach to networking, learning, and engaging. **The Future of Conferences** In 2022, virtual conferences started coming back strong. From late 2021 to mid-2022, there was a big 255% increase in in-person and hybrid conferences. Event Farm did a survey that year and found that 93% of planners were planning virtual or hybrid conferences, which was way more than the year before. Even though in-person conferences are back, virtual conferences are still really popular. In 2024, 82% of businesses plan to keep hosting as many or even more virtual conferences as they did in the past year. Many companies will use a mix of in-person and virtual conferences, but 25% are moving to virtual-only conferences. But businesses are still organizing all sizes of conferences in person, so it’s important to be ready for both kinds of conferences. The virtual conferences industry is growing fast. Reports say that in 2025, there will be 10 times more conferences than before the pandemic. Experts think the global conferences market will grow a lot too, about 23.7% each year until 2028. **Opportunities of Virtual Conferences** In a rapidly evolving landscape, virtual conferences are revolutionizing how we connect and collaborate, presenting unique opportunities. Here are some of them: **Global Reach:** Virtual conferences allow participants from anywhere in the world to attend without the need for travel, significantly reducing the carbon footprint associated with transportation. 
This reduction can be substantial, with studies indicating that virtual conferences can cut greenhouse gas emissions by up to 99% compared to in-person conferences, depending on factors like event size and attendee locations. **Cost-Effectiveness:** Hosting virtual conferences is often more cost-effective than traditional in-person gatherings. Estimates suggest that virtual conferences can save businesses up to 75% on event costs, including venue rental, travel, accommodations, catering, and logistical expenses. This cost reduction makes conferences more accessible to a broader audience and promotes financial sustainability for both organizers and attendees. **Flexibility and Convenience:** Virtual conferences offer flexibility in scheduling, allowing participants to attend sessions at their convenience. This flexibility is particularly beneficial for professionals juggling work, personal commitments, and time zone differences. Research shows that 86% of participants appreciate the flexibility of virtual events, citing improved work-life balance as a significant advantage. **Data and Analytics:** Digital platforms used for virtual conferences provide robust data and analytics capabilities, offering valuable insights into attendee behavior, preferences, engagement levels, and content effectiveness. Studies indicate that 70% of event organizers use data analytics to measure event success and make informed decisions for future events, leading to continuous improvement and personalized experiences. **Innovative Engagement:** Virtual conferences leverage technology to create interactive and engaging experiences for participants. Features like live polls, Q&A sessions, networking lounges, virtual booths, and gamification enhance participant engagement and satisfaction. Surveys indicate that 82% of attendees find virtual conferences engaging and enjoyable, highlighting the effectiveness of innovative engagement strategies. 
**Strategies to Host a Successful Virtual Conference**

As organizations step into the world of virtual conferences, they realize how crucial it is to make these events run smoothly and keep participants engaged. To fully benefit from virtual conferences, it’s important to have a strategic plan in place. This section will offer essential strategies for hosting effective virtual conferences, ensuring that everyone involved has a seamless and impactful experience.

**1. Create Engaging Content**

Distractions are common in remote settings, and studies show that human attention spans have decreased significantly over the years. Microsoft found that attention spans have dropped by 8 seconds, which is about 25% less than before. Keeping your audience engaged is crucial. By giving them interactive tasks instead of just watching, you can boost engagement. Studies also reveal that most groups lose focus after just 10 minutes in online conferences.

**2. Make Audience Interaction and Networking Seamless**

Engaging people in person can be challenging enough, but when you move events online, you’re up against even more distractions. However, you don’t have to settle for low engagement. Surprisingly, 47% of people are more likely to ask questions at virtual events, and 37% are more likely to chat with others in a virtual booth compared to a real one. The key is creating opportunities for interaction. If your event includes physical activities like yoga or cooking, it’s easier to keep the audience engaged. Think creatively. Encourage people to stand up, move around, or join in group activities for a fun and impactful experience.

**3. Focus on Innovative Networking**

People attending virtual events or exhibiting at them want to ensure they still have great opportunities to engage and network. That’s why the success of any virtual event depends a lot on how well it fosters interaction.
There are various new chat and networking tools you can include in your virtual event to enhance its value and immersion. Let’s discuss these features and how they can benefit your event. **- 1:1 Interaction** When attendees find someone interesting, they often prefer sending them a message to start a connection. If things go smoothly, they might schedule a meeting or have a video call. Live chat tools allow for one-on-one and group discussions, ensuring seamless interaction between visitors and exhibitors in real time. **- AI-enabled Matching** You can also use AI-enabled matchmaking to pair attendees with their ideal counterparts at the virtual event, based on their interests and backgrounds. This makes it easier to connect with like-minded individuals and saves time. Matchmaking features can include icebreaker questions to kickstart conversations. **4. Ensure Robust Customer Support & 24/7 Technology Assistance** Even if your audience is tech-savvy, technical glitches can still happen. If attendees can’t resolve these issues quickly during your event, they may lose interest. For a successful online conference, prioritize excellent customer support and tech assistance for hosts and attendees before, during, and after the event. Your virtual platform should guarantee quality and scalability and provide 24/7 technical support. Offering reliable support shows your commitment to your audience’s experience. Attendees who use the customer service tool will value the availability and see your brand as knowledgeable and trustworthy. **Conclusion** The future of digital conferences and events is promising, thanks to technological advancements that enhance engagement, accessibility, and personalization. In this landscape, [ibentos Virtual Event Platform](https://ibentos.com/virtual-event-platform/) stands out, introducing innovative features and benefits that elevate the event experience. 
Our platform isn’t just a complement to traditional events; it redefines expectations for professional networking and learning. With ibentos, attendees enjoy seamless virtual interactions, immersive experiences, and comprehensive tools for networking and collaboration. The evolution of digital events showcases how technology can connect us in meaningful ways, highlighting the limitless potential for growth and connectivity. Join us in shaping the future of conferences and events with ibentos’ Virtual Event Platform, where innovation meets seamless connectivity for impactful experiences. _**Source:** https://ibentos.com/blogs/the-future-of-conferences-navigating-the-shift-to-virtual/_
priyanka_aich
1,876,809
BEST CRYPTOCURRENCY RECOVERY COMPANY - CONTACT DIGITAL WEB RECOVERY
The promise of easy money, of turning a small investment into a fortune, can be difficult to resist....
0
2024-06-04T15:12:00
https://dev.to/genci_zane_bd4a22e84fad39/best-cryptocurrency-recovery-company-contact-digital-web-recovery-f6l
The promise of easy money, of turning a small investment into a fortune, can be difficult to resist. I, like many others, was captivated by the allure of binary options, drawn in by the promise of quick profits and financial freedom. The initial investment, a substantial $152,000 poured into the platform over a mere two weeks, felt like a small price to pay for the potential windfall that awaited. The platform seemed legitimate, the broker a trusted advisor, and the initial gains were intoxicating, a taste of the sweet success I had been promised. My account balance swelled to $380,000, a testament to the "expertise" of my broker and the seemingly foolproof nature of the platform. However, as time went on, a chilling reality began to set in. The promised withdrawals, the fruits of my labor, were repeatedly delayed. New demands for additional funds, a relentless barrage of requests for more and more money, became a constant hurdle, a sinister sign that something was amiss. The broker, once a reassuring figure, now seemed to be a shadowy puppet master, pulling the strings of my financial destiny. When I tried to withdraw my "profits," my attempts were met with roadblocks, excuses, and a relentless barrage of demands for even more capital. The platform, once a gateway to riches, now felt like a gilded cage, trapping me in a cycle of despair and financial ruin. The truth hit me like a tidal wave: I had become a victim of a ruthless scam, a carefully crafted con designed to exploit the dreams of unsuspecting investors. My $152,000, along with the additional $150,000 I had been forced to deposit, were gone, vanished into the digital abyss, stolen by the very people I had trusted with my financial future. Despair washed over me, a cold wave of dread that threatened to engulf my entire world. The authorities seemed powerless, unable to penetrate the veil of secrecy that surrounded these digital thieves. I was lost, alone, and utterly helpless. 
It was then, I stumbled upon Digital Web Recovery. They were a beacon of hope, a lifeline thrown to a drowning man. Their expertise in the digital world, their understanding of the intricacies of online scams, and their unwavering commitment to justice gave me a glimmer of hope, a reason to believe that I might reclaim what had been stolen from me. Digital Web Recovery didn't offer empty promises; they provided a clear roadmap, explaining the challenges and the potential pitfalls of recovering my lost funds. They were transparent, honest, and dedicated to fighting for their clients. They worked tirelessly, employing their skills and expertise to track down the scammers, unraveling the intricate web of deceit that had trapped me. Their success, the recovery of my stolen funds, was a testament to their dedication, their expertise, and their unwavering commitment to justice. They were more than just a recovery agency; they were champions of the innocent, digital knights fighting against the forces of corruption that plague the online world. Digital Web Recovery was a turning point, a reminder that even in the darkest of times, there is always hope, always a chance to reclaim what has been stolen. They were the beacon of light that guided me out of the darkness, the guiding hand that helped me navigate the treacherous waters of the digital world. They are a testament to the enduring power of human ingenuity and determination, a force for good in a world that can be fraught with betrayal. If you have fallen victim to an online scam, if you have lost your savings to a fraudulent investment platform, do not despair. Reach out to Digital Web Recovery. They will fight tirelessly to reclaim what has been stolen from you. Contact Info; Website https://digitalwebrecovery.com Email; digitalwebexperts@zohomail.com Telegram user; @digitalwebrecovery ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ey9wkaxn2n6hynhhwswn.jpeg)
genci_zane_bd4a22e84fad39
1,876,808
Search Component React
import "./styles.css"; import React, { useEffect } from "react"; import { useState } from...
0
2024-06-04T15:11:38
https://dev.to/alamfatima1999/search-component-react-11b8
```JS
import "./styles.css";
import React, { useEffect, useState } from "react";

export default function App() {
  const URL = `https://jsonplaceholder.typicode.com/todos`;
  const [todoList, setTodoList] = useState([]);
  const [text, setText] = useState("");
  const [filteredTodos, setFilteredTodos] = useState([]);

  useEffect(() => {
    const fetchTodos = async () => {
      try {
        const response = await fetch(URL);
        const data = await response.json();
        setTodoList(data);
        setFilteredTodos(data);
      } catch (error) {
        console.error("Failed to fetch todos:", error);
      }
    };
    fetchTodos();
  }, []);

  const changeText = (e) => {
    setText(e.target.value);
    const filtered = todoList.filter((todo) =>
      todo.title.toLowerCase().includes(e.target.value.toLowerCase())
    );
    setFilteredTodos(filtered);
  };

  return (
    <div className="App">
      <input type="text" value={text} onChange={changeText} />
      <ul>
        {filteredTodos.map((todo) => (
          <li key={todo.id}>{todo.title}</li>
        ))}
      </ul>
    </div>
  );
}
```
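A refinement worth noting (my addition, not part of the original snippet): filtering on every keystroke is fine for a small local list, but for larger data sets or server-side search you would usually debounce the input. A minimal, hypothetical debounce helper could look like this:

```javascript
// A minimal debounce helper (illustrative; names are hypothetical).
// It delays calling `fn` until `delayMs` ms have passed without a new call.
function debounce(fn, delayMs) {
  let timer = null;
  return (...args) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage sketch inside the component (assumed wiring, not from the original):
// const debouncedFilter = useMemo(() => debounce(applyFilter, 300), [todoList]);
// <input onChange={(e) => { setText(e.target.value); debouncedFilter(e.target.value); }} />
```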
alamfatima1999
1,876,806
Deep Dive into SideEffects Configuration
In Webpack2, support for ES Modules was added, allowing Webpack to analyze unused export content and...
0
2024-06-04T15:11:03
https://dev.to/markliu2013/deep-dive-into-sideeffects-configuration-14me
javascript
In Webpack 2, support for ES Modules was added, allowing Webpack to analyze unused export content and then tree-shake it away. However, code in the module that has side effects will be retained by Webpack. For example, there are modules utils/a.js and utils/b.js in the project, and a unified entry is provided through utils/index.js. The module b contains a print statement, which has side effects.

```js
// utils/a.js
export function a() {
  console.log('aaaaaaaaaaaaa');
}

// utils/b.js
console.log('======== b.js ==========');
export function b() {
  console.log('bbbbbbbbbbbbbb');
}

// utils/index.js
export * from './a';
export * from './b';
```

We add the main entry, app.js, which only references the module a. We expect the unused module b to be tree-shaken away.

```js
// app.js
import { a } from './utils';
a();
```

Let's take a look at the output of the webpack build; note that we should use production mode. The result is as follows (I have removed the irrelevant webpack startup code).

```js
// output
([
  function(e, t, r) {
    'use strict';
    r.r(t), console.log('======== b.js =========='), console.log('aaaaaaaaaaaaa');
  },
])
```

The module b is not included, but the side effect code in b.js is retained, which is reasonable.

### What is sideEffects?

Let's modify the content of b.js.

```js
// utils/b.js
Object.defineProperty(Array.prototype, 'sum', {
  value: function() {
    return this.reduce((sum, num) => sum + num, 0);
  }
})

export function b() {
  console.log([1, 2, 3, 4].sum());
}
```

We have defined a new method 'sum' on the Array prototype, which has side effects. We then call this method in the module b, but as the maintainer of the module b, I want 'sum' to be 'pure': only used by me, with nothing outside depending on its implementation. Modify package.json and add a new field "sideEffects": false; this field indicates that the entire project is 'side effect free'.
Use webpack to compile again, expecting that when the b module is not used, the 'sum' method defined in b will also be tree-shaken off. The result is as follows.

```js
([
  function(e, t, r) {
    'use strict';
    r.r(t), console.log('aaaaaaaaaaaaa');
  },
])
```

As expected, the entire module b has been tree-shaken off, including the code with side effects. Therefore, sideEffects can optimize the package size, and to a certain extent, it can reduce the amount of source code analysis webpack has to do, speeding up the build. You can try out the packaging results in other situations, such as referencing the b module, setting the sideEffects value to true, or removing sideEffects entirely.

### sideEffects Configuration

Besides a boolean value, sideEffects can also be set to an array, listing code files whose side effects must be retained (for example: './src/polyfill.js'), or wildcard patterns for fuzzy matching (for example: './src/**/*.css').

```js
sideEffects: boolean | string[]
```

### Precautions for sideEffects

In real projects, we usually can't simply set "sideEffects" to false, as some side effects need to be retained, such as importing style files. Webpack considers all bare `import 'xxx'` statements as imports that are not used. If you mistakenly declare them as 'without side effects', they will be tree-shaken away. And since tree-shaking only takes effect in production mode, everything may still seem normal during local development, and problems may not be detected until they hit the production environment. The following are examples of 'imports that are not used'.

```js
import './normalize.css';
import './polyfill';
import './App.less';
```

This is an import that is used:

```js
import icon from './icon.png';

function Icon() {
  return (
    <img src={icon} />
  )
}
```

For these files with side effects, we need to declare them correctly by modifying the sideEffects value.
```js
// package.json
"sideEffects": [
  "./src/**/*.css"
]
```

**When using it, be sure to set the sideEffects value correctly.**

### Limitations of sideEffects

The sideEffects configuration is file-based. Once you configure a file as having side effects, the side effects will be retained even if you only use the part of the file that has none. For example, modify b.js to

```js
Object.defineProperty(Array.prototype, 'sum', {
  value: function() {
    return this.reduce((sum, num) => sum + num, 0);
  }
})

export function b() {
  console.log([1, 2, 3, 4].sum());
}

export function c() {
  console.log('ccccccccccccccccccc');
}
```

In app.js, only the c method is imported; the b method will be tree-shaken, but the sum method will not be.

### Epilogue

sideEffects has a significant impact on the webpack build process, especially for npm modules. It is particularly important to pay attention to the correctness of the declaration when using it.
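As a closing illustration (this example is mine, not from the article above): the prototype patch exists only to provide `sum`, so one way to sidestep the limitation entirely is to export a pure helper instead of patching `Array.prototype`. Pure, unused exports are tree-shaken by default, with no `sideEffects` declaration needed:

```javascript
// utils/sum.js (illustrative sketch) — a pure helper instead of a prototype patch.
// In the real ES module this would be written as `export function sum(...)`.
function sum(numbers) {
  // Same reduction as the prototype version, but with no global mutation
  return numbers.reduce((total, num) => total + num, 0);
}

console.log(sum([1, 2, 3, 4])); // 10
```

Module b could then call `sum([1, 2, 3, 4])` directly, and bundlers can safely drop the helper whenever b itself is unused.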
markliu2013
1,876,804
Explore GRASS: The Web sharing revolution, the benefits of 24-hour automated, unmanned online mining.
A post by whldasd
0
2024-06-04T15:08:39
https://dev.to/whldasd/explore-grass-the-web-sharing-revolution-the-benefits-of-24-hour-automated-unmanned-online-mining-j15
whldasd
1,876,805
Explore GRASS: The Web sharing revolution, the benefits of 24-hour automated, unmanned online mining.
A post by whldasd
0
2024-06-04T15:08:39
https://dev.to/whldasd/explore-grass-the-web-sharing-revolution-the-benefits-of-24-hour-automated-unmanned-online-mining-51dd
whldasd
1,876,803
Optimizing Node.js Performance: Best Practices for High-Traffic Apps
Node.js is a powerful platform for building scalable and high-performance applications. However, as...
0
2024-06-04T15:08:15
https://dev.to/codesensei/optimizing-nodejs-performance-best-practices-for-high-traffic-apps-4do9
webdev, javascript, tutorial, performance
Node.js is a powerful platform for building scalable and high-performance applications. However, as traffic increases, so does the need for optimization to ensure efficiency and speed. In this article, I'll share techniques for optimizing Node.js applications to handle high traffic, drawing from my experience in developing high-traffic applications. ## Summary This article explores methods to optimize Node.js applications, covering profiling and monitoring tools, optimizing asynchronous operations and event loops, memory management, and CPU usage tips. By implementing these best practices, you can significantly improve your Node.js application's performance. ## 1. Profiling and Monitoring Tools for Node.js To identify performance bottlenecks, use profiling and monitoring tools. These tools help you understand where your application spends most of its time and resources. ### Profiling Tools - **Node.js built-in Profiler**: Use the built-in V8 profiler to generate CPU profiles. ```bash node --prof app.js node --prof-process isolate-0xnnnnnnnnnnnn-v8.log ``` - **Clinic.js**: A suite of tools to diagnose and pinpoint performance issues in Node.js applications. ```bash npm install -g clinic clinic doctor -- node app.js ``` ### Monitoring Tools - **PM2**: A process manager that includes monitoring capabilities. ```bash npm install pm2 -g pm2 start app.js --name "my-app" pm2 monit ``` ## 2. Optimizing Asynchronous Operations and Event Loops Node.js uses an event-driven, non-blocking I/O model, making it essential to handle asynchronous operations efficiently. ### Use Promises and Async/Await Using Promises and async/await can simplify asynchronous code and make it more readable. 
```javascript
async function fetchData() {
  try {
    const response = await fetch('https://api.example.com/data');
    const data = await response.json();
    console.log(data);
  } catch (error) {
    console.error('Error fetching data:', error);
  }
}
```

### Avoid Blocking the Event Loop

Avoid synchronous operations that block the event loop. For example, use `fs.promises` instead of synchronous `fs` methods.

```javascript
const fs = require('fs');

// Bad: Synchronous file read
const data = fs.readFileSync('/path/to/file');

// Good: Asynchronous file read
const data = await fs.promises.readFile('/path/to/file');
```

### Optimize Heavy Computations

Offload heavy computations to worker threads or use child processes to prevent blocking the main event loop.

```javascript
const { Worker } = require('worker_threads');

const worker = new Worker('./worker.js');
worker.on('message', message => {
  console.log(message);
});
worker.postMessage('Start computation');
```

## 3. Memory Management and CPU Usage Tips

Efficient memory management and CPU usage are crucial for high-performance Node.js applications.

### Avoid Memory Leaks

Identify and fix memory leaks by monitoring memory usage and using tools like `heapdump`.

```bash
npm install heapdump
```

```javascript
const heapdump = require('heapdump');

// Trigger a heap dump
heapdump.writeSnapshot('/path/to/dump.heapsnapshot');
```

### Use Efficient Data Structures

Choose the right data structures for your use case. For instance, use `Buffer` for handling binary data instead of strings.

```javascript
const buffer = Buffer.from('Hello, World!');
```

### Tune Garbage Collection

Use command-line options to tune the V8 garbage collector for your application's needs.

```bash
node --max-old-space-size=4096 app.js
```

## 4. Performance Tuning Stories from High-Traffic Applications

### Case Study: Optimizing API Response Time

In a high-traffic application I developed, we faced significant delays in API response times.
After profiling, we identified that synchronous database queries were the bottleneck. We optimized the queries and implemented caching, reducing the response time by 50%. ```javascript const cache = new Map(); async function getData(id) { if (cache.has(id)) { return cache.get(id); } const data = await db.query('SELECT * FROM table WHERE id = ?', [id]); cache.set(id, data); return data; } ``` ### Case Study: Improving Throughput with Clustering Another high-traffic application required improved throughput. We used the Node.js cluster module to take advantage of multi-core systems, significantly improving the application's ability to handle concurrent requests. ```javascript const cluster = require('cluster'); const http = require('http'); const numCPUs = require('os').cpus().length; if (cluster.isMaster) { for (let i = 0; i < numCPUs; i++) { cluster.fork(); } cluster.on('exit', (worker, code, signal) => { console.log(`Worker ${worker.process.pid} died`); }); } else { http.createServer((req, res) => { res.writeHead(200); res.end('Hello, World!'); }).listen(8000); } ``` ## Conclusion Optimizing the performance of your Node.js applications is essential for handling high traffic efficiently. By implementing profiling and monitoring tools, optimizing asynchronous operations, managing memory and CPU usage, and learning from real-world examples, you can ensure your Node.js applications remain fast and responsive. Ready to improve your Node.js app’s performance? Connect with me to discuss optimization techniques for high-traffic applications. 🚀 ## Sources - [Node.js Documentation](https://nodejs.org/en/docs/) - [New Relic Node.js Agent](https://docs.newrelic.com/docs/agents/nodejs-agent/getting-started/introduction-new-relic-nodejs/) - [Clinic.js](https://clinicjs.org/) - [PM2](https://pm2.keymetrics.io/) - [V8 Garbage Collection](https://nodejs.org/en/docs/guides/garbage-collection/) --- Improve your Node.js app’s performance! 
Connect with me to discuss optimization techniques for high-traffic applications. 🚀 --- ~~#NodeJS #PerformanceOptimization #HighTraffic #AsyncProgramming #DevTips~~
codesensei
1,876,288
PACX ⁓ Connect to Dataverse environment
We introduced PACX here, as a toolbelt containing commands to streamline the application development...
0
2024-06-04T15:05:44
https://dev.to/_neronotte/pacx-connect-to-dataverse-environment-55c
powerplatform, pacx, dataverse, opensource
We [introduced PACX here](https://dev.to/_neronotte/pacx-command-line-utility-belt-for-power-platform-dataverse-e4e) as a toolbelt containing commands to streamline application development on Dataverse environments. In order to run a command, **PACX** must be aware of the Dataverse environment where you want to run it. This can be achieved using the commands under the [`pacx auth`](https://github.com/neronotte/Greg.Xrm.Command/wiki/pacx-auth) namespace. --- ## Create a new auth profile The `pacx auth create` command can be used to create a new connection to a Dataverse environment. It takes as inputs: - **--name** (-n): a meaningful name for the connection - **--conn** (-cs): a [valid Dataverse connection string](https://learn.microsoft.com/en-us/power-apps/developer/data-platform/xrm-tooling/use-connection-strings-xrm-tooling-connect) that can be used to connect to the desired Dataverse environment. > Please remember to enclose the connection string in double quotes "...": some terminals handle the semicolon (;) in the connection string as a command delimiter. [This causes an error while trying to connect to Dataverse](https://github.com/neronotte/Greg.Xrm.Command/issues/60). Once executed, the command tries to use the provided connection string to connect to the environment and executes a `WhoAmI` request to ensure that the connection works properly. If it succeeds, it creates and saves on the local machine an authentication profile named as specified in the `--name` argument, and sets it as the **default** for subsequent command executions. You can create as many authentication profiles as you need. The `pacx auth list` command can be used to list all of them, while `pacx auth select` lets you select which auth profile must be used in subsequent commands. ## List all auth profiles The `pacx auth list` command can be used to list all the authentication profiles stored on the local machine. 
It does not require any arguments and produces the following output: ![pacx auth list sample output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6atzyig39z1hc5bx58ep.png) The first column contains the name of the auth profile, while the second column contains the connection string. ## Pick the right auth profile The `pacx auth select` command can be used to select which auth profile must be used from now on (until another `pacx auth select` changes your decision). It accepts a single argument **--name** (-n) containing the name of the auth profile you want to set as default. ![pacx auth select sample output](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zreva7xxxkzsgp2c78ti.png) ## Other useful info This namespace provides a few other useful commands: - `pacx auth rename`: can be used to rename an auth profile. It accepts 2 arguments: - **--old** (-o): the old name of the auth profile to rename - **--new** (-n): the new name of the auth profile - `pacx auth delete`: deletes a specific auth profile. It accepts a single argument: - **--name** (-n): the name of the auth profile to delete - `pacx auth ping`: tests the connection to the current default auth profile. No arguments required. You can simply type `pacx auth` in the console to get the list of available commands. Moreover, you can just type the name of a command followed by `--help` if you need help with that specific command's arguments. ## References - [`pacx` official wiki](https://github.com/neronotte/Greg.Xrm.Command/wiki) - [`pacx auth create` official wiki](https://github.com/neronotte/Greg.Xrm.Command/wiki/pacx-auth-create) - [`pacx auth list` official wiki](https://github.com/neronotte/Greg.Xrm.Command/wiki/pacx-auth-list) - [`pacx auth select` official wiki](https://github.com/neronotte/Greg.Xrm.Command/wiki/pacx-auth-select)
_neronotte
1,876,802
Introduction to Docker and Kubernetes with Node.js
Hey devs! With the growing adoption of microservices and the need for scalability, Docker and...
0
2024-06-04T15:00:41
https://dev.to/paulocappa/introduction-to-docker-and-kubernetes-with-nodejs-112f
docker, kubernetes, node, javascript
Hey devs! With the growing adoption of microservices and the need for scalability, Docker and Kubernetes have become indispensable technologies. While Docker facilitates the creation and management of containers, Kubernetes orchestrates these containers in a cluster, providing high availability and scalability. Let's explore how to use Docker and Kubernetes to deploy a Node.js application. ## Prerequisites - Basic knowledge of Docker and Kubernetes - [Node.js installed](https://nodejs.org/en) - [Docker installed](https://www.docker.com/) - [Minikube (or another Kubernetes environment) configured](https://minikube.sigs.k8s.io/docs/start/) ## Project Structure Let's create a simple Node.js project with the following folder structure: ```text my-node-app/ ├── src/ │ └── index.js ├── Dockerfile ├── .dockerignore ├── package.json └── k8s/ ├── deployment.yaml ├── service.yaml └── ingress.yaml ``` ## Step 1: Setting Up the Node.js Application First, let's create the Node.js application. **package.json File** Create the package.json file with the following content: ```json { "name": "my-node-app", "version": "1.0.0", "description": "A simple Node.js app", "main": "src/index.js", "scripts": { "start": "node src/index.js" }, "dependencies": { "express": "^4.17.1" } } ``` **src/index.js File** Create the src directory and, inside it, the index.js file with the following content: ```javascript const express = require('express'); const app = express(); const port = 3000; app.get('/', (req, res) => { res.send('Hello World!'); }); app.listen(port, () => { console.log(`App running on http://localhost:${port}`); }); ``` ## Step 2: Dockerizing the Application **Dockerfile** Create a Dockerfile in the root of the project with the following content: ```Dockerfile # Use the official Node.js base image FROM node:14 # Create a working directory WORKDIR /app # Copy package.json and package-lock.json COPY package*.json ./ # Install dependencies RUN npm install # Copy the rest of the application 
code COPY . . # Expose the application port EXPOSE 3000 # Command to start the application CMD ["npm", "start"] ``` **.dockerignore File** Create a .dockerignore file in the root of the project to prevent unnecessary files from being copied to the container: ```bash node_modules npm-debug.log ``` **Building and Running the Container** To build and run the container, execute the following commands: ```bash docker build -t my-node-app . docker run -p 3000:3000 my-node-app ``` ## Step 3: Orchestrating with Kubernetes Let's create the Kubernetes manifests to deploy the application in a cluster. **k8s/deployment.yaml File** Create the k8s directory and, inside it, the deployment.yaml file with the following content: ```yaml apiVersion: apps/v1 kind: Deployment metadata: name: my-node-app spec: replicas: 2 selector: matchLabels: app: my-node-app template: metadata: labels: app: my-node-app spec: containers: - name: my-node-app image: my-node-app:latest ports: - containerPort: 3000 ``` **k8s/service.yaml File** Create the service.yaml file in the k8s directory with the following content: ```yaml apiVersion: v1 kind: Service metadata: name: my-node-app-service spec: selector: app: my-node-app ports: - protocol: TCP port: 80 targetPort: 3000 type: LoadBalancer ``` **Deploying to Kubernetes** To deploy the application to Kubernetes, execute the following commands: ```bash kubectl apply -f k8s/deployment.yaml kubectl apply -f k8s/service.yaml ``` Check if the pods are running: ```bash kubectl get pods ``` And the service: ```bash kubectl get services ``` The service should display an external IP address where the application will be available. ## Step 4: Setting Up Ingress in Kubernetes To expose the application using an Ingress resource, we need to configure an Ingress controller and create an Ingress resource. 
**Enabling the Ingress Controller in Minikube** Enable the Ingress controller in Minikube with the following command: ```bash minikube addons enable ingress ``` **k8s/ingress.yaml File** Create the ingress.yaml file in the k8s directory with the following content: ```yaml apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: my-node-app-ingress annotations: nginx.ingress.kubernetes.io/rewrite-target: / spec: rules: - host: my-node-app.local http: paths: - path: / pathType: Prefix backend: service: name: my-node-app-service port: number: 80 ``` **Deploying the Ingress Resource** Apply the Ingress resource to the cluster: ```bash kubectl apply -f k8s/ingress.yaml ``` **Testing the Ingress** To test the Ingress, add an entry to your _/etc/hosts_ file to map _my-node-app.local_ to the Minikube IP. Get the Minikube IP with: ```bash minikube ip ``` Then, add the following line to your _/etc/hosts_ file: ```text <minikube-ip> my-node-app.local ``` Replace _<minikube-ip>_ with the actual IP address returned by the _minikube ip_ command. Now, you should be able to access the application at _http://my-node-app.local_. ## Conclusion In this post, we created a simple Node.js application, dockerized it, deployed it to a Kubernetes cluster, and exposed it using an Ingress resource. Docker and Kubernetes are powerful tools that, when combined, provide an efficient way to manage and scale applications. With these tools, you can ensure that your applications are always available and easily scalable as needed.
paulocappa
1,876,801
cryptogram-assistant
I wrote this in React at the start of my new learning journey back in 2021. The goal was to be able...
0
2024-06-04T15:00:01
https://dev.to/cmcrawford2/cryptogram-assistant-2d2k
I wrote this in React at the start of my new learning journey back in 2021. The goal was to be able to format the very difficult cryptograms that I was trying to solve in my puzzle magazines. All I needed was a way to enter the cryptogram, with slots for the guessed letters. Then I would try out letters to break the code. I actually had written this in Microsoft Excel previously. I added a few canned quotes as an afterthought. I wrote this in React, and eventually React with hooks, with a twist. The letters of the cryptogram are stored in divs, and I don't keep track of changes in the divs using state in React. So when the user enters a letter in the box over the coded letter, a useEffect checks the key array, and rewrites the cryptogram with the new letter added in every place where the bottom coded letter is the same. I've only just deployed the new React-with-hooks format, so there might be a bug or two. For a while, it was capitalizing every "I" if "I" happened to be the first word in the quote. I have no idea why, and it just stopped happening. I also have the unfortunate habit of playing computer games addictively, so I tried to make this one as non-addictive as possible. Meaning, you don't get any special recognition or confetti if you solve a cryptogram, and I only have about 26 canned quotes. Let me know in the comments how you liked it!
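For readers curious about the letter-propagation idea described above, here is a minimal plain-JavaScript sketch of the core substitution step (this is an illustration assuming uppercase coded letters, not the app's actual React code):

```javascript
// Apply a map of guessed letters to a coded string: every occurrence of a
// coded letter is replaced by the guessed plaintext letter, or '_' if unguessed.
function applyGuesses(cipherText, guesses) {
  return cipherText
    .split('')
    .map((ch) => {
      if (!/[A-Z]/.test(ch)) return ch; // keep spaces and punctuation as-is
      return guesses[ch] || '_';
    })
    .join('');
}

// Example: suppose 'X' is guessed as 'H' and 'Q' as 'I'
console.log(applyGuesses('XQ', { X: 'H', Q: 'I' })); // "HI"
```

In the app, a `useEffect` re-runs this kind of substitution over the whole quote whenever the key array changes, which is why filling in one box updates every matching coded letter at once.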
cmcrawford2
1,873,476
🕒 Task vs Promise: Chaining
The first language in which I learned to work asynchronously was JavaScript. Initially, it was very...
0
2024-06-04T15:00:00
https://oscarlp6.dev/en/blogs/task-vs-promises/
csharp, javascript, async
The first language in which I learned to work asynchronously was *JavaScript*. Initially, it was very challenging because it was a completely different way of thinking from what I had learned in university. Once I internalized the principles of *asynchronous* programming, it became much easier. So, when I started working in *C#*, I immediately noticed the similarities between `Task` and `Promise` since they are practically equivalent. But when trying to chain promises the same way as in *JavaScript*, I encountered a peculiarity. The function received in the `.then` method of *JavaScript* is a function that expects the value wrapped in the promise. That is, if we have a `Promise<number>`, the function in `.then` is a function that receives a `number`. However, in *C#*, the "equivalent" to `.then` is `.ContinueWith`, but this method expects a function that receives a `Task` of the same type as the *original* `Task`. That is, if we have a `Task<string>`, the `.ContinueWith` method receives a function that receives a `Task<string>`. This caused a lot of confusion, and by discussing it with **ChatGPT**, I was able to gain more clarity on the matter. If you want to review my process, this is the [conversation](https://chatgpt.com/share/909c4bb1-d514-4279-a25a-05ce7d71103d) ### `.then` in JavaScript In *JavaScript*, the `.then` method is used to handle the result of a **promise**. The handler in `.then` directly receives the *resolved value* of the promise. Additionally, *JavaScript* provides the `.catch` method to handle errors. **Example in JavaScript:** ```javascript fetch('http://example.com') .then(response => response.json()) .then(data => { console.log(data); }) .catch(error => { console.error('Error:', error); }); ``` In this example, if the promise is resolved, the handler in the first `.then` receives the response and processes it. If an error occurs, the handler in `.catch` is executed. 
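The same JavaScript flow can also be written with `async`/`await` and a `try`/`catch` block, which is the closest analogue to how errors are usually handled with `await` in *C#*. A self-contained sketch, using a local promise in place of a network call so it runs anywhere:

```javascript
// A small promise standing in for an async operation (e.g., a fetch call)
function getData() {
  return new Promise((resolve) => setTimeout(() => resolve(42), 10));
}

// .then/.catch style: the handler receives the resolved value directly
getData()
  .then((value) => console.log('Result:', value))
  .catch((error) => console.error('Error:', error.message));

// async/await style: errors are handled with try/catch
async function main() {
  try {
    const value = await getData();
    console.log('Result:', value);
  } catch (error) {
    console.error('Error:', error.message);
  }
}

main();
```

Keep this value-oriented style in mind as a contrast: the C# `.ContinueWith` handler shown next receives the whole `Task<T>`, not the unwrapped value.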
### `.ContinueWith` in *C#* In C#, the `.ContinueWith` method of a `Task` is used to continue with code execution after a task is completed. Unlike `.then`, the handler in `.ContinueWith` receives an instance of `Task<T>`, allowing access to more details about the task, including its *status, exceptions, and result*. **Basic Example in C#:** ```cs Task<int> task = Task.Run(() => { // Simulating an asynchronous operation return 42; }); task.ContinueWith(t => { if (t.IsFaulted) { // Handle exceptions Console.WriteLine($"Error: {t.Exception.InnerException.Message}"); } else if (t.IsCompletedSuccessfully) { // Handle successful result Console.WriteLine($"Result: {t.Result}"); } }); ``` In this example, `ContinueWith` handles both the successful result and possible exceptions. This is possible because `ContinueWith` provides access to the entire task. ### The Reasoning #### No `.catch` in *C#* In *C#*, there is no direct equivalent to `.catch` for promises in *JavaScript* that chains directly to a `Task`. Instead, errors are handled within the same `ContinueWith` handler or by using *try-catch* blocks in combination with `await`. #### Options for `.ContinueWith` in *C#* The `.ContinueWith` method also allows specifying options that control *when* the continuation handler should be executed, such as `OnlyOnRanToCompletion` and `OnlyOnFaulted`. 
**Example with ContinueWith Options:** ```cs Task<int> task = Task.Run(() => { // Simulating an operation that may throw an exception throw new InvalidOperationException("Simulated error"); return 42; }); task.ContinueWith(t => { Console.WriteLine($"Result: {t.Result}"); }, TaskContinuationOptions.OnlyOnRanToCompletion); task.ContinueWith(t => { Console.WriteLine($"Error: {t.Exception.InnerException.Message}"); }, TaskContinuationOptions.OnlyOnFaulted); ``` In this example, two continuation handlers are defined: one that executes only if the task completes successfully (`OnlyOnRanToCompletion`) and another that executes only if the *task fails* (`OnlyOnFaulted`). ### Conclusions Although both `.ContinueWith` in *C#* and `.then` in *JavaScript* serve to continue code execution after an *asynchronous* operation, there are important differences: 1. **Continuation Handler:** In *JavaScript*, the `.then` handler receives the *resolved value* of the **promise**. In *C#*, the `.ContinueWith` handler receives an instance of `Task<T>`, providing access to more task details. 2. **Error Handling:** *JavaScript* uses `.catch` to handle **errors**. In *C#*, they are handled within the `ContinueWith` handler or by using try-catch blocks when using `await`. 3. **Continuation Options:** *C#* allows specifying options in `.ContinueWith` to control when the continuation handler should be executed, offering more granular control. These differences reflect the different philosophies and capabilities of the languages, providing developers with powerful tools to handle asynchronous operations in each environment. I hope this article helps you better understand the differences between `.ContinueWith` in C# and `.then` in JavaScript, as well as the options for handling errors and accessing task details in C#.
oscareduardolp6
1,865,084
Database Observability: An Introductory Guide
Ever wondered what happens behind the scenes when you use an app or website? A crucial part of the...
0
2024-06-04T15:00:00
https://www.neurelo.com/post/database-observability-introduction
postgres, mongodb, mysql, database
Ever wondered what happens behind the scenes when you use an app or website? A crucial part of the magic lies in the database—a vast digital system storing all the information that keeps things running smoothly. But just like any complex system, databases require constant care and attention to ensure optimal performance. This is where database observability comes in. With database observability, it's like there's a guardian watching over your data. This post will teach you the importance of database observability, prepare you for the challenges that might be encountered, and equip you with practical strategies for implementing it effectively. ## What Is Database Observability? Database observability, to put it simply, is the process of actively tracking and comprehending the functionality and state of your database systems. It's similar to having a live window into your database, letting you see possible problems early on, maximize efficiency, and make sure your data is always available. Database observability relies on three key components to provide this comprehensive view: - Metrics: These are numerical assessments that monitor several facets of the health of your database, including disk use, connection counts, and [query execution](https://www.neurelo.com/features/query-observability) times. They provide an instantaneous overview of your database's current state. - Logs: Imagine a detailed record of everything happening within your database. Logs capture events like successful or failed queries, user actions, and error messages. By analyzing logs, you can gain deeper insights into potential problems and identify root causes. - Traces: Think of traces as the behind-the-scenes story of a query. They capture the entire journey of a query as it travels through your database system, pinpointing any bottlenecks or slowdowns that might be hindering performance. 
## Importance of Database Observability Consider your database to be the central nervous system of your application, housing all the vital data needed to keep everything operating. A healthy database is necessary for the proper functioning of your applications and websites, just as a healthy heart is necessary for an individual's well-being. This is the point at which database observability becomes important. This is why it's a critical piece of work. ## Deep Dive into Production and Application Behavior - Significance of API and query-level insights: Database observability allows you to see beyond overall database health and delve into granular details. By monitoring [API](https://www.neurelo.com/features/auto-generated-apis) and query-level metrics, you can pinpoint exactly how specific applications and functionalities interact with your database. This helps you identify areas where queries might be slow or inefficient, impacting the user experience. - Impact on identifying and solving issues promptly: Traditional monitoring might only alert you after a major issue arises. Database observability empowers you to be proactive. By tracking key metrics and analyzing logs, you can identify potential problems early on—before they snowball into critical failures. This allows for faster troubleshooting and resolution, minimizing downtime and ensuring a smooth user experience. ## Build a Reliable Database Fortress A sluggish database can significantly impact your application's performance. Database observability helps you identify bottlenecks and performance issues within your database. By analyzing query execution times, connection pools, and resource utilization, you can optimize your database configuration and fine-tune queries, leading to a faster and more responsive system. ## Enhanced Scalability As your application grows, your database needs to keep pace. 
Database observability provides valuable insights into your database's resource usage, allowing you to proactively scale your infrastructure to meet evolving demands and ensure smooth performance under increasing loads. ## Improved Development and Operations Collaboration Database observability fosters better communication between developers and operations teams. By providing shared visibility into database health and performance, both teams can work together to optimize [queries](https://docs.neurelo.com/definitions/custom-query-endpoints-as-apis/write-and-commit-custom-queries?_gl=1*xi5ewg*_gcl_au*NzIwNzAxNzY1LjE3MTQ0MzIxMzI.), identify potential issues early on, and ensure a more efficient development and deployment process. ## Optimizing Resource Utilization Database observability acts as a resource manager, akin to a wise gardener tending to a flourishing garden. It optimizes resource utilization, ensuring that every byte and cycle is utilized effectively. This not only improves efficiency but also reduces unnecessary expenses, much like turning off lights in unoccupied rooms to save energy. ## Challenges of Database Observability While database observability offers immense benefits, it's not without its challenges. Here are some key obstacles you might encounter on your journey. ## Data Privacy and Security - Balancing observability with privacy concerns: Database observability involves collecting and analyzing data about your database's operation, which might include sensitive information. It's crucial to strike a balance between gaining valuable insights and protecting user privacy. - Strategies for safeguarding sensitive information: There are several strategies to ensure data security while maintaining observability. 
You can implement [data masking](https://aws.amazon.com/what-is/data-masking/) to hide sensitive data in logs, leverage [role-based access control](https://auth0.com/docs/manage-users/access-control/rbac) to limit access to sensitive information, and encrypt sensitive data at rest and in transit. ## Complexity of Design and Maintenance - Navigating intricate database structures: Modern databases can be complex, with intricate structures and relationships between tables. This complexity can make it challenging to determine which metrics and logs are most relevant for monitoring and troubleshooting. - Addressing challenges in maintaining observability tools: Database observability tools themselves require ongoing maintenance and updates. You'll need to invest time and resources in selecting the right tools, configuring them effectively, and ensuring they stay up-to-date to provide accurate and reliable insights. ## Real-Time Observability - Importance of real-time insights: In today's fast-paced world, real-time insights are crucial for identifying and responding to issues promptly. Delays in data collection and analysis can hinder your ability to react quickly to potential problems. - Overcoming obstacles in achieving real-time observability: Achieving real-time observability can be challenging, especially for large and complex databases. Factors like data volume, processing power, and network latency can all contribute to delays. You can overcome these obstacles by implementing efficient data collection methods, leveraging streaming technologies, and optimizing infrastructure. ## Resource Scalability As data volumes grow, so does the need for scalable observability solutions. Addressing this challenge involves adopting cloud-based solutions and optimizing resource allocation. It's akin to ensuring that your ship not only sails smoothly but also adapts to the ever-changing tides without capsizing. 
## Strategies for Implementing Database Observability: A Roadmap to Success Equipping yourself with the right strategies is essential for unlocking the true power of database observability. These strategies act as your roadmap, guiding you toward a comprehensive understanding of your database's health and performance. Let's delve into some key strategies that will empower you to effectively implement database observability. ## Demystifying Production Environments and Application Behavior - Monitoring queries slowing down: Slow queries can significantly impact user experience. Here's how to tackle them: - Identify bottlenecks: Use your observability tools to pinpoint queries with longer execution times. Analyze query plans and execution paths to identify bottlenecks that might be slowing down data retrieval. - Optimize slow queries: Once you've identified bottlenecks, you can optimize slow queries. This might involve rewriting inefficient queries, creating appropriate indexes, or adjusting database configuration settings. - Managing queries interfering with one another: Sometimes queries can compete for resources and slow each other down. Here's how to address this: - Analyze query dependencies: Use your observability tools to track query dependencies and identify situations where one query might be blocking another. - Implement isolation techniques: Use database features like transactions and locking mechanisms to ensure queries execute without interference, preventing slowdowns. ## Understanding Read/Write Patterns - Analyzing data access patterns: Gaining insights into how data is accessed within your database is crucial. Here's what you need to track: - Analyze read and write frequencies: Monitor the ratio of read operations to write operations (reads versus writes) within your database. This helps you understand how your application primarily interacts with the data. 
- Adjust resources based on usage patterns: Based on your read/write analysis, you might need to adjust resources allocated to your database. For instance, if you have a read-heavy application, scaling your read replicas can improve performance. ## Scaling for Optimal Performance - When to scale (scaling up or out): As your application grows, your database might need to scale as well. Here's how to decide: - Recognize signs of increased load: Monitor key metrics like CPU usage, memory consumption, and connection pools. When these metrics reach capacity, it's a sign you might need to scale. - Implement scaling strategies effectively: There are two main scaling approaches: scaling up (adding more resources to a single server) or scaling out (distributing the database load across multiple servers). Choosing the right approach depends on your specific needs and infrastructure. - What to scale: Not all database components need to be scaled equally. - Identify components for scaling: Focus on scaling components like CPU, memory, or storage based on which resources are reaching their limits. 
We've unpacked its importance, shedding light on how it helps you understand application behavior, improve system reliability, and ensure data remains readily accessible. We've also equipped you with practical strategies for implementing database observability. You've learned how to monitor queries, analyze read/write patterns, and effectively scale your database for optimal performance. By following these steps and continuing to explore this essential practice, you can ensure your database remains the strong foundation of your applications and websites. This post was written by Gourav Bais. [Gourav](https://www.analyticsvidhya.com/blog/author/gourav29/?utm_source=social&utm_medium=linkedin&utm_campaign=author_profile) is an applied machine learning engineer skilled in computer vision/deep learning pipeline development, creating machine learning models, retraining systems, and transforming data science prototypes into production-grade solutions.
shohams