id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,899,168 | Learning Resource Hub | Bookmarks for tech professionals and enthusiasts at Tech Trendsetters. Constantly updated and curated... | 25,762 | 2024-06-24T16:56:34 | https://dev.to/iwooky/learning-resource-hub-2h0d | learning, beginners, datascience | Bookmarks for tech professionals and enthusiasts at Tech Trendsetters. Constantly updated and curated collection of literature, courses, and resources covers the latest trends and essential skills.
**Machine Learning / AI related**
Courses below require Python, Fundamentals of Machine Learning, Basic Probability and Statistics, Linear Algebra
**CS224N: Natural Language Processing with Deep Learning**
https://web.stanford.edu/class/cs224n/
A great Stanford course, updated every year. This year, for the first time, they decided not to post the lectures on YouTube, although all the 2023 lectures remain publicly available – I highly recommend them.
**Chris Manning – notes**
https://web.stanford.edu/class/cs224n/readings/cs224n-self-attention-transformers-2023_draft.pdf
https://web.stanford.edu/class/cs224n/readings/
Chris Manning, the instructor of the course above and one of the most successful researchers producing influential papers without massive compute (DPO, Backpack language models), posts all lecture materials in the public domain. Judging by the update dates, the materials are for the 2024 course – use them!
**Dan Jurafsky — Speech and Language Processing (3rd ed. draft)**
https://web.stanford.edu/~jurafsky/slpdraft/
Dan Jurafsky, also at Stanford and the author of the main NLP textbook of the past 20 years, continues to make new chapters publicly available while constantly updating the old ones. This is practically the only book you can read cover to cover and come away with the keys to understanding 80% of what is happening in the industry.
The textbook was last updated on January 5, 2024.
**Transformers United**
https://web.stanford.edu/class/cs25/prev_years/2023_winter/index.html
https://www.youtube.com/playlist?list=PLoROMvodv4rNiJRchCzutFw5ItR_Z27CM
The second most important course for understanding what's going on, with a general focus on NLP, CV, and multimodal models.
**CS236: Deep Generative Models**
https://deepgenerativemodels.github.io/
https://www.youtube.com/watch?v=XZ0PMRWXBEU
An introduction to deep generative models and a theoretical analysis of every aspect of existing deep generative models. It touches on difficult concepts such as how to evaluate a generative model. The course materials, including lecture slides and notes, are publicly available and updated regularly. It's an excellent resource for anyone seeking to build a strong foundation from the very beginning.
👉 **All in one** on my resource hub: https://iwooky.substack.com/p/learning-resource-hub
| iwooky |
1,898,790 | 12 Open Source tools that Developers would give up Pizza for👋🍕 | It's Open Source tool time! There is more to open source tools than the top 3 that everyone knows... | 0 | 2024-06-24T17:21:08 | https://dev.to/middleware/13-foss-tools-that-developers-would-give-up-pizza-for-4a6g | tooling, webdev, opensource, productivity | It's Open Source tool time!
There is more to open source tools than the top 3 that everyone knows about.
Heck, you might actually know all 12 in this list already (in which case: "slow clap"), but most of us don't.

And when it comes to tools to help devs do their jobs, open source makes for a convincing argument!
That's why we launched our own [Open Source tool for developer productivity](https://github.com/middlewarehq/middleware) as well.😉
So, here’s a curated list of 12 open source tools that can become indispensable in your toolkit.
Let’s go!
> Note: We found some inconsistencies in the projects, so we rephrased some entries and added new projects based on suggestions from the community.
## 1. Theia
Think of Theia when you're looking for a [truly Open Source alternative](https://eclipse-foundation.blog/2020/05/05/eclipse-theia-and-vs-code-differences-explained/) to VSCode.
It's a flexible IDE that works on both the cloud and desktop. It’s built in TypeScript and comes with lots of add-ons you can use.
- **Key Features**:
- Cloud & desktop IDE capabilities
- Extensible plugin system: Accepts VSCode plugins/extensions
- Multi-language support
- [Theia Website](https://theia-ide.org/)
- [Theia Github](https://github.com/eclipse-theia/theia)
## 2. Postman
A lot of us would already know about Postman.
No no, not the guy who delivers your Amazon packages.

This Postman makes it easier to work with APIs by letting you chain requests together, automate tasks, and collaborate with others.
So, if you're not a fan of cURL, Postman comes to the rescue.
- **Key Features**:
- API testing and automation
- Request chaining for complex workflows
- Collaboration tools for teams
- [Postman Website](https://www.postman.com/)
- [Postman Github](https://github.com/postmanlabs/postman-app-support)
## 3. Hoppscotch
Hoppscotch is a free, lightweight, fast, and good-looking API request builder for creating and testing your APIs quickly.
- **Key Features**:
- HTTP request methods (GET, POST, PUT, DELETE, PATCH, etc.)
- Built-in support for GraphQL
- Collection management and environment variables
- **Website**: [Hoppscotch](https://hoppscotch.io/)
- **GitHub**: [Hoppscotch GitHub](https://github.com/hoppscotch/hoppscotch)
## 4. Pocketbase
Pocketbase is an open source realtime backend in one file that can be used in your Flutter, Vue, React & Angular applications.
Think of it as a no-fuss, simple SQLite-backed backend for developers, written in Go.
- **Key Features**:
- Embedded database (SQLite) with realtime subscriptions
- Built-in files and users management
- Simple REST-ish API
- [Pocketbase Website](https://pocketbase.io/)
- [Pocketbase Github](https://github.com/pocketbase/pocketbase)
## 5. cURL
I'd guess there's hardly a developer who doesn't know cURL.
cURL is a simple command-line tool for making network requests and calling APIs. In fact, cURL ships by default with most operating systems, including Linux and macOS.
- **Key Features**:
- Support for multiple protocols (HTTP, FTP, etc.)
- Scriptable command-line tool
- [cURL Website](https://curl.se/)
- [cURL Github](https://github.com/curl/curl)
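As a quick sketch of typical usage, the snippet below fetches a local file via `file://` so it works offline; the commented HTTP call uses a placeholder URL, not a real endpoint:

```shell
# Create a local file and fetch it with curl (file:// works offline);
# the same -s/-H/-X/-d flags apply to real HTTP endpoints.
echo '{"status": "ok"}' > /tmp/curl-demo.json
curl -s file:///tmp/curl-demo.json

# A typical API call (placeholder URL) would look like:
# curl -sS -X POST -H "Content-Type: application/json" \
#      -d '{"name": "Ada"}' https://api.example.com/users
```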
## 6. Waveterm
Waveterm is an open-source AI-native terminal.
Waveterm ties the command line to the open web to help developers be more productive.
- **Key Features**:
- Persistent sessions across network disconnections and reboots
- Searchable contextual command history
- CodeEdit, to edit local and remote files with a VSCode-like inline editor
- AI Integration with ChatGPT (or ChatGPT compatible APIs) to help write commands and get answers inline
- [Waveterm Website](https://www.waveterm.dev/)
- [Waveterm Github](https://github.com/wavetermdev/waveterm)
## 7. Ollama
AI is all the rage and a developer who doesn't play around with local LLMs isn't a developer at all in 2024, right?😜
Ollama is all about experimenting with Large Language Models locally.
It's like Docker Desktop for LLMs.

- **Key Features**:
- Local experimentation with LLMs
- Development environment for large language models
- OpenAI compatible API
- [Ollama Website](https://ollama.com/)
- [Ollama Github](https://github.com/ollama/ollama)
## 8. LM-Studio
LM-Studio is also like Docker Desktop for LLMs, and a direct competitor to Ollama.
- **Key Features**:
- Local experimentation with LLMs
- OpenAI compatible API
- Support for Windows PCs
- [LM-Studio Website](https://lmstudio.ai/)
- [LM-Studio Github](https://github.com/lmstudio-ai)
## 9. VS Code
Tell me you knew VS Code is open source. Tell me, please.
VS Code is more or less the first code editor that most developers start with these days. Unless you use Vim on Linux. I use Arch, btw.
Extensions for VSCode are available in unimaginable quantities.
- **Key Features**:
- Extensible code editor
- Debugging support
- Rich ecosystem of extensions
- [VS Code Website](https://code.visualstudio.com/)
- [VS Code Github](https://github.com/microsoft/vscode)
## 10. Docker Compose
Everyone uses Docker these days, right?
Docker Compose makes it easier to set up multiple connected Docker applications by using a simple `compose.yaml` file.
- **Key Features**:
- Orchestration of Docker containers
- Service definition with YAML
- Multi-container application management
- [Docker Compose Website](https://docs.docker.com/compose/)
- [Docker Compose Github](https://github.com/docker/compose)
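As a minimal sketch, a `compose.yaml` for a web app plus a database might look like this (service names, image tag, and the credential below are placeholders):

```yaml
services:
  web:
    build: .            # build the app from the local Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16  # placeholder tag; pin whatever your app needs
    environment:
      POSTGRES_PASSWORD: example   # placeholder credential, never commit real ones
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

Running `docker compose up` then starts both containers on a shared network.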
## 11. ESLint
ESLint is a great tool for JavaScript and TypeScript that enforces coding standards and enhances code quality through customizable linting rules and plugins.
ESLint in combination with Prettier and other such tools helps JavaScript developers at large.
- **Key Features**:
- Code quality analysis
- Configurable linting rules
- JavaScript and TypeScript support
- [ESLint Website](https://eslint.org/)
- [ESLint Github](https://github.com/eslint/eslint)
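For illustration, a small legacy-style `.eslintrc.json` might look like this (the rule choices are arbitrary examples; newer ESLint versions favor the flat `eslint.config.js` format):

```json
{
  "env": { "browser": true, "es2022": true },
  "extends": "eslint:recommended",
  "rules": {
    "no-unused-vars": "warn",
    "eqeqeq": "error",
    "semi": ["error", "always"]
  }
}
```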
## 12. Oh My Zsh
Oh My Zsh is like a supercharged upgrade for your Zsh shell setup: it's got cool themes and plugins that put your terminal experience on steroids.
Of course, getting a proper Zsh resource file tends to be a bit difficult for some.
- **Key Features**:
- Zsh configuration management
- Customizable themes and plugins
- Community-driven development
- [Oh My Zsh Website](https://ohmyz.sh/)
- [Oh My Zsh Github](https://github.com/ohmyzsh/ohmyzsh)
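For example, a few lines in `~/.zshrc` are enough to pick a theme and enable plugins (the theme and plugin names below are just common choices, not requirements):

```sh
ZSH_THEME="agnoster"     # one of the bundled themes
plugins=(git docker z)   # enable bundled plugins
source $ZSH/oh-my-zsh.sh
```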
---
Did I miss some important ones?
What do you think?
{% embed https://github.com/middlewarehq/middleware %} | shivamchhuneja |
1,899,181 | Functional Testing: Ensuring Software Functionality | Introduction Functional testing is a critical phase in the software development lifecycle. It... | 0 | 2024-06-24T17:19:45 | https://dev.to/keploy/functional-testing-ensuring-software-functionality-cia | webdev, javascript, beginners, programming |

## Introduction
Functional testing is a critical phase in the software development lifecycle. It focuses on verifying that the software system operates according to the specified requirements and meets the intended functionality. This type of testing evaluates the software by checking its functions and features, ensuring that they perform correctly. In this article, we will explore the fundamentals of functional testing, its importance, methods, and best practices.
## Understanding Functional Testing
[Functional testing](https://keploy.io/blog/community/functional-testing-unveiling-types-and-real-world-applications) is a type of black-box testing that involves testing the software's functionalities without considering the internal code structure. The primary goal is to validate that the system behaves as expected, ensuring that all the specified requirements and business logic are correctly implemented. Testers provide inputs and verify the outputs against the expected results.
### Key Objectives of Functional Testing
1. Verification of Software Functions: Ensures that all functional requirements are met.
2. Validation of User Interactions: Confirms that the software behaves as expected from a user perspective.
3. Detection of Functional Defects: Identifies any issues or defects in the implemented functionalities.
4. Improvement of Software Quality: Enhances the overall quality and reliability of the software.
## Types of Functional Testing
1. **Unit Testing**
   - Scope: Focuses on individual units or components of the software.
   - Purpose: Ensures that each unit performs as expected.
   - Tools: JUnit, NUnit, PyTest.
2. **Integration Testing**
   - Scope: Verifies the interaction between integrated components.
   - Purpose: Ensures that combined components function together correctly.
   - Tools: Postman, RestAssured, Selenium.
3. **System Testing**
   - Scope: Validates the complete and integrated software system.
   - Purpose: Ensures the system meets the specified requirements.
   - Tools: JMeter, LoadRunner, QTP.
4. **Smoke Testing**
   - Scope: Basic tests to check if the critical functionalities are working.
   - Purpose: Acts as a preliminary check to determine if the software build is stable.
   - Tools: Manual scripts, automated scripts.
5. **Sanity Testing**
   - Scope: Focuses on specific functionalities after minor changes.
   - Purpose: Ensures that the changes or fixes work as intended.
   - Tools: Manual scripts, automated scripts.
6. **Regression Testing**
   - Scope: Ensures that recent changes haven't adversely affected existing functionalities.
   - Purpose: Confirms that the system works as before after modifications.
   - Tools: Selenium, QTP, TestComplete.
7. **User Acceptance Testing (UAT)**
   - Scope: Validates the software in real-world scenarios.
   - Purpose: Ensures the software meets user needs and requirements.
   - Tools: UAT scripts, TestRail, Zephyr.
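As a concrete illustration of the unit-testing layer described above, a minimal PyTest-style test might look like this (the `calculate_discount` function is a made-up example, not from any real project):

```python
# A minimal unit test in the PyTest style. The function under test
# (calculate_discount) is a hypothetical example for illustration.

def calculate_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_applies_discount():
    assert calculate_discount(100.0, 25) == 75.0

def test_zero_discount_returns_original_price():
    assert calculate_discount(80.0, 0) == 80.0

def test_invalid_percent_raises():
    try:
        calculate_discount(50.0, 150)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError")
```

Running `pytest` in the same directory would discover and execute all three tests.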
## Functional Testing Process
1. **Requirement Analysis**
   - Review and understand the functional requirements and specifications.
   - Identify and prioritize test scenarios based on these requirements.
2. **Test Planning**
   - Develop a test plan outlining the scope, objectives, resources, schedule, and deliverables.
   - Identify test cases, test data, and testing tools.
3. **Test Case Design**
   - Create detailed test cases with clear input, output, and expected results.
   - Ensure comprehensive coverage of all functional aspects.
4. **Test Environment Setup**
   - Prepare the testing environment, including hardware, software, network configurations, and test data.
5. **Test Execution**
   - Execute test cases manually or using automated tools.
   - Record the actual results and compare them with expected outcomes.
6. **Defect Reporting**
   - Log defects or issues found during testing.
   - Provide detailed descriptions and steps to reproduce the defects.
7. **Retesting and Regression Testing**
   - Retest the fixed defects to ensure they are resolved.
   - Perform regression testing to verify that recent changes haven't introduced new defects.
8. **Test Closure**
   - Evaluate the testing process and results.
   - Document lessons learned and best practices for future projects.
## Tools for Functional Testing
1. Selenium: A popular open-source tool for automating web browsers.
2. QTP/UFT: A commercial tool for automated functional and regression testing.
3. TestComplete: An automated UI testing tool for various applications.
4. Ranorex: A test automation tool for desktop, web, and mobile applications.
5. JMeter: Primarily used for performance testing but also supports functional testing.
## Best Practices for Functional Testing
1. Understand Requirements Thoroughly: Ensure a deep understanding of the requirements to create accurate and effective test cases.
2. Prioritize Test Cases: Focus on critical functionalities and high-risk areas to maximize testing effectiveness.
3. Automate Where Possible: Use automation tools to increase efficiency, especially for repetitive and regression tests.
4. Maintain Test Cases: Regularly update test cases to reflect changes in requirements and software.
5. Use Realistic Test Data: Employ data that mimics real-world scenarios to ensure valid test results.
6. Collaborate with Stakeholders: Engage with developers, business analysts, and end-users to validate requirements and test cases.
7. Perform Continuous Testing: Integrate functional testing into the continuous integration/continuous deployment (CI/CD) pipeline for early defect detection.
## Conclusion
Functional testing is an indispensable part of software development, ensuring that the software performs its intended functions correctly. By systematically validating each aspect of the software against the specified requirements, functional testing helps deliver high-quality, reliable, and user-friendly applications. Implementing best practices and leveraging appropriate tools can significantly enhance the effectiveness and efficiency of functional testing.
| keploy |
1,899,180 | A forensic analysis of the Claude Sonnet 3.5 system prompt leak | A forensic analysis of the Claude Sonnet 3.5 system prompt Originally published in the... | 0 | 2024-06-24T17:15:46 | https://dev.to/ejb503/a-forensic-analysis-of-the-claude-sonnet-35-system-prompt-leak-58h7 | machinelearning, programming, ai, webdev |
## A forensic analysis of the Claude Sonnet 3.5 system prompt
Originally published in the [Tying Shoelaces Blog](https://tyingshoelaces.com/blog/forensic-analysis-sonnet-prompt)
### Introducing Artifacts
A step forward in structured output generation.
This is an analysis of the system prompt generation for [Claude 3.5 Sonnet.](https://www.anthropic.com/news/claude-3-5-sonnet) The link to the code for this analysis is available at the bottom with the source. The main focus of this analysis is the introduction of the concept of artifacts, and how this might work as part of an intelligent categorization and retrieval system.
> Artifacts are for substantial, self-contained content that users might modify or reuse.
An artifact is a paradigm change because it formalizes a new concept. The concept of persistent data. Persistent data is a stepping stone to us accessing a highly curated and structured content library. By providing fixed references, we unblock iteration and the ability to incrementally improve and refine output. This is a step towards controlling the ephemeral nature of verbose LLM output.
One of the inherent problems with Generative AI for functional tasks such as code completion is that they often repeat entire files for simple changes. There is a huge demand for a ‘diff’ feature, where we output the difference between before and after as opposed to repeating the same content.
Artifacts thus serve a dual purpose; first they act as a reference point for how and where we need output. This is like the setting of the scope or the definition of a reference point. This will stop the LLM from losing focus of the original problem and also keeps persistent structure and categorization in the output.
As a bonus point, we also have an autocomplete feature. By defining the ‘base’ code and scope of the changes, we have now directed our LLM to focus on a specific task or problem, in an opinionated and curated way. This stops erratic shifts in zoom and also provides the entire work in progress to the prompt. Any engineer who has accidentally wiped their code with "Rest of code here" thanks you. We can see the setting of the scope here:
> Self-contained, complex content that can be understood on its own, without context from the conversation
We are directing focus from uncontrolled verbose output to a concrete artifact. It is worth noting the explicit instruction to ignore the context from the conversation. This is a method of ensuring quality by reference to curated data. It is a quality control mechanism that controls the verbose and potentially random characteristics of the input.
All of this fits together with an architecture for retrieval. By having a deep library of curated artifacts, we can now direct our system to retrieve from a controlled dataset. We know that all large AI providers are focussing heavily on investing in high quality curated data. Artifacts are a step towards framing verbose input and output with a structure.
We can see the focus away from the input and mapping to the system defined research in the prompt. Here is an example of some of the exclusion criteria:
> Content that is dependent on the current conversational context to be useful.
>
> Content that is unlikely to be modified or iterated upon by the user.
>
> Request from users that appears to be a one-off question.
The prompt is actively focusing on the system context and the task at hand. It is explicitly trying to filter out input that is not relevant to a very specific output. So the artifact acts as a concrete reference point, both in the generated text and as structured data behind the scenes. This gives us fast, accurate retrieval and focus. Something that is very helpful for...
### Thinking
Logical thinking is a key part of the generation process.
Prompt engineers have long been telling us that one of the keys to reliable output is obligating LLMs to form a multi-step structured and logical thought process. We see formal recognition of this in the prompt.
> 1\. Briefly before invoking an artifact, think for one sentence in <antthinking> tags about how it evaluates against the criteria for a good and bad artifact. Consider if the content would work just fine without an artifact. If it's artifact-worthy, in another sentence determine if it's a new artifact or an update to an existing one (most common). For updates, reuse the prior identifier.
Here, we are obligating our system to take a structured multi-step process to analyse the task and the output. Again, moving towards the strong definition of verbose content and alluding to a search and retrieval system for artifacts.
> <antthinking>Creating a Python script to calculate factorials meets the criteria for a good artifact. It's a self-contained piece of code that can be understood on its own and is likely to be reused or modified. This is a new conversation, so there are no pre-existing artifacts. Therefore, I'm creating a new artifact.</antthinking>
>
> <antthinking>This request is a direct modification of the existing factorial-calculator artifact. It's not a new artifact but an update to make the script more robust. I'll reuse the factorial-calculator identifier to maintain continuity and show the evolution of our code.</antthinking>
Here we can see the implementation of a logical thought process for the generation of defined outputs. By ensuring that our algorithm goes through the same logical steps, we have the seeds of an intelligent and repeatable generation process. Thought.
We can map this logic to the thought process of a person. First of all we have a logical and rational problem solving approach. We supplement this with hard artifacts. The LLM data set is the brain, but artifacts are the skills and knowledge enabling us to arrive at a certain output.
If we imagine all the competing models, we can derive that they are relying on the replication of logical thought process. We are essentially creating a robot brain to mimic the logical thought process of a human. We are building the missing parts, the knowledge, structures and retrieval processes that fuel the brain.
This makes systems prompts and instructions incredibly valuable assets. The understanding and refinement of "logical thinking" is a key part of the generation process.
We can see some basic implementations of this structured thinking in the code...
### Identifiers and Search
Search and retrieval of artifacts is a key part of the system prompt.
```
<antartifact identifier="factorial-script" type="application/vnd.ant.code" language="python" title="Simple Python factorial script">
def factorial(n):
    if n == 0:
        return 1
    else:
        return n * factorial(n - 1)
```
So what is `application/vnd.ant.code`? "application" is simple enough, "vnd" is vendor, "ant" will be Anthropic (the creators of Claude), and "code" – that's an insight into their architecture. I would expect some kind of taxonomy and structured data that lists the tasks people are trying to achieve with LLMs.
1. Coding tasks
2. Presentations
3. Documents
4. Analysis
5. Many more...
We could, for example, write some pseudo-code for an attempt at a PowerPoint presentation.
```
<antartifact
  identifier="powerpoint-presentation"
  type="application/vnd.ant.presentation"
  purpose="business"
  title="Simple powerpoint presentation">
  Slide 1: Title slide
  Slide 2: Introduction
  Slide 3: Problem statement
  Slide 4: Solution
</antartifact>
```
This is almost certainly nothing like the production code, but an interesting mental paradigm. To control and structure verbose output, we have to encounter logical and rational processes for categorizing and standardizing the input and output.
I suspect this means that when inputs come in, they run separate battle-hardened algorithms that perform entity extraction and categorization. This structured data is then run through an asset search and retrieval process: where text retrieval uses vector databases, other defined outputs now use this concept of artifacts. For example, a React code task could go something like this:
```
"INPUT: Create a react component for a metrics dashboard",
"ENTITY_EXTRACTION: Coding, React, Metrics Dashboard",
"ENTITY_SEARCH: Retrieve code artifacts for Metrics Dashboard where type = React",
"SYSTEM_PROMPT: create_system_prompt(artifact_id='metrics-dashboard-component', type='application/vnd.ant.code', language='react')"
```
There is a lot going on, and we can see the hard yards that are needed behind the scenes to curate high quality examples and taxonomies for what is essentially an unlimited theoretical pool of tasks. There will be iteration with other AI classification algorithms behind the scenes to automate this.
But it is at its core, as far as we can see, a fancy search and retrieval system, based on a proprietary templating language.
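A toy sketch of such a pipeline might look like the following. To be clear, this is purely speculative: all categories, keywords, and artifact names below are made up for illustration, and this is not Anthropic's actual implementation.

```python
# A purely speculative toy sketch of the entity-extraction + artifact-retrieval
# flow described above. All categories, keywords, and artifact identifiers are
# invented for illustration.

ARTIFACT_LIBRARY = {
    ("code", "react"): "metrics-dashboard-component",
    ("code", "python"): "factorial-script",
    ("presentation", "business"): "powerpoint-presentation",
}

KEYWORDS = {
    "react": ("code", "react"),
    "python": ("code", "python"),
    "presentation": ("presentation", "business"),
}

def extract_entities(user_input: str):
    """Crude keyword matching standing in for a real entity-extraction model."""
    for token in user_input.lower().split():
        if token in KEYWORDS:
            return KEYWORDS[token]
    return None

def build_system_prompt(user_input: str) -> str:
    """Map the extracted entities to a stored artifact, if one exists."""
    entities = extract_entities(user_input)
    artifact_id = ARTIFACT_LIBRARY.get(entities)
    if artifact_id is None:
        return f"SYSTEM_PROMPT: no artifact; answer inline: {user_input}"
    return f"SYSTEM_PROMPT: reuse artifact '{artifact_id}' for: {user_input}"

print(build_system_prompt("Create a react component for a metrics dashboard"))
```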
### Templating language structure
A rendering template that will shift based on input variables
I started my career many years ago as a Drupal developer. Reading the prompt, the word that jumped out at me was TWIG. Twig is an HTML templating language that was commonly used for rendering HTML templates from PHP. Claude will almost certainly use some equivalent approach that tailors input and context based on structured data (probably extracted outside the LLM).
It looks like Claude Sonnet 3.5 uses something similar, which makes perfect sense. Given the text input to the LLM, we need to systematically generate blocks of text. These are the dynamic tags that are put together to generate the prompt.
1. <antartifact></antartifact>
2. <artifacts\_info></artifacts\_info>
3. <example></example>
4. <user\_query></user\_query>
5. <example\_docstring></example\_docstring>
6. <assistant\_response></assistant\_response>
This will leverage a kind of function calling approach. Each tag has a specific purpose. This then serves as an abstraction as we direct our model to find the right category and type for each specific purpose.
So there we have it, a thought process broken into blocks. Entity extraction mapped with advanced search and retrieval. The building blocks for a logical thought process. The underpinning data is key to the quality of the output.
### Conclusion
One small artifact for Claude, a giant leap for AI.
Artifacts are to structured output (such as code generation) what vector search is to RAG: the search-and-retrieval system for structured output.
We see evidence of a structured and rational thought process in Claude 3.5. Something we've always expected to be important in Generative AI, but this is formal proof.
I can imagine armies of developers and marketeers, building libraries of curated artifacts. This library is accessed via classification, and then search and retrieval tasks. But the real step forward is the concept of persistence.
By working with artifacts, we have reference points that exist beyond the ephemeral: ones that can be refined and reused. We already had thought and verbose output. Now we've got memories and expertise...
### Claude 3.5 system
The system prompt in full [TyingShoelaces](https://tyingshoelaces.com/blog/forensic-analysis-sonnet-prompt#links)
| ejb503 |
1,899,177 | DAY 2 PROJECT : HOVER EFFECTS | Creating Engaging Web Interactions with Hover Effects Using HTML, CSS, and JavaScript In the digital... | 0 | 2024-06-24T17:14:36 | https://dev.to/shrishti_srivastava_/day-2-project-1de1 | webdev, javascript, beginners, programming | **Creating Engaging Web Interactions with Hover Effects Using HTML, CSS, and JavaScript**
In the digital age, the user experience on websites has become a pivotal element in retaining and engaging visitors. One of the most effective ways to enhance user interaction is through hover effects.
**What Are Hover Effects?**
Hover effects are visual changes that occur when a user hovers their mouse over a specific element on a webpage. These changes can include transitions in color, size, shape, and other visual properties, as well as more complex animations. Hover effects help guide users’ attention, highlight important elements, and make the navigation more intuitive and enjoyable.
**Why Use Hover Effects?**
- Enhanced User Experience: Hover effects provide immediate visual feedback to users, making the interaction with web elements more intuitive and engaging.
- Improved Navigation: They can highlight buttons, links, and other interactive elements, helping users navigate the site more efficiently.
- Aesthetic Appeal: Thoughtfully designed hover effects can add a level of polish and professionalism to your website, making it stand out.
- Interactivity: Hover effects can make a website feel more interactive and dynamic, improving user satisfaction and retention.
**Tools and Technologies**
To create effective hover effects, we’ll be utilizing the following technologies:
**HTML**: The backbone of any webpage, HTML provides the structure of the content.

**CSS**: CSS is used to style the HTML elements, allowing us to define the appearance and behaviour of hover effects.

**JavaScript**: While many hover effects can be achieved with CSS alone, JavaScript adds additional functionality and can be used to create more complex interactions.

**Line 1:** This line selects an HTML element with the class cursor and assigns it to the constant variable cursor. This element represents the custom cursor that will follow the mouse movements.
**Line 2:** This line adds an event listener to the entire document for the mousemove event. This event triggers whenever the user moves their mouse over the webpage. The event listener executes a callback function with the event object e as its parameter.
**Lines 3-5:** Within the callback function:
- `cursor.style.left = e.pageX + "px";` sets the horizontal position of the custom cursor. `e.pageX` provides the X-coordinate of the mouse pointer relative to the entire document. Adding `"px"` converts the coordinate into a valid CSS value for positioning.
- `cursor.style.top = e.pageY + "px";` sets the vertical position of the custom cursor. `e.pageY` provides the Y-coordinate of the mouse pointer relative to the entire document.
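Putting those lines together (reconstructed here from the description above, since the original shows them only as screenshots), with a tiny `document` stub so the snippet can also run outside a browser:

```javascript
// Reconstructed from the line-by-line description; the `.cursor` class name
// and event wiring come from the article. The small `document` stub below only
// exists so the snippet runs outside a browser -- in a real page, remove the
// stub and keep the three lines of cursor code as-is.
const listeners = {};
const document = {
  querySelector: () => ({ style: {} }),
  addEventListener: (type, handler) => { listeners[type] = handler; },
};

// --- the cursor-follow code described in the article ---
const cursor = document.querySelector(".cursor");
document.addEventListener("mousemove", (e) => {
  cursor.style.left = e.pageX + "px"; // X-coordinate relative to the document
  cursor.style.top = e.pageY + "px";  // Y-coordinate relative to the document
});

// Simulate the mouse moving to (120, 45):
listeners.mousemove({ pageX: 120, pageY: 45 });
console.log(cursor.style.left, cursor.style.top); // → 120px 45px
```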
So, by combining HTML, CSS, and JavaScript, we can create dynamic and engaging web interactions that greatly enhance the user experience.
Dive into the code, experiment with different styles and effects, and watch your web pages come to life!
THANK YOU!
HAPPY CODING!
| shrishti_srivastava_ |
1,895,160 | The Hardest Problem in RAG... Handling 'NOT FOUND' Answers 🔍🤔 | First of All... What is RAG? 🕵️♂️ Retrieval-Augmented Generation (RAG) is an approach to... | 0 | 2024-06-24T17:04:03 | https://dev.to/llmware/the-hardest-problem-in-rag-handling-not-found-answers-7md | rag, ai, python, programming | ## First of All... What is RAG? 🕵️♂️
Retrieval-Augmented Generation (RAG) is an approach to natural language processing that references external documents to provide more accurate and contextually relevant answers. Despite its advantages, RAG faces some challenges, one of which is handling 'NOT FOUND' answers. Addressing this issue is crucial for developing an effective and reliable model that everyone can use.
***
## Why 'NOT FOUND' Answers Can Be Concerning ⛔️
Some models respond with "hallucinations" when they cannot find an answer, creating inaccurate responses that may mislead the user. This can undermine the trust users have in the model, making it less reliable and effective.
***
## How Can We Remedy This? 🛠️
For starters, it is better for the model to inform the user that it could not find the answer rather than fabricating one.

Next, we will delve into one way LLMWare handles 'NOT FOUND' cases effectively. By examining these methods, we can gain a better understanding of how to address this issue and enhance the overall performance and reliability of RAG systems.
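Before that, here is a generic illustration of the idea (not LLMWare's actual mechanism): if too few of the answer's key terms are grounded in the retrieved context, report "Not Found" instead of returning the answer. The overlap heuristic and threshold below are invented for demonstration.

```python
# A generic illustration (not LLMWare's actual mechanism) of preferring an
# explicit "Not Found" over a fabricated answer: if too few of the answer's
# key terms appear in the retrieved context, return "Not Found".

def grounded_answer(answer: str, context: str, min_overlap: float = 0.5) -> str:
    """Return the answer only if enough of its terms appear in the context."""
    terms = [w.strip(".,").lower() for w in answer.split() if len(w) > 3]
    if not terms:
        return "Not Found"
    grounded = sum(1 for t in terms if t in context.lower())
    return answer if grounded / len(terms) >= min_overlap else "Not Found"

context = "Nike's third quarter revenues were $12.4 billion, slightly up year over year."
print(grounded_answer("Revenues were $12.4 billion", context))    # grounded, so returned
print(grounded_answer("Profits doubled to $50 billion", context))  # → Not Found
```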
***
## For the Visual Learners... 📺
Here is a video discussing the same topic as this article. A good idea would be to watch the video, and then work through the steps in this article.
{% youtube slDeF7bYuv0 %}
***
## Framework 🖼️
**LLMWare**
For our new readers, LLMWare is a comprehensive, open-source framework that provides a unified platform for application patterns based on LLMs, including Retrieval-Augmented Generation (RAG).
Please run `pip3 install llmware` in the command line to download the package.
***
## Import Libraries and Create Context 📚
```python
from llmware.models import ModelCatalog
from llmware.parsers import WikiParser
```
**ModelCatalog**: A class within `llmware` that manages selecting the desired model, loading the model, and configuring the model.
**WikiParser**: A class within `llmware` that handles the retrieval and packaging of content from Wikipedia.
```python
text = ("BEAVERTON, Ore.--(BUSINESS WIRE)--NIKE, Inc. (NYSE:NKE) today reported fiscal 2024 financial results for its "
        "third quarter ended February 29, 2024.) “We are making the necessary adjustments to drive NIKE’s next chapter "
        "of growth Post this Third quarter revenues were slightly up on both a reported and currency-neutral basis* "
        "at $12.4 billion NIKE Direct revenues were $5.4 billion, slightly up on a reported and currency-neutral basis "
        "NIKE Brand Digital sales decreased 3 percent on a reported basis and 4 percent on a currency-neutral basis "
        "Wholesale revenues were $6.6 billion, up 3 percent on a reported and currency-neutral basis Gross margin "
        "increased 150 basis points to 44.8 percent, including a detriment of 50 basis points due to restructuring charges "
        "Selling and administrative expense increased 7 percent to $4.2 billion, including $340 million of restructuring "
        "charges Diluted earnings per share was $0.77, including $0.21 of restructuring charges. Excluding these "
        "charges, Diluted earnings per share would have been $0.98* “We are making the necessary adjustments to "
        "drive NIKE’s next chapter of growth,” said John Donahoe, President & CEO, NIKE, Inc. “We’re encouraged by "
        "the progress we’ve seen, as we build a multiyear cycle of new innovation, sharpen our brand storytelling and "
        "work with our wholesale partners to elevate and grow the marketplace.")
```
Here is the initial text for our extraction. It provides details about the popular sports brand, Nike. Feel free to modify this text to suit your needs.

***
## Create Key for Extraction 🔐
```python
extract_key = "company founding date"
dict_key = extract_key.replace(" ", "_")
company_founding_date = ""
```
Here, we set the company founding date as the target extraction from the text.
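As a quick standalone illustration (not llmware-specific code), the key normalization simply swaps spaces for underscores so it matches the key format the model uses in its response dictionary — the response shape below is a hypothetical example:

```python
# Standalone illustration of the key normalization used above.
# The model returns dictionary keys with underscores, so we transform
# our human-readable extract key the same way before the lookup.
extract_key = "company founding date"
dict_key = extract_key.replace(" ", "_")

print(dict_key)  # → company_founding_date

# A hypothetical response shaped like the model's output:
llm_response = {"company_founding_date": ["January 25, 1964"]}
print(dict_key in llm_response)  # → True
```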
***
## Run Initial Extract 🏃
```python
model = ModelCatalog().load_model("slim-extract-tool", temperature=0.0, sample=False)
response = model.function_call(text, function="extract", params=[extract_key])
llm_response = response["llm_response"]
```
**Model**: In this snippet, we load LLMWare's slim-extract-tool, a 2.8B-parameter GGUF model fine-tuned for general-purpose extraction (GGUF is a format for quantized models, which allows quicker inference and a smaller model size at some cost in accuracy).
**Temperature**: This controls the randomness of the output. Valid values range between 0 and 1, where lower values make the model more deterministic, and higher values make the model more random and creative.
**Sample**: Determines if the output is generated deterministically or probabilistically. False generates deterministic output. True generates probabilistic output.
We then attempt to extract the information from the text using the model and store it in `llm_response`.
***
## If Answer is Found... ✅
```python
if dict_key in llm_response:
    company_founding_date = llm_response[dict_key]
    if len(company_founding_date) > 0:
        company_founding_date = company_founding_date[0]
    print(f"update: found the {extract_key} value - ", company_founding_date)
    return company_founding_date
```
If the model successfully finds and extracts the company founding date, we will return the information.
***
## If Answer is Not Found... ❌
```python
else:
    print(f"update: did not find the target value in the text - {company_founding_date}")
    print("update: initiating a secondary process to try to find the information")

    response = model.function_call(text, function="extract", params=["company name"])
```
If the model does not find the company founding date, we will run a second query to find the company name for future use in gathering more information.

***
## Retrieve Information from Wiki 📖
```python
if "company_name" in response["llm_response"]:
    company_name = response["llm_response"]["company_name"][0]
    if company_name:
        print(f"\nupdate: found the company name - {company_name} - now using to lookup in secondary source")
        output = WikiParser().add_wiki_topic(company_name, target_results=1)
```
After extracting the company's name from the text, we will then retrieve additional information about the company from Wiki.
***
## Generate a Summary Snippet from Retrieved Article Data ✍️
```python
if output:
    supplemental_text = output["articles"][0]["summary"]
    if len(supplemental_text) > 150:
        supplemental_text_pp = supplemental_text[0:150] + " ... "
    else:
        supplemental_text_pp = supplemental_text

    print(f"update: using lookup - {company_name} - found secondary source article "
          f"(extract displayed) - ", supplemental_text_pp)
```
If we have successfully retrieved additional data from the Wiki, we truncate the summary to its first 150 characters (appending an ellipsis) and store the result in `supplemental_text_pp`; otherwise, we use the full summary.
***
## Call Extract Again With New Information 📞
```python
new_response = model.function_call(supplemental_text, params=["company founding date"])
print("\nupdate: reviewed second source article - ", new_response["llm_response"])
```
Using the new information retrieved from Wiki, we run the same extraction on the model again.
***
## Print Response If Found 🖨️
```python
if "company_founding_date" in new_response["llm_response"]:
    company_founding_date = new_response["llm_response"]["company_founding_date"]
    if company_founding_date:
        print("update: success - found the answer - ", company_founding_date)
```
If we find the company founding date after incorporating the new information, we print the result.
***
## Fully Integrated Code 🧑💻
```python
from llmware.models import ModelCatalog
from llmware.parsers import WikiParser

text = ("BEAVERTON, Ore.--(BUSINESS WIRE)--NIKE, Inc. (NYSE:NKE) today reported fiscal 2024 financial results for its "
        "third quarter ended February 29, 2024.) “We are making the necessary adjustments to drive NIKE’s next chapter "
        "of growth Post this Third quarter revenues were slightly up on both a reported and currency-neutral basis* "
        "at $12.4 billion NIKE Direct revenues were $5.4 billion, slightly up on a reported and currency-neutral basis "
        "NIKE Brand Digital sales decreased 3 percent on a reported basis and 4 percent on a currency-neutral basis "
        "Wholesale revenues were $6.6 billion, up 3 percent on a reported and currency-neutral basis Gross margin "
        "increased 150 basis points to 44.8 percent, including a detriment of 50 basis points due to restructuring charges "
        "Selling and administrative expense increased 7 percent to $4.2 billion, including $340 million of restructuring "
        "charges Diluted earnings per share was $0.77, including $0.21 of restructuring charges. Excluding these "
        "charges, Diluted earnings per share would have been $0.98* “We are making the necessary adjustments to "
        "drive NIKE’s next chapter of growth,” said John Donahoe, President & CEO, NIKE, Inc. “We’re encouraged by "
        "the progress we’ve seen, as we build a multiyear cycle of new innovation, sharpen our brand storytelling and "
        "work with our wholesale partners to elevate and grow the marketplace.")


def not_found_then_triage_lookup():

    print("\nNot Found Example - if info not found, then lookup in another source.\n")

    extract_key = "company founding date"
    dict_key = extract_key.replace(" ", "_")
    company_founding_date = ""

    model = ModelCatalog().load_model("slim-extract-tool", temperature=0.0, sample=False)
    response = model.function_call(text, function="extract", params=[extract_key])
    llm_response = response["llm_response"]

    print(f"update: first text reviewed for {extract_key} - llm response: ", llm_response)

    if dict_key in llm_response:
        company_founding_date = llm_response[dict_key]
        if len(company_founding_date) > 0:
            company_founding_date = company_founding_date[0]
        print(f"update: found the {extract_key} value - ", company_founding_date)
        return company_founding_date
    else:
        print(f"update: did not find the target value in the text - {company_founding_date}")
        print("update: initiating a secondary process to try to find the information")

        response = model.function_call(text, function="extract", params=["company name"])

        if "company_name" in response["llm_response"]:
            company_name = response["llm_response"]["company_name"][0]
            if company_name:
                print(f"\nupdate: found the company name - {company_name} - now using to lookup in secondary source")
                output = WikiParser().add_wiki_topic(company_name, target_results=1)
                if output:
                    supplemental_text = output["articles"][0]["summary"]
                    if len(supplemental_text) > 150:
                        supplemental_text_pp = supplemental_text[0:150] + " ... "
                    else:
                        supplemental_text_pp = supplemental_text
                    print(f"update: using lookup - {company_name} - found secondary source article "
                          f"(extract displayed) - ", supplemental_text_pp)
                    new_response = model.function_call(supplemental_text, params=["company founding date"])
                    print("\nupdate: reviewed second source article - ", new_response["llm_response"])
                    if "company_founding_date" in new_response["llm_response"]:
                        company_founding_date = new_response["llm_response"]["company_founding_date"]
                        if company_founding_date:
                            print("update: success - found the answer - ", company_founding_date)
    return company_founding_date


if __name__ == "__main__":
    founding_date = not_found_then_triage_lookup()
```
You may also find the fully integrated code on our GitHub [here](https://github.com/llmware-ai/llmware/blob/main/examples/SLIM-Agents/not_found_extract_with_lookup.py).
Additionally, the notebook version (ipynb) is available [here](https://github.com/llmware-ai/llmware/blob/main/examples/Notebooks/NoteBook_Examples/not_found_extract_with_lookup.ipynb)
***
## Conclusion 🤖
<img src="https://media.giphy.com/media/rSVRXeKPgeM5xfGyCR/giphy.gif" alt="Men In Kilts" width="25%" height="25%">
Handling 'NOT FOUND' answers is one of the hardest problems in RAG, but it's a challenge that can be mitigated with thoughtful design. By implementing techniques like broader lookups, LLMWare aims to enhance the overall user experience and reliability of its AI systems.
Please check out our Github and leave a star! https://github.com/llmware-ai/llmware
Follow us on Discord here: https://discord.gg/MgRaZz2VAB | will_taner |
1,899,172 | Introduction to Docker: Revolutionizing Software Development and Deployment | In the fast-paced world of software development, efficiency and consistency are key. Docker, a... | 0 | 2024-06-24T17:02:25 | https://dev.to/gimkelum/introduction-to-docker-revolutionizing-software-development-and-deployment-42mf | docker, webdev | In the fast-paced world of software development, efficiency and consistency are key. Docker, a powerful platform for developing, shipping, and running applications, has emerged as a game-changer. This blog article will provide an introduction to Docker, exploring its benefits, core concepts, and how it revolutionizes the software development and deployment process. Let’s dive into the world of Docker and understand why it has become a crucial tool for developers.
What is Docker?
Docker is an open-source platform designed to automate the deployment of applications as portable, self-sufficient containers that can run in the cloud or on-premises. These containers encapsulate an application along with its dependencies, ensuring that it runs consistently across different environments. Docker was initially released in 2013 by Docker, Inc. and has since gained immense popularity due to its ability to simplify and streamline the development workflow.
Continue reading the full article [here](https://blog.inivac.co/2024/06/introduction-to-docker-revolutionizing.html). | gimkelum |
1,899,142 | Build AI powered projects *for free* | In this ever-evolving world, AI has become a must-have on your resume, and using AI in our side... | 0 | 2024-06-24T17:00:47 | https://dev.to/lemmecode/build-ai-powered-projects-for-free-e50 | In this ever-evolving world, AI has become a must-have on your resume, and using AI in our side projects can surely give us an advantage over those who don’t. but AI is expensive? or is it?
First and foremost, let me clarify that by side projects I don’t necessarily mean weird, dumb projects made just for fun, but projects that actually solve a problem.
Sure, you can have fun along the way, but make it in such a way that it always has the potential to become a successful SaaS.
I have seen many programmers build a successful SaaS business by accident. They started it as a side project, but it later turned out to be a profitable business, allowing them to make lots of money.
So be cautious — you never know when your chance will come.
## How can AI make our application better?
Using an AI API can significantly enhance our application’s capabilities. For instance, you can create a Personalized Content Recommendation Engine, an Automated Language Translation and Summarization Tool, or explore countless other possibilities limited only by the breadth of AI itself.
## Which AI model to choose?
Once you’ve finalized the idea on which the application is going to be based, we can then proceed to choosing the right model.
There are numerous options available, including the well-known OpenAI API, although it’s not free.

We need to find an AI API that is not only powerful but also free of charge. However, many AI APIs are expensive.
One alternative is to run AI locally, which is free, but if you try to host it on the internet, you will most likely go bankrupt.
I also hustled a lot to find a perfect API that aligns with both my needs and my budget, and my search ended when I found out about the Gemini ecosystem.
Google's Gemini is another well-known AI offering, made by Google, and fortunately, they provide a free-forever plan.
## Pricing of Google Gemini
As I mentioned earlier, unlike other AI API providers, Google Gemini provides a free-forever plan, which is perfect for us developers to create side projects using AI.
Gemini offers free access to three models under their free tier,
with a maximum of 15 RPM (requests per minute) and 1,500 RPD (requests per day). That is more than enough to get started, and you can always pay for more once your app scales.
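To stay under those free-tier limits in code, a tiny client-side throttle can help. Below is a hedged sketch — a helper of my own, not part of any Google SDK — that spaces calls at least 60,000 / 15 = 4,000 ms apart:

```javascript
// Minimal throttling helper (illustrative, not part of the Gemini SDK).
// It spaces successive calls so we never exceed 15 requests per minute.
const MIN_INTERVAL_MS = 60_000 / 15; // 4000 ms between requests

function createThrottler(minIntervalMs = MIN_INTERVAL_MS) {
  let nextAllowed = 0; // timestamp before which no call may start

  return async function throttled(fn) {
    const now = Date.now();
    const wait = Math.max(0, nextAllowed - now);
    nextAllowed = Math.max(now, nextAllowed) + minIntervalMs;
    if (wait > 0) {
      await new Promise((resolve) => setTimeout(resolve, wait));
    }
    return fn();
  };
}

// Usage sketch: wrap each model call before sending it, e.g.
// const throttle = createThrottler();
// const result = await throttle(() => chatSession.sendMessage("hello"));
```

This keeps your app safely inside the rate limit without any server-side infrastructure; once you outgrow it, a proper queue is the next step.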

## Getting things ready to start working
First of all, you need a Google account (obviously). Then simply head over to [aistudio.google.com](https://aistudio.google.com), hit the "Get API key" button in the left sidebar, and that's it: you've got your API key!
You can test how the AI will respond by clicking on "Create new prompt" and then "Chat prompt".

## Using the API in the code
Start by creating a project in your desired language; I will use JavaScript for this purpose.
1- Install the Generative AI SDK
```
npm install @google/generative-ai
```
2- Import/Require the dependencies
```
const {
  GoogleGenerativeAI,
  HarmCategory,
  HarmBlockThreshold,
} = require("@google/generative-ai");
```
3- Store your API key in a variable, and initialize Google Generative AI
```
const apiKey = process.env.GEMINI_API_KEY;
const genAI = new GoogleGenerativeAI(apiKey);
```
4- Select a model and few configuration
```
const model = genAI.getGenerativeModel({
  model: "gemini-1.5-flash",
});

const generationConfig = {
  temperature: 1,
  topP: 0.95,
  topK: 64,
  maxOutputTokens: 8192,
  responseMimeType: "text/plain",
};
```
5- Initialize chat session
```
const chatSession = model.startChat({
  generationConfig,
  // safetySettings: Adjust safety settings
  // See https://ai.google.dev/gemini-api/docs/safety-settings
  history: [
    {
      role: "user",
      parts: [{ text: "hey there\n" }],
    },
    {
      role: "model",
      parts: [{ text: "Hey there! What can I do for you today? \n" }],
    },
  ],
});
```
6- finally start calling the “chatSession” function to talk to the model
```
const results = await chatSession.sendMessage("Is javascript enough to takeover the world?");
console.log(results.response.text()); // e.g. "That's a fun question! While JavaScript is incredibly powerful..."
```
And that's it! You have successfully created a (very basic) AI-powered application. Now there's one thing to keep in mind: the only thing that's limited here is your imagination!
Stay updated with the latest trends in the world of code! Sign up for Code Whispers and never miss out on the latest insights, tips, and updates.
{% embed https://code-whispers.beehiiv.com/ %} | lemmecode | |
1,899,171 | Docker: Introduction step-by-step guide for beginners DevOps Engineers. | What is a Docker? Docker is a popular open-source project written in Go and developed by Dot Cloud (A... | 0 | 2024-06-24T16:58:56 | https://dev.to/oncloud7/docker-introduction-step-by-step-guide-for-beginners-devops-engineers-5fee | docker, devops, awschallenge | **What is a Docker?**
Docker is a popular open-source project written in Go and developed by Dot Cloud (A PaaS Company). It is a container engine that uses Linux Kernel features like namespaces and control groups to create containers on top of an operating system.
Docker is a Container management service. Is an Open Platform for developing, shipping & Running applications.
You can separate your applications from your infrastructure and treat your infrastructure like a managed application.
Docker combines kernel containerization features with workflow & tooling that helps you manage & deploy your application.

**Why Use Docker?**
With Docker, you can:
🔧 Build Once, Run Anywhere: Docker containers ensure your application behaves the same way in every environment, from your laptop to production servers.
🏭 Faster Deployment: Say goodbye to time-consuming setup processes. Docker's lightweight containers launch in a snap!
🚀 Scalability: With Docker, scaling your applications becomes as simple as hoisting the sails. 🎉
**Docker Architecture:-**

Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers.
The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon.
The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
Another Docker client is Docker Compose, which lets you work with applications consisting of a set of containers
**Key Features of Docker**
**Containerization and Isolation:**
Docker allows you to create isolated environments called containers, ensuring that applications and their dependencies are encapsulated, reducing conflicts and promoting consistency.
**Portability:**
Containers built with Docker are highly portable, meaning you can run them on any platform that supports Docker, from local development machines to cloud servers.
**Version Control:**
Docker makes it easy to version and share containers, enabling collaboration between developers and streamlining the continuous integration and deployment (CI/CD) pipeline.
**Efficiency and Performance:**
With its lightweight architecture, Docker minimizes the overhead of virtualization, resulting in faster application start times and better resource utilization.
**Scalability:**
Docker's design allows for effortless scaling of applications, both horizontally and vertically, to handle changing workloads and demand spikes.
**Integration and Ecosystem:**
Docker integrates seamlessly with various tools and platforms, making it an essential part of the DevOps ecosystem.
**What is a Container?**
In the above image, two applications are running on the same machine. These applications are run in a container and any changes made on this container do not affect the other container. Each of them can have a different Operating System running on the same physical machine.
Containers are instances of Docker images that can be run using the Docker run command.
The basic purpose of Docker is to run containers.
You can create, start, stop, move, or delete a container using the Docker API or CLI. You can connect a container to one or more networks, attach storage to it, or even create a new image based on its current state.
A Docker container is an isolated, secured shipping box produced when the Docker image is run.
A container packages up the code and all its dependencies so the application runs quickly and reliably from one computing environment to another.
**What is a Docker Image?**
A Docker Image is a read-only file with a bunch of instructions. When these instructions are executed, it creates a Docker container.
Docker Image is a read-only template, composed of the layered file system, needed to build a running docker container, basically the running instance of the image.
**What is a Dockerfile?**
Dockerfile is a simple text file that consists of instructions to build Docker images.
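As an illustration — not taken from the original article — a minimal Dockerfile for a hypothetical Node.js app might look like this (all names, ports, and files here are assumptions):

```dockerfile
# Start from an official base image (illustrative choice)
FROM node:20-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy dependency manifests first so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the application source
COPY . .

# Document the port the app listens on and define the start command
EXPOSE 3000
CMD ["node", "server.js"]
```

Running `docker build -t my-app .` in the same directory would turn this file into an image, which `docker run -p 3000:3000 my-app` could then start as a container.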
**Why should you use docker?**
Containers allow a developer to package up an application with all of the parts it needs, such as libraries and other dependencies, and deploy it as one package.
Previously, applications were developed with a monolithic architecture, but nowadays we use a microservices architecture. A monolithic application is built as one large system, usually on a single code base.
To overcome the drawbacks of monoliths, we use Docker: each service is built as a small, independent module based on business functionality.
**Before:**
- Monolithic application
- Long development cycles
- Single environment
- Slowly scaling up

**Now:**
- Decoupled services
- Fast and iterative improvement
- Multiple environments
- Quickly scaling out
| oncloud7 |
1,899,170 | Do senior devs use terminal/VIM? Do they type faster? | A reply to this great post about productivity of console proved to be quite popular, so I decided to... | 0 | 2024-06-24T16:58:27 | https://dev.to/latobibor/do-senior-devs-use-terminalvim-do-they-type-faster-glf | programming, console, terminal | A reply to this [great post about productivity of console](https://dev.to/sotergreco/improve-your-productivity-by-using-more-terminal-and-less-mouse--2o7b) proved to be quite popular, so I decided to turn it into an article and add a bit of extra.
## (Skippable) Little Story About DOS and `config.sys`
First a little story. I was a kid in the 90s and my dad had a computer at home. I totally got sucked into it; mostly to play video games. Back in the day we had DOS (Disk Operating System) and it was terminal based. You started Windows 3.1 _from_ the terminal and not the other way around. 🙂🙃
To juggle files and folders, I used Norton Commander. It had two panes for two folders, and you used F5, for example, to copy (if I still remember correctly) from the active pane to the other. I got really, really fast with it — wait, no dashes — so fast, in fact, that one day I accidentally deleted the important-looking `config.sys` file. Since there was no "Bin" back then, and I didn't know about the `undelete` command, it was a huge trauma to me: ruining dad's computer (which I hadn't, as I realized later) by being reckless. Later a friend of mine gave me the notes he took about DOS configuration in IT class, and I could recreate the file (and go back to playing games that required EMS memory handling instead of XMS).
So it is safe to say I was growing up not using the mouse and doing everything in the terminal. Yet, I have no particular love for it.
## Do You Need Serious Terminal/VIM Skills?
I try to use a great IDE as much as possible and the terminal as little as possible.
Why? Let's say I'm writing and fixing unit tests. The Jest integration in my WebStorm works like a breeze: I get a diff, I can click on the failing test, it jumps to the relevant line in the codebase, I can put in a breakpoint if I want, and so on. Lightning fast, I have everything at hand, no need to manually scroll, load, or struggle with anything. A 21st-century experience.
I do advocate for learning the keyboard shortcuts while using the IDE and also to learn tricks like multiline cursor editing and so on, because they save so much time.
## Do you need great typing speed? Is using a mouse an anti-pattern?
Now, about typing speed. When I code, I write a little and then mostly I think for a while, then type out my code, then check the results (either through a unit test or on the frontend). It's more like a quick chess game than a horse race. It's not a typing competition, it's not a copy-paste-modify extravaganza. In case I've written the samish thing multiple times I tend to step back to evaluate what to do with this repetition. Do I need to extract the code or am I expecting them to grow in a different direction very soon, therefore no need to extract the code? It requires stopping and thinking, nonetheless.
Half of my "coding time" is actually also talking with stakeholders and making agreements with my colleagues on what to work on and what not to work on. Sometimes you do a lot of damage to a codebase if you start solving the problem in the wrong place (e.g. is doing the `join` on the _frontend_ the right place for it, or is it better suited in the _database_?) and sometimes you have to advocate for an easier path to take.
## Terminals, CLIs and Mental Mapping
Finally, about CLIs: personally, I hate command line interfaces. They violate the _"avoid mental mapping"_ principle of clean coding: you have to memorize the flags for each CLI tool, remember if `-f` stood for _file_ or _force_ or _folder_ or something completely different.
> There is no "Codex of CLI Conventions", no "Knights of Inter-CLI Consistence". Every CLI is special in its own ways, and to remember the quirks of each is mentally taxing for me.
The only things I do in the terminal are basically "npm start" or "npx something something". Very, very limited instructions.
## Why People Might Prefer Terminal over Full-Fledged IDEs?
This is going to be pure speculation on my side:
- maybe there is this "Coder/Hacker" image which glorifies stream of logs in consoles
- or people are influenced by a famous engineer who uses terminal for everything
- can be a bit of impostor syndrome: the Big Fish uses terminal and VIM therefore if I don't they will think I'm just an impostor; you are not!
- neurodivergency 1: some people have very low tolerance to any type of slowness, they prefer high responsivity; IDEs, since they provide many functions tend to have problems with that time-to-time
- neurodivergency 2: some people have excellent mental mapping abilities (great, reliable memory) like my university math professors; I assume they tend to prefer tools with hardcore mental mapping; since it's a "superability", not the norm, I would not force it on others, personally (I don't have it - I love clean code for this reason)
- using console is a great way to freak out grandma or your parents that you are so intelligent you can speak directly with the CPU
## Do Senior DEVs Use Terminal/VIM then?
I think most of us do not. Personally, I feel there tends to be a lot of friction with people who push CLIs and bash scripts on others, as they increase the mental load on those who are neurologically not that great at mental mapping. If you can keep common projects working without hacking too much in the terminal, without adding magic lines to `.bashprofile`, you should do your own thing. Also, if you are working alone, you are 100% free to choose the methods that let you stay in the zone.
Therefore my productivity hack is to get a good IDE and learn it well! The shortcuts, the multiline editing capabilities, the integrations (in my favorite IDE, you can run an SQL statement from a TypeScript file to manually test your query; how cool is that?).
In a team, you have to code and plan your productivity in accordance with the tools available. If you "muscle" your code through by typing quickly and a lot, there is a chance you will ignore very effective coding-assistance tools while having the illusion of productivity, since you type a lot and press Enter a lot.
In reality you might be doing double/triple work. Think before you strike! | latobibor |
1,899,169 | golang: Understanding the difference between nil pointers and nil interfaces | An explanation of the difference between nil interfaces and nil pointers in go | 0 | 2024-06-24T16:56:35 | https://dev.to/goodevilgenius/golang-understanding-the-difference-between-nil-pointers-and-nil-interfaces-31ka | programming, webdev, go | ---
title: "golang: Understanding the difference between nil pointers and nil interfaces"
published: true
description: An explanation of the difference between nil interfaces and nil pointers in go
tags:
- programming
- webdev
- go
- golang
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5y3c9rhfkcy68ylgywd.jpg
---
I was thinking a bit about the different ways in which nil works in go, and how sometimes, something can be both nil and not nil at the same time.
[Here](https://go.dev/play/p/wgMSKWFjtvv) is a little example of something that can be a nil pointer, but not a nil interface. Let's walk through what that means.
## Interfaces
First, go has a concept of interfaces, which are similar, but not quite the same as interfaces in some object-oriented languages (go is not OOP by most definitions). In go, an interface is a type that defines functions that another type must implement to satisfy the interface. This allows us to have multiple concrete types that can satisfy an interface in different ways.
For example, `error` is a built-in interface that has a single method. It looks like this:
```go
type error interface {
	Error() string
}
```
Any type that wants to be used as an error must have a method called `Error` which returns a string. For example, the following code could be used:
```go
type ErrorMessage string

func (em ErrorMessage) Error() string {
	return string(em)
}

func DoSomething() error {
	// Try to do something, and it fails.
	if somethingFailed {
		var err ErrorMessage = "This failed"
		return err
	}
	return nil
}

func main() {
	err := DoSomething()
	if err != nil {
		panic(err)
	}
}
```
Notice in this example that `DoSomething` returns an `error` if something goes wrong. We can use our `ErrorMessage` type, because it has the `Error` function, which returns a string, and therefore implements the `error` interface.
If no error occurred, we returned nil.
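For comparison, the standard library already covers this simple case: instead of defining a custom `ErrorMessage` type, we could return an error built with `errors.New`. This is a hedged alternative sketch of my own, not what the article's example uses:

```go
package main

import (
	"errors"
	"fmt"
)

// doSomething mirrors the example above, but returns an error built with
// the standard library's errors.New instead of a custom ErrorMessage type.
func doSomething(fail bool) error {
	if fail {
		return errors.New("This failed")
	}
	return nil
}

func main() {
	if err := doSomething(true); err != nil {
		fmt.Println("got error:", err) // got error: This failed
	}
}
```

Defining your own error type (as the article does) is still worthwhile when you want callers to match on the error's type or carry extra fields.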
## Pointers
In go, pointers point to a value, but they can also point to no value, in which case the pointer is nil. For example:
```go
var i *int = nil

func main() {
	if i == nil {
		j := 5
		i = &j
	}
	fmt.Println("i is", *i)
}
```
In this case, the `i` variable is a pointer to an int. It starts out as a nil pointer, until we create an int, and point it to that.
## Pointers and interfaces
Since user-defined types can have functions (methods) attached, we can also have functions for pointers to types. This is a very common practice in go. This also means that pointers can also implement interfaces. In this way, we could have a value that is a non-nil interface, but still a nil pointer. Consider the following code:
```go
type TruthGetter interface {
	IsTrue() bool
}

func PrintIfTrue(tg TruthGetter) {
	if tg == nil {
		fmt.Println("I can't tell if it's true")
		return
	}

	if tg.IsTrue() {
		fmt.Println("It's true")
	} else {
		fmt.Println("It's not true")
	}
}
```
Any type that has an `IsTrue() bool` method can be passed to `PrintIfTrue`, but so can `nil`. So, we can do `PrintIfTrue(nil)` and it will print "I can't tell if it's true".
We can also do something simple like this:
```go
type Truthy bool

func (ty Truthy) IsTrue() bool {
	return bool(ty)
}

func main() {
	var ty Truthy = true
	PrintIfTrue(ty)
}
```
This will print "It's true".
Or, we can do something more complicated, like:
```go
type TruthyNumber int
func (tn TruthyNumber) IsTrue() bool {
return tn > 0
}
func main() {
var tn TruthyNumber = -4
PrintIfTrue(tn)
}
```
That will print "It's not true". Neither of these examples uses a pointer, so there's no chance of a nil with either of these types, but consider this:
```go
type TruthyPerson struct {
FirstName string
LastName string
}
func (tp *TruthyPerson) IsTrue() bool {
return tp.FirstName != "" && tp.LastName != ""
}
```
In this case `TruthyPerson` does not implement `TruthGetter`, but `*TruthyPerson` does. So, this should work:
```go
func main() {
tp := &TruthyPerson{"Jon", "Grady"}
PrintIfTrue(tp)
}
```
This works because `tp` is a pointer to a `TruthyPerson`. However, if the pointer is nil, we'll get a panic.
```go
func main() {
var tp *TruthyPerson
PrintIfTrue(tp)
}
```
This will panic. However, the panic doesn't happen in `PrintIfTrue`. You would think it's fine, because `PrintIfTrue` checks for nil. But here's the issue: it's checking nil against a `TruthGetter`. In other words, it's checking for a nil interface, not a nil pointer. And in `func (tp *TruthyPerson) IsTrue() bool`, we don't check for nil. In Go, we can still call methods on a nil pointer, so the panic happens there. The fix is actually pretty easy.
```go
func (tp *TruthyPerson) IsTrue() bool {
if tp == nil {
return false
}
return tp.FirstName != "" && tp.LastName != ""
}
```
Now, we're checking for a nil interface in `PrintIfTrue` and for a nil pointer in `func (tp *TruthyPerson) IsTrue() bool`. And it will now print "It's not true". We can see all this code [working here](https://go.dev/play/p/IihhiSXMm-q).
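The gotcha at the heart of all this can be boiled down to a few lines: storing a nil `*TruthyPerson` in an interface produces an interface value that is *not* nil. Here's a self-contained sketch (using the fixed `IsTrue` from above) that demonstrates it:

```go
package main

import "fmt"

type TruthGetter interface {
	IsTrue() bool
}

type TruthyPerson struct {
	FirstName string
	LastName  string
}

func (tp *TruthyPerson) IsTrue() bool {
	if tp == nil {
		return false
	}
	return tp.FirstName != "" && tp.LastName != ""
}

func main() {
	var tp *TruthyPerson    // nil pointer
	var tg TruthGetter = tp // interface holding a nil pointer

	fmt.Println(tp == nil)   // true
	fmt.Println(tg == nil)   // false: the interface has a type, so it's not nil
	fmt.Println(tg.IsTrue()) // false, thanks to the nil check in IsTrue
}
```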
## Bonus: Check for both nils at once with reflection
With reflection, we can make a small change to `PrintIfTrue` so that it can check for both nil interfaces and nil pointers. Here's the code:
```go
func PrintIfTrue(tg TruthGetter) {
if tg == nil {
fmt.Println("I can't tell if it's true")
return
}
val := reflect.ValueOf(tg)
k := val.Kind()
if (k == reflect.Pointer || k == reflect.Chan || k == reflect.Func || k == reflect.Map || k == reflect.Slice) && val.IsNil() {
fmt.Println("I can't tell if it's true")
return
}
if tg.IsTrue() {
fmt.Println("It's true")
} else {
fmt.Println("It's not true")
}
}
```
Here, we check for the nil interface first, as before. Next, we use reflection to get the value's kind. In addition to pointers, `chan`, `func`, `map`, and `slice` values can also be nil, so we check whether the value is one of those kinds, and if so, whether it's nil. If it is, we also return the "I can't tell if it's true" message. This may or may not be exactly what you want, but it's an option. With this change, we can do this:
```go
func main() {
var tp *TruthyPerson
PrintIfTrue(tp)
}
```
You might sometimes see a suggestion to do something simpler, like:
```go
// Don't do this
if tg == nil || reflect.ValueOf(tg).IsNil() {
fmt.Println("I can't tell if it's true")
return
}
```
There are two reasons this doesn't work well. First, there is a performance overhead when using reflection; if you can avoid reflection, you probably should. By checking for the nil interface first and returning early, we don't have to use reflection at all when the interface itself is nil.
The second reason is that `reflect.Value.IsNil()` will panic if the value's type isn't one that can be nil. That's why we added the check for the kind. If we hadn't checked the `Kind`, we would've gotten a panic on the `Truthy` and `TruthyNumber` types.
So, as long as we ensure we check the kind first, this will now print "I can't tell if it's true", instead of "It's not true". Depending on your perspective, this may be an improvement. [Here](https://go.dev/play/p/mL048k4C1be) is the complete code with this change.
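If you need this check in more than one place, one option (my sketch, not code from the original post) is to factor it into a small helper that handles both the nil interface and the nilable kinds:

```go
package main

import (
	"fmt"
	"reflect"
)

// isNil reports whether i is a nil interface, or an interface
// holding a nil pointer, chan, func, map, or slice.
func isNil(i any) bool {
	if i == nil {
		return true
	}
	val := reflect.ValueOf(i)
	switch val.Kind() {
	case reflect.Pointer, reflect.Chan, reflect.Func, reflect.Map, reflect.Slice:
		return val.IsNil()
	}
	return false
}

func main() {
	var p *int
	fmt.Println(isNil(nil)) // true: nil interface
	fmt.Println(isNil(p))   // true: nil pointer inside a non-nil interface
	fmt.Println(isNil(42))  // false: an int can never be nil
}
```

`PrintIfTrue` could then start with `if isNil(tg) { ... }` instead of repeating the two checks inline.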
*This was originally published on [Dan's Musings](https://goodevilgenius.org/2024/06/24/golang-Understanding-the-difference-between-nil-pointers-and-nil-interfaces/)*
*by goodevilgenius*

---

# Text-based language processing enhanced with AI/ML

*Published 2024-06-24 by wescpy at https://dev.to/wescpy/text-based-language-processing-enhanced-with-aiml-1b1h (tags: python, node, machinelearning, ai)*

<!-- Text-based language processing enhanced with AI/ML: Intro to the Google Cloud Natural Language & Translation APIs -->
## TL;DR:
On this family summer trip to Asia, I've admittedly been relying heavily on [Google Translate](https://translate.google.com). As someone who lives in the world of APIs, that makes me think of "its API,"^ the Google [Cloud Translation API](https://cloud.google.com/translate). Pure translation, though, is not the same as finding the right words (although they're similar), and _that_ makes me think of natural language understanding (NLU). When considering NLU and NLP (natural language processing), I think of the [Cloud Natural Language API](https://cloud.google.com/natural-language). While there's a brief intro to _this_ API in another post on API keys, let's take a closer look at _both_ of these language-flavored Google Cloud (GCP) APIs and how to use them with Python and Node.js.
<small>
<sup>^</sup> -- The Google Translate product itself doesn't offer an official API, but the Cloud Translation API serves that purpose.
</small>
<!--[GCP Natural Language & Translation APIs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/17m9p6g0c5zww4re0yzs.png)-->

## Introduction
If you're interested in Google APIs and how to access them from Python (and sometimes Node.js), you're in the right place. I especially like to present code and samples you won't find in Google's documentation (unless I wrote it). :-) I also aim for broad coverage across a wide variety of products and topics while leaning towards server-side functionality. Here are some covered so far:
- Google API credentials: [API keys](https://dev.to/wescpy/series/25404) and [OAuth client IDs](https://dev.to/wescpy/series/25403)
- [Serverless computing](https://dev.to/wescpy/a-broader-perspective-of-serverless-1md1), including the latest news with [App Engine](https://dev.to/wescpy/python-app-engine-jan-2024-deprecation-what-you-need-to-know-4bci)
- [Generative AI with the Gemini API](https://dev.to/wescpy/series/27183) via [Google AI](https://ai.google.dev) or [GCP Vertex AI](https://cloud.google.com/vertex-ai)
- [Google Workspace (GWS) APIs](https://developers.google.com/gsuite), e.g., [exporting Google Docs as PDF](https://dev.to/wescpy/export-google-docs-as-pdf-without-the-docs-api-9o4) files
- [Geolocation, time zones, and walking directions](https://dev.to/wescpy/explore-the-world-with-google-maps-apis-lhj) with [Google Maps](https://developers.google.com/maps) (GMP) APIs
### Background and motivation
If you're interested in exploring text-based language processing but are new to AI/ML, and aren't ready to train your own models or dive into LLMs (large language models) yet, the Natural Language and Translation APIs from Google Cloud may be some of the simplest ways to get up to speed with AI/ML. With these APIs, you don't have to train models because they're already backed by pre-trained models from Google.
The purpose of this post is to give developers enough familiarity with these APIs so you can start using them right away in your next Python or Node.js projects. Client libraries encapsulate much of the required boilerplate and do a lot of the heavy lifting so developers can focus on using API features and solving problems sooner. The code samples in this post leverage the respective client library packages for those languages. Let's do a brief introduction to these APIs.
### Translation API
The Cloud Translation API gives you the core functionality of a tool like Google Translate, allowing you to access that feature programmatically. It uses a pre-trained custom model for this purpose, and your apps can use it to translate an arbitrary string in a supported language to its equivalent in another supported language using state-of-the-art Neural Machine Translation (NMT). Google updates this NMT model on a semi-regular basis, "when more training data or better techniques become available."
While out-of-scope for the content in this post, if this model doesn't suit your purposes exactly, or you need to finetune this model with your own data, use the [AutoML version of the Translation API](https://cloud.google.com/translate/automl/docs/beginners-guide) instead. LLM translations are also now available as well as translations from Google's Gemini. You can read more on [accessing all three from Vertex AI in the docs](https://cloud.google.com/vertex-ai/generative-ai/docs/translate/translate-text). For introductory purposes, the pre-trained Translation API suffices.
### Natural Language API
The Cloud Natural Language API offers multiple natural language features. It can reveal the structure and meaning of text, performing sentiment analysis, content classification, entity extraction, and syntactic analysis, and it can do so across multiple supported languages.
Similar to the Translation API, the Natural Language API is backed by a pre-trained model from Google. If you need a finetuned model trained on your own data, the original solution was the [AutoML Natural Language API](https://cloud.google.com/natural-language/automl/docs) (AMLNL). Google then merged [AMLNL into the larger Vertex AI platform](https://cloud.google.com/vertex-ai/docs/start/migrating-applications#automl-natural-language), and more recently, you would be [finetuning the Gemini model](https://cloud.google.com/vertex-ai/generative-ai/docs/models/tune-models) from Vertex AI. Those are great options when your needs grow or change; for now, the pre-trained Natural Language API is all we need.
## Prerequisites
As with most Google APIs, there are several distinct steps that you must take before you can use them from your application:
1. **Install client libraries** for your development language(s)
1. **Enable desired API(s)** in developer's console ("DevConsole")
1. **Get credentials** (choose from your options)
These actions are unrelated to each other, meaning you can (almost always) do them in any order. However, unlike with other Google APIs, there's a bit more to discuss with regards to credentials, so I'll cover the basics in this section and circle back to the topic towards the end.
> :warning: ALERT: **Billing required (but free?!?)**
> While many Google APIs are free to use, GCP APIs are not. You must [enable billing in the Cloud Console](http://console.cloud.google.com/billing) and create a billing account supported by a financial instrument ([payment method depends on region/currency](https://support.google.com/paymentscenter/answer/9001356?ref_topic=9023854#allowed-methods)) in order to run the code samples. If you're new to GCP, review the [billing and onboarding guide](https://cloud.google.com/billing/docs/onboarding-checklist).
> That said, Google surely wants you to "try before you buy," get an idea of what kinds of services are available, _and_ how to use their APIs, so GCP has a [free monthly (or daily) tier for certain products](https://cloud.google.com/free/docs/free-cloud-features#free-tier-usage-limits), including the Natural Language and Translation APIs. Review the information on the [Natural Language](http://cloud.google.com/natural-language/pricing) and [Translation](http://cloud.google.com/translate/pricing) API pricing pages to understand how their usage is billed. Running the sample scripts in this post a reasonable number of times should not incur billing... just be sure to stay within their corresponding free tier limits.
### Install client libraries
Commands are listed below for Python (2 or 3) and Node.js to install the client libraries for both APIs. Pick the one for your development language, or both if you're inclined:
| Language | Command |
| --- | --- |
| **Python** | `pip install -U pip google-cloud-language google-cloud-translate` # (or `pip3`) |
| **Node.js** | `npm i @google-cloud/language @google-cloud/translate` |
<figcaption>Client library installation commands</figcaption>
Confirm all required packages have been installed correctly with the validation commands below... if they complete without error, the installation(s) succeeded and are ready for use:
| Language | Command |
| --- | --- |
| **Python** | `python -c "from google.cloud import language, translate"` # (or `python3`) |
| **Node.js** | `node -e "require('@google-cloud/language'); require('@google-cloud/translate').v2"` |
<figcaption>Client library validation commands</figcaption>
Client libraries for both APIs are available in a variety of languages. If you work in a different development language or want to learn more about the client libraries in general, see the relevant documentation page for the [Natural Language API](https://cloud.google.com/natural-language/docs/sentiment-analysis-client-libraries#install_the_client_library) and [Translation API](https://cloud.google.com/translate/docs/setup#installing_client_libraries).
| :memo: NOTE: Translation API has two editions |
|:---------------------------|
| The Translation API comes in [basic](https://cloud.google.com/translate/docs/basic/translate-text-basic) and [advanced](https://cloud.google.com/translate/docs/advanced/translate-text-advance) editions. We are using **Basic**, so you only need to follow the client library instructions for _that_ service. To learn more, see the [comparing both editions page](https://cloud.google.com/translate/docs/editions) in the documentation. |
### Enable APIs
All Google APIs must be enabled before they can be used. As mentioned in the earlier sidebar on billing, an active billing account is required, as well as [creating a new project](https://cloud.google.com/resource-manager/docs/creating-managing-projects) or reusing an existing one that has a working billing account attached to it. If you don't, you'll be prompted to do so when enabling billable APIs like Natural Language or Translation.
There are generally three ways of enabling Google APIs:
1. **DevConsole manually** -- Enable one API at a time by following these steps:
- Go to [DevConsole](http://console.developers.google.com)
- Click on **Library** tab
- Search for "Language", select "Cloud Natural Language API", click **Enable API** button
- Go back and search for "Translate", pick "Cloud Translation API", and enable _that_ one
1. **DevConsole link** -- You may be new to Google APIs or don't have experience enabling APIs manually in the DevConsole. If this is you, the above steps can be simplified with a single [DevConsole link (and click) that enables both APIs](http://console.developers.google.com/start/api?id=language.googleapis.com,translate.googleapis.com).
1. **Command-line** (`gcloud`) -- For those who prefer working in a terminal, you can enable APIs with a single command in the [Cloud Shell](https://cloud.google.com/shell) or locally on your computer if you [installed the Cloud SDK](https://cloud.google.com/sdk/install) (which includes the `gcloud` command-line tool [CLI]) and initialized its use. If this is you, issue the following command to enable both APIs: `gcloud services enable language.googleapis.com translate.googleapis.com`. Confirm all the APIs you've enabled using this command: `gcloud services list`.
### Get credentials
Google APIs require one or more of the following credentials types:
- API keys
- OAuth client IDs
- Service accounts
Which you use depends on which APIs you're trying to access. When using GCP APIs, you're more likely to use service accounts and OAuth client IDs.
#### Which credentials type?
To learn the code and run the sample scripts from the blog post in your development environment, you're more likely to use the latter (OAuth client ID). When you're ready for production and move away from your "dev box" to a VM or other server in the cloud, you'd then transition to a service account.
To ease the burden on the developer -- no one wants to write different code for different credentials types -- GCP client libraries use an associated library named Application Default Credentials (ADC) for API access. Depending on the execution environment, ADC will point to either service account or OAuth client ID credentials. Create your credentials by going to your terminal and issue the following command: `gcloud auth application-default login` (and follow the instructions).
#### (optional) More about ADC and credentials
The command above asks you for the relevant permissions then obtains user-authorized credentials (OAuth client ID) accessible by the ADC library so you can experiment with the code samples. Learn more about ADC in the docs, starting with the [ADC setup page](https://cloud.google.com/docs/authentication/provide-credentials-adc). Also see the [page on the different authentication methods and credentials types](https://cloud.google.com/docs/authentication).
Of the three credentials types, OAuth client IDs and service accounts provide _authorized_ access to APIs and the data behind them. The third, API keys, provides _simple_ access to APIs (no user permissions needed beyond API key creation). Because API keys impose less friction and provide an easier onboarding process than the other credentials types, many developers prefer them, so I'll show you how to access both APIs with API keys in an optional section at the end of this post if it's something that interests you.
OAuth client IDs provide user-authorized access, so they're generally used with APIs that access user-owned data, for example, Google Workspace (GWS) APIs, but not in this case. When using GCP APIs, they provide slightly-elevated access over API keys and are less confusing than learning about service accounts and IAM (identity & access management) permissions.
Learn more about using OAuth client IDs with GWS APIs in a separate [post series](https://dev.to/wescpy/series/25403) covering the topic. To learn more about API keys, there's [another post series](https://dev.to/wescpy/series/25404) covering _that_ subject. (A third series on service accounts is forthcoming.)
## Code samples
If you've completed the above steps (enabled billing & APIs, installed the client libraries, and obtained credentials via the `gcloud` command above), you're ready to look at and run code!
### Application
The sample script translates text from English to Spanish using the Translation API, and demonstrates two features from the Natural Language API, sentiment analysis and content classification. [Sentiment analysis](https://cloud.google.com/natural-language/docs/analyzing-sentiment) determines whether the provided text is positive or negative (or neither) while [content classification](https://cloud.google.com/natural-language/docs/classifying-text) attempts to classify the text as belonging to one or more categories. The analyzed text is the same as the text that it translates.
In each script, you'll first see the text followed by the three forms of processing just described above. Let's start with Python.
### Python
The main Python script is available in the [repo](https://github.com/wescpy/google/tree/main/cloud/transnatlang) as [`transnatlang-svcacct-gcp.py`](https://github.com/wescpy/google/blob/main/cloud/transnatlang/transnatlang-svcacct-gcp.py):
```python
'''
transnatlang-svcacct-gcp.py -- Cloud Natural Language & Translation demo
'''
from __future__ import print_function
import sys
from google.cloud import language_v1 as language, translate_v2 as translate
# Text to process
_TEXT = '''\
Google, headquartered in Mountain View, unveiled the new Android
phone at the Consumer Electronics Show. Sundar Pichai said
in his keynote that users love their new Android phones.
'''
TEXT = ' '.join(_TEXT.strip().split())
LINE = '-' * 60
# Display text to process
print('TEXT:')
print(TEXT)
print(LINE)
# Create API clients/endpoints
NL = language.LanguageServiceClient()
TL = translate.Client()
# Detect text sentiment
TYPE = 'type_' if sys.version_info.major == 3 else 'type'
BODY = {'content': TEXT, TYPE: language.types.Document.Type.PLAIN_TEXT}
sent = NL.analyze_sentiment(document=BODY).document_sentiment
print('\nSENTIMENT: score (%.2f), magnitude (%.2f)' % (
sent.score, sent.magnitude))
print(LINE)
# Categorize text
print('\nCATEGORIES:')
categories = NL.classify_text(document=BODY).categories
for cat in categories:
print('* %s (%.2f)' % (cat.name[1:], cat.confidence))
print(LINE)
# Translate text to Spanish
TARGET = 'es'
txlns = TL.translate(TEXT, TARGET)
txln = txlns[0] if isinstance(txlns, list) else txlns
print('\nTRANSLATION to %r:\n%s' % (TARGET, txln['translatedText']))
```
After the `import`s and seeing the text that will be processed, the code creates the API clients that are used to access API features. Sentiment analysis comes first, followed by content classification, and finally, the text is translated. This script is architected as Python 2/3-compatible (to help remaining 2.x users migrate to 3.x), and running it with either Python version results in identical output:
```
$ python transnatlang-svcacct-gcp.py
TEXT:
Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer
Electronics Show. Sundar Pichai said in his keynote that users love their new Android
phones.
------------------------------------------------------------
SENTIMENT: score (0.20), magnitude (0.50)
------------------------------------------------------------
CATEGORIES:
* Internet & Telecom (0.76)
* Computers & Electronics (0.64)
* News (0.56)
------------------------------------------------------------
TRANSLATION to 'es':
Google, con sede en Mountain View, presentó el nuevo teléfono Android en el Consumer
Electronics Show. Sundar Pichai dijo en su discurso de apertura que a los usuarios
les encantan sus nuevos teléfonos Android.
```
### Node.js
The near-equivalent Node.js script can be accessed as [`transnatlang-svcacct-gcp.js`](https://github.com/wescpy/google/blob/main/cloud/transnatlang/transnatlang-svcacct-gcp.js) in the repo:
```javascript
// transnatlang-svcacct-gcp.js -- Cloud Natural Language & Translation APIs demo
const LanguageClient = require('@google-cloud/language');
const {Translate} = require('@google-cloud/translate').v2;
// Text to process
const TEXT = `Google, headquartered in Mountain View, unveiled
the new Android phone at the Consumer Electronics Show. Sundar
Pichai said in his keynote that users love their new Android
phones.`.replace(/\n/g, ' ');
const LINE = '-'.repeat(60);
const BODY = {content: TEXT, type: 'PLAIN_TEXT'};
// Create API clients/endpoints
const NL = new LanguageClient.LanguageServiceClient();
const TL = new Translate();
// Detect text sentiment
async function sentAnalysis() {
const [result] = await NL.analyzeSentiment({document: BODY});
const sent = result.documentSentiment;
console.log(`\nSENTIMENT: score (${sent.score.toFixed(2)}), magnitude (${sent.magnitude.toFixed(2)})`);
console.log(LINE);
}
// Categorize text
async function categorizeText() {
console.log('\nCATEGORIES:');
const [result] = await NL.classifyText({document: BODY});
const categories = result.categories;
for (let cat of categories) {
console.log(`* ${cat.name.slice(1)} (${cat.confidence.toFixed(2)})`);
};
console.log(LINE);
}
// Translate text to Spanish
async function translateText() {
const TARGET = 'es';
const [txlns] = await TL.translate(TEXT, TARGET);
let txln = Array.isArray(txlns) ? txlns[0] : txlns;
console.log(`\nTRANSLATION to "${TARGET}":\n${txln}`);
}
// Display text to process
console.log('TEXT:');
console.log(TEXT);
console.log(LINE);
// Execute all
sentAnalysis()
.then(categorizeText)
.then(translateText)
.catch(console.error);
```
Executing the Node.js script results in the same output as the Python version:
```
$ node transnatlang-svcacct-gcp.js
TEXT:
Google, headquartered in Mountain View, unveiled the new Android phone at the Consumer
Electronics Show. Sundar Pichai said in his keynote that users love their new Android
phones.
------------------------------------------------------------
SENTIMENT: score (0.20), magnitude (0.50)
------------------------------------------------------------
CATEGORIES:
* Internet & Telecom (0.76)
* Computers & Electronics (0.64)
* News (0.56)
------------------------------------------------------------
TRANSLATION to "es":
Google, con sede en Mountain View, presentó el nuevo teléfono Android en el Consumer
Electronics Show. Sundar Pichai dijo en su discurso de apertura que a los usuarios
les encantan sus nuevos teléfonos Android.
```
For those who prefer a modern ECMAScript module, here's the equivalent `.mjs` file, in the repo as [`transnatlang-svcacct-gcp.mjs`](https://github.com/wescpy/google/blob/main/cloud/transnatlang/transnatlang-svcacct-gcp.mjs):
```javascript
// transnatlang-svcacct-gcp.mjs -- Cloud Natural Language & Translation APIs demo
import LanguageClient from '@google-cloud/language';
import {v2} from '@google-cloud/translate';
// Text to process
const TEXT = `Google, headquartered in Mountain View, unveiled
the new Android phone at the Consumer Electronics Show. Sundar
Pichai said in his keynote that users love their new Android
phones.`.replace(/\n/g, ' ');
const LINE = '-'.repeat(60);
const BODY = {content: TEXT, type: 'PLAIN_TEXT'};
// Create API clients/endpoints
const NL = new LanguageClient.LanguageServiceClient();
const TL = new v2.Translate();
// Detect text sentiment
async function sentAnalysis() {
const [result] = await NL.analyzeSentiment({document: BODY});
const sent = result.documentSentiment;
console.log(`\nSENTIMENT: score (${sent.score.toFixed(2)}), magnitude (${sent.magnitude.toFixed(2)})`);
console.log(LINE);
}
// Categorize text
async function categorizeText() {
console.log('\nCATEGORIES:');
const [result] = await NL.classifyText({document: BODY});
const categories = result.categories;
for (let cat of categories) {
console.log(`* ${cat.name.slice(1)} (${cat.confidence.toFixed(2)})`);
};
console.log(LINE);
}
// Translate text to Spanish
async function translateText() {
const TARGET = 'es';
const [txlns] = await TL.translate(TEXT, TARGET);
let txln = Array.isArray(txlns) ? txlns[0] : txlns;
console.log(`\nTRANSLATION to "${TARGET}":\n${txln}`);
}
// Display text to process
console.log('TEXT:');
console.log(TEXT);
console.log(LINE);
// Execute all
sentAnalysis()
.then(categorizeText)
.then(translateText)
.catch(console.error);
```
Take my word for it that its output is identical to the CommonJS version. Play around with the code, try using other languages with both the [Translation](https://cloud.google.com/translate/docs/languages) and [Natural Language](https://cloud.google.com/natural-language/docs/languages) APIs, modify the code with different data, or experiment with other API features as desired. Regardless, you now have basic working knowledge of both APIs. (Skip the next section if you're not interested in using API keys with these APIs.)
### (optional) Python: API key access
Now for some stuff you _definitely_ won't find much information on or code samples in the official or _any_ Google documentation.
#### Background
As mentioned earlier, use of GCP APIs generally leverages OAuth client IDs or service accounts. API keys are much easier to use and implement in applications, though. Unfortunately, API keys are also easy to lose or leak, so many consider them less secure than the other credentials types. (Also see the sidebar below.)
Generally, API keys are used with Google APIs that access "public data," meaning data not owned by a (human or robot) user nor belonging to a project. This includes text strings sent to APIs to be processed, analyzed, or translated, and both APIs covered today are part of that group of [GCP APIs accepting API keys](http://web.archive.org/web/20200510072541/https://cloud.google.com/docs/authentication/api-keys). (More information can be found in the [post series on API keys](https://dev.to/wescpy/series/25404).)
Another hurdle using API keys is that because they're not typical nor recommended for GCP APIs, support for API keys isn't always complete. While the [Python Natural Language API client library supports API keys](https://cloud.google.com/docs/authentication/api-keys#using-with-client-libs), the [Python Translation API client library](https://cloud.google.com/translate/docs/setup#installing_client_libraries) **does not**. Both of these are considered _product_ client libraries because there is one per product.
Developers need to take a leap of faith here because you're about to be exposed to something you haven't seen before: the older but broader [_platform_ Google API client library](https://developers.google.com/api-client-library) for Python. This is the same client library used to [access GWS APIs along with OAuth client IDs](https://dev.to/wescpy/series/25403), so if you've ever used GWS APIs, this won't take you by surprise like it will developers who have only used GCP APIs and product client libraries.
In order to keep the code consistent and _not_ mix styles of different client libraries, I need to switch the sample app to use the platform client library _and_ the older auth libraries that go with it. If you're willing to take this leap, then continue on to creating an API key to call both language-oriented APIs with.
| :memo: NOTE: Node.js API key support unknown |
|:---------------------------|
| I have yet to try using API keys in Node.js with these APIs, whether product or platform client libraries. If you have a working example, please file an issue and submit a PR. |
#### Get credentials (API key)
Follow these steps to create an API key:
1. Go to the [DevConsole credentials page](https://console.cloud.google.com/apis/credentials)
1. Click **+ Create Credentials** at the top
1. Select **API key** and wait a few seconds for completion
1. Copy and save API key as a variable `API_KEY = '<YOUR-API-KEY>'` to `settings.py` for Python (may need to refresh credentials page and click **Show key** to see it)
| :warning: WARNING: Keep API keys secure |
|:---------------------------|
| Storing API keys in files (or hard-coding them in actual code, or even assigning them to environment variables) is for prototyping and learning purposes only. When going to production, move them to a secrets manager. Files like `settings.py` or `.env` containing API keys are susceptible to leaking. ***Under no circumstances*** should you [upload files like those to any public or private repo](https://www.gitguardian.com/glossary/remediate-sensitive-data-leaks-api-keys-hardcoded-source-code), [have sensitive data like that in Terraform config files](https://spacelift.io/blog/terraform-secrets), [add such files to Docker layers](https://www.darkreading.com/cloud-security/docker-leaks-api-secrets-private-keys-cybercriminals), etc., because once your API key leaks, everyone in the world can use it. |
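For prototyping, one small improvement over a `settings.py` file is reading the key from an environment variable, which at least keeps it out of your source tree. Here's a minimal sketch; the variable name `GOOGLE_API_KEY` is my choice, not an official convention:

```python
import os

# Hypothetical variable name; pick whatever your setup uses.
API_KEY = os.environ.get('GOOGLE_API_KEY', '')
if not API_KEY:
    print('Set the GOOGLE_API_KEY environment variable first')
```

The rest of the script would then use `API_KEY` exactly the way the `settings.py` value is used below.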
#### Install platform client library
Follow these steps in your terminal to install and validate the platform client library:
1. **Install**: `pip install -U pip google-api-python-client` # (or `pip3`)
1. **Validate**: `python -c "import googleapiclient"` # (or `python3`)
#### Sample app using API keys
Assuming you've already enabled both APIs, you're good-to-go. The API key alternate version of the Python script shown below is available as [`transnatlang-apikey-old.py`](https://github.com/wescpy/google/blob/main/cloud/transnatlang/transnatlang-apikey-old.py) in the repo:
```python
'''
transnatlang-apikey-old.py -- GCP Natural Language & Translation APIs demo
'''
from __future__ import print_function
from googleapiclient import discovery
from settings import API_KEY
# Text to process
_TEXT = '''\
Google, headquartered in Mountain View, unveiled the new Android
phone at the Consumer Electronics Show. Sundar Pichai said
in his keynote that users love their new Android phones.
'''
TEXT = ' '.join(_TEXT.strip().split())
LINE = '-' * 60
# Display text to process
print('TEXT:')
print(TEXT)
print(LINE)
# Create API clients/endpoints
NL = discovery.build('language', 'v1', developerKey=API_KEY)
TL = discovery.build('translate', 'v2', developerKey=API_KEY)
# Detect text sentiment
BODY = {'content': TEXT, 'type': 'PLAIN_TEXT'}
sent = NL.documents().analyzeSentiment(
body={'document': BODY}).execute().get('documentSentiment')
print('\nSENTIMENT: score (%.2f), magnitude (%.2f)' % (
sent['score'], sent['magnitude']))
print(LINE)
# Categorize text
print('\nCATEGORIES:')
categories = NL.documents().classifyText(
body={'document': BODY}).execute().get('categories')
for cat in categories:
print('* %s (%.2f)' % (cat['name'][1:], cat['confidence']))
print(LINE)
# Translate text to Spanish
TARGET = 'es'
txln = TL.translations().list(
q=TEXT, target=TARGET).execute().get('translations')[0]
print('\nTRANSLATION to %r:\n%s' % (TARGET, txln['translatedText']))
```
As you can guess, the output is identical to the earlier versions above, so we won't show it here. More important is for readers to see how the code differs between the two client library styles:

<figcaption>Platform client library & API keys vs. Product client library & ADC (OAuth client ID or service account)</figcaption>
Aside from the obvious differences in client library nomenclature and use of different credentials, API usage is fairly similar. **Not** visually obvious is that the older platform client library calls the REST versions of the GCP APIs whereas the newer product client libraries call the [gRPC](https://grpc.io) versions which generally perform better... yet _another_ reason why the product client libraries are always recommended.
## Summary
When it comes to getting up to speed using AI/ML to process text, whether for translation or NLU, you can train your own models or finetune existing open source or proprietary models. But if you're new to AI/ML or don't need more complex features, the getting-started process is accelerated by using APIs backed by single-task pre-trained models. GCP provides the Translation and Natural Language APIs to meet this introductory need. The introductory samples in Python and Node.js are meant to help you get up to speed quickly and give you working code you can experiment with.
If you find any errors or have suggestions on content you'd like to see in future posts, be sure to leave a comment below, and if your organization needs help integrating Google technologies via its APIs, reach out to me by submitting a request at <https://cyberwebconsulting.com>. Lastly, below are links relevant to the content in this post for further exploration.
## References
This post covered quite a bit, so there is a good amount of documentation to link you to:
### Blog post code samples
- [This post only](https://github.com/wescpy/google/cloud/transnatlang)
- [All GCP sample code from posts](https://github.com/wescpy/google/cloud)
- [All featured APIs from posts](https://github.com/wescpy/google)
### GCP/Cloud Natural Language API
- [Home page](https://cloud.google.com/natural-language)
- [Pricing page](http://cloud.google.com/natural-language/pricing)
- [_Product_ client libraries](https://cloud.google.com/natural-language/docs/sentiment-analysis-client-libraries#install_the_client_library)
- [Sentiment analysis](https://cloud.google.com/natural-language/docs/analyzing-sentiment)
- [Content classification](https://cloud.google.com/natural-language/docs/classifying-text)
- [Supported languages](https://cloud.google.com/natural-language/docs/languages)
- [AutoML Natural Language API](https://cloud.google.com/natural-language/automl/docs)
- [Client library supports API keys](https://cloud.google.com/docs/authentication/api-keys#using-with-client-libs)
### GCP/Cloud Translation API and Google Translate
- [Home page](https://cloud.google.com/translate)
- [Pricing page](http://cloud.google.com/translate/pricing)
- [_Product_ client libraries](https://cloud.google.com/translate/docs/setup#installing_client_libraries)
- [Supported languages](https://cloud.google.com/translate/docs/languages)
- [Basic edition](https://cloud.google.com/translate/docs/basic/translate-text-basic)
- [Advanced edition](https://cloud.google.com/translate/docs/advanced/translate-text-advance)
- [Comparing Basic vs. Advanced editions](https://cloud.google.com/translate/docs/editions)
- [AutoML version of the Translation API](https://cloud.google.com/translate/automl/docs/beginners-guide)
- [Google Translate](https://translate.google.com)
### Google AI, Gemini API, and GCP Vertex AI platform
- [Google AI](https://ai.google.dev)
- [GCP Vertex AI](https://cloud.google.com/vertex-ai)
- [Translating text with Vertex AI](https://cloud.google.com/vertex-ai/generative-ai/docs/translate/translate-text)
- [Migrating from AutoML to Vertex AI](https://cloud.google.com/vertex-ai/docs/start/migrating-applications)
- [Finetuning Gemini](https://cloud.google.com/vertex-ai/generative-ai/docs/models/tune-models)
### Google APIs, ADC/credentials, Cloud/DevConsole, etc.
- [Cloud console](http://console.cloud.google.com)
- [DevConsole/API Manager](http://console.developers.google.com)
- [En-/disable APIs](https://console.cloud.google.com/apis/library)
- [Create credentials](https://console.cloud.google.com/apis/credentials)
- [Cloud console Billing page](http://console.cloud.google.com/billing)
- [GCP payment methods based on region/currency](https://support.google.com/paymentscenter/answer/9001356?ref_topic=9023854#allowed-methods)
- [GCP billing and onboarding guide](https://cloud.google.com/billing/docs/onboarding-checklist)
- [GCP "Always Free" tier](https://cloud.google.com/free/docs/free-cloud-features#free-tier-usage-limits)
- [Creating new projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects)
- [Cloud SDK and `gcloud` installation](https://cloud.google.com/sdk/install)
- [GCP ADC setup page](https://cloud.google.com/docs/authentication/provide-credentials-adc)
- [Authentication & credentials guide](https://cloud.google.com/docs/authentication)
- [GCP APIs accepting API keys](http://web.archive.org/web/20200510072541/https://cloud.google.com/docs/authentication/api-keys)
- [_Platform_ Google API client libraries](https://developers.google.com/api-client-library)
### Similar blog content by the author
- Google API credentials: [API keys](https://dev.to/wescpy/series/25404)
- Google API credentials: [OAuth client IDs](https://dev.to/wescpy/series/25403)
- [Generative AI with the Gemini API](https://dev.to/wescpy/series/27183)
- [Image archive sample app](https://cloud.google.com/blog/topics/developers-practitioners/image-archive-analysis-and-report-generation-google-apis) (uses Google Sheets & Drive, Cloud Storage & Vision APIs)
- [Explore GCP serverless platforms with a nebulous app](https://developers.googleblog.com/2021/09/exploring-serverless-with-nebulous-app.html?utm_source=blog&utm_medium=partner&utm_campaign=CDR_wes_aap-serverless_nebserv_sms_201028) (uses GCP serverless platforms & Cloud Translation API; also [video 1](http://youtu.be/gle26fT28Bw) and [video 2](http://youtu.be/eTotLOVR7MQ))
### Technical session videos by the author (all LONG)
- [Easy path to machine learning](https://youtu.be/d49Bq6QryY0) (60+ mins; introduces several GCP AI/ML APIs)
- [Hands-on intro to AI/ML with Python](http://youtu.be/YZPt2wAsYUI) (75+ mins; uses Cloud Vision API and executes its codelab tutorial)
- ["Google APIs 102: GCP vs. non-GCP APIs" video](http://youtu.be/3IQ4Yv80lJg) (90+ mins; covers platform vs. product client libraries, credentials types, tour of various GCP & non-GCP Google APIs)
- [Easy path to machine learning](https://youtu.be/dkx8mzkJDgk) (90+ mins; introduces several GCP AI/ML APIs)
- [GWS APIs, AI/ML APIs, and GCP serverless workshop](http://youtu.be/HFtEypXpoaY) (~4 hours[!]; made up of 3 technical sessions covering each topic)
---
<small>
**WESLEY CHUN**, MSCS, is a [Google Developer Expert](https://developers.google.com/experts) (GDE) in Google Cloud (GCP) & Google Workspace (GWS), author of Prentice Hall's bestselling ["Core Python"](https://corepython.com) series, co-author of ["Python Web Development with Django"](https://withdjango.com), and has written for Linux Journal & CNET. He runs [CyberWeb](https://cyberwebconsulting.com) specializing in GCP & GWS APIs and serverless platforms, [Python & App Engine migrations](https://appenginemigration.com), and Python training & engineering. Wesley was one of the original Yahoo!Mail engineers and spent 13+ years on various Google product teams, speaking on behalf of their APIs, producing sample apps, codelabs, and videos for [serverless migration](http://bit.ly/3xk2Swi) and [GWS developers](http://goo.gl/JpBQ40). He holds degrees in Computer Science, Mathematics, and Music from the University of California, is a Fellow of the Python Software Foundation, and loves to travel to meet developers worldwide at conferences, user group events, and universities. Follow he/him [@wescpy](https://twitter.com/wescpy) & his [technical blog](https://dev.to/wescpy). Find this content useful? [Contact CyberWeb](https://forms.gle/bQiDMiGyGrrwv5sy5) if you may need help or [buy him a coffee (or tea)](http://buymeacoffee.com/wescpy)!
</small>
| wescpy |
1,899,167 | Optimizing Business Operations with Call Center Services | Call center services are indispensable for modern businesses seeking to streamline operations and... | 0 | 2024-06-24T16:52:01 | https://dev.to/ijtecjnologies8977/optimizing-business-operations-with-call-center-services-40c0 | Call center services are indispensable for modern businesses seeking to streamline operations and elevate customer satisfaction. These services encompass both inbound and [outbound](https://ijtechnologz.com/outbound-service/) capabilities, offering crucial support for customer inquiries, sales, technical assistance, and more. Integrating call center services into business strategies is vital for maintaining competitiveness and fostering growth, often leveraging [Business Process Outsourcing (BPO)](https://ijtechnologz.com/) for enhanced efficiency.
1. Streamline Customer Support
Inbound call center services streamline customer support processes by promptly addressing inquiries, resolving issues, and providing information. This efficient handling of customer interactions ensures high levels of satisfaction and retention, facilitated by BPO partners who specialize in optimizing customer service operations.
2. Drive Sales and Upselling
Call centers play a pivotal role in driving sales through proactive outbound campaigns. BPO-managed agents initiate contact with potential customers, promote products or services, and capitalize on upselling opportunities, thereby maximizing revenue generation while benefiting from BPO expertise in sales strategies.
3. Enhance Technical Support
Businesses leverage call centers, often through BPO arrangements, to deliver reliable technical support, troubleshooting, and guidance to customers. This proactive approach helps in resolving issues promptly, minimizing downtime, and ensuring seamless operations with the backing of BPO's technical expertise.
4. Conduct Customer Surveys and Feedback
Call centers managed by BPO providers facilitate direct communication channels for conducting customer surveys and gathering feedback. This valuable data provides insights into customer satisfaction levels, preferences, and areas for improvement, guiding strategic decisions and enhancing service offerings based on BPO's analytical capabilities.
5. Utilize Multichannel Support
Modern call centers, including those managed by BPO firms, offer multichannel support encompassing phone calls, emails, live chat, and social media interactions. This flexibility enables businesses to cater to diverse customer preferences and ensure consistent, responsive service across various platforms, leveraging BPO's technological infrastructure.
6. Leverage Analytics for Performance Optimization
Analytics-driven insights from call center operations, often enhanced by BPO analytics capabilities, enable businesses to optimize performance metrics such as call handling times, customer satisfaction rates, and agent productivity. This data-driven approach supports continuous improvement and operational efficiency under the guidance of experienced BPO analysts.
7. Ensure Compliance and Security
BPO-managed call centers adhere to stringent data protection regulations and security protocols to safeguard customer information. Compliance with standards such as GDPR and PCI-DSS ensures trustworthiness and mitigates risks associated with data breaches, supported by BPO's robust compliance frameworks.
8. Scalability and Flexibility
Scalable call center solutions, often facilitated through BPO partnerships, provide businesses with the flexibility to adjust resources based on seasonal demands or growth phases. This scalability optimizes cost management and ensures seamless customer service delivery, leveraging BPO's resources and scalability models.
Conclusion
Integrating call center services into business operations, particularly through BPO partnerships, is instrumental in enhancing customer interactions, optimizing efficiency, and driving sustainable growth. By leveraging the capabilities of BPO-managed call centers, businesses can effectively manage [customer support](https://ijtechnologz.com/customer-service/), boost sales efforts, and maintain a competitive edge in dynamic market environments. Embracing these services empowers organizations to meet evolving customer expectations and achieve long-term success, supported by the strategic advantages of BPO expertise and operational efficiency.
| ijtecjnologies8977 | |
1,899,148 | Remove unwanted partition data in Azure Synapse (SQL DW) | Introduction to Partition Switching? Azure Synapse Dedicated SQL pool or SQL Server or... | 0 | 2024-06-24T16:48:21 | https://dev.to/ayush9892/remove-unwanted-partition-data-in-azure-synapse-sql-dw-5clp | dataengineering, sqlserver, azure | ## Introduction to Partition Switching?
Azure Synapse Dedicated SQL pool, SQL Server, and Azure SQL Database allow you to create partitions on a target table. Table partitions enable you to divide your data into multiple chunks or partitions, which improves query performance by letting queries skip partitions they don't need. In most cases, partitions are built on a date column.
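For example, a query that filters on the partitioning column lets the engine scan only the matching partition(s). The sketch below uses the `FactOnlineSales` table and `DateKey` column from the example later in this post:

```sql
-- Partition elimination: only the partition(s) covering 2007 are scanned
SELECT COUNT(*) AS sales_2007
FROM [cso].[FactOnlineSales]
WHERE [DateKey] >= '2007-01-01' AND [DateKey] < '2008-01-01';
```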
---
## Why don't we simply drop the unwanted Partition?
We don't simply drop the unwanted partition for several reasons:
- **Clustered Columnstore Index**: Dropping a partition directly can potentially lead to performance degradation, especially with a CCI. This is because CCIs are optimized for data locality and dropping a partition disrupts that organization. Rebuilding the CCI after dropping the partition would be required, which can be time-consuming for a large table.
- **Transaction Safety**: Directly dropping a partition might not be a transactional operation. This means if the drop operation fails midway, the partition might be left in an inconsistent state, potentially causing data integrity issues.
---
## Requirement to apply Partition Switching
- The definitions of source and target tables are the same.
---
## Steps for Partition Switching in Synapse SQL Pool:
### Step 1 (Optional) -> Create a credential
Skip this step if you're loading the Contoso public data.
Don't skip this step if you're loading your own data. To access data through a credential, use the following script to create a database-scoped credential. Then use it when defining the location of the data source.
```
CREATE MASTER KEY;
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
IDENTITY = 'Managed Identity',
SECRET = 'https://rnd-learning.vault.azure.net/secrets/synapselearningadls-accesskey/d94c967cb0c5452eafaf5d207afcb86a'
;
CREATE EXTERNAL DATA SOURCE AzureStorage
WITH (
TYPE = HADOOP,
LOCATION = 'wasbs://<blob_container_name>@<azure_storage_account_name>.blob.core.windows.net',
CREDENTIAL = AzureStorageCredential
);
```
> - _MASTER KEY_ is required to encrypt the credential secret in the next step.
> - _IDENTITY_ refers to the type of authentication you're using. Here I am using Managed Identity, which allows the Azure Synapse workspace to securely connect to and authenticate with Azure Key Vault without embedding any credentials directly in the code.
> - _TYPE_ is HADOOP because PolyBase uses Hadoop APIs to access data in Azure blob storage.
### Step 2 -> Create the external data source
Use this command to store the location of the data, and the data type.
```
CREATE EXTERNAL DATA SOURCE AzureStorage_west_public
WITH (
TYPE = Hadoop,
LOCATION = 'wasbs://contosoretaildw-tables@contosoretaildw.blob.core.windows.net/'
);
```
### Step 3 -> Configure the data format
The data is stored in text files in Azure blob storage, and each field is separated with a delimiter. Use this command to specify the format of the data in the text files. The Contoso data is uncompressed, and pipe delimited.
```
CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH (
FORMAT_TYPE = DELIMITEDTEXT,
FORMAT_OPTIONS (
FIELD_TERMINATOR = '|',
STRING_DELIMITER = '',
DATE_FORMAT = 'yyyy-MM-dd HH:mm:ss.fff',
USE_TYPE_DEFAULT = FALSE
)
);
```
### Step 4 -> Create the schema for the external tables
To create a place to store the Contoso data in your database, create a schema.
```
CREATE SCHEMA [asb]
GO
```
### Step 5 -> Create the external tables
Run the following script to create the FactOnlineSales external tables. All you're doing here is defining column names and data types, and binding them to the location and format of the Azure blob storage files. The definition is stored in the data warehouse and the data is still in the Azure Storage Blob.
```
CREATE EXTERNAL TABLE [asb].FactOnlineSales (
[OnlineSalesKey] [int] NOT NULL,
[DateKey] [datetime] NOT NULL,
[StoreKey] [int] NOT NULL,
[ProductKey] [int] NOT NULL,
[PromotionKey] [int] NOT NULL,
[CurrencyKey] [int] NOT NULL,
[CustomerKey] [int] NOT NULL,
[SalesOrderNumber] [nvarchar](20) NOT NULL,
[SalesOrderLineNumber] [int] NULL,
[SalesQuantity] [int] NOT NULL,
[SalesAmount] [money] NOT NULL,
[ReturnQuantity] [int] NOT NULL,
[ReturnAmount] [money] NULL,
[DiscountQuantity] [int] NULL,
[DiscountAmount] [money] NULL,
[TotalCost] [money] NOT NULL,
[UnitCost] [money] NULL,
[UnitPrice] [money] NULL,
[ETLLoadID] [int] NULL,
[LoadDate] [datetime] NULL,
[UpdateDate] [datetime] NULL
) WITH (
LOCATION='/FactOnlineSales/',
DATA_SOURCE = AzureStorage_west_public,
FILE_FORMAT = TextFileFormat,
REJECT_TYPE = VALUE,
REJECT_VALUE = 0
);
```
### Step 6 -> Load the data
There are different ways to access external data. You can query data directly from the external tables, load the data into new tables in the data warehouse, or add external data to existing data warehouse tables.
#### Step 6.1 -> Create a new schema
```
CREATE SCHEMA [cso]
GO
```
#### Step 6.2 -> Load the data into new tables
To load data from Azure blob storage into the data warehouse table, use the CREATE TABLE AS SELECT (Transact-SQL) statement.
CTAS creates a new table and populates it with the results of a select statement. CTAS defines the new table to have the same columns and data types as the results of the select statement. If you select all the columns from an external table, the new table will be a replica of the columns and data types in the external table.
```
CREATE TABLE [cso].[FactOnlineSales]
WITH (
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = HASH([ProductKey]),
PARTITION (
[DateKey] RANGE RIGHT FOR VALUES (
'2007-01-01 00:00:00.000','2008-01-01 00:00:00.000',
'2009-01-01 00:00:00.000','2010-01-01 00:00:00.000'
)
)
)
AS
SELECT * FROM [asb].FactOnlineSales;
```
With this statement I have created 5 partitions in the [cso].[FactOnlineSales] table. Each covers one year, except the first, which contains all rows with DateKey < 2007-01-01, and the last, which contains all rows with DateKey ≥ 2010-01-01.

### Step 7 -> Create an empty partition table
Now do the same thing for the target table. Here I create an empty table that will receive the switched-out partition from the source table.
```
CREATE TABLE [cso].[FactOnlineSales_out]
WITH (
CLUSTERED COLUMNSTORE INDEX,
DISTRIBUTION = HASH([ProductKey]),
PARTITION (
[DateKey] RANGE RIGHT FOR VALUES (
'2007-01-01 00:00:00.000'
)
)
)
AS SELECT * FROM [cso].[FactOnlineSales] WHERE 1 = 2;
```

> _NOTE:_ If you are switching out a partition (that is, removing its data), you can switch the data out to any table, regardless of whether that table is partitioned. But if you are switching in a partition (replacing it with new data), there is a strict requirement: the source and target must have the same partition boundaries defined.
### Step 8 -> Switch the Partition
Here I switch out the partition. After the switch, [cso].[FactOnlineSales_out] holds the data from January 1, 2007 through December 31, 2007, while partition 2 of [cso].[FactOnlineSales] is empty.
```
ALTER TABLE [cso].[FactOnlineSales]
SWITCH PARTITION 2
TO [cso].[FactOnlineSales_out] PARTITION 2;
```
> _NOTE:_ The command is very simple, but there is one catch: it requires the partition numbers of the source and target tables to perform the switch.
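One way to find those partition numbers is to query the partition metadata. The sketch below uses SQL Server's standard catalog views; dedicated SQL pool exposes a subset of these, so verify the view names against the documentation for your environment. With `RANGE RIGHT`, boundary value _N_ is the inclusive lower bound of partition _N+1_, which is why the 2007 data sits in partition 2.

```sql
-- List each partition of FactOnlineSales with its lower boundary value
SELECT p.partition_number,
       prv.value AS lower_boundary
FROM sys.tables t
JOIN sys.indexes i
  ON t.object_id = i.object_id AND i.index_id <= 1
JOIN sys.partitions p
  ON i.object_id = p.object_id AND i.index_id = p.index_id
LEFT JOIN sys.partition_schemes ps
  ON i.data_space_id = ps.data_space_id
LEFT JOIN sys.partition_range_values prv
  ON ps.function_id = prv.function_id
 AND prv.boundary_id = p.partition_number - 1
WHERE t.name = 'FactOnlineSales'
ORDER BY p.partition_number;
```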
Validating the partition switch for both the source and target tables.


### Step 9 -> Delete the staging table
Based on your requirements, either delete this table or archive its data as cold data.
`DROP TABLE [cso].[FactOnlineSales_out];`
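For completeness, the reverse operation, switching _in_ refreshed data, follows the same pattern. But per the note above, the staging table must declare the exact same partition boundaries as the target, and the target partition must be empty before the switch. A hedged sketch (the `FactOnlineSales_in` table name is hypothetical):

```sql
-- Staging table with IDENTICAL boundaries to the target (required for switch-in)
CREATE TABLE [cso].[FactOnlineSales_in]
WITH (
    CLUSTERED COLUMNSTORE INDEX,
    DISTRIBUTION = HASH([ProductKey]),
    PARTITION (
        [DateKey] RANGE RIGHT FOR VALUES (
            '2007-01-01 00:00:00.000','2008-01-01 00:00:00.000',
            '2009-01-01 00:00:00.000','2010-01-01 00:00:00.000'
        )
    )
)
AS SELECT * FROM [cso].[FactOnlineSales] WHERE 1 = 2;

-- ... load the corrected 2007 rows into [cso].[FactOnlineSales_in] here ...

-- Target partition 2 must be empty, then:
ALTER TABLE [cso].[FactOnlineSales_in]
SWITCH PARTITION 2
TO [cso].[FactOnlineSales] PARTITION 2;
```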
---
## What happens during the Partition Switch?
**Before the Switch:**
1. Imagine the data for FactOnlineSales is physically stored on disk, potentially spread across multiple files.
2. Each partition in FactOnlineSales has its own metadata entry that keeps track of the specific location(s) of its data on disk.
**During the Switch (using partition X as the example):**
1. You identify partition X (containing old data) in FactOnlineSales.
2. The ALTER TABLE SWITCH statement updates the metadata entries for both tables:
- In FactOnlineSales, the metadata entry for partition X is modified to point to an empty location on disk. This essentially signifies that partition X is now "empty" within FactOnlineSales.
- In FactOnlineSales_out, a new metadata entry is created for partition X. This new entry points to the same physical location on disk where the data for partition X already resides (remember, the data itself doesn't move).
**After the Switch:**
Both FactOnlineSales and FactOnlineSales_out have metadata entries for partition X. However, these entries point to different things:
- The FactOnlineSales entry points to an empty location, indicating the partition is no longer actively used within that table.
- The FactOnlineSales_out entry points to the actual data location, so FactOnlineSales_out now "owns" that partition's data.
---
## How to check or verify the number of partitions?
Dedicated SQL pool provides a set of system views that can be used to query metadata for all the objects in the pool. One of the views that provides partition information, including the number of rows in each partition, is `sys.dm_pdw_nodes_db_partition_stats`.
Use this script to check the number of partitions.
```
SELECT pnp.partition_number, sum(nps.[row_count]) AS Row_Count
FROM
sys.tables t
INNER JOIN sys.indexes i
ON t.[object_id] = i.[object_id]
AND i.[index_id] <= 1 /* HEAP = 0, CLUSTERED or CLUSTERED_COLUMNSTORE =1 */
INNER JOIN sys.pdw_table_mappings tm
ON t.[object_id] = tm.[object_id]
INNER JOIN sys.pdw_nodes_tables nt
ON tm.[physical_name] = nt.[name]
INNER JOIN sys.pdw_nodes_partitions pnp
ON nt.[object_id]=pnp.[object_id]
AND nt.[pdw_node_id]=pnp.[pdw_node_id]
AND nt.[distribution_id] = pnp.[distribution_id]
INNER JOIN sys.dm_pdw_nodes_db_partition_stats nps
ON nt.[object_id] = nps.[object_id]
AND nt.[pdw_node_id] = nps.[pdw_node_id]
AND nt.[distribution_id] = nps.[distribution_id]
AND pnp.[partition_id]=nps.[partition_id]
WHERE t.name='FactOnlineSales'
GROUP BY pnp.partition_number;
```
| ayush9892 |
1,899,165 | Aikido’s 2024 SaaS CTO Security Checklist | SaaS companies have a huge target painted on their backs when it comes to security. Aikido's 2024 SaaS CTO Security Checklist gives you over 40 items to enhance security 💪 Download it now and make your company and code 10x more secure. #cybersecurity #SaaSCTO #securitychecklist | 0 | 2024-06-24T16:45:13 | http://www.aikido.dev/blog/saas-cto-security-checklist | SaaS companies have a huge target painted on their backs when it comes to security, and that’s something that keeps their CTOs awake at night. The Cloud Security Alliance released its [State of SaaS Security: 2023 Survey Report](https://cloudsecurityalliance.org/artifacts/state-of-saas-security-2023-survey-report/) earlier this year and discovered that “55% of organizations report that they experienced an incident in the past two years”.
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_39169/image_b27c537ce0e805406ba416d08263148a.png" alt="Chart showing percentage of SaaS application security incidents from the Cloud Security Alliance State of SaaS Security: 2023 Survey Report"/><figcaption>_Chart from the Cloud Security Alliance State of SaaS Security: 2023 Survey Report_</figcaption></figure>
The importance of security is backed up by the results from Aikido’s recent [consultation with 15 SaaS CTOs](https://www.aikido.dev/blog/cloud-code-security-cto-consultation), in which “93% of CTOs ranked threat prevention importance 7 (out of 10) or higher.”
To help SaaS CTOs sleep better, we’ve created a comprehensive SaaS CTO Security Checklist. We’re confident that, if you follow it, _and keep going back to it_, you will make **both your company and application 10x more secure**.
## Real risks for SaaS companies
CI/CD tools like GitHub Actions and CircleCI are prime hacker targets. Their frequent breaches [grant access to clouds and lead to data exposure](https://www.aikido.dev/blog/prevent-fallout-when-cicd-platform-hacked). A 2023 CircleCI breach compromised customer secrets, while a 2022 GitHub Actions exploit hit open source projects.
A startup's entire AWS environment was [compromised via a basic contact form on their site](https://www.aikido.dev/blog/how-a-startups-cloud-got-taken-over-by-a-simple-form-that-sends-an-email). How? The form allowed SSRF attacks, granting access to IAM keys which were then emailed out. The attacker gained control of S3 buckets and environment variables.
These security breaches happened to real companies and had real effects. But they could have been prevented if they had invested more time and effort into improving their security practices.
## SaaS CTO Security Checklist: 40+ items to guide you
Our deceptively simple checklist covers over 40 ways to harden security across your people, processes, code, infrastructure, and more. It's organized by business growth stage - **bootstrap, startup, and scaleup** - so you can find the security best practices relevant to your current phase. As you grow, our checklist will become your trusted guide and constant companion on the journey to security best practices for your SaaS company.
Each item on the list is designed to make you and your team think about security in the first place, then give you clear, concise instructions on what you can do to deal with the vulnerability. And each item is tagged so that you can be sure it applies to your company’s current stage.
The checklist is also divided into sections so that you can consider the needs of different parts of your company. Your employees are vulnerable to different threats than your code or your infrastructure, so it makes sense to look at them separately.
As you go through the list, you’ll undoubtedly find that some items don’t apply to you yet. But we recommend that you revisit the checklist regularly so that you don’t encounter any nasty surprises. Security doesn’t have to be scary, as long as you act to become more secure _before_ something bad happens.
We’ve cherry-picked a few items to give you a sneak peek at the checklist. The final checklist contains over 40, so make sure you download your copy and get started on improving your security today.
## Back up, then back up again
The first applies to all stages of company growth, and it’s absolutely vital. But then again, we’re sure you already back up regularly, right? Right?!
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_39169/frame-36_97e3110e676f826d226f155ff1264ce1.png" alt="Image of SaaS CTO Security Checklist item: Back up, then back up again"/></figure>
## Hire an external penetration testing team
Our next item is crucial for companies that are starting to scale up. Growth is going well, you’ve dealt with all the issues that are risks on the way up, but are you sure that your infrastructure is secure at all levels? That’s when it’s time to [hire a penetration testing team](https://get.aikido.dev/pentesting-for-saas-companies-aikido-security)!
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_39169/frame-42-1_f3d495bceb88c574158342328e8bc759.png" alt="Image of SaaS CTO Security Checklist item: Hire an external penetration testing team"/></figure>
## Update your OS and Docker containers
This one is straightforward, but many developers cut corners here. Updating eats up sprint time while other tasks seem more urgent. But skipping updates leaves vital systems exposed to vulnerabilities. Stay diligent with patching and updating to avoid major headaches down the road.
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_39169/frame-39_ccfe5a1da40b4460f31fe5cb57f485b8.png" alt="Image of SaaS CTO Security Checklist item: Update your OS and Docker containers"/></figure>
## Get everyone accustomed to basic security practices
The last item is relevant at all stages and it’s part and parcel of our checklist: the need to get everyone accustomed to basic security practices. Humans make mistakes. It’s inevitable. But if you get everyone thinking about security, those mistakes can be mitigated.
<figure><img src="https://d37oebn0w9ir6a.cloudfront.net/account_39169/frame-36-1_f64dd7faf3758f3f21f9884ae8eb9231.png" alt="Image of SaaS CTO Security Checklist item: Get everyone accustomed to basic security practices"/></figure>
## Download your free SaaS CTO Security Checklist
That’s just a handful of the essential tips covered in the checklist. We’ll also give you guidance on code reviews, onboarding and offboarding, DDoS attacks, database recovery plans, and much more.
Download Aikido’s 2024 SaaS CTO Security Checklist now and get started on hardening your app and getting your team thinking seriously about security. It’s never too late, or too early, no matter what stage your company is at.
**Download the full SaaS Security Checklist:**
[https://share-eu1.hsforms.com/1HXOAHoTQQxGEKFLvZx7stAfiwyg](https://share-eu1.hsforms.com/1HXOAHoTQQxGEKFLvZx7stAfiwyg) | flxg | |
1,899,164 | DreamMachineAI - free luma ai video generator | Dream Machine AI is an exciting new AI-powered video generation tool that's transforming the way we... | 0 | 2024-06-24T16:36:37 | https://dev.to/runningdogg/dreammachineai-free-luma-ai-video-generator-3d0h | luma, lumaai, dreammachineai |
Dream Machine AI is an exciting new AI-powered video generation tool that's transforming the way we create video content. This powerful platform leverages advanced artificial intelligence technology to turn simple text prompts or images into high-quality, realistic videos.
## What is Dream Machine AI?
[Dream Machine AI](https://dream-machine-ai.com/) is a video generation platform based on Luma AI technology. It uses advanced transformer models to quickly produce high-quality, realistic videos from text and images[1]. The platform is specifically designed to handle complex spatiotemporal motions, ensuring that the generated video content is physically accurate and consistent.

## How It Works
Creating videos with Dream Machine AI is remarkably simple:
1. Input a text prompt or upload an image
2. Wait for 3-4 minutes
3. Download your generated video
The entire process is quick and efficient, allowing users to easily transform their ideas into engaging video content[1].

## Key Features
Dream Machine AI offers an impressive array of features:
- High-definition video output comparable to OpenAI's Sora
- Incredibly fast generation - 120 frames in just 120 seconds
- Realistic and consistent motion with accurate physics
- Excellent character consistency
- Smooth and natural camera movements[1][2]
## Pricing and Availability
Dream Machine AI currently offers new users 1 free credit to experience this revolutionary technology. Due to high traffic, the paid plan is priced at $9.99 for 250 generations[1].
Good news for those looking to try Dream Machine AI: we've discovered a promo code "**DREAM**" that can save you 15% on your purchase. Don't miss this opportunity to explore AI video creation with Dream Machine AI!

Dream Machine AI represents a significant leap forward in AI-powered video generation, offering users the ability to create high-quality, realistic videos with unprecedented ease and speed. Whether you're a content creator, marketer, or simply curious about AI technology, Dream Machine AI opens up exciting new possibilities in video creation.
| runningdogg |
1,899,163 | The Best Foot and Ankle Care Near Me: What Sets Top Clinics Apart | When it comes to maintaining mobility and overall well-being, finding the best foot and ankle care is... | 0 | 2024-06-24T16:36:28 | https://dev.to/mikecartell710/the-best-foot-and-ankle-care-near-me-what-sets-top-clinics-apart-17i5 | When it comes to maintaining mobility and overall well-being, finding the best foot and ankle care is crucial. Whether you're dealing with a minor issue or a complex condition, choosing the right clinic can make all the difference. Let's explore what sets top foot and ankle clinics apart and why their advanced care is indispensable for your health.
**Comprehensive and Advanced Foot and Ankle Care**
Top clinics offer comprehensive and **[advanced foot and ankle care](https://benenatifootcare.com/)**. They provide a range of services from routine check-ups to specialized treatments. Utilizing state-of-the-art technology and the latest medical techniques, these clinics ensure that patients receive accurate diagnoses and effective treatments. For those suffering from chronic conditions like plantar fasciitis or arthritis, advanced care means personalized treatment plans that address specific needs, improving overall outcomes.
**Family Foot and Ankle Care**
A key feature of leading clinics is their focus on family foot and ankle care. They cater to patients of all ages, from children to the elderly, ensuring that every family member receives the best possible treatment. This inclusive approach is beneficial because it allows for the management of hereditary foot and ankle conditions and provides a consistent medical history, making treatment more effective.
**Specialized Foot and Ankle Surgery**
In situations where surgery is necessary, having access to top-notch foot and ankle surgery in Macomb is essential. Leading clinics are equipped with experienced surgeons who specialize in various procedures, from bunion removal to complex reconstructive surgeries. These surgeons stay abreast of the latest surgical advancements, which results in minimally invasive techniques, reduced recovery times, and better overall patient outcomes.
**Expertise of Local Podiatrists**
The expertise of a local podiatrist near me cannot be overstated. These professionals bring a wealth of knowledge about common and rare foot conditions, and their proximity makes regular visits convenient. A local podiatrist near me offers the advantage of personalized care, understanding the unique needs of the community and being readily available for follow-ups and emergencies.
**Trusted Foot Doctors in St. Clair Shores, MI**
In regions like St. Clair Shores, MI, having a trusted foot doctor is invaluable. These specialists are well-versed in diagnosing and treating a wide range of foot and ankle issues, from ingrown toenails to diabetic foot care. Their deep connection to the community and commitment to patient well-being make them a reliable choice for those seeking the best foot and ankle care near them.
**Addressing Sports Injuries**
For athletes, visiting a sports injury clinic is often a necessity. Top clinics understand the specific demands of sports-related injuries and offer tailored treatments that help athletes return to their activities as quickly and safely as possible. Whether it’s a sprained ankle, Achilles tendonitis, or stress fractures, specialized care ensures optimal recovery and performance.
**Conclusion**
Finding the **[best foot and ankle care near me](https://benenatifootcare.com/podiatric-services/)** involves considering various factors such as comprehensive services, family-oriented care, specialized surgery, local expertise, and trusted professionals. Advanced foot and ankle care provided by these top clinics ensures that every patient, from children to athletes to the elderly, receives tailored and effective treatments. By choosing a reputable clinic, you can ensure that your foot and ankle health is in the best hands, paving the way for improved mobility and quality of life. | mikecartell710 |
1,899,162 | Meme Monday | No One Can Get Your Sales Pipeline Like You Do! Source | 0 | 2024-06-24T16:36:28 | https://dev.to/td_inc/meme-monday-nf7 | sales, meme | **No One Can Get Your Sales Pipeline Like You Do!**

[Source](https://imgflip.com/i/873wb1) | td_inc |
1,899,161 | Debugging React Native with Reactotron: A Step-by-Step Guide | Debugging a React Native application can sometimes feel like walking through a maze. But what if... | 0 | 2024-06-24T16:36:22 | https://dev.to/rohanrajgautam/debugging-react-native-with-reactotron-a-step-by-step-guide-2f02 | reactnative, debug, reactotron | Debugging a React Native application can sometimes feel like walking through a maze. But what if there was a tool that could streamline the process and give you real-time insights into your app’s behavior? Enter Reactotron—a powerful desktop application that edits React Native apps. In this blog, I’ll walk you through the steps to integrate Reactotron with React Native and make the most of its powerful debugging features.
## What is Reactotron?
Reactotron is a desktop app by Infinite Red that provides a suite of tools for inspecting, logging, and interacting with your React Native application. It supports real-time logging, state inspection, API request monitoring, and more. Reactotron is like having a supercharged console.log at your fingertips but with way more capabilities.
## Why Use Reactotron?
**Real-time event tracking**: Monitor actions, state changes, and API calls as they happen.
**Performance monitoring**: Track how long actions and renders take.
**State management**: Inspect and modify your app's state.
**Easy integration**: Simple setup process with minimal configuration.
## Step-by-Step Integration
Let's dive into the step-by-step process of integrating Reactotron into your React Native project.
**Step 1: Install Reactotron React Native**
First, you'll need to add Reactotron to your project. Open your terminal and navigate to your React Native project's root directory. Then, run the following command to install Reactotron and its React Native integration:
```bash
yarn add reactotron-react-native -D
```
Or if you prefer npm:
```bash
npm i --save-dev reactotron-react-native
```
**Step 2: Configure Reactotron**
Next, you'll need to configure Reactotron in your project. Create a new file named `ReactotronConfig.js` in your project's _root_ directory. Add the following code to set up Reactotron:
```javascript
import Reactotron from "reactotron-react-native";

Reactotron.configure() // controls connection & communication settings
  .useReactNative() // add all built-in react native plugins
  .connect(); // let's connect!
```
Or, if you're using AsyncStorage, connect it too:
```javascript
import Reactotron from 'reactotron-react-native';
import AsyncStorage from '@react-native-async-storage/async-storage';

Reactotron.setAsyncStorageHandler(AsyncStorage)
  .configure({
    name: 'Tirios',
  })
  .useReactNative({
    asyncStorage: false, // there are more options to the async storage
    networking: {
      // optionally, you can turn it off with false.
      ignoreUrls: /symbolicate/,
    },
    editor: false, // there are more options to editor
    errors: { veto: stackFrame => false }, // or turn it off with false
    overlay: false, // just turning off overlay
  })
  .connect();
```
**Step 3: Integrate Reactotron Configuration**
You must ensure the Reactotron configuration is imported and initialized when your app starts. Open your `index.js` or `App.js` (if you're using Expo) file and import `ReactotronConfig.js` at the very top:
```javascript
if (__DEV__) {
  require('./ReactotronConfig');
}
```
This will initialize Reactotron when your app starts.
**Step 4: Start Reactotron**
Download and install the Reactotron desktop application from the [Reactotron releases page](https://github.com/infinitered/reactotron/releases?q=reactotron-app&expanded=true). Open the app, and you'll see a dashboard ready to connect to your React Native app.
**Step 5: Use Reactotron in Your Project**
Now that Reactotron is configured, you can start using it in your project.
Refresh your app (or start it with `react-native start`) and take a look at Reactotron. Do you see the `CONNECTION` line? Click it to expand.

## Troubleshooting
**Android:** If you are using an Android device or an emulator run the following command to make sure it can connect to Reactotron:
```bash
adb reverse tcp:9090 tcp:9090
```
## Conclusion
Integrating Reactotron into your React Native project is a straightforward process, and the benefits it brings to your development workflow are immense. So, give it a try, and take your React Native debugging to the next level!
Happy debugging! | rohanrajgautam |
1,899,341 | Free Data Analysis Masterclass From Hashtag Treinamentos | The Data Analysis Masterclass is an opportunity to learn how to use data analysis... | 0 | 2024-06-28T13:38:19 | https://guiadeti.com.br/masterclass-analise-de-dados-gratuita/ | cursogratuito, analisededados, cursosgratuitos, dados | ---
title: Free Data Analysis Masterclass From Hashtag Treinamentos
published: true
date: 2024-06-24 16:33:43 UTC
tags: CursoGratuito,analisededados,cursosgratuitos,dados
canonical_url: https://guiadeti.com.br/masterclass-analise-de-dados-gratuita/
---
The Data Analysis Masterclass is an opportunity to learn how to use data analysis to stand out professionally.
Offered by Hashtag Treinamentos, this masterclass will teach you how to become the standout employee at your company, the best candidate in hiring processes, and one of the most valued and best-paid professionals in the job market.
The training will be led by Alon, a partner and Data Analysis specialist at Hashtag Treinamentos, who brings his extensive experience to deliver deep, practical learning. Take this chance to transform your career and reach new levels of success!
## { MASTERCLASS } DATA ANALYSIS
This event is offered by Hashtag Treinamentos and will be taught by Alon, one of the company's partners and its Data Analysis specialist.

_Image from the course page_
### Event Details
- Date: June 26
- Time: 7:30 PM
### What the Event Will Teach You
- Stand out at your company: learn data analysis techniques that will make you the standout employee, able to provide valuable insights and make informed decisions.
- Stand out in hiring processes: discover how data analysis can make you the best candidate in any selection process by demonstrating essential skills and advanced knowledge.
- Become one of the most valued and best-paid professionals in the market: master data analysis and position yourself among the most valued and best-remunerated professionals in the job market.
### Instructor
The Data Analysis MasterClass is organized by Alon, a partner and Data Analysis specialist at Hashtag Treinamentos.
He is the creator of the Método Impressionador, a methodology that teaches ordinary people how to grow their careers and stand out at work by using data analysis.
Having directly helped more than 1,500 students, Alon has extensive experience in preparing professionals to excel in their careers through data analysis.
### Event Syllabus
- What data analysis is in practice;
- Why data analysis is one of the most valued skills in the job market;
- The 6 steps of the data analysis process, with practical cases;
- How YOU can stand out with data analysis, regardless of your current background or field.
Take this opportunity to sharpen your skills and transform your career with the { MASTERCLASS } in Data Analysis.
## Data Analysis
Data analysis is the process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, drawing conclusions, and supporting decision-making.
In an increasingly data-driven world, the ability to analyze information effectively is essential for companies that want to stay competitive and innovative.
### Challenges in Data Analysis
#### Data Quality
One of the biggest challenges in analysis is ensuring the quality of the data used. Inaccurate, incomplete, or outdated data can lead to wrong conclusions and misguided decisions. Guaranteeing data integrity and accuracy is fundamental to effective analysis.
#### Data Volume
With the exponential growth in data generation, analysts frequently face the challenge of handling large volumes of information. Managing, storing, and processing this data efficiently requires advanced tools and techniques.
#### Data Complexity
Data can come from multiple sources and in different formats, making integration and analysis a complex process.
Analyzing data in a coherent and meaningful way requires advanced skills and a deep understanding of data analysis techniques.
#### Privacy and Security
Protecting data privacy and keeping information secure is a constant challenge.
Companies must comply with strict regulations and implement security measures to protect data against unauthorized access and breaches.
### Tools for Data Analysis
#### Microsoft Power BI
Power BI is Microsoft's data visualization and interactive reporting tool. It is widely used to turn raw data into visual insights that can be easily understood and shared.
Its main features include interactive dashboards, detailed reports, and the ability to connect to many different data sources.
#### Tableau
Tableau is a data visualization tool that helps turn data into interactive, visually appealing dashboards and reports.
It is known for its ease of use and its ability to handle large volumes of data. Tableau integrates with a variety of data sources and offers advanced analysis features.
#### Qlik Sense
Qlik Sense is a data analytics platform that makes exploring and visualizing data intuitive. Its unique associative engine lets users make discoveries and surface insights by connecting data from multiple sources without the need for complex programming.
#### Apache Hadoop
Hadoop is an open-source framework used to process and store large data sets.
It enables the analysis of large volumes of data distributed across clusters of computers, offering a scalable and efficient solution for companies dealing with big data.
#### Python and R
Python and R are programming languages widely used in data analysis. Python is known for its simplicity and its vast ecosystem of packages such as Pandas, NumPy, and Matplotlib, which make analyzing and visualizing data easier.
R, on the other hand, is popular for statistics and data modeling, offering a wide range of packages for statistical analysis.
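The kind of grouped aggregation these libraries streamline can be sketched in a few lines of plain Python. The snippet below uses only the standard library and an invented toy dataset; in Pandas the same computation would be a one-line `groupby`:

```python
# Toy dataset (invented for illustration): compute average sales per region.
import statistics
from collections import defaultdict

rows = [
    {"region": "north", "sales": 120.0},
    {"region": "north", "sales": 80.0},
    {"region": "south", "sales": 200.0},
]

# Group the sales figures by region...
by_region = defaultdict(list)
for row in rows:
    by_region[row["region"]].append(row["sales"])

# ...then reduce each group to its mean.
averages = {region: statistics.mean(values) for region, values in by_region.items()}
print(averages)  # {'north': 100.0, 'south': 200.0}
```

Real-world datasets arrive from CSV files or databases, which is where Pandas' readers and vectorized operations pay off over hand-rolled loops like this one.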
## Hashtag Treinamentos
Hashtag Treinamentos is an educational institution focused on offering high-quality courses in key technology areas, with an emphasis on data analysis.
Founded with the goal of preparing professionals for the demands of the modern market, Hashtag Treinamentos takes a practical, results-oriented approach.
### Teaching Methodology
Hashtag Treinamentos' teaching methodology is centered on practical, applied learning.
Courses are structured to simulate real workplace situations, allowing students to develop practical skills through real projects.
### Impact and Recognition
Hashtag Treinamentos has had a significant impact on the job market, training more than 1,500 students who now stand out in their respective fields.
The institution is widely recognized for its contribution to developing highly qualified data analysis professionals, making them more valued and better paid in the market.
With a solid reputation and an ongoing commitment to educational excellence, Hashtag Treinamentos remains a preferred choice for those seeking to advance their careers and master essential technology skills.
## Registration link ⬇️
[Registration for the { MASTERCLASS } DATA ANALYSIS](https://lp.hashtagtreinamentos.com/analise-dados/aula/inscricao) must be completed on the Hashtag Treinamentos website.
## Share this Hashtag Treinamentos opportunity and help others stand out!
Did you like this post about the free masterclass? Then share it with everyone!
The post [Free Data Analysis Masterclass From Hashtag Treinamentos](https://guiadeti.com.br/masterclass-analise-de-dados-gratuita/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,899,159 | Introducing Laravel Usage Limiter Package: Track and restrict usage limits for users or accounts. | GitHub Repo: https://github.com/nabilhassen/laravel-usage-limiter Description A Laravel package to... | 0 | 2024-06-24T16:24:18 | https://dev.to/nabilhassen/introducing-laravel-usage-limiter-package-track-and-restrict-usage-limits-for-users-or-accounts-243l | php, laravel, opensource, webdev | **GitHub Repo**: https://github.com/nabilhassen/laravel-usage-limiter
**Description**
A Laravel package to track, limit, and restrict usage limits of users, accounts, or any other model.
With this package, you will be able to set limits for your users, track their usage, and restrict them once they hit their maximum limits.
**Example use cases:**
- Tracking API usage per second, per minute, or per month
- Tracking resource creation such as projects, teams, users, products
- Tracking resource usage such as storage | nabilhassen |
1,899,158 | The Evolution and Future of Online Advertising: Balancing Innovation with Privacy | (9 min read) In the early 2000s, I remember browsing the internet as something very top-notch and... | 0 | 2024-06-24T16:21:59 | https://dev.to/jestevesv/the-evolution-and-future-of-online-advertising-balancing-innovation-with-privacy-3p5m | onlineadvertising, privacyconcerns, ai, targetingaudiences | (9 min read)
In the early 2000s, browsing the internet felt cutting-edge. The web was transitioning from Web 1.0 to Web 2.0, and this period was crucial in defining the business model behind its revenue: should users pay to access content, or should content be free and supported by advertising?
My curiosity about the history of online advertising arose from a desire to understand the current revenue model of the internet and where it might be headed. This article explores the evolution of online advertising, its impact on the internet, and its future trajectory.
**The Birth of Online Advertising**
The early days of the World Wide Web introduced cookie technology, which became crucial for online advertising. Cookies were created in the mid-1990s to keep the shopping-cart contents of early e-commerce applications, ensuring that selections were retained until purchase. Cookie technology was first integrated into the Netscape browser and later into other browsers. As websites began incorporating third-party elements, third-party cookies came into existence. Initially, no restrictions were imposed on these third-party cookies, which allowed early advertisers to gather data and monitor users' online activities. Before the introduction of cookies, the internet was a private space; their implementation made it subject to monitoring (1).
|Notes|
|------------|
|In 1994, cookie technology became part of the beta version of the Netscape browser. It was designed to keep track of transactions, like maintaining items in a shopping cart. Initially, cookies were accepted without notifying users. They are crucial for smooth online transactions. Despite of this, the use of third-party cookies allows data collection and aggregation, creating detailed profiles of internet users based on personal information and browsing habits. Some websites require both first-party and third-party cookies to be enabled for access. Without third-party cookies, these sites would need to add more features themselves and could miss out on creative services such as social networks and performance metrics offered by external parties. Therefore, this reliance on cookies needs further examination and regulation (1a).|
Early experts on internet privacy began expressing concerns about third-party cookies. These cookies could track users' activities, inadvertently giving rise to the internet advertising industry, which significantly influenced the direction of the internet. Internet advertising was perceived as more cost-effective than traditional methods because it eliminated printing costs, and the web was viewed as an information superhighway with limitless potential (2).
**The Dot-Com Bust and Recovery**
A 2001 article in the New York Times suggested the dot-com era was over, portraying early internet entrepreneurs as failed rock stars who would be replaced by more conservative, shrewder businesspeople (3). Online advertising declined because many players did not survive this wave of changes. Nevertheless, it started to recover around 2004 with the emergence of Web 2.0, which facilitated the buying and selling of advertising on websites. As a result, advertisers in the USA spent $10 billion on websites that year, accounting for about 14% of all advertising spend (4).
In the mid 2000s, there was much discussion about how to generate revenue from internet services in the Web 2.0 era. Emerging services like the early version of Facebook prompted debates on whether users should pay for online services. Some reasons for users wanting to pay included the desire for services to evolve and survive, the belief that it was the correct thing to do, and concerns that advertising revenue was not stable. Conversely, reasons for not paying included the tradition of free services, funding from the advertising industry, and reluctance to use credit cards online (5). Business recommendations at the time suggested that users willing to pay for content would not represent a sustainable revenue source unless the content was rich and special. Casual users or those seeking content infrequently could be charged with more pricing flexibility, such as a pay-per-view service (6).
**The Rise of Programmatic Advertising**
Research during the initial stage of Web 2.0 shaped the evolution of online advertising in the following years. Traditional offline media like radio, TV, printed press, and billboards initially saw online advertising as a complement to their strategies. Nonetheless, the impact was inevitable, as printed advertising markets blamed online advertising for decreased ad revenues. Online advertising, a transformative force in the industry, began with search and display advertising. This distinction highlights the different nature of advertising sold by search engines versus publishers.
Instead of buying ad space from each publisher one by one, innovations like programmatic advertising in the late 2000s made it easier for advertisers to connect with publishers using trading platforms. Programmatic advertising, a type of online advertising, simplified the process by automatically selecting the audience and placing the best-fitting ads from online ad spaces chosen through bidding. Despite this innovation, many people in the industry were slow to adopt programmatic advertising. This was because it needed new skills to understand the results and see if a campaign was successful. Many advertisers also said it felt like a "black box" because they didn't fully understand how it worked.
|Notes|
|------------|
|Traditional advertising involves considerable manual effort to secure ad space and negotiate placements, which can be daunting given the growing demand in digital marketing. Programmatic advertising automates this process, enabling advertisers to manage ad placements in real-time through instantaneous buying of ad inventory. This approach offers enhanced insights and gathers data from multiple publishers, providing advertisers with clear visibility into campaign performance. Automation includes tools like Data Management Platforms (DMPs), which create customer profiles based on tastes, habits, and preferences stored as data cookies. Supply Side Platforms (SSPs) manage available ad space on advertising platforms, adjusting based on user channels. Demand Side Platforms (DSPs) connect publishers and advertisers, evaluating precise ad pricing through auction bidding. While programmatic advertising makes operations smoother, marketers may struggle with understanding the technical details needed to manage campaigns effectively and evaluate their performance (6a).|
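The auction step described in the note above can be made concrete with a toy model. Ad exchanges are often described as running second-price auctions; the function below is an invented illustration of that textbook rule, not the mechanics of any specific platform:

```python
# Toy second-price auction: the highest bidder wins the ad slot but pays
# the second-highest bid (an illustrative model, not any real exchange's rules).
def run_auction(bids):
    """bids maps advertiser name -> bid in dollars; returns (winner, price paid)."""
    if not bids:
        return None, 0.0  # no demand for this impression
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner, top_bid = ranked[0]
    # Pay the runner-up's bid, or your own bid if you were the only bidder.
    price = ranked[1][1] if len(ranked) > 1 else top_bid
    return winner, price

winner, price = run_auction({"brand_a": 2.50, "brand_b": 1.75, "brand_c": 0.90})
print(winner, price)  # brand_a 1.75
```

In a real programmatic pipeline this auction resolves in milliseconds, and the DSPs' bids themselves are computed from the user profiles held in DMPs.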
While these challenges persist, the landscape of online advertising is also being influenced by regulations such as the European GDPR, which may decisively change the industry's outlook. The full implications of the GDPR are yet to be fully understood by the market. Concerns have arisen regarding the legality of tools used for data sharing, such as analytics platforms. Still, advancements towards compliance with GDPR regulations are being made across the industry, which includes updates to analytics technologies to ensure alignment with these new standards (7).
|Notes|
|------------|
|In 2023, the European Union issued substantial fines under GDPR rules to major tech firms for privacy breaches. Meta was fined €1.2 billion for moving large amounts of EU user data to the US. Additionally, Meta was fined €390 million for pushing users to agree to their data being used for personalised ads. TikTok was fined €345 million for mishandling children's accounts, which defaulted to public settings and allowed comments without proper protections. Moreover, TikTok was fined €14.5 million for processing data of children under 13 without parental consent and failing to disclose how children's data would be used. These fines show the EU's commitment to enforcing strict data protection rules across the tech industry (7a).|
**Ad-Free Services and User Preferences**
As of 2024, there is skepticism about whether customers are willing to pay for ad-free services. Some changes in social media, such as recent moves towards ad-free feeds, represent a new trend that needs further study. Streaming services offering unique content have begun lowering subscription fees. Consumers willing to receive ads instead of paying a subscription often prefer ads tailored to their interests. This raises important questions about security and privacy, as tracking user preferences extends beyond traditional limits. Additionally, it prompts us to reconsider the effectiveness of defined target audiences for advertising and whether they successfully engage users (8).
**The Challenge of Targeting Audiences**
We live in a world where advertising is everywhere: social networks, mobile notifications, streaming services, mailboxes, etc. Advertising specialists create target audiences, but in the era of abundant online advertising, ad skipping behaviour may indicate that audience identification is becoming more challenging. An Austrian experiment asked an agency to design broad-, narrow-, and no-targeting strategies in a fictitious Facebook campaign. The results illustrated the importance of targeting feasibility. Targeting is better than no targeting, but excessively narrow targeting can lead to higher costs and lower reach, making it unprofitable. This suggests that fulfilling users' desires for custom and tailored advertising can be profitable but inevitably involves targeting some wrong people. The Austrian experiment also showed an inverse correlation between reach and click-through rate (CTR), indicating that maximising both objectives simultaneously is not possible.
|Notes|
|------------|
|The recent introduction of Apple's tracking transparency framework, which requires user consent for third-party cookies, has decreased the quality of data for creating audiences on this platform. While this poses a challenge for targeting audiences, it is positive news for user privacy and security (8a).|
The online advertising industry is complex and difficult to analyse from an academic perspective. That said, the professionalism within the sector provides opportunities to enhance the quality of advertising and its impact on audiences without compromising the industry. Understanding current challenges, such as the annoyance factor, is crucial. Aggressive online marketing techniques involving repetition and intrusiveness can negatively affect the advertised product and brand. Factors to consider, regardless of user age, include the lack of personalisation, ads occupying at least 50% of the screen, fixed-size or non-skippable ads, and pop-up ads. Native advertising, which is sometimes not recognised as ads, creates a less intrusive experience and leads to less annoyance (9).
**The Role of Artificial Intelligence**
Given the widespread use of artificial intelligence, an interesting question arises: Can AI solve the current online advertising challenges? The emerging AI industry, projected to be a $13 trillion ecosystem by 2030, will inevitably transform all digital and non-digital industries. In online advertising, AI advancements are expected to enhance data analysis, copywriting, video production, and the creation of recyclable advertising concepts. Recyclable advertising can adapt to different contexts, platforms, or devices, changing aspects of publicity to better target audiences. In Internet 3.0, which includes the Internet of Things, devices that understand our needs could suggest highly focused advertising. Yet, this prompts questions about the boundaries of advertising and its potential to integrate into our lives without being intrusive or annoying, all while adhering to ethical and regulatory standards (10).
**Conclusions**
The early online advertising industry has quickly developed into a smarter industry with specialised and advanced tools from tech giants like Google and Meta, shaping the advertising landscape. Even so, issues in the industry are leading to regulatory consequences and changes in how applications will use online advertising in the future. More research is needed on the impacts of advertising on the public and the outcomes of these issues, particularly concerning ad skipping, the accuracy of advertising proposals, and the perceived value of free applications balanced with viewing advertisements. The effects of regulation and legal cases will soon become visible.
**Sources**
(1) Dean Donaldson (2008): Online advertising history
Consulted on 24/06/2024
URL: [Link](https://nothingtohide.us/wp-content/uploads/2008/01/dd_unit-1_online_advertsing_history.pdf)
(1a) Ian D. Mitchell (2012): Third-party tracking cookies and data privacy
Consulted on 24/06/2024
URL: [Link](https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2058326)
(2) Karrie Chen et al. (1999): The history of advertising
Consulted on 24/06/2024
URL: [Link](https://kccesl.tripod.com/ESL91NetProject.grprojs.html)
(3) John Schwartz (2001): When the internet was, um, over?
Consulted on 24/06/2024
URL: [Link](https://www.nytimes.com/2001/11/25/style/dot-com-is-dot-gone-and-the-dream-with-it.html)
(4) Masoud Nosrati et al. (2013): Internet Marketing or Modern Advertising! How? Why?
Consulted on 24/06/2024
URL: [Link](https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&doi=40916156a90bbee1eea9cccb33d3c6fd3a7fa8f3)
(5) L. Richard Ye et al. (2004): Fee-Based Online Services: Exploring consumers' willingness to pay
Consulted on 24/06/2024
URL: [Link](https://scholarworks.lib.csusb.edu/jitim/vol13/iss1/12/)
(6) Cheng Lu Wang et al. (2005): Subscription to fee-based online services: What makes consumer pay for online content?
Consulted on 24/06/2024
URL: [Link](http://ojs.jecr.org/jecr/sites/default/files/paper4_20.pdf)
(6a) K Uday Kiran et al (2024): Role of programmatic advertising on effective digital promotion strategy: A conceptual framework
Consulted on 24/06/2024
URL: [Link](https://iopscience.iop.org/article/10.1088/1742-6596/1716/1/012032/pdf)
(7) Lucian Blaga (2022): Online advertising - History, evolution and challenges
Consulted on 24/06/2024
URL: [Link](http://economice.ulbsibiu.ro/revista.economica/archive/74311ungureanu&popescu.pdf)
(7a) Niall McCarthy (2024): The Biggest GDPR Fines of 2023
Consulted on 24/06/2024
URL: [Link](https://www.eqs.com/compliance-blog/biggest-gdpr-fines/)
(8) Charles R. Taylor (2024): Ad skipping, the "ad free internet" and privacy: a call for research
Consulted on 24/06/2024
URL: [Link](https://www.tandfonline.com/doi/full/10.1080/02650487.2024.2309747)
(8a) Iman Ahmadi et al. (2024): Overwhelming targeting options: Selecting audience segments for online advertising
Consulted on 24/06/2024
URL: [Link](https://www.sciencedirect.com/science/article/pii/S0167811623000502?via%3Dihub)
(9) Maria Rigou et al. (2023): Online Ads Annoyance Factor: A Survey of Computer Science Students
Consulted on 24/06/2024
URL: [Link](https://www.intechopen.com/online-first/86252)
(10) Eyice Başev, S. (2024): The role of artificial intelligence (AI) in the future of advertising industry: Applications and examples of AI in advertising
Consulted on 24/06/2024
URL: [Link](https://www.ijetsar.com/Makaleler/1386856240_6.%20167-183%20Sinem%20Eyice%20Ba%C5%9Fev.pdf) | jestevesv |
1,899,157 | Buy Verified Paxful Account | https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are... | 0 | 2024-06-24T16:21:16 | https://dev.to/cokawav672/buy-verified-paxful-account-5gp5 | tutorial, react, python, ai | https://dmhelpshop.com/product/buy-verified-paxful-account/

Buy Verified Paxful Account
There are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.

Moreover, Buy verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.

Lastly, Paxful's verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to Buy Verified Paxful account can greatly enhance one's overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with. Buy Verified Paxful Account.

Buy US verified paxful account from the best place dmhelpshop
Why we declared this website as the best place to buy US verified paxful account? Because, our company is established for providing the all account services in the USA (our main target) and even in the whole world. With this in mind we create paxful account and customize our accounts as professional with the real documents. Buy Verified Paxful Account.

If you want to buy US verified paxful account you should have to contact fast with us. Because our accounts are-

Email verified
Phone number verified
Selfie and KYC verified
SSN (social security no.) verified
Tax ID and passport verified
Sometimes driving license verified
MasterCard attached and verified
Used only genuine and real documents
100% access of the account
All documents provided for customer security

What is Verified Paxful Account?
In today's expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.

In light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.

For individuals and businesses alike, Buy verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions. Buy Verified Paxful Account.

Verified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.

But what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function. Buy verified Paxful account.

Why should to Buy Verified Paxful Account?
There are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.

Moreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence. Buy Verified Paxful Account.

Lastly, Paxful's verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one's overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.

What is a Paxful Account
Paxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features. Buy Verified Paxful Account.

In line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old buy Verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.

Is it safe to buy Paxful Verified Accounts?
Buying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Buy verified Paxful account, you are automatically designated as a verified account. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability. Buy Verified Paxful Account.

PAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces. Buy Verified Paxful Account.

This brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.

How Do I Get 100% Real Verified Paxful Account?
Paxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.

However, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.

In this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.

Moreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.

Whether you are new to Paxful or an experienced user, this engaging paragraph aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.

Benefits Of Verified Paxful Accounts
Verified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.

Verification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly. Buy Verified Paxful Account.

Paxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world's pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.

Paxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful's key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. By leveraging Paxful's escrow system, users can trade securely and confidently.

What sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience. Buy Verified Paxful Account.

How paxful ensure risk-free transaction and trading?
Engage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxful implement stringent identity and address verification measures to protect users from scammers and ensure credibility.

With verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users. Buy Verified Paxful Account.

Experience seamless transactions by obtaining a verified Paxful account. Verification signals a user's dedication to the platform's guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.

In the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains. Buy Verified Paxful Account.

Examining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.

How Old Paxful ensures a lot of Advantages?
Explore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful's user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.

Businesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.

Experience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.

Paxful's verified accounts not only offer reliability within the trading community but also serve as a testament to the platform's ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.

Why paxful keep the security measures at the top priority?
In today's digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.

Safeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.

Conclusion
Investing in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.

The initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.

In conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller's history and reviews before making any transactions.

Moreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.

Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com | cokawav672 |
1,899,150 | 995. Minimum Number of K Consecutive Bit Flips | 995. Minimum Number of K Consecutive Bit Flips Hard You are given a binary array nums and an... | 27,523 | 2024-06-24T16:20:22 | https://dev.to/mdarifulhaque/995-minimum-number-of-k-consecutive-bit-flips-5gpp | php, leetcode, algorithms, programming | 995\. Minimum Number of K Consecutive Bit Flips
Hard
You are given a binary array `nums` and an integer `k`.
A k-bit flip is choosing a subarray of length `k` from `nums` and simultaneously changing every `0` in the subarray to `1`, and every `1` in the subarray to `0`.
Return _the minimum number of **k-bit flips** required so that there is no `0` in the array. If it is not possible, return `-1`_.
A **subarray** is a **contiguous** part of an array.
**Example 1:**
- **Input:** nums = [0,1,0], k = 1
- **Output:** 2
- **Explanation:** Flip nums[0], then flip nums[2].
**Example 2:**
- **Input:** nums = [1,1,0], k = 2
- **Output:** -1
- **Explanation:** No matter how we flip subarrays of size 2, we cannot make the array become [1,1,1].
**Example 3:**
- **Input:** nums = [0,0,0,1,0,1,1,0], k = 3
- **Output:** 3
- **Explanation:**
```
Flip nums[0],nums[1],nums[2]: nums becomes [1,1,1,1,0,1,1,0]
Flip nums[4],nums[5],nums[6]: nums becomes [1,1,1,1,1,0,0,0]
Flip nums[5],nums[6],nums[7]: nums becomes [1,1,1,1,1,1,1,1]
```
**Constraints:**
- <code>1 <= nums.length <= 10<sup>5</sup></code>
- <code>1 <= k <= nums.length</code>
**Solution:**
```php
class Solution {
/**
* @param Integer[] $nums
* @param Integer $k
* @return Integer
*/
function minKBitFlips($nums, $k) {
$flipped = array_fill(0, count($nums), false);
$validFlipsFromPastWindow = 0;
$flipCount = 0;
for ($i = 0; $i < count($nums); $i++) {
if ($i >= $k) {
if ($flipped[$i - $k]) {
$validFlipsFromPastWindow--;
}
}
if ($validFlipsFromPastWindow % 2 == $nums[$i]) {
if ($i + $k > count($nums)) {
return -1;
}
$validFlipsFromPastWindow++;
$flipped[$i] = true;
$flipCount++;
}
}
return $flipCount;
}
}
```
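For readers who prefer JavaScript, the same sliding-window approach can be sketched as a re-implementation (for illustration only; it mirrors the PHP method above):

```javascript
// Sliding-window solution: track how many flips currently cover index i.
// If the effective value at i is still 0, a new flip must start at i;
// it is impossible if the k-wide window would overrun the array.
function minKBitFlips(nums, k) {
  const flipped = new Array(nums.length).fill(false);
  let activeFlips = 0; // flips whose window still covers the current index
  let flipCount = 0;

  for (let i = 0; i < nums.length; i++) {
    // A flip that started k positions ago no longer affects index i.
    if (i >= k && flipped[i - k]) activeFlips--;

    // nums[i] is effectively 0 exactly when activeFlips % 2 === nums[i].
    if (activeFlips % 2 === nums[i]) {
      if (i + k > nums.length) return -1; // window would overrun
      activeFlips++;
      flipped[i] = true;
      flipCount++;
    }
  }
  return flipCount;
}
```

Each index is visited once and expired flips are retired in O(1), so the whole pass runs in O(n) time with O(n) extra space for the `flipped` marker array.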
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)**
| mdarifulhaque |
1,899,149 | Implementando Apache Kafka com Docker e nodejs: Passo a Passo para Iniciantes | Apache Kafka com Docker e nodejs Introdução No mundo moderno da tecnologia, onde dados são gerados e... | 0 | 2024-06-24T16:17:05 | https://dev.to/warmachine13/implementando-apache-kafka-com-docker-e-nodejs-passo-a-passo-para-iniciantes-3jf6 | docker, node, javascript, kafka | Apache Kafka with Docker and Node.js
Introduction
In today's technology-driven world, where data is generated and consumed in unprecedented volumes, systems that can handle large flows of information are increasingly crucial. Apache Kafka is one of these transformative tools, and it has gained significant popularity among companies that need to manage large volumes of data in real time. But what exactly is Kafka, and why is it so important?
What Is Apache Kafka?
Apache Kafka is an open-source, distributed streaming platform designed for building real-time data pipelines and streaming applications. Originally developed at LinkedIn and later donated to the Apache Software Foundation, Kafka has become a fundamental component for many companies that need to process large amounts of data efficiently.
How Does Kafka Work?
Kafka works as a messaging system in which producers send messages to specific topics and consumers read those messages. Its main components are:
- Producers: applications that publish messages to one or more Kafka topics.
- Consumers: applications that read messages from topics.
- Topics: categories to which messages are sent. Each topic is partitioned and replicated to provide high availability and resilience.
- Brokers: servers that store data and serve requests from producers and consumers.
- ZooKeeper: used to manage the Kafka cluster, keeping track of the state of the system.
Kafka's distributed architecture allows it to handle large data volumes while providing a high level of resilience and scalability.
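To make the topic/partition idea concrete, here is a simplified sketch of how a keyed message can be routed deterministically to one partition of a topic. Kafka's real default partitioner uses a murmur2 hash; the toy hash below is a stand-in for illustration only:

```javascript
// Simplified key -> partition routing: the same key always lands on the
// same partition, which is how Kafka preserves per-key message ordering.
// (Kafka's actual default partitioner uses murmur2; this hash is a toy.)
function toyHash(key) {
  let h = 0;
  for (const ch of key) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0; // keep it an unsigned 32-bit int
  }
  return h;
}

function choosePartition(key, numPartitions) {
  return toyHash(key) % numPartitions;
}
```

Because routing depends only on the key, all events for, say, one user ID are read back in order by whichever consumer owns that partition.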
Why Is Kafka Important?
1. Real-Time Processing
Instead of handling data in batches, Kafka enables continuous processing of data streams. This is crucial for applications that need immediate responses, such as system monitoring, fraud detection, and real-time analysis of user behavior.
2. Scalability and Performance
Kafka is highly scalable, allowing companies to grow or shrink capacity as needed. It can process millions of messages per second with low latency, making it ideal for mission-critical applications.
3. Durability and Resilience
Thanks to its replication architecture, Kafka ensures that data is not lost even in the event of hardware failures. This provides a high level of confidence for companies that depend on accurate, available data.
4. Flexibility and Integration
Kafka can integrate with a variety of systems and platforms, including Hadoop, Spark, and traditional database systems. This makes it a versatile choice for different kinds of data architecture.
Kafka Use Cases
Application Monitoring
Companies use Kafka to monitor the health and performance of their applications in real time. Application logs are sent to Kafka, where they are processed and analyzed to detect problems before they affect end users.
Data Analytics
Real-time analytics platforms use Kafka to ingest and process large volumes of data from many sources. This enables fast insights and informed, data-driven decisions.
Recommendation Systems
Companies such as Netflix and Spotify use Kafka to process user-interaction data and generate personalized recommendations, improving the user experience by serving relevant content quickly.
Example:
Using Docker Compose
For a more complete and easier-to-manage setup, you can use Docker Compose with Zookeeper and Kafka.
docker-compose.yml
```yaml
version: '2'
services:
  zookeeper:
    container_name: zookeeper
    image: confluentinc/cp-zookeeper:7.3.0
    hostname: zookeeper
    ports:
      - "2181:2181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_SERVERS: zookeeper:2888:3888

  kafka:
    image: confluentinc/cp-kafka:7.3.0
    hostname: kafka
    container_name: kafka
    ports:
      - "9092:9092"
      - "29092:29092"
      - "9999:9999"
    environment:
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:19092,EXTERNAL://172.10.0.16:9092,DOCKER://172.10.0.16:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT,DOCKER:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:2181"
      KAFKA_BROKER_ID: 1
      KAFKA_LOG4J_LOGGERS: "kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
      KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
      KAFKA_JMX_PORT: 9001
      KAFKA_JMX_HOSTNAME: ${KAFKA_IP:-172.10.0.16}
      KAFKA_AUTHORIZER_CLASS_NAME: kafka.security.authorizer.AclAuthorizer
      KAFKA_ALLOW_EVERYONE_IF_NO_ACL_FOUND: "true"
    depends_on:
      - zookeeper
```
Producing messages: this code shows how to produce messages to a Kafka topic using KafkaJS.
```javascript
import { Kafka } from 'kafkajs';

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['localhost:9092'] // Replace with your broker addresses
});

const producer = kafka.producer();

const runProducer = async () => {
  // Connect the producer
  await producer.connect();

  // Send a message to the 'test-topic' topic
  await producer.send({
    topic: 'test-topic',
    messages: [
      { value: 'Hello KafkaJS user!' },
    ],
  });

  // Disconnect the producer
  await producer.disconnect();
};

runProducer().catch(console.error);
```
Consuming messages: this code shows how to consume messages from a Kafka topic using KafkaJS.
```javascript
import { Kafka } from 'kafkajs';

const kafka = new Kafka({
  clientId: 'my-app',
  brokers: ['localhost:9092'] // Replace with your broker addresses
});

const consumer = kafka.consumer({ groupId: 'test-group' });

const runConsumer = async () => {
  // Connect the consumer
  await consumer.connect();

  // Subscribe to the 'test-topic' topic
  await consumer.subscribe({ topic: 'test-topic', fromBeginning: true });

  // Consume messages
  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log({
        value: message.value.toString(),
      });
    },
  });
};

runConsumer().catch(console.error);
```
Conclusion
Apache Kafka has established itself as an indispensable tool for companies that handle large volumes of data and need real-time processing. Its scalability, durability, and flexibility make it an ideal choice for a wide range of applications, from system monitoring to data analytics and recommendation systems. As data volumes continue to grow, the importance of tools like Kafka will only increase, positioning it as a key piece of modern data infrastructure. | warmachine13 |
1,899,147 | Developing a Paycheck Calculator from Scratch and Integrating it into Your Website | Providing employees with easy access to their payroll information is essential for maintaining... | 0 | 2024-06-24T16:15:55 | https://dev.to/elainecbennet/developing-a-paycheck-calculator-from-scratch-and-integrating-it-into-your-website-4gdk | calculatorintegration, development, paycheckcalculator | Providing employees with easy access to their payroll information is essential for maintaining satisfaction and transparency. A paycheck calculator is a valuable tool that allows employees to estimate their take-home pay after deductions. Developing this tool from scratch and integrating it into your website can enhance user experience and streamline payroll processes. This article outlines the step-by-step process of creating a paycheck calculator, from initial planning to seamless website integration.
## Planning and Requirements Gathering
Before diving into development, it's crucial to outline the requirements for the paycheck calculator. This phase involves identifying the key functionalities, such as calculating gross pay, deducting taxes, and accounting for other deductions like retirement contributions and health insurance premiums. Additionally, consider user interface elements to ensure ease of use and accessibility.
**Define Functional Requirements:** Identify the core functionalities your calculator must have. This includes:

- Input fields for hourly wage, hours worked, and other income sources.
- Options for various types of deductions.
- Output fields displaying gross pay, total deductions, and net pay.

**User Interface Design:** Design a simple and intuitive interface. Use wireframes or mockups to visualize the layout. Consider accessibility features, such as clear labels and easy navigation.
**Technology Stack:** Choose the technologies you'll use for development. For instance, HTML, CSS, and JavaScript for the front-end, and possibly a backend language like Python or Node.js for more complex calculations.
## Development Process
Once the planning phase is complete, you can [begin developing](https://dev.to/javascriptacademy/create-a-simple-calculator-using-html-css-and-javascript-4o7k) the paycheck calculator. This phase involves coding the functionality and designing the user interface.
**HTML Structure:** Create the basic structure of your calculator using HTML. Include input fields for user data and placeholders for the calculated results.
```html
<div id="paycheck-calculator">
  <label for="hourly-wage">Hourly Wage:</label>
  <input type="number" id="hourly-wage" name="hourly-wage">
  <label for="hours-worked">Hours Worked:</label>
  <input type="number" id="hours-worked" name="hours-worked">
  <label for="deductions">Deductions:</label>
  <input type="number" id="deductions" name="deductions">
  <button onclick="calculatePay()">Calculate</button>
  <div id="results">
    <p>Gross Pay: <span id="gross-pay"></span></p>
    <p>Total Deductions: <span id="total-deductions"></span></p>
    <p>Net Pay: <span id="net-pay"></span></p>
  </div>
</div>
```
**CSS Styling:** Add CSS to style your calculator, making it visually appealing and user-friendly.
```css
#paycheck-calculator {
  font-family: Arial, sans-serif;
  width: 300px;
  margin: 0 auto;
  padding: 20px;
  border: 1px solid #ccc;
  border-radius: 5px;
}

#paycheck-calculator label,
#paycheck-calculator input,
#paycheck-calculator button {
  display: block;
  width: 100%;
  margin-bottom: 10px;
}
```
**JavaScript Functionality:** Implement the logic to perform calculations based on user input.
```javascript
function calculatePay() {
  // Parse the inputs as numbers; an input's .value is always a string,
  // and calling .toFixed() on a string would throw a TypeError.
  var hourlyWage = parseFloat(document.getElementById("hourly-wage").value) || 0;
  var hoursWorked = parseFloat(document.getElementById("hours-worked").value) || 0;
  var deductions = parseFloat(document.getElementById("deductions").value) || 0;

  var grossPay = hourlyWage * hoursWorked;
  var totalDeductions = deductions;
  var netPay = grossPay - totalDeductions;

  document.getElementById("gross-pay").innerText = grossPay.toFixed(2);
  document.getElementById("total-deductions").innerText = totalDeductions.toFixed(2);
  document.getElementById("net-pay").innerText = netPay.toFixed(2);
}
```
## Testing and Validation
After developing the initial version, it's essential to test the calculator thoroughly to ensure accuracy and usability. Conduct unit tests to verify that the calculations are correct under various scenarios. Additionally, perform user testing to gather feedback and identify any usability issues.
- **Unit Testing:** Write test cases for different input scenarios to ensure the calculator handles edge cases and typical use cases correctly.
- **User Testing:** Involve a small group of users to test the calculator and provide feedback on the user interface and overall experience.
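The pay calculation itself can be unit-tested independently of the browser. Below is a minimal sketch of such tests, modeled in Python for brevity; `calculate_pay` is an illustrative stand-in that mirrors the JavaScript logic (taxes are intentionally omitted, as in the original), not part of the article's code:

```python
def calculate_pay(hourly_wage, hours_worked, deductions):
    """Mirror of the JavaScript calculatePay() arithmetic, for testing."""
    gross_pay = hourly_wage * hours_worked
    net_pay = gross_pay - deductions
    return round(gross_pay, 2), round(deductions, 2), round(net_pay, 2)

# Typical case: 40 hours at $20/h with $100 in deductions.
assert calculate_pay(20.0, 40, 100.0) == (800.0, 100.0, 700.0)
# Edge cases: zero hours worked, and deductions exceeding gross pay.
assert calculate_pay(20.0, 0, 0.0) == (0.0, 0.0, 0.0)
assert calculate_pay(10.0, 10, 150.0) == (100.0, 150.0, -50.0)
```

The last case is worth flagging to users in the UI: a negative net pay usually signals a data-entry mistake rather than a real paycheck.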
## Integration into Your Website
Once testing is complete, the next step is to integrate the calculator into your website. This involves embedding the calculator's HTML, CSS, and JavaScript code into your web pages and ensuring it fits seamlessly with the site's overall design and functionality.
- **Embedding the Calculator:** Copy the HTML structure, CSS styles, and JavaScript functions into your website's codebase. Ensure the calculator is responsive and adapts to different screen sizes.
- **Consistent Styling:** Ensure the calculator's styling matches your website's overall design. Adjust the CSS as needed to maintain consistency.
- **Deployment and Monitoring:** Deploy the updated website with the integrated calculator. Monitor its usage and gather feedback to make further improvements.
## Conclusion
Developing a paycheck calculator from scratch and integrating it into your website can greatly enhance the user experience for your employees. By following a structured approach, from planning and development to testing and integration, you can create a valuable tool that provides accurate payroll information with ease. Whether it's a general tool or a region-specific solution like a [paycheck calculator Florida](https://oysterlink.com/paycheck-calculator/florida/), this not only helps in maintaining transparency but also improves employee satisfaction by giving them easy access to their financial information. | elainecbennet |
1,899,121 | Secure AWS Resource Deployment Using GitHub Actions | Setting up a GitHub pipeline often involves initiating resource deployment on cloud platforms like... | 0 | 2024-06-24T16:15:38 | https://dev.to/sepiyush/efficiently-managing-multi-directory-repositories-with-github-actions-cicd-pipeline-2hn | idp, aws, githubactions | Setting up a GitHub pipeline often involves initiating resource deployment on cloud platforms like AWS. To accomplish this, a secure authentication mechanism between GitHub Actions and your AWS account is necessary. This blog explores the use of OpenID Connect (OIDC) for secure authentication and provides a detailed example of configuring a GitHub Actions workflow for AWS resource deployment.
## Why Authentication is Necessary
To enable GitHub Actions to interact with AWS and create resources, you need a way to authenticate GitHub with your AWS account. Traditionally, this was done using static credentials like a username and password, but this approach poses significant security risks. Instead, the OIDC method provides a more secure and scalable solution.
## Using OIDC for Secure Authentication
OIDC allows you to configure a trusted relationship between GitHub and AWS. This method involves setting up a provider in AWS IAM, specifically token.actions.githubusercontent.com, and assigning a role to this provider. This role can then be stored in GitHub as a secret. Here’s a step-by-step guide to achieving this:
- **Configure OIDC Provider in AWS IAM:**
1. In the AWS Management Console, go to IAM and create an identity provider.
2. Select OpenID Connect as the provider type.
3. Set the provider URL to https://token.actions.githubusercontent.com.
4. Add sts.amazonaws.com to the audience.

- **Create an IAM Role:**
1. Create a new role in IAM and select the newly created OIDC provider as the trusted entity.
2. Assign necessary permissions to this role to allow it to interact with your AWS resources.
3. Restrict permissions to ensure that only the intended services and resources can be accessed or modified by GitHub Actions using this role.



- **Store the Role ARN in GitHub Secrets:**
Go to your GitHub repository settings and add a new secret named `AWS_ROLE_ARN` with the value being the ARN of the IAM role created in the previous step.
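For reference, the trust policy attached to such a role generally follows the shape sketched below. The account ID and `repo:your-org/your-repo` values are placeholders to replace with your own; pinning the `sub` claim to a specific repository (and optionally a branch) is what prevents other repositories from assuming the role:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        },
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:your-org/your-repo:ref:refs/heads/main"
        }
      }
    }
  ]
}
```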
## Advantages of Using OIDC
- **Enhanced Security**: Instead of using static credentials, GitHub assumes the role and receives temporary credentials for deployment. This minimizes the risk of credential leaks.
- **Fine-Grained Access Control**: You can define precise permissions for the IAM role, ensuring that GitHub Actions can only perform specific actions on your AWS account.
## Configuring GitHub Actions Workflow
To utilize the IAM role with OIDC, you can use the `aws-actions/configure-aws-credentials@v1` GitHub Action. Ensure that your GitHub workflow includes the necessary permissions (`id-token: write` and `contents: read`) to allow GitHub Actions to perform the AWS token exchange successfully.
Here is an example of a GitHub Actions workflow file for deploying AWS resources using Terraform:
```yml
name: Deploy
on:
push:
branches:
- main
jobs:
terraform:
runs-on: ubuntu-latest
permissions:
id-token: write
contents: read
steps:
- name: Checkout repository
uses: actions/checkout@v2
- name: Configure AWS Credentials
id: aws-creds
uses: aws-actions/configure-aws-credentials@v1
with:
role-to-assume: ${{ secrets.AWS_ROLE_ARN }}
aws-region: us-east-1
- name: Interacting with AWS
run: aws lambda update-function-code --function-name my-lambda-function --zip-file fileb://my-lambda-package.zip
```
## Breakdown of the Workflow
- **Checkout Repository**: This step checks out the repository to access the code.
- **Configure AWS Credentials**: This step configures AWS credentials using the IAM role. The role to assume and AWS region are specified.
- **Interacting with AWS**: This step runs a command to update the AWS Lambda function code. You can replace this with any AWS CLI command relevant to your deployment.
## Conclusion
Using OIDC for authentication between GitHub Actions and AWS is a secure and efficient method for deploying resources. By setting up an OIDC provider, creating a restricted IAM role, and configuring your GitHub Actions workflow correctly, you can ensure secure and seamless interactions with your AWS account. This approach not only enhances security but also simplifies the management of credentials and permissions. | sepiyush |
1,899,145 | Serverless on GCP using Cloud Functions | Introduction: This Post Introduce and Demonstrate how to deploy a stateless code/script on Google... | 0 | 2024-06-24T16:11:58 | https://dev.to/iamgauravpande/serverless-on-gcp-using-cloud-functions-ekp | serverless, githubactions, googlecloud, security | **Introduction:**
This post introduces and demonstrates how to deploy a stateless code/script to Cloud Functions, Google Cloud Platform's serverless environment.
**GCP Resources Used:**
1. Cloud Scheduler Job
2. Pub/Sub Topic
3. Cloud Function (1st Gen)
4. Two Service Accounts (for Infra and the Cloud Function Runtime Service Account)
**Tools Used:**
1. Terraform for IaC
2. CI/CD - GitHub Actions
**Conceptual Diagram:**

As per the diagram above, the idea is to run an event/trigger-based Cloud Function in the following scenario:
When a _Cloud Scheduler Job_ runs (automatically or via a forced run), it sends a message body to a dedicated _Cloud Pub/Sub topic_ that has a push-based subscription to the _1st Gen Cloud Function_.
As soon as the push subscription delivers the message, the **_entrypoint function_** defined in the Cloud Function is triggered and executes the code flow.
**Security Practice:**
1. The GitHub Action responsible for deploying the infra and the Cloud Function itself uses GCP WIF (Workload Identity Federation) pool-based authentication; for more information see this: https://dev.to/iamgauravpande/enabling-workload-identity-federation-for-github-actions-on-gcp-h8g
2. The Python code run by the Cloud Function fetches its secret/token/password from Google Secret Manager to avoid storing plaintext passwords in the GitHub repo.
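To illustrate the flow above, a 1st Gen Pub/Sub-triggered entrypoint might look like the sketch below. The function name, secret path, and message handling are placeholders for illustration, not the project's actual code:

```python
import base64


def entrypoint(event, context=None):
    """1st Gen Cloud Function entrypoint for a Pub/Sub (background) trigger.

    `event["data"]` carries the base64-encoded body of the message that the
    Cloud Scheduler job published to the topic.
    """
    message = base64.b64decode(event["data"]).decode("utf-8")
    # Fetch credentials at runtime instead of storing them in the repo, e.g.:
    # token = fetch_secret("projects/PROJECT_ID/secrets/api-token/versions/latest")
    print(f"Received scheduler message: {message}")
    return message


def fetch_secret(name):
    # Lazy import so the module also loads where the client library is absent.
    from google.cloud import secretmanager

    client = secretmanager.SecretManagerServiceClient()
    return client.access_secret_version(name=name).payload.data.decode("utf-8")
```

The runtime service account attached to the function needs the Secret Manager Secret Accessor role on that secret for the fetch to succeed.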
**NOTE:** The Cloud Function source code along with Infra can be found at: https://github.com/iamgauravpande/serverless-on-gcp
| iamgauravpande |
1,899,143 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-24T16:08:00 | https://dev.to/cokawav672/buy-verified-cash-app-account-abc | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n• Genuine and activated email verified\n• Registered phone number (USA)\n• Selfie verified\n• SSN (social security number) verified\n• Driving license\n• BTC enable or not enable (BTC enable best)\n• 100% replacement guaranteed\n• 100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n \nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n \nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts.\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. 
This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n \nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. 
With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n \nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.\n \nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. 
It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. 
Buy verified cash app account.\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n\n" | cokawav672 |
1,899,140 | Stay Updated with PHP/Laravel: Weekly News Summary (17/06/2024–23/06/2024) | Dive into the latest tech buzz with this weekly news summary, focusing on PHP and Laravel updates... | 0 | 2024-06-24T16:04:17 | https://poovarasu.dev/php-laravel-weekly-news-summary-17-06-2024-to-23-06-2024/ | php, laravel | Dive into the latest tech buzz with this weekly news summary, focusing on PHP and Laravel updates from June 17th to June 23rd, 2024. Stay ahead in the tech game with insights curated just for you!
This summary offers a concise overview of recent advancements in the PHP/Laravel framework, providing valuable insights for developers and enthusiasts alike. Explore the full post for more in-depth coverage and stay updated on the latest PHP/Laravel development.
Check out the complete article here https://poovarasu.dev/php-laravel-weekly-news-summary-17-06-2024-to-23-06-2024/ | poovarasu |
1,899,139 | Promising Urology Specialist revealed to us how she has restored the potency of her 60-70-year-old patient | The most effective treatment for potency problems, Men's Defence, can now be ordered with 50%... | 0 | 2024-06-24T16:02:45 | https://dev.to/freelancerabed/promising-urology-specialist-revealed-to-us-how-she-has-restored-the-potency-of-her-60-70-year-old-patient-10j1 | The most effective treatment for potency problems, Men's Defence, can now be ordered with 50% discount on the manufacturer's official website - the private center of urology. Read more about the details.
Our reporter spoke with the best specialist in the private centre of urology, Katharina Weiler. She has the best reviews and a reputation in the field of erectile dysfunction treatment.
Ms. Weiler, you have patients aged 60-70 years. Do you really have sex at this age?
- Of course! Especially men. Sex life determines all health, in other words it is a biological watch that determines how fast you age.
Patients with prostate inflammation and prostate adenomas came to me, and when I asked them when they had last had sex, they laughed and said something like: "I had no sex for 7 years". Of course, I was surprised by their answers. This is the reason for prostate infections, which can later become prostate adenomas.
The lack of sex harms the organism, whether at 30, 40, 50 or even 70 years! The prostate is the organ in which seminal fluid is formed. If it is not ejaculated, it degrades. A process of pathological stagnation takes place in the prostate (a collection of dangerous microorganisms). They lead to inflammation of the prostate (prostatitis). Inflammation of the prostate gland over a long period of time leads to prostate adenoma, urinary impairments and pyelonephritis. The next stage after the adenoma is cancer. Patients with potency problems have a 7 times higher risk of developing prostate cancer.
This is just one of the many consequences. In other words, the long abstinence can lead to rapid death. People are done to have sex. But when this function is reduced, the organism quickly starts with aging. The blood vessels become thinner and break, the risk of a stroke or a heart attack increases, dysfunctions of the psychosomatic system occur. The patient's hair quickly turns grey (if they are not yet grey). Joint pain occurs, the posture is crooked. Even insomnia, which elderly people suffer, is associated with lack of sex and insufficient levels of certain hormones.
I want to show you some pictures so that you can see for yourself what happens to the urinary tract and other organs of a man who has no sex with the genital system.
This is prostate inflammation caused by sperm stagnation (when the testicles are not emptied). The persistent inflammation initially leads to prostate adenomas and later to prostate cancer (based on statistics, 38% of men die from it). Without sex this always happens, which is why prostate adenomas are called the "disease of age".
Prostate cancer in a 58-year-old man. Reason – persistent abstinences (since the age of 51). The patient died.
Due to stagnation, the cholesterol is stored on the walls of the blood vessels and thrombosis can be formed. Thromboses increase the risk of stroke or heart attack. The patient's death occurred due to a heart attack, his heart is seen in these pictures.
Do you still believe that you do not need to have sex? Sex brings stability into a relationship. Unfortunately, most of the impotent men are single. Women can have sex up to the age of 70-80 years. You need sex. The rule says: the more you can have sex, the longer the relationship holds.
- Some patients say that you use a unique drug in your clinic to restore potency so that men can have sex at any age? Is that true?
- Our clinic has always had a high recovery rate for various disorders. We use unique medicines developed by our scientists. When it comes to the male sex life, we use a method of treatment called Men’s Defence, which was developed by our institute.
Soon after administration, Men’s Defence increases arousal (it can be used instead of Viagra as a harmless sexual stimulant), and with regular use it strengthens natural potency. This is the only product that works this way, but you cannot find it in pharmacies.
- Please tell us more about the effect and efficiency of this product.
- It is really an exceptional treatment method. You can see that our appointments are fully booked for weeks. We only prescribe Men’s Defence, which you can buy at the pharmacy of our institute or order from a special page with a 50% discount.
With regard to the effectiveness of Men’s Defence, I would like to show you the results of the clinical studies conducted by the specialists of our institute in 2015-2016. Fifty men over 50 years old participated in this experiment. Most of the participants had been impotent for years.
The study results:
1. Complete recovery of potency (ability to have sex at least once a week) – 96% of participants
2. High testosterone level – 94% of participants
3. Normalization of urination (including frequency) – 98% of participants
4. Overall improvement of condition – 99% of participants
5. No side effects – 100% of participants
6. The treatment is perfect.
The innovative mechanism of action of this treatment for the male urinary and genital system
We decided to learn more about Men’s Defence’s mode of action on the male urinary and genital system from one of the people involved in its development – Professor Heinrich Schneider, principal specialist in urology, corresponding member of the United Kingdom Science Academy and head of the Department of Urology.
- Mr. Schneider, how does Men’s Defence effectively fight erectile dysfunction at the age of 60-70? What is its mode of action?

| freelancerabed | |
1,898,737 | TW Elements - TailwindCSS Focus, active & other states. Free UI/UX design course | Focus, active and other states Hover isn't the only state supported by Tailwind... | 25,935 | 2024-06-24T16:00:00 | https://dev.to/keepcoding/tw-elements-tailwindcss-focus-active-other-states-free-uiux-design-course-1jaa | tailkwind, design, html, tutorial | ## Focus, active and other states
Hover isn't the only state supported by Tailwind CSS.
Thanks to Tailwind, we can use any state available in regular CSS directly in our HTML.
Below are examples of several pseudo-class states supported in Tailwind CSS.
_You don't have to try to memorize them now, we'll cover them in detail in the next lessons. For now, just be aware of their existence so you won't be surprised if you spot them in some TW Elements component._
**Focus:** Applied when an element has focus. It is enabled by default in Tailwind CSS. The modifier used is _focus:_
**Active:** Applied when an element is being activated by the user. The modifier used is _active:_
**Visited:** Applied once a user has visited a link. The modifier used is _visited:_
**Disabled:** Applied when an element is disabled. The modifier used is _disabled:_
**Group-hover and group-focus:** These are special states in Tailwind CSS that are used to apply styles to one element when another element is hovered or focused. They are typically used with the group utility class. The modifiers used are _group-hover:_ and _group-focus:_ respectively.
**First-child and last-child:** Applied to the first or last child of its parent. The modifiers used are _first:_ and _last:_
**Even and odd:** Applied to even or odd children of its parent. The modifiers used are _even:_ and _odd:_
**Dark:** Applied when dark mode is active. The modifier used is _dark:_
**Placeholder:** Applied to style placeholder text in form inputs. The modifier used is _placeholder:_
**Checked:** Applied when a checkbox or radio button is checked. The modifier used is _checked:_
**Focus-within:** Applied when an element itself has focus or contains an element that has focus. The modifier used is _focus-within:_
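For instance, the _group-hover:_ state from the list above could look like this (a minimal sketch using plain Tailwind utility classes, not taken from any TW Elements component):

**HTML**
```
<div class="group rounded border p-4">
  <h3 class="text-neutral-800 group-hover:text-blue-600">Card title</h3>
  <p class="text-neutral-500 group-hover:text-neutral-700">
    Hovering anywhere over the parent with the group class restyles both children.
  </p>
</div>
```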
## Using multiple states
There is no problem in Tailwind if you want to use multiple states in one element.
A good example is a simple button that must handle hover, focus, active, dark, and sometimes other states.

And here is the button code:
**HTML**
```
<a
href="#"
target="_blank"
role="button"
data-twe-ripple-init
data-twe-ripple-color="light"
class="mb-4 inline-block rounded bg-primary px-6 pb-2 pt-2.5 text-xs font-medium uppercase leading-normal text-white shadow-primary-3 transition duration-150 ease-in-out hover:bg-primary-accent-300 hover:shadow-primary-2 focus:bg-primary-accent-300 focus:shadow-primary-2 focus:outline-none focus:ring-0 active:bg-primary-600 active:shadow-primary-2 motion-reduce:transition-none dark:shadow-black/30 dark:hover:shadow-dark-strong dark:focus:shadow-dark-strong dark:active:shadow-dark-strong">
Primary
</a>
```
I know, it looks overwhelming. The number of classes in the HTML raises concerns about clutter, but it's another thing you just have to get used to in Tailwind.
Anyway, note that we can apply any modifiers to our button to handle any states.
We just add them side by side:
**HTML**
```
<div
class="bg-blue-500 hover:bg-red-400 focus:bg-yellow-600 active:bg-green-700">
Random element
</div>
```
## Multiple states in the Navbar
Take another look at the Navbar in our project.
When you hover over a link on the left or an icon on the right, the color changes. When you move the cursor away, the color returns to its original state, thanks to the _hover:_ modifier.
However, if you click on a specific link or icon and then move the cursor away, the changed color remains until you click somewhere else on the page, thanks to the _focus:_ modifier.

So if you look closely at the sample link in our Navbar, you'll see that we use different modifiers here to handle different states:
**HTML**
```
<a
class="text-neutral-500 hover:text-neutral-700 focus:text-neutral-700 disabled:text-black/30 dark:text-neutral-200 dark:hover:text-neutral-300 dark:focus:text-neutral-300 lg:px-2 [&.active]:text-black/90 dark:[&.active]:text-zinc-400"
href="#"
data-twe-nav-link-ref
>Dashboard</a
>
```
You probably also noticed there, in addition to modifiers such as _focus:_ and _active:_, the _dark:_ modifier.
It is used to support the famous dark mode, which we will deal with in the next lesson.
| keepcoding |
1,899,137 | AIM Weekly for 24 June 2024 | Liquid syntax error: Tag '{% raw %}' was not properly terminated with regexp: /\%\}/ | 0 | 2024-06-24T15:57:56 | https://dev.to/tspannhw/aim-weekly-for-24-june-2024-4o5l | milvus, opensource, vectordatabase, genai | ## 24-June-2024
Tim Spann @PaaSDev
Milvus - Towhee - Attu - Feder - GPTCache - VectorDB Bench
### AIM Weekly
### Towhee - Attu - Milvus (Tim-Tam)
### FLaNK - FLiPN
With a name like that, I'm not sure how I could not include it in my collection.
SPANN: Highly-efficient Billion-scale Approximate Nearest Neighborhood Search
https://proceedings.neurips.cc/paper/2021/hash/299dc35e747eb77177d9cea10a802da2-Abstract.html
Congrats to Milvus
https://www.dbta.com/Editorial/Trends-and-Applications/DBTA-100-2024-The-Companies-That-Matter-Most-in-Data-164289.aspx
https://github.com/milvus-io/milvus?utm_source=partner&utm_medium=referral&utm_campaign=2024_newsletter_tspann-ai-newsletters_external
https://pebble.is/PaaSDev
https://vimeo.com/flankstack
https://www.youtube.com/@FLaNK-Stack
https://www.threads.net/@tspannhw
https://medium.com/@tspann/subscribe
https://ossinsight.io/analyze/tspannhw
### CODE + COMMUNITY
Please join my meetup group NJ/NYC/Philly/Virtual.
https://www.meetup.com/unstructured-data-meetup-new-york/?utm_source=partner&utm_medium=referral&utm_campaign=2024_newsletter_tspann-ai-newsletters_external
This is Issue #143
Join me:
July 25, 2024 5:30 to 8:30 PM in NYC @ Cloudera
101 5th Ave · New York, NY
Cloudera office - 8th Floor
https://www.meetup.com/unstructured-data-meetup-new-york/events/301720478/
#### New Releases
#### Hardware
Raspberry Pi AI Kit
#### Upcoming
July 18 - Meetup - AI Camp NYC @ Microsoft Reactor
July 25 - Meetup @ Cloudera NYC
August 13 - Meetup @ Hudson Yards NYC
#### Cool Stuff
Look at this cool MAD Map for Real-Time Python with Milvus listed
https://bytewax.io/blog/bytewax-mad-map
Token prices of most of the major models
https://github.com/AgentOps-AI/tokencost
Launch into PyTorch Lightning Studio
https://lightning.ai/badge
Low Code AutoGen Studio
https://github.com/microsoft/autogen/tree/main/samples/apps/autogen-studio
Another great work by Victor D!
A really great article by Rakuten on Milvus
https://symphony.rakuten.com/blog/rakuten-symphony-makes-llms-breakthrough-with-ipv6-innovation
Milvus has fast insert
https://zilliz.com/blog/clear-up-misconceptions-about-data-insertion-speed-in-milvus
#### Articles
There's a lot of cool stuff with Milvus and new models, techniques, libraries and use cases.
https://medium.com/@tspann/summer-ai-camp-in-times-square-20-june-2024-report-842543bc587b
https://medium.com/@tspann/unstructured-data-processing-with-a-raspberry-pi-ai-kit-c959dd7fff47
https://medium.com/@tspann/startup-grind-princeton-report-for-18-june-2024-ed7b9928a725
https://medium.com/@tspann/not-every-field-is-just-text-numbers-or-vectors-976231e90e4d
https://developer.nvidia.com/blog/tips-for-building-a-rag-pipeline-with-nvidia-ai-langchain-ai-endpoints/?dysig_tid=00412d783bcd4bf3acd6c76be95c699f
https://github.com/nicanorflavier/spf-dkim-dmarc-simplified
https://www.infoq.com/news/2024/06/meta-llm-megalodon
https://github.com/mistralai/cookbook/blob/main/third_party/LlamaIndex/ollama_mistral_llamaindex.ipynb
https://www.decube.io/post/ingest-data-vector-database-milvus?utm_source=partner&utm_medium=referral&utm_campaign=2024_newsletter_tspann-ai-newsletters_external
https://medium.com/@zilliz_learn/crafting-superior-rag-for-code-intensive-texts-with-zilliz-cloud-pipelines-and-voyage-ai-49cba221f76e
https://medium.datadriveninvestor.com/llmops-monitoring-for-agent-ai-platforms-dee474b2877f
https://medium.com/sourcescribes/7-open-source-tools-you-will-find-very-useful-06eddb20518b
https://medium.com/ubiai-nlp/fine-tuning-mistral-7b-for-named-entity-recognition-ner-bbb96af918d3
https://thenewstack.io/generate-learned-sparse-embeddings-with-bge-m3/
https://thenewstack.io/install-ollama-ai-on-ubuntu-linux-to-use-llms-on-your-own-machine/?utm_source=
https://medium.com/@enrico.randellini/exploring-microsoft-phi3-vision-language-model-as-ocr-for-document-data-extraction-c269f7694d62
https://ai.plainenglish.io/top-9-open-source-text-to-speech-tts-models-7ac572cfc7d4
https://augmentedstartups.medium.com/yolov10-vs-yolov9-vs-yolov8-8a3c3d596b6f
#### Videos
Unstructured Data Processing with RPI 5 AI Kit
https://www.youtube.com/watch?v=tZFJ1DDkD1Q
Using JSON Fields with Milvus
https://www.youtube.com/watch?v=HP5L3Hr6Mt8
DSS ML Talk
https://www.youtube.com/watch?v=t17Ga4l5gvo
RPI Videos
https://www.youtube.com/watch?v=lrLAjqWsYjU
https://www.youtube.com/watch?v=GgxLWPNLf9I
https://www.youtube.com/watch?v=OMy7ggCVeEY
https://www.youtube.com/watch?v=gRP9y3w_Ago
https://www.youtube.com/watch?v=myZgzNfY08U
https://www.youtube.com/watch?v=gYVbj2nLVYc
https://www.youtube.com/watch?v=396ZN4T66uA
https://www.youtube.com/watch?v=i_caGhsFwMM
#### Slides
https://www.slideshare.net/slideshow/06-20-2024-ai-camp-meetup-unstructured-data-and-vector-databases/269789268
https://www.slideshare.net/slideshow/06-18-2024-princeton-meetup-introduction-to-milvus/269765983
https://www.slideshare.net/slideshow/06-12-2024-budapestdataforum-buildingreal-timepipelineswithflank-aim/269645846
#### Events
Oct 27 - 29, Raleigh, NC - All Things Open
https://2024.allthingsopen.org/speakers/timothy-spann

Nov 5-7, 10-12, 2024: CloudX. Online/Santa Clara. https://www.developerweek.com/cloudx/
Nov 19, 2024: XtremePython. Online.
https://xtremepython.dev/2024/
#### Code
* https://github.com/tspannhw/AIM-MilvusLite
* https://github.com/tspannhw/AIM-NYCStreetCams
* https://github.com/tspannhw/AIM-MotorVehicleCollisions
* https://github.com/milvus-io/milvus?utm_source=partner&utm_medium=referral&utm_campaign=2024_newsletter_tspann-ai-newsletters_external
#### Models
* https://huggingface.co/mistralai/Codestral-22B-v0.1
#### Tools
* https://opensustain.tech/
* https://amphi.ai/
* https://omid.dev/2024/06/19/advanced-shell-scripting-techniques-automating-complex-tasks-with-bash/
* https://blog.christianperone.com/2023/06/appreciating-llms-data-pipelines/
* https://github.com/Canner/wren-engine
* https://github.com/Helicone/helicone
* https://github.com/BasedHardware/OpenGlass
* https://www.firecrawl.dev/
* https://github.com/mendableai/firecrawl
* https://github.com/OpenBMB/MiniCPM/blob/main/README-en.md
* https://github.com/nomic-ai/nomic
* https://github.com/AiuniAI/Unique3D
* https://github.com/mayneyao/eidos
* https://event-driven-io.github.io/emmett/getting-started.html
© 2020-2024 Tim Spann https://www.youtube.com/@FLaNK-Stack
~~~~~~~~~~~~~~~ CONNECT ~~~~~~~~~~~~~~~
🖥️ Videos: [https://www.youtube.com/@MilvusVectorDatabase/videos](https://www.youtube.com/@MilvusVectorDatabase/videos)
X Twitter - / milvusio [https://x.com/milvusio](https://x.com/milvusio)
🔗 Linkedin: / zilliz [https://www.linkedin.com/company/zilliz/](https://www.linkedin.com/company/zilliz/)
😺 GitHub: [https://github.com/milvus-io/milvus](https://github.com/milvus-io/milvus)
🦾 Invitation to join discord: / discord [https://discord.com/invite/FjCMmaJng6](https://discord.com/invite/FjCMmaJng6)
| tspannhw |
1,899,135 | Server Side Rendering (SSR) in Next.js to enhance performance and SEO. | Hi everybody! In this article, I'm going to explain you what is server-side rendering (SSR) in... | 0 | 2024-06-24T15:54:48 | https://dev.to/mooktar_dev/server-side-rendering-ssr-in-nextjs-to-enhance-performance-and-seo-175c | nextjs, ssr, javascript, react | Hi everybody!
In this article, I'm going to explain what server-side rendering (SSR) is in Next.js.
**Server-Side Rendering (SSR) in Next.js: Enhancing Performance and SEO**
In the realm of web development, delivering a seamless user experience is paramount. Next.js, a popular React framework, empowers developers to create dynamic web applications that excel in both performance and search engine optimization (SEO) through the power of SSR.
**What is Server-Side Rendering (SSR)?**
SSR is a technique where the server generates the initial HTML content of a web page, including its data, in response to a client's request. This pre-rendered HTML is then sent to the browser, allowing for a faster initial page load and improved SEO.
**Benefits of SSR in Next.js:**
- **Enhanced SEO:** Search engines can readily index and understand the content of your pages because the HTML is fully rendered on the server, making your application more discoverable in search results.
- **Faster Initial Load:** Users perceive a quicker initial load time as the browser receives a complete HTML document, eliminating the wait for JavaScript to download and execute.
- **Improved User Experience:** Faster rendering translates to a more responsive and engaging user experience, especially for users on slower internet connections.
- **Accessibility:** SSR ensures that your application's content is readily available to users who may have JavaScript disabled in their browsers.
**Implementing SSR in Next.js:**
Next.js provides two primary methods for implementing SSR:
- **getServerSideProps:** This asynchronous function is executed on the server for each request. It's ideal for fetching data that is specific to a request or user, such as personalized content or user authentication. The data fetched is then passed as props to the page component, allowing it to be rendered with the retrieved data.
- **getStaticProps:** This function is also asynchronous, but it's executed at build time. It's well-suited for fetching data that is relatively static and doesn't require frequent updates, such as blog posts or product information. The data fetched is then pre-rendered into the HTML, resulting in exceptional performance for those pages.
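To make the two methods concrete, here is a minimal sketch of both functions for a hypothetical blog-post page. The `fetchPostFromDb` helper and the data shapes are illustrative assumptions, not part of the Next.js API; in a real page file both functions would also be `export`ed:

```javascript
// Stand-in for a real database query or fetch() call (illustrative only).
async function fetchPostFromDb(slug) {
  return { slug, title: `Post about ${slug}`, views: 42 };
}

// getServerSideProps runs on the server for EVERY request,
// so it can use per-request data such as route params or cookies.
async function getServerSideProps(context) {
  const post = await fetchPostFromDb(context.params.slug);
  return { props: { post } }; // passed to the page component as props
}

// getStaticProps runs once at BUILD time,
// so the resulting HTML is pre-rendered and served instantly.
async function getStaticProps({ params }) {
  const post = await fetchPostFromDb(params.slug);
  return { props: { post } };
}
```

Either function returns an object whose `props` key becomes the props of the page component; the only difference is when Next.js calls it.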
**Choosing the Right Method:**
The choice between `getServerSideProps` and `getStaticProps` depends on the nature of your content:
- Use `getServerSideProps` for dynamic content that varies on a per-request basis, such as user-specific data or personalized recommendations.
- Use `getStaticProps` for content that is relatively static and doesn't require frequent updates, as it offers the best performance for those pages.
**Additional Considerations:**
- **Data Fetching:** Next.js works seamlessly with standard data-fetching tools, such as the built-in `fetch` or third-party API clients, making it easy to retrieve data for SSR.
- **Incremental Static Regeneration (ISR):** Next.js 9.5 introduced ISR, which allows you to re-generate individual static pages at specific intervals or upon request, providing a balance between static content and dynamic updates.
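ISR is opted into by returning a `revalidate` value from `getStaticProps`. A minimal sketch (the product data is a made-up placeholder):

```javascript
// Hypothetical product page: the page is statically generated,
// but Next.js will re-generate it in the background at most
// once every 60 seconds after a request comes in.
async function getStaticProps() {
  const products = [{ id: 1, name: "Widget" }]; // stand-in for a real fetch
  return {
    props: { products },
    revalidate: 60, // seconds between background regenerations
  };
}
```

Between regenerations, visitors keep receiving the previously built static page, so the fast-load benefit of `getStaticProps` is preserved.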
**Conclusion**
SSR in Next.js is a powerful tool for building high-performance, SEO-friendly web applications. By understanding its benefits, implementation options, and how to choose the right method, you can create exceptional user experiences and enhance your application's discoverability in search engines. | mooktar_dev |
1,899,136 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-24T15:54:43 | https://dev.to/menohiw889/buy-verified-cash-app-account-45op | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. 
How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. 
However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. 
Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. 
Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. 
It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. 
Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | menohiw889 |
1,899,134 | Computer Organization and Architecture | During my preparation for the master’s degree entrance exam 😁🥇, I delved into William Stallings'... | 0 | 2024-06-24T15:53:10 | https://dev.to/_hm/computer-organization-and-architecture-1d3f | webdev, beginners, programming, tutorial | During my preparation for the master’s degree entrance exam 😁🥇, I delved into William Stallings' "Computer Organization and Architecture: Designing for Performance, 9th ed., Pearson Education, Inc., 2013." Chapter 17 particularly caught my attention as it explores parallel processing, an essential topic for developers and engineers aiming to maximize productivity. Given the complexity of the subject matter, I used various artificial intelligence tools to aid my study process. Here’s a brief overview of the chapter's introduction and its key insights. I hope you find it insightful and useful.
**Computer Organization and Architecture: Designing for Performance, 9th ed., Pearson Education, Inc., 2013.**
_Chapter 17 : Parallel Processing_
Computers traditionally operate as sequential machines where instructions are executed one after another: fetch instruction, fetch operands, perform operation, store results. However, beneath this sequential appearance, modern processors utilize micro-operations that can occur simultaneously. Techniques like instruction pipelining overlap fetch and execute phases to improve efficiency.
Superscalar processors extend this parallelism further by incorporating multiple execution units within a single processor. This allows execution of multiple instructions concurrently from the same program, enhancing performance significantly.
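The payoff from overlapping fetch and execute phases can be sketched with the standard pipeline-throughput formula (an illustration, not code from the chapter): an unpipelined machine spends all k stage-cycles on every instruction, while a k-stage pipeline completes one instruction per cycle once the pipeline is full.

```python
def cycles_unpipelined(n_instructions, stages):
    # Each instruction occupies the processor for all of its stages.
    return n_instructions * stages

def cycles_pipelined(n_instructions, stages):
    # 'stages' cycles to fill the pipeline, then one completion per cycle.
    return stages + (n_instructions - 1)

print(cycles_unpipelined(100, 5))  # 500
print(cycles_pipelined(100, 5))    # 104
```

For long instruction streams the speedup approaches the number of stages, which is why pipelining is such a cheap win.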
Advancements in computer hardware have driven the pursuit of parallelism to boost performance and availability. This chapter explores several parallel organization strategies:
**1. Symmetric Multiprocessors (SMPs):** Multiple processors share a common memory, enabling parallel execution. Cache coherence management is critical in SMPs to maintain data consistency.
**2. Multithreaded Processors and Chip Multiprocessors:** These architectures improve throughput by executing multiple threads simultaneously, either within a single core (multithreaded) or across multiple cores (chip multiprocessors).
**3. Clusters:** Clusters are groups of independent computers working together, often interconnected via high-speed networks. They handle large workloads beyond the capability of single SMP systems.
**4. Nonuniform Memory Access (NUMA) Machines:** NUMA architectures optimize memory access by providing faster local memory access compared to remote memory, suitable for scalable systems.
**5. Vector Computation:** Supercomputers use specialized hardware like vector processors to efficiently handle arrays or vectors of data, accelerating tasks involving large-scale computations.
These parallel organizational approaches reflect ongoing efforts to maximize computer performance and scalability as technology evolves.
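As a toy illustration of the data-parallel idea behind these organizations (a Python sketch, not from the book), a workload can be split into chunks handled by concurrent workers; note that for CPU-bound pure-Python work, threads illustrate the structure rather than a real speedup, because of the interpreter's global lock.

```python
from concurrent.futures import ThreadPoolExecutor

def scale_chunk(chunk, factor):
    # Each worker applies the same operation to its slice of the data,
    # mirroring how a vector processor operates on whole arrays.
    return [x * factor for x in chunk]

def parallel_scale(data, factor, workers=4):
    # Partition the input into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(scale_chunk, chunks, [factor] * len(chunks))
    # Reassemble the partial results in order.
    return [x for chunk in results for x in chunk]

print(parallel_scale(list(range(8)), 2))  # [0, 2, 4, 6, 8, 10, 12, 14]
```

The same split/compute/combine shape underlies SMPs, clusters, and vector units alike; what differs is where the data lives and how the workers communicate.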


| _hm |
1,899,133 | Role-Based Access Control Using Dependency Injection (Add User Roles) | In this video, we’re setting up role-based access control for our FastAPI project. Role-based access... | 0 | 2024-06-24T15:52:49 | https://dev.to/jod35/role-based-access-control-using-dependency-injection-add-user-roles-25k2 | fastapi, python, webdev, api | In this video, we’re setting up role-based access control for our FastAPI project. Role-based access control allows users to perform actions in an application based on their role.
We create roles for users and admins, and then check these roles for every API endpoint. This way, we protect our API endpoints so only users with the right role can do certain things. We use dependency injection to implement this.
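The pattern can be sketched in framework-agnostic Python (the video uses FastAPI's `Depends`; the names below, such as `role_checker` and `delete_book`, are illustrative rather than the project's actual code):

```python
class AccessDenied(Exception):
    """Raised when the current user's role is not permitted."""

def role_checker(allowed_roles):
    # Build an injectable dependency: a callable that validates a user
    # before the endpoint's own logic runs.
    def check(user):
        if user.get("role") not in allowed_roles:
            raise AccessDenied(f"role {user.get('role')!r} not permitted")
        return user
    return check

# Each endpoint declares which roles may call it, analogous to
# adding Depends(role_checker([...])) on a FastAPI route.
admin_only = role_checker(["admin"])

def delete_book(user):
    admin_only(user)  # the injected check runs before the handler body
    return "book deleted"

print(delete_book({"role": "admin"}))  # book deleted
```

Because the check is a plain callable, the same dependency can be reused across any number of endpoints without duplicating the role logic.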
{%youtube _k2M-LpxId8%}
| jod35 |
1,899,130 | Stop click-baiting me to use your workflow | As I've been learning how to code, I've noticed that many developers, especially those in web... | 0 | 2024-06-24T15:47:22 | https://dev.to/andrewwiley57/stop-click-baiting-me-to-use-your-workflow-51i4 | programming, productivity, coding, softwaredevelopment | As I've been learning how to code, I've noticed that many developers, especially those in web development, often have strong opinions about the "right" way to do things. The sheer volume of content can be overwhelming, particularly for new developers who are still finding their footing. One of the most pervasive and annoying trends in this realm is "click-baiting" - using sensational or misleading headlines to lure developers into adopting certain tools or workflows. It's time we address this issue head-on.
My question is, will following their advice benefit my workflow for the project I'm currently working on?
If you find a new way to solve a problem that already has multiple solutions, congratulations!

It's great to share what you've learned, but it's not so great to be told what we should be doing.
If you're new to coding, have been learning for a few months, or haven't even started your first job, just focus on what works for you. If jQuery works for you right now, then use jQuery. Don't feel pressured to learn React just because everyone else is using it. Here is a life TIP: Not everyone will work for big tech companies or those using the most cutting-edge tools. Many of us will work for small or lesser-known companies that still find jQuery useful.
Focus on learning to code, solving problems, and staying focused on your goal, which is getting a job, not learning every JavaScript tool ever created.
I hope this message helps if you're feeling overwhelmed. There's a lot to learn, so focus on what works for you. | andrewwiley57 |
1,899,128 | Mastering SEO with Artificial Intelligence: Tips for 2024 | Introduction AI (Artificial Intelligence) is changing the world of search engine optimization (SEO)... | 0 | 2024-06-24T15:46:45 | https://dev.to/akshay_ramesh/mastering-seo-with-artificial-intelligence-tips-for-2024-5bge |
Introduction
AI (Artificial Intelligence) is changing the world of search engine optimization (SEO) and will be an important part of future SEO strategies. It’s essential for businesses and marketers to keep up with AI advancements in order to stay competitive in the constantly evolving online world.
This article will explore the significance of AI in SEO and provide useful advice on utilizing AI for SEO success in 2024. Through the application of AI tools and strategies, companies can
- Improve their online visibility
- Boost their organic search rankings
- Drive more traffic to their website
Here’s what we’ll cover in this article:
1. Understanding the Impact of AI on Search Engine Optimization
In this section, we’ll look at how AI has changed the SEO game. We’ll discuss:
- Google’s RankBrain algorithm that uses machine learning to understand what users are looking for
- The importance of creating content that matches what search engines want
2. Using AI for Automated SEO Tasks
Here, we’ll explore how AI-powered tools can help with various SEO tasks. These tools can:
- Handle repetitive tasks efficiently
- Analyze data effectively
- Streamline your SEO campaigns
3. Knowing User Intention in a World Driven by AI
User intent refers to what a person is trying to achieve when they perform a search query. In this section, we’ll cover:
- The growing significance of user intent in today’s online landscape
- How search engines are becoming better at understanding context
- Strategies for optimizing your content based on user intent signals
- Aligning your SEO efforts with different stages of the user journey
4. The Role of AI in Voice Search and Visual Search Optimization
Discover how voice assistants like Siri and Alexa are becoming increasingly popular, and how AI technologies like natural language processing are revolutionizing voice search optimization. We’ll also delve into the exciting possibilities of visual search and image recognition technology for improving SEO.
5. Moral Concerns with AI-Powered SEO
Explore the crucial ethical issues that emerge when using AI in SEO, including biases in algorithms and the importance of transparency in determining search engine rankings.
It’s time to embrace the power of AI in SEO while still relying on human expertise. Let’s dive into the world of AI-driven SEO and uncover its potential for transforming your digital marketing strategies in 2024.
1. The Impact of AI on Search Engine Optimization
Exploring AI’s Role in SEO Transformation
Because of its complex algorithms and capabilities, artificial intelligence (AI) has significantly altered the search engine optimization landscape. One prominent example is Google’s RankBrain, an AI algorithm that interprets and processes search queries to deliver more relevant results to users.
User Intention and Machine Learning Algorithms
Machine learning algorithms power AI systems to understand and serve user intent better, thereby influencing organic search rankings. By analyzing patterns in user behavior and preferences, AI can refine search results to match the specific intent behind a query.
AI’s Influence on Content Creation Strategies
AI has revolutionized content creation strategies by enabling natural language generation and optimization techniques. Through AI-driven tools, businesses can tailor their content to align with search engine algorithms, ensuring greater visibility and relevance in search results.
Elevating User Experience with AI-driven Technologies
Search engine rankings heavily depend on user experience, and AI-based solutions are vital for improving the user experience and performance of websites. From personalized recommendations to responsive design elements, AI contributes to creating a seamless and engaging user experience.
The impact of AI on SEO expands beyond the technical aspects of optimization to include the fundamental concepts of efficiently understanding and addressing user wants. As we explore deeper into the world of AI-powered SEO, it becomes clear that utilizing these capabilities is critical to being competitive in the constantly evolving digital marketplace.
2. Using AI for Automated SEO Tasks
AI-driven tools have transformed how we manage and perform SEO tasks. These advanced technologies offer many benefits, including:
- Simplifying procedures: AI tools automate repetitive tasks like keyword research, content optimization, and performance tracking, saving time and effort.
- Enhancing scalability: With their ability to adapt to changing search engine requirements and user behaviors, these tools provide flexible solutions for growing SEO campaigns.
- Delivering actionable insights: By analyzing large amounts of data quickly and accurately, AI-powered analytics tools enable marketers to make informed decisions and develop effective strategies.
Choosing the Right AI Tools for Your Business
When it comes to selecting AI tools for automated SEO tasks, it’s essential to consider your specific needs and goals. Here are some recommendations to help you make the right choice:
- Identify your pain points: Determine which areas of your SEO workflow could benefit most from automation. Whether it’s content creation, link building, or technical audits, focus on finding tools that address these pain points.
- Evaluate tool features: Look for AI solutions that offer the features you require. For example, if you’re looking to optimize content creation, consider tools with natural language processing capabilities.
- Consider integration possibilities: Evaluate the tool’s compatibility with your current systems and processes. Smooth integration can optimize time management and promote effective teamwork.
- Read user reviews: Before making a final decision, take the time to read reviews from other users. Their feedback can provide valuable insights into the tool’s effectiveness and ease of use.
By carefully selecting and implementing the right AI tools, you can supercharge your SEO efforts and achieve better results in less time.
3. Understanding User Intent in a World Dominated by Artificial Intelligence
In an AI-driven world, understanding user intent is crucial for successful search engine optimization (SEO) strategies. Here are some key points to consider:
1. Growing importance of semantic search
Semantic search refers to search engines’ ability to understand the meaning behind a user’s query rather than just matching keywords. With the help of AI and machine learning algorithms, search engines now focus on context and user intent. This means that SEO professionals need to optimize their content for not just specific keywords but also the underlying meaning and intent behind those keywords.
2. Utilizing structured data markup and identifying entities
Structured data markup, such as schema.org, helps search engines understand the content on a webpage better. By using structured data markup, SEO professionals can provide additional information about their content, such as product details or event information, which can enhance the understanding of user intent by search engines.
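For example, a product page might embed schema.org markup as JSON-LD like the following (the values are purely illustrative):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Product",
  "name": "Example Widget",
  "offers": {
    "@type": "Offer",
    "price": "19.99",
    "priceCurrency": "USD"
  }
}
</script>
```

Search engines can read this block directly, which removes the guesswork of inferring product details from the page's visible text.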
3. Matching SEO efforts with the user journey
In an AI-first world, search results are becoming more personalized based on individual user preferences and behavior. SEO professionals need to consider this personalization factor when optimizing their websites for different stages of the user journey. For example:
- During the awareness stage, they could concentrate on creating informative blog posts that address common questions or problems.
- During the consideration stage, they might optimize landing pages with relevant product information or customer reviews.
- And during the decision stage, they may give priority to features such as convenient checkout processes or customer support choices.
In an AI-driven era, comprehending user intent is crucial for successful SEO. It enables businesses to deliver pertinent content that matches user search queries. Employing semantic search, structured data markup, and aligning SEO with the user journey can boost visibility in search results and enhance user experience.
4. The Role of AI in Voice Search and Visual Search Optimization
The Rise of Voice Assistants and Smart Speakers
Voice search has quickly become popular thanks to devices like smart speakers and virtual assistants such as Amazon’s Alexa, Apple’s Siri, and Google Assistant. With more people using voice commands instead of typing, the way we search for information is changing.
The Impact of AI on Voice Search Optimization
Artificial Intelligence (AI) technologies like natural language processing (NLP) are crucial in improving voice search results. NLP helps search engines understand spoken language better, leading to more accurate responses to user queries. This means that businesses need to focus on optimizing their content for long-tail conversational keywords and phrases that match how people speak.
The Potential of Visual Search for SEO
Visual search is an emerging trend in SEO that uses image recognition technology. Instead of typing out a query, users can now upload an image to conduct a search. This opens up new opportunities for businesses to optimize their visual content and make it more discoverable.
Some examples of platforms using visual search technology are Google Lens and Pinterest Lens. These tools use AI algorithms to analyze images and provide relevant search results based on visual similarities. As visual content becomes more prevalent on the internet, it’s essential for brands to incorporate visual search optimization into their SEO strategies.
How AI Can Enhance Voice and Visual Search Optimization
By utilizing AI in voice and visual search optimization, businesses can:
- Understand user intent better through natural language processing
- Optimize content for conversational keywords and phrases
- Improve image recognition for visual search accuracy
- Provide personalized recommendations based on user preferences
This proactive approach to SEO allows companies to adapt their strategies according to changing consumer behaviors while maintaining a competitive edge in the digital landscape.
5. Ethical Considerations in AI-Driven SEO
Artificial intelligence (AI) is transforming search engine optimization (SEO), but we must also address the ethical issues that come with it. Ethical SEO practices are crucial for fair, transparent, and inclusive decision-making by algorithms. Here are some key points to consider:
1. Importance of Ethical Frameworks
As AI becomes more integrated into SEO, we need clear ethical frameworks to guide its use. These frameworks should prioritize fairness, accountability, and transparency. By following ethical guidelines, businesses can ensure that their SEO strategies align with what society values.
As AI becomes more prevalent in SEO, it is vital to prioritize ethical considerations. By implementing ethical frameworks and addressing bias in algorithms, businesses can ensure fairness, inclusiveness, and transparency in their SEO strategies. A responsible approach towards AI-driven SEO will contribute to a more diverse and equitable online ecosystem.
2. Bias in AI Algorithms
One of the main concerns with AI-driven SEO is the potential bias in algorithms. These algorithms learn from large amounts of data, including historical biases present in our society. This can lead to biased search results and a lack of diversity in search engine rankings. It’s important to acknowledge and address these biases, striving for unbiased and inclusive algorithms.
Embracing the Future: Achieving a Balance between Human Expertise and AI Capabilities in SEO
In this age of rapid technological advancements, it is crucial to recognize and embrace the continued importance of human expertise in the field of SEO. While AI technology has revolutionized many aspects of search engine optimization, human insights and strategic planning remain invaluable for achieving optimal results. Here are some key points to consider when balancing human expertise with AI capabilities in SEO:
Strategic Planning
- While AI systems can process large data sets and produce insights, SEO professionals still need to offer strategic guidance.
- Human experts can identify trends, understand business goals, and align SEO strategies accordingly.
- By combining the power of AI-driven tools with human intuition, businesses can create comprehensive strategies that maximize their online presence.
Content Strategy
- While AI algorithms can assist in generating and optimizing content, human expertise is vital for creating engaging and relevant content that resonates with the target audience.
- Human professionals have a deep understanding of their customers’ needs and preferences, enabling them to craft compelling narratives and deliver value-added content.
- A collaborative approach that leverages AI tools for data analysis and human creativity for content creation can lead to superior results.
Audience Engagement
- Building strong connections with the target audience requires a human touch.
- AI-driven chatbots and automated response systems can handle routine queries, but personal interactions and genuine engagement are best delivered by human professionals.
- Understanding customer emotions, cultural nuances, and changing market dynamics are areas where human expertise excels.
To effectively collaborate with AI systems while maintaining a human touch in optimization strategies, here are some recommendations for SEO professionals:
- Continuous Learning: Stay updated with the latest advancements in AI technology and SEO practices. By keeping abreast of industry trends, professionals can identify areas where AI tools can complement their skills.
- Experimentation: Embrace a culture of experimentation by testing new AI-driven tools and techniques. This allows professionals to understand the strengths and limitations of AI systems and identify the most effective ways to integrate them into their SEO workflows.
- Data Interpretation: While AI can process large volumes of data, human experts are essential for interpreting the insights generated. SEO professionals should analyze AI-driven reports and combine them with their domain expertise to make informed decisions.
By striking a balance between human expertise and AI capabilities, businesses can harness the full potential of SEO in the future. The collaboration between humans and machines will enable organizations to optimize their online presence while delivering exceptional user experiences. Remember, AI is a tool that empowers human professionals, not a replacement for their expertise.
Predictions about the Future Intersection of AI and SEO
As AI continues to advance, its intersection with SEO is poised to shape the future of search engine optimization. Predictive analytics and machine learning models are expected to play a significant role in this evolution, empowering businesses to stay ahead of evolving search trends. Here are some predictions for the future of AI in SEO:
- Predictive SEO: With the help of AI algorithms, SEO professionals can leverage predictive analytics to forecast search trends, allowing them to optimize their strategies accordingly. By analyzing historical data and user behavior patterns, AI systems can provide insights into future search trends and help businesses adapt their content and keywords proactively.
- Natural Language Understanding and Generation: As AI technology continues to improve, natural language understanding (NLU) and generation (NLG) are becoming more sophisticated. This has profound implications for SEO, as search engines become better at interpreting user queries and generating relevant, high-quality content. In the future, NLU and NLG capabilities will enable more personalized search results and enhance the overall user experience.
- Hyper-Personalization: AI-powered algorithms will continue to refine their ability to understand individual user preferences and deliver highly personalized search results. This means that SEO professionals will need to focus on creating content that caters to specific user segments rather than generic target audiences. By leveraging AI-driven tools, businesses can tailor their SEO strategies to meet the unique needs and interests of their target customers.
- Automated Content Optimization: AI technologies such as natural language processing (NLP) can automatically analyze and optimize content based on search engine algorithms. In the future, we can expect AI-driven tools to become even more adept at suggesting improvements for on-page elements like meta tags, headers, and keyword usage. This automation will save time for SEO professionals while ensuring that content meets the ever-changing requirements of search engines.
- Advanced User Experience Optimization: AI systems will continue to play a crucial role in enhancing user experience on websites. By analyzing user behavior, AI can identify areas for improvement, such as page load speed, mobile responsiveness, and user engagement. SEO professionals can leverage AI-driven tools to optimize user experience factors and improve their website’s search engine rankings.
The future of AI in SEO looks bright. Predictive analytics, natural language understanding, and advanced personalization capabilities are transforming search engine optimization strategies. As technology keeps progressing, SEO experts need to stay current with the latest AI developments and adjust their approaches to stay ahead in the dynamic online world.
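The predictive-SEO idea above can be sketched as a toy trend extrapolation (illustrative only; real tools use far richer models and signals): fit a line to historical search volume and project it one step forward.

```python
def linear_forecast(history, steps_ahead=1):
    # Fit y = a + b*t by ordinary least squares, then extrapolate.
    n = len(history)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(history) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, history))
    var = sum((t - mean_t) ** 2 for t in ts)
    b = cov / var          # slope: change in volume per period
    a = mean_y - b * mean_t
    return a + b * (n - 1 + steps_ahead)

# Monthly search volume rising by ~10 per month (hypothetical data).
print(round(linear_forecast([100, 110, 120, 130], 1)))  # 140
```

Even this crude model captures the core loop: learn from historical data, project the trend, then adapt content and keywords before the demand arrives.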
Conclusion
AI will play a crucial role in mastering SEO techniques and staying ahead in the ever-changing digital world.
It is critical to remember that AI should be seen as a tool that complements, not replaces, human skill. Although AI algorithms and automation can help with routine SEO tasks and data analysis, human input is still necessary for strategic planning, content strategy, and audience engagement.
[Digital Marketing Strategist in Kannur](akshayramesh.in) is an excellent example of how AI can be applied in real-world SEO scenarios. By merging AI with human knowledge, they develop creative techniques and generate concrete outcomes, showing AI’s significant impact on SEO professionals.
To stay ahead in SEO, it’s crucial to continuously learn and explore the latest advancements in AI and SEO. By staying informed about emerging trends and technologies, SEO professionals can adapt their strategies to meet evolving search engine demands and user preferences.
AI offers exciting opportunities to master techniques for SEO in the future. Businesses that adopt a balanced approach that appreciates human expertise while also accepting AI breakthroughs can discover new opportunities to boost their organic search rankings and achieve long-term success in the world of digital marketing.
[Visit For More](akshayramesh.in)
| akshay_ramesh | |
1,899,127 | Looking for some support! | hey everyone! I’m new to coding and just started learning Python. It’s a bit lonely doing it solo, so... | 0 | 2024-06-24T15:46:32 | https://dev.to/tomdsbr/looking-for-some-support-3ig7 | hey everyone! I’m new to coding and just started learning Python. It’s a bit lonely doing it solo, so I’m looking for some people in the same boat to chat with and support each other. feel free to dm! | tomdsbr | |
1,899,119 | C# PDF Generator Tutorial (HTML to PDF, Merge, Watermark. Extract) | PDF generation is a common requirement in many applications, from generating reports to creating... | 0 | 2024-06-24T15:31:47 | https://dev.to/mhamzap10/c-pdf-generator-tutorial-html-to-pdf-merge-watermark-extract-h3i | csharp, dotnetcore, ironpdf, softwaredevelopment | [PDF](https://en.wikipedia.org/wiki/PDF) generation is a common requirement in many applications, from generating reports to creating invoices and documentation. IronPDF is a powerful library that simplifies this process in C#. This article covers everything you need to know about using [IronPDF](https://ironpdf.com/) to generate PDFs, including installation, basic usage, advanced features, and common scenarios.
## How to Generate a PDF file in C#:
1. Create or open an existing project in Visual Studio.
2. Install the IronPDF library.
3. Initialize the Chrome PDF Renderer object.
4. Generate a PDF using the RenderHtmlAsPdf method.
5. Save the PDF file.
## Introduction to IronPDF
[IronPDF](https://ironpdf.com/) is a .NET PDF library designed to [create](https://ironpdf.com/blog/using-ironpdf/csharp-html-to-pdf-example/), [edit](https://ironpdf.com/tutorials/csharp-edit-pdf-complete-tutorial/), and [manipulate](https://ironpdf.com/how-to/extract-text-and-images/) PDF files. It integrates seamlessly with .NET applications, providing a robust set of features for generating high-quality PDF documents from various sources, such as HTML, images, and raw text.
### Key Features of IronPDF
1. [HTML to PDF](https://ironpdf.com/tutorials/html-to-pdf/) conversion
2. Support for CSS and JavaScript
3. PDF [merging](https://ironpdf.com/examples/merge-pdfs/) and [splitting](https://ironpdf.com/examples/split-pdf-pages-csharp/)
4. Adding [headers](https://ironpdf.com/examples/html-headers-and-footers/), [footers](https://ironpdf.com/examples/html-headers-and-footers/), and [watermarks](https://ironpdf.com/examples/pdf-watermarking/)
5. [Extracting](https://ironpdf.com/examples/reading-pdf-text/) text and images from PDFs
6. [Editing](https://ironpdf.com/examples/editing-pdfs/) existing PDFs
### Installing IronPDF
To get started with IronPDF, you need to install the library via NuGet. Open your project in Visual Studio and run the following command in the Package Manager Console:
```
Install-Package IronPdf
```
This command will install IronPDF with all required dependencies.

## Basic PDF Generation
### Creating a PDF Document from an HTML String
One of the most common uses of IronPDF is converting HTML to PDF. Here’s a simple example:
```
var Renderer = new ChromePdfRenderer();
var pdf = Renderer.RenderHtmlAsPdf("<h1>Hello, IronPDF!</h1>");
pdf.SaveAs("output.pdf");
```
The code initializes a ChromePdfRenderer instance to render HTML content into a PDF. It then converts the HTML string `<h1>Hello, IronPDF!</h1>` into a PDF document using the RenderHtmlAsPdf method. Finally, it saves the generated PDF file to the disk with the name output.pdf.

### Creating a PDF from a URL
You can also generate a PDF directly from the URL:
```
var Renderer = new ChromePdfRenderer();
var pdf = Renderer.RenderUrlAsPdf("https://www.adobe.com/acrobat/about-adobe-pdf.html");
pdf.SaveAs("pdf_from_url.pdf");
```
The code creates an instance of ChromePdfRenderer to render a webpage into a PDF. It converts the content of the URL "https://www.adobe.com/acrobat/about-adobe-pdf.html" into a PDF document using the RenderUrlAsPdf method. Finally, it saves the generated PDF to the file pdf_from_url.pdf.

### Generating a PDF from an HTML File
You can also generate a PDF directly from the HTML file. The HTML file we will use for creating PDF is as:

Here is the code to Generate a PDF file from an HTML file.
```
var Renderer = new ChromePdfRenderer();
var pdfFromHtmlFile = Renderer.RenderHtmlFileAsPdf("index.html"); // file path
pdfFromHtmlFile.SaveAs("pdf_from_html_file.pdf");
```
The code creates a ChromePdfRenderer instance to convert an HTML file into a PDF. It reads the content of the local HTML file index.html and converts it into a PDF document using the RenderHtmlFileAsPdf method. The resulting PDF is saved as pdf_from_html_file.pdf. A practical use case is generating a PDF version of a locally stored webpage or report for distribution or printing.

## Advanced Features
### Using CSS and JavaScript
IronPDF supports CSS and JavaScript, allowing you to create styled and interactive PDFs.
```csharp
var Renderer = new ChromePdfRenderer();
var htmlContent = @"
<html>
  <head>
    <style>
      h1 { color: blue; }
    </style>
  </head>
  <body>
    <h1>Hello, IronPDF with CSS!</h1>
  </body>
</html>";
var pdf = Renderer.RenderHtmlAsPdf(htmlContent);
pdf.SaveAs("styled_output.pdf");
```
The code initializes a ChromePdfRenderer to convert HTML content with CSS styling into a PDF. It defines an HTML string that includes a `<style>` block to make the `<h1>` tag blue. This HTML content is rendered into a PDF using the RenderHtmlAsPdf method, and the resulting PDF is saved as styled_output.pdf. A practical use case is generating a styled PDF document, such as a report or presentation, directly from HTML and CSS.

### Adding Headers and Footers
IronPDF allows you to modify PDF files and add headers and footers to your PDF documents:
```csharp
PdfDocument pdf = new PdfDocument("styled_output.pdf");

// Create text header
TextHeaderFooter textHeader = new TextHeaderFooter
{
    CenterText = "This is the header!",
};

// Create text footer
TextHeaderFooter textFooter = new TextHeaderFooter
{
    CenterText = "This is the footer!",
};

// Add text header and footer to the PDF
pdf.AddTextHeaders(textHeader);
pdf.AddTextFooters(textFooter);
pdf.SaveAs("styled_output.pdf");
```
The code loads an existing PDF document named styled_output.pdf and creates text headers and footers with the specified center text. The header text is "This is the header!" and the footer text is "This is the footer!". These headers and footers are added to the PDF using the AddTextHeaders and AddTextFooters methods, respectively. Finally, the modified PDF is saved back to styled_output.pdf. A practical use case is adding uniform headers and footers to a PDF for branding or informational purposes, such as company names or page numbers.

### Merging PDFs
IronPDF makes it easy to merge multiple PDFs into a single document:
```csharp
var pdfdoc_a = new PdfDocument("styled_output.pdf");
var pdfdoc_b = new PdfDocument("pdf_from_url.pdf");
var merged = PdfDocument.Merge(pdfdoc_a, pdfdoc_b);
merged.SaveAs("merged_pdf.pdf");
```
In just a few lines, the code loads two existing PDF documents, styled_output.pdf and pdf_from_url.pdf, and merges them into a single document using PdfDocument.Merge. The combined document is then saved as merged_pdf.pdf. This approach is useful for combining content from multiple existing PDFs into one.

### Adding Watermarks
You can add watermarks to your PDFs using IronPDF:
```csharp
var pdf = new PdfDocument("styled_output.pdf");
pdf.ApplyWatermark("<h2 style='color:red'>MY WATER MARK</h2>", 30, IronPdf.Editing.VerticalAlignment.Middle, IronPdf.Editing.HorizontalAlignment.Center);
pdf.SaveAs("watermark_output.pdf");
```
The code loads an existing PDF document named styled_output.pdf and applies a watermark with the text "MY WATER MARK" styled in red. The watermark is rendered at 30% opacity and aligned to the center both vertically and horizontally using IronPdf.Editing.VerticalAlignment.Middle and IronPdf.Editing.HorizontalAlignment.Center. The modified PDF is then saved as watermark_output.pdf. This process is useful for adding custom watermarks with branding or ownership information to PDF documents.

### Extracting Text from a PDF
IronPDF can also extract text and images from existing PDFs. The following code snippet demonstrates how IronPDF can extract all text content from a PDF file:
```csharp
var pdf = new PdfDocument("styled_output.pdf");
var text = pdf.ExtractAllText();
Console.WriteLine(text);
```
The code initializes a PdfDocument object by loading the PDF file named "styled_output.pdf". It then extracts all textual content from the PDF using the ExtractAllText method and stores it in the variable text. Finally, it prints the extracted text to the console. This process showcases how IronPDF can programmatically retrieve and manipulate text content from PDF documents.

### Extracting Images from a PDF Document
The following code snippet demonstrates how IronPDF can extract all images from a PDF file:
```csharp
var pdf = new PdfDocument("pdf_from_url.pdf");
var images = pdf.ExtractAllImages();
// Make sure the output directory exists before exporting
System.IO.Directory.CreateDirectory("PDF_Images");
for (int i = 0; i < images.Count; i++)
{
    // Export the extracted images
    images[i].SaveAs($"PDF_Images/image{i}.png");
}
```
The code initializes a PdfDocument object by loading the local file pdf_from_url.pdf (the PDF we generated earlier from a URL). It then extracts all images embedded within the PDF using the ExtractAllImages method. Each extracted image is saved as a PNG file in a directory named "PDF_Images", with filenames numbered sequentially. This demonstrates how IronPDF can automate the extraction and export of images from PDF documents for further processing or analysis.

## Conclusion
In conclusion, IronPDF provides a robust solution for C# PDF generation, offering a comprehensive set of tools to create, edit, and manipulate PDF documents directly from .NET applications. From converting HTML to PDF and merging multiple PDFs to adding headers, footers, and watermarks and extracting content, IronPDF simplifies complex tasks with ease and efficiency. Thanks to its seamless integration and powerful features, it stands as an essential C# PDF generator library, enabling developers to add professional-grade PDF functionality to their applications.
For developers looking to explore its capabilities, IronPDF offers [a free trial](https://ironpdf.com/#trial-license), allowing them to evaluate its features and integration within their projects. Whether for generating reports, creating invoices, or archiving documents, IronPDF's versatility makes it a valuable tool in various scenarios. Additionally, commercial [licenses](https://ironpdf.com/licensing/) are available for those needing full access to advanced features and support.
| mhamzap10 |
1,899,126 | Buy verified cash app account | Buy verified cash app account Cash app has emerged as a dominant force in the realm of mobile banking... | 0 | 2024-06-24T15:45:39 | https://dev.to/menohiw889/buy-verified-cash-app-account-26mh | webdev, javascript, beginners, programming | Buy verified cash app account
Cash App has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified Cash App accounts, we take pride in our ability to deliver accounts with substantial limits, Bitcoin enablement, and an unmatched level of security.
Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 (980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com | menohiw889 |
1,898,498 | Mobile app development with LiveView Native and Elixir | What is LiveView Native LiveView Native is a new framework by Dockyard. Do more... | 0 | 2024-06-24T15:44:03 | https://dev.to/rushikeshpandit/mobile-app-development-with-liveview-native-and-elixir-4f79 | liveviewnative, elixir, mobile, phoenix | ## What is LiveView Native
LiveView Native is a new framework by Dockyard.
### Do more with less, with LiveView Native
As their tag line suggests, it allows Elixir developers to create mobile applications using LiveView, enabling teams to seamlessly build both web and mobile applications with Elixir. Similar to how React Native works for JavaScript, LiveView Native enhances team efficiency by eliminating the need for multiple frameworks or programming languages.
According to DockYard CEO Brian Cardarella, "With Phoenix handling your web views, your mobile native views, and serving as your server-side framework, it creates a more unified and single-stack team."
LiveView Native empowers even small engineering teams to develop custom web and mobile applications, leveraging Elixir and LiveView’s speed, reliability, and concurrency benefits. It helps meet product deadlines faster by reducing obstacles and potential delays since one team can handle both web and native development.
The benefits extend beyond development, as a single team can manage both web and mobile applications, removing barriers related to multiple frameworks and languages, thus achieving more with fewer resources.
Additionally, LiveView Native doesn’t force developers to use a single UI across all platforms. Whether building for iOS or Android, it provides tools to create applications that look and function like true native products, using templates that utilize the native component UI for each platform.
## Setup
**Prerequisite**: You have already installed Elixir and the Phoenix framework, along with Xcode and Android Studio, on your machine.
Since the Android version of LiveView Native is not stable yet, we will set up only the iOS app in this blog.
With that in place, let's start building a LiveView Native app on top of a Phoenix application.
First, create a Phoenix app with the following command:
```shell
mix phx.new native_demo
```
The actual LiveView Native integration starts from here.
Open the project folder in your favourite code editor; I am using VS Code in this case.
Make sure that your project is using Elixir version 1.15 or above.

Now start making changes in the code as shown below.
`mix.exs`
```elixir
{:live_view_native, github: "liveview-native/live_view_native", branch: "main", override: true},
{:live_view_native_stylesheet, github: "liveview-native/live_view_native_stylesheet", branch: "main"},
{:live_view_native_swiftui, github: "liveview-native/liveview-client-swiftui", branch: "main"},
{:live_view_native_live_form, github: "liveview-native/liveview-native-live-form"},
{:live_view_native_jetpack, github: "liveview-native/liveview-client-jetpack", branch: "main"}
```
Then add the following lines to `config.exs` (I somehow missed this step initially; thanks to @maratk):
```elixir
config :live_view_native, plugins: [
  LiveViewNative.SwiftUI
]
```
After this, install the dependencies manually using the following command.
```shell
mix deps.get
```
LiveView Native adds some mix tasks to your project to help you with the setup.
Let's set up LiveView Native in our app using the following command in the terminal:
```shell
mix lvn.setup
```
This command will create some necessary files in our project.
You can learn more about the lvn tasks with the command below.
```shell
mix help lvn.swiftui.gen
```
After that, run the following command in the terminal:
```shell
mix lvn.swiftui.gen
```
This command will create a new SwiftUI project under the native directory, which gets created automatically in the root of the project. It will also create some core components for SwiftUI.
In case you lose that output in the terminal, no need to worry. You can use the following command to see it again.
```shell
mix lvn.gen --no-copy
```
This command shows the list of changes that we are supposed to make manually in our application. Please make sure you make all the changes listed under the header `LVN - Required`.
This is the most crucial step in the LiveView Native integration.
Now, head over to the `components` directory of the app, create a new directory named `layouts_swiftui`, and add two new files named `app.swiftui.neex` and `root.swiftui.neex` with the following content.
`root.swiftui.neex`
```
<.csrf_token />
<Style url={~p"/assets/app.swiftui.styles"} />
<NavigationStack>
<%= @inner_content %>
</NavigationStack>
```
`app.swiftui.neex`
```
<%= @inner_content %>
```
Then create a new directory named `styles` at the same level as the `components` directory and add two new files named `app.jetpack.ex` and `app.swiftui.ex` with the following content.
`app.jetpack.ex`
```elixir
defmodule NativeDemoWeb.Styles.App.Jetpack do
  use LiveViewNative.Stylesheet, :jetpack

  ~SHEET"""
  """
end
```
`app.swiftui.ex`
```elixir
defmodule NativeDemoWeb.Styles.App.SwiftUI do
  use LiveViewNative.Stylesheet, :swiftui

  ~SHEET"""
  """
end
```
Now, create a new directory named `live` at the same level as the `components` directory and add two new files named `home_live.ex` and `home_live.swiftui.ex` with the following content.
`home_live.ex`
```elixir
defmodule NativeDemoWeb.HomeLive do
  use NativeDemoWeb, :live_view

  use LiveViewNative.LiveView,
    formats: [:swiftui],
    layouts: [
      swiftui: {NativeDemoWeb.Layouts.SwiftUI, :app}
    ]

  def render(assigns) do
    ~H"""
    <div>
      Hello from Web
    </div>
    """
  end
end
```
`home_live.swiftui.ex`
```elixir
defmodule NativeDemoWeb.HomeLive.SwiftUI do
  use NativeDemoNative, [:render_component, format: :swiftui]

  def render(assigns, _interface) do
    ~LVN"""
    <VStack id="hello-ios">
      <HStack>
        <Text>Hello iOS!</Text>
      </HStack>
    </VStack>
    """
  end
end
```
Then, open `router.ex` and add the following code.
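The original post shows the router change as a screenshot. As a rough sketch (the exact scope and pipeline in the post may differ), a route for the `HomeLive` module created above could look like this:

```elixir
scope "/", NativeDemoWeb do
  pipe_through :browser

  live "/home", HomeLive
end
```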

After making those changes, the folder structure of the project's `lib` directory will look something like this.

After this, head over to the `native/swiftui` directory and open the file with the `.xcodeproj` extension. When you try to run the app on the simulator, Xcode will show a popup like the one below.

All you have to do is click on `Trust & Enable All`.
Then try to re-run the application from Xcode; you will see some errors as shown below.

Select any error and you will see the following popup.

Click the `Trust & Enable` button. Perform this step for all the errors.
Once you are done enabling all the macros, build and run the app.
If everything is done properly, you should see the following screen on the iOS simulator,

and if you open `http://localhost:4000/home` in your browser, you should see the following.

If you can see this on your simulator and browser, congratulations! You have successfully integrated LiveView Native into a Phoenix application.
You can find sample code on [Github](https://github.com/rushikeshpandit/live_view_native_demo)
If you face any issues, try deleting the `_build` directory and recompiling with `mix compile`.
If you have any suggestions or doubts, or get stuck somewhere in this process, feel free to reach out to me via one of the following methods.
LinkedIn : https://www.linkedin.com/in/rushikesh-pandit-646834100/
GitHub : https://github.com/rushikeshpandit
Portfolio : https://www.rushikeshpandit.in
In my next blog, I will cover how to handle state in web and mobile apps when using LiveView Native.

Stay tuned!!!
#myelixirstatus , #liveviewnative , #dockyard , #elixir , #phoenixframework
| rushikeshpandit |
1,899,124 | Exploring the "legendary-dollop" Repository: An SVG Generator | The legendary-dollop repository, created by charudatta10, is an intriguing project that generates... | 0 | 2024-06-24T15:43:42 | https://dev.to/charudatta10/exploring-the-legendary-dollop-repository-an-svg-generator-4388 | svg, github, generator, basic | The **legendary-dollop** repository, created by [charudatta10](https://github.com/charudatta10), is an intriguing project that generates SVGs. Whether you're designing banners, badges, or other visual elements, this tool can be a valuable addition to your toolkit.
## Key Features
1. **SVG Generation:**
- The primary purpose of this repository is to create SVGs programmatically.
- SVGs (Scalable Vector Graphics) are versatile, resolution-independent images that can be easily customized and scaled without loss of quality.
2. **Built with Python:**
- The project is implemented in Python.
- It leverages the Flask web framework for serving the SVGs and uses Waitress as the production server.
## Getting Started
1. **Installation:**
- You have two options:
- **Binary Installation:**
- Download the binary file from the [releases](https://github.com/charudatta10/legendary-dollop/releases) section.
- Double-click the binary to run the application.
- **Source Installation:**
- Clone the repository using:
```
gh repo clone charudatta10/legendary-dollop
```
- Explore the code and set up your development environment.
2. **Usage:**
- The main entry point is `api/app.py`.
- Customize the SVG generation logic according to your needs.
- Refer to the [README](https://github.com/charudatta10/legendary-dollop/blob/main/README.md) for detailed instructions.
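To get a feel for what programmatic SVG generation looks like in plain Python, here is a minimal, hypothetical badge generator. None of these names come from the repository (its real logic lives in `api/app.py` and is served through Flask); this is only an illustration of the general idea:

```python
# Illustrative only: a minimal SVG "badge" generator in plain Python.
# Function and parameter names here are hypothetical, not taken from the repo.
def make_badge(label: str, color: str = "blue") -> str:
    """Return a small self-contained SVG badge as a string."""
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="140" height="20">'
        f'<rect width="140" height="20" rx="3" fill="{color}"/>'
        f'<text x="70" y="14" fill="white" text-anchor="middle" '
        'font-family="sans-serif" font-size="11">'
        f"{label}</text></svg>"
    )

if __name__ == "__main__":
    # Write or serve this string as image/svg+xml and any browser renders it.
    print(make_badge("build passing", "green")[:40])
```

A web framework like Flask would simply return such a string with the `image/svg+xml` content type.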
## Conclusion
The **legendary-dollop** repository provides a straightforward way to create SVGs dynamically. Whether you're a designer, developer, or hobbyist, give it a try and unleash your creativity!
Explore the repository: [charudatta10/legendary-dollop](https://github.com/charudatta10/legendary-dollop)
Happy SVG crafting! 🎨🚀
| charudatta10 |
1,899,122 | Efficiently Managing Multi-Directory Repositories with GitHub Actions CI/CD Pipeline | Working with repositories that house multiple subdirectories, each representing an independent... | 0 | 2024-06-24T15:42:01 | https://dev.to/sepiyush/efficiently-managing-multi-directory-repositories-with-github-actions-cicd-pipeline-1mde | github, githubactions, cicd | Working with repositories that house multiple subdirectories, each representing an independent service deployed on AWS Lambda, presents unique challenges. One of the primary issues is determining which services need deployment based on changes within their respective directories. In this blog, I will share how I tackled this challenge by creating an efficient CI/CD pipeline using GitHub Actions, divided into two key jobs: detecting changes and building and deploying the necessary services.
## Challenges with Multi-Directory Repositories
When dealing with a repository structured with multiple subdirectories, each corresponding to an independent service, it becomes crucial to identify changes at a granular level. Each service must be individually assessed for modifications to ensure that only the necessary components are built and deployed. This selective deployment helps optimize the CI/CD process, saving time and resources.
## Solution: Dividing the Pipeline into Two Jobs
To address this issue, I designed the CI/CD pipeline with two primary jobs:
1. **detect-changes**: This job identifies which folders (services) have changes using git diff and stores them in an array.
2. **build-and-deploy**: This job takes the array generated in the previous step, creates zip packages, and deploys them as Lambda functions.
**Workflow Configuration**
Here is a detailed breakdown of the GitHub Actions workflow configuration:
```yml
name: lambda package update

on:
  push:
    branches:
      - dev
    paths:
      - "**"

jobs:
  detect-changes:
    runs-on: ubuntu-latest
    outputs:
      services: ${{ steps.set-services.outputs.services }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Set services variable
        id: set-services
        run: |
          echo "Checking for changes..."
          services="[]"
          current_commit=${{ github.sha }}
          previous_commit=${{ github.event.before }}
          for service in serviceA serviceB serviceC; do
            if git diff --name-only "$previous_commit" "$current_commit" | grep -q "^${service}/"; then
              services=$(jq -c --arg service "$service" '. + [$service]' <<< "$services")
            fi
          done
          echo "::set-output name=services::$services"

      - name: Debug services variable
        run: echo ${{ steps.set-services.outputs.services }}

  build-and-deploy:
    needs: detect-changes
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    if: ${{ needs.detect-changes.outputs.services != '[]' }}
    strategy:
      matrix:
        service: ${{ fromJson(needs.detect-changes.outputs.services) }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Set up Node.js
        uses: actions/setup-node@v2
        with:
          node-version: "20"

      - name: Install dependencies
        run: |
          cd ${{ matrix.service }}
          npm install --omit=dev

      - name: Create zip package
        run: |
          cd ${{ matrix.service }}
          zip -r ../${{ matrix.service }}.zip node_modules src index.js

      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: role-arn
          aws-region: region

      - name: Deploy to AWS Lambda
        run: |
          aws lambda update-function-code --function-name ${{ matrix.service }}-${{ github.ref_name }}-api --zip-file fileb://${{ matrix.service }}.zip
```
## Key Components Explained
**Detect Changes Job**
- **Checkout Repository**: Checks out the repository to access the code.
- **Set Services Variable**: Uses git diff to identify which service directories have changes and sets the services variable accordingly. The services array is then output for use in the next job.
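To make the detection logic concrete, here is a local dry-run of the same idea in plain shell. The changed-file list is faked here (in the workflow it comes from `git diff --name-only`), and the JSON array is built without `jq` so the snippet runs anywhere:

```shell
# Fake the output of `git diff --name-only <before> <sha>`:
changed_files="serviceA/index.js
serviceC/src/handler.js"

services="[]"
for service in serviceA serviceB serviceC; do
  # A service is "changed" if any changed file lives under its directory
  if echo "$changed_files" | grep -q "^${service}/"; then
    if [ "$services" = "[]" ]; then
      services="[\"$service\"]"
    else
      # strip the closing bracket and append the new entry
      services="${services%]},\"$service\"]"
    fi
  fi
done

echo "$services"
```

With the fake input above, this prints `["serviceA","serviceC"]`, which is exactly the shape `fromJson` expects in the matrix of the second job.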
**Build and Deploy Job**
- **Conditional Execution**: This job only runs if there are changes detected in the previous step.
- **Matrix Strategy**: Iterates over each service that has changes.
- **Install Dependencies and Create Zip Package**: Installs the necessary dependencies and creates a zip package for each modified service.
- **AWS Deployment**: Deploys the zip package to AWS Lambda, updating the function code.
## Benefits and Optimization
By implementing this two-step CI/CD pipeline, I achieved a more efficient and optimized deployment process. This setup ensures that only the services with changes are built and deployed, significantly reducing unnecessary builds and deployments. Additionally, using GitHub Actions' matrix strategy allows for parallel processing of multiple services, further speeding up the deployment process.
## Conclusion
Managing multi-directory repositories with independent services can be complex, but with a well-structured CI/CD pipeline, you can streamline the process. By detecting changes and conditionally deploying only the necessary services, you can save time and resources, making your development workflow more efficient. This approach can be adapted to various use cases, providing a scalable solution for complex repository structures. | sepiyush |
1,899,120 | Como Transformar-se em uma Máquina de Aprender: Um Guia Pragmático | Disclaimer Esse post foi concebido pela IA Generativa em cima da transcrição do episódio original no... | 0 | 2024-06-24T15:37:07 | https://dev.to/dev-mais-eficiente/como-transformar-se-em-uma-maquina-de-aprender-um-guia-pragmatico-31l | **Disclaimer**
This post was generated by Generative AI from the transcript of the original episode on the Dev Eficiente channel. A quick review was done to adapt the content. [You can watch the full video on the channel itself.](https://www.youtube.com/watch?v=Cw9mcZSQDrY&list=PLVHlvMRWE0Y7_5jsAtVs44ZUNDOr5csx-&index=53)
## The Rule of the Game
The basic rule is simple: those who study more tend to go further. Having a pragmatic study method, one that can be applied to anything you want to learn, is essential. Although our focus here is professional, this method can also be applied to hobbies and other areas of interest.
## The Importance of Being a Learning Machine
Being a learning machine is a skill increasingly sought by the market. It makes a difference in your career, allowing you to move around and adapt to new places. I personally built this skill over time, by studying a lot and developing an effective method.
## Elements of a Learning Objective
**A Well-Defined Objective:** A learning objective should be specific. For example, instead of "I want to learn Spring Boot", define "I want to build super basic CRUDs with Spring Boot".
**Level of Depth:** Establish the level of depth you want to reach. Do you want to understand, explain, or be able to build something? Each level demands a different degree of practice and comprehension.
**Tools and Technologies:** Define which technologies and tools you will use. For example, "I want to build super basic CRUDs using Spring Boot and Spring Data JPA".
**Context of Use:** The context in which you will use the skill is crucial. For example, "I need this skill to compete for junior dev positions at companies that ask me to demonstrate this kind of knowledge".
**Acceptance Criteria:** Establish acceptance criteria so you know when you have reached your objective. For example, "I consider myself ready when a mid-level developer evaluates me and confirms I am doing well". You can establish multiple acceptance criteria.
## Conclusion
If you want to study more effectively, accelerate your professional development, and increase your adaptability, I believe this method can be very useful. If you have any questions, leave a comment and I will be here to help. | asouza |
1,899,117 | Tricky Golang interview questions - Part 4: Concurrent Consumption | I want to discuss an example that is very interesting. I was surprised that many experienced... | 0 | 2024-06-24T15:28:39 | https://dev.to/crusty0gphr/tricky-golang-interview-questions-part-4-concurrent-consumption-34oe | interview, go, programming, tutorial | I want to discuss an example that is very interesting. I was surprised that many experienced developers were unable to answer it correctly. This example involves buffered channels and concurrency.
**Question: What will happen when we run this code?**
```go
package main
import "fmt"
func main() {
    ch := make(chan int, 4)
    go func() {
        ch <- 1
        ch <- 2
        ch <- 3
        ch <- 4
        ch <- 5
        close(ch)
    }()
    for num := range ch {
        fmt.Println(num)
    }
}
```
Let's analyse this piece of code. What we have here:
- a buffered channel `ch` of type `int` with a capacity of 4 is created
- a goroutine sends five values to the channel and then closes it
- the first 4 sends will fill the buffer
- the fifth send operation will block because the channel is at capacity
This idea seems straightforward, and you give the interviewer a confident answer: **Since the buffer capacity is only 4, the fifth send operation will block, causing a deadlock. Channels in Go block when they are full and more values are being sent.**
```go
fatal error: all goroutines are asleep - deadlock!
```
At this point, the interviewer will kindly ask you to run the code and check the results.
```go
[Running] go run "main.go"
1
2
3
4
5
[Done] exited with code=0 in 0.318 seconds
```
And, surprise surprise, we see that this code actually runs without a deadlock. Many interviewees, even those with significant experience, struggle to explain this unusual behaviour. There must be a deadlock in this code!
This raises another question: What can you change to create a deadlock in this code? It's quite simple: just remove the anonymous goroutine and send integers into the channel directly within the `main` function.
```go
package main
import "fmt"
func main() {
    ch := make(chan int, 4)
    ch <- 1
    ch <- 2
    ch <- 3
    ch <- 4
    ch <- 5 // deadlock at line:12
    close(ch)
    for num := range ch {
        fmt.Println(num)
    }
}
```
Now, we've created a deadlock:
```go
[Running] go run "main.go"
fatal error: all goroutines are asleep - deadlock!
goroutine 1 [chan send]:
main.main()
main.go:12 +0x78
[Done] exited with code=0 in 0.318 seconds
```
Let me explain why this is happening. Why does introducing a simple anonymous goroutine fix the problem?
Let's first define how concurrency works in Go and what channels are:
- Concurrency in Go is built around goroutines and channels, which provide a simple and powerful model for concurrent programming.
- A goroutine is a lightweight thread managed by the Go runtime. Goroutines are created using the go keyword followed by a function call.
- Channels provide a way for goroutines to communicate with each other and synchronize their execution. Channels can be used to send and receive values between goroutines.
Channels can be buffered or unbuffered:
- **Unbuffered channels** have a capacity of 0. When a sender sends a value on an unbuffered channel, it will block until a receiver is ready to receive the value.
- **Buffered channels** have a capacity greater than 0. When a sender sends a value on a buffered channel, it will only block if the buffer is full.
Now that we understand how concurrency works in Go, let’s explore an important concept related to channels.
### Concurrent Consumption
The main idea behind concurrent consumption is to ensure that values sent to a channel are being read (or consumed) while they are being sent. This prevents the channel from getting full and blocking further send operations. Concurrent consumption in Go involves having one or more goroutines that send data to a channel while one or more other goroutines read from the same channel. This pattern is commonly used to handle situations where production (sending data) and consumption (receiving data) happen at different rates.
To ensure a smooth concurrent flow in this scenario, we need to clearly define producers and consumers that run concurrently. In go, the `main` function starts executing immediately upon the program's start, in its own goroutine.
1. In the example with a deadlock, there's a single goroutine that acts as both **the producer and consumer**. Values are sent into the channel and received from it within the same goroutine, sequentially. The deadlock occurs because the channel fills up before anything is consumed: the fifth send blocks forever, so the receiving loop further down is never reached.
2. In the first example, **the producer and consumer are separated**: they run in different goroutines. By using an anonymous goroutine to send values, the `main` function is free to read from the channel concurrently. This prevents the channel from filling up and blocking further sends, avoiding a deadlock situation.
So the correct answer is:
**The program will output all values that were sent into the buffered channel.**
Why?
**Because we have two goroutines: one to produce data (the anonymous goroutine) and another to consume it (main). This ensures the channel is drained while values are being sent, so it never stays full.**
It's that easy! | crusty0gphr |
1,899,115 | Security Measures to Consider When Choosing a Cloud Call Center | Undoubtedly, in this digital era, practically all organisations are adopting digital transformation.... | 0 | 2024-06-24T15:22:04 | https://dev.to/pradipmohapatra/security-measures-to-consider-when-choosing-a-cloud-call-center-1d36 | programming, news, productivity, api | Undoubtedly, in this digital era, practically all organisations are adopting digital transformation. This is the rationale behind businesses moving from classic call centre systems to cloud-based ones. Thus, instead of wasting time with on-premise systems, companies are relying on cloud-based software.
Cloud-based solutions improve scalability and enable remote work, and are already in use at 66% of call centers. However, with rising cyber-attacks and security threats, it becomes even more important to consider security measures when choosing a [cloud contact center](https://www.knowlarity.com/voice/cloud-call-center-solutions). So, to help you in the process, here is a list of security measures to consider when looking for a cloud call center:
**Important Security Measures to Consider**
Encryption for data protection, access controls such as multi-factor authentication, industry compliance, intrusion detection/prevention systems, regular audits, and vendor security assessments are some of the essential security features to look for in cloud-based contact center solutions. These safeguards protect private client information and defend against potential online threats.
**● Access Control and Authentication**
The fundamental security features of cloud-based call center solutions include access control and authentication. Administrators can restrict access to important information and resources based on user roles and permissions by establishing rules and policies through access control.
Reliable authentication mechanisms, such as multi-factor authentication, enhance security by requiring the user to present multiple forms of identity to gain access to the system. These safeguards help guarantee that client information is protected from potential risks and unauthorized access.
**● Data Protection and Encryption**
Data protection and encryption play a significant role in the security of cloud-based contact center solutions. When sensitive consumer data is transmitted or stored, encryption converts it into a secure format that prevents unwanted parties from accessing it.
Customer data security and privacy are ensured by these procedures, which guard against interception and unintentional disclosure of personal information, payment information, and call logs.
**● Regular Compliance and Security Audits**
Maintaining the integrity and security of cloud-based contact center solutions requires regular security assessments and compliance. Security audits help organizations prevent breaches: they are systematic assessments of security controls and procedures aimed at identifying vulnerabilities and shortcomings.
Legal requirements are met and sensitive consumer data is kept safe when industry rules, such as general data protection legislation, are followed. These steps help maintain the companies' customers' trust while safeguarding the confidentiality and accuracy of consumer information.
**The Role of Cloud Call Centre Providers in Ensuring Security**
Cloud call center security is heavily reliant on the suppliers. Their responsibility includes implementing and maintaining robust security protocols designed to safeguard customer information and infrastructure. Here are the important roles the cloud contact center providers play in ensuring security:
**● Understanding the Security Protocols of the Provider**
The security measures offered by the supplier are an important aspect to take into account when choosing a cloud-based call center system. To protect sensitive client data and guarantee the stability of their systems, the provider is responsible for putting in place and maintaining strict security measures.
Access controls, intrusion detection, data encryption, and regular security audits are the main security precautions to be considered. Providers must also give all information about their security efforts and follow industry norms like basic data protection laws.
You can choose a solution that prioritizes operational safety and data security by making well-informed decisions based on a comprehensive assessment of the security measures implemented by the supplier.
**● Selecting the Best Software for Cloud Call Centers**
The right cloud contact center software must be selected to guarantee optimal operation and the accomplishment of company goals. Consider factors such as feature set, affordability, scalability, flexibility, ease of deployment, integration capabilities, and security features when selecting a solution.
Look for a solution that meets the specific needs and objectives of your business. In addition, the vendor's reputation and reliability should be considered along with their track record of service and support.
Through careful consideration of these variables and thorough research, you can choose cloud contact center software that meets your needs and helps your company succeed.
**Final thoughts**
Selecting a cloud call center provider is an important choice that can help a company in the long run by improving data security, cutting expenses, and opening up innovation opportunities.
However, it is important to carry out a comprehensive assessment of possible providers based on established standards to guarantee the best match for your company. By conducting a thorough assessment, companies can make well-informed decisions and establish a fruitful collaboration with cloud service providers who offer secured services.
| pradipmohapatra |
1,899,340 | Free Online MS-900 Webinar: Prepare for the Exam! | Ka Solution offers a course to help you learn more about the cloud service... | 0 | 2024-06-28T13:38:23 | https://guiadeti.com.br/webinar-ms-900-online-gratuito-preparo-para-exame/ | cursogratuito, cloud, cursosgratuitos, microsoft | ---
title: Free Online MS-900 Webinar: Prepare for the Exam!
published: true
date: 2024-06-24 15:21:44 UTC
tags: CursoGratuito,cloud,cursosgratuitos,microsoft
canonical_url: https://guiadeti.com.br/webinar-ms-900-online-gratuito-preparo-para-exame/
---
Ka Solution offers a course to help you learn more about Microsoft 365 cloud service offerings, delivered as a remote, completely free webinar.
Taught by an MCT (Microsoft Certified Trainer) instructor, the course covers all the preparatory content for the Microsoft 365 Fundamentals (MS-900) exam.
At the end of the course, participants will receive a Certificate of Completion and discount vouchers for upcoming courses in the Azure track. Take this chance to sharpen your Microsoft 365 knowledge!
## Official Microsoft 365 Fundamentals (MS-900) Course
If you want to learn more about Microsoft 365 cloud service offerings, take advantage of this excellent chance to join a remote, completely free course!

_Image from the course page_
### Certified Instructor and Exam-Prep Content
The course will be taught by an MCT (Microsoft Certified Trainer) instructor and will walk students through all the preparatory content for the Microsoft 365 Fundamentals (MS-900) exam.
After the course ends, participants will receive a Certificate of Completion and discount vouchers for upcoming courses in the Azure track.
### Course Details
- Date and time: June 29, 2024, at 8:30 AM;
- Location: Online and free;
- Duration: 8 hours;
- Compliance: Courses delivered by Ka Solution strictly follow all standards defined by Microsoft.
### Target Audience
This course is aimed at business decision makers and IT professionals who want to deploy cloud services in their organizations, or who are simply looking to acquire basic knowledge of cloud fundamentals.
The content covers the considerations and benefits of adopting cloud services in general and the Software as a Service (SaaS) cloud model, with a focus on Microsoft 365 cloud service offerings.
## MS-900
The MS-900 exam, also known as Microsoft 365 Fundamentals, is a Microsoft certification that validates foundational knowledge of Microsoft 365 cloud services.
This exam is ideal for those who want to understand the fundamental concepts of Microsoft 365, including its features, benefits, and how it can be used in a business environment.
### Exam Content
#### Introduction to Microsoft 365
The exam covers the basics of Microsoft 365, such as the main components and the architecture of its cloud services.
Candidates will learn about the different Microsoft 365 offerings, including Exchange Online, SharePoint Online, OneDrive for Business, and Microsoft Teams.
#### Cloud Principles
Cloud fundamentals are an essential part of the MS-900. Candidates need to understand cloud computing concepts, deployment models (public, private, and hybrid), and cloud service types (IaaS, PaaS, SaaS).
#### Security, Compliance, Privacy, and Trust
The exam also addresses security and compliance topics in Microsoft 365, including security best practices, the use of compliance tools, and how Microsoft 365 handles privacy and data protection.
### Pricing and Support
Another important aspect of the exam is knowledge of Microsoft 365 pricing and licensing plans. Candidates should be familiar with subscription options and with how to obtain technical and administrative support for Microsoft 365.
### Benefits of the MS-900 Certification
#### Knowledge Validation
The MS-900 certification validates foundational knowledge of Microsoft 365, helping professionals demonstrate their skills to employers and peers.
It is an excellent addition to the résumé for those starting out in IT or looking to specialize in cloud services.
#### Preparation for Advanced Certifications
The MS-900 serves as a stepping stone to more advanced Microsoft certifications. It provides a solid foundation that makes more complex, advanced topics easier to grasp in future certifications.
#### Business Applicability
Understanding Microsoft 365 fundamentals lets IT professionals and business decision makers get the most out of Microsoft's cloud services, which can lead to better implementation and use of Microsoft 365 in their organizations, resulting in greater efficiency and productivity.
## Ka Solution
Ka Solution is one of the leading technology training companies in Brazil, recognized for its commitment to excellence and innovation in education.
Founded with the goal of providing high-quality education, Ka Solution offers a wide range of courses and training focused on professional development and specialization in various IT fields.
### Educational Approach
Ka Solution's methodology includes interactive labs, hands-on projects, and coding challenges, allowing students to learn by doing.
Courses are taught by highly qualified, certified instructors, many of whom hold internationally recognized certifications, such as Microsoft Certified Trainer (MCT).
The course catalog spans areas such as Microsoft Azure, Microsoft 365, software development, data science, and information security, ensuring students stay current with market trends and demands.
### Partnerships and Market Impact
Ka Solution maintains strategic partnerships with major technology companies, such as Microsoft, AWS, and Cisco, providing access to exclusive resources and networking opportunities.
The company is widely recognized for its contribution to training highly qualified IT professionals, helping thousands of students advance their careers.
By offering up-to-date training aligned with market requirements, Ka Solution ensures its students are prepared for new job opportunities and promotions, contributing to growth and innovation in the IT sector.
## Registration link ⬇️
[Registration for the Official Microsoft 365 Fundamentals (MS-900) course](https://kasolution.zoom.us/webinar/register/4417162326327/WN_-T4vcw8QTBeG2aSbsU3ZAw#/registration) must be completed through Ka Solution's form.
## Share this opportunity and help others transform their careers with Ka Solution!
Liked this content about the free MS-900 course? Then share it with everyone!
The post [Free Online MS-900 Webinar: Prepare for the Exam!](https://guiadeti.com.br/webinar-ms-900-online-gratuito-preparo-para-exame/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,899,114 | ClientException with SocketException: Failed host lookup | hi, I put it in my AndroidManifest.xml, but if I call my API Android shows this error: ClientException... | 0 | 2024-06-24T15:21:02 | https://dev.to/maxdiable/clientexception-with-socketexception-failed-host-lookup-4eni | hi, I put `<uses-permission android:name="android.permission.INTERNET"/>` in my AndroidManifest.xml, but if I call my API, Android shows this error:
ClientException with SocketException: Failed host lookup: 'xxx.www.it' (OS Error: No address associated with hostname, errno = 7)
any Help?
Br max | maxdiable | |
1,899,084 | Sharing is Caring: Securely Extending AWS Resources Across Accounts | Sharing is Caring: Securely Extending AWS Resources Across Accounts In the realm of... | 0 | 2024-06-24T15:17:34 | https://dev.to/virajlakshitha/sharing-is-caring-securely-extending-aws-resources-across-accounts-54ea | 
# Sharing is Caring: Securely Extending AWS Resources Across Accounts
In the realm of cloud computing, where agility and scalability reign supreme, AWS offers a myriad of services that empower businesses to build robust and innovative solutions. As organizations grow and diversify, the need to segregate workloads and resources into separate AWS accounts often arises. This multi-account strategy enhances security, streamlines billing, and promotes efficient resource management. However, it also introduces the challenge of securely sharing resources and data across these isolated environments.
This is where the power of cross-account access in AWS comes into play. This essential feature allows users and resources in one AWS account (the **consumer account**) to securely access and utilize services and data residing in another AWS account (the **resource account**). This mechanism underpins a wide range of use cases, enabling seamless collaboration and resource optimization without compromising security.
### Centralized Logging and Monitoring
Imagine a scenario where your organization manages multiple applications, each deployed in a separate AWS account. Establishing a centralized logging and monitoring system becomes paramount to gain a holistic view of your operational landscape.
**Cross-account access provides a solution:**
1. **Dedicated Logging Account:** Designate a central AWS account dedicated to housing your logging and monitoring infrastructure, such as Amazon CloudWatch Logs, Amazon Elasticsearch Service (Amazon ES), or third-party tools.
2. **IAM Role Assumption:** Configure IAM roles in each application account, granting permissions to write log data to the central logging account.
3. **Seamless Log Ingestion:** Utilize AWS services like CloudWatch Agents or AWS Kinesis to stream logs from application accounts to the central logging account.
This setup ensures that all log data converges in a single location, simplifying analysis, troubleshooting, and security auditing.
### Sharing Resources with Internal Teams
In large organizations, different teams often need access to specific AWS resources owned by other teams. Consider a development team requiring access to a staging environment provisioned in a separate AWS account managed by the operations team.
**Cross-account access enables secure resource sharing:**
1. **Resource Account (Operations):** The operations team defines an IAM role in their account, granting specific permissions to the shared resources (e.g., read-only access to an Amazon S3 bucket).
2. **Consumer Account (Development):** The development team configures their IAM users or roles to assume the role defined in the operations account.
3. **Controlled Access:** Developers can now access the shared resources with the permissions stipulated by the operations team, ensuring a secure and well-defined access control model.
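As a concrete sketch, the trust policy on the role in the operations (resource) account might look like the following. The account ID here is a placeholder for the development (consumer) account, not a real value:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

The role's separate permissions policy would then grant actions such as `s3:GetObject` on the shared bucket, so principals assuming the role from account 111122223333 receive exactly that access and nothing more.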
### SaaS Application Integration
Software-as-a-Service (SaaS) applications have become integral to modern business operations. When integrating your own AWS environment with a SaaS application, secure cross-account access is essential.
**Here's how it works:**
1. **SaaS Provider Account:** The SaaS provider configures an IAM role in their AWS account, granting specific permissions required for the integration to function.
2. **Customer Account:** You, as the customer, configure your AWS resources to assume the IAM role provided by the SaaS provider.
This setup allows the SaaS application to securely access and interact with your AWS resources (with your explicit permission) without requiring you to share sensitive credentials.
### Multi-Tenant Architecture
Building a multi-tenant application on AWS often necessitates segregating tenant data and resources into separate accounts for enhanced security and isolation.
**Cross-account access facilitates multi-tenancy:**
1. **Tenant Accounts:** Each tenant's data and resources reside in their dedicated AWS account.
2. **Management Account:** A central management account hosts the core application logic and utilizes cross-account access to interact with resources in tenant accounts.
3. **Dynamic Policy Generation:** The management account dynamically generates IAM policies based on tenant-specific permissions, ensuring granular access control.
This architecture ensures data isolation while enabling the management account to perform operations across tenants efficiently.
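The dynamic policy generation step above can be sketched as a small helper that builds a tenant-scoped policy document. The bucket naming convention (`tenant-<id>-data`) and the chosen actions are illustrative assumptions, not part of any real API:

```javascript
// Sketch: build a least-privilege S3 policy document scoped to one tenant.
// The "tenant-<id>-data" bucket convention is a hypothetical example.
function buildTenantPolicy(tenantId) {
  // Reject ids that could widen the resource pattern in the ARN.
  if (!/^[a-z0-9][a-z0-9-]*$/.test(tenantId)) {
    throw new Error(`Invalid tenant id: ${tenantId}`);
  }
  return {
    Version: "2012-10-17",
    Statement: [
      {
        Effect: "Allow",
        Action: ["s3:GetObject", "s3:PutObject"],
        Resource: [`arn:aws:s3:::tenant-${tenantId}-data/*`]
      }
    ]
  };
}

console.log(JSON.stringify(buildTenantPolicy("acme"), null, 2));
```

Validating the tenant id before interpolating it into the ARN matters: it prevents a crafted id from escaping the per-tenant resource scope.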
### Disaster Recovery in a Multi-Account Strategy
A comprehensive disaster recovery strategy is crucial for business continuity. In a multi-account AWS environment, cross-account access plays a pivotal role in orchestrating failover mechanisms.
**Here's how it works:**
1. **Primary and Secondary Accounts:** Your primary application workload runs in one account, while a designated secondary account hosts the disaster recovery infrastructure.
2. **Cross-Account Replication:** Leverage AWS services like Amazon S3 Cross-Region Replication or database replication mechanisms to replicate data from the primary to the secondary account.
3. **Automated Failover:** In a disaster scenario, pre-configured scripts or automation tools, utilizing cross-account access, can trigger the failover process, redirecting traffic and resources to the secondary account.
This setup ensures minimal downtime and data loss in the event of a primary account outage.
## Alternative Cloud Solutions
While AWS pioneered many cross-account access paradigms, other cloud providers offer comparable functionality:
- **Google Cloud Platform (GCP):** GCP utilizes service accounts with granular IAM roles to facilitate cross-project access, mirroring many AWS concepts.
- **Microsoft Azure:** Azure relies heavily on Azure Active Directory (Azure AD) for cross-tenant access management, leveraging its robust identity and access control features.
## Conclusion
Cross-account access stands as a cornerstone of secure and efficient resource sharing in a multi-account AWS environment. By enabling controlled access to services, data, and functionality across account boundaries, it empowers organizations to build sophisticated architectures, streamline operations, and enhance security. As you embrace the flexibility and scalability of the cloud, mastering cross-account access becomes an essential skill in your AWS arsenal.
## Architecting a Serverless Data Processing Pipeline with Cross-Account Access
Let's imagine a scenario where you are tasked with building a real-time data processing pipeline for a financial services company. Security and compliance are paramount, necessitating a multi-account strategy.
**Here's a high-level architecture leveraging cross-account access:**
**Account Structure:**
- **Data Ingestion Account:** This account is solely responsible for receiving sensitive financial data streams from various sources.
- **Processing Account:** This account houses the data processing logic, utilizing AWS Lambda functions and Amazon Kinesis Data Streams to transform and enrich the data in real time.
- **Storage Account:** This account provides secure storage for the processed data, potentially using Amazon S3 or a data warehouse like Amazon Redshift.
**Data Flow and Cross-Account Interaction:**
1. **Secure Data Ingestion:** Data streams from external sources flow into the Data Ingestion Account via services like Amazon Kinesis Data Firehose or Amazon API Gateway. Strict security groups and network access controls are implemented to restrict access to this account.
2. **Cross-Account Data Streaming:** Kinesis Data Streams in the Data Ingestion Account are configured to allow cross-account access from the Processing Account. This enables the Lambda functions in the Processing Account to consume and process the data without having direct access to the Data Ingestion Account's resources.
3. **Data Transformation and Enrichment:** Lambda functions in the Processing Account perform real-time data transformation, cleansing, and enrichment tasks. This might involve data validation, aggregation, or integration with external APIs.
4. **Secure Data Storage:** The processed data is then securely stored in the Storage Account. Again, cross-account access is utilized, with the Processing Account granted write permissions to specific S3 buckets or Redshift tables in the Storage Account.
5. **Auditing and Monitoring:** CloudTrail logs all cross-account API activity, providing a comprehensive audit trail. CloudWatch monitors the health and performance of the entire pipeline across all accounts.
**Advantages of this Architecture:**
- **Enhanced Security:** Segregating responsibilities into separate accounts minimizes the blast radius of potential security breaches. Each account operates with the principle of least privilege.
- **Compliance Adherence:** The strict access controls and audit trails inherent in this architecture support compliance requirements common in regulated industries like finance.
- **Scalability and Flexibility:** The serverless nature of Lambda and Kinesis allows the pipeline to scale elastically based on data volume fluctuations.
This advanced use case demonstrates how cross-account access, combined with a well-defined account strategy and serverless technologies, empowers you to build secure, scalable, and highly available solutions on AWS.
| virajlakshitha | |
1,892,869 | Fastly and the Linux kernel | Today, Fastly announced a $40M commitment of free CDN, compute, and security services to the Linux... | 0 | 2024-06-24T15:16:00 | https://dev.to/fastly/fastly-and-the-linux-kernel-8fg | linux, fastly, opensource, fastforward | Today, Fastly [announced a $40M commitment of free CDN, compute, and security services to the Linux Foundation’s ecosystem of maintainers](https://www.fastly.com/blog/fast-forward-were-here-for-the-maintainers/), furthering our shared mission of sustaining free and open source software. To celebrate, we wanted to highlight an essential component of the Fastly platform and part of the Linux Foundation’s original raison d’être: the Linux kernel.
When we launched Fastly 13 years ago, our founders were fueled by a vision to create a radically different kind of platform—faster and more efficient than its predecessors, utilizing specific hardware and a foundation built on open source software. The tools they reached for? [SSDs](https://www.youtube.com/watch?v=H7PJ1oeEyGg), [Varnish](https://www.slideshare.net/slideshow/varnish-oscon-2009/1761244), and the [Linux kernel](https://youtu.be/oebqlzblfyo?t=147&si=evu_qAG0NvfXFaLj).
Our founders opted for open source, like the Linux [kernel](https://en.wikipedia.org/wiki/Kernel_(operating_system)), whenever possible for the same reasons anyone does. We could quickly fix bugs without needing to call a support line or wait on a ticket. We could change the kernel’s code so that it was even better suited to our needs. Perhaps one of the most important reasons is that we could not only stand on the shoulders of giants but contribute alongside them, too. Throughout the Linux Foundation’s almost 25-year history, the organization has provided vital support and resources to the maintainers and contributors that build some of the most important and impactful technologies of our time. As of 2023, the Linux Foundation is home to over 1,000 active projects and foundations ranging in focus from cloud computing and web development to visual effects and IoT devices.
The open source projects Fastly uses and the foundations we partner with are vital to Fastly’s mission and success. Here's an unscientific list of projects and organizations supported by the Linux Foundation that we use and love: [The Linux Kernel](https://www.kernel.org/), [Kubernetes](https://kubernetes.io/), [containerd](https://containerd.io/), [eBPF](https://ebpf.io/), [Falco](https://falco.org/), [OpenAPI Initiative](https://www.openapis.org/), [ESLint](https://eslint.org/), [Express](https://expressjs.com/), [Fastify](https://fastify.dev/), [Lodash](https://lodash.com/), [Mocha](https://mochajs.org/), [Node.js](http://Node.js), [Prometheus](https://prometheus.io/), [Jenkins](https://www.jenkins.io/), [OpenTelemetry](https://opentelemetry.io/), [Envoy](https://www.envoyproxy.io/), [etcd](https://etcd.io/), [Helm](https://helm.sh/), [osquery](https://osquery.io/), [Harbor](https://goharbor.io/), [sigstore](https://www.sigstore.dev/), [cert-manager](https://cert-manager.io/), [Cilium](https://cilium.io/), [Fluentd](https://www.fluentd.org/), [Keycloak](https://www.keycloak.org/), [Open Policy Agent](https://www.openpolicyagent.org/), [Coalition for Content Provenance and Authority (C2PA)](https://c2pa.org/), [Flux](https://fluxcd.io/), [gRPC](https://grpc.io/), [Strimzi](https://strimzi.io/), [Thanos](https://thanos.io/), [Linkerd](https://linkerd.io/), [Let’s Encrypt](https://letsencrypt.org/), [WebAssembly](https://tag-runtime.cncf.io/wgs/wasm/charter/). And the list goes on!
Because the Foundation is a fundamental component of Fastly’s tech stack, we are delighted to give back to the community that maintains and supports it. We do that in several ways, the chief of which is our donation of free services to various projects under the Foundation. Since 2015, Fastly has served nearly 35,000 terabytes to Linux Foundation projects, including the Linux Kernel, Kubernetes, jQuery, and Jenkins. We’ve heard that the projects we support have done [wonders for their peace of mind and the continued sustainability](https://dev.to/fastly/this-is-your-sign-to-update-your-kubernetes-download-workflow-5hf2) of their projects, so we’re recommitting to our existing partners and extending our commitment to any other projects or nonprofits that can benefit from our support.
We contribute back to the Linux ecosystem in other ways, too. Beyond our in-kind donation, we encourage our employees to submit bug fixes upstream and employ several engineers who contribute to various projects, including [Linux kernel contributor Joe Damato](https://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next.git/log/?qt=author&q=Damato). His contributions are focused on improving the granularity and ease of customizing and optimizing the data path, among other things:
* Recently, Joe upstreamed [a major and widely anticipated new interface to the kernel](https://lore.kernel.org/netdev/20240213061652.6342-4-jdamato@fastly.com/T/), allowing programs to control network socket busy polling behavior more simply and closely. [This new interface](https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/fs/eventpoll.c?h=v6.9&id=18e2bf0edf4dd88d9656ec92395aa47392e85b61) allows Fastly and the open source community to build and control latency-sensitive programs far more easily.
* Joe recently [upstreamed a change to the Mellanox mlx5 driver](https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=f25e7b82635f59af87bd720bbb8c2ea19e8e0f67), which creates a mapping between hardware network queues on the device, their associated interrupt number, and the kernel’s internal NAPI ID. This change allows users to access this helpful information via a [new netlink interface added to the kernel by Intel](https://lore.kernel.org/netdev/170147307026.5260.9300080745237900261.stgit@anambiarhost.jf.intel.com/T/#mf82b92c584173f9542c222429a10d2183bfc5d00).
* Another recent highlight among Joe’s many contributions is a kernel change that allows users to set [a custom flow hash on custom RSS contexts](https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/commit/?id=0212e5d915a293dcde06415f8c82d31576576a97) if the driver supports it. This change also allows network device driver maintainers to support this for their devices, providing users more flexibility in how network flows are mapped to device queues.
And as far as Fastly’s kernel goes, it’s laser-focused on instant speed. Suresh Bhogavilli is the distinguished engineer who leads our Edge Host Networking team, which is tasked with optimizing our kernel and reducing latency by every zeptosecond possible:
> To significantly reduce [round-trip time](https://en.wikipedia.org/wiki/Transmission_time#:~:text=order%20of%20milliseconds.-,Roundtrip%20time,received%20at%20the%20same%20node.) and improve customer connectivity to Fastly’s edge we developed our [Precision Path](https://www.fastly.com/blog/improving-network-availability-with-precision-path/) and [Fast Path Failover](https://www.fastly.com/blog/traffic-delivery-reliability-improvements/) technologies. We use them to tune TCP window parameters per destination, improving TCP connectivity between caches at our own PoPs [[Points of Presence](https://www.fastly.com/network-map/)]. Linux also helps power our ingress traffic engineering tool, called Harmonizer, which allows us to carry traffic to the PoP that is most optimal to serve that customer request. | haubles
1,899,082 | Deep Dive into JavaScript Promises: Patterns and Best Practices | JavaScript promises are a powerful tool for handling asynchronous operations. They represent a value... | 0 | 2024-06-24T15:15:26 | https://dev.to/delia_code/deep-dive-into-javascript-promises-patterns-and-best-practices-9cg | web3, javascript, programming, learning | JavaScript promises are a powerful tool for handling asynchronous operations. They represent a value that may be available now, or in the future, or never. Understanding how to effectively use promises can significantly improve the quality and readability of your code. This article will explore the intricacies of JavaScript promises, common patterns, and best practices.
## What is a Promise?
A promise is an object representing the eventual completion or failure of an asynchronous operation. It allows you to attach handlers for the eventual success or failure of that operation.
### Basic Promise Syntax
```javascript
let promise = new Promise(function(resolve, reject) {
  // executor: the producing code that eventually calls resolve or reject
});
```
### States of a Promise
- **Pending:** Initial state, neither fulfilled nor rejected.
- **Fulfilled:** The operation completed successfully.
- **Rejected:** The operation failed.
### Creating a Promise
A promise is created using the `new Promise` constructor which takes a function (executor) with two arguments: `resolve` and `reject`.
```javascript
let promise = new Promise((resolve, reject) => {
setTimeout(() => resolve("done"), 1000);
});
```
### Consuming Promises
You consume promises using `then`, `catch`, and `finally` methods.
```javascript
promise
.then(result => console.log(result)) // "done" after 1 second
.catch(error => console.error(error))
.finally(() => console.log("Promise finished"));
```
## Common Patterns with Promises
### 1. **Chaining Promises**
Promise chaining is a pattern where each `then` returns a new promise, making it easy to perform a series of asynchronous operations sequentially.
```javascript
fetch('https://api.example.com/data')
.then(response => response.json())
.then(data => {
console.log(data);
return fetch('https://api.example.com/other-data');
})
.then(response => response.json())
.then(otherData => console.log(otherData))
.catch(error => console.error('Error:', error));
```
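One detail worth making explicit: a `then` handler's return value is wrapped in a new promise, and if the handler returns a promise itself, the chain waits for it and unwraps its value before the next `then` runs. A minimal sketch:

```javascript
// then() wraps plain return values in a promise; returned promises
// are awaited and unwrapped before the next handler runs.
Promise.resolve(1)
  .then(n => n + 1)                   // plain value: passed straight through
  .then(n => Promise.resolve(n * 10)) // promise: unwrapped before next step
  .then(n => console.log(n));         // logs 20
```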
### 2. **Error Handling**
Proper error handling in promises ensures that you can catch errors at any point in the chain.
```javascript
promise
.then(result => {
throw new Error("Something went wrong");
})
.catch(error => {
console.error(error.message);
});
```
### 3. **Parallel Execution with Promise.all**
When you need to run multiple asynchronous operations in parallel, use `Promise.all`.
```javascript
Promise.all([
fetch('https://api.example.com/data1'),
fetch('https://api.example.com/data2')
])
.then(responses => Promise.all(responses.map(res => res.json())))
.then(data => console.log(data))
.catch(error => console.error('Error:', error));
```
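Two properties of `Promise.all` are easy to miss: the resulting array preserves the input order (not the order in which the promises settled), and the combined promise rejects as soon as any input rejects. The ordering guarantee can be sketched like this:

```javascript
// Results keep input order even though the middle promise settles last.
Promise.all([
  Promise.resolve(1),
  new Promise(resolve => setTimeout(resolve, 50, 2)), // slowest to settle
  Promise.resolve(3)
]).then(values => console.log(values)); // logs [ 1, 2, 3 ]
```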
### 4. **Promise.race**
`Promise.race` returns a promise that resolves or rejects as soon as one of the promises in the iterable resolves or rejects.
```javascript
Promise.race([
new Promise(resolve => setTimeout(resolve, 100, 'one')),
new Promise(resolve => setTimeout(resolve, 200, 'two'))
])
.then(value => console.log(value)); // "one"
```
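Note that `Promise.race` settles with the first promise to settle, whether it fulfills or rejects, so a fast rejection wins the race over a slower fulfillment. A small sketch:

```javascript
// The earliest settlement wins, even if it is a rejection.
Promise.race([
  new Promise((_, reject) => setTimeout(reject, 50, new Error("too slow"))),
  new Promise(resolve => setTimeout(resolve, 100, "data"))
])
  .then(value => console.log(value))
  .catch(error => console.log(error.message)); // logs "too slow"
```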
## Best Practices with Promises
### 1. **Always Return a Promise**
When working within a `then` handler, always return a promise. This ensures that the next `then` in the chain waits for the returned promise to resolve.
```javascript
fetch('https://api.example.com/data')
.then(response => response.json())
.then(data => {
return fetch('https://api.example.com/other-data');
})
.then(response => response.json())
.then(otherData => console.log(otherData));
```
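The flip side is a classic bug: if you start an asynchronous operation inside a `then` handler but forget to `return` its promise, the chain moves on immediately with `undefined`. A sketch of the difference (the `delay` helper here is just for illustration):

```javascript
// delay(ms) fulfills with ms after ms milliseconds.
const delay = ms => new Promise(resolve => setTimeout(resolve, ms, ms));

Promise.resolve()
  .then(() => { delay(50); }) // promise NOT returned: chain does not wait
  .then(v => console.log(v))  // logs undefined, immediately
  .then(() => delay(50))      // promise returned: chain waits 50 ms
  .then(v => console.log(v)); // logs 50
```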
### 2. **Use `catch` for Error Handling**
Always use `catch` at the end of your promise chain to handle errors.
```javascript
fetch('https://api.example.com/data')
.then(response => response.json())
.catch(error => console.error('Fetch error:', error));
```
### 3. **Use `finally` for Cleanup**
Use `finally` to execute code that should run regardless of whether the promise is fulfilled or rejected.
```javascript
promise
.then(result => console.log(result))
.catch(error => console.error(error))
.finally(() => console.log('Cleanup'));
```
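Unlike `then` and `catch`, the `finally` handler receives no arguments and, as long as it doesn't itself throw, passes the settled value or error through untouched, which is exactly what you want for cleanup:

```javascript
// finally() cannot see or change the result; it just runs and passes it on.
Promise.resolve(42)
  .finally(() => console.log("cleanup")) // runs first, gets no arguments
  .then(value => console.log(value));    // logs 42, unchanged
```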
### 4. **Avoid the Promise Constructor Anti-Pattern**
Don't use the promise constructor when you can use existing promise-based APIs.
**Anti-Pattern:**
```javascript
new Promise((resolve, reject) => {
fs.readFile('file.txt', (err, data) => {
if (err) reject(err);
else resolve(data);
});
});
```
**Better:**
```javascript
const {promises: fsPromises} = require('fs');
fsPromises.readFile('file.txt');
```
## Examples of Promises in Real-World Scenarios
### Example 1: Fetching Data from an API
```javascript
function fetchData() {
return fetch('https://jsonplaceholder.typicode.com/todos/1')
.then(response => {
if (!response.ok) {
throw new Error('Network response was not ok');
}
return response.json();
})
.then(data => console.log(data))
.catch(error => console.error('Fetch error:', error));
}
fetchData();
```
### Example 2: Sequential API Calls
```javascript
function fetchUserAndPosts() {
return fetch('https://jsonplaceholder.typicode.com/users/1')
.then(response => response.json())
.then(user => {
console.log(user);
return fetch(`https://jsonplaceholder.typicode.com/posts?userId=${user.id}`);
})
.then(response => response.json())
.then(posts => console.log(posts))
.catch(error => console.error('Error:', error));
}
fetchUserAndPosts();
```
Promises are a cornerstone of modern JavaScript, enabling developers to handle asynchronous operations with greater ease and readability. By understanding the various patterns and best practices, you can write more efficient and maintainable code. Remember to always handle errors properly, return promises in `then` handlers, and utilize `Promise.all` and `Promise.race` for parallel and competitive asynchronous tasks. With these techniques, you'll be well-equipped to tackle complex asynchronous programming challenges. | delia_code |
1,899,081 | Dive into the Depths of Deep Learning with NYU's Cutting-Edge Course! 🚀 | This increasingly popular course is taught through the Data Science Center at NYU. Originally introduced by [Yann Lecun](http://yann.lecun.com/), it is now led by [Zaid Harchaoui](http://www.harchaoui.eu/), although Prof. Lecun is rumored to still stop by from time to time. It covers the theory, technique, and tricks that are used to achieve very high accuracy for machine learning tasks in computer vision and natural language processing. The assignments are in Lua and hosted on Kaggle. | 27,844 | 2024-06-24T15:15:26 | https://getvm.io/tutorials/ds-ga-1008-deep-learning-new-york-university | getvm, programming, freetutorial, universitycourses |
As a passionate learner and enthusiast of the latest advancements in artificial intelligence, I'm thrilled to share with you an incredible opportunity to dive into the world of deep learning. The DS-GA 1008 Deep Learning course offered by New York University (NYU) is a true gem that I simply can't wait to tell you all about! 😊
## Unraveling the Mysteries of Deep Learning
This course, originally introduced by the renowned deep learning pioneer, [Yann Lecun](http://yann.lecun.com/), is now led by the equally brilliant [Zaid Harchaoui](http://www.harchaoui.eu/). Together, they have crafted a curriculum that delves deep into the theory, techniques, and cutting-edge tricks used to achieve remarkable accuracy in machine learning tasks, particularly in the realms of computer vision and natural language processing.
## A Hands-On Approach to Mastering Deep Learning 👨💻
What sets this course apart is its practical, hands-on approach. The assignments are designed in Lua and hosted on the popular Kaggle platform, allowing you to get your hands dirty with real-world deep learning challenges. This unique blend of theory and practice ensures that you not only understand the concepts but also gain the skills to apply them effectively.
## Learning from the Best in the Field 🎓
One of the biggest draws of this course is the opportunity to learn from the best in the field. With Yann Lecun, a pioneer in the field of deep learning, and Zaid Harchaoui, a renowned expert, as your guides, you'll have access to a wealth of knowledge and insights that will propel your understanding of deep learning to new heights.
## Unlock the Future of AI with DS-GA 1008 🔍
If you're ready to dive into the latest advancements in deep learning and unlock the future of artificial intelligence, I highly recommend checking out the DS-GA 1008 Deep Learning course at NYU. You can find more information and the course schedule at [http://cilvr.cs.nyu.edu/doku.php?id=deeplearning2015:schedule](http://cilvr.cs.nyu.edu/doku.php?id=deeplearning2015:schedule).
Get ready to embark on an exciting journey of discovery and mastery in the world of deep learning! 🌟
## Unleash Your Deep Learning Potential with GetVM's Playground 🚀
For those eager to dive into the DS-GA 1008 Deep Learning course from New York University, I highly recommend exploring the GetVM Playground. This powerful online coding environment seamlessly integrates with the course materials, allowing you to put the theory into practice with ease.
The GetVM Playground [https://getvm.io/tutorials/ds-ga-1008-deep-learning-new-york-university] provides a dynamic and interactive space where you can experiment with the Lua-based assignments, without the hassle of setting up a local development environment. With just a few clicks, you'll have access to a fully-equipped coding workspace, complete with the necessary libraries and tools to tackle the deep learning challenges head-on.
The beauty of the GetVM Playground lies in its user-friendly interface and instant feedback mechanisms. As you work through the assignments, you'll receive real-time guidance and support, ensuring that you stay on track and make the most of your learning experience. Whether you're a seasoned deep learning enthusiast or a newcomer to the field, the Playground's intuitive design and seamless integration with the course materials will empower you to unlock your full potential.
So, why not take the first step and dive into the DS-GA 1008 Deep Learning course with the help of GetVM's Playground? Unlock the secrets of deep learning, experiment with cutting-edge techniques, and watch your skills soar to new heights. The future of AI is waiting for you to explore! 🌟
---
## Practice Now!
- 🔗 Visit [DS-GA 1008 Deep Learning - New York University](http://cilvr.cs.nyu.edu/doku.php?id=deeplearning2015:schedule) original website
- 🚀 Practice [DS-GA 1008 Deep Learning - New York University](https://getvm.io/tutorials/ds-ga-1008-deep-learning-new-york-university) on GetVM
- 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore)
Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) ! 😄 | getvm |
1,899,080 | How I Went from a Broke 15-Year-Old Employee to Launching My First SaaS Product | Two months ago, I started developing a free Link-In-Bio tool that pays users for each click. It's not... | 0 | 2024-06-24T15:14:01 | https://sotergreco.com/how-i-went-from-a-broke-15-year-old-employee-to-launching-my-first-saas-product | saas, startup, story | Two months ago, I started developing a free [Link-In-Bio](https://linkinbio.cc/) tool that pays users for each click. It's not finished yet, but it's getting there.
I don't usually do interviews, but a friend suggested I share my story. So here I am, talking about how I went from a broke 15-year-old employee to starting my first SaaS business [Linkinbio](https://linkinbio.cc/).
*This idea can help many creators and businesses, and I’d love your opinion on how to move forward and what do you think about the app. My story may inspire some of you to keep going despite setbacks.*
Here's a quick overview of the article:
* Broke and lost at 15 years old.
* Waking up at 4 AM to work before school.
* Finding my first job, being taken advantage of, and not getting paid.
* Starting my first business and failing after losing $60K.
* How I started Linkinbio.
Without further ado...
## Hello Sotiris, Who are you and what is your background?
Hello Michael, I am a 22-year-old software engineer from Greece with an interest in Indie Hacking and entrepreneurship. But it wasn't always this way. As a kid, I was lost and faced many setbacks before I could develop my first business idea.
Like many kids, I dreamed of becoming a professional athlete. I played basketball, but things didn't turn out as I hoped. Injuries were a big part of it, and there wasn't much money to be made from it.
I didn't think about money as a kid, but when the economic crisis of 2008 hit Greece, my family struggled. A few years later, I wanted to earn my own money to cover my basic expenses, like school lunch, upcoming university costs, and basketball practices, as my parents couldn't afford everything.
## So what did you do to face these challenges?
My father told me that I could go to university and then find my first job. But due to the economic situation back then, I wanted money immediately and couldn't wait 10 years to finish school and university to start making money. I needed something now.
Call it fate, but that year a new student joined our school. He was a nerdy-looking kid that almost no one talked to. Despite the economic situation, my father taught me valuable lessons like "never judge someone by their cover." So after a couple of months, I finally talked to him.
That young kid, at 15 years old, told me he made $1,000 from creating websites. What? $1,000? This was so much money for me back then; I could literally cover my expenses for the entire year.
From that day, I decided to start learning programming. Without ever having touched a computer in my life, I was determined for the first time. So I learned and I found my first job.
## How long did it take you to learn programming, was it hard? Talk to me about it.
Hard? That's an understatement. I didn't sleep for six months. I remember that day vividly. It was the 1st of March, and I told myself that by the end of summer, I had to find a job.
The first few weeks were really challenging for me. I thought this was something I couldn't do. I studied after school for a couple of hours, maybe 1-2 hours daily, but I saw no progress.
In the meantime, I kept talking with my friend, and he told me, "You need to work hard. I've been studying since I was 12 years old to be in the position I am today."
That day, something clicked for me. I set an alarm clock for 4:30 AM, giving me 3 hours to study before school. I don't know how, but I was certain I could do it. I studied 3 hours before school, 3 hours after school, and read coding books for 1 hour at night.
Finally, after 2 months, I saw progress. I created my first HTML website. It was simple, but it worked.
Time passed and mid-July I started looking for jobs.
## Did you find any? Walk me through the process of finding a job.
I decided with that friend of mine to create a website where we would offer website creation services. So that's what we did.
But nothing happened. August came and we had no clients. Then one day, I got a call from an unknown number. It was a guy who wanted a real estate website. He told me he found us through Google Search.
We closed a deal with him for **$2000**. That amount of money was unimaginable for me back then. After a month, another guy messaged me on LinkedIn and said he wanted to hire us. So, for the first time ever, my friend and I closed another deal for **$800** per month to work 4 hours a day. For a 15-year-old in Greece, that was a lot of money.
I don't want to be unfair to him, but I learned a lot on that job and created websites and applications from scratch that I couldn't believe I could make before.
Then, one month, he decided not to pay us.
## Not pay you anything? What do you mean?
Yes, you heard right. One month, after a year of working for him, he did not pay us—neither me nor my friend. At first, I thought it was okay. He told us not to worry and assured us that everything would be fine. He promised to pay us double next month because they were making some internal changes to the company.
The next month passed, but we received no money. We then heard that he had stopped paying four other employees at the company. So, one day, we decided to leave. And that's what we did: four software engineers left the company, with him owing us $1600 plus a bonus he had promised after one year of work.
But as kids do, we spent all the money we made. Just kidding, I saved all of it. So when I left the job, I had some money to invest. But what investments can a 16-year-old software engineer make?
I created an underwear store.
## Wait, what? Why did you do that?
Well, I had an uncle who had some money. He wasn't rich, but he was doing well. He always wanted to open an underwear store, not just an e-commerce store but a physical one as well. Don't ask me why, but older uncles from Greece who know little about business either open a coffee shop or a clothing store.
The thing with my uncle was that he lived on a deserted island in the middle of nowhere. So, he had the brilliant idea to give $50,000 to a 16-year-old to open a store.
That's what he did. I put in $10,000, and he put in $50,000. He told me,
\- "Now you are a man, go sort things out."
I found a physical location, went to the IRS to handle all the paperwork, hired a designer to design the store's interior, found wholesale suppliers to order underwear, and created the e-commerce website. I basically did everything by myself. I was working 10 hours a day in that store at a very young age.
I must confess that everything went well for the first few months. But then the pandemic hit. It literally obliterated the physical store, and I didn't know what marketing was back then. I spent $0 on marketing. So, long story short, I left that business, and we lost all of our money.
For the next couple of years, I continued to work as a software engineer. Today, I have quit my job to pursue something new in my life and create my first SaaS product.
## So tell me more about [Linkinbio](https://linkinbio.cc/), your product. Where did you get the idea?
I spent a lot of time on social media as a kid. I noticed many creators using link-in-bio apps that lacked themes and customization options and had their branding, making them look unprofessional.
Most social media accounts use these apps without realizing it. They only showcase a few links and charge a monthly fee for it.
Then I thought, why not create a free tool without branding that also supports an internal economy where creators help each other and make money?
So, two months ago, I started developing [linkinbio.cc](https://linkinbio.cc/), a link-in-bio tool with unlimited customization options. You can promote your links and buttons to other profiles, and 90% of the money you pay goes back to the creators. If someone promotes a button, you get paid for the clicks on your page.
The product is now in the early bird phase. I want the community to give feedback so I can make changes before the final launch in mid-July.
Check it out and leave comments about anything you don't like or any features you want to see. I'm open to criticism and want to know what the community wants.
***If you want you can register and claim your handle on*** [***linkinbio.cc***](https://linkinbio.cc/) ***either for free or pay $3 for the early-bird premium package.***
Thanks for reading, and I hope you found this article helpful. If you have any questions, feel free to email me at [**x@sotergreco.com**](mailto:x@sotergreco.com)**, and I will respond.**
You can also keep up with my latest updates by checking out my X here: [**x.com/sotergreco**](http://x.com/sotergreco) | sotergreco |
1,899,079 | iTextSharp (iText 7) VS IronPDF C# PDF Library Comparison | When it comes to handling PDFs in .NET applications, two popular libraries often come to mind:... | 0 | 2024-06-24T15:13:32 | https://dev.to/mhamzap10/itextshartp-itext-7-vs-ironpdf-c-pdf-library-comparison-n8m | itextsharp, ironpdf, csharp, dotnet | When it comes to handling [PDFs](https://en.wikipedia.org/wiki/PDF) in .NET applications, two popular libraries often come to mind: iTextSharp and IronPDF. Both have their own set of features, strengths, and limitations. This article will provide a detailed comparison of iTextSharp and IronPDF, covering various scenarios and topics, and will demonstrate which library is the better choice for many applications.
##Overview
###iTextSharp
[iTextSharp](https://itextpdf.com/) is a .NET port of the popular Java PDF library iText. iTextSharp has reached end of life and has been replaced by iText, also known as iText 7. It is a powerful library for creating and manipulating PDF documents. It offers a wide range of features, including the ability to generate PDFs from scratch, modify existing PDFs, and extract content.
###IronPDF
[IronPDF](https://ironpdf.com/) is a .NET library designed to facilitate [PDF generation](https://ironpdf.com/tutorials/dotnet-core-pdf-generating/) and [manipulation](https://ironpdf.com/examples/csharp-remove-page-from-pdf/) within C# and VB.NET applications. It allows developers to create, edit, and manipulate PDF documents programmatically with ease. IronPDF supports a wide range of features including [HTML to PDF](https://ironpdf.com/tutorials/html-to-pdf/) conversion, PDF [merging](https://ironpdf.com/how-to/merge-or-split-pdfs/) and splitting, adding [watermarks](https://ironpdf.com/how-to/custom-watermark/), and extracting [text](https://ironpdf.com/examples/csharp-replace-text-in-pdf/) and [images](https://ironpdf.com/examples/image-to-pdf/) from PDF files. It's commonly used in web applications, desktop software, and cloud environments where PDF functionality is required.
##Comparison
###1. Installation and Setup
####iTextSharp
iTextSharp (iText7) can be installed via NuGet. Here is a typical installation command:
```
Install-Package iText7
```

####IronPDF
IronPDF can also be installed via NuGet. Here is the command to install IronPDF:
```
Install-Package IronPdf
```

###2. Creating PDFs
####iTextSharp
iTextSharp (iText7) requires a fair amount of code to create PDF documents. Here is an example:
```
PdfWriter writer = new PdfWriter("pdf_with_iText.pdf");
PdfDocument pdf = new PdfDocument(writer);
Document document = new Document(pdf);
document.Add(new Paragraph("Hello, World!"));
document.Add(new Paragraph("PDF Created with iText7 (iTextSHarp)!"));
document.Close();
```
The output PDF looks like this:

####IronPDF
IronPDF excels in HTML to PDF conversion, offering built-in support that simplifies the process:
```
var renderer = new ChromePdfRenderer();
var pdfDocument = renderer.RenderHtmlAsPdf("<h1>Hello, World!</h1><p>PDF Created with IronPDF!</p>");
pdfDocument.SaveAs("pdf_with_IronPDF.pdf");
```

In the comparison between iText and IronPDF for creating PDF files, iText requires multiple steps to initialize a document and add content programmatically, which can be more complex. On the other hand, IronPDF offers simplicity with its HTML to PDF rendering capability, allowing for quick conversion of HTML content to PDF directly. This makes IronPDF a better choice for rapid development and straightforward PDF generation tasks.
###3. HTML to PDF Conversion
####iTextSharp:
The iText library does not natively support HTML to PDF conversion; we need to install the additional **pdfHTML** add-on to achieve this.
Run the following command in the Package Manager Console to install the library:
```
Install-Package itext7.pdfhtml
```
The following code converts HTML to PDF:
```
using (FileStream htmlSource = File.Open("index.html", FileMode.Open))
using (FileStream pdfDest = File.Open("html_to_pdf_withiText.pdf", FileMode.Create))
{
ConverterProperties converterProperties = new ConverterProperties();
HtmlConverter.ConvertToPdf(htmlSource, pdfDest, converterProperties);
}
```
The output file looks like this:

####IronPDF
IronPDF excels in HTML to PDF conversion, offering built-in support that simplifies the process:
```
var renderer = new ChromePdfRenderer();
var pdfDocument = renderer.RenderHtmlFileAsPdf("index.html");
pdfDocument.SaveAs("HtmlToPDFWith_IronPDF.pdf");
```
The output PDF looks like this:

In comparing the two approaches for converting HTML to PDF, iText requires additional setup and dependencies (itext7.pdfhtml library) for HTML conversion, which increases complexity. Meanwhile, IronPDF offers a straightforward solution with built-in HTML to PDF conversion capabilities, making it easier and more convenient to create PDFs directly from HTML files. This makes IronPDF advantageous for developers needing seamless integration of HTML content into PDF documents without the need for extra libraries or configurations.
###4. Manipulating Existing PDFs
####Adding Watermark with iText
iText provides comprehensive tools for manipulating existing PDFs, but the API can be complex. Here's an example of adding a watermark to an existing PDF:
```
string inputFilePath = "pdf_with_iText.pdf";
string outputFilePath = "add_watermark_with_iText.pdf";
PdfDocument pdfDoc = new PdfDocument(new PdfReader(inputFilePath), new PdfWriter(outputFilePath));
PdfCanvas under = new PdfCanvas(pdfDoc.GetFirstPage().NewContentStreamBefore(), new PdfResources(), pdfDoc);
PdfFont font = PdfFontFactory.CreateFont(FontProgramFactory.CreateFont(StandardFonts.HELVETICA));
PdfCanvas over = new PdfCanvas(pdfDoc.GetFirstPage());
over.SetFillColor(ColorConstants.BLACK);
Paragraph paragraph = new Paragraph("This TRANSPARENT watermark is added from iText")
.SetFont(font)
.SetFontSize(15);
over.SaveState();
// Creating a dictionary that maps resource names to graphics state parameter dictionaries
PdfExtGState gs1 = new PdfExtGState();
gs1.SetFillOpacity(0.5f);
over.SetExtGState(gs1);
Canvas canvasWatermark3 = new Canvas(over, pdfDoc.GetDefaultPageSize())
.ShowTextAligned(paragraph, 297, 450, 1, iText.Layout.Properties.TextAlignment.CENTER, iText.Layout.Properties.VerticalAlignment.TOP, 0);
canvasWatermark3.Close();
over.RestoreState();
pdfDoc.Close();
```
The output PDF looks like this:

####Adding Watermark with IronPDF
IronPDF simplifies this process with higher-level methods:
```
var pdf = PdfDocument.FromFile("pdf_with_IronPDF.pdf");
pdf.ApplyWatermark("This TRANSPARENT watermark is added from IronPDF");
pdf.SaveAs("watermark_with_IronPDF.pdf");
```
The output PDF looks like this:

In comparison, iText requires manual handling of PDF content streams and graphics states to add a transparent watermark, involving lower-level operations and specific positioning of text. On the other hand, IronPDF provides a simpler and more intuitive method for applying transparent watermarks directly to PDF documents, abstracting the complexities of PDF manipulation and enhancing developer productivity in PDF-related tasks. This makes IronPDF a preferred choice for developers seeking ease of use and efficiency in PDF watermarking operations.
###5. Extracting Text and Images
####iText
Extracting text and images from PDFs using iText involves parsing the PDF content stream, which can be complex. The source code is as:
```
StringBuilder text = new StringBuilder();
PdfDocument pdfDoc = new PdfDocument(new PdfReader("html_to_pdf_withiText.pdf"));
for (int i = 1; i <= pdfDoc.GetNumberOfPages(); i++)
{
text.Append(PdfTextExtractor.GetTextFromPage(pdfDoc.GetPage(i)));
}
pdfDoc.Close();
Console.WriteLine(text.ToString());
```
The extracted text is printed on the console.

####IronPDF
IronPDF provides straightforward methods for extracting text:
```
var pdf = PdfDocument.FromFile("HtmlToPDFWith_IronPDF.pdf");
Console.WriteLine(pdf.ExtractAllText());
```
The extracted text is printed on the console.

In comparison, iText requires explicit iteration through PDF pages and manual extraction of text using its PdfTextExtractor, which involves managing PDF document objects and extracting text from each page individually. Conversely, IronPDF simplifies text extraction with a single `ExtractAllText()` call, providing a more streamlined approach that abstracts the complexities of PDF parsing and enhances developer productivity in handling PDF content retrieval tasks.
###6. Printing PDF Documents
####Print PDF Documents using itextSharp:
iTextSharp (iText7) provides capabilities for generating PDF documents programmatically, but it does not directly handle printing functionalities. Implementing printing from iText-generated PDFs typically requires integrating with additional libraries or platforms that support printing.
####Print PDF Documents using IronPDF:
IronPDF offers built-in support for [printing PDF documents](https://ironsoftware.com/csharp/print/) directly from within .NET applications. This feature simplifies the process of printing PDFs without requiring external dependencies or complex setup, making it convenient for applications that need to generate and print documents seamlessly.
```
PdfDocument pdf = new PdfDocument("HtmlToPDFWith_IronPDF.pdf");
pdf.Print();
```
This will send the document to the default printer. In my case, it is 'Microsoft Print to PDF', so it will prompt to choose a path to save the file, as shown below

##Performance and Reliability
###iText
iText is known for its reliability and robustness, especially in enterprise environments. However, it can be more challenging to work with due to its lower-level API.
###IronPDF
IronPDF focuses on ease of use and rapid development. It provides a more intuitive API and better support for modern web technologies like HTML and CSS. This makes it an excellent choice for developers who need to quickly implement PDF functionality without delving into complex PDF internals.
##Licensing
###iText7 (iTextSharp)
iTextSharp is available under the AGPL license, which requires that your application be open source if you distribute it. A [commercial license](https://itextpdf.com/how-buy) is available if you need to use it in a closed-source project.
###IronPDF
IronPDF is a commercial product with various [licensing options](https://ironpdf.com/licensing/), including [a free trial](https://ironpdf.com/#trial-license). It does not have open-source licensing restrictions, making it suitable for commercial applications.
##Conclusion
While both iTextSharp and IronPDF are powerful tools for handling PDFs in .NET, IronPDF offers several advantages that make it a better choice for many developers:
**<u>Ease of Use:</u>** IronPDF provides a more straightforward API, reducing the amount of code needed for common tasks.
<u>**HTML to PDF Conversion:**</u> IronPDF excels in converting HTML to PDF, a feature that requires additional effort with iTextSharp.
**<u>High-Level Functions:</u>** IronPDF offers higher-level functions for manipulating PDFs, simplifying tasks like adding watermarks and extracting text.
**<u>Commercial Licensing:</u>** IronPDF's commercial licensing model is more flexible for closed-source projects.
For developers looking for a quick, efficient, and powerful way to handle PDFs in their .NET applications, IronPDF is often the best choice. It is ideal for integrating PDF handling into existing projects or moving existing projects to incorporate robust PDF functionalities seamlessly.
| mhamzap10 |
1,899,078 | Macros in Flutter/Dart: The Metaprogramming Revolution You've Been Waiting For | Macros in Flutter/Dart: The Metaprogramming Revolution You've Been Waiting For Hey Devs, At... | 0 | 2024-06-24T15:12:33 | https://dev.to/flutterbrasil/macros-no-flutterdart-a-revolucao-da-metaprogramacao-que-voce-esperava-42mi | dart, metaprogramming, macros |
### Macros in Flutter/Dart: The Metaprogramming Revolution You've Been Waiting For

Hey Devs,
At the last Google I/O we got plenty of news, and one of the most anticipated announcements was the confirmation that macros have entered the experimental phase (which means they will soon reach stable).
But what are macros?
Macros are a feature that aims to revolutionize metaprogramming, that is, the ability to write code that manipulates other code.
But what does that mean in practice?
Imagine a common task we perform all the time: toJson() and fromJson().
Today we have two ways to accomplish this task. The first is to simply write the methods by hand.
{% gist https://gist.github.com/toshiossada/2e61d4951d17fe5f89ccec6d81e1eaa4 file=ordinary_log_data.dart %}

The problem with this approach is that every time we need to add a new field there are at least three places we have to change, which can become a tedious process.

The other way is to use a third-party library that generates all this code with the help of the dreaded build_runner, such as json_serializable.

In this case there are a few things that can frustrate developers. The first is that we need to add a fairly verbose boilerplate.

The other problem is that after every change we need to run build_runner again. It is not always a slow process, but having to run build_runner after every change gets annoying.


Today we have to go through these processes because the Dart runtime embedded in Flutter is deliberately limited: we cannot use **Reflection** or the infamous **Mirrors**, on the grounds that the trade-offs of enabling Mirrors in Flutter might not be worth it.

But don't worry, the Flutter team did not turn its back on this problem. At Flutter Forward, which took place last year (2023), they announced that they would start working on a new feature to solve it: they announced that macros would be released soon.

Depois de muita espera, chegaram até rolar boatos que eles iriam desistir das macros, neste Google I/O (2024) eles anunciaram que já encontra em fase experimental.

Para começar a testar as macros precisamos ter em mente que ainda está em fase experimental, então não está disponível no canal **stable** (Espero que você esteja usando esse canal para desenvolver sus apps) então nossa primeira tarefa para habilitar a utilização dos macros é apontar para o canal master
Execute:
> flutter channel master

Isso fará com que apontemos para o dart 3.5

E se executarmos o comando flutter channel irá nos apresentar que está apontando para a master com um asterisco (*)

Outra etapa (caso use visual studio code) é apontarmos as extensões do dart e flutter para a pré release, procure as extensões e pressione o botão “switch to Pre-Release Version”

Faça o mesmo procedimento para o Flutter

Precisamos também habilitar no analysis_options.yaml a feature experimental dos macros
{% gist https://gist.github.com/toshiossada/3044f07f28ac9d6b3ffcacb6c92490a3 file=analysis_options.yaml %}

Pronto agora já podemos utilizar a macros, algo bom que é que não partiremos de um mundo que teremos que criar todas as macros pois a equipe do Dart em paralelo já está desenvolvendo alguns macros que para utiliza-los basta importar em nosso projeto e para esse processo que citamos acima do toJson() e fromJson() existe o pacote json que está publicado por **labs.dart.dev** (publicador da equipe do dart para pacotes que estão em experimental)

To add it, just run a **pub add**.

To use it, just add the **@JsonCodable()** annotation to our class.

Note that VS Code enables a "Go to Augmentation" button; if we click it, we can see the macros that were generated.


And if we change the class's code and save, it automatically updates the generated macros.

The speed is impressive.

Another nice point is that we can also write our own macros, so let's automate the generation of toString().

We will need 2 pieces of information from the class: the class name and the fields.

To create a macro we create a class with the macro modifier and implement ClassDeclaration; an important point is that it is mandatory to define a constructor that is const.

Once that is done, we implement the **buildDeclarationsForClass** method, which has two parameters: **clazz** (spelled with a Z because class is a reserved word in Dart), which contains the class's information (such as the class name), and **builder**, which we will use to generate the code in the macro.
To get the class name we use clazz.identifier.name, and to get the class's attributes we use **builder.fieldsOf(clazz)**. Remember that it returns a Future, so we need to **await** the method call and mark the function **async**.

To generate the code we will use builder.declareInType(), and we will build the code in a String that we can split by commas (,) so it does not end up as one gigantic line.

Note that the output of our code will be "campo: $campo", so the **$** needs to be escaped with a backslash (**\**) in front of it.
{% gist https://gist.github.com/toshiossada/f46d2aa3e9f4fe88ab1ea30909e87783 file=data_clazz.dart %}
To use it in our class, just add the annotation, which has the same name as the macro we created.

And it will automatically generate the code in the macro.

Now we can use it in our main.dart
{% gist https://gist.github.com/toshiossada/ed68512917c1350a57f8043ddc48e0c9 file=main.dart %}
To run it we need to add the flag --enable-experiment=macros

Or we can leave it configured in our launch.json.

Once again, macros are still in the experimental phase and we may run into some bugs that the Dart team is fixing, but we hope that soon (maybe even this year) a stable version will ship so we can enjoy macros.
If you want to read more about macros, visit [https://dart.dev/go/macros](https://dart.dev/go/macros)
I also recorded a video on the Flutter Brasil channel showing how we can use macros. Check it out:
<a href="https://www.youtube.com/watch?v=AjA4sJR2CN8" target="_blank">
<img src="https://img.youtube.com/vi/AjA4sJR2CN8/0.jpg" alt="Monorepo (Por Que Essa Estratégia Funciona em Grandes Empresas?) " width="240" height="180" border="10" />
</a>

Join our Discord to interact with the community and keep up with all our publications: [https://www.flutterbrasil.com.br](https://www.flutterbrasil.com.br) | toshiossada |
1,898,891 | Spring Boot 3 application on AWS Lambda - Part 8 Introduction to Spring Cloud Function | During the parts 2, 3 and 4 we introduced the concept behind the AWS Serverless Java Container and... | 26,522 | 2024-06-24T15:11:59 | https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-8-introduction-to-spring-cloud-function-99a | java, springboot, aws, serverless | During the parts [2](https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-2-introduction-to-aws-serverless-java-container-144), [3](https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-3-develop-application-with-aws-serverless-java-container-2901) and [4](https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-4-measuring-cold-and-warm-starts-with-aws-serverless-java-container-mb0) we introduced the concept behind the AWS Serverless Java Container and especially its variant for the Spring Boot (3) application, learned how to develop, deploy, run and optimize Spring Boot 3 Application on AWS Lambda using AWS Serverless Java Container. In the parts [5](https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-5-introduction-to-aws-lambda-web-adapter-m21), [6](https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-6-develop-application-with-aws-lambda-web-adapter-h88) and [7](https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-7-measuring-cold-and-warm-starts-with-aws-lambda-web-adapter-310h) we did the same but for AWS Lambda Web Adapter. In this part of the series we'll introduce another alternative, the framework called [Spring Cloud Function](https://spring.io/projects/spring-cloud-function), concepts behind it and especially its integration with AWS Lambda.
## Spring Cloud Function
Spring Cloud Function is a project with the following high-level goals:
- Promote the implementation of business logic via functions.
- Decouple the development lifecycle of business logic from any specific runtime target so that the same code can run as a web endpoint, a stream processor, or a task. One of these specific runtime targets can be AWS Lambda which will be focus of this article.
- Support a uniform programming model across serverless providers, as well as the ability to run standalone (locally or in a PaaS).
- Enable Spring Boot features (auto-configuration, dependency injection, metrics) on serverless providers.
- Abstract away all the transport details and infrastructure, allowing the developer to keep all the familiar tools and processes and focus firmly on business logic.
A simple function application (in the context of Spring) is an application that contains beans of type Supplier, [Java 8 Function interface](https://docs.oracle.com/javase/8/docs/api/java/util/function/Function.html) or Consumer.
## Spring Cloud Function on AWS Lambda
With [Spring Cloud Function on AWS Lambda](https://docs.spring.io/spring-cloud-function/reference/adapters/aws-intro.html), the [AWS adapter takes a Spring Cloud Function](https://github.com/spring-cloud/spring-cloud-function/tree/main/spring-cloud-function-adapters/spring-cloud-function-adapter-aws) app and converts it into a form that can run in AWS Lambda. On AWS, this means that a simple function bean should somehow be recognized and executed in the AWS Lambda environment. This fits very well into the AWS Lambda model with Amazon API Gateway in front, which, similar to a Java 8 Function, receives the (HTTP) request, executes some business logic, and then sends the (HTTP) response back to the caller.
```java
@SpringBootApplication
public class FunctionConfiguration {

    public static void main(String[] args) {
        SpringApplication.run(FunctionConfiguration.class, args);
    }

    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}
```
The diagram below describes the request flow in a typical web application on AWS. The AWS Request Adapter converts the JSON coming from the Lambda function into an [HttpServletRequest](https://jakarta.ee/specifications/platform/9/apidocs/jakarta/servlet/http/httpservletrequest), which then invokes the [Spring Dispatcher Servlet](https://docs.spring.io/spring-framework/reference/web/webmvc/mvc-servlet.html), which in turn interacts with our Spring Boot application at the API level without starting a web server (i.e. Tomcat). The response then flows back, and the AWS Response Adapter converts the [HttpServletResponse](https://jakarta.ee/specifications/platform/9/apidocs/jakarta/servlet/http/httpservletresponse) into JSON, which the Lambda function sends back to API Gateway.

## Conclusion
In this part of the series, we introduced Spring Cloud Function and especially its integration with AWS Lambda and concepts behind it. In next part we'll learn how to develop and deploy application using this framework. | vkazulkin |
1,899,075 | How to Create an AI-Powered CLI with Pieces | Introduction Integrating AI into Command Line Interfaces (CLIs) can transform how... | 0 | 2024-06-24T15:06:35 | https://dev.to/arindam_1729/how-to-create-an-ai-powered-cli-with-pieces-463j | node, webdev, javascript, beginners | ## Introduction
Integrating AI into Command Line Interfaces (CLIs) can transform how developers interact with their tools, making tasks more efficient and intelligent.
In this article, we'll learn how to use Pieces to build a CLI that can ask questions, get formatted responses, and search Stack Overflow for relevant coding issues.
Sounds Interesting?
So, without delaying further, Let's START!
## **What is Pieces OS Client?**

Pieces OS Client is a flexible database that helps us to save, create, and improve code snippets throughout our development process. It includes built-in Local Large Language Models that can answer questions and generate strong code solutions, no matter which development tool we use.
Pieces offers different language-based SDKs to utilize the wide range of functionalities provided by Pieces OS. For this tutorial, we'll be using the [TypeScript SDK](https://github.com/pieces-app/pieces-os-client-sdk-for-typescript).
## **What Can our CLI Do?**
Here is a high-level list of some of the features our CLI will be able to perform by the end of this article:
1. Answer Coding Questions: Provide answers to coding-related questions using its built-in language models.
2. Search Stack Overflow: Search Stack Overflow for relevant answers to coding issues.
3. Interactive Mode: An interactive mode where developers can have a continuous conversation with the AI.
You can perform several other actions through the Pieces OS Client and can discover the other endpoints and SDKs through the [Open Source by Pieces Repo](https://github.com/pieces-app/opensource).
## Building Pieces CLI
### Prerequisites
Before starting the project, make sure you have Pieces OS running in the background. This is crucial because Pieces OS will serve as the backbone of our CLI, providing the necessary database and language model functionalities.
You can Download Pieces OS from [here](https://docs.pieces.app/installation-getting-started/what-am-i-installing).
### Basic Setup
First, we'll create a simple Node.js project with the following command:
```bash
npm init -y
```
This command will create a `package.json` file with default settings. The `-y` flag automatically answers "yes" to all prompts, allowing for a quick setup.
Next, we'll install the Pieces OS client package by running the following command:
```bash
npm i @pieces.app/pieces-os-client
```
Now, we'll create an `index.js` file in the root directory of our project. This file will serve as the entry point for our CLI application. We'll also import the installed package along with Node's built-in `os` module.
```javascript
#!/usr/bin/env node
import * as Pieces from '@pieces.app/pieces-os-client';
import os from 'os';
```
> 💡The #!/usr/bin/env node line at the top of the file is called a shebang. It allows the script to be run as an executable from the command line.
**Pieces OS** uses different localhost ports depending on the operating system. We'll check the user's platform and set the appropriate port. Add the following code to `index.js`:
```javascript
const platform = os.platform();
const port = platform === 'linux' ? 5323 : 1000;
```
Here, the `os.platform()` gets the user's OS and changes the port accordingly.
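As a standalone illustration of the same idea, here is a tiny helper (the `portForPlatform` name is ours, not part of the article's code) that maps a Node.js platform identifier to the local Pieces OS port:

```javascript
// Map a Node.js platform identifier (the value returned by os.platform())
// to the local Pieces OS port: Linux builds listen on 5323, macOS/Windows on 1000.
function portForPlatform(platform) {
  return platform === 'linux' ? 5323 : 1000;
}

console.log(portForPlatform('linux'));  // 5323
console.log(portForPlatform('darwin')); // 1000
```

Keeping this mapping in one small function makes it easy to extend if Pieces OS ever adds another platform-specific port.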
Next, we'll create an instance of the QGPTApi with the Pieces configuration. This instance will allow us to interact with the Pieces OS API.
```javascript
const configuration = new Pieces.Configuration({
basePath: `http://localhost:${port}`
});
const apiInstance = new Pieces.QGPTApi(configuration);
```
In this code, we create a new `Configuration` object with the `basePath` set to the appropriate localhost URL based on the user's platform.
With that, we have completed the basic setup of our project.
> (optional) To verify that everything is working correctly, write this code in the index.js file
>
> ```javascript
> const healthInstance = new Pieces.WellKnownApi(configuration)
>
> async function fetchHealthData() {
> try {
> const data = await healthInstance.getWellKnownHealth();
> console.log(data); // ok
> } catch (error) {
> console.error(error);
> }
> }
>
> fetchHealthData();
> ```
>
> and Run the following script:
>
> ```bash
> node index.js
> ```
>
> If everything is set up correctly, you should see the message `ok`, indicating that the connection to Pieces OS was successful.
### Implementing the AskQuestion Function
Next, we'll create our main `askQuestion` function, which will communicate with the `QGPTApi` instance we created previously.
To improve the user experience, we'll install the `ora` package:
```bash
npm i ora
```
This package provides a nice animation while generating the response.
Now, we will define the `askQuestion` function. This function will take a query as an argument, send it to the `QGPTApi`, and return the response.
```javascript
const askQuestion = async (query) => {
// Declare the spinner in function scope so it is visible in the try/catch below,
// and start it only when the CLI is not in interactive mode
let spinner;
if (!interactiveMode) {
spinner = ora('Generating response...').start();
}
// Define the parameters for the API request
const params = {
query,
relevant: {
iterable: []
},
};
try {
// Send the query to the API and get the result
const result = await apiInstance.question({ qGPTQuestionInput: params });
// If the CLI is not in interactive mode, stop the spinner and display success message
if (!interactiveMode) {
spinner.succeed('Response generated.');
}
// Return the text of the first answer
return result.answers.iterable[0].text;
} catch (error) {
// If an error occurs, stop the spinner and display an error message
if (!interactiveMode) {
spinner.fail('Error generating response.');
}
console.error('Error calling API:', error);
throw error;
}
};
```
> Note:
>
> The QGPT endpoint handles queries and provide relevant responses using the QGPT model. This endpoint is part of the QGPTApi and is used to interact with the model by sending questions and receiving answers.
>
> The endpoint processes the input query, utilizes relevant context if provided, and generates a response based on the QGPT model's capabilities.
>
> For more information, check out the [docs](https://docs.pieces.app/build/reference/typescript/apis/QGPTApi).
### Displaying Help and Version Information
While building a CLI, one of the crucial steps is providing clear instructions on how to use the tool. We usually provide a help message that lists all the commands a user can run.
Let's Implement that in our tool:
```javascript
const displayHelp = () => {
console.log(
`\x1b[1mWelcome to Pieces CLI | By Arindam\x1b[0m
Usage: pieces-cli [options] [query]
Options:
-i, --interactive Enter interactive mode
-h, --help Display this help message
-v, --version Display the version number
Examples:
pieces-cli "What is the capital of France?"
pieces-cli -i
pieces-cli --help
pieces-cli --version`
);
};
const isHelp = process.argv.includes("-h") || process.argv.includes("--help");
if(isHelp){
displayHelp();
process.exit(0);
}
```
With this, if the user uses `-h` or `--help` in the command, it will show the help message.
Similarly, we'll display the Version of the tool when the user uses the `-v` or `--version` command. Here's the code for that:
```javascript
import pkgJSON from './package.json';
const version = pkgJSON.version || '1.0.4';
const isVersion = process.argv.includes("-v") || process.argv.includes("--version");
if(isVersion){
console.log(`pieces-cli version: ${version}`);
process.exit(0);
}
```
This enhances the user experience and makes our tool more accessible.
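Both checks are simple argv scans; they could be factored into one reusable helper (the `hasFlag` name is ours, not part of the article's code):

```javascript
// Return true if any of the given flags appears in the argument list.
function hasFlag(argv, ...flags) {
  return flags.some((f) => argv.includes(f));
}

console.log(hasFlag(['node', 'cli.js', '--help'], '-h', '--help')); // true
console.log(hasFlag(['node', 'cli.js', 'hello'], '-v', '--version')); // false
```

In the real CLI you would call it as `hasFlag(process.argv, '-h', '--help')`, which keeps each flag check to a single line.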
### Adding StackOverflow Search
In this section, we'll add a function to search Stack Overflow for relevant coding issues based on a query. This will improve the User Experience by providing the links to relevant Stack Overflow discussions based on their queries.
For that, let's install the `axios` package to make HTTP requests using the following command:
```bash
npm i axios
```
Now, we'll define the `searchStackOverflow` function. This function will take a query as an argument, send it to the Stack Overflow API, and return a list of relevant links.
```javascript
const searchStackOverflow = async (query) => {
// Declare the spinner in function scope so it is visible in the try/catch below,
// and start it only when the CLI is not in interactive mode
let spinner;
if (!interactiveMode) {
spinner = ora('Searching Stack Overflow...').start();
}
try {
// Send the query to the Stack Overflow API and get the response
const response = await axios.get('https://api.stackexchange.com/2.3/search/advanced', {
params: {
order: 'desc',
sort: 'relevance',
q: query,
site: 'stackoverflow'
}
});
// If the CLI is not in interactive mode, stop the spinner and display success message
if (!interactiveMode) {
spinner.succeed('Stack Overflow search completed.');
}
// Return the list of links to relevant Stack Overflow discussions
return response.data.items.map(item => item.link);
} catch (error) {
// If an error occurs, stop the spinner and display an error message
if (!interactiveMode) {
spinner.fail('Error searching Stack Overflow.');
}
console.error('Error searching Stack Overflow:', error.message);
return [];
}
};
```
By following these steps, we have successfully implemented the `searchStackOverflow` function. This feature can be particularly useful for developers looking for solutions to coding problems.
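The request above is a plain GET with query parameters; for illustration, here is how the final Stack Exchange URL can be assembled with the standard `URL` API instead of axios (the `buildSearchUrl` name is ours):

```javascript
// Build the Stack Exchange search URL that axios assembles internally
// from its `params` option.
function buildSearchUrl(query) {
  const url = new URL('https://api.stackexchange.com/2.3/search/advanced');
  url.searchParams.set('order', 'desc');
  url.searchParams.set('sort', 'relevance');
  url.searchParams.set('q', query);
  url.searchParams.set('site', 'stackoverflow');
  return url.toString();
}

console.log(buildSearchUrl('node readline'));
```

`URLSearchParams` handles the percent/plus encoding of the query for you, which is easy to get wrong with manual string concatenation.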
### Implementing Interactive Mode
Next, we'll create an interactive mode that will allow users to continuously interact with the CLI. This feature is especially helpful for tasks that require ongoing input and feedback.
Node.js ships with a built-in `readline` module for handling input and output, so there is nothing extra to install for this step.
Next, we'll check if the user has passed the `-i` or `--interactive` flag to enter interactive mode:
```javascript
const interactiveMode = process.argv.includes("-i") || process.argv.includes("--interactive");
```
Now, we'll use the `readline` module to create an interface for reading input from the user. This interface will handle input and output streams and provide a prompt for the user.
```javascript
import readline from 'readline';
const rl = readline.createInterface({
input: process.stdin,
output: process.stdout,
prompt: 'pieces-cli> ',
historySize: 100,
completer: (line) => {
const completions = ['exit', 'help', 'version', 'clear', 'model'];
const hits = completions.filter((c) => c.startsWith(line));
return [hits.length ? hits : completions, line];
}
});
```
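The `completer` above is plain prefix matching; isolated into a standalone function (name ours), it behaves like this:

```javascript
// readline-style completer: return [matches, originalLine].
// If nothing matches the typed prefix, offer all completions.
function complete(line) {
  const completions = ['exit', 'help', 'version', 'clear', 'model'];
  const hits = completions.filter((c) => c.startsWith(line));
  return [hits.length ? hits : completions, line];
}

console.log(complete('he')); // [ [ 'help' ], 'he' ]
console.log(complete('v'));  // [ [ 'version' ], 'v' ]
```

readline calls this function each time the user presses Tab and uses the returned matches to complete the current line.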
We'll then call `rl.prompt()` to display the prompt to the user.
```javascript
rl.prompt();
```
After that, we'll set up an event listener for the `line` event, which is triggered whenever the user enters a line of input. Here we will define the logic to handle specific commands.
```javascript
rl.on('line', async (line) => {
const input = line.trim();
switch (input.toLowerCase()) {
case 'exit':
rl.close();
break;
case 'help':
console.log("Available commands: exit, help, version, clear, model");
rl.prompt();
break;
case 'version':
      console.log(`pieces-cli version: ${version}`);
rl.prompt();
break;
case 'clear':
console.clear();
rl.prompt();
break;
default:
// ...default case
}
}).on('close', () => {
console.log('Exiting interactive mode.');
process.exit(0);
});
```
Here, we have defined the logic to handle specific commands:
* `exit`: Close the readline interface and exit the process.
* `help`: Display a list of available commands.
* `version`: Display the version number of the CLI.
* `clear`: Clear the terminal screen.
For any other input, we assume it might be a question or command that needs further processing. We call an asynchronous function `askQuestion(input)` to handle the input and print the formatted response using `formatResponse(response)`.
```javascript
// default case
try {
const response = await askQuestion(input);
console.log(formatResponse(response));
// Check if the query is related to coding and search Stack Overflow if necessary
if (/code|error|exception|bug|issue|problem|function|method|class|variable|syntax|compile|runtime/i.test(input)) {
const stackOverflowLinks = await searchStackOverflow(input);
if (stackOverflowLinks.length > 0) {
console.log(chalk.blue("\nRelevant Stack Overflow links:"));
stackOverflowLinks.forEach(link => console.log(chalk.blue(link)));
} else {
console.log(chalk.yellow("\nNo relevant Stack Overflow links found."));
}
}
} catch (error) {
console.error('Error calling API:', error);
}
rl.prompt();
break;
```
If the input seems related to coding (determined by a regular expression test), we search Stack Overflow for relevant links. If links are found, we print them; otherwise, we indicate that no relevant links were found.
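That coding-related check is just a keyword regex; pulled out on its own (the `looksLikeCodingQuestion` helper name is ours), it behaves like this:

```javascript
// Heuristic: treat the query as coding-related if it mentions
// any of these programming keywords (case-insensitive).
const CODING_RE = /code|error|exception|bug|issue|problem|function|method|class|variable|syntax|compile|runtime/i;

function looksLikeCodingQuestion(input) {
  return CODING_RE.test(input);
}

console.log(looksLikeCodingQuestion('Why does my function throw a TypeError?')); // true
console.log(looksLikeCodingQuestion('What is the capital of France?'));          // false
```

Being a substring match, it will also fire on words that merely contain a keyword (e.g. "tissue" contains "issue"), which is an acceptable trade-off for a simple heuristic.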
Finally, when the `readline` interface is closed (e.g., by the `exit` command), the `close` event is triggered. We'll then log a message indicating that we are exiting interactive mode and then exit the process.
```javascript
.on('close', () => {
console.log('Exiting interactive mode.');
process.exit(0);
});
```
### Formatting Responses
When working with text responses, especially those that include code snippets, it's important to format them in a way that's easy to read and understand. It improves the user experience of the tool.
Now let's Implement this in our CLI tool.
First, we'll create a `formatResponse` function takes a text input and formats it by applying different styles to different parts of the text, such as code blocks, headers, and list items.
```javascript
// Requires: npm i chalk highlight.js
import chalk from 'chalk';
import hljs from 'highlight.js';

const formatResponse = (text) => {
const lines = text.split('\n');
let formattedText = '';
let inCodeBlock = false;
let codeLang = '';
lines.forEach(line => {
if (line.startsWith('```')) {
inCodeBlock = !inCodeBlock;
codeLang = inCodeBlock ? line.slice(3).trim() : '';
} else if (inCodeBlock) {
const highlighted = codeLang ? hljs.highlight(line, { language: codeLang }).value : hljs.highlightAuto(line).value;
formattedText += applyChalk(highlighted) + '\n';
} else if (line.startsWith('# ')) {
formattedText += chalk.bold(line) + '\n';
} else if (line.startsWith('* ')) {
formattedText += chalk.green('• ') + line.slice(2) + '\n';
} else if (line.startsWith('**') && line.endsWith('**')) {
formattedText += chalk.bold(line.slice(2, -2)) + '\n';
} else {
formattedText += line + '\n';
}
});
return formattedText;
};
```
Here, if a line starts with triple backticks, we toggle the `inCodeBlock` flag and set `codeLang` when entering a code block.
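That toggle can be isolated into a tiny standalone parser (the `tagLines` name is ours) that tags each line as code or prose:

```javascript
// Split markdown-ish text into prose and fenced-code lines,
// mirroring the inCodeBlock toggle in formatResponse.
function tagLines(text) {
  let inCode = false;
  const out = [];
  for (const line of text.split('\n')) {
    if (line.startsWith('```')) { inCode = !inCode; continue; }
    out.push({ code: inCode, line });
  }
  return out;
}

const tagged = tagLines('hello\n```js\nlet x = 1;\n```\nbye');
console.log(JSON.stringify(tagged));
```

Only lines between an opening and a closing fence are tagged as code; everything else is treated as prose and formatted with the header/list rules above.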
> Note: highlight.js wraps the highlighted tokens in HTML, so we need to remove those unwanted HTML tags.
So, we'll create the `cleanHtml` function, which uses a regular expression to remove HTML tags from a string and replaces HTML entities with their corresponding characters.
```javascript
const cleanHtml = (rawHtml) => {
let cleanText = rawHtml.replace(/<[^>]*>/g, '');
const htmlEntities = {
'&quot;': '"',
'&amp;': '&',
'&lt;': '<',
'&gt;': '>',
'&nbsp;': ' ',
'&apos;': "'",
'&cent;': '¢',
'&pound;': '£',
'&yen;': '¥',
'&euro;': '€',
'&copy;': '©',
'&reg;': '®'
};
cleanText = cleanText.replace(/&[a-zA-Z]+;/g, (match) => htmlEntities[match] || match);
cleanText = cleanText.replace(/\s+/g, ' ').trim();
return cleanText;
};
```
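To see what this does end to end, here is a trimmed, standalone copy of the same idea (entity map shortened and the `stripHtml` name is ours) run against a highlighted snippet:

```javascript
// Strip HTML tags and decode a few common entities, as cleanHtml does.
const entities = { '&quot;': '"', '&amp;': '&', '&lt;': '<', '&gt;': '>' };

function stripHtml(rawHtml) {
  let text = rawHtml.replace(/<[^>]*>/g, '');
  text = text.replace(/&[a-zA-Z]+;/g, (m) => entities[m] || m);
  return text.replace(/\s+/g, ' ').trim();
}

console.log(stripHtml('<span class="hljs-keyword">const</span> x = &quot;hi&quot;;'));
// const x = "hi";
```

Unknown entities are left untouched by the fallback `m => entities[m] || m`, so the function degrades gracefully.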
Finally, we'll add the `applyChalk` function that applies chalk styles to highlighted code by replacing HTML span tags with corresponding chalk styles.
```javascript
const applyChalk = (highlighted) => {
return cleanHtml(highlighted.replace(/<span class="hljs-(\w+)">(.*?)<\/span>/g, (match, p1, p2) => {
switch (p1) {
case 'keyword': return chalk.blue(p2);
case 'string': return chalk.green(p2);
case 'built_in': return chalk.cyan(p2);
case 'comment': return chalk.gray(p2);
case 'title': return chalk.yellow(p2);
case 'params': return chalk.magenta(p2);
case 'function': return chalk.red(p2);
case 'operator': return chalk.white(p2);
default: return p2;
}
}));
};
```
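The same span-to-style substitution can also be sketched without chalk, using raw ANSI escape codes directly (the codes and the `ansify` name are illustrative, not from the article):

```javascript
// ANSI color codes standing in for the chalk calls above.
const STYLES = { keyword: '\x1b[34m', string: '\x1b[32m', comment: '\x1b[90m' };
const RESET = '\x1b[0m';

function ansify(highlighted) {
  return highlighted.replace(
    /<span class="hljs-(\w+)">(.*?)<\/span>/g,
    (match, cls, body) => (STYLES[cls] ? STYLES[cls] + body + RESET : body)
  );
}

console.log(ansify('<span class="hljs-keyword">return</span> <span class="hljs-string">"ok"</span>;'));
```

Token classes without an entry in `STYLES` simply pass their text through unstyled, the same fallback behavior as the `default` case in `applyChalk`.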
And with that, we've successfully created our CLI tool using Pieces!!
Kudos to us!!
## What's Next?
Now that you've built a basic CLI tool using Pieces OS Client, you can improve and add more features to it.
I've also published the CLI tool as an [npm package](https://www.npmjs.com/package/pieces-cli), which you can check out and use right away.
The code for this project is also public, and you can check it out on [GitHub](https://github.com/Arindam200/PiecesCLI). Feel free to contribute by fixing bugs, adding new features, or improving the documentation.
Your contributions can help make this tool even more powerful and user-friendly.
## Conclusion
Overall, integrating AI into your CLI using Pieces OS Client provides a seamless and efficient way to enhance your terminal workflow. It not only improves productivity but also enables more sophisticated interactions with AI models.
Install it today and transform your workflow with the ease of AI at your fingertips.
If you found this blog post helpful, please consider sharing it with others who might benefit.
For Paid collaboration mail me at : [arindammajumder2020@gmail.com](mailto:arindammajumder2020@gmail.com)
Connect with me on [Twitter](https://twitter.com/intent/follow?screen_name=Arindam_1729), [LinkedIn](https://www.linkedin.com/in/arindam2004/), [Youtube](https://www.youtube.com/channel/@Arindam_1729) and [GitHub](https://github.com/Arindam200).
Thank you for Reading!
 | arindam_1729 |
1,899,074 | How To Choose The Best Slot Site For Your Needs | Choosing the best slot site for your needs requires careful consideration of several factors to... | 0 | 2024-06-24T15:04:51 | https://dev.to/iamfranklin/how-to-choose-the-best-slot-site-for-your-needs-4bjc | Choosing the best slot site for your needs requires careful consideration of several factors to ensure a safe, enjoyable, and rewarding experience. With countless online slot sites available, it can be overwhelming to navigate through the options.
Here are some essential tips to help you make an informed decision.
<h2>Assess the Site’s Reputation and Security</h2>
The first aspect to consider is the reputation and security of the slot site. Before you start playing, review the site and ensure it is licensed and regulated by reputable authorities, such as the UK Gambling Commission, Malta Gaming Authority, or Gibraltar Regulatory Authority. These licenses ensure that the site operates legally and adheres to strict standards of fairness and security.

Additionally, read reviews and testimonials from other players or experts to gauge the site's reputation. Many players opt to review <a href="https://kpkesihatan.com/">슬롯사이트 순위</a> online before choosing a site in order to ensure a good pick. Trusted review sites and online forums can provide valuable insights into the site's reliability, customer service, and payout efficiency. Ensure that the site uses advanced encryption technology to protect your personal and financial information.
<h2>Evaluate the Game Selection</h2>
A diverse game selection is crucial for an engaging slot experience. The best slot sites offer a wide variety of games from top developers like NetEnt, Microgaming, Playtech, and Evolution Gaming. Check if the site provides different types of slots, such as classic slots, video slots, and progressive jackpot slots.
Moreover, consider the site's game library in terms of themes, features, and volatility. A good site should cater to various preferences, whether you enjoy simple three-reel slots or complex multi-line video slots with bonus rounds and free spins. The availability of demo versions or free play options can also help you test games before wagering real money.
<h2>Check for Bonuses and Promotions</h2>
Bonuses and promotions are significant factors that can enhance your gaming experience and extend your playtime. Look for sites that offer generous welcome bonuses, no-deposit bonuses, free spins, and ongoing promotions like reload bonuses, cashback offers, and loyalty programs.
Always review the fine print associated with these bonuses. Pay attention to wagering requirements, maximum bet limits, game restrictions, and expiration dates. The best slot sites provide transparent and fair bonus terms that give you a genuine chance to benefit from the offers.
<h2>Consider Payment Methods and Withdrawal Times</h2>
Convenient and secure banking options are vital for a smooth gambling experience. The best slot sites offer a variety of payment methods, including credit/debit cards, e-wallets like PayPal and Skrill, bank transfers, and even cryptocurrencies. A growing number of players are starting to use crypto at crypto casinos and <a href="https://anonymouscasinos.ltd/">anonymous casinos</a> as they provide an extra layer of privacy when wagering online.
Evaluate the deposit and withdrawal processes, including the minimum and maximum limits, processing times, and any associated fees. Fast and hassle-free withdrawals are a hallmark of a reputable slot site. Look for sites that process withdrawal requests promptly and offer multiple withdrawal options to suit your preferences.
<h2>Look for Mobile Compatibility</h2>
With the increasing popularity of mobile gaming, it’s essential to choose a slot site that offers a seamless mobile experience. The best slot sites are fully optimized for mobile devices, providing a responsive design and easy navigation on smartphones and tablets.
Check if the site offers a dedicated mobile app or a mobile-friendly website that allows you to play your favorite slots on the go. The mobile platform should offer the same level of functionality, game selection, and security as the desktop version.
<h2>Evaluate Customer Support</h2>
Reliable <a href="https://www.iopex.com/blogs/gaming-customer-support-services-do-we-really-need-it/">customer support is crucial</a> for resolving any issues or answering queries that may arise during your gaming experience. The best slot sites offer multiple support channels, including live chat, email, and phone support, available 24/7.
Test the responsiveness and helpfulness of the customer support team by asking a few questions before signing up. Efficient and friendly customer service can significantly enhance your overall experience and provide peace of mind knowing that assistance is readily available when needed.
<h2>Explore Additional Features</h2>
Some slot sites offer additional features that can enhance your gaming experience. These may include:
<ul>
<li>VIP Programs: Exclusive rewards, personalized bonuses, and dedicated account managers for loyal players.</li>
<li>Tournaments: Opportunities to compete against other players for prizes and bragging rights.</li>
<li>Social Features: Community features like chat rooms and forums where you can <a href="https://digiday.com/sponsored/how-gaming-platforms-are-driving-social-connection/">interact with other players</a>.</li>
</ul>
<h2>Assess Overall User Experience</h2>
The overall user experience of the slot site is another critical factor to consider. The site should have an intuitive interface, easy navigation, and fast loading times. The design and layout should be visually appealing and user friendly, ensuring that you can find your favorite games and features without any hassle.
<h2>Conclusion</h2>
Choosing the best slot site for your needs involves a combination of thorough research and personal preferences. By assessing the site’s reputation, game selection, bonuses, payment methods, mobile compatibility, customer support, additional features, and overall user experience, you can make an informed decision that aligns with your gaming style and preferences. | iamfranklin |
1,899,073 | NETWORK REQUEST FAIL | Hi @everyone, please i have an issue and i don’t know if anyone here has faced something like that... | 0 | 2024-06-24T15:04:40 | https://dev.to/charlesthetechguy/network-request-fail-2481 | Hi @everyone, please, I have an issue and I don’t know if anyone here has faced something like this before. The error is “network request fail” while working on this mobile app. It was actually working at first, but since I connected to Supabase and started running it locally with Docker, I have been getting “NETWORK REQUEST FAIL”. Please, how can I resolve this? | charlesthetechguy |
1,899,072 | AWS open source newsletter, #200 | Edition #200 Welcome to a milestone edition of this newsletter, number #200!! Wow, it... | 0 | 2024-06-24T15:04:08 | https://community.aws/content/2iKXkQejzUlmpNb0r1NrX37AAwC/aws-open-source-newsletter-200 | opensource, aws | ## Edition #200
Welcome to a milestone edition of this newsletter, number #200!! Wow, it feels like quite an achievement. Before diving into this newsletter, a big thank you for sticking with me. Time has flown by so quickly, and I am looking forward to the next 100. As I have done in a few of the previous milestone issues, I wanted to share a few interesting stats from sharing open source projects with you over the past few years.
* Through the power of greyskull (ok, well maybe not, but through the awesome sed/awk tools) I have featured over 3000 contributors in this newsletter, which just blows my mind. So thank you all for creating so many amazing open source projects, blog posts, videos, and other kinds of content that I share in this open source newsletter. I remain always, very humbled by your amazingness.
* Readers have clicked on over 5K links (5,121 if you are a lover of precision), with 1.76 million* engagements to access the open source projects that I share in the newsletter. I have shared a total of 1,591 new open source projects over the past 199 editions of this newsletter, which is just incredible. But which ones seem to attract developers' attention the most? The current top three projects that you all seem to love are the following (if you have not checked them out, why not find out what all the interest is about):

* [iac-devtools-cli-for-cdk](https://aws-oss.beachgeek.co.uk/30h) - a command line interface tool that automates many of the tedious tasks of building, adding to, documenting, and extending AWS CDK applications
* [aws-observability/cdk-aws-observability-accelerator](https://aws-oss.beachgeek.co.uk/31y) - the AWS Observability Accelerator for CDK is a set of opinionated modules to help you set up observability for your AWS environments
* [ariga/atlas](https://aws-oss.beachgeek.co.uk/364) - a tool for managing database schemas (not AWS specific, this one)

[*the number is probably higher as I did not track links for the first year of the newsletter]
* When it comes to folks who read this newsletter, they come from all over the world. Ethiopia, Azerbaijan, Botswana, Tajikistan, Jordan, Costa Rica, and many many more (104 countries to be specific). The top three countries, which will probably be of no surprise, are the United States, India, and the UK. (Happy to see Spain is in the top 10!). I want to send you all a big thank you for visiting this humble newsletter, and I hope you have enjoyed what I have shared with you. It has been a privilege to share and raise awareness of some of the cool work being done by open source developers, everywhere.
* I am proud that I am always listening to developer feedback on how to improve this newsletter, including the latest change where I put together a repo that separates out all the projects featured in this newsletter. If you were not aware of it, check out the [newsletter-oss-projects](https://github.com/094459/newsletter-oss-projects) repo, which is now one of my most active repos. If you are a regular GitHub user, you can also keep informed when I drop the next edition of this newsletter, as I create a new release in that repo. This is your regular reminder that all feedback is a gift, so if you have ideas you think would improve this newsletter, I want to hear about it. Please use the feedback link at the bottom of this newsletter.
I would love to hear any stories or highlights that readers have. I have five AWS open source challenge coins that I am going to send to folks who are willing to share them with me, so get in touch via the comments below.
---
I have been totally absorbed by Euro 2024 so far (tell me in the comments who you fancy, but my prediction for the final four will be Spain, Portugal, Germany and Holland - as to the winner, I will make my prediction in the next edition!), but I have still found time to source only the very best open source projects and content for you. This edition features more great open source projects for you to practice your four freedoms, including a really cool project from our CTO Dr Werner Vogels: a Rust-based CLI that helps you transcribe audio using the latest Amazon Bedrock models. There are also plugins for MLflow that generate SigV4 headers, a project to help you explore and optimise S3 file uploads, an IAM visualisation tool, an LLM-based Slackbot that provides chat directly from your Slack channels, a tool for OpenSearch users to help capture user insights, a tool to help you quickly determine your RDS disk utilisation from the command line, as well as the usual assortment of other demos for you to check out.
Also in this edition we have content covering open source technologies such as O3DE, Lustre, OpenSearch, Kubernetes, MariaDB, TinkerPop, Gremlin, AWS ParallelCluster, AWS Amplify, MySQL, PostgreSQL, Apache Kafka, Hive, Locust, Ray, OpenShift, ROSA, Apache Airflow, cfn-lint, Valkey, AWS CDK, cdktf, Keycloak, and Karpenter.
As always, get in touch if you want me to feature your projects in this open source newsletter. Until the next time, I will leave you to dive into the good stuff!
### Latest open source projects
*The great thing about open source projects is that you can review the source code. If you like the look of these projects, make sure you that take a look at the code, and if it is useful to you, get in touch with the maintainer to provide feedback, suggestions or even submit a contribution. The projects mentioned here do not represent any formal recommendation or endorsement, I am just sharing for greater awareness as I think they look useful and interesting!*
### Tools
**sagemaker-mlflow**
[sagemaker-mlflow](https://aws-oss.beachgeek.co.uk/3ya) is a plugin that generates Signature V4 (SigV4) headers in each outgoing request to the Amazon SageMaker with MLflow capability, determines the URL of the capability to connect to tracking servers, and registers models to the SageMaker Model Registry. It generates a token with the SigV4 algorithm that the service uses to conduct authentication and authorisation using AWS IAM.
If you missed the announcement, you can now use MLFlow on AWS as part of the fully managed MLflow on Amazon SageMaker service. My colleague Veliswa has put together a post that walks you through this new managed service, as well as how to get started. Go check the post out, [Announcing the general availability of fully managed MLflow on Amazon SageMaker](https://aws-oss.beachgeek.co.uk/3y9)
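For context on what SigV4 signing involves: the signing-key derivation is the publicly documented part of the algorithm. The sketch below is not the plugin's code (the plugin also canonicalises and signs each request, which is omitted here); it only shows the documented HMAC key-derivation chain, using dummy values:

```python
import hashlib
import hmac


def _hmac_sha256(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def sigv4_signing_key(secret_key: str, date_stamp: str, region: str, service: str) -> bytes:
    """Derive a SigV4 signing key via AWS's documented HMAC chain.

    date_stamp is YYYYMMDD. The derived key is then used to sign the
    "string to sign" built from the canonicalised request (not shown).
    """
    k_date = _hmac_sha256(("AWS4" + secret_key).encode("utf-8"), date_stamp)
    k_region = _hmac_sha256(k_date, region)
    k_service = _hmac_sha256(k_region, service)
    return _hmac_sha256(k_service, "aws4_request")


# Dummy example credentials (from AWS docs); never hard-code real keys.
key = sigv4_signing_key("wJalrXUtnFEMI/K7MDENG+bPxRfiCYEXAMPLEKEY",
                        "20240624", "us-east-1", "sagemaker")
```

Because the key is scoped to a date, region and service, a leaked signature cannot be replayed against a different service or on a different day.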
**distill-cli**
[distill-cli](https://aws-oss.beachgeek.co.uk/3yz) is a new project from Amazon CTO Dr Werner Vogels, which uses Amazon Transcribe and Amazon Bedrock to create summaries of your audio recordings (e.g., meetings, podcasts, etc.) directly from the command line. Distill CLI takes a dependency on Amazon Transcribe, and as such, supports the following media formats: AMR, FLAC, M4A, MP3, MP4, Ogg, WebM, WAV. It is great to feature this latest project, with the previous one being featured in [#197](https://community.aws/content/2gPNtsdSfQRIpmbUrNyPrjUg54D/aws-open-source-newsletter-197). To go with this repo, there is a post too, [Introducing Distill CLI: An efficient, Rust-powered tool for media summarization](https://aws-oss.beachgeek.co.uk/3yy) where Werner shares his experience building this tool in Rust, and provides some closing thoughts too.

**config-rds-ca-expiry**
[config-rds-ca-expiry](https://aws-oss.beachgeek.co.uk/3z7) provides sample code to create a custom AWS Config rule to detect expiring CA certificates. Everyone loves TLS certs, but we all hate it when we realise that stuff has broken because they expired. It can happen to anyone, so check this out, make sure you are proactively managing your certs on your Amazon RDS instances, and see how this differs from the out-of-the-box notifications you already get with Amazon RDS.
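At its heart, an expiry check like this is simple date arithmetic. Here is a minimal, hedged sketch (not the sample's actual code, and the 90-day window is an assumption): a real Config rule would read each certificate's `ValidTill` from the RDS `describe_certificates` API, whereas here it is passed in directly.

```python
from datetime import datetime, timedelta, timezone


def cert_is_expiring(valid_till: datetime, warn_days: int = 90) -> bool:
    """Return True if the certificate expires within warn_days.

    In a real AWS Config rule, valid_till would come from the RDS
    describe_certificates response; here it is supplied by the caller.
    """
    return valid_till - datetime.now(timezone.utc) <= timedelta(days=warn_days)


# A cert expiring in 30 days trips a 90-day warning window; one a year out does not.
soon = datetime.now(timezone.utc) + timedelta(days=30)
far = datetime.now(timezone.utc) + timedelta(days=365)
```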
**s3-diff-uploader**
[s3-diff-uploader](https://aws-oss.beachgeek.co.uk/3z0) is the latest project from open source good guy Damon Cortesi, which came about from some [experiments](https://www.linkedin.com/posts/dacort_i-wanted-to-experiment-recently-with-incremental-activity-7206314345599832065--95_) he was doing with incremental uploads of compressed files to S3. He decided to publish a simple proof-of-concept CLI tool that demonstrates how you can both append to and compress file uploads to S3. The result so far: it uses UploadPartCopy and the fact that you can concatenate gzip chunks to reduce the amount of data you need to upload directly.

**awsviz**
[awsviz](https://aws-oss.beachgeek.co.uk/3z1) is a super nice little tool from Bour Mohamed Abdelhadi that helps you quickly visualise your IAM policies. You can check out the hosted version of [awsviz](https://aws-oss.beachgeek.co.uk/3z3), and there are some sample policies to show you what you can expect. Check out the [use cases doc](https://aws-oss.beachgeek.co.uk/3z4) to see why you might want to try this tool out.
You can find out more, including watching a short video on this tool, by checking out the [original LinkedIn post here](https://aws-oss.beachgeek.co.uk/3z5).
**sparklepop**
[sparklepop](https://aws-oss.beachgeek.co.uk/3z2) is a simple Python package from Daniel B designed to check the free disk space of an AWS RDS instance. It leverages AWS CloudWatch to retrieve the necessary metrics. This package is intended for users who need a straightforward way to monitor disk space without setting up complex alerts.
**user-behavior-insights**
[user-behavior-insights](https://aws-oss.beachgeek.co.uk/3yx) This repository contains the OpenSearch plugin for the User Behavior Insights (UBI) capability. This plugin facilitates persisting client-side events (e.g. item clicks, scroll depth) and OpenSearch queries for the purpose of analyzing the data to improve search relevance and user experience.
### Demos, Samples, Solutions and Workshops
**slackrock**
[slackrock](https://aws-oss.beachgeek.co.uk/3yw) is a conversational AI assistant powered by Amazon Bedrock & your favorite cutting-edge frontier models. The project is focused on cost efficiency & simplicity, while supporting a wide variety of AI models with differing strengths & weaknesses to fit the widest array of use cases. Converse with your favourite LLMs without ever leaving Slack!
Corey Lane has also put together a blog post, [Introducing Slackrock: Conversing with AI in Slack Made Easy](https://aws-oss.beachgeek.co.uk/3yv) that walks you through this project, from how it works, how to run and configure it, and more.

**amazon-bedrock-slack-gateway**
[amazon-bedrock-slack-gateway](https://aws-oss.beachgeek.co.uk/3z8) lets you use Amazon Bedrock's generative AI to enable Slack channel members to access your organisation's data and knowledge sources via conversational question-answering. You can connect to your organisation's data via data source connectors and integrate it with Slack Gateway for Amazon Bedrock to enable access to your Slack channel members. It allows your users to converse with Amazon Bedrock using Slack Direct Messages (DMs) to ask questions and get answers based on company data, get help creating new content such as emails, and perform tasks. You can also invite it to participate in your team channels. In a channel, users can ask it questions in a new message, or tag it in a thread at any point. Get it to provide additional data points, resolve a debate, or summarise the conversation and capture next steps.
**build-neptune-graphapp-cdk**
[build-neptune-graphapp-cdk](https://aws-oss.beachgeek.co.uk/3z9) this repo provides a quick example on how to build a graph application with Amazon Neptune and AWS Amplify.

### AWS and Community blog posts
Each week I spent a lot of time reading posts from across the AWS community on open source topics. In this section I share what personally caught my eye and interest, and I hope that many of you will also find them interesting.
**This weeks essential reading**
* [AWS CloudFormation Linter (cfn-lint) v1](https://aws-oss.beachgeek.co.uk/3yb) walks you through some potential breaking changes for folk using cfn-lint, and covers what is changing and offers some transition guidance
* [Disaster recovery strategies for Amazon MWAA – Part 2](https://aws-oss.beachgeek.co.uk/3yf) follows up from a previous post on how to implement DR strategies for your Apache Airflow environments on AWS, this time diving deeper into implementation (with code examples) [hands on]

* [Introducing the Advanced Python Wrapper Driver for Amazon Aurora](https://aws-oss.beachgeek.co.uk/3yk) provides details on how to use some of the features of the Advanced Python Wrapper Driver [hands on]
**The best from around the Community**
AWS Community Builder Stefan Weber starts us off this week, with a short post, [How to Package .NET Lambda Functions Using AWS CDK](https://aws-oss.beachgeek.co.uk/3yo), that reminds everyone that writing down something you don't do frequently, so you have a reference for the future, is a great thing to do. Staying with AWS CDK for a moment, we have AWS Community Builder Sidath Munasinghe who has put together [Unleashing the Power of CDK and Terraform in Cloud Deployments](https://aws-oss.beachgeek.co.uk/3yp) that covers a topic that I don't see much content on, which is how to use CDK to create and deploy your application and infrastructure with Terraform, using cdktf. The final CDK post in this round up comes from a friend of the AWS open source newsletter, AWS Hero Ran Isenberg. In [Amazon CloudFormation Custom Resources Best Practices with CDK and Python Examples](https://aws-oss.beachgeek.co.uk/3yr), Ran explores CloudFormation custom resources, why you need them, and their different types. Make sure you check that post out.
AWS Community Builder Jakub Wołynko caught my interest in his post on Keycloak, [Install Keycloak on ECS(with Aurora Postgresql) ](https://aws-oss.beachgeek.co.uk/3yq). I have written a few times on this topic, and from the data I have it seems to be an area of a lot of interest, so check out Jakub's post if you want another way to deploy Keycloak in your AWS environment. From Amazon ECS to Kubernetes with AWS Community Builder Arseny Zinchenko, who walks you through a number of different options on how to ssh into your Karpenter nodes in the post, [AWS: Karpenter and SSH for Kubernetes WorkerNodes](https://aws-oss.beachgeek.co.uk/3ys). The final cloud native post this week is from AWS Community Builder Romar Cablao, who has put together a beginners post on running workloads on Amazon EKS. There are lots of these kinds of posts, and I still try and check as many as I can. Occasionally you see one that helps you better understand how Kubernetes works, or even better, will help you explain it to others. Romar has done this I think, in his post, [Back2Basics: Running Workloads on Amazon EKS](https://aws-oss.beachgeek.co.uk/3yt).
To close things up this week, I have my colleague Abhishek Gupta, who shares how you can get started with Valkey in the post, [Getting started with Valkey using JavaScript](https://aws-oss.beachgeek.co.uk/3yu). Make sure you check that post out, and follow Abhishek if you want more Valkey content, as I am sure you will be seeing a lot more coming up.
**Cloud Native**
* [OpenShift Virtualization on Red Hat OpenShift Service on AWS (ROSA)](https://aws-oss.beachgeek.co.uk/3y7) dives deep into the function of OpenShift Virtualization, and how running OpenShift Virtualization on ROSA allows customers to add the benefit of a managed service to underpin their virtualisation stack
* [Simplify EKS Cluster Deployment and Management with Kyndryl Cloud-Native Solution (KCNS)](https://aws-oss.beachgeek.co.uk/3yc) explains how to simplify, optimize, and automate container orchestration and application modernization with Kyndryl Cloud Native Services (KCNS) for EKS (Elastic Kubernetes Service) [hands on]
* [Ray Serve High Availability](https://aws-oss.beachgeek.co.uk/3y4) provides you with a detailed solution on how you can deploy a critical component of a Ray Cluster, in a highly available and resilient fashion [hands on]
* [Host the Whisper Model with Streaming Mode on Amazon EKS and Ray Serve](https://aws-oss.beachgeek.co.uk/3yd) shows you how to build a scalable and distributed ML inference solution using the Whisper model for streaming audio transcription, deployed on Amazon EKS and using Ray Serve [hands on]

* [How to create a pipeline for hardening Amazon EKS nodes and automate updates](https://aws-oss.beachgeek.co.uk/3yh) provides a hands on guide that shows you how to create a workflow to harden Amazon EKS-optimized AMIs by using the CIS Amazon Linux 2 or Amazon Linux 2023 Benchmark and to automate the update of EKS node groups [hands on]
**Other posts to check out**
* [New – Rightsizing Recommendations for Amazon RDS MySQL and RDS PostgreSQL in AWS Compute Optimizer](https://aws-oss.beachgeek.co.uk/3y6) takes you through how you can quickly detect idle RDS resources and identify opportunities to optimise your RDS DB instance and storage usage
* [Stream multi-tenant data with Amazon MSK](https://aws-oss.beachgeek.co.uk/3y8) looks at implementation patterns a SaaS vendor can adopt when using a streaming platform as a means of integration between internal components, where streaming data is not directly exposed to third parties
* [Design a data mesh pattern for Amazon EMR-based data lakes using AWS Lake Formation with Hive metastore federation](https://aws-oss.beachgeek.co.uk/3yl) delves into the key aspects of using Amazon EMR for modern data management, covering topics such as data governance, data mesh deployment, and streamlined data discovery [hands on]

* [Deploying a web application using AWS Amplify Gen 2 with GitLab as the Repository on AWS](https://aws-oss.beachgeek.co.uk/3ye) guides you through the process of deploying a web application using GitLab as the repository and AWS Amplify Hosting as the deployment platform [hands on]
* [Optimize storage costs in Amazon OpenSearch Service using Zstandard compression](https://aws-oss.beachgeek.co.uk/3yi) explores the performance of the Zstandard algorithm, which was introduced in OpenSearch v2.9, amongst other available compression algorithms in OpenSearch [hands on]
* [Integrating Research and Engineering Studio with AWS ParallelCluster](https://aws-oss.beachgeek.co.uk/3yj) dives into a new HPC recipe which creates a Research and Engineering Studio on AWS (RES) compatible ParallelCluster login node software stack [hands on]
* [Exploring new features of Apache TinkerPop 3.7.x in Amazon Neptune](https://aws-oss.beachgeek.co.uk/3ym) explores Apache TinkerPop 3.7.x new features to the Gremlin language, and how they will help improve your ability to write graph queries in various ways [hands on]
### Quick updates
**Valkey**
The first generally available, stable Valkey release is out today. This release maintains the same protocol, API, return values, and data file formats as the last open source release of Redis (7.2.4). Check out the [release page for 7.2.5](https://aws-oss.beachgeek.co.uk/3yn).
**MariaDB**
Amazon Relational Database Service (Amazon RDS) for MariaDB now supports MariaDB minor versions 10.11.8, 10.6.18, 10.5.25, and 10.4.34. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of MariaDB, and to benefit from the bug fixes, performance improvements, and new functionality added by the MariaDB community.
**Kubernetes**
Amazon EKS has open sourced the Pod Identity agent, providing customers with more options to package and deploy the agent into EKS clusters. Pod Identity is a feature of EKS that simplifies the process for cluster administrators to configure Kubernetes applications with AWS IAM permissions. A prerequisite for using the Pod Identity feature is running the Pod Identity agent on the cluster’s worker nodes. With the Pod Identity agent being open sourced, you can now build the agent on your own. This gives you various options to package and deploy the agent, enabling you to align with your organisation’s deployment practices.
With access to the Pod Identity agent’s source code, you are able to inspect the source code and perform necessary scans as part of your build process. Additionally, you can choose to package and deploy the pod identity agent as a binary in your custom EKS AMI. Alternatively, you can build a container image from source code, and store it in your preferred container registry. You can then deploy the containerised agent using a Helm chart or as a Kubernetes manifest file.
If you are interested in diving straight to the source code, you can grab that here - [eks-pod-identity-agent](https://aws-oss.beachgeek.co.uk/3z6)
**OpenSearch**
Amazon OpenSearch Serverless now offers customers the option to use Internet Protocol version 6 (IPv6) addresses for the endpoint of your OpenSearch Serverless collection. Customers moving to IPv6 can simplify their network stack by enabling their OpenSearch Serverless endpoints with both IPv4 and IPv6 addresses. The continued growth of the internet is exhausting available Internet Protocol version 4 (IPv4) addresses. IPv6 increases the number of available addresses by several orders of magnitude, so customers will no longer need to manage overlapping address spaces in their VPCs. Customers can also standardise their applications on the new version of Internet Protocol by moving their OpenSearch Serverless Endpoints to IPv6 only.
**Lustre**
Amazon FSx for Lustre, a service that provides high-performance, cost-effective, and scalable file storage for compute workloads, is increasing the maximum level of metadata IO operations per second (IOPS) you can drive on a file system by up to 15x, and now allows you to provision metadata IOPS independently of your file system’s storage capacity.
A file system’s level of metadata IOPS determines the number of files and directories that you can create, list, read, and delete per second. By default, the metadata IOPS of an FSx for Lustre file system scales with its storage capacity. Starting today, you can provision up to 15x higher metadata performance per file system—independently of your file system’s storage capacity—allowing you to scale to even higher levels of performance, accelerate time-to-results, and optimise your storage costs for metadata-intensive machine learning research and high-performance computing (HPC) workloads. You can also update your file system’s metadata IOPS level with the click of a button, allowing you to quickly increase performance as your workloads scale.
Higher metadata IOPS are available on new FSx for Lustre Persistent_2 file systems in all commercial AWS Regions where Persistent_2 file systems are available.
### Videos of the week
**Strengthen open source software supply chain security: Log4Shell to xz**
If you did not attend re:Inforce a few weeks ago, then don't worry, as some of the videos are now being posted online. My pick of the bunch is the Leadership session from Mark Ryland and David Nalley, who dive deep into lessons learned from navigating past supply chain crises, and some that almost were. You will learn actionable techniques and best practices to improve the security posture of your supply chain.
{% youtube wyVAqYrEaFg %}
**O3DE Overview: Features of the Open-Source AAA Game Engine**
Grzegorz Ochmańskin explores practical considerations when selecting a game engine for various development scenarios, walking you through Open 3D Engine and diving into its distinct features and capabilities, including a live demonstration of the O3DE engine editor.
{% youtube bx6KU26jYTU %}
### Events for your diary
If you are planning any events in 2024, either virtual, in person, or hybrid, get in touch as I would love to share details of your event with readers.
**BSides Exeter**
**July 27th, Exeter University, UK**
Looking forward to joining the community at [BSides Exeter](https://bsidesexeter.co.uk/) to talk about one of my favourite open source projects, Cedar. Check out the event page and if you are in the area, come along and learn about Cedar and more!
**Open Source Summit**
**September 16-18th, Vienna, Austria**
Come join my colleagues and myself at the AWS booth at the Open Source Summit Europe, which is being held in the wonderful city of Vienna. There will be a bunch of us around, doing talks, open source technology demos, and just hanging out with the open source community. It would be great to see some of you there.
**All Things Open**
**27-29th October, Raleigh, North Carolina**
I will be speaking at All Things Open this coming Autumn, on the topic of applying modern application techniques with your Apache Airflow environments. I am really looking forward to coming to one of my favourite tech conferences, with the amazing community that comes year in, year out. As always my colleagues will be manning the AWS booth, and I am sure we will have some cool stuff and SWAG to share with the community.
Check out and grab your ticket while they are still available at [2024.allthingsopen.org](https://2024.allthingsopen.org/)
**Cortex**
**Every other Thursday, next one 16th February**
The Cortex community call happens every two weeks on Thursday, alternating at 1200 UTC and 1700 UTC. You can check out the GitHub project for more details, go to the [Community Meetings](https://aws-oss.beachgeek.co.uk/2h5) section. The community calls keep a rolling doc of previous meetings, so you can catch up on the previous discussions. Check the [Cortex Community Meetings Notes](https://aws-oss.beachgeek.co.uk/2h6) for more info.
**OpenSearch**
**Every other Tuesday, 3pm GMT**
This regular meet-up is for anyone interested in OpenSearch & Open Distro. All skill levels are welcome and they cover and welcome talks on topics including: search, logging, log analytics, and data visualisation.
Sign up to the next session, [OpenSearch Community Meeting](https://aws-oss.beachgeek.co.uk/1az)
### Celebrating open source contributors
The articles and projects shared in this newsletter are only possible thanks to the many contributors in open source. I would like to shout out and thank those folks who really do power open source and enable us all to learn and build on top of what they have created.
So thank you to the following open source heroes: Bour Mohamed Abdelhadi, Daniel B, Mark Ryland, David Nalley, Grzegorz Ochmańskin, Bowen Wang, Jimmy Yang, Ryan Niksch, Emanuele Levi, Lorenzo Nicora, Nicholas Tunney, Kevin DeJong, Veliswa Boya, Darren Lin, Frank Fan, Bonnie Ng, Shawn Zhang, Eunice Tsao, Ben-Amin York Jr., Chandan Rupakheti, Parnab Basak, Nima Fotouhi, Sarthak Aggarwal, Akash Shankaran, Mulugeta Mammo, Prabhakar Sithanandam, Praveen Nischal, Doug Morand, Jianjun Xu, Dave Cramer, Sudipta Mitra, Nanda Chinnappa, Deepak Sharma, Stephen Mallette, Abhisek Gupta, Romar Cablao, Jakub Wołynko, Arseny Zinchenko, Stefan Weber, Sidath Munasinghe, and Ran Isenberg
**Feedback**
Please please please take 1 minute to [complete this short survey](https://www.pulse.aws/promotion/10NT4XZQ).
### Stay in touch with open source at AWS
Remember to check out the [Open Source homepage](https://aws.amazon.com/opensource/?opensource-all.sort-by=item.additionalFields.startDate&opensource-all.sort-order=asc) for more open source goodness.
One of the pieces of feedback I received in 2023 was to create a repo where all the projects featured in this newsletter are listed. Where, I can hear you all ask? Well, as you ask so nicely, you can meander over to [newsletter-oss-projects](https://aws-oss.beachgeek.co.uk/3l8).
Made with ♥ from DevRel
| 094459 |
1,899,056 | VerifyVault Beta v0.2.2 Released - Alternative to Authy | 🚀 VerifyVault Beta v0.2.2 Released! 🚀 Key Updates: Bugs fixed 🐞 Updated user guide 📘 Improved... | 0 | 2024-06-24T15:01:50 | https://dev.to/verifyvault/verifyvault-beta-v022-released-3mac | 2fa, authenticator, security, privacy | 🚀 **VerifyVault Beta v0.2.2 Released!** 🚀
Key Updates:
- Bugs fixed 🐞
- Updated user guide 📘
- Improved program navigation 🧭
- Code organization enhancements 🛠️
Upgrade your experience with VerifyVault's latest version today:
📂 Repository: [VerifyVault on GitHub ](https://github.com/VerifyVault)
📦 Download EXE: [Beta v0.2.2](https://github.com/VerifyVault/VerifyVault/releases/tag/Beta-v0.2.2)
_VerifyVault is an open source 2 Factor Authenticator for Desktop_ | verifyvault |
1,899,070 | The Power Of Generative AI Observability | The concept of observability in artificial intelligence (AI) is associated with understanding the... | 0 | 2024-06-24T15:00:25 | https://dev.to/anshul_kichara/the-power-of-generative-ai-observability-2pl2 | devops, technology, trending, software | The concept of observability in artificial intelligence (AI) is associated with understanding the behaviour and performance of software systems. With the emergence of generative AI, observability takes on a new dimension, one that allows a deeper understanding of the inner workings of complex models. This blog dives into the impact of generative AI observability, exploring how it helps researchers, developers and businesses use AI-based systems to their full capacity.
## Understanding Generative AI
Generative AI represents a class of algorithms that can produce novel, realistic content sharing the characteristics of a given dataset. Such models, including variational autoencoders (VAEs) and generative adversarial networks (GANs), have shown impressive results in generating images, text, music, and even whole virtual environments. However, as these models grow more complex and their outputs more plausible, understanding how they produce those outputs becomes increasingly difficult.
## The Challenges of Black Box Models
Traditional machine learning models, often called “black boxes,” offer little visibility into how they work. Their internal mechanisms remain opaque to the outside world: a model may make accurate predictions, yet explaining why it made them can be extremely intricate. This lack of transparency is a major issue in domains like healthcare, finance, automated systems and the implementation of DevOps solutions, where interpretability and accountability are valued.
## Shedding Light on the Black Box
Generative AI observability sheds light on the black box problem, revealing the model's inner details and specifics. By using methods such as logging, metrics, visualization and application performance monitoring, researchers can gain visibility into every stage of the generative process. With this visibility, teams can track generated outputs, identify patterns and diagnose potential issues in real time.
## Empowering Innovation and Creativity
Generative AI observability is not just an abstract concern; it also fuels innovation and creativity across domains. In art and design, creative people generate art objects and develop new styles with the assistance of generative models, while researchers in pharmacology study AI-generated molecules, accelerating the discovery of new drugs. By offering insight into the creative process of artificial intelligence, generative AI observability gives people the freedom to push the limits of what is possible.
## Enhancing Trust and Reliability
Trust and dependability are a must in safety-critical applications such as autopilots and medical software. Generative AI observability is key to understanding performance, confirming quality, monitoring application performance, detecting anomalies, and ensuring robustness in deployment. By making the AI system transparent and accountable, observability increases trust in AI and makes it easier to adopt in such applications.
**You can check more info about: [Understanding Generative AI](https://www.buildpiper.io/blogs/the-power-of-generative-ai-observability/)**.
- [DevOps Company](https://opstree.com/).
- [DevOps tools for infrastructure automation](https://www.buildpiper.io/).
- [Cloud Consulting](https://opstree.com/cloud-devsecops-advisory/).
- [Security Consulting Services](https://opstree.com/security-as-a-service/).
| anshul_kichara |
1,898,478 | Understanding closures, promises, and async/await | Understanding Closures, Promises, and Async/Await in JavaScript JavaScript is a powerful... | 0 | 2024-06-24T14:58:03 | https://dev.to/sumit_01/understanding-closures-promises-and-asyncawait-2601 | javascript, webdev, programming, tutorial | ### Understanding Closures, Promises, and Async/Await in JavaScript
JavaScript is a powerful and versatile language, but it can sometimes be tricky to master some of its more advanced concepts. Closures, promises, and async/await are fundamental to modern JavaScript development, enabling more efficient and readable code. In this article, we'll break down these concepts, explain how they work, and show how you can use them in your projects.
#### Closures
**What is a Closure?**
A closure is a feature in JavaScript where an inner function has access to variables defined in its outer (enclosing) function scope, even after the outer function has finished executing. This is possible because functions in JavaScript form closures, retaining access to their scope even when they are passed around and executed outside their original context.
**Example of a Closure:**
```javascript
function outerFunction() {
let outerVariable = 'I am outside!';
function innerFunction() {
console.log(outerVariable); // Can access outerVariable
}
return innerFunction;
}
const myClosure = outerFunction();
myClosure(); // Logs: "I am outside!"
```
In this example, `innerFunction` forms a closure that includes the `outerVariable` from `outerFunction`'s scope. Even after `outerFunction` has finished executing, `innerFunction` retains access to `outerVariable`.
**Why Use Closures?**
Closures are useful for creating private variables and functions, emulating encapsulation in JavaScript. They also enable powerful functional programming techniques, such as currying and higher-order functions.
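For instance, closures make it easy to sketch a counter with genuinely private state (the names here are illustrative):

```javascript
// A counter with private state: `count` is reachable only through the
// returned methods, emulating encapsulation in JavaScript
function makeCounter() {
  let count = 0; // private — no outside code can modify it directly
  return {
    increment: () => ++count,
    value: () => count
  };
}

const counter = makeCounter();
counter.increment();
counter.increment();
console.log(counter.value()); // 2 — but `counter.count` is undefined
```

Every call to `makeCounter` creates a fresh closure, so separate counters never share state.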
#### Promises
**What is a Promise?**
A promise is an object representing the eventual completion or failure of an asynchronous operation. It allows you to write asynchronous code in a more synchronous and manageable way. A promise can be in one of three states: pending, fulfilled, or rejected.
**Creating and Using a Promise:**
```javascript
const myPromise = new Promise((resolve, reject) => {
setTimeout(() => {
const success = true; // Simulate success or failure
if (success) {
resolve('Operation was successful!');
} else {
reject('Operation failed.');
}
}, 2000);
});
myPromise
.then(result => {
console.log(result); // Logs: "Operation was successful!"
})
.catch(error => {
console.error(error); // Logs: "Operation failed."
});
```
In this example, `myPromise` simulates an asynchronous operation (a `setTimeout` that completes after 2 seconds). If the operation is successful, the promise is resolved, and the `then` method is called. If it fails, the promise is rejected, and the `catch` method is called.
**Why Use Promises?**
Promises provide a cleaner, more readable way to handle asynchronous operations compared to callbacks. They also support chaining, making it easier to manage sequences of asynchronous tasks.
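A quick sketch of chaining — each `.then` returns a new promise, so asynchronous steps line up one after another:

```javascript
// Each .then returns a fresh promise, so the steps run in sequence
Promise.resolve(2)
  .then(n => n * 3)          // 6
  .then(n => n + 1)          // 7
  .then(n => console.log(n)) // logs: 7
  .catch(error => console.error(error)); // one catch covers the whole chain
```

A single `.catch` at the end handles a rejection from any earlier step, which is much harder to arrange with nested callbacks.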
#### Async/Await
**What is Async/Await?**
Async/await is syntactic sugar built on top of promises, introduced in ES2017 (ES8). It allows you to write asynchronous code in a more synchronous, linear fashion, making it easier to read and maintain.
**Using Async/Await:**
```javascript
async function fetchData() {
try {
const response = await fetch('https://api.example.com/data');
const data = await response.json();
console.log(data);
} catch (error) {
console.error('Error fetching data:', error);
}
}
fetchData();
```
In this example, the `fetchData` function is declared as `async`, allowing the use of `await` within it. The `await` keyword pauses the execution of the function until the promise returned by `fetch` is resolved or rejected. This makes the code appear synchronous, even though it's still asynchronous under the hood.
**Why Use Async/Await?**
Async/await simplifies the handling of asynchronous operations, especially when dealing with multiple promises. It helps avoid "callback hell" and makes the code more readable and easier to debug.
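For example, when several promises are independent of each other, `await` pairs naturally with `Promise.all` to run them concurrently (the `delayed` helper below is purely illustrative):

```javascript
// Resolves `value` after `ms` milliseconds — a stand-in for a real async call
const delayed = (value, ms) =>
  new Promise(resolve => setTimeout(() => resolve(value), ms));

async function loadAll() {
  // Both timers start immediately; the total wait is the longest one, not the sum
  const [a, b] = await Promise.all([
    delayed('first', 50),
    delayed('second', 30)
  ]);
  console.log(a, b); // "first second"
}

loadAll();
```

Awaiting each promise one by one would serialize the waits; `Promise.all` lets them overlap while keeping the code linear.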
### Conclusion
Closures, promises, and async/await are essential concepts in modern JavaScript development. Closures provide powerful ways to manage scope and state. Promises offer a cleaner approach to handling asynchronous operations. Async/await builds on promises to make asynchronous code look and behave more like synchronous code.
Understanding and mastering these concepts will significantly improve your ability to write efficient, readable, and maintainable JavaScript code. | sumit_01 |
1,899,068 | You're not in the right place if... | Lack of Support from Management: Your manager does not provide the necessary guidance or... | 0 | 2024-06-24T14:54:05 | https://dev.to/pulkit30/youre-not-in-the-right-place-if-34k6 | softwareengineering, engineer, career, job | 1. **Lack of Support from Management**:
Your manager does not provide the necessary guidance or opportunities to help you grow professionally. Without their support, it becomes challenging to develop new skills or advance in your career.
2. **Credit for Work Taken by Senior**:
Your senior colleague often presents the work you have completed as their own, taking credit for your efforts and contributions. This undermines your achievements and can negatively impact your career progression.
3. **Team Members Hindering Recognition**:
Your team members actively prevent you from receiving the recognition you deserve for your hard work and accomplishments. This lack of acknowledgment can be demotivating and can affect your job satisfaction.
4. **Insufficient Time for Task Completion**:
You are not allocated enough time to complete tasks effectively, leading to rushed work and potentially lower-quality results. This can create stress and hinder your ability to perform to the best of your abilities.
5. **Lack of Team Cohesion**:
Your team does not function cohesively and fails to work collaboratively. This lack of teamwork can lead to inefficiencies, misunderstandings, and a less productive work environment.
Anything I missed? Please mention that in the comments.
| pulkit30 |
1,899,069 | Stop Using JSON.parse(JSON.stringify(object)) for Deep Cloning! Try This Instead | I am writing this article because I recently posted an article regarding Shallow Copy and Deep Copy.... | 0 | 2024-06-24T14:51:08 | https://dev.to/rajusaha/stop-using-jsonparsejsonstringifyobject-for-deep-cloning-try-this-instead-2cmc | webdev, javascript, beginners, programming | I am writing this article because I recently posted an article regarding Shallow Copy and Deep Copy. I wanted to jot down what I was doing until now and highlight a better approach for deep cloning objects, especially when dealing with nested keys containing non-string or non-number data.
Previously, I was using the method `JSON.parse(JSON.stringify(object))` for deep cloning. However, this approach falls short when nested values are not plain JSON types: `Date` objects are flattened to strings, while `undefined` values, functions, `Map`s and `Set`s are dropped or mangled, requiring manual reassignment of those particular keys.
Thanks to @jonrandy and @jahid6597 for shedding light on a better solution introduced in 2022: the `structuredClone()` function. This global function uses the structured clone algorithm to create a deep clone of an object, handling various data types more effectively.
The `structuredClone()` method also supports transferable objects. In such cases, the transferable objects in the original value are transferred rather than cloned to the new object. These transferred objects are detached from the original and attached to the new object, making them inaccessible to the original object.
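A minimal sketch of that transfer behaviour, assuming an environment where `structuredClone` supports the `transfer` option (modern browsers and recent Node.js):

```javascript
const buffer = new ArrayBuffer(16);

// Transfer the buffer into the clone instead of copying its bytes
const copy = structuredClone({ buffer }, { transfer: [buffer] });

console.log(copy.buffer.byteLength); // 16 — the clone owns the memory now
console.log(buffer.byteLength);      // 0  — the original is detached
```

Transferring avoids the cost of copying large buffers, at the price of making the original unusable afterwards.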
Example

Below, we use JSON.parse and JSON.stringify for cloning the object.
However, notice that when it's logged, the 'eighth' key is not shown.

To address the above issue, we can use the structuredClone() global function for a deep copy.

The `structuredClone()` method is supported by modern browsers and can handle transferable objects efficiently.
For more details, please refer to the [official documentation](https://developer.mozilla.org/en-US/docs/Web/API/structuredClone). | rajusaha |
1,898,753 | AI Workflow Automation: what is it and how to get started | In this comprehensive guide we will delve into different AI workflows aspects to provide a clear... | 0 | 2024-06-24T14:50:38 | https://www.edenai.co/post/ai-workflow-automation-what-is-it-and-how-to-get-started | ai, api, productivity | _In this comprehensive guide, we will delve into the different aspects of AI workflows to provide a clear understanding of what they are and why they matter in modern business operations._
## What is [AI Workflow Automation](https://www.edenai.co/workflows?referral=ai-workflow-definition-tuto)?
An AI workflow refers to a structured sequence of operations designed to automate and optimize tasks using artificial intelligence (AI) technologies. It integrates various AI models and tools to process and analyze data, make decisions, and execute tasks, aiming to improve efficiency, accuracy, and productivity.

_**[Watch the video HERE](https://youtu.be/hBcVDVQxZ_E)**_
By leveraging advanced technologies such as machine learning, natural language processing, and robotic process automation, organizations can unlock new opportunities for growth, efficiency, and competitive advantage in an increasingly digital world.
AI workflows can encompass a wide range of applications, from customer service automation and predictive analytics to complex problem-solving in various industries.
_**[Create my AI Workflow](https://app.edenai.run/user/register?referral=ai-workflow-definition-tuto)**_
## Why do I need an AI workflow?
Implementing an AI workflow is not just about adopting cutting-edge technology; it's about transforming the way businesses operate and interact with their customers. From enhancing operational efficiency and improving decision-making to reducing costs and delivering exceptional customer experiences, AI workflows have become indispensable tools for organizations looking to thrive in today's fast-paced and competitive business environment. Let's explore the key reasons why businesses need to embrace AI workflows:
**- Improved Decision Making:** AI workflows have the capability to analyze vast amounts of data quickly and accurately, providing valuable insights and recommendations to support decision-making processes. By leveraging AI technologies such as machine learning and predictive analytics, businesses can make data-driven decisions that are based on real-time information and predictive models. This leads to more informed, strategic decisions that align with business objectives and drive sustainable growth.
**- Cost Reduction:** One of the key benefits of implementing an AI workflow is the potential for significant cost reduction across various operational areas. By automating tasks that would otherwise require manual intervention, businesses can optimize resource utilization, minimize waste, and improve operational efficiency. This cost-saving aspect extends beyond labor expenses to include reduced errors, improved process efficiency, and better resource allocation, all contributing to increased profitability and competitiveness in the market.
**- Customer Experience:** AI-driven workflows play a crucial role in enhancing customer experience by personalizing interactions and tailoring services to individual preferences. Through advanced algorithms and data analysis, businesses can create personalized recommendations, targeted marketing campaigns, and customized services that cater to the unique needs of each customer. This level of personalization not only improves customer satisfaction but also fosters loyalty and long-term relationships with clients, ultimately driving revenue growth and brand reputation.
**- Innovation and Growth:** AI workflows serve as catalysts for innovation and growth within organizations by enabling businesses to explore new opportunities, experiment with emerging technologies, and adapt to changing market dynamics. By leveraging AI capabilities such as natural language processing, computer vision, and deep learning, businesses can develop innovative products, services, and solutions that differentiate them from competitors and capture new market segments. This culture of innovation fueled by AI workflows fosters continuous improvement, agility, and adaptability in a rapidly evolving business landscape.
**- Complex Business Needs:** Complex business requirements often necessitate the combination of multiple AI technologies within a cohesive workflow. By integrating diverse AI models and tools tailored to specific use cases or challenges, businesses can address complex problems more effectively. Combining different AI capabilities such as machine learning algorithms, natural language processing systems, and computer vision technologies allows businesses to tackle multifaceted issues comprehensively and derive deeper insights from data-driven analyses.
## Types of AI Workflow you can build
Let's explore some examples of how Eden AI's AI-powered features can automate and optimize various marketing and content-related tasks.
### [Generative AI](https://www.edenai.co/technologies/generative-ai?referral=ai-workflow-definition-tuto) with [Prompt Optimization](http://www.edenai.co/feature/prompt-optimization?referral=ai-workflow-definition-tuto) - Marketing Content Creation Workflow
As businesses increasingly rely on content marketing to engage their audiences, the demand for high-quality, personalized content has grown exponentially. However, the traditional content creation process can be time-consuming and resource-intensive, often leading to bottlenecks and inconsistencies.
To address these challenges, organizations are turning to generative AI technologies to streamline and optimize their marketing content creation workflows.
1. [Prompt optimization](http://www.edenai.co/feature/prompt-optimization?referral=ai-workflow-definition-tuto) is a crucial component of this workflow, as it involves iteratively refining the prompts used to generate content in order to improve the relevance, tone, and effectiveness of the AI-produced output.
2. Generative AI models, such as large language models in [Text Generation](http://www.edenai.co/feature/text-generation?referral=ai-workflow-definition-tuto), are used to assist and enhance the marketing content creation process by generating initial content ideas, drafting content, and personalizing the output.

### [OCR](http://www.edenai.co/feature/ocr?referral=ai-workflow-definition-tuto), [Translation](http://www.edenai.co/feature/machine-translation?referral=ai-workflow-definition-tuto) & [Summarization](http://www.edenai.co/feature/summarization?referral=ai-workflow-definition-tuto) - Document Analysis Workflow
Financial institutions, such as investment firms, banks, and insurance companies, often need to review and analyze a high volume of complex financial documents, including reports, contracts, and regulatory filings. Moreover, the traditional manual approach to document review can be time-consuming, error-prone, and a barrier to timely decision-making.
To address these challenges, financial institutions can implement a document analysis workflow that leverages Optical Character Recognition (OCR), automatic translation, and text summarization technologies:
1. [Document Analysis with OCR:](http://www.edenai.co/feature/ocr?referral=ai-workflow-definition-tuto) OCR is used to extract the text content from scanned or image-based financial documents, converting them into a machine-readable format.
2. [Automatic Translation:](http://www.edenai.co/feature/machine-translation?referral=ai-workflow-definition-tuto) If the extracted text is not in English, an automatic translation task is used to convert the content to the desired language, typically English.
3. [Summarization:](http://www.edenai.co/feature/summarization?referral=ai-workflow-definition-tuto) After the text is in the desired language, text summarization is applied to condense the key data, insights, and trends from the document into a concise summary.

This document analysis process is critical for making informed business decisions, managing risk, and ensuring compliance.
### [NSFW](http://www.edenai.co/feature/text-moderation?referral=ai-workflow-definition-tuto), [Spell Check](http://www.edenai.co/feature/grammar-spell-check?referral=ai-workflow-definition-tuto) & [Generative AI](https://www.edenai.co/technologies/generative-ai?referral=ai-workflow-definition-tuto) - Marketing Content Moderation Workflow
Businesses that produce and publish marketing content, such as social media posts, blog articles, and product descriptions, often need to ensure that the content is appropriate, error-free, and aligned with their brand and messaging. This content moderation process can be time-consuming and challenging, especially as the volume of content continues to grow.
To address these challenges, organizations are increasingly turning to AI-powered workflows to automate and streamline the content moderation process.
1. [NSFW Content Detection:](http://www.edenai.co/feature/text-moderation?referral=ai-workflow-definition-tuto) Utilize machine learning models trained to identify and flag potentially inappropriate or offensive content, such as explicit language, violence, or sexual imagery.
2. [Spell Check and Grammar Correction:](http://www.edenai.co/feature/grammar-spell-check?referral=ai-workflow-definition-tuto) Integrate an advanced spell-checking and grammar correction system to automatically identify and correct errors in the content.
3. Generative AI for Brand Alignment: Leverage large language models and [Image Generation](http://www.edenai.co/feature/image-generation?referral=ai-workflow-definition-tuto) to analyze the content and provide illustrations that align with the organization's brand voice and messaging guidelines.

## What are the challenges I could face when creating a workflow?
### Too Many AI Models
Integrating multiple AI models from different providers can lead to a complex ecosystem of APIs, each with its own costs, latencies, and accuracies. Managing this diversity of models requires careful consideration of performance metrics, compatibility issues, and cost-effectiveness. Without a unified platform, businesses may struggle to streamline their AI workflows efficiently.
### Complex Integration
Connecting AI models from competing providers adds another layer of complexity to the workflow creation process. Ensuring seamless integration between models with potentially conflicting architectures or data formats can be challenging. This complexity can hinder the scalability and interoperability of the workflow, impacting its overall effectiveness and performance.
### Maintenance Over Time
As AI models evolve rapidly, keeping up with updates and improvements becomes crucial for maintaining the efficiency and relevance of an AI workflow. The constant evolution of AI technologies means that the models and tools used in a workflow can quickly become outdated, requiring frequent updates and migrations to ensure the workflow remains effective. Without a platform that provides continuous updates and support for the latest AI advancements, businesses are burdened with the responsibility of manually tracking changes, integrating new models, and migrating their workflows accordingly.
### Monitoring Usage
Tracking usage across multiple AI providers is essential for optimizing costs, resource allocation, and performance within an AI workflow. Without proper monitoring tools, businesses may struggle to identify inefficiencies, overutilization, or underutilization of AI services, leading to suboptimal outcomes and increased operational expenses.
## Why is Eden AI the Best Platform to Create an AI Workflow?
Eden AI serves as a comprehensive platform for managing and creating workflows with various AI APIs. Here's why Eden AI stands out:

- **Unified API:** Eden AI provides a unified API that acts as a gateway to a diverse range of AI models from various providers. This simplifies the integration process by offering a standardized interface for accessing and managing different services within a single platform. Users can seamlessly switch between models without worrying about compatibility issues or complex setup procedures.
- **Provider Agnosticism:** By being provider-agnostic, Eden AI allows users to choose from a wide selection of AI models without being tied to a specific vendor or technology stack. This flexibility enables businesses to experiment with different solutions, optimize costs based on performance metrics, and adapt their workflows according to evolving requirements without constraints imposed by proprietary systems.
- **Continuous Updates:** Eden AI constantly updates its GitHub repository with the latest advancements in AI technology, ensuring users have access to cutting-edge solutions effortlessly. This proactive approach eliminates the need for manual tracking of updates or migrating workflows to newer versions, empowering businesses to stay competitive and innovative in their use of AI technologies by keeping pace with industry trends.
- **Usage Monitoring:** Effective monitoring tools provided by Eden AI enable users to track usage metrics across all integrated services in real-time. This visibility into resource consumption, performance benchmarks, and cost implications allows businesses to make informed decisions regarding resource allocation, scaling strategies, and optimization efforts within their AI workflows. By proactively managing usage patterns, businesses can maximize the value derived from their investments in AI technologies while minimizing unnecessary expenses or inefficiencies.
## About Eden AI
Eden AI is the future of AI usage in companies: our app allows you to call multiple AI APIs.

- Unified API: quick switch between AI models and providers
- Standardized response format: the JSON output format is the same for all suppliers.
- The best Artificial Intelligence APIs in the market are available
- Data protection: Eden AI will not store or use any data.
_**[Create your Account on Eden AI](https://app.edenai.run/user/register?referral=ai-workflow-definition-tuto)**_ | edenai |
1,899,062 | Trouble with laying two SVG Treemaps on one page. (with D3.js) | Hello, I am new to D3.js and I had trouble laying two SVG Treemaps on one page. I got a hint from... | 0 | 2024-06-24T14:49:12 | https://dev.to/lucaslim/trouble-with-laying-two-svg-treemaps-on-one-page-with-d3js-59e0 | d3, treemap, svg | Hello,
I am new to D3.js and I had trouble laying two SVG Treemaps on one page.
I got a hint from this.
[Zoomable treemap example on JSFiddle](https://jsfiddle.net/suayipekmekci/x77z1rqr/12/)
What I want to do is lay two zoomable treemap elements on one page. (The sample above is for one treemap.)
I implemented the code myself, but it is not working properly.

Please help me if you are familiar with SVG and have experience with the zoomable D3 treemap.
chart1.js
```
fetch('../data/chart1.json')
.then((response) => response.json())
.then((json) => {
render(json, 'investicija');
render(json, 'proizvod');
})
.catch((error) => console.error('Error fetching data:', error));
function render(treemapData, selector) {
const renderData = treemapData[selector][0]; // Ensure this key exists in your JSON data
(function () {
const margin = { top: 20, right: 0, bottom: 0, left: 0 };
const container = document.getElementById(selector);
const width = container.clientWidth;
const height = container.clientHeight;
let svg, grandparent, scaleX, scaleY, treemap;
function createSvg() {
scaleX = d3.scale.linear()
.domain([0, width])
.range([0, width]);
scaleY = d3.scale.linear()
.domain([0, height])
.range([0, height]);
treemap = d3.layout.treemap()
.children((d, depth) => depth ? null : d._children)
.sort((a, b) => a.amount - b.amount)
.ratio(height / width * 0.5 * (1 + Math.sqrt(5)))
.round(false)
.value(d => d.value);
svg = d3.select(`#${selector}`).append("svg")
.attr("width", width + margin.left + margin.right)
.attr("height", height + margin.bottom + margin.top)
.style("margin-left", -margin.left + "px")
.style("margin-right", -margin.right + "px")
.append("g")
.attr("transform", `translate(${margin.left},${margin.top})`)
.style("shape-rendering", "geometricPrecision");
grandparent = svg.append("g")
.attr("class", "grandparent");
grandparent.append("rect")
.attr("y", -margin.top)
.attr("width", width)
.attr("height", margin.top + 10);
grandparent.append("text")
.attr("x", 25)
.attr("y", 6 - margin.top + 3)
.attr("dy", ".75em");
}
function initialize(d) {
d.x = d.y = 0;
d.dx = width;
d.dy = height;
d.depth = 0;
}
function accumulate(d) {
return (d._children = d.children) ? d.amount = d.children.reduce((p, v) => p + accumulate(v), 0) : d.amount;
}
function layout(d) {
if (d._children) {
treemap.nodes({ _children: d._children });
d._children.forEach(c => {
c.x = d.x + c.x * d.dx;
c.y = d.y + c.y * d.dy;
c.dx *= d.dx;
c.dy *= d.dy;
c.parent = d;
});
}
}
function display(d) {
grandparent
.datum(d.parent)
.on("click", transition)
.select("text")
.text(selector);
const g1 = svg.insert("g", ".grandparent")
.datum(d)
.attr("class", "depth");
const g = g1.selectAll("g")
.data(d._children)
.enter().append("g");
g.filter(d => d._children)
.classed("children", true)
.on("click", transition);
g.append("rect")
.attr("class", "parent")
.call(rect)
.style("fill", d => d.color)
.append("title")
.text(d => d.fullname);
g.selectAll(".child")
.data(d => {
layout(d);
return d._children || [d];
})
.enter().append("rect")
.attr("class", "child")
.call(rect);
g.append("text")
.classed("overlaidText1", true)
.text(d => d.name)
.call(middletext1);
g.append("text")
.classed("overlaidText", true)
.text(d => d.text)
.call(middletext);
function transition(d) {
if (!d || this.transitioning) return;
this.transitioning = true;
const g2 = display(d),
t1 = g1.transition().duration(1000),
t2 = g2.transition().duration(1000);
scaleX.domain([d.x, d.x + d.dx]);
scaleY.domain([d.y, d.y + d.dy]);
svg.style("shape-rendering", null);
svg.selectAll(".depth").sort((a, b) => a.depth - b.depth);
g2.selectAll("text")
.style("fill-opacity", 0);
t1.selectAll("text:not(.overlaidText)").call(middletext1).style("fill-opacity", 0);
t2.selectAll("text:not(.overlaidText)").call(middletext1).style("fill-opacity", 1);
t1.selectAll(".overlaidText").call(middletext).style("fill-opacity", 0);
t2.selectAll(".overlaidText").call(middletext).style("fill-opacity", 1);
t1.selectAll("rect").call(rect);
t2.selectAll("rect").call(rect);
t1.remove().each("end", () => {
svg.style("shape-rendering", "geometricPrecision");
this.transitioning = false;
});
}
return g;
}
function text(text) {
text.attr("x", d => scaleX(d.x) + 6)
.attr("y", d => scaleY(d.y) + 2);
}
function middletext(text) {
text.attr("x", d => scaleX(d.x + d.dx / 2))
.attr("y", d => scaleY(d.y + d.dy / 2) + 18);
}
function middletext1(text) {
text.attr("x", d => scaleX(d.x + d.dx / 2))
.attr("y", d => scaleY(d.y + d.dy / 2) - 3.5);
}
function rect(rect) {
rect.attr("x", d => scaleX(d.x))
.attr("y", d => scaleY(d.y) - 6)
.attr("width", d => {
const x0 = scaleX(d.x);
const x1 = scaleX(d.x + d.dx);
const w = x1 - x0;
console.log(`Width for ${d.name} in ${selector}: x0=${x0}, x1=${x1}, width=${w}`);
return w;
})
.attr("height", d => scaleY(d.y + d.dy) - scaleY(d.y))
.attr("rx", "0px");
}
// Debug logs
createSvg();
initialize(renderData);
accumulate(renderData);
layout(renderData);
display(renderData);
})();
}
```
chart1.html
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>D3-Stacked-Bar-Chart (Percentage)</title>
<link href="css/chart1.css" rel="stylesheet" type="text/css" />
</head>
<body>
<div>
<div id="investicija" style="display: inline-block; width: 500px; height: 300px;" ></div>
<div id="proizvod" style="display: inline-block; width: 500px; height: 300px;"></div>
</div>
</body>
<script src="https://d3js.org/d3.v3.min.js" charset="utf-8"></script>
<script src="script/chart1.js" charset="utf-8"></script>
</html>
```
chart1.json
```
{
"investicija": [
{
"value": 3500000,
"kvadrata": 1880,
"invest_kvadrat": 1862,
"description": "<p class='desc'>Opis investicije:<br>Stambena zgrada...</p>",
"children": [
{
"name": "PLAC",
"value": 1300000,
"image": "some_image.jpg",
"link": "http://somelink.com?id=1",
"color": "#6060e0"
},
{
"name": "SOFTCOST",
"value": 300000,
"color": "#ff8259",
"children": [
{
"name": "Nadzor",
"image": "some_image.jpg",
"link": "http://somelink.com?id=1",
"value": 100000
},
{
"name": "Dozvola",
"image": "some_image.jpg",
"link": "http://somelink.com?id=1",
"value": 100000
},
{
"name": "Direkcija",
"image": "some_image.jpg",
"link": "http://somelink.com?id=1",
"value": 100000
}
]
},
{
"name": "HARDCOST",
"value": 1900000,
"color": "#fff214",
"image": "some_image.jpg",
"link": "http://somelink.com?id=1",
"children": [
{
"name": "Izvodjenje",
"value": 600000,
"children": [
{
"job_id": "1",
"name": "Obezbedjenje temeljne jame",
"value": 200000,
"izvodjac": "InKop",
"izvodjac_id": 21,
"trajanje": 33,
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"rok": "20-12-2024"
},
{
"job_id": "2",
"name": "Iskop",
"value": 200000,
"izvodjac": "InKop",
"izvodjac_id": 21,
"trajanje": 33,
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"rok": "20-12-2024"
},
{
"job_id": "3",
"name": "Temeljna ploca",
"value": 200000,
"izvodjac": "InKop",
"izvodjac_id": 21,
"trajanje": 33,
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"rok": "20-12-2024"
}
]
},
{
"name": "Materijal",
"value": 1300000,
"children": [
{
"order_id": "3",
"name": "Armaturna mreza",
"value": 300000,
"dobavljac": "Metal metal",
"dobavljac_id": 22,
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"isporuka": "20-12-2024"
},
{
"order_id": "3",
"name": "Plocice",
"value": 300000,
"dobavljac": "RA KERAMIKA",
"dobavljac_id": 21,
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"isporuka": "20-12-2024"
},
{
"order_id": "3",
"name": "Lepak za fasadu",
"value": 700000,
"dobavljac": "ROMA",
"dobavljac_id": 25,
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"isporuka": "20-12-2024"
}
]
}
]
}
]
}
],
"proizvod": [
{
"value": 5100000,
"kvadrata": 1880,
"invest_kvadrat": 2712,
"children": [
{
"name": "Stanovi",
"kvadrata": 1284,
"komada": 21,
"color": "#6060e0",
"children": [
{
"name": "Stan 1",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"sprat": 1,
"kvadrata": 45.44,
"value": 1200000
},
{
"name": "Stan 2",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"sprat": 2,
"kvadrata": 85.5,
"value": 2300000
},
{
"name": "Stan 3",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"sprat": 2,
"kvadrata": 152,
"value": 3300000
}
]
},
{
"name": "Lokali",
"kvadrata": 480,
"komada": 2,
"color": "#ff8259",
"children": [
{
"name": "Lokal 1",
"opis": "Lep s pogledom na ulicu",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"kvadrata": 152,
"value": 4300000
},
{
"name": "Lokal 2",
"opis": "Na 2 sprata, idealan za restoran",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"kvadrata": 152,
"value": 1300000
},
{
"name": "Lokal 3",
"opis": "Veliki magacin",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"kvadrata": 152,
"value": 1300000
}
]
},
{
"name": "Garažna mesta",
"kvadrata": 980,
"komada": 21,
"color": "#fff214",
"children": [
{
"name": "Garažno mesto 1",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"nivo": "-2",
"kvadrata": 12.5,
"value": 20000
},
{
"name": "Garažno mesto 2",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"nivo": "-2",
"kvadrata": 12.5,
"value": 20000
},
{
"name": "Garažno mesto 3",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"nivo": "-1",
"kvadrata": 12.5,
"value": 20000
},
{
"name": "Garažno mesto 4",
"image": "some_image.jpg",
"link": "http://somelink.com?job_id=1",
"nivo": "-1",
"kvadrata": 12.5,
"value": 20000
}
]
}
]
}
]
}
```
| lucaslim |
1,899,067 | Zoom-in zoom-out a sticky point in canvas | Zoom-in zoom-out with stick point is a regular use case we meet on design or builder tools such as... | 0 | 2024-06-24T14:45:15 | https://dev.to/dung_nguyenthithuy/zoom-in-zoom-out-a-sticky-point-in-canvas-4dnf | javascript, css, html, canvas |
Zoom-in zoom-out with a sticky point is a regular use case we meet in design or builder tools such as Figma. In this blog, I will present a basic algorithm to handle it with JavaScript, HTML and CSS.
## Demo
[Demonstration](https://vitejsvitecsvymd-3npc--5173--9e2d28a3.local-credentialless.webcontainer.io/)
## Code step-by-step
**1. Create a container and a scalable item**
```html
<div id="app">
<div class="parent">
<div class="scalable-child"></div>
</div>
</div>
```
```css
.scalable-child {
width: 300px;
height: 300px;
position: relative;
top: 0;
left: 0;
pointer-events: none;
transform-origin: left top;
background-image: url('https://cdn4.vectorstock.com/i/1000x1000/17/58/caro-pattern-background-vector-2261758.jpg');
background-size: contain;
}
.parent {
position: relative;
background-color: white;
width: 100vw;
height: 100vh;
}
```
In this example, I use a div with the class “scalable-child” as the scalable item, and its container is a div with the class “parent”.
Please note a few properties:
- `top, left: 0` is the default position
- `pointer-events: none`, because we attach the wheel event to the parent; if the child received pointer events, the algorithm would fail
- `transform-origin: left top`, which sets the coordinate origin used to calculate the position
**2. Add wheel event listener**
```js
const parent = document.querySelector('.parent');
const child = document.querySelector('.scalable-child');
```
```js
parent.addEventListener('wheel', wheelEventHandler, {
passive: false,
capture: true,
});
```
We will use the [WheelEvent](https://developer.mozilla.org/en-US/docs/Web/API/Element/wheel_event) to handle zoom-in, zoom-out, and moving the child.
Note: this example covers only the trackpad. You would need to handle hotkeys such as (Ctrl +, Ctrl -) and the mouse as well.
```js
let left = 0;
let top = 0;
let scale = 1;
const wheelEventHandler = (e) => {
e.preventDefault();
// Handle zoom with touch pad and hot key.
const isZooming = e.ctrlKey || e.metaKey;
let newValues = {};
if (isZooming) {
newValues = calculateOnZooming(e, scale, left, top);
} else {
newValues = calculateOnMoving(e, scale, left, top);
}
left = newValues.newLeft;
top = newValues.newTop;
scale = newValues.newScale;
Object.assign(child.style, {
transform: `scale(${scale})`,
left: `${left}px`,
top: `${top}px`,
});
};
```
First, the isZooming variable checks whether we are zooming or moving the child element.
Then we calculate the new position and scale for the child element; left, top, and scale serve as temporary state variables.
Now it's time to focus on the algorithm in the two calculation functions:
**3. Calculate on Zooming**
```js
const calculateOnZooming = (e, oldScale, oldLeft, oldTop) => {
let newScale = oldScale - e.deltaY * oldScale * 0.01;
newScale = Math.max(newScale, 0.1);
  const newLeft = oldLeft - (e.offsetX - oldLeft) * (newScale / oldScale - 1);
  const newTop = oldTop - (e.offsetY - oldTop) * (newScale / oldScale - 1);
return {
newScale,
newLeft,
newTop,
};
};
```
On zooming, the wheel event provides deltaY, which we use as a scale ratio to calculate newScale:
- deltaY > 0 => zoom-out
- deltaY < 0 => zoom-in
The deltaScale (e.deltaY * oldScale * 0.01) controls the scaling speed.
Let's look at the image below to better understand how the newLeft and newTop values are calculated:

We start zooming in while the mouse is at point A. At that moment, we know some values:
- e.offsetX: distance from the mouse to the parent's left edge
- e.offsetY: distance from the mouse to the parent's top edge
- left: the child's current left style value
- top: the child's current top style value
The child is scaled from the ratio scale to scale', and point A moves to A'.
So to keep point A sticky (relative to the parent), we need to calculate deltaX and deltaY and then move the child back by exactly that many pixels.
deltaX = x' - x
= x * (scale' / scale) - x
= x * (scale' / scale - 1)
= (e.offsetX - left) * (scale' / scale - 1)
deltaY = y' - y
= y * (scale' / scale) - y
= y * (scale' / scale - 1)
= (e.offsetY - top) * (scale' / scale - 1)
newLeft = left - deltaX
newTop = top - deltaY
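The algebra above can be sanity-checked outside the browser. The snippet below is my own check (in Python, not part of the original post); the variable names mirror the ones used in the derivation:

```python
def zoom_about_point(left, top, scale, new_scale, offset_x, offset_y):
    """Return the new left/top that keep the point under the cursor fixed."""
    new_left = left - (offset_x - left) * (new_scale / scale - 1)
    new_top = top - (offset_y - top) * (new_scale / scale - 1)
    return new_left, new_top

# Example state: child at (40, 25), scale 1.0, cursor at (180, 120), zooming to 1.3.
left, top, scale = 40.0, 25.0, 1.0
offset_x, offset_y = 180.0, 120.0
new_scale = 1.3

new_left, new_top = zoom_about_point(left, top, scale, new_scale, offset_x, offset_y)

# The point under the cursor, expressed in the child's own coordinate system:
child_x = (offset_x - left) / scale
child_y = (offset_y - top) / scale

# After the zoom, that same child point must land on the same screen pixel.
assert abs(new_left + child_x * new_scale - offset_x) < 1e-9
assert abs(new_top + child_y * new_scale - offset_y) < 1e-9
print("sticky point preserved")
```

The assertions pass for any starting state, which confirms that the formulas keep point A fixed relative to the parent.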
**4. Calculate on Moving**
```js
const calculateOnMoving = (e, oldScale, oldLeft, oldTop) => {
return {
newLeft: oldLeft - e.deltaX * 2,
newTop: oldTop - e.deltaY * 2,
newScale: oldScale,
};
};
```
For the moving case, we only need to calculate the new left and top values; the scale stays the same. We multiply each delta by 2 to increase the panning speed.
That's all we need to handle. I hope it's helpful. Thank you for reading!
You can view the full source code [here](https://stackblitz.com/edit/vitejs-vite-csvymd?file=index.html).
| dung_nguyenthithuy |
1,878,833 | Number of Closed Islands | LeetCode | class Solution { public int closedIsland(int[][] grid) { int row = grid.length; ... | 0 | 2024-06-24T14:43:00 | https://dev.to/tanujav/number-of-closed-islands-leetcode-4i | leetcode, java, beginners, algorithms |
```java
class Solution {
    public int closedIsland(int[][] grid) {
        int row = grid.length;
        int col = grid[0].length;
        int countIsland = 0;
        for (int i = 0; i < row; i++) {
            for (int j = 0; j < col; j++) {
                // Start a DFS from every unvisited land cell (0 = land, 1 = water).
                if (grid[i][j] == 0 && dfs(grid, i, j)) {
                    countIsland++;
                }
            }
        }
        return countIsland;
    }

    // Returns true if the island containing (i, j) never touches the grid border.
    boolean dfs(int[][] grid, int i, int j) {
        // Walked off the grid: the island reaches the border, so it is not closed.
        if (i < 0 || j < 0 || i >= grid.length || j >= grid[0].length)
            return false;
        // Water (1) or an already-visited cell (2) bounds the island.
        if (grid[i][j] == 1 || grid[i][j] == 2)
            return true;
        grid[i][j] = 2; // mark as visited
        // Visit all four neighbours first so the whole island gets marked.
        boolean down = dfs(grid, i + 1, j);
        boolean up = dfs(grid, i - 1, j);
        boolean right = dfs(grid, i, j + 1);
        boolean left = dfs(grid, i, j - 1);
        return up && down && left && right;
    }
}
```
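For readers who want to cross-check the approach quickly outside a Java environment, the same flood-fill idea can be sketched like this (my own Python port, not part of the original solution):

```python
def closed_island(grid):
    rows, cols = len(grid), len(grid[0])

    def dfs(i, j):
        # Walked off the grid: the island touches the border, so it is not closed.
        if i < 0 or j < 0 or i >= rows or j >= cols:
            return False
        # Water (1) or an already-visited cell (2) bounds the island.
        if grid[i][j] in (1, 2):
            return True
        grid[i][j] = 2  # mark as visited
        # Run all four recursions first so the whole island gets marked.
        results = [dfs(i + 1, j), dfs(i - 1, j), dfs(i, j + 1), dfs(i, j - 1)]
        return all(results)

    return sum(1 for i in range(rows) for j in range(cols)
               if grid[i][j] == 0 and dfs(i, j))

# LeetCode 1254, example 2: only the single interior zero forms a closed island.
grid = [[0, 0, 1, 0, 0],
        [0, 1, 0, 1, 0],
        [0, 1, 1, 1, 0]]
print(closed_island(grid))  # → 1
```

Note that, like the Java version, this mutates the input grid to mark visited cells.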
Thanks for reading :)
Feel free to comment and like the post if you found it helpful
Follow for more 🤝 && Happy Coding 🚀
If you enjoy my content, support me by following me on my other socials:
https://linktr.ee/tanujav7
| tanujav |
1,899,065 | One simple solution for leetcode problem 14 | First let us figure out what should i do,we need find the common prefix for all strings in the... | 0 | 2024-06-24T14:42:52 | https://dev.to/hallowaw/one-simple-solution-for-leetcode-problem-14-32c | beginners, programming, tutorial, cpp |

**First, let us figure out what we need to do: find the common prefix shared by all strings in the vector.**
So we just need to compare the characters at the same position in each string.
But how do we refer to a particular character of a particular string in the vector?
Let us use example 1 as an illustration.
If we want to denote the 'r' in "flower", we can use strs[0][5].
Now we have a way to compare characters position by position; if a character is the same in every string, we append it to the result string.
But we also need to check whether the position still exists in each string.
We check that with `if(strs[j][i]=='\0')` (since C++11, indexing a std::string at exactly its size returns the null character).
So the full code is:
```cpp
class Solution {
public:
    string longestCommonPrefix(vector<string>& strs) {
        // string for storing the result
        string commonprefix = "";
        for (int i = 0; i < strs[0].size(); i++) {
            char currentchar = strs[0][i];
            for (int j = 1; j < strs.size(); j++) {
                // Two conditions must hold: position i exists in this string,
                // and the character matches currentchar. We test the negation
                // so we can return early.
                if (strs[j][i] == '\0' || strs[j][i] != currentchar) {
                    return commonprefix;
                }
            }
            commonprefix.push_back(currentchar);
        }
        return commonprefix;
    }
};
```
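As a quick cross-check of the same vertical-scan idea in another language, here is a short Python sketch of mine (not from the original post):

```python
def longest_common_prefix(strs):
    prefix = []
    # Scan column by column, using the first string as the reference.
    for i, current in enumerate(strs[0]):
        for s in strs[1:]:
            # Stop if this string is too short or the characters differ.
            if i >= len(s) or s[i] != current:
                return "".join(prefix)
        prefix.append(current)
    return "".join(prefix)

print(longest_common_prefix(["flower", "flow", "flight"]))  # → fl
```

The early return inside the inner loop plays the same role as the negated condition in the C++ version.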
Thank you for reading!
| hallowaw |
1,899,064 | The GUI Way of Using Linux (How To Use Cockpit on CentOS) | Have you ever thought of executing Linux operations without the command line? Maybe not. But, now... | 0 | 2024-06-24T14:40:52 | https://dev.to/whotarusharora/the-gui-way-of-using-linux-how-to-use-cockpit-on-centos-lp | linux, webdev, beginners, productivity |
Have you ever thought of executing Linux operations without the command line? Maybe not. But now there's a way to do that. You can use the Cockpit package for this purpose, which is extremely helpful for system administrators and server monitoring and maintenance teams.
You can find information about [cockpit](https://cockpit-project.org) on the official website, but the most convenient way to configure it is right here in this blog. So, without wasting a second, let's start with the practical.
## The Cockpit Configuration Procedure
We are going to perform this practical on the CentOS operating system, a Linux distribution developed by Red Hat. In numerous enterprise networks, it is used as the primary server OS, so this is going to be beneficial for sure.
Additionally, the entire procedure is divided into three phases. The first phase deals with the download and installation of the Cockpit package, while phases two and three deal with configuring and accessing the GUI-based Linux interface.
So, let’s get started.
### Phase 1: Cockpit Download and Installation
Before we dive into the practical, make sure to look at the snippets alongside each step for better understanding.
We are going to configure Cockpit as the root user, so first verify that the system is accessed as root by running the `whoami` command.

The Cockpit package may already be installed on your CentOS system. To find out, use the command `rpm -qa | grep cockpit`. If the package is installed, you'll see it in the list.
In our case, it’s not available.

To install the cockpit package, use the command: `yum install cockpit -y`
The command will start the download and installation of the cockpit package on your CentOS operating system.

After a successful installation, you will see an output like the one below.

Now, check the Cockpit package's availability again to verify its installation.
Run the command: `rpm -qa | grep cockpit`

Here, the first phase ends. Now, move to the second one.
### Phase 2: Enabling the Cockpit Service
You have to use the classic method to enable the Cockpit service, which is using `systemctl`.
Firstly, check the status of the services through command: `systemctl status cockpit`
As you can see, currently the service is inactive.

To make the service active, run command: `systemctl start cockpit`
However, if you want this service to run automatically with every boot, use the command: `systemctl enable cockpit`

After starting the service, check the status: `systemctl status cockpit`
As you can see, this time the service is running on our Linux machine.

Here, our second phase ends. Move to the third one now.
### Phase 3: Accessing the Linux System (GUI-based)
Before you access the GUI-based interface, you should know the URL for reaching the Linux machine through Cockpit.
URL: `https://<IP of your machine>:9090` (Cockpit serves HTTPS on port 9090 with a self-signed certificate by default)
To get the IP needed for the URL, run the `ifconfig` command. In our case, it's `192.168.1.6`.

Now, go to your web browser and use the URL, just like the snippet below. Enter the URL and hit enter.

Once you hit enter, a warning may be shown because of the self-signed certificate. Accept the risk and move forward to view the following login interface.
However, if you still can't see the interface after accepting the risk, go to the Linux machine and stop the firewall with the command: `systemctl stop firewalld` (add `systemctl disable firewalld` only if you also want it to stay off after reboots)
(Note: Do not disable firewall in production environment)
Following that, enter the URL again and this time you'll definitely see the interface. Now, enter the username and password of your Linux machine and hit the Log in button.

As a result, you will see the Cockpit interface, the GUI of your Linux machine. Now you can conveniently execute all core operations without commands.

Here, the third phase ends, and with it the practical procedure. Explore the interface and have fun with the Linux GUI.
## Concluding Up
Nothing much to write here, as you already know how to use cockpit. For any further query or question, you can write a comment and expect me to get back to you within a week. Also, you can suggest topics and I would try to provide you the desired content on them.
Keep supporting.
| whotarusharora |
1,899,063 | Backpex - a highly customizable admin panel for Phoenix LiveView applications | Hello everyone! I am excited to share our heart project Backpex with you. After building several... | 0 | 2024-06-24T14:40:02 | https://dev.to/d3d3h3h3/backpex-a-highly-customizable-admin-panel-for-phoenix-liveview-applications-4421 | elixir, backend, admin, liveview |
Hello everyone!
I am excited to share our passion project Backpex with you.
After building several Phoenix applications, we realized that we were repeating ourselves when it came to building administration panels. We were writing the same CRUD views, search and filter functionality over and over again. We wanted a tool that would allow us to quickly scaffold these views and focus on building the core functionality of our applications.
The tool we wanted had to be able to serve as a simple backend administration panel in one project, while being the core of the application in another.
We looked at existing solutions, but found that none of them offered the flexibility and customization we were looking for, especially in terms of live updates and LiveView features. We decided to develop Backpex to solve this problem and provide a highly customizable administration panel for Phoenix LiveView applications. Backpex should not only be a simple administration interface for developers, but also a good-looking UI for end users.
Even though Backpex already offers a lot of features, there is still a lot to do. We also have many ideas for future work. That's why we are very open to contributions and appreciate any help. We have just released a public roadmap and created some issues that are open for contributions on GitHub (see the "good-first-issue" label).
We really want to hear what you would like to see next and what is missing. We’d love to hear your feedback!
[Hex Package](https://hex.pm/packages/backpex)
[GitHub - naymspace/backpex](https://github.com/naymspace/backpex)
[Live Demo](https://backpex.live/)
| d3d3h3h3 |
1,899,061 | Variables, Constants, Data Types, and Namespaces in C++ | Welcome to the second blog of the series "Unlocking C++"! In this blog, I will discuss variables,... | 27,776 | 2024-06-24T14:37:27 | https://dev.to/komsenapati/variables-constants-data-types-and-namespaces-in-c-2i24 | cpp, variable, datatypes, coding |

Welcome to the second blog of the series "Unlocking C++"!
In this blog, I will discuss variables, constants, data types, and namespaces.
It's the basics required for any programming language.
Let's start with variables.
## Variable
Variables are named memory locations used for storing the data.
For high level, it's just like we used to do with maths
`2x + 4 = 0`
`=> x = -2`
Variables are of many types.
For C++ variable types can be stated as
- Global Variable: Any function or class can access The variable declared globally.
- Local Variable: The local or block-scoped variable can only be accessed within the block. As the block ends, the variable is discarded.
```cpp
#include <iostream>
double y = 3.14; // global variable
int main() {
{ // block
int x = 5;
std::cout << x << std::endl;
std::cout << y << std::endl;
}
std::cout << x << std::endl; // gives error 'x' was not declared in this scope
std::cout << y;
}
```
Here x is a block-scoped variable, so it can be accessed inside the block but not after the block ends. And y is a global variable, so it can be accessed anywhere in the program.
## Constants
Constants are also named memory locations used for data storage but their values can't be changed once given.
For example, π.
We know it's a constant and its value will not be changed.
In C we used macros ( `#define PI 3.14` ) or manifest constants but constants in C++ are better as they come with type safety.
```cpp
#include <iostream>
int main() {
const double pi = 3.14;
std::cout << pi;
pi = 4; // error: assignment of read-only variable
}
```
Here as its const, its value can't be changed.
Another important concept for variables and constants.
Declaration is the statement where a variable is introduced without a value; memory is reserved for it so that it can be given a value later.
```cpp
int x;
```
Initialisation is the statement where we give a value to the variable.
```cpp
x = 5;
```
For variables, declaration and initialisation can happen on the same line or on different lines, but a constant **must** be initialised on the same line it is declared.
## Data types
Each variable or constant is associated with a data type. It specifies the type of data stored in the variable.
The primary data types in C++ are:
- int: stores integer values; typically 4 bytes (the standard only guarantees at least 2)
- long: stores integer values of a larger size; at least 4 bytes (8 on many 64-bit systems)
- float: stores decimal numbers with less precision; 4 bytes
- double: stores decimal numbers with high precision; 8 bytes
- char: stores a single character; 1 byte
- bool: either true (1) or false (0); typically 1 byte
Here primary data types mean built-in (predefined) data types to be used by the user directly.
There are Derived and User-defined data types as well.
For now, I am stating them but I will make separate blogs for them in the future.
### Derived data types
These are the data types derived from primitive data types.
- Function
- Array
- Pointer
- Reference
### User-defined data types
These data types are created by users to use in the program.
- struct
- union
- class
- enum
- typedef
## Namespaces
A namespace is a declarative region in which variables, functions, and classes exist. It's used to avoid conflicts between identical identifiers.
> Identifiers means names given to variables, constants, functions, and classes.
In C++ we have a standard namespace (std) but we can create our namespace as well.
```cpp
#include <iostream>
namespace apple {
int x = 1;
}
namespace banana {
int x = 2;
}
int main() {
std::cout << apple::x << std::endl; // 1
std::cout << banana::x; // 2
return 0;
}
```
Here the same identifier x is used but in different namespaces.
So this is for this blog. Let's meet in the next blog.

| komsenapati |
1,899,060 | Neha Zubair: Pioneering Excellence in Guest Posting, Content Writing, and SAAS Marketing | Delving into Neha Zubair's Multifaceted Expertise Neha Zubair stands at the forefront of... | 0 | 2024-06-24T14:35:04 | https://dev.to/jessicase0/neha-zubair-pioneering-excellence-in-guest-posting-content-writing-and-saas-marketing-2idc | seo, guestposting, contentwriting |
## Delving into Neha Zubair's Multifaceted Expertise
[Neha Zubair](https://docs.google.com/spreadsheets/d/1nLeLnmJn37c82NXG3KCy0u4_u1GwCs_YV04N7vgyckw/edit?usp=drivesdk) stands at the forefront of guest posting, content writing, and SAAS marketing with a distinguished track record spanning over two dynamic years.

Her journey is marked by a commitment to excellence and a profound impact on enhancing businesses' digital footprints.
## Neha Zubair's Professional Journey
Neha Zubair's journey through the realms of guest posting, content writing, and SAAS marketing stands as a testament to her steadfast commitment and unmatched expertise.
Her impact resonates deeply, forged by strategic acumen and an innovative approach that redefines industry standards.
## Unveiling Neha Zubair's Expertise
1. **Guest Posting**: Neha Zubair is working with premier platforms such as Hackernoon, MSN, Forbes, Khaleej Times, NDTV, NY Weekly, JPost, Gulf News, Markets Business Insider, AP News, and Benzinga. Her contributions resonate widely, reflecting her adeptness in crafting impactful narratives that resonate with global audiences.
2. **Content Writing**: Neha's forte lies in her ability to create compelling and informative content that captivates diverse demographics. Her writings not only inform but also inspire action, positioning brands for sustained growth and engagement.
3. **SAAS Marketing**: As a seasoned SAAS marketing specialist, Neha Zubair has empowered numerous enterprises to amplify their online visibility and drive measurable business outcomes. Her strategic acumen and data-driven approach ensure tailored solutions that resonate in today's competitive landscape.
## Elevating Businesses: Neha Zubair's Success Stories
Neha Zubair's client roster reads like a who's who of industry leaders, including collaborations with the USA's largest fashion conglomerate and other prominent entities.

Her ability to deliver consistent, high-quality results underscores her reputation as a trusted partner in achieving digital success.
## Connecting with Neha Zubair
To harness Neha Zubair's transformative expertise for your business growth, seize the opportunity to connect:
Email: nehazubair50@gmail.com
LinkedIn: [Click here](https://www.linkedin.com/in/neha-%E2%9C%AE%E2%83%9D%E2%9D%A4-guest-posting-and-seo-expert?utm_source=share&utm_campaign=share_via&utm_content=profile&utm_medium=android_app)
## Conclusion:
**Neha Zubair** emerges as a luminary in the realm of guest posting, content writing, and SAAS marketing, setting benchmarks through her unmatched proficiency and expansive [portfolio](https://zzmehna.my.canva.site/neha-zubair-backlinks-expert).
For enterprises poised to elevate their online presence and drive meaningful impact, **Neha Zubair** stands as the catalyst for transformative success.
| jessicase0 |
1,899,059 | You are Using ChatGPT Wrong! — #1 Mistake 99% of Users Make | Introduction If you are using ChatGPT or any other language model like Claude or Gemini,... | 0 | 2024-06-24T14:34:09 | https://dev.to/safdarali/you-are-using-chatgpt-wrong-1-mistake-99-of-users-make-4kll | webdev, chatgpt, ai, promptengineering |
## Introduction
If you are using ChatGPT or any other language model like Claude or Gemini, there’s a good chance you’re making a critical mistake that hampers the effectiveness of your prompts. This mistake is so common that it affects 99% of users. The advice often given by "prompt engineering gurus" is leading you astray. Let's dive into what this mistake is and why it's so detrimental.
## The Biggest Prompting Mistake
The common belief is that the more details you provide in your prompt, the better and more accurate the output will be. This sounds logical and is reiterated by many experts in the field. However, this approach is often completely wrong and can lead to subpar results.
## Why More Detail Isn't Always Better
When you overload your prompt with excessive details, it can confuse the model and lead to outputs that don't meet your expectations. This issue is particularly easy to demonstrate with image creation but applies equally to text prompts.
## Consider the following prompt I gave to ChatGPT-4:
_“Make a picture of a woman in a blue shirt, standing on a beach.”_
This prompt is clear and specific, right? But what if the output is not what you imagined? The problem here is that the prompt, although detailed, is restrictive and doesn’t leave room for the model to understand the broader context or nuances of what you’re looking for.
## The Importance of Balanced Prompts
To get the best results, you need to balance specificity with flexibility. Here’s how you can do it:
1. **Focus on the Core Idea**: Instead of piling on details, focus on the core concept of your prompt. For example, "Create an image of a beach scene" is broad but allows the model to use its training to fill in the details in a coherent way.
2. **Iterative Refinement**: Start with a broad prompt and then iteratively refine it based on the initial outputs. This approach allows you to guide the model toward your desired outcome more effectively.
3. **Contextual Prompts**: Provide context rather than specific details. For instance, _"Generate a serene beach scene with a person enjoying the view"_ gives the model a clear idea without overwhelming it with specifics.
## Demonstrating the Problem
## To illustrate, let’s compare two prompts:
**Detailed Prompt:** "Make a picture of a woman in a blue shirt, standing on a beach with a golden retriever, near a surfboard, with seagulls flying in the background and a lighthouse in the distance."
**Balanced Prompt:** "Create a relaxing beach scene featuring a woman enjoying her time."
In the first example, the model is constrained by too many specific elements, which may result in a cluttered and less coherent image. In contrast, the second prompt allows the model to creatively generate a scene that fits the overall theme, likely producing a more aesthetically pleasing and meaningful result.
## Why This Matters for Text Prompts
The same principle applies to text generation. For instance, if you ask ChatGPT:
_"Write a story about a young girl named Emma who lives in a small town, has a golden retriever named Max, loves to read books about space, enjoys stargazing, and dreams of becoming an astronaut."_
This prompt, while detailed, may lead to a story that feels forced or disjointed because the model is trying to incorporate all the details without a clear narrative direction. Instead, try:
_"Write a story about a young girl with big dreams and a love for the stars."_
This prompt provides a strong central theme, giving the model creative freedom to construct a coherent and engaging story.
## Proving the Theory
Let’s see this in action with a ChatGPT prompt example:
**Overly Detailed Prompt:** _"Generate a blog post about the importance of balanced diets, including sections on macronutrients, micronutrients, the benefits of hydration, the impact of processed foods, and examples of balanced meals for breakfast, lunch, and dinner."_
**Balanced Prompt:** _"Write a blog post on why balanced diets are essential for health."_
With the first prompt, the resulting post might feel like a list of facts and details crammed together without a smooth flow. The second prompt, however, allows ChatGPT to craft a well-rounded article that covers the main points naturally.
## Conclusion
The key takeaway is to avoid overwhelming your prompts with too many specifics. Focus on the core idea and provide context, then refine your prompt based on the output you receive. This approach will yield better, more coherent, and creative results from ChatGPT and other language models.
Remember, the goal is to guide the AI, not to micromanage it. By understanding and avoiding the common mistake of overly detailed prompting, you can unlock the full potential of ChatGPT and enhance the quality of your interactions with AI.
## Final Thoughts
Experiment with your prompts, start broad, and refine as needed. This method not only improves the output but also helps you develop a better understanding of how to communicate effectively with AI. Happy prompting!
That's all for today.
And also, share your favourite web dev resources to help the beginners here!
Connect with me on [LinkedIn](https://www.linkedin.com/in/safdarali25/) and check out my [Portfolio](https://safdarali.vercel.app/).
Explore my [YouTube ](https://www.youtube.com/@safdarali_?sub_confirmation=1)Channel! If you find it useful.
Please give my [GitHub ](https://github.com/Safdar-Ali-India) Projects a star ⭐️
Thanks for 24571! 🤗
| safdarali |
1,898,960 | Git: What happens under the hood | Git: What happens under the hood What is Git? Git is an open source version control system... | 0 | 2024-06-24T14:31:56 | https://dev.to/hackman78/git-what-happens-under-the-hood-13dc |
# Git: What happens under the hood
What is Git? Git is an open-source version control system that has had a stranglehold on the development world since it came out in 2005. Created by Linus Torvalds, it is the system he built to fix problems with the version control systems of the day.
## History
Linus Torvalds created Git because, at the time, version control was fractured across many systems, some open source and some not. He felt forced to make a change when the version control system his team used (BitKeeper) decided to start charging for the service, and Linus didn't want to pay. He was already a successful engineer, the father of the Linux kernel, with a team of developers working on it. So he got to work and created one of the most influential developer tools to date.
## Parts of the .git folder
What is Git and how does it work? What **'makes'** a directory a Git repo? Git is an extensive tool that takes a lot of time to get used to, both its features and what is actually happening under the hood. A Git repository is just a folder with a nested .git folder that holds the repository's entire history and metadata (not only staged changes). The important entries are the refs, objects, and logs directories, the config file, and the heads directory inside refs.
## How it Works
### refs directory
refs holds references (pointers to commits) for the branches and tags of your Git repository. It keeps every branch head recorded no matter which branch you're on, so you can always switch back whenever you would like.
### objects directory
The objects directory is where all of your Git content is actually stored, as compressed objects. Each object (commit, tree, or blob) is addressed by the SHA-1 hash of its content: the first two hex characters of the hash name a subdirectory, and the remaining characters name the file inside it.
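The naming scheme is easy to reproduce: a blob's ID is the SHA-1 of a small header plus the raw file content. A few lines of Python (my own sketch, equivalent to what `git hash-object` does for blobs) confirm it:

```python
import hashlib

def git_blob_id(content: bytes) -> str:
    # Git hashes the header "blob <size>\0" followed by the raw bytes,
    # then stores the object at objects/<first 2 hex chars>/<remaining chars>.
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()

# The well-known ID of the empty blob, identical in every Git repository:
print(git_blob_id(b""))  # → e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
```

Commits and trees use the same header-plus-content scheme with different type names.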
### config
Sets the configuration of your Git repo locally: things like whether you want your pulls to rebase, your remote branches, your origin repo, etc.
### logs
A directory that holds the history of your branch heads, so you can see in detail how your Git tree has changed over time.
### heads
Holds the head commit of each of your branches in Git.
## Conclusion
I haven't learned all of Git, and it is definitely not the most intuitive tool in the world, but I learned something very important.
**SOFTWARE I USE IS NOT THAT DIFFICULT TO UNDERSTAND**
All of the technology I use may take some time to get accustomed to, but all software is really built from the same techniques I have been using throughout my coding career: control flow, loops, functions, scopes, closures, methods, data structures, algorithms, time complexity, and so on. They are all used in every program, because every program is built on top of the same binary, on top of the same assembly language, and can talk to other computers with the same protocols. Really, there are only a few differences between codebases: the syntax, the time complexity/efficiency, and the design of the system (database, client, server). The small or large decisions that you make when starting a project are what decide how easy or difficult it will be to iterate on it.
I think remembering the concept that all software is built on top of the same systems is the key for me to have a long career in software, mitigate imposter syndrome, and to CI/CD my current knowledge because there is always something else to learn, something else to explore, and something else that will be fun to build.
## Sources on how to code
[Code Crafters](https://app.codecrafters.io/courses/git/overview) **paid** Any Language
### Free
[gitlet](http://gitlet.maryrosecook.com/docs/gitlet.html) **Code Git in Javascript**
[Git in Python](https://wyag.thb.lt/)
[Ruby](https://robots.thoughtbot.com/rebuilding-git-in-ruby)
| hackman78 | |
1,899,057 | LeetCode Day16 Binary Tree Part 6 | LeetCode 530. Minimum Absolute Difference in BST Given the root of a Binary Search Tree... | 0 | 2024-06-24T14:27:14 | https://dev.to/flame_chan_llll/leetcode-day16-binary-tree-part-6-25fe | leetcode, java, algorithms, datastructures | # LeetCode 530. Minimum Absolute Difference in BST
Given the root of a Binary Search Tree (BST), return the minimum absolute difference between the values of any two different nodes in the tree.
Example 1:

Input: root = [4,2,6,1,3]
Output: 1
Example 2:

Input: root = [1,0,48,null,null,12,49]
Output: 1
Constraints:
The number of nodes in the tree is in the range [2, 10^4].
0 <= Node.val <= 10^5
[Original Page](https://leetcode.com/problems/minimum-absolute-difference-in-bst/description/)
## *Wrong Code for the first attempt
```java
public int getMinimumDifference(TreeNode root) {
int min = Integer.MAX_VALUE;
Deque<TreeNode> queue = new LinkedList<>();
queue.offer(root);
while(!queue.isEmpty()){
TreeNode cur = queue.poll();
if(cur !=null){
if(cur.left!=null){
min = Math.min(min, Math.abs(cur.val - cur.left.val));
}
if(cur.right !=null){
min = Math.min(min, Math.abs(cur.val - cur.right.val));
}
queue.offer(cur.left);
queue.offer(cur.right);
}
}
return min;
}
```

The above code cannot handle the case where the two closest values are not in a direct parent-child relationship.
```java
public int getMinimumDifference(TreeNode root) {
int min = Integer.MAX_VALUE;
return getMin(root,null, min);
}
public int getMin(TreeNode cur,TreeNode pre, int min){
if(cur == null){
return min;
}
min = getMin(cur.left,pre, min);
if(pre!=null){
min = Math.min(min, Math.abs(pre.val-cur.val));
}
pre = cur;
min = getMin(cur.right,pre, min);
return min;
}
```
```java
TreeNode pre;
public int getMinimumDifference(TreeNode root) {
int min = Integer.MAX_VALUE;
return getMin(root, min);
}
public int getMin(TreeNode cur, int min){
if(cur == null){
return min;
}
min = getMin(cur.left, min);
if(pre!=null){
min = Math.min(min, Math.abs(pre.val-cur.val));
}
min = getMin(cur.right, min);
return min;
}
```

This causes a problem: for the first (leftmost) element there is no predecessor yet, so `pre` starts out as `null` and the first comparison has to be skipped; otherwise it causes trouble here.
e.g.

we want

So here, when `pre == null`, we fall into the first branch of the logic and skip the comparison.
We can also solve this problem by passing both values along:
```java
public int getMinimumDifference(TreeNode root) {
int[] min = {Integer.MAX_VALUE};
getMin(root,null, min);
return min[0];
}
public TreeNode getMin(TreeNode cur,TreeNode pre, int[] min){
if(cur == null){
return pre;
}
pre = getMin(cur.left,pre, min);
if(pre!=null){
min[0] = Math.min(min[0], Math.abs(pre.val-cur.val));
}
pre = cur;
pre = getMin(cur.right,pre, min);
return pre;
}
```
Be careful: Java passes arguments by value, so reassigning an `int` (or any primitive) parameter inside a method has no effect on the caller. Instead, we can pass an object such as a one-element array; reassigning an element of the array mutates the shared object, so the change is visible outside the method.
Also be careful that we pass and return `pre` instead of `cur`: we want `pre` to always be the last node visited in in-order, starting from the leftmost node. If we passed `cur`, then when the mid logic runs, `pre` would still lag one element behind `cur`, which is not reasonable.
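Java's pass-by-value semantics can be seen in a small standalone sketch (the class and method names here are my own, not from the solution above):

```java
public class PassByValueDemo {
    // Reassigning a primitive parameter changes only the local copy.
    static void reassignInt(int x) {
        x = 99;
    }

    // Mutating the array's contents IS visible to the caller, because
    // the copied value is a reference to the same underlying object.
    static void mutateArray(int[] arr) {
        arr[0] = 99;
    }

    public static void main(String[] args) {
        int n = 1;
        reassignInt(n);
        System.out.println(n);      // still 1

        int[] box = {1};
        mutateArray(box);
        System.out.println(box[0]); // 99
    }
}
```

This is why the solution wraps `min` in an `int[]`: the array element can be updated across recursive calls, while a plain `int` parameter could not.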
# 501. Find Mode in Binary Search Tree
Given the root of a binary search tree (BST) with duplicates, return all the mode(s) (i.e., the most frequently occurring element(s)) in it.
If the tree has more than one mode, return them in any order.
Assume a BST is defined as follows:
The left subtree of a node contains only nodes with keys less than or equal to the node's key.
The right subtree of a node contains only nodes with keys greater than or equal to the node's key.
Both the left and right subtrees must also be binary search trees.
Example 1:

Input: root = [1,null,2,2]
Output: [2]
Example 2:
Input: root = [0]
Output: [0]
Constraints:
The number of nodes in the tree is in the range [1, 10^4].
-10^5 <= Node.val <= 10^5
Follow up: Could you do that without using any extra space? (Assume that the implicit stack space incurred due to recursion does not count).
[Original Page](https://leetcode.com/problems/find-mode-in-binary-search-tree/description/)
## lol, practicing Streams is cool but not efficient; would it work well in a real production environment?
```java
public int[] findMode(TreeNode root) {
Map<Integer,Integer> map = new HashMap<>();
map = modeLogic(root,map);
int max = Collections.max(map.values());
List<Integer> list = new ArrayList<>();
list = map.entrySet().stream()
.filter(entry -> entry.getValue().equals(max))
.map(Map.Entry::getKey)
.collect(Collectors.toList());
return list.stream()
.mapToInt(Integer::valueOf)
.toArray();
}
public Map<Integer,Integer> modeLogic(TreeNode cur, Map<Integer,Integer> map){
if(cur==null){
return map;
}
//left
modeLogic(cur.left, map);
//mid
map.put(cur.val, map.getOrDefault(cur.val, 0)+1);
//right
modeLogic(cur.right, map);
return map;
}
```
## Method 2
Because this is a BST, an in-order traversal visits the values in sorted order, which makes the problem easier.
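Because a BST's in-order traversal visits values in ascending order, equal values are always adjacent in that order, so a single previous-value counter is enough. A quick self-contained sketch of that property, using the tree from Example 1 of the previous problem (the TreeNode class is redefined locally so the snippet stands on its own):

```java
import java.util.ArrayList;
import java.util.List;

public class InorderSortedDemo {
    static class TreeNode {
        int val;
        TreeNode left, right;
        TreeNode(int val) { this.val = val; }
    }

    // In-order traversal: left subtree, current node, right subtree.
    static void inorder(TreeNode node, List<Integer> out) {
        if (node == null) return;
        inorder(node.left, out);
        out.add(node.val);
        inorder(node.right, out);
    }

    public static void main(String[] args) {
        // BST for [4,2,6,1,3]
        TreeNode root = new TreeNode(4);
        root.left = new TreeNode(2);
        root.right = new TreeNode(6);
        root.left.left = new TreeNode(1);
        root.left.right = new TreeNode(3);

        List<Integer> vals = new ArrayList<>();
        inorder(root, vals);
        System.out.println(vals); // [1, 2, 3, 4, 6] - sorted ascending
    }
}
```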
## * Wrong Code
```java
public int[] findMode(TreeNode root) {
return modeLogic(root, new LinkedList<Integer>(), 0,0,0 );
}
public List<Integer> modeLogic(TreeNode cur, List<Integer> list, int count, int maxCount, int preVal){
if(cur == null){
return list;
}
// left
modeLogic(cur.left, list, count, maxCount, cur.val);
// mid logic
// 1. if we get the same element we increase the count
if(cur.val == preVal){
count+=1;
}else{
/*** 2. if we finish tranverser pre element we need to
i. comparte the count to previous max Count(we want to find mode)
ii. if large than before we have to remove the whole saved elements
iii. if equal than before we add the element because it is also a potential mode
***/
if(count>maxCount){
maxCount = count;
list.clear();
list.add(preVal);
}else if(count == maxCount){
list.add(preVal);
}
// any way now we have to update the new element and reset the count
preVal = cur.val;
count = 1;
}
// right
modeLogic(cur.right, list, count, maxCount, cur.val);
return list;
}
```
This approach also causes a problem: `count`, `maxCount`, and `preVal` are passed by value, so by the time the left subtree has been fully traversed and the mid logic runs, it does not see the updated previous value and counts, which breaks the logic.
## Fix Above
```java
int count;
int maxCount;
TreeNode pre;
List<Integer> list = new LinkedList<>();
public int[] findMode(TreeNode root) {
pre = root;
modeLogic(root);
return list.stream()
.mapToInt(Integer::intValue)
.toArray();
}
public void modeLogic(TreeNode cur){
if(cur == null){
return;
}
// left
modeLogic(cur.left);
// mid logic
// 1. if we get the same element we increase the count
if(cur.val == pre.val){
count+=1;
}else{
// any way now we have to update the new element and reset the count
pre = cur;
count = 1;
}
if(count>maxCount){
maxCount = count;
list.clear();
list.add(pre.val);
}else if(count == maxCount){
list.add(pre.val);
}
// right
modeLogic(cur.right);
}
``` | flame_chan_llll |
1,899,055 | Childproof Jars: Safe and Secure Storage Solutions for THC Products | childproof jars.png Childproof Jars Secure Storage for Your THC Products Are you looking for a safe... | 0 | 2024-06-24T14:25:19 | https://dev.to/chris_vincit_fe15e1c83a5d/childproof-jars-safe-and-secure-storage-solutions-for-thc-products-4i20 | secure | childproof jars.png
Childproof Jars: Secure Storage for Your THC Products
Are you looking for a safe and secure storage solution for your THC products? Look no further than childproof jars. These jars are specially designed to keep your products secure and inaccessible to small children. We will discuss the advantages of using childproof jars, the innovation behind their design, how to use them, their quality and service, and their various applications.
Advantages of Childproof Jars
Childproof jars offer many advantages over traditional storage solutions. First and foremost, these jars are designed to be tamper-resistant and feature a locking mechanism that prevents small children from accessing the contents. This is necessary because THC products can be dangerous if accidentally ingested by children. Additionally, childproof jars are light-resistant, which helps to preserve the potency of the products by limiting exposure to light. They are also available in a variety of sizes and colors, making them suitable for different product types and personal preferences.
Innovation Behind Childproof Jars
Childproof jars are the result of innovative design and careful engineering. These jars feature a lid with a locking mechanism that is difficult for small children to open but easy enough for most adults to access. The lid is made of high-quality plastic or glass, which makes it durable and long-lasting. These jars are also designed to be opaque, which helps to limit exposure to light and preserve the quality of the contents.
How to Use Childproof Jars
Using childproof jars is easy and straightforward. Simply remove the lid by pressing down on the locking mechanism and twisting it counterclockwise. Once the lid is removed, you can access the contents of the jar. To close the jar, simply push down on the lid and twist it clockwise until it is secure.
Quality and Service of Childproof Jars
Childproof airtight jars are made from high-quality materials and are designed to be durable and long-lasting. They are also backed by excellent customer service and support. Many companies offer warranties and guarantees on their products, ensuring that you are satisfied with your purchase. Additionally, these jars are easy to clean and maintain, making them a reliable storage solution for all your THC products.
Applications of Childproof Jars
Childproof jars are suitable for a variety of THC products, including flower, edibles, concentrates, and tinctures. These jars are available in different sizes, colors, and materials, making them customizable to meet your specific needs. Whether you are a personal user or a dispensary owner, childproof jars are a safe and reliable storage solution for all your THC products.
| chris_vincit_fe15e1c83a5d |
1,899,054 | Stainless steel bottles can handle acidic beverages without corroding. | screenshot-1708794385619.png Great things about Metal Containers Stainless containers undoubtedly... | 0 | 2024-06-24T14:23:11 | https://dev.to/chris_vincit_fe15e1c83a5d/stainless-steel-bottles-can-handle-acidic-beverages-without-corroding-3j72 | drink, water, bottles | screenshot-1708794385619.png
Benefits of Stainless Steel Bottles
Stainless steel bottles are a popular choice for keeping beverages cold or hot for long periods. They are made from high-quality materials that offer several advantages over other types of water containers: they are rust-resistant and can handle acidic beverages without corroding, and they are durable, lightweight, and easy to clean. They are an ideal choice for people who want to save money and reduce plastic waste.
Innovation and Safety of Stainless Steel Containers
Stainless steel bottles are innovative products designed to meet the needs of modern consumers. They are made from high-quality materials that provide safe and sound storage for your beverages. Unlike other containers that may contain harmful chemicals, stainless steel bottles are free of BPA, phthalates, and other toxins. This makes them safe to use and a perfect choice for families with young children.
Use of Metal Bottles
Stainless steel bottles can be used for many purposes, including outdoor activities, sports, and travel. They are ideal for keeping beverages hot or cold for long periods, making them suitable for camping trips, picnics, and other outdoor activities. These bottles are also great for use at the gym or during sports, since they are lightweight and easy to carry. They are also perfect for travel, as they can easily be packed in a backpack or suitcase and can withstand the rigors of travel without leaking or breaking.
How to Use Stainless Steel Bottles
Using a stainless steel bottle is straightforward. Simply fill the bottle with your favorite drink, and it will keep it hot or cold throughout the day. To keep your beverage cool, you can add ice to the bottle. To keep it hot, fill the bottle with hot water first, let it warm up for a few minutes, then replace the water with your drink. Stainless steel bottles are easy to clean and can be washed with soap and water or put in the dishwasher.
Service and Quality of Stainless Steel Containers
Stainless steel bottles are made of high-quality materials and are built for durable performance. They are designed to withstand the rigors of daily use and are usually backed by a manufacturer's warranty. If you have any problems with your stainless steel bottle, the manufacturer provides support and will be able to resolve any issues you have.
Application of Stainless Bottles
Stainless steel bottles are versatile products that can be used in a variety of applications. They are ideal for use at home, at work, or abroad. These bottles can be used to store beverages, soups, and even food. Some stainless steel bottles come with additional features such as built-in straws, handles, and insulation, which make them easier to use and add value for the consumer.
In conclusion, stainless steel bottles are a great investment for anyone who is looking for a safe, durable, and long-lasting water bottle. They offer several advantages over other types of water bottles and are perfect for use in a variety of applications. Whether you are a student, athlete, or outdoor enthusiast, a stainless steel bottle is a great choice that will provide you with years of reliable use.
| chris_vincit_fe15e1c83a5d |
1,896,278 | Styling Our Content | Intro to Styling For the past few weeks, we've discussed how to get all the information... | 27,613 | 2024-06-24T14:23:00 | https://dev.to/nmiller15/styling-our-content-5do6 | webdev, html, css, learning | ## Intro to Styling
For the past few weeks, we've discussed how to get all the information you need onto a web page. However, if you've been following along and coding as you go, you've probably noticed that your pages don't look very appealing. They might even look quite bad, actually…
So far, we haven't added any styles to our web pages. Styles are rules we give to our browser to tell it how we want our HTML elements to look on the page. We can change the size, font, color, position, alignment, and many other things! But how?
## Adding Our First Styles
```html
<p style="color: blue;">This text is blue now!</p>
```
That's it! If you're curious, copy and paste that into a document and open it in your browser. What did you see? The text is blue now! Nice!
So, what's going on here? We surrounded our text content with a paragraph element and gave that paragraph element a style attribute. Notice the `style=` in the opening `<p>` tag. In the attribute's value, we assigned the `color` style to a value of blue: `"color: blue;"`.
Now, this is cool, but what if we wanted to center the text on the page?
```html
<p style="color: blue; text-align: center;">This text is blue and centered!</p>
```
All we had to change was the value in the style attribute! Powerful! But what's happening in that value attribute? We haven't seen that syntax before! That's CSS! But wait, why is it here if we're writing HTML?
We are! But HTML isn't designed for styling; it's designed for structure and semantics (which we'll cover in a later post). If we want to change the appearance of the page, we have to modify the stylesheet. By default, our browser decides how to interpret the information in our HTML using what's called the "user agent stylesheet." The font sizes and colors you've seen up to this point have been based on that! When we add CSS to our style attribute, the styles we've defined will override the user agent stylesheet, and any styles we haven't defined will fall back to that sheet. CSS stands for Cascading Style Sheets, and that's why!

This is great. Now that we can change the look of our page, the possibilities are endless. But imagine if we wanted to get more custom:
```html
<h1 style="font-size: 36px;
font-weight: 500;
text-align: center;
text-decoration: underline;
padding-bottom: 16px;
color: #b0b1b2;
opacity: 0.7;">Welcome to My Website!</h1>
<p style="font-size: 16px;
font-weight: 300;
text-align: left;
padding-left: 100px;
background-color: yellow;">I hope you're having a great day!</p>
```
Wow… that's a lot of code for just two lines of text… and it's so hard to quickly see what's going on! This is a problem because if you were to try and come back to edit this information later, it would take you much longer than if it looked like this:
```html
<h1>Welcome to My Website</h1>
<p>I hope you're having a great day!</p>
```
How can we solve this problem?
## Introducing CSS
To keep our HTML from getting cluttered, we move all of our styling into a `.css` file! For small projects, we typically call this `style.css`. Beyond just tidying up our code, moving styles into a new file also fulfills a programming concept called separation of concerns. This means that we want our code to be segmented into its functional components. Our code shouldn't try to do everything but rather be broken down into smaller pieces that do one thing well!
In this example, instead of having one file that structures our content and styles it, we have two files: one that structures, and one that styles. Concerns separated! So, what might this look like?
```css
h1 {
font-size: 36px;
font-weight: 500;
text-align: center;
text-decoration: underline;
padding-bottom: 16px;
color: #b0b1b2;
opacity: 0.7;
}
p {
font-size: 16px;
font-weight: 300;
text-align: left;
padding-left: 100px;
background-color: yellow;
}
```
Ah, much better! Now, our HTML file isn’t cluttered with all of this styling! What you see above are two CSS rulesets. A ruleset consists of a selector and two curly brackets that contain all the information about how to style the selector. In this example, the two selectors we see are `h1` and `p`. This means that all the styles listed here will be applied to any HTML `<h1>` or `<p>` element, respectively!
We can do this for any type of element! In fact, there are many ways that we can select objects with CSS Selectors, but we'll save that for another week. For now, just know that if you type in the type of element, you can add styles within the curly braces!
We’ve defined some styles, but how do we make sure our browser knows which file to apply them to?
## The Link Element
The first `<head>` element we will learn is the `<link>` element. There are a few use cases for it, but for our purposes, we will use it to link our `style.css` to our `index.html`. In other words, this element tells our browser which stylesheet to use for the page. In practice, it will look like this:
```html
<!DOCTYPE html>
<html>
<head>
<title>My Webpage</title>
<link href="./style.css" rel="stylesheet"/>
</head>
<body>
...
</body>
</html>
```
And that’s it! Our `style.css` is linked to our HTML document, and the styles will be applied.
Let’s break down this link element. First, it is a self-closing element, so we don’t need a closing tag, and it doesn’t take any content inside. There are two required attributes for this element: `href` and `rel`. These are required because the `<link>` element links external resources to the HTML document. So, we need to tell our browser 1) where that resource is and 2) what that resource is to be used for. The `href`, which stands for "hypertext reference," determines the where, while `rel`, or relationship, determines the use!
It's important to note that with the `href` attribute, the path you provide can be either absolute or relative. This means you can give the location of a file path from the root directory, e.g., `/Users/username/Documents/project/style.css` (absolute). Or you can do it relative to the location of the file you are working on, as done above. The `./` before the file name indicates that the browser should look in the same folder (or directory) as the HTML file for `style.css`. You could also add a URL here; many content delivery networks allow you to use pre-made stylesheets, which you will connect using the `<link>` element too!
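For instance, here are the two forms of `href` side by side (the CDN URL below is a made-up placeholder, not a real stylesheet):

```html
<!-- Relative path: style.css sits in the same folder as this HTML file -->
<link href="./style.css" rel="stylesheet"/>

<!-- Full URL: a stylesheet served from elsewhere, e.g. a CDN -->
<link href="https://example.com/css/reset.css" rel="stylesheet"/>
```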
## Challenge
Alright, we've covered a lot today. Now it's time to put it into practice. Take the About Me page you made in last week's challenge and `<link>` it to a new file called `style.css`. (Make sure you put it in the same folder as your HTML file!)
Then, create styles for each of your elements. Play around with the different styles you can assign until your About Me page looks more presentable! (Note: You can also target the `<html>` and `<body>` elements!)
For a complete list of the styles you can apply, check out the Mozilla Developer Network. They host complete documentation for web development languages: HTML, CSS, and JS! Here’s a [link to their site](https://developer.mozilla.org/en-US/). Use the list of properties under "Reference" in the sidebar to see what's possible!
See you next week! | nmiller15 |
1,898,963 | Welcome to the Future! | In conclusion, the launch of InnovateHub marks the beginning of an exciting journey filled with... | 0 | 2024-06-24T14:19:11 | https://dev.to/opalford/welcome-to-the-future-7o0 | webdev, chatgpt | In conclusion, the launch of InnovateHub marks the beginning of an exciting journey filled with endless possibilities. We're thrilled to welcome you to this new frontier and can't wait to see the incredible things you'll achieve here. The future is bright, and with InnovateHub, you're equipped to shine even brighter. Join us today and be a part of something truly extraordinary. Welcome to InnovateHub – where your ideas take flight and the future is [now](https://g.co/kgs/72xGhYB)! | opalford |
1,898,962 | String in JavaScript.! | String string matinli malumotlarni saqlash uchun mo'ljallangan va ularni double quotes va single... | 0 | 2024-06-24T14:18:38 | https://dev.to/samandarhodiev/string-in-javascript-4ef7 | **String**
Strings are meant for storing textual data, and they are created by writing the text inside **_double quotes_** or **_single quotes_**.
For example:
```javascript
let jsName = 'JavaScript';
console.log(jsName);
// result: JavaScript
```
```javascript
let lorem_ = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat."
console.log(lorem_);
// result: Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.
```
Later, starting with ES6 (2015), it also became possible to create strings using the template syntax. A template string is created as follows:

| samandarhodiev | |
1,898,961 | Avalanche Photo Diodes for Automotive LiDAR Systems | screenshot-1708567083343.png Are you tired of always having to keep your eyes on the road? There must... | 0 | 2024-06-24T14:18:11 | https://dev.to/chris_vincit_fe15e1c83a5d/avalanche-photo-diodes-for-automotive-lidar-systems-32h5 | diodes | screenshot-1708567083343.png
Are you tired of always having to keep your eyes glued to the road? Surely something can help make your driving experience a lot safer. Luckily, we have the technology to make it possible: Avalanche Photo Diodes (APDs) for Automotive LiDAR Systems.
Benefits of Avalanche Photo Diodes
Avalanche photodiode detectors can help detect obstacles, traffic, pedestrians, and other vehicles. They can also operate in a variety of weather conditions, which makes them a good fit for your car or truck. LiDAR systems using this technology can recognize objects in real time, which helps keep your vehicle safe at all times.
Innovation at its Best
APDs are notable for their high sensitivity, which allows them to detect even the smallest changes in the environment. This enables an InGaAs-APD-based LiDAR system to process its surroundings faster than older-generation technologies, in all weather conditions.
Keeping You Safe
One of the main benefits of these APDs is safety. A LiDAR system using this technology can recognize objects such as pedestrians and other vehicles in real time. With these features in your vehicle, you can keep your hands on the wheel while driving, which is crucial for the safety of everyone in the car.
Easy-to-Use Application
Using these APDs in automotive LiDAR systems is easy. The device is installed in your car, and you use it much like any other navigation software installed in your vehicle. It is simple and user-friendly to navigate. With the APDs running, you can be at ease in almost any driving situation and relax while driving.
Quality Service
When you purchase APDs and LiDAR systems, you know you are getting an innovative product backed by top-notch service. A team of experts is focused on providing the best possible customer service. They will answer any questions you may have, resolve issues quickly and effectively, and be available for you at any time of day.
If you are looking for a way to make your driving experience safer, you can't go wrong with the APDs for Automotive LiDAR Systems. The innovative technology, high-quality service, easy-to-use application and, most importantly, the safety you receive from these products make them well worth the investment. So, invest in the avalanche photo diodes and secure peace of mind while driving.
| chris_vincit_fe15e1c83a5d |
1,898,950 | Apache Maven Kirish | Umumiy tushuncha Command Language — bu Operatsion sistema(OS) tushunuvchi til. Unix OS uchun Unix... | 0 | 2024-06-24T14:15:25 | https://dev.to/abbos_abdukhakimov_7fa817/apache-maven-kirish-mha | maven, java, pom, superpom | **General overview**
A command language is a language that the operating system (OS) understands: the Unix shell (sh, bash, zsh) for Unix OSes, and batch for Windows. These commands consist of a unique name and their own parameters, for example:

```shell
cp ~/Download/apache-maven.zip ~/maven
cd ~/usr/var/lib
mkdir test_directory
cat data.json
```

These commands differ depending on whether they are batch or shell. In addition, an installed program can have its own commands. For the JDK:

```shell
java -jar application.jar
```

For Postgres:

```shell
psql --host=localhost --port=5432 --dbname=postgres
```

For Maven:

```shell
mvn clean install
```
**Why Maven?**
The set of commands above can be run from a .sh or batch script; to build and run a project, you had to write exactly such scripts and execute them, which caused certain inconveniences:
1. Platform dependence: .bat for Windows, .sh for Linux OSes.
2. Each project had its own scripts, and these did not always suit other projects. That is why the Apache Ant tool was developed.
**Apache Ant**
Apache Ant is an XML-based tool for building and automating projects. Because it is written in Java, it is platform-independent, and it is considered the platform-independent analogue of Make (a build-automation helper for Linux OSes). However, neither tool could provide universality, i.e. bring project builds to a single template and style; each project had its own structure and life cycle, which in turn slowed down the onboarding of developers newly joining a project. For this reason, Apache Maven was developed.
**Apache Maven**
Maven is an XML-based tool written in Java, and it introduced universality and the convention-based project (a single convention or template for all projects).
To put it simply, Maven is a collection of plugins, and those plugins in turn consist of goals. As mentioned above, Maven is a project written in Java; a plugin is an ordinary Java project, and a goal inside it is a MOJO (Maven plain Old Java Object), i.e. an ordinary Java class.

Har bir plugin ichida default HelpMojo class mavjud.Barcha Mojo class lar [AbstractMojo](https://maven.apache.org/ref/3.2.3/maven-plugin-api/apidocs/org/apache/maven/plugin/AbstractMojo.html) class dan vorislik olgan.Misol uchun biron bir plugin da qanday goal lar mavjudligini bilish uchun
`mvn compiler:help`
yanada ko’proq malumot olish uchun -Ddetail=true JVM argument bilan ishga tushirish mumkin
**Archetype Plugin**
Bu plugin yordamchi plugin bo’lib,Maven project ni ma’lum bir qolib asosida yaratishga yordam beradi.Archetype plugin lar o’z navbatida bir necha turlarga bo’linadi.Archetype plugin 7 ta goal(4ta asosiy) goallardan tashkil topgan,bular ichidagi generate goal bizga Maven strukturali project yaratishimizga yordam beradi.Misol uchun
`mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=appfuse-basic-spring -DarchetypeVersion=3.2.5`
a special archetype for creating web applications with the Spring and Hibernate frameworks. There are many such archetypes; see the details here.
Maven has a special [convention](https://uz.wikipedia.org/wiki/Konvensiya) for naming plugins and projects.
Below is the archetype for creating a plain Java project.
`mvn archetype:generate -DarchetypeGroupId=org.apache.maven.archetypes -DarchetypeArtifactId=maven-archetype-quickstart -DarchetypeVersion=1.4`
After we run this plugin, i.e. the quickstart archetype, we are asked to enter a groupId, artifactId and version.
**_GroupId_** — a unique identifier for a Maven project or plugin; it should follow the Java package naming convention.
**_ArtifactId_** — an arbitrary name for the project's JAR file.
**_Version_** — the project version. It is itself divided into several parts, major.minor.increment-qualifier; details [here](https://docs.oracle.com/middleware/1212/core/MAVEN/maven_version.htm#MAVEN8855).
- _1. Major — the principal version of the project._
- _2. Minor — bumped when some relatively small change is made to the project._
- _3. Increment (optional) — bumped when a bug fix is carried out._
- _4. Qualifier (optional) — extra information about the project version, for example BETA, ALPHA, RELEASE or SNAPSHOT._
- _Example: 1.2.10-SNAPSHOT_
As mentioned above, Maven creates projects with a fixed template structure, which in turn helps developers adapt quickly when moving from one project to another. Unlike non-Maven projects, the source code is split into main and test: the main package holds the application classes and the test package holds the classes that test them; the project also contains Maven's principal configuration file, pom.xml. The POM file defines how the project is built and how it behaves, and its XSD schema helps you configure it against a fixed template.

**POM va Super POM**
- **POM (Project Object Model)**
The main configuration file of a Maven project; it consists of four principal parts.
**General Project Information** — a part that has no effect on how the project is built or run; it mostly holds metadata such as the project name, the URL of the project website, contributors (which developers took part), additional licenses, and so on.
**Build Environment** — serves to run the project in different environments (Dev, QA, PROD) with different configurations and different plugins.
**POM Relationships** — POM coordinates (groupId, artifactId, version) for naming the project and managing versions; splitting large projects into parts as multi-module builds; inheritance (the Super POM), covered a bit later; and dependencies, one of the most important parts: the extra libraries the project needs.
**Build Settings** — in this part we define how the project behaves by default; we can add various new plugins, set where our source code lives, and bind a particular goal to a particular plugin.
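For illustration, a minimal `pom.xml` showing these parts might look like this (a sketch; the coordinates and the JUnit dependency are placeholder values):

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>

  <!-- POM relationships: the project's coordinates -->
  <groupId>com.example.app</groupId>
  <artifactId>demo-app</artifactId>
  <version>1.0.0-SNAPSHOT</version>

  <!-- General project information: metadata only, no effect on the build -->
  <name>Demo App</name>
  <url>https://example.com</url>

  <!-- POM relationships: dependencies the project needs -->
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.13.2</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
</project>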

- **Super POM**
As mentioned above, the POM file is the main file of Maven projects, and inheritance between these files is built in. Just as the Java language has the Object class, Maven has a common (base) file for all POM files, called the [Super POM](https://maven.apache.org/ref/3.9.8/maven-model-builder/super-pom.html); this file can also be found inside the downloaded Maven distribution:
`M2_HOME/lib//maven-model-builder-3.9.8.jar:org/apache/maven/model/pom-4.0.0.xml.`

**Repositories section** — specifies where additional libraries (dependencies) are fetched from by default.
**PluginRepositories section** — specifies where the plugins needed by the project are fetched from.
**Build section** — as mentioned above, this part fixes where we keep our compiled sources, where resource files are looked up, the default location of test classes, and the default versions of the helper plugins.
Of course, through inheritance we can override and change all of the settings above (not recommended).
Besides the helper plugins there are also default plugins that come pre-installed in Maven projects; in older versions these plugins were part of the Super POM, but by now they have been moved out separately. So the final (effective) form of our POM file is made up of the Super POM, the pre-installed default plugins, and the POM file in our project. We can view this effective form of the project with the special Help plugin:
`mvn help:effective-pom`
**Abbos Abduhakimov**
- **_Don't hesitate to write to me with questions and suggestions._**
- _email: abbos.abdukhakimov@gmail.com_
- _linkedin: www.linkedin.com/in/abbos-abduhakimov_
| abbos_abdukhakimov_7fa817 |
1,898,959 | Innovating Infrastructure: Plastic Pipe Extrusion Line Applications | Innovating Infrastructure: Plastic Pipe Extrusion Line Applications In the wide world of... | 0 | 2024-06-24T14:15:09 | https://dev.to/chris_vincit_fe15e1c83a5d/innovating-infrastructure-plastic-pipe-extrusion-line-applications-4a85 | innovation, design, javascript | Innovating Infrastructure: Plastic Pipe Extrusion Line Applications
In the wide world of construction, infrastructure plays an essential role in shaping the future. With advances in technology, plastic pipe extrusion line applications have changed the way infrastructure is built. This article looks at the features, innovation, use, services, quality, and applications of plastic pipe extrusion lines.
Features of Plastic Pipe Extrusion Line Applications
Plastic pipe extrusion lines offer numerous benefits such as durability, flexibility, and cost efficiency. The production process ensures that the finished pipes are of the highest quality and meet the required industry standards; the resulting pipelines can handle harsh weather conditions and deliver durable performance.
Innovation of Plastic Pipe Extrusion Line Applications
Innovation in extrusion technology has made it possible to produce more intricate and complex pipelines. Modern lines can produce pipes with accurate dimensions and proportions, improving the overall quality of the final product. The versatility of plastic pipes has led to their extensive use across applications, contributing to the development of infrastructure.
How to Use Plastic Pipe Extrusion Line Applications
Using a plastic pipe extrusion line is not too difficult. Start by designing the required specifications of the pipe, then choose the correct extrusion line for the job. The process consists of melting the plastic and shaping it into a specific pipe profile; the final step is cutting the pipe to the required length.
Services for Plastic Pipe Extrusion Line Applications
Numerous services are available for extrusion line products, from installation to maintenance. These services ensure that the lines meet the necessary criteria and regulations, and give customers confidence that the pipelines will perform their intended function for a long time.
Quality of Plastic Pipe Extrusion Line Applications
Quality is essential in construction, and plastic pipe extrusion lines produce high-quality pipes that meet industry standards. The raw materials used for the pipes are of high quality, ensuring durability and sustainability, so the pipes can transport different liquids and gases without risk of degradation.
Application of Plastic Pipe Extrusion Line Technology
Plastic pipe extrusion line technology is used in several industries, including agriculture, oil and gas, electrical, telecommunications, and construction. The pipes can be shaped to suit each specific application, making them versatile and useful across many companies. Plastic pipe extrusion line products contribute significantly to the development and growth of infrastructure worldwide.
| chris_vincit_fe15e1c83a5d |
1,898,957 | Best Chat Apps for Businesses in 2024 | In the dynamic world of modern business, communication transcends traditional boundaries, making... | 0 | 2024-06-24T14:12:56 | https://blog.productivity.directory/best-chat-apps-for-businesses-d06c756980b4 | productivitytools, chatapps, telegram, discord | In the dynamic world of modern business, communication transcends traditional boundaries, making efficient digital tools indispensable. With remote and hybrid work models becoming the norm, selecting the right chat app can profoundly influence [productivity](https://productivity.directory/) and collaboration. Let's delve into the top chat apps for businesses in 2024, comparing their unique features to help you choose the best fit for your organization.
Slack
=====
The Hub of Modern Workplace Communication

[Slack](https://productivity.directory/slack) revolutionized workplace chat by turning it into an engaging, flexible interaction space. It excels with its channel-based organization system, allowing teams to segregate conversations by project, department, or any other relevant category. The real power of Slack lies in its extensive integration capabilities, connecting seamlessly with tools like Asana, Jira, and more, thus centralizing communications and boosting productivity in one fell swoop.
Microsoft Teams
===============
More Than Just Chat

For businesses embedded in the [Microsoft ecosystem](https://productivity.directory/microsoft-teams), [Teams](https://productivity.directory/microsoft-teams) is a natural choice. This app blends chat, video meetings, and file collaboration into a cohesive platform, integrated with Office 365. It's particularly effective for detailed project management and large enterprise use, supporting everything from casual check-ins to detailed presentations within a unified environment.
Google Chat
===========
Integrated Collaboration for Google Users

[Google Chat](https://productivity.directory/google-meet), designed for seamless interaction within the Google Workspace, offers an intuitive solution for teams that rely on Google tools. It supports direct messaging and group discussions, integrates effortlessly with Gmail, Docs, and Drive, and automates workflows via built-in Google AI. Google Chat simplifies management by keeping all communications and files in sync across all devices.
Discord
=======
From Gamers to Professionals

Originally designed for gamers, [Discord](https://productivity.directory/discord) has expanded its reach to professional markets, particularly appealing to tech-savvy teams and creative industries. Its informal setup --- featuring voice, video, and text chat --- supports real-time collaboration and community building, making it a favorite among startups and digital nomads.
Element
=======
Privacy-Focused Communication
Element is built on the decentralized Matrix protocol, offering a secure, end-to-end encrypted messaging solution. This app is crucial for industries where security is non-negotiable, such as healthcare and finance. Element's commitment to privacy and its robust, customizable features make it an excellent choice for any business prioritizing data security.
Telegram
========
Fast and Versatile Communication

[Telegram](https://productivity.directory/telegram) combines speed, security, and scalability. Known for its swift message delivery, it also provides robust encryption protocols. With support for large groups and powerful bot functionalities, Telegram is ideal for businesses looking to streamline their communication processes while ensuring security.
Conclusion
==========
Each of these chat apps offers distinct advantages and could be the linchpin in enhancing your team's productivity and satisfaction. When choosing the right tool, consider your business's specific needs --- be it integration capabilities, security, ease of use, or scalability.
For more insights on enhancing productivity and making the most of modern technology, continue exploring [The Productivity Blog](https://blog.productivity.directory/). As part of [Productivity Directory](https://productivity.directory/) --- a curated list of [productivity tools and apps ](https://productivity.directory/)--- The Productivity Blog serves as your go-to resource for tips, trends, and tools that empower your business to thrive in a digitally-driven world. Don't miss [The Productivity Newsletter](https://newsletter.productivity.directory/) either; it brings the latest updates and expert advice directly to your inbox, helping you stay ahead in the fast-evolving landscape of business technology. | stan8086 |
1,898,956 | The Future of Equipment Racks in IT Infrastructure | The Future of Equipment Racks in IT Infrastructure Equipment rack can be a right important part of... | 0 | 2024-06-24T14:11:58 | https://dev.to/chris_vincit_fe15e1c83a5d/the-future-of-equipment-racks-in-it-infrastructure-nh0 | racks | The Future of Equipment Racks in IT Infrastructure
Equipment racks are an important part of IT infrastructure. They are used to house and organize equipment such as servers, switches, routers, and other network devices, providing a safe and secure place to keep expensive, sensitive hardware. Over time, equipment racks have gone through many changes and innovations, leading to their growing popularity in IT infrastructure. In the future they will become even more critical, and they will be put to new and innovative uses.
Advantages of Equipment Racks
One of the primary advantages of equipment racks is their ability to improve the manageability of IT infrastructure. Racks provide an organized framework for storing equipment, making it easier to manage. They also help control airflow in the server room: most racks include airflow-management features that enable better heat regulation, which can significantly extend the lifespan of network devices.
Equipment racks also improve the safety of network equipment. When equipment is scattered around a server room without proper housing, it can be hard to find what you need when you need it, and the risk of accidents when moving equipment rises. Racks make it safer to access and handle network equipment, reducing the chance of equipment damage or personal injury.
Innovation in Equipment Rack Design
Rack design has come a long way. Today racks are available in different sizes and layouts to meet the specific needs of different companies. One example of innovation is the modular rack, which allows quicker, easier installation and can be configured to meet specific requirements. There are now also racks with enhanced security features such as biometric locks, surveillance cameras, and alarm systems, providing added protection for expensive, delicate equipment.
How to Use Equipment Racks
To get the most out of an equipment rack, consider the type of equipment to be stored and the size and layout of the room. When setting up a rack, it is important to understand the equipment's requirements for temperature, airflow, and power. Once the requirements are identified, install the equipment in the rack correctly, following the manufacturer's instructions to ensure safety and avoid malfunctions.
Equipment Rack Service and Quality
When investing in equipment racks, it is important to buy high-quality racks backed by good service. Cheap racks may look like a bargain at first, but they often lead to significant problems. High-quality racks offer better airflow management, improved heat regulation, and enhanced security; they are also sturdier and last longer. Do thorough research and spend the money on high-quality racks to avoid trouble down the road.
Equipment Rack Applications
Equipment racks have many applications, from data centers to server rooms to small-business IT infrastructures. In data centers they house equipment such as servers, switches, and routers, helping to organize the hardware and make it easy to manage. In server rooms they serve the same organizing and storage role. Small businesses also benefit: racks offer a cost-effective way to organize and store equipment.
What Does the Future Hold for Equipment Racks?
In the future, equipment racks will become even more critical in IT infrastructure. As network devices become more advanced and expensive, racks will play a bigger role in their protection and security. Racks will also be used in new and innovative ways as technology continues to evolve; for example, they may house advanced cooling systems or AI-powered servers. The possibilities are endless, and the future of equipment racks in IT infrastructure is bright. | chris_vincit_fe15e1c83a5d |
1,898,955 | Strategic Partnerships: Microbial Biologics CRDMO Collaborations | Shanghai Jinli Unique Rope Carbon monoxide., Ltd: Your Partner for Off-Road Hauling as well as... | 0 | 2024-06-24T14:09:28 | https://dev.to/chris_vincit_fe15e1c83a5d/strategic-partnerships-microbial-biologics-crdmo-collaborations-2f2f | bio | Shanghai Jinli Unique Rope Carbon monoxide., Ltd: Your Partner for Off-Road Hauling as well as Aquatic Rope Services
Do you love being outdoors and taking trips that go off-road? Or perhaps you prefer a great day out on the water? In either case, you may need a reliable partner to help you tow your equipment or secure your boat.
Shanghai Jinli Unique Rope Co., Ltd offers off-road towing and marine rope products to make your adventures safe and enjoyable.
Features of Choosing Shanghai Jinli Unique Rope Co., Ltd
When it comes to off-road towing and marine rope products, Shanghai Jinli Unique Rope Co., Ltd stands out from the crowd.
Its products are made with the best-quality materials, so they are built to withstand severe conditions such as heavy loads, moisture, and harsh weather.
You can expect a variety of ropes to choose from, including synthetic winch ropes, UHMWPE ropes, and marine ropes.
Innovation in Rope Manufacturing
Shanghai Jinli Unique Rope Co., Ltd is committed to innovation and constantly improves its rope-making process.
It uses the newest technology to ensure its ropes are strong, lightweight, and easy to work with.
The ropes are also designed to last: they are resistant to abrasion, stretching, and UV damage.
When you choose these products, you can be sure you are getting some of the best-quality ropes on the market.
Safety Considerations
Safety is a top priority at Shanghai Jinli Unique Rope Co., Ltd. Off-road towing and marine activities can be dangerous, which is why safety and health come first.
The ropes are rigorously tested to meet or exceed industry safety standards, and detailed instructions are provided on how to correctly use and maintain the products.
Qualified advice is also offered on how to choose the right rope for your particular needs.
Easy Tips for Using the Products
Using the off-road towing and marine ropes is simple.
The ropes come with detailed instructions that walk you through setup and maintenance.
When using the products, it is essential to follow the instructions carefully to make sure you are using the ropes safely and effectively.
Service and Quality
Shanghai Jinli Unique Rope Co., Ltd is committed to providing outstanding customer care.
The company is there to help you choose the right rope for your preferences and is always available to answer any questions you may have.
The ropes are backed by a solid warranty, and the company stands behind its products with confidence.
You can be sure when you choose Shanghai Jinli Unique Rope Co., Ltd that you will get the best-quality products and service around.
Applications of the Products
The off-road towing and marine ropes have many applications.
They can be used with off-road vehicles such as trucks, Jeeps, and ATVs.
They can also be used for marine activities such as docking, anchoring, and towing.
No matter what your needs are, Shanghai Jinli Unique Rope Co., Ltd has a product that will help you. | chris_vincit_fe15e1c83a5d |
1,898,954 | Building a Chatbot using your documents with LangChain from Scratch | Introduction This tutorial will guide you through creating a chatbot that uses your... | 0 | 2024-06-24T14:09:10 | https://dev.to/jackyxbb/building-a-chatbot-using-your-documents-with-langchain-from-scratch-3mk7 | chatbot, openai, rag, vectordatabase | ## Introduction
This tutorial will guide you through creating a chatbot that uses your documents to provide intelligent responses. We'll leverage LangChain for natural language processing and document handling, and a vector database for efficient data retrieval. The stack includes Vite for project setup, React for the front-end, TailwindCSS for styling, OpenAI for generating responses, and Supabase as our vector database. By the end of this step-by-step guide, you'll know how to build an LLM RAG (Retrieval-Augmented Generation) chatbot with LangChain.
***You can find the complete project code on [GitHub](https://github.com/jacky-xbb/ai-assistant).***
## Who is this for?
This tutorial is designed for developers who:
- Have basic knowledge of React and JavaScript.
- Want to build AI-powered applications using their own documents.
- Are looking to integrate vector databases and AI models into their projects.
## What will be covered?
- Setting up a new project with Vite and TailwindCSS.
- Integrating LangChain with OpenAI for AI capabilities.
- Using Supabase as a vector database to store and retrieve document data.
- Building the front-end components for the chatbot.
- Creating utilities for document handling and AI response generation.
## Architecture

## Preparation
Create a vector store on Supabase, please refer to this GitHub [Repo](https://github.com/jacky-xbb/supabase-vector-py).
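At query time, the vector store does nearest-neighbour search over embedding vectors. Here is a dependency-free sketch of that idea, for intuition only; the real store uses pgvector on Supabase and embeddings produced by OpenAI, so none of this code appears in the project itself:

```javascript
// Toy nearest-neighbour retrieval over embedding vectors, illustrating what
// the Supabase pgvector store does when the retriever runs a query.
function cosineSimilarity(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every stored document against the query embedding and keep the k best.
function topK(queryEmbedding, docs, k) {
  return docs
    .map((doc) => ({ ...doc, score: cosineSimilarity(queryEmbedding, doc.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

In the app this ranking happens inside the database; you never compare vectors by hand, but it is useful to know this is all "retrieval" means.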
## Step-by-Step Guide
### Step 1: Initialize the Project with Vite
Create a new Vite project:
```bash
npm create vite@latest ai-assistant -- --template react
cd ai-assistant
```
### Step 2: Install Necessary Dependencies
Install TailwindCSS and other dependencies:
```bash
npm install tailwindcss postcss autoprefixer react-router-dom @langchain/core @langchain/openai @supabase/supabase-js
```
### Step 3: Configure TailwindCSS
Initialize TailwindCSS:
```bash
npx tailwindcss init -p
```
Configure `tailwind.config.js`:
```js
export default {
content: [
"./index.html",
"./src/**/*.{js,ts,jsx,tsx}",
],
theme: {
extend: {},
},
plugins: [],
}
```
Add Tailwind directives to `src/index.css`:
```css
@tailwind base;
@tailwind components;
@tailwind utilities;
```
### Step 4: Configure Environment Variables
Create a `.env` file in the root of your project and add your API keys:
```
VITE_SUPABASE_BASE_URL=your-supabase-url
VITE_SUPABASE_API_KEY=your-supabase-api-key
VITE_OPENAI_API_KEY=your-openai-api-key
VITE_OPENAI_BASE_URL=https://api.openai.com/v1
```
### Step 5: Project Directory Structure
Organize your project files as follows:
```
src/
├── components/
│ ├── Avatar.jsx
│ ├── Chat.jsx
│ ├── ChatBox.jsx
│ ├── Header.jsx
│ └── Message.jsx
├── utils/
│ ├── chain.js
│ ├── combineDocuments.js
│ ├── formatConvHistory.js
│ └── retriever.js
├── App.jsx
├── main.jsx
├── index.css
└── custom.css
```
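Two of the files under `utils/` are small pure helpers. Minimal sketches of their assumed implementations are shown below (in the project they would be `export`ed from their own files; the bodies match how `Chat.jsx` uses them):

```javascript
// src/utils/combineDocuments.js (sketch): flatten the retrieved documents
// into one context string for the prompt.
function combineDocuments(docs) {
  return docs.map((doc) => doc.pageContent).join('\n\n');
}

// src/utils/formatConvHistory.js (sketch): messages alternate human/AI,
// starting with the human, so even indexes are the user's turns.
function formatConvHistory(messages) {
  return messages
    .map((message, i) => (i % 2 === 0 ? `Human: ${message}` : `AI: ${message}`))
    .join('\n');
}
```

For example, `formatConvHistory(['hi', 'hello'])` returns `Human: hi\nAI: hello`.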
### Step 6: Main Entry Point
src/main.jsx
```jsx
import React from 'react'
import ReactDOM from 'react-dom/client'
import App from './App.jsx'
import './index.css'
import './custom.css'; // Custom CSS
ReactDOM.createRoot(document.getElementById('root')).render(
<React.StrictMode>
<App />
</React.StrictMode>,
)
```
The `main.jsx` file sets up the React application by rendering the `App` component into the root element. It also imports the main styles, including TailwindCSS and any custom CSS.
### Step 7: Creating Components
**App Component**
src/App.jsx
```jsx
import Chat from './components/Chat'
function App() {
return (
<Chat />
)
}
export default App
```
The `App` component serves as the root component of the application. It simply renders the `Chat` component, which contains the main functionality of the AI assistant.
**Chat Component**
src/components/Chat.jsx
```jsx
import { useState, useEffect } from 'react';
import Header from './Header';
import Message from './Message';
import ChatBox from './ChatBox';
import robotAvatar from '../assets/robot-avatar.png';
import userAvatar from '../assets/profile.jpg';
import { chain } from '../utils/chain';
import { formatConvHistory } from '../utils/formatConvHistory';
function Chat() {
const [messages, setMessages] = useState(() => {
// Retrieve messages from local storage if available
const savedMessages = localStorage.getItem('messages');
return savedMessages ? JSON.parse(savedMessages) : [];
});
const [isLoading, setIsLoading] = useState(false);
// Save messages to local storage whenever they change
useEffect(() => {
localStorage.setItem('messages', JSON.stringify(messages));
}, [messages]);
// Handle new messages sent by the user
const handleNewMessage = async (text) => {
const newMessage = {
time: new Date().toLocaleTimeString(),
text,
avatarSrc: userAvatar,
avatarAlt: "User's avatar",
position: "left",
isRobot: false,
};
setMessages((prevMessages) => [...prevMessages, newMessage]);
const updatedMessages = [...messages, newMessage];
setIsLoading(true);
try {
// Invoke the LangChain API to get the AI's response
const response = await chain.invoke({
question: text,
conv_history: formatConvHistory(updatedMessages.map(msg => msg.text)),
});
const aiMessage = {
time: new Date().toLocaleTimeString(),
text: response,
avatarSrc: robotAvatar,
avatarAlt: "Robot's avatar",
position: "right",
isRobot: true,
};
setMessages((prevMessages) => [...prevMessages, aiMessage]);
} catch (error) {
console.error("Error fetching AI response:", error);
} finally {
setIsLoading(false);
}
};
return (
<main className="font-merriweather px-10 py-8 mx-auto w-full bg-sky-950 max-w-[480px] h-screen">
<Header
mainIconSrc={robotAvatar}
mainIconAlt="Main icon"
title="AI-Assistant"
/>
<div id="chatbot-conversation-container" className="flex flex-col gap-y-2 mt-4">
{messages.map((message, index) => (
<Message
key={index}
time={message.time}
text={message.text}
avatarSrc={message.avatarSrc}
avatarAlt={message.avatarAlt}
position={message.position}
isRobot={message.isRobot}
/>
))}
</div>
<div className="mt-auto mb-4">
<ChatBox
label="What's happening?"
buttonText="Ask"
onSubmit={handleNewMessage}
isLoading={isLoading}
/>
</div>
</main>
);
}
export default Chat;
```
The `Chat` component is the core of the AI assistant, handling message state and interaction logic.
- `useState` is used to initialize and manage the `messages` state, which holds the chat messages, and the `isLoading` state, which indicates if a message is being processed.
- The `useEffect` hook ensures that messages are saved to local storage whenever the `messages` state changes. This allows chat history to persist across page reloads.
- The `handleNewMessage` function processes new messages sent by the user. It creates a new message object, updates the state with the new message, and sends the message to the LangChain API to get a response.
- The AI response is then added to the messages state, updating the chat interface with both the user and AI messages.
- The `return` statement defines the layout of the chat interface, including the header, message list, and input box. The `Header` component displays the chat title and main icon, while the `Message` component renders each message in the chat history. The `ChatBox` component provides an input field and button for sending new messages.
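The call to `chain.invoke` above hides several steps. Here is a dependency-free sketch of the RAG flow that `src/utils/chain.js` is assumed to implement; in the real file the first and last steps are OpenAI calls wired together as LangChain runnables, and `retrieve` is a Supabase vector search, so everything below is illustrative only:

```javascript
// 1. Rewrite question + history into a standalone question (an LLM call in practice).
function toStandaloneQuestion(question, convHistory) {
  return convHistory ? `${question} [history: ${convHistory}]` : question;
}

// 2. Retrieve relevant document chunks (a vector search in practice).
function retrieve(standaloneQuestion, docs) {
  const q = standaloneQuestion.toLowerCase();
  return docs.filter((doc) => q.includes(doc.keyword));
}

// 3. Answer from the combined context (an LLM call in practice).
function answerFromContext(question, context) {
  return context ? `Answer based on: ${context}` : "Sorry, I don't know.";
}

// The shape of chain.invoke as Chat.jsx calls it.
async function invoke({ question, conv_history }, docs) {
  const standalone = toStandaloneQuestion(question, conv_history);
  const context = retrieve(standalone, docs)
    .map((doc) => doc.pageContent)
    .join('\n\n');
  return answerFromContext(question, context);
}
```

The takeaway is the three-stage shape (condense, retrieve, answer); the payload `{ question, conv_history }` is exactly what `handleNewMessage` passes in.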
**Header Component**
src/components/Header.jsx
```jsx
import React from "react";
import PropTypes from "prop-types";
import Avatar from "./Avatar";
function Header({ mainIconSrc, mainIconAlt, title }) {
return (
<header className="flex flex-col mx-auto items-center grow shrink-0 basis-0 w-fit">
<Avatar src={mainIconSrc} alt={mainIconAlt} className="aspect-square w-[129px]" />
<h1 className="mt-3 text-4xl text-center text-stone-200">{title}</h1>
</header>
);
}
Header.propTypes = {
mainIconSrc: PropTypes.string.isRequired,
mainIconAlt: PropTypes.string.isRequired,
title: PropTypes.string.isRequired,
};
export default Header;
```
The `Header` component displays the title and main icon of the chat application.
**Message Component**
src/components/Message.jsx
```jsx
import PropTypes from "prop-types";
import Avatar from "./Avatar";
const avatarStyles = "rounded-full shrink-0 self-end aspect-square w-[47px]";
const messageTimeStyles = "self-center mt-5 text-xs font-bold leading-4 text-slate-600";
function Message({ text, time, avatarSrc, avatarAlt, position, isRobot }) {
const messageTextStyles = `text-sm leading-4 rounded-xl p-3 mt-2 ${isRobot ? "bg-sky-50 black-800" : "bg-cyan-800 text-white"
} font-merriweather`; // Apply Merriweather font
return (
<section className={`flex gap-2 ${position === 'left' ? 'justify-start' : 'justify-end'} items-center`}>
{position === 'left' && <Avatar src={avatarSrc} alt={avatarAlt} className={avatarStyles} />}
<div className="flex flex-col grow shrink-0 basis-0 w-fit">
{time && <time className={messageTimeStyles}>{time}</time>}
<p className={messageTextStyles}>{text}</p>
</div>
{position === 'right' && <Avatar src={avatarSrc} alt={avatarAlt} className={avatarStyles} />}
</section>
);
}
Message.propTypes = {
text: PropTypes.string.isRequired,
time: PropTypes.string,
avatarSrc: PropTypes.string.isRequired,
avatarAlt: PropTypes.string.isRequired,
position: PropTypes.oneOf(['left', 'right']).isRequired,
isRobot: PropTypes.bool.isRequired,
};
export default Message;
```
- The `Message` component displays individual messages within the chat interface.
- It uses the `Avatar` component to display the avatar of the message sender.
- The `messageTextStyles` are applied to style the message text, with different styles for user and robot messages.
**ChatBox Component**
src/components/ChatBox.jsx
```jsx
import { useState } from "react";
import PropTypes from "prop-types";
const inputStyles = "w-full px-4 pt-4 pb-4 mt-2 text-base leading-5 bg-sky-900 rounded-xl border-4 border-solid shadow-sm border-slate-600 text-gray-100 resize-none";
const buttonStyles = "w-full px-6 py-3 mt-3 text-2xl font-bold text-center text-white whitespace-nowrap rounded-xl bg-slate-600 hover:bg-slate-700 hover:translate-y-0.5 focus:outline-none ";
const buttonDisabledStyles = "w-full px-6 py-3 mt-3 text-2xl font-bold text-center text-white whitespace-nowrap rounded-xl bg-slate-600 opacity-50 cursor-not-allowed";
function ChatBox({ label, buttonText, onSubmit, isLoading }) {
const [message, setMessage] = useState("");
const handleSubmit = (event) => {
event.preventDefault();
if (message.trim() && !isLoading) {
onSubmit(message);
setMessage('');
}
};
return (
<section className="flex flex-col mt-4">
<form onSubmit={handleSubmit}>
<label htmlFor="chatInput" className="sr-only">{label}</label>
<input
id="chatInput"
placeholder={label}
className={inputStyles}
value={message}
onChange={(e) => setMessage(e.target.value)}
/>
<button
type="submit"
className={isLoading ? buttonDisabledStyles : buttonStyles}
disabled={isLoading}
>
{buttonText}
</button>
</form>
</section>
);
}
ChatBox.propTypes = {
label: PropTypes.string.isRequired,
buttonText: PropTypes.string.isRequired,
onSubmit: PropTypes.func.isRequired,
isLoading: PropTypes.bool.isRequired,
};
export default ChatBox;
```
- The `ChatBox` component provides an input box and a button for users to send messages.
- It uses `useState` to manage the message input state.
- The `handleSubmit` function handles the form submission, invoking the `onSubmit` function with the message content.
**Avatar Component**
src/components/Avatar.jsx
```jsx
import React from "react";
import PropTypes from "prop-types";
function Avatar({ src, alt, className }) {
return <img loading="lazy" src={src} alt={alt} className={className} />;
}
Avatar.propTypes = {
src: PropTypes.string.isRequired,
alt: PropTypes.string.isRequired,
className: PropTypes.string,
};
export default Avatar;
```
The `Avatar` component renders an avatar image.
### Step 8: Utility Functions
**Chain Utility**
src/utils/chain.js
```jsx
import {
RunnablePassthrough,
RunnableSequence,
} from "@langchain/core/runnables";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { PromptTemplate } from "@langchain/core/prompts";
import { retriever } from './retriever';
import { combineDocuments } from './combineDocuments';
const openAIApiKey = import.meta.env.VITE_OPENAI_API_KEY;
const openAIUrl = import.meta.env.VITE_OPENAI_BASE_URL;
const llm = new ChatOpenAI({
apiKey: openAIApiKey,
configuration: {
baseURL: openAIUrl,
}
});
// A string holding the phrasing of the prompt
const standaloneQuestionTemplate = `Given some conversation history (if any) and a question,
convert the question to a standalone question.
conversation history: {conv_history}
question: {question}
standalone question:`;
// A prompt created using PromptTemplate and the fromTemplate method
const standaloneQuestionPrompt = PromptTemplate.fromTemplate(standaloneQuestionTemplate);
const answerTemplate = `You are a helpful and enthusiastic support bot who can answer a given question based on
the context provided and the conversation history. Try to find the answer in the context. If the answer is not given
in the context, find the answer in the conversation history if possible. If you really don't know the answer,
say "I'm sorry, I don't know the answer to that." And direct the questioner to email help@example.com.
Don't try to make up an answer. Always speak as if you were chatting to a friend.
context: {context}
conversation history: {conv_history}
question: {question}
answer: `;
const answerPrompt = PromptTemplate.fromTemplate(answerTemplate);
// Take the standaloneQuestionPrompt and PIPE the model
const standaloneQuestionChain = standaloneQuestionPrompt
.pipe(llm)
.pipe(new StringOutputParser());
const retrieverChain = RunnableSequence.from([
prevResult => prevResult.standalone_question,
retriever,
combineDocuments,
]);
const answerChain = answerPrompt
.pipe(llm)
.pipe(new StringOutputParser());
const logConvHistory = async (input) => {
console.log('Conversation History:', input.conv_history);
return input;
}
const chain = RunnableSequence.from([
{
standalone_question: standaloneQuestionChain,
original_input: new RunnablePassthrough(),
},
{
context: retrieverChain,
question: ({ original_input }) => original_input.question,
conv_history: ({ original_input }) => original_input.conv_history,
},
logConvHistory,
answerChain,
]);
export { chain };
```
Here is a flowchart of the chain:

This file sets up a sequence of operations to handle the user's query and retrieve an appropriate response using LangChain, OpenAI, and a retriever for context.
- `standaloneQuestionChain`: Converts the user's query, which may depend on the previous conversation history, into a standalone question that can be understood without any prior context.
- `retrieverChain`: Retrieves the documents from the vector database that are relevant to the standalone question produced by `standaloneQuestionChain`.
- `logConvHistory`: A simple logging function that prints the conversation history to the console. This is helpful for debugging and for understanding how the conversation history influences the generated responses.
- `answerChain`: Generates a response to the user's question based on the retrieved context and the conversation history.
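The sequential piping that `RunnableSequence` provides can be illustrated with a few lines of plain JavaScript. This is only the composition pattern, not the actual LangChain API, and the step functions below are hypothetical stand-ins:

```javascript
// A minimal plain-JavaScript sketch of the "runnable sequence" idea:
// each step receives the previous step's output.
const runSequence = (steps) => async (input) => {
  let value = input;
  for (const step of steps) {
    value = await step(value);
  }
  return value;
};

// Hypothetical stand-ins for the real chains:
const standaloneQuestion = async ({ question }) => question.trim() + "?";
const retrieve = async (q) => `context for: ${q}`;

const demoChain = runSequence([
  (input) => ({ question: input }),
  standaloneQuestion,
  retrieve,
]);

demoChain("what are your skills ").then(console.log);
// → "context for: what are your skills?"
```

The real chain works the same way, except each step is an LLM call, a retriever lookup, or an output parser rather than a toy function.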
**Combine Documents Utility**
src/utils/combineDocuments.js
```jsx
export function combineDocuments(docs) {
return docs.map((doc) => doc.pageContent).join('\n\n');
}
```
This utility function combines multiple documents into a single string. This is useful for providing a unified context to the AI model.
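For instance, given two hypothetical retrieved documents, the helper flattens them into one context string with a blank line between entries:

```javascript
// Self-contained demo of the combineDocuments helper defined above,
// using two hypothetical retrieved documents.
function combineDocuments(docs) {
  return docs.map((doc) => doc.pageContent).join("\n\n");
}

const docs = [
  { pageContent: "Jack is a software engineer." },
  { pageContent: "Jack enjoys hiking on weekends." },
];

console.log(combineDocuments(docs));
// Prints the two contents separated by a blank line.
```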
**Format Conversation History Utility**
src/utils/formatConvHistory.js
```jsx
export function formatConvHistory(messages) {
return messages.map((message, i) => {
if (i % 2 === 0) {
return `Human: ${message}`
} else {
return `AI: ${message}`
}
}).join('\n')
}
```
This utility function takes an array of `messages` and formats them into a conversation between a human and an AI.
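For example, with messages alternating between the human and the AI (human first), the helper produces a labelled transcript:

```javascript
// Self-contained demo of formatConvHistory: messages alternate
// human/AI, starting with the human.
function formatConvHistory(messages) {
  return messages
    .map((message, i) => (i % 2 === 0 ? `Human: ${message}` : `AI: ${message}`))
    .join("\n");
}

const history = formatConvHistory(["Hi!", "Hello, how can I help?", "What do you do?"]);
console.log(history);
// Human: Hi!
// AI: Hello, how can I help?
// Human: What do you do?
```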
**Retriever Utility**
src/utils/retriever.js
```jsx
import { SupabaseVectorStore } from "@langchain/community/vectorstores/supabase";
import { OpenAIEmbeddings } from "@langchain/openai";
import { createClient } from '@supabase/supabase-js';
const sbUrl = import.meta.env.VITE_SUPABASE_BASE_URL;
const sbApiKey = import.meta.env.VITE_SUPABASE_API_KEY;
const openAIApiKey = import.meta.env.VITE_OPENAI_API_KEY;
const openAIUrl = import.meta.env.VITE_OPENAI_BASE_URL;
const client = createClient(sbUrl, sbApiKey);
const embeddings = new OpenAIEmbeddings({
apiKey: openAIApiKey,
configuration: {
baseURL: openAIUrl
}
});
const vectorStore = new SupabaseVectorStore(embeddings, {
client,
tableName: 'personal_infos',
queryName: 'match_personal_infos',
});
const retriever = vectorStore.asRetriever();
export { retriever };
```
This utility sets up a retriever using Supabase and OpenAI embeddings.
- `SupabaseVectorStore`: Initializes a vector store using Supabase.
- `OpenAIEmbeddings`: Creates embeddings using OpenAI.
- The retriever is then used to fetch relevant documents based on the user's query.
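Under the hood, a vector-store retriever ranks stored embeddings by their similarity to the query embedding. Here is a toy sketch of that ranking step in plain JavaScript — three-dimensional vectors stand in for real embeddings, and the actual Supabase/OpenAI calls are not involved:

```javascript
// Toy sketch of similarity search: rank stored vectors by cosine
// similarity to a query vector. Real embeddings have ~1536 dimensions.
function cosineSimilarity(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

const store = [
  { text: "doc about cats", vector: [1, 0, 0] },
  { text: "doc about dogs", vector: [0, 1, 0] },
];

const query = [0.9, 0.1, 0];
const best = store
  .map((d) => ({ ...d, score: cosineSimilarity(query, d.vector) }))
  .sort((a, b) => b.score - a.score)[0];

console.log(best.text); // → "doc about cats"
```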
### Running the Application
1. **Start the Development Server:**
```bash
npm run dev
```
2. **Access the Application:**
Open your browser and navigate to `http://localhost:5173`.
## Conclusion
In this tutorial, you learned how to build an AI assistant using LangChain, OpenAI, and Supabase. We've covered the setup of a Vite project, integration of TailwindCSS, and the creation of React components and utility functions. You now have a functional AI assistant that can interact with users and provide responses based on the conversation history and context.
## References
- [GitHub Repo](https://github.com/jacky-xbb/ai-assistant)
- [Install Tailwind CSS with Vite](https://tailwindcss.com/docs/guides/vite)
- [LangChain Documentation](https://js.langchain.com/v0.2/docs/introduction/)
- [Supabase Vector Documentation](https://supabase.com/docs/guides/ai) | jackyxbb |
1,898,953 | Federated Learning and Differential Privacy | Federated learning is an advanced technique in the field of machine learning that makes it possible to... | 0 | 2024-06-24T14:08:21 | https://dev.to/gcjordi/aprendizaje-federado-y-privacidad-diferencial-1n4b | ia, ai | Federated learning is an advanced machine learning technique that makes it possible to train AI models collaboratively without centralizing the data.
This approach is particularly relevant in contexts where data privacy is a critical concern, such as medical, financial, and mobile-device applications.
Combined with differential privacy, federated learning offers a robust solution for protecting user privacy while preserving model performance.
**The Concept of Federated Learning**
Federated learning is based on the idea that the data stays on users' local devices. Instead of sending the data to a central server, model updates are sent. The process generally follows these steps:
_Distribution of the Initial Model:_ An initial model is distributed to all participating devices.
_Local Training:_ Each device trains the model using its local data.
_Aggregation of Updates:_ The locally trained model updates are sent to a central server, where they are aggregated to update the global model.
_Global Model Update:_ The updated global model is redistributed to the devices for another training round.
This cycle repeats until the model converges. The key point is that the data never leaves the local devices, which significantly improves privacy.
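The aggregation loop described above can be sketched in a few lines of JavaScript. This is an illustrative toy with a one-parameter model and plain averaging, not a production federated learning framework:

```javascript
// Toy sketch of federated averaging: each client trains locally on its
// own data and only the updated parameter (never the data) is averaged.
function localTrain(globalWeight, localData) {
  // One gradient step on mean squared error toward the local data mean.
  const mean = localData.reduce((s, x) => s + x, 0) / localData.length;
  const lr = 0.5;
  return globalWeight + lr * (mean - globalWeight);
}

function federatedRound(globalWeight, clients) {
  const updates = clients.map((data) => localTrain(globalWeight, data));
  // The server aggregates the updates (a simple average here).
  return updates.reduce((s, w) => s + w, 0) / updates.length;
}

let weight = 0;
const clients = [[1, 2, 3], [3, 4, 5]]; // data stays on each client
for (let round = 0; round < 20; round++) {
  weight = federatedRound(weight, clients);
}
console.log(weight.toFixed(2)); // converges toward 3.00, the overall mean
```

Note how only `weight` values cross the client/server boundary; the raw `clients` arrays never leave their owners.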
**Differential Privacy**
Differential privacy is a mathematical technique that provides guarantees about the privacy of individual records in a dataset. It is based on the idea that the results of an analysis should not reveal specific information about any individual in the dataset. To achieve this, random noise is introduced into the data or into the results of the analysis. In the context of federated learning, differential privacy can be applied in several ways:
_Noise in Model Updates:_ Before model updates are sent to the central server, noise can be added to them. This ensures that the individual contributions of the devices cannot easily be inferred.
_Secure Aggregation:_ Secure aggregation techniques, such as homomorphic encryption, make it possible to combine model updates in a way that preserves privacy while still producing an accurate global model.
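One common way to implement the "noise in model updates" approach is the Laplace mechanism, where the noise scale is sensitivity/ε: the smaller ε is, the more noise is added and the stronger the privacy guarantee. A minimal illustrative sketch in JavaScript — the uniform sample `u` is an explicit parameter here only so the function is deterministic for demonstration:

```javascript
// Laplace mechanism sketch: noise scale b = sensitivity / epsilon.
function laplaceSample(scale, u) {
  // Inverse-CDF sampling from Laplace(0, scale), given u in (0, 1).
  const v = u - 0.5;
  return -scale * Math.sign(v) * Math.log(1 - 2 * Math.abs(v));
}

function privatizeUpdate(update, sensitivity, epsilon, u = Math.random()) {
  const scale = sensitivity / epsilon;
  return update + laplaceSample(scale, u);
}

// With u = 0.5 the sampled noise is exactly 0, so the update passes through:
console.log(privatizeUpdate(1.25, 1.0, 0.1, 0.5)); // → 1.25
// In practice u is random, so each device's true update is masked.
```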
**Advantages of Federated Learning with Differential Privacy**
_Protection of Sensitive Data:_ Keeping data on local devices and applying differential privacy reduces the risk of exposing sensitive data.
_Improved User Trust:_ Users are more likely to participate in data-collection schemes if they trust that their privacy is protected.
_Regulatory Compliance:_ This approach makes it easier to comply with privacy regulations, such as the GDPR and HIPAA, that demand rigorous handling of personal data.
**Challenges and the Future of Federated Learning**
Despite its advantages, federated learning faces several challenges, including the heterogeneity of local data, communication efficiency, and the need for advanced techniques to handle large volumes of distributed data. However, current research is addressing these problems, and federated learning with differential privacy is expected to become a standard for applications that require strong privacy guarantees.
In summary, federated learning combined with differential privacy offers a powerful framework for training AI models while protecting user data. This combination promises to transform how privacy is handled in machine learning, enabling an optimal balance between model performance and privacy protection.
[Jordi G. Castillón](https://jordigarcia.eu/) | gcjordi |
1,898,952 | Enhancing Supply Chain Efficiency with Marradata.ai's Advanced Technologies | In the fast-paced world of business, supply chain optimization has become a critical factor for... | 0 | 2024-06-24T14:07:48 | https://dev.to/marketingemer/enhancing-supply-chain-efficiency-with-marradataais-advanced-technologies-4m81 | supplychain, datascience, marradata, datastructures | In the fast-paced world of business, supply chain optimization has become a critical factor for achieving operational excellence and maintaining a competitive edge. Leveraging advanced technologies, Marradata.ai provides comprehensive solutions that streamline supply chain operations, reduce costs, and enhance service levels. This post delves into the various aspects of supply chain optimization and how Marradata.ai's innovative solutions can transform business operations.
## Understanding Supply Chain Optimization
Supply chain optimization involves the strategic management of supply chain activities to maximize customer value and achieve a sustainable competitive advantage. The goal is to ensure that the supply chain operates as efficiently as possible, balancing costs, quality, and speed.
## The Role of Technology in Supply Chain Optimization
Several advanced technologies play a pivotal role in optimizing supply chain operations. Marradata.ai integrates these technologies to offer businesses a competitive edge.
### Predictive Analytics
Predictive analytics involves using historical data, machine learning algorithms, and statistical techniques to forecast future events. In supply chain management, predictive analytics can predict demand, optimize inventory levels, and improve logistics planning.
**Benefits of Predictive Analytics**
- **Accurate Demand Forecasting:** Marradata.ai's algorithms analyze historical sales data, market trends, and external factors to forecast demand accurately, ensuring the right products are available at the right time.
- **Inventory Optimization:** Accurate demand forecasts help maintain optimal inventory levels, reducing excess stock and minimizing stockouts. This balance ensures lower holding costs and improved cash flow.
- **Proactive Maintenance:** Predictive analytics can predict equipment failures before they occur. Marradata.ai uses this capability to schedule timely maintenance, preventing costly downtime and extending machinery lifespan.
- **Logistics and Transportation Planning:** Predictive models optimize transportation routes and schedules, reducing fuel consumption and delivery times, leading to cost savings and enhanced customer satisfaction.
- **Risk Management:** Identifying potential supply chain disruptions and bottlenecks enables proactive risk management, ensuring a resilient and responsive supply chain.
### Internet of Things (IoT)
The Internet of Things (IoT) is transforming supply chain management by providing real-time data and insights. IoT devices such as sensors, RFID tags, and GPS trackers collect and share data on inventory levels, transportation conditions, and equipment performance.
**Benefits of IoT**
- **Real-time Visibility:** IoT devices provide real-time data on inventory levels, shipment locations, and transportation conditions. Marradata.ai uses this data to offer complete visibility into the supply chain, enabling informed decision-making.
- **Enhanced Inventory Management:** IoT sensors monitor inventory levels and environmental conditions, ensuring products are stored under optimal conditions. This data helps optimize inventory management, reducing excess stock and preventing stockouts.
- **Improved Logistics and Transportation:** GPS trackers and IoT sensors provide real-time data on transportation routes and conditions. This information is used to optimize logistics operations, reducing fuel consumption, improving delivery times, and enhancing customer satisfaction.
- **Predictive Maintenance:** IoT devices monitor equipment performance and predict potential failures. This allows for proactive maintenance scheduling, minimizing downtime and extending machinery lifespan.
- **Supply Chain Risk Management:** IoT data helps identify potential disruptions and risks in the supply chain, enabling proactive risk mitigation and ensuring a resilient supply chain.
### Machine Learning
Machine learning (ML) involves training algorithms to learn from data and make predictions or decisions without being explicitly programmed. In supply chain management, ML can be used to predict demand, optimize routes, manage inventory, and more.
**Benefits of Machine Learning**
- **Accurate Demand Forecasting:** Machine learning analyzes historical sales data, market trends, and other factors to predict future demand accurately. This helps maintain optimal inventory levels and avoid stockouts or overstocking.
- **Efficient Inventory Management:** Machine learning algorithms analyze real-time data to optimize inventory levels, reducing excess inventory, minimizing holding costs, and ensuring timely stock replenishment.
- **Optimized Transportation and Logistics:** Machine learning solutions optimize transportation routes and schedules by analyzing traffic patterns, weather conditions, and other factors, reducing transportation costs and improving delivery times.
- **Improved Supplier Management:** Analyzing supplier performance data helps identify the best suppliers and negotiate better terms, ensuring a reliable supply chain and reducing procurement costs.
- **Risk Mitigation:** Identifying potential risks and disruptions in the supply chain enables proactive measures to mitigate these risks, ensuring a resilient supply chain.
## Real-world Applications
### Case Study 1: Enhancing Supply Chain Efficiency with Predictive Analytics
A global manufacturing firm partnered with Marradata.ai to enhance its supply chain efficiency. By implementing predictive analytics, the company reduced inventory costs by 18%, improved on-time delivery rates by 22%, and minimized supply chain disruptions. This partnership showcases the tangible benefits of integrating predictive analytics into supply chain management.
### Case Study 2: Leveraging IoT for Real-time Visibility
A leading logistics company integrated IoT technology into its supply chain operations with Marradata.ai. By implementing IoT-enabled solutions, the company achieved a 15% reduction in transportation costs, a 20% improvement in delivery times, and enhanced visibility into its supply chain. This case highlights the transformative impact of IoT on supply chain optimization.
### Case Study 3: Boosting Demand Forecast Accuracy with Machine Learning
A global retail chain partnered with Marradata.ai to implement machine learning-driven supply chain solutions. The results were impressive: a 30% improvement in demand forecast accuracy, a 20% reduction in inventory costs, and a 15% increase in on-time deliveries. This case demonstrates the significant impact of machine learning on supply chain optimization.
## Conclusion
[Supply chain optimization](https://marradata.ai) is essential for businesses looking to enhance operational efficiency, reduce costs, and improve customer satisfaction. Marradata.ai offers cutting-edge solutions that leverage predictive analytics, IoT, and machine learning to optimize various aspects of the supply chain. By adopting Marradata.ai's innovative technologies, businesses can achieve superior supply chain optimization, gain a competitive edge, and ensure long-term success.
For more information on how Marradata.ai can help your business optimize its supply chain, visit marradata.ai. | marketingemer |
1,898,951 | Embracing Comfort: The Benefits of Electric Heaters | Embracing Comfort: The Benefits of Electric Heaters Look no further than It. With their benefits... | 0 | 2024-06-24T14:06:45 | https://dev.to/chris_vincit_fe15e1c83a5d/embracing-comfort-the-benefits-of-electric-heaters-5a06 | designpatterns |
Embracing Comfort: The Benefits of Electric Heaters
Look no further than electric heaters. With a few revolutionary qualities, electric heaters offer a safer, easier way to stay warm and cozy through every season. Below we explore the benefits of electric heaters, including their ease of use, safety, quality, and applications.
Features of Electric Heaters: Simple and Portable
Electric heaters offer a few clear advantages. First, they are typically quick and simple to use, needing only an electrical outlet. They are easy to move around your home, from room to room or even between properties, without any complex setup or installation.
Innovation in Electric Heating: Energy Efficiency and Smart Controls
Recent innovations in heating technology have made electric heaters dramatically more energy-efficient and intuitive to use. Many electric heaters now come with smart controls that let you set precise temperatures and schedule heating times based on your needs.
Safety Features of Electric Heaters: No Flames or Gas
Among the biggest problems with traditional heaters is the risk of fire and gas leaks. With electric heaters, however, these risks are virtually eliminated. Electric heaters use no gas or open flames, reducing the chance of accidental fires or explosions.
How to Use Electric Heaters: Placement and Maintenance Tips
To get the most out of an electric heater, it is essential to use it correctly and take care of it. When placing your heater, make sure it sits in a well-ventilated area away from curtains, furniture, and other flammable items. You should also avoid using extension cords or power strips with your electric heater, as these can raise the risk of fire.
Service Quality of Electric Heaters: Pick a Reliable Brand
When choosing an electric heater, it is important to consider the quality and reputation of the brand. Look for a reputable manufacturer with a strong track record for quality and safety. You may also want to look for certification from organizations such as UL or CSA, which indicates the heater has been tested and meets safety and efficiency standards.
Application of Electric Heaters: From Homes to Workplaces
Electric heaters can be used in many different settings, from homes to workplaces and even outdoor areas. You can choose from a wide range of types and sizes according to your needs. Some popular types include fan heaters, baseboard heaters, and wall-mounted heaters.
Electric heaters offer benefits and innovative features that make them a good option for anyone looking to stay warm at home or at work. With their convenience, energy efficiency, safety, and flexibility, you can enjoy warmth and comfort year-round without complex installation. | chris_vincit_fe15e1c83a5d
1,898,949 | What are the steps to integrate native code with a Flutter application for both Android and iOS? | Integrating native code with a Flutter application allows you to utilize platform-specific features... | 0 | 2024-06-24T14:05:17 | https://dev.to/chariesdevil/what-are-the-steps-to-integrate-native-code-with-a-flutter-application-for-both-android-and-ios-2jh | Integrating native code with a Flutter application allows you to utilize platform-specific features and APIs that may not be available directly through Flutter's framework. This process involves writing platform-specific code in languages like Java or Kotlin for Android, and Swift or Objective-C for iOS. Here, we'll walk through the steps to integrate native code with a Flutter app for both Android and iOS, including detailed explanations and examples.
## Understanding Flutter's Platform Channels
Flutter uses platform channels to communicate between the Dart code and the native code. A platform channel allows sending messages between the Flutter framework and the platform-specific code. These channels use a binary message codec to encode and decode messages and can be used for both method invocation and event handling.
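Conceptually, a method channel is a named registry: one side registers a handler, the other invokes methods by name and awaits the result. Here is a toy model of that pattern in plain JavaScript (not Flutter's actual implementation):

```javascript
// Toy model of a method channel: handlers are registered under a
// channel name, and callers invoke methods by name and await a result.
class ToyMethodChannel {
  constructor(name) {
    this.name = name;
    this.handler = null;
  }
  setMethodCallHandler(handler) {
    this.handler = handler; // handler: (method, args) => result
  }
  async invokeMethod(method, args) {
    if (!this.handler) throw new Error(`No handler on channel ${this.name}`);
    return this.handler(method, args);
  }
}

const channel = new ToyMethodChannel("com.example/channel");
channel.setMethodCallHandler((method) =>
  method === "getNativeMessage" ? "Hello from native" : null
);

channel.invokeMethod("getNativeMessage").then(console.log); // → "Hello from native"
```

In Flutter the two sides live in different runtimes (Dart and Java/Kotlin or Swift/Objective-C), with a binary codec serializing the messages in between, but the register-and-dispatch shape is the same.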
**Steps to Integrate Native Code with a Flutter Application**
**1. Setting Up a New Flutter Project**
First, create a new Flutter project if you don't already have one:
```bash
flutter create native_integration_example
cd native_integration_example
```
**2. Understanding the Project Structure**
In the Flutter project, the platform-specific code resides in the `android` and `ios` directories:
- `android`: Contains the Android-specific code in Java/Kotlin.
- `ios`: Contains the iOS-specific code in Swift/Objective-C.
**3. Creating a Platform Channel**
Define a method channel in your Flutter code. This channel will be used to communicate with the native code.
In your Flutter project's lib/main.dart file, create a method channel:
```dart
import 'package:flutter/material.dart';
import 'package:flutter/services.dart';

void main() => runApp(MyApp());

class MyApp extends StatelessWidget {
  static const platform = MethodChannel('com.example.native_integration_example/channel');

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      home: Scaffold(
        appBar: AppBar(title: Text('Native Code Integration')),
        body: Center(
          child: ElevatedButton(
            onPressed: _getNativeMessage,
            child: Text('Get Native Message'),
          ),
        ),
      ),
    );
  }

  Future<void> _getNativeMessage() async {
    String message;
    try {
      final String result = await platform.invokeMethod('getNativeMessage');
      message = 'Native Message: $result';
    } on PlatformException catch (e) {
      message = "Failed to get native message: '${e.message}'.";
    }
    print(message);
  }
}
```
**4. Implementing Native Code for Android**
Navigate to the android directory in your Flutter project. Open the MainActivity.java or MainActivity.kt file located at android/app/src/main/java/com/example/native_integration_example/.
Add the following code to handle the platform channel and return a native message:
**Java (MainActivity.java):**
```java
package com.example.native_integration_example;

import io.flutter.embedding.android.FlutterActivity;
import io.flutter.embedding.engine.FlutterEngine;
import io.flutter.plugin.common.MethodChannel;

public class MainActivity extends FlutterActivity {
    private static final String CHANNEL = "com.example.native_integration_example/channel";

    @Override
    public void configureFlutterEngine(FlutterEngine flutterEngine) {
        super.configureFlutterEngine(flutterEngine);
        new MethodChannel(flutterEngine.getDartExecutor().getBinaryMessenger(), CHANNEL)
            .setMethodCallHandler((call, result) -> {
                if (call.method.equals("getNativeMessage")) {
                    result.success("Hello from Android Native Code");
                } else {
                    result.notImplemented();
                }
            });
    }
}
```
**Kotlin (MainActivity.kt):**
```kotlin
package com.example.native_integration_example

import io.flutter.embedding.android.FlutterActivity
import io.flutter.embedding.engine.FlutterEngine
import io.flutter.plugin.common.MethodChannel

class MainActivity: FlutterActivity() {
    private val CHANNEL = "com.example.native_integration_example/channel"

    override fun configureFlutterEngine(flutterEngine: FlutterEngine) {
        super.configureFlutterEngine(flutterEngine)
        MethodChannel(flutterEngine.dartExecutor.binaryMessenger, CHANNEL).setMethodCallHandler { call, result ->
            if (call.method == "getNativeMessage") {
                result.success("Hello from Android Native Code")
            } else {
                result.notImplemented()
            }
        }
    }
}
```
**5. Implementing Native Code for iOS**
Navigate to the ios directory in your Flutter project. Open the AppDelegate.swift or AppDelegate.m file located at ios/Runner/.
Add the following code to handle the platform channel and return a native message:
**Swift (AppDelegate.swift):**
```swift
import UIKit
import Flutter

@UIApplicationMain
@objc class AppDelegate: FlutterAppDelegate {
  override func application(
    _ application: UIApplication,
    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?
  ) -> Bool {
    let controller = window?.rootViewController as! FlutterViewController
    let channel = FlutterMethodChannel(name: "com.example.native_integration_example/channel",
                                       binaryMessenger: controller.binaryMessenger)
    channel.setMethodCallHandler { (call, result) in
      if call.method == "getNativeMessage" {
        result("Hello from iOS Native Code")
      } else {
        result(FlutterMethodNotImplemented)
      }
    }
    return super.application(application, didFinishLaunchingWithOptions: launchOptions)
  }
}
```
**Objective-C (AppDelegate.m):**
```objectivec
#import "AppDelegate.h"
#import "GeneratedPluginRegistrant.h"

@implementation AppDelegate

- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
  [GeneratedPluginRegistrant registerWithRegistry:self];
  FlutterViewController* controller = (FlutterViewController*)self.window.rootViewController;
  FlutterMethodChannel* channel = [FlutterMethodChannel
      methodChannelWithName:@"com.example.native_integration_example/channel"
            binaryMessenger:controller.binaryMessenger];
  [channel setMethodCallHandler:^(FlutterMethodCall* call, FlutterResult result) {
    if ([@"getNativeMessage" isEqualToString:call.method]) {
      result(@"Hello from iOS Native Code");
    } else {
      result(FlutterMethodNotImplemented);
    }
  }];
  return [super application:application didFinishLaunchingWithOptions:launchOptions];
}

@end
```
**6. Running the Application**
After implementing the native code for both Android and iOS, you can run your Flutter application on a device or emulator:
```bash
flutter run
```
When you press the "Get Native Message" button, the app will communicate with the native code and display the message from the respective platform.
## Conclusion
Integrating native code with a Flutter application involves creating a platform channel for communication, implementing the native code in the appropriate platform directories, and handling method calls and results. This approach allows you to leverage platform-specific features and APIs while maintaining a single codebase for your Flutter application. By following the steps outlined above, you can seamlessly integrate native code for both Android and iOS, enhancing the functionality and performance of your Flutter apps. | chariesdevil | |
1,898,945 | From Intern to Developer: My Quest for a Full-Time Gig at 18 | Hey everyone, I'm facing a battle cry – internship or junior developer role? I've been coding for 5... | 0 | 2024-06-24T14:02:17 | https://dev.to/enzoenrico/from-intern-to-developer-my-quest-for-a-full-time-gig-at-18-p79 | beginners, productivity, career, learning | Hey everyone, I'm facing a battle cry – internship or junior developer role?
I've been coding for 5 years now (started young, right?), and I'm currently slaying the beast of a Software Engineering degree. My internship? It's been a valiant 1.5 years, but with the adult life just around the corner, I need to get a real dev job.
## Operation: Level Up!
So, here's my 3-part battle plan to dominate the applicant pool:
1. **Website Relaunch:** [My portfolio](https://enzoenrico.vercel.app) is about to transform from a humble squire to a shining knight!
2. **CV Revamp:** Time to polish that resume till it gleams like gold.
3. **Networking Blitz:** Courses, events – you name it, I'm attending to build my developer fellowship!
My ultimate quest? Landing those sweet developer interviews (and hopefully the job!).
And if you have any tips, recommendations, or just want to chat, let's connect! I'm eager to learn whatever I can from more experienced engineers!
Stay tuned, because this is just the first chapter in my journey. I'll be documenting the whole thing, so you can follow along as I transform from intern to full-fledged developer! | enzoenrico |
1,898,948 | The Future of Money:Insights from Coinbase's Latest Report | "Crypto is the future of money," Coinbase emphasizes in its latest report, "The State of Crypto". It... | 0 | 2024-06-24T13:59:48 | https://36crypto.com/the-future-of-money-insights-from-coinbase-latest-report/ | cryptocurrency, review, news | "Crypto is the future of money," Coinbase emphasizes in its latest [report](https://assets.ctfassets.net/o10es7wu5gm1/2n9KtrCyq59uQ4uOCF18hi/21fd908639c6c87e9d36a994ccacda51/Q2_2024_State_of_Crypto_final.pdf), "The State of Crypto". It notes that during the first quarter of 2024, Fortune 100 companies announced a record number of blockchain and Web3 initiatives. However, the biggest obstacle for them was the lack of reliable specialists and the necessary skills. In addition to this, the declining share of American crypto developers further aggravates the situation. Currently, only one out of four developers is from the United States, which is 14% less than in the last five years. But despite this, interest in blockchain technology remains high.
**Crypto Help to Update the Financial System**
Coinbase draws attention to a significant reduction in the number of crypto developers in the United States. Executives of Fortune 500 companies are concerned about the lack of reliable specialists, seeing this as a bigger obstacle to the introduction of cryptocurrencies than regulatory issues.
At the same time, small businesses are interested in finding cryptocurrency-savvy candidates for future positions in technical, financial, and legal departments. About 68% of respondents believe that blockchain and cryptocurrencies can solve financial problems such as processing time and transaction fees.
_"The market infrastructure on which we have been issuing, trading, and wrapping assets into portfolios is 50 years old… What we are starting to see with blockchain technologies is that there are ways to improve that tremendously. There are ways to cut processing times, get more real-time information, and enable 24/7/365 trading because we live in a global world where our businesses operate around the clock."_ [said](https://www.youtube.com/watch?v=eRxVREUdOYs&ab_channel=Chainlink) Sandy Kaull, Franklin Templeton's head of digital assets.
Volodymyr Nosov, CEO of WhiteBIT, shares similar views, noting: _"Despite the volatility, Bitcoin is gold for the new generation. Young investors won't invest in gold, they believe in the digital age […] Blockchain is the future that needs to be understood."_
Coinbase notes that recent years have been a period of experimentation with onchain, but technology and financial companies have found the best formula between product and market. In the first quarter, these two sectors accounted for 8 out of 10 onchain initiatives, which shows an upward trend compared to 2023, when they accounted for almost 6 out of 10.
In addition, interest in using onchain technology for customer transactions extends not only to financial companies, but also to the retail, healthcare, and consumer goods industries. These include:
- Exploring crypto as a form of payment for remote or global regions
- Implementing play-to-earn mechanics to enhance video game experience
- Letting healthcare patients and customers use digital wallets to pay for products and services
- Accepting healthcare donations in crypto
- Blockchain- and NFT-based restaurant loyalty programs
**Rising Interest in Using Stablecoins**
The report then analyzes how stablecoins are gradually beginning to play an increasingly important role in the global economy. In the first quarter of 2024, the daily volume of stablecoin transactions broke records and reached $150 billion.
Stablecoins mitigate the volatility of popular cryptocurrencies such as Bitcoin, making them more suitable for daily transactions. They are widely used for cross-border payments and trading in other cryptocurrencies.
More than 50% of the surveyed companies noted that the introduction of stablecoins could open up new business opportunities. The relative stability of the stablecoins makes them attractive for companies seeking to avoid the fluctuations typical of other cryptocurrencies. Another reason for the attractiveness of stablecoins is low transaction fees and faster processing times.
Pegah Soltani, Head of Payment Products at Ripple, similarly [spoke](https://www.pymnts.com/news/cross-border-commerce/cross-border-payments/2023/cross-border-payment-solutions-give-big-lift-small-business-growth-plans/) about the state of cross-border payments worldwide. She explained that payment standards vary greatly from country to country. For example, using SWIFT or TIPS in Europe and FedNow in the US requires different protocols, which limits the quality and detail of data.
As a result, these systems operate as closed networks that interact inefficiently with each other, requiring significant manual intervention and ultimately leading to an unsatisfactory payment experience.
According to Coinbase, the efficiency and cost-effectiveness of cryptocurrency transactions are compelling arguments in favor of their implementation. In addition, 76% of small businesses express interest in any potential benefits that cryptocurrency may offer, indicating a broad willingness to explore these technologies.
Compass Coffee, mentioned by Coinbase in its report, is already actively implementing payments in stablecoins. With many customers switching from cash to cards, the company said it was tired of paying high transaction fees, funds that could be reinvested in the business. That is why it started offering stablecoins as an alternative payment method.
_"Accepting crypto payments could be transformational for our business. We hope to help transform retail experiences by accepting USDC"_ [said](https://www.foxbusiness.com/economy/dc-coffee-chain-debuts-crypto-payments-coinbase-partnership) Michael Haft, Compass Coffee Founder and CEO
**Summary**
Coinbase's State of Crypto report emphasizes the importance of cryptocurrencies as the future of money. The first quarter of 2024 showed a significant increase in blockchain and Web3 initiatives among Fortune 100 companies, despite the lack of qualified specialists and the decline in the share of American crypto developers. However, despite these challenges, interest in blockchain technology remains high. | hryniv_vlad |
1,898,944 | Serverless Workloads on Kubernetes with Knative | Kubernetes has become a standard tool for container orchestration, providing a set of primitives to... | 0 | 2024-06-24T13:54:33 | https://dev.to/platform_engineers/serverless-workloads-on-kubernetes-with-knativ-4e9e | Kubernetes has become a standard tool for container orchestration, providing a set of primitives to run resilient, distributed applications. However, managing the underlying infrastructure can be time-consuming. The serverless paradigm helps users deploy applications without worrying about the infrastructure. Knative, a Kubernetes-based platform, provides components to deploy and manage serverless workloads, offering open-source Kubernetes integration, cloud-agnosticism, building blocks, and extensibility.
### Knative Components
Knative features two main components: Eventing and Serving. Eventing manages events that trigger serverless workloads. Serving is a set of components to deploy and manage serverless workloads. Knative Serving enables developers to deploy and manage serverless applications on top of Kubernetes, allowing for quick and easy deployment of new services, scaling, and connection to other services and event sources.
### Deploying Serverless Workloads with Knative
To deploy a serverless workload on Knative, you must create a `Service` resource. This can be achieved using either the Knative CLI (`kn`) or the `kubectl` command line tool for applying YAML files to your Kubernetes cluster.
### Using the Knative CLI
To use the Knative CLI, you need to install it first. You can download the latest version of the Knative CLI binary and set it up as follows:
```bash
wget https://github.com/knative/client/releases/download/knative-v1.8.1/kn-linux-amd64
mv kn-linux-amd64 kn
chmod +x kn
cp kn /usr/local/bin/
```
Verify the installation by running:
```bash
kn version
```
### Creating a Service Resource
Once you have the Knative CLI set up, you can create a `Service` resource using the following command:
```bash
kn service create <service-name> --image <image-name>
```
This command will create a new `Service` resource and automatically create a new `Revision` and `Route` for the service. The `Revision` is a point-in-time snapshot of your workload, and the `Route` assigns traffic to the `Revision`.
### Using kubectl
Alternatively, you can use `kubectl` to create a `Service` resource by applying a YAML file to your Kubernetes cluster. Here is an example YAML file:
```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
name: <service-name>
spec:
template:
spec:
containers:
- image: <image-name>
```
Apply the YAML file using the following command:
```bash
kubectl apply -f service.yaml
```
### Platform Engineering and Continuous Deployment
Knative can be integrated with continuous deployment tools like ArgoCD to automate the deployment of serverless workloads. ArgoCD follows the GitOps pattern, allowing developers to control application updates and infrastructure setup from a unified platform. This integration enables developers to focus on the functionality of the service, abstracting infrastructure issues like scaling and fault tolerance.
### Conclusion
Knative provides a robust platform for deploying and managing serverless workloads on Kubernetes. By using Knative, developers can focus on writing code without worrying about the underlying infrastructure. The platform's components, such as Eventing and Serving, enable efficient management of serverless applications. With the ability to integrate with continuous deployment tools, Knative simplifies the process of deploying and managing modern, cloud-native applications. | shahangita | |
1,898,941 | Agile Manifesto in examples | Why was our project not successful? We followed all the processes and used modern technologies and... | 0 | 2024-06-24T13:52:15 | https://dev.to/firstname_lastname/agile-manifesto-in-examples-o2 | **Why was our project not successful?**
We followed all the **processes** and used modern technologies and **tools**? We had **comprehensive documentation**. We did everything according to the **contract**. We followed the **plan**. All deadlines were met. What did we miss? What did we do wrong?
I won't answer these questions, I'd rather tell you five short stories.
Story One:
> I was hired for my first job as a tester after a 5-minute interview.
> The first question my future boss asked me was:
> - How do you feel about people who swear?
> - I find it terrible! - I answered.
> - Then we cannot work together. Almost all the men work for me, they constantly swear.
> - I disagree. If I work here, they will stop.
> - OK. When can you start?
>
> On my last day in this company I was told that he had never regretted his decision.
Story Two:
> Several years ago I worked as a teacher of the “Software testing” course.
> All the questions from my students I separated into two big groups.
>
> One group:
> - Should I learn any programming language to receive a job offer for the QA Engineer position?
> - Who gets more money: Manual or Automation QA Engineer?
> - Which rating should I have to achieve a “success” badge in my “Software Testing course” certificate?
> - Is it possible to become a software developer if I am working on the software tester position now? Do developers have a higher salary?
>
> And another group:
> - How to select some value in the dropdown in my UI auto test?
> - When is it required to add detailed steps for test cases?
> - Are there any tools that simplify generating test data?
>
> Statistically, those who asked mostly questions from the first group didn't pass their interviews, unlike those who asked questions from the second group, who are QA Engineers now.
Story Three:
> I used to use the following tools and programming languages in the daily work:
> - XSLT - a language for transforming XML to HTML
> - SVN - a version control system
> - MS Project - a project management software
> - Weebly - a website builder
> - IIS - a web server software created by Microsoft
> - WatiN, Test Complete - tools for UI auto testing
> - NUnit - a unit testing framework
>
> I don’t need them anymore.
Story Four:
> One of the most complex applications I’ve ever tested is Sitecore CMS.
>
> At the beginning the CMS was developed by five friends who had an idea. They worked close to each other, nobody thought about the requirements documentation.
>
> Additionally we had a strong rule: “Do not disturb developers”.
>
> So, no requirements, no ability to ask questions to developers, no access to the code. Sometimes when we (testers) didn’t understand something in the CMS work we used to decompile DLL files with the .NET Reflector and read the code to understand implementation of the feature. If it made sense we did our testing. If not we asked to re-work the feature.
>
> Of course, this story sounds crazy but we improved our way of work.
>
> That time we didn’t have proper processes and comprehensive documentation but we created a product that is popular nowadays (even after 20 years).
Story Five:
> We were developing an application that allowed customers to book hotel rooms and tickets for different events.
>
> It was New Year eve, when we started another sprint, and agreed on a set of features with the client. But then he changed his mind, and requested to add the possibility for discounts and gifts (according to the rules that he proposed).
>
> Thanks to this idea, which we of course implemented, the client received additional profit, and we received bonuses for our work.
I won’t tell you a story when several strong specialists lost a client because they didn’t have proper collaboration with each other and didn’t listen to client’s feedback.
Or a story when a big set of regression tests, gaps in the automation and lack of time might lead to death (comprehensive documentation didn’t save us from this mistake).
I would rather say "thank you" to:
- wise managers who hire those with potential
- interviewers who don’t ask me very detailed questions about the tools I used (because tools become deprecated but experience is always in trend)
- colleagues who love their work and inspire me
- clients who provide feedback (because their feedback is more important than contracts and plans)
Agile Manifesto saying:
- **Individuals and interactions** over processes and tools
- **Working software** over comprehensive documentation
- **Customer collaboration** over contract negotiation
- **Responding to change** over following a plan
**Fully agree ☺**
Our products are like children. They are more successful when they are simply loved. Sometimes they can be successful even in spite of what we are doing for their own good. And they will never be happy if they were used to please our pride. | firstname_lastname | |
1,898,942 | Automated Deployment of a Next.js Application on Ubuntu server with Git Hooks | Deploying a Next.js application can be streamlined by automating the process with Git hooks and... | 0 | 2024-06-24T13:51:58 | https://dev.to/sabermekki/automated-deployment-of-a-nextjs-application-on-ubuntu-with-git-hooks-30fk | Deploying a Next.js application can be streamlined by automating the process with Git hooks and managing the application lifecycle with PM2. In this article, I'll walk you through setting up your server, transferring your application.
---
**Step 1: Set Up the Server**
Install Node.js, npm, PM2, and Git:

```
ssh your-username@serverIp
sudo apt update
sudo apt install -y nodejs npm
sudo npm install -g pm2
sudo apt install -y git
```
---
**Step 2: Transfer Your Next.js Application to the Server**
**On your local machine:**
> Add the Remote Repository
`git remote add deploy ssh://your-username@serverIp/your-username/home/apps/my-nextjs-app.git`
**On the server:**
> Create the Bare Repository
```
mkdir -p /your-username/home/apps/my-nextjs-app.git
cd /your-username/home/apps/my-nextjs-app.git
git init --bare
```
> Set Up the Post-Receive Hook
```
cd /your-username/home/apps/my-nextjs-app.git/hooks
nano post-receive
```
> Add the following script to the post-receive hook:
```
#!/bin/bash
# Paths must match the bare repository and app directory created above
REPO_PATH=/your-username/home/apps/my-nextjs-app.git
APP_PATH=/your-username/home/apps/my-nextjs-app
export NODE_ENV=production

# Check out the pushed code into the working directory
mkdir -p $APP_PATH
GIT_WORK_TREE=$APP_PATH git --git-dir=$REPO_PATH checkout -f main

# Install dependencies, build, and (re)start the app under PM2
cd $APP_PATH
npm install
npm run build
pm2 restart nextjs-app || pm2 start npm --name "nextjs-app" -- start
```
> Make the hook executable:
```
chmod +x post-receive
```
---
**Step 3: Deploy Your Application**
> On your local machine, push your code:
```
git push deploy main
```
---
**Conclusion**
In conclusion, a bare Git repository, a post-receive hook, and PM2 give you a simple, repeatable way to deploy a Next.js application to an Ubuntu server with a single `git push`. From here you can layer on CI/CD pipelines, containerization, or managed hosting as your scalability, cost, and complexity requirements grow. | sabermekki