1,877,424 | Error of servlet | A post by deenanath pandey | 0 | 2024-06-05T02:17:42 | https://dev.to/deenanath1991/error-of-servlet-1jfd | help | | deenanath1991 |
1,877,423 | What is cloud computing? | In simple words, cloud computing is the delivery of IT resources and environments that enable on-demand... | 0 | 2024-06-05T02:08:18 | https://dev.to/leonardosantosbr/what-is-cloud-computing--13f1 | learning, cloudcomputing, aws, azure | In simple words, cloud computing is the delivery of IT resources and environments that enable on-demand services such as computing, storage, and networking, which can be accessed over the Internet through a cloud provider.
**Advantages of Cloud Computing**
- _Agility_
The ability to deploy technology services quickly and to expand operations into new geographic regions.
- _Elasticity_
You provision only the resources you actually need, and can scale that amount up or down as demand changes.
- _Cost savings_
You pay only for the IT resources you actually consume.
**Types of Cloud Computing**
1. _IaaS (Infrastructure as a Service)_
Instead of physically purchasing and managing servers, companies rent these resources from a cloud provider: virtual servers, operating systems, storage.
2. _PaaS (Platform as a Service)_
Delivers development environments in which developers can create and manage applications. PaaS lets companies focus on software development while the cloud provider manages the infrastructure. Examples: Azure, AWS, Heroku.
3. _SaaS (Software as a Service)_
A software distribution model in which applications are hosted by a service provider and made available over the Internet, accessible via the web. Examples: Netflix, Google Drive, Dropbox.


| leonardosantosbr |
1,877,422 | How to Build AI-Driven Retrieval by Integrating Langchain and Elasticsearch | Discover how the synergistic power of Langchain and Alibaba Cloud Elasticsearch can revolutionize the... | 0 | 2024-06-05T02:06:52 | https://dev.to/a_lucas/how-to-build-ai-driven-retrieval-by-integrating-langchain-and-elasticsearch-35j0 | tutorial, ai, productivity, learning | Discover how the synergistic power of Langchain and Alibaba Cloud Elasticsearch can revolutionize the way you search and analyze data. This article provides an expert insight into blending these technologies for smarter, AI-driven data retrieval.
## The Power of Langchain
Langchain is a library designed to streamline natural language processing tasks, making it easier for developers to integrate and utilize AI models such as GPT-3 for complex language tasks. It plays a pivotal role in enhancing the efficiency of data retrieval by simplifying the interaction with large language models.
## The Versatility of Alibaba Cloud Elasticsearch
Elasticsearch is a highly scalable, open-source, full-text search and analytics engine that lets users search and analyze vast amounts of data quickly and in near real time. It's a potent tool for building sophisticated search experiences ([Learn more about Alibaba Cloud Elasticsearch](https://www.alibabacloud.com/en/product/elasticsearch)).
## Harnessing the Combined Force of Langchain and Elasticsearch
By fusing Langchain with Elasticsearch, we combine the potent language-processing capabilities of AI models with Elasticsearch's robust search functionality to create intelligent search solutions. Here are some examples of how to set up and query your Elasticsearch index using Langchain.
### Example 1: Connecting Langchain to Elasticsearch
Here's an example showcasing the integration of Langchain with Elasticsearch for data retrieval:
```
import ssl
import openai
from elasticsearch import Elasticsearch
from langchain_community.vectorstores import ElasticsearchStore
from langchain_openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain_community.document_loaders import TextLoader
# Load the document
file_path = 'your_document.txt'
encoding = 'utf-8'
loader = TextLoader(file_path, encoding=encoding)
documents = loader.load()
# Split the document into manageable chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# Connect to Elasticsearch
conn = Elasticsearch(
    "https://<your-alibaba-cloud-elasticsearch-instance>:9200",
    http_auth=('username', 'password'),
    verify_certs=True
)
# Index documents and search
embeddings = OpenAIEmbeddings()
db = ElasticsearchStore.from_documents(docs, embeddings, index_name="your_index", es_connection=conn)
db.client.indices.refresh(index="your_index")
query = "What are the major trends in tech for 2024?"
results = db.similarity_search(query)
print(results)
```
### Example 2: Enhanced Search with Metadata
We can refine our search further by incorporating richer metadata into our documents and using it for more sophisticated searches:
```
# Add metadata to documents
for i, doc in enumerate(docs):
    doc.metadata["date"] = f"{range(2010, 2020)[i % 10]}-01-01"
    doc.metadata["rating"] = range(1, 6)[i % 5]
    doc.metadata["author"] = ["John Doe", "Jane Doe"][i % 2]
# Create a new index with metadata
db = ElasticsearchStore.from_documents(docs, embeddings, index_name="your_metadata_index", es_connection=conn)
# Perform search with metadata filtering
query = "What are the key milestones in technology?"
docs = db.similarity_search(query, filter=[{"term": {"metadata.author.keyword": "Jane Doe"}}])
print(docs[0].metadata)
```
## Troubleshooting and Challenges
When implementing any new technology, there is invariably a learning curve. Make sure your setup is correctly configured and verify that each step works as expected. Alibaba Cloud provides ample documentation and support to ease this process.
## In Conclusion
The convergence of Langchain and Alibaba Cloud Elasticsearch paves a new path for complex information retrieval tasks, providing us with tailored solutions that mold into the evolving landscape of data search and natural language processing. This integration not only handles large datasets but does so intelligently and efficiently.
As we step into the era of AI-enhanced data retrieval, leveraging these advanced tools is no longer optional. Alibaba Cloud Elasticsearch, with its rich feature set and easy-to-use platform, is leading the transformation.
Ready to start your journey with Elasticsearch on Alibaba Cloud? Explore our tailored cloud solutions and services to take the first step toward transforming your data. [Click here to embark on your 30-Day Free Trial](https://c.tb.cn/F3.bTfFpS) | a_lucas |
1,877,421 | Shanghai Rumi Electromechanical Technology's Impact on the Industry | Shanghai Rumi Electromechanical... | 0 | 2024-06-05T02:04:44 | https://dev.to/sylvia_joyceke_6f128206ac/shanghai-rumi-electromechanical-technologys-impact-on-the-industry-oal |
Shanghai Rumi Electromechanical Technology's Impact on the Industry
Shanghai Rumi Electromechanical Technology is a company that keeps creating products used to do things such as build cars and construct homes. They are good at what they do and have made a large impact on their industry. Here are a few reasons why Shanghai Rumi Electromechanical Technology's impact on the industry is worth paying attention to.
Value:
Shanghai Rumi Electromechanical Technology's equipment has a level of quality that makes it stand out. Their machines are fast and accurate: they deliver precise results quickly, which is important because it saves time and money. Their machines are also dependable, so they can run for long periods without breaking down and can be used without worrying about downtime.
Innovation:
Part of the company's influence on the industry comes from finding new and better methods to improve its products. They use the latest technology to make sure their products are the best they can be. For example, they have recently started using artificial intelligence (AI) in their equipment, which means the machines can learn and improve as they operate. This suggests the equipment will keep getting better over the long run.
Protection:
Shanghai Rumi Electromechanical Technology takes safety seriously. They make sure their products are safe to use and that nobody gets hurt while operating them. Their equipment includes many built-in safety features, such as emergency stop buttons and sensors that detect when something is in the way. This is crucial because it means operators can use the machines without worrying about injury.
Use:
Shanghai Rumi Electromechanical Technology's products are used by a large number of businesses. For example, they make equipment that helps build cars, construct homes, and move goods. This matters because it means their equipment can be applied in many different ways, making it versatile and valuable.
How to integrate:
Shanghai Rumi Electromechanical Technology's products, such as their High Viscosity Mixers, are genuinely user-friendly. They come with easy-to-follow instructions and are designed to be intuitive, so even someone who has never operated such a machine can learn to use one quickly.
Service:
Shanghai Rumi Electromechanical Technology provides service close to its customers. If you have a problem with one of their machines, they will help you repair it. They also provide training so you can learn how to use their products. This is important because it means you can use their products with full confidence, knowing that support is available when you need it.
Quality:
Shanghai Rumi Electromechanical Technology's products are of high quality. They are made from the very best materials and built to last a long time, so you can use them for years without worrying about wear.
Application:
Shanghai Rumi Electromechanical Technology's Double Planetary Mixers can be used in a large number of ways. They are found in the automotive industry for building vehicles, in the construction industry for building homes, and in formulation industries for making products. This is important because it means their products are versatile and can be employed across many different sectors.
In summary, Shanghai Rumi Electromechanical Technology is a company that builds good equipment. Their products are fast, accurate, and dependable; they are safe to use and backed by good service. If you need equipment to help you make products, you should consider Shanghai Rumi Electromechanical Technology. They are among the best in the market and are present in many different industries.
Source: https://www.rumiasia.com/Highviscositymixers | sylvia_joyceke_6f128206ac | |
1,877,396 | The Backbone of Collaboration: Understanding Merge Request Reviews | Introduction Merge request reviews form an integral part of successful collaborative... | 0 | 2024-06-05T02:00:02 | https://dev.to/iswanjumat/the-backbone-of-collaboration-understanding-merge-request-reviews-3c8g | ### Introduction
Merge request reviews form an integral part of successful collaborative projects, especially in the field of software development. They ensure high-quality code, facilitate knowledge sharing, and foster a more collaborative team environment. In this blog post, we will delve into the importance of merge request reviews, how to conduct them effectively, and some best practices to follow.
### Understanding the Importance of Merge Request Reviews
Merge request reviews are essential for maintaining the quality of code in a collaborative software development project. They provide a platform for developers to scrutinize proposed changes and spot any potential issues before they become part of the main code base. This process helps reduce bugs, enhances code readability, and improves the overall performance of the software.
> To ensure a smooth merge process, writing a clear and informative description for your pull request (PR) is crucial. This description helps reviewers understand the context and purpose of your changes. Here's a sample PR description for your reference: \[[link to sample](https://gist.github.com/iswanj/1a84bd1ffee825d59bd15ea5d2d42dc6)\]
### How to Conduct Effective Merge Request Reviews
Effective merge request reviews require a clear understanding of the project's goals and the specific changes proposed. Here are a few steps to follow:
1. Understand the context: Before starting the review, understand the purpose of the changes and how they fit into the overall project.
2. Check the code: Review the proposed changes thoroughly. Look for code smells, potential bugs, or performance issues.
3. Provide constructive feedback: Instead of just pointing out what's wrong, provide suggestions on how to improve the code. Remember, the goal is not just to find faults, but to help each other grow as developers.
### Best Practices for Merge Request Reviews
Adopting best practices for merge request reviews can streamline the process and make it more effective. Here are a few to consider:
1. Keep it small: Smaller merge requests are easier to review and less likely to introduce bugs.
2. Review regularly: Regular reviews can help catch issues early and reduce the time taken to integrate changes.
3. Foster a positive culture: Encourage open communication and respect for each other's opinions. This can help create a more collaborative and productive team environment.
### Conclusion
Merge request reviews are a critical tool in any collaborative development project. They help maintain code quality, share knowledge among the team, and promote a collaborative culture. By understanding their importance, learning how to conduct them effectively, and adopting best practices, teams can significantly enhance their productivity and the quality of their output.
| iswanjumat | |
1,877,395 | JavaScript Language Quick Start | Background This section gives a little background on JavaScript to help you understand why... | 0 | 2024-06-05T01:51:51 | https://dev.to/fmzquant/javascript-language-quick-start-obl | javascript, cryptocurrency, trading, fmzquant | ## Background
This section gives a little background on JavaScript to help you understand why it is the way it is.
## JavaScript Versus ECMAScript
ECMAScript is the official name for JavaScript. A new name became necessary because there is a trademark on JavaScript (held originally by Sun, now by Oracle). At the moment, Mozilla is one of the few companies allowed to officially use the name JavaScript because it received a license long ago. For common usage, these rules apply:
- JavaScript means the programming language.
- ECMAScript is the name used by the language specification. Therefore, whenever referring to versions of the language, people say ECMAScript. The current version of JavaScript is ECMAScript 5; ECMAScript 6 is currently being developed.
## Influences and Nature of the Language
JavaScript’s creator, Brendan Eich, had no choice but to create the language very quickly (or other, worse technologies would have been adopted by Netscape). He borrowed from several programming languages: Java (syntax, primitive values versus objects), Scheme and AWK (first-class functions), Self (prototypal inheritance), and Perl and Python (strings, arrays, and regular expressions).
JavaScript did not have exception handling until ECMAScript 3, which explains why the language so often automatically converts values and so often fails silently: it initially couldn’t throw exceptions.
On one hand, JavaScript has quirks and is missing quite a bit of functionality (block-scoped variables, modules, support for subclassing, etc.). On the other hand, it has several powerful features that allow you to work around these problems. In other languages, you learn language features. In JavaScript, you often learn patterns instead.
Given its influences, it is no surprise that JavaScript enables a programming style that is a mixture of functional programming (higher-order functions; built-in map, reduce, etc.) and object-oriented programming (objects, inheritance).
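A small illustration of that mixture, using the built-in higher-order array methods (the values here are arbitrary):

```
// Functional style: higher-order functions on arrays
var numbers = [1, 2, 3, 4];
var doubled = numbers.map(function (x) { return x * 2; });
var sum = doubled.reduce(function (acc, x) { return acc + x; }, 0);
console.log(doubled); // [2, 4, 6, 8]
console.log(sum);     // 20
```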
## Syntax
This section explains basic syntactic principles of JavaScript.
## An Overview of the Syntax
A few examples of syntax:
```
// Two slashes start single-line comments
var x; // declaring a variable
x = 3 + y; // assigning a value to the variable `x`
foo(x, y); // calling function `foo` with parameters `x` and `y`
obj.bar(3); // calling method `bar` of object `obj`

// A conditional statement
if (x === 0) { // Is `x` equal to zero?
    x = 123;
}

// Defining function `baz` with parameters `a` and `b`
function baz(a, b) {
    return a + b;
}
```
Note the two different uses of the equals sign:
- A single equals sign (=) is used to assign a value to a variable.
- A triple equals sign (===) is used to compare two values (see Equality Operators).
## Statements Versus Expressions
To understand JavaScript’s syntax, you should know that it has two major syntactic categories: statements and expressions:
- Statements “do things.” A program is a sequence of statements. Here is an example of a statement, which declares (creates) a variable foo:
```
var foo;
```
- Expressions produce values. They are function arguments, the right side of an assignment, etc. Here’s an example of an expression:
```
3 * 7
```
The distinction between statements and expressions is best illustrated by the fact that JavaScript has two different ways to do if-then-else—either as a statement:
```
var x;
if (y >= 0) {
    x = y;
} else {
    x = -y;
}
```
or as an expression:
```
var x = y >= 0 ? y : -y;
```
You can use the latter as a function argument (but not the former):
```
myFunction(y >= 0 ? y : -y)
```
Finally, wherever JavaScript expects a statement, you can also use an expression; for example:
```
foo(7, 1);
```
The whole line is a statement (a so-called expression statement), but the function call foo(7, 1) is an expression.
## Semicolons
Semicolons are optional in JavaScript. However, I recommend always including them, because otherwise JavaScript can guess wrong about the end of a statement. The details are explained in Automatic Semicolon Insertion.
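A classic example of automatic semicolon insertion guessing wrong (a hypothetical function; the object literal is never reached):

```
function getObject() {
    return // a semicolon is inserted here automatically
    {
        name: 'Jane'
    };
}
console.log(getObject()); // undefined, not the object
```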
Semicolons terminate statements, but not blocks. There is one case where you will see a semicolon after a block: a function expression is an expression that ends with a block. If such an expression comes last in a statement, it is followed by a semicolon:
```
// Pattern: var _ = ___;
var x = 3 * 7;
var f = function () { }; // function expr. inside var decl.
```
## Comments
JavaScript has two kinds of comments: single-line comments and multiline comments. Single-line comments start with // and are terminated by the end of the line:
```
x++; // single-line comment
```
Multiline comments are delimited by /* and */:
```
/* This is
a multiline
comment.
*/
```
## Variables and Assignment
Variables in JavaScript are declared before they are used:
```
var foo; // declare variable `foo`
```
## Assignment
You can declare a variable and assign a value at the same time:
```
var foo = 6;
```
You can also assign a value to an existing variable:
```
foo = 4; // change variable `foo`
```
## Compound Assignment Operators
There are compound assignment operators such as +=. The following two assignments are equivalent:
```
x += 1;
x = x + 1;
```
## Identifiers and Variable Names
Identifiers are names that play various syntactic roles in JavaScript. For example, the name of a variable is an identifier. Identifiers are case sensitive.
Roughly, the first character of an identifier can be any Unicode letter, a dollar sign ($), or an underscore (_). Subsequent characters can additionally be any Unicode digit. Thus, the following are all legal identifiers:
```
arg0
_tmp
$elem
π
```
The following identifiers are reserved words; they are part of the syntax and can't be used as variable names (including function names and parameter names): `break`, `case`, `catch`, `continue`, `debugger`, `default`, `delete`, `do`, `else`, `finally`, `for`, `function`, `if`, `in`, `instanceof`, `new`, `return`, `switch`, `this`, `throw`, `try`, `typeof`, `var`, `void`, `while`, and `with`. Also reserved for future use are `class`, `const`, `enum`, `export`, `extends`, `import`, and `super` (plus, in strict mode, `implements`, `interface`, `let`, `package`, `private`, `protected`, `public`, `static`, and `yield`).
The following three identifiers are not reserved words, but you should treat them as if they were: `Infinity`, `NaN`, and `undefined`.
Lastly, you should also stay away from the names of standard global variables. You can use them for local variables without breaking anything, but your code still becomes confusing.
## Values
JavaScript has many values that we have come to expect from programming languages: booleans, numbers, strings, arrays, and so on. All values in JavaScript have properties. Each property has a key (or name) and a value. You can think of properties like fields of a record. You use the dot (.) operator to read a property:
```
value.propKey
```
For example, the string 'abc' has the property length:
```
> var str = 'abc';
> str.length
3
```
The preceding can also be written as:
```
> 'abc'.length
3
```
The dot operator is also used to assign a value to a property:
```
> var obj = {}; // empty object
> obj.foo = 123; // create property `foo`, set it to 123
123
> obj.foo
123
```
And you can use it to invoke methods:
```
> 'hello'.toUpperCase()
'HELLO'
```
In the preceding example, we have invoked the method toUpperCase() on the value 'hello'.
## Primitive Values Versus Objects
JavaScript makes a somewhat arbitrary distinction between values:
- The primitive values are booleans, numbers, strings, null, and undefined.
- All other values are objects.
A major difference between the two is how they are compared; each object has a unique identity and is only (strictly) equal to itself:
```
> var obj1 = {}; // an empty object
> var obj2 = {}; // another empty object
> obj1 === obj2
false
> obj1 === obj1
true
```
In contrast, all primitive values encoding the same value are considered the same:
```
> var prim1 = 123;
> var prim2 = 123;
> prim1 === prim2
true
```
The next two sections explain primitive values and objects in more detail.
## Primitive Values
The following are all of the primitive values (or primitives for short):
- Booleans: true, false (see Booleans)
- Numbers: 1736, 1.351 (see Numbers)
- Strings: 'abc', "abc" (see Strings)
- Two “nonvalues”: undefined, null (see undefined and null)
Primitives have the following characteristics:
### Compared by value
The “content” is compared:
```
> 3 === 3
true
> 'abc' === 'abc'
true
```
### Always immutable
Properties can’t be changed, added, or removed:
```
> var str = 'abc';
> str.length = 1; // try to change property `length`
> str.length // ⇒ no effect
3
> str.foo = 3; // try to create property `foo`
> str.foo // ⇒ no effect, unknown property
undefined
```
(Reading an unknown property always returns undefined.)
## Objects
All nonprimitive values are objects. The most common kinds of objects are:
- Plain objects, which can be created by object literals (see Single Objects):
```
{
    firstName: 'Jane',
    lastName: 'Doe'
}
```
The preceding object has two properties: the value of property firstName is 'Jane' and the value of property lastName is 'Doe'.
- Arrays, which can be created by array literals (see Arrays):
```
[ 'apple', 'banana', 'cherry' ]
```
The preceding array has three elements that can be accessed via numeric indices. For example, the index of 'apple' is 0.
- Regular expressions, which can be created by regular expression literals (see Regular Expressions):
```
/^a+b+$/
```
Objects have the following characteristics:
### Compared by reference
Identities are compared; every value has its own identity:
```
> ({} === {}) // two different empty objects
false
> var obj1 = {};
> var obj2 = obj1;
> obj1 === obj2
true
```
### Mutable by default
You can normally freely change, add, and remove properties (see Single Objects):
```
> var obj = {};
> obj.foo = 123; // add property `foo`
> obj.foo
123
```
## Undefined and null
Most programming languages have values denoting missing information. JavaScript has two such “nonvalues,” undefined and null:
undefined means “no value.” Uninitialized variables are undefined:
```
> var foo;
> foo
undefined
```
Missing parameters are undefined:
```
> function f(x) { return x }
> f()
undefined
```
If you read a nonexistent property, you get undefined:
```
> var obj = {}; // empty object
> obj.foo
undefined
```
- null means “no object.” It is used as a nonvalue whenever an object is expected (parameters, last in a chain of objects, etc.).
## WARNING
undefined and null have no properties, not even standard methods such as toString().
## Checking for undefined or null
Functions normally allow you to indicate a missing value via either undefined or null. You can do the same via an explicit check:
```
if (x === undefined || x === null) {
...
}
```
You can also exploit the fact that both undefined and null are considered false:
```
if (!x) {
...
}
```
## WARNING
false, 0, NaN, and '' are also considered false (see Truthy and Falsy).
## Categorizing Values Using typeof and instanceof
There are two operators for categorizing values: typeof is mainly used for primitive values, while instanceof is used for objects.
typeof looks like this:
```
typeof value
```
It returns a string describing the “type” of value. Here are some examples:
```
> typeof true
'boolean'
> typeof 'abc'
'string'
> typeof {} // empty object literal
'object'
> typeof [] // empty array literal
'object'
```
typeof produces the following results:

- `undefined`: `'undefined'`
- `null`: `'object'` (see the note below)
- boolean values: `'boolean'`
- numbers: `'number'`
- strings: `'string'`
- functions: `'function'`
- all other normal values: `'object'`
typeof null returning 'object' is a bug that can’t be fixed, because it would break existing code. It does not mean that null is an object.
instanceof looks like this:
```
value instanceof Constr
```
It returns true if value is an object that has been created by the constructor Constr (see Constructors: Factories for Objects). Here are some examples:
```
> var b = new Bar(); // object created by constructor Bar
> b instanceof Bar
true
> {} instanceof Object
true
> [] instanceof Array
true
> [] instanceof Object // Array is a subconstructor of Object
true
> undefined instanceof Object
false
> null instanceof Object
false
```
## Booleans
The primitive boolean type comprises the values true and false. The following operators produce booleans:
- Binary logical operators: && (And), || (Or)
- Prefix logical operator: ! (Not)
- Comparison operators:
  - Equality operators: ===, !==, ==, !=
  - Ordering operators (for strings and numbers): >, >=, <, <=
## Truthy and Falsy
Whenever JavaScript expects a boolean value (e.g., for the condition of an if statement), any value can be used. It will be interpreted as either true or false. The following values are interpreted as false:
- undefined, null
- Boolean: false
- Number: 0, NaN
- String: ''
All other values (including all objects!) are considered true. Values interpreted as false are called falsy, and values interpreted as true are called truthy. Boolean(), called as a function, converts its parameter to a boolean. You can use it to test how a value is interpreted:
```
> Boolean(undefined)
false
> Boolean(0)
false
> Boolean(3)
true
> Boolean({}) // empty object
true
> Boolean([]) // empty array
true
```
## Binary Logical Operators
Binary logical operators in JavaScript are short-circuiting. That is, if the first operand suffices for determining the result, the second operand is not evaluated. For example, in the following expressions, the function foo() is never called:
```
false && foo()
true || foo()
```
Furthermore, binary logical operators return either one of their operands—which may or may not be a boolean. A check for truthiness is used to determine which one:
### And (&&)
If the first operand is falsy, return it. Otherwise, return the second operand:
```
> NaN && 'abc'
NaN
> 123 && 'abc'
'abc'
```
### Or (||)
If the first operand is truthy, return it. Otherwise, return the second operand:
```
> 'abc' || 123
'abc'
> '' || 123
123
```
## Equality Operators
JavaScript has two kinds of equality:
- Normal, or “lenient,” (in)equality: == and !=
- Strict (in)equality: === and !==
Normal equality considers (too) many values to be equal (the details are explained in Normal (Lenient) Equality (==, !=)), which can hide bugs. Therefore, always using strict equality is recommended.
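A few cases that show why lenient equality can hide bugs:

```
console.log('1' == 1);  // true: lenient equality converts the string to a number
console.log('1' === 1); // false: strict equality never converts types
console.log(0 == '');   // true: both operands are converted to the number 0
console.log(0 === '');  // false
```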
## Numbers
All numbers in JavaScript are floating-point:
```
> 1 === 1.0
true
```
Special numbers include the following:
NaN (“not a number”)
An error value:
```
> Number('xyz') // 'xyz' can’t be converted to a number
NaN
```
Infinity
Also mostly an error value:
```
> 3 / 0
Infinity
> Math.pow(2, 1024) // number too large
Infinity
```
Infinity is larger than any other number (except NaN). Similarly, -Infinity is smaller than any other number (except NaN). That makes these numbers useful as default values (e.g., when you are looking for a minimum or a maximum).
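For example, Infinity is a safe starting value when searching for a minimum (a small sketch; the helper name is made up):

```
function findMinimum(numbers) {
    var min = Infinity; // every number is smaller than this start value
    for (var i = 0; i < numbers.length; i++) {
        if (numbers[i] < min) {
            min = numbers[i];
        }
    }
    return min;
}
console.log(findMinimum([5, -1, 7])); // -1
console.log(findMinimum([]));         // Infinity (no elements)
```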
## Operators
JavaScript has the following arithmetic operators (see Arithmetic Operators):
- Addition: number1 + number2
- Subtraction: number1 - number2
- Multiplication: number1 * number2
- Division: number1 / number2
- Remainder: number1 % number2
- Increment: ++variable, variable++
- Decrement: --variable, variable--
- Negate: -value
- Convert to number: +value
The global object Math (see Math) provides more arithmetic operations, via functions.
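A few of the operations that Math provides:

```
console.log(Math.abs(-5));    // 5
console.log(Math.max(2, 9));  // 9
console.log(Math.round(4.6)); // 5
console.log(Math.sqrt(16));   // 4
```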
JavaScript also has operators for bitwise operations (e.g., bitwise And; see Bitwise Operators).
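For instance (the comments show the operands as 4-bit patterns):

```
console.log(9 & 5);  // 1  (1001 & 0101 = 0001)
console.log(9 | 5);  // 13 (1001 | 0101 = 1101)
console.log(9 ^ 5);  // 12 (1001 ^ 0101 = 1100)
console.log(1 << 3); // 8  (shift left by three bits)
```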
## Strings
Strings can be created directly via string literals. Those literals are delimited by single or double quotes. The backslash (\\) escapes characters and produces a few control characters. Here are some examples:
```
'abc'
"abc"
'Did she say "Hello"?'
"Did she say \"Hello\"?"
'That\'s nice!'
"That's nice!"
'Line 1\nLine 2' // newline
'Backslash: \\'
```
Single characters are accessed via square brackets:
```
> var str = 'abc';
> str[1]
'b'
```
The property length counts the number of characters in the string:
```
> 'abc'.length
3
```
Like all primitives, strings are immutable; you need to create a new string if you want to change an existing one.
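String methods therefore return new strings instead of modifying the original:

```
var str = 'abc';
var upper = str.toUpperCase();
console.log(upper); // 'ABC'
console.log(str);   // 'abc' (the original is unchanged)
```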
## String Operators
Strings are concatenated via the plus (+) operator, which converts the other operand to a string if one of the operands is a string:
```
> var messageCount = 3;
> 'You have ' + messageCount + ' messages'
'You have 3 messages'
```
To concatenate strings in multiple steps, use the += operator:
```
> var str = '';
> str += 'Multiple ';
> str += 'pieces ';
> str += 'are concatenated.';
> str
'Multiple pieces are concatenated.'
```
## String Methods
Strings have many useful methods (see String Prototype Methods). Here are some examples:
```
> 'abc'.slice(1) // copy a substring
'bc'
> 'abc'.slice(1, 2)
'b'
> '\t xyz '.trim() // trim whitespace
'xyz'
> 'mjölnir'.toUpperCase()
'MJÖLNIR'
> 'abc'.indexOf('b') // find a string
1
> 'abc'.indexOf('x')
-1
```
## Statements
Conditionals and loops in JavaScript are introduced in the following sections.
## Conditionals
The if statement has a then clause and an optional else clause that are executed depending on a boolean condition:
```
if (myvar === 0) {
// then
}
if (myvar === 0) {
// then
} else {
// else
}
if (myvar === 0) {
// then
} else if (myvar === 1) {
// else-if
} else if (myvar === 2) {
// else-if
} else {
// else
}
```
I recommend always using braces (they denote blocks of zero or more statements). But you don’t have to do so if a clause is only a single statement (the same holds for the control flow statements for and while):
```
if (x < 0) return -x;
```
The following is a switch statement. The value of fruit decides which case is executed:
```
switch (fruit) {
case 'banana':
// ...
break;
case 'apple':
// ...
break;
default: // all other cases
// ...
}
```
The “operand” after case can be any expression; it is compared via === with the parameter of switch.
## Loops
The for loop has the following format:
```
for (⟦«init»⟧; ⟦«condition»⟧; ⟦«post_iteration»⟧)
«statement»
```
init is executed at the beginning of the loop. condition is checked before each loop iteration; if it becomes false, then the loop is terminated. post_iteration is executed after each loop iteration.
This example prints all elements of the array arr on the console:
```
for (var i=0; i < arr.length; i++) {
console.log(arr[i]);
}
```
The while loop continues looping over its body while its condition holds:
```
// Same as for loop above:
var i = 0;
while (i < arr.length) {
console.log(arr[i]);
i++;
}
```
The do-while loop continues looping over its body while its condition holds. As the condition follows the body, the body is always executed at least once:
```
do {
// ...
} while (condition);
```
In all loops:
- break leaves the loop.
- continue starts a new loop iteration.
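A small illustration (not from the original) showing both keywords in one loop:

```javascript
var arr = ['a', '', 'b', 'STOP', 'c'];
var collected = [];
for (var i = 0; i < arr.length; i++) {
    if (arr[i] === '') continue;   // skip empty strings, go to next iteration
    if (arr[i] === 'STOP') break;  // leave the loop entirely
    collected.push(arr[i]);
}
console.log(collected); // [ 'a', 'b' ]
```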
## Functions
One way of defining a function is via a function declaration:
```
function add(param1, param2) {
return param1 + param2;
}
```
The preceding code defines a function, add, that has two parameters, param1 and param2, and returns the sum of both parameters. This is how you call that function:
```
> add(6, 1)
7
> add('a', 'b')
'ab'
```
Another way of defining add() is by assigning a function expression to a variable add:
```
var add = function (param1, param2) {
return param1 + param2;
};
```
A function expression produces a value and can thus be used to directly pass functions as arguments to other functions:
```
someOtherFunction(function (p1, p2) { ... });
```
## Function Declarations Are Hoisted
Function declarations are hoisted—moved in their entirety to the beginning of the current scope. That allows you to refer to functions that are declared later:
```
function foo() {
bar(); // OK, bar is hoisted
function bar() {
...
}
}
```
Note that while var declarations are also hoisted (see Variables Are Hoisted), assignments performed by them are not:
```
function foo() {
bar(); // Not OK, bar is still undefined
var bar = function () {
// ...
};
}
```
## The Special Variable arguments
You can call any function in JavaScript with an arbitrary number of arguments; the language will never complain. It will, however, make all arguments available via the special variable arguments. arguments looks like an array, but has none of the array methods:
```
> function f() { return arguments }
> var args = f('a', 'b', 'c');
> args.length
3
> args[0] // read element at index 0
'a'
```
## Too Many or Too Few Arguments
Let’s use the following function to explore how too many or too few parameters are handled in JavaScript (the function toArray() is shown in Converting arguments to an Array):
```
function f(x, y) {
console.log(x, y);
return toArray(arguments);
}
```
Additional parameters will be ignored (except by arguments):
```
> f('a', 'b', 'c')
a b
[ 'a', 'b', 'c' ]
```
Missing parameters will get the value undefined:
```
> f('a')
a undefined
[ 'a' ]
> f()
undefined undefined
[]
```
## Optional Parameters
The following is a common pattern for assigning default values to parameters:
```
function pair(x, y) {
x = x || 0; // (1)
y = y || 0;
return [ x, y ];
}
```
In line (1), the || operator returns x if it is truthy (not null, undefined, etc.). Otherwise, it returns the second operand:
```
> pair()
[ 0, 0 ]
> pair(3)
[ 3, 0 ]
> pair(3, 5)
[ 3, 5 ]
```
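Note that || treats every falsy value (0, '', NaN, etc.) as missing. If such values should be legal arguments, a more precise pattern, sketched here with a hypothetical greet() function, is to check for undefined explicitly:

```javascript
function greet(name) {
    // name = name || 'world' would also replace '', so check for undefined instead:
    if (typeof name === 'undefined') {
        name = 'world';
    }
    return 'Hello, ' + name + '!';
}

console.log(greet());   // 'Hello, world!'
console.log(greet('')); // 'Hello, !' (the empty string is kept, unlike with ||)
```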
## Enforcing an Arity
If you want to enforce an arity (a specific number of parameters), you can check arguments.length:
```
function pair(x, y) {
if (arguments.length !== 2) {
throw new Error('Need exactly 2 arguments');
}
...
}
```
## Converting arguments to an Array
arguments is not an array, it is only array-like (see Array-Like Objects and Generic Methods). It has a property length, and you can access its elements via indices in square brackets. You cannot, however, remove elements or invoke any of the array methods on it. Thus, you sometimes need to convert arguments to an array, which is what the following function does (it is explained in Array-Like Objects and Generic Methods):
```
function toArray(arrayLikeObject) {
return Array.prototype.slice.call(arrayLikeObject);
}
```
## Exception Handling
The most common way to handle exceptions (see Chapter 14) is as follows:
```
function getPerson(id) {
if (id < 0) {
throw new Error('ID must not be negative: '+id);
}
return { id: id }; // normally: retrieved from database
}
function getPersons(ids) {
var result = [];
ids.forEach(function (id) {
try {
var person = getPerson(id);
result.push(person);
} catch (exception) {
console.log(exception);
}
});
return result;
}
```
The try clause surrounds critical code, and the catch clause is executed if an exception is thrown inside the try clause. Using the preceding code:
```
> getPersons([2, -5, 137])
[Error: ID must not be negative: -5]
[ { id: 2 }, { id: 137 } ]
```
## Strict Mode
Strict mode (see Strict Mode) enables more warnings and makes JavaScript a cleaner language (nonstrict mode is sometimes called “sloppy mode”). To switch it on, type the following line first in a JavaScript file or a <script> tag:
```
'use strict';
```
You can also enable strict mode per function:
```
function functionInStrictMode() {
'use strict';
}
```
## Variable Scoping and Closures
In JavaScript, you declare variables via var before using them:
```
> var x;
> x
undefined
> y
ReferenceError: y is not defined
```
You can declare and initialize several variables with a single var statement:
```
var x = 1, y = 2, z = 3;
```
But I recommend using one statement per variable (the reason is explained in Syntax). Thus, I would rewrite the previous statement to:
```
var x = 1;
var y = 2;
var z = 3;
```
Because of hoisting (see Variables Are Hoisted), it is usually best to declare variables at the beginning of a function.
## Variables Are Function-Scoped
The scope of a variable is always the complete function (as opposed to the current block). For example:
```
function foo() {
var x = -512;
if (x < 0) { // (1)
var tmp = -x;
...
}
console.log(tmp); // 512
}
```
We can see that the variable tmp is not restricted to the block starting in line (1); it exists until the end of the function.
## Variables Are Hoisted
Each variable declaration is hoisted: the declaration is moved to the beginning of the function, but assignments that it makes stay put. As an example, consider the variable declaration in line (1) in the following function:
```
function foo() {
console.log(tmp); // undefined
if (false) {
var tmp = 3; // (1)
}
}
```
Internally, the preceding function is executed like this:
```
function foo() {
var tmp; // hoisted declaration
console.log(tmp);
if (false) {
tmp = 3; // assignment stays put
}
}
```
## Closures
Each function stays connected to the variables of the functions that surround it, even after it leaves the scope in which it was created. For example:
```
function createIncrementor(start) {
return function () { // (1)
start++;
return start;
}
}
```
The function starting in line (1) leaves the context in which it was created, but stays connected to a live version of start:
```
> var inc = createIncrementor(5);
> inc()
6
> inc()
7
> inc()
8
```
A closure is a function plus the connection to the variables of its surrounding scopes. Thus, what createIncrementor() returns is a closure.
## The IIFE Pattern: Introducing a New Scope
Sometimes you want to introduce a new variable scope—for example, to prevent a variable from becoming global. In JavaScript, you can’t use a block to do so; you must use a function. But there is a pattern for using a function in a block-like manner. It is called IIFE (immediately invoked function expression, pronounced “iffy”):
```
(function () { // open IIFE
var tmp = ...; // not a global variable
}()); // close IIFE
```
Be sure to type the preceding example exactly as shown (apart from the comments). An IIFE is a function expression that is called immediately after you define it. Inside the function, a new scope exists, preventing tmp from becoming global. Consult Introducing a New Scope via an IIFE for details on IIFEs.
## IIFE use case: inadvertent sharing via closures
Closures keep their connections to outer variables, which is sometimes not what you want:
```
var result = [];
for (var i=0; i < 5; i++) {
result.push(function () { return i }); // (1)
}
console.log(result[1]()); // 5 (not 1)
console.log(result[3]()); // 5 (not 3)
```
The value returned in line (1) is always the current value of i, not the value it had when the function was created. After the loop is finished, i has the value 5, which is why all functions in the array return that value. If you want the function in line (1) to receive a snapshot of the current value of i, you can use an IIFE:
```
for (var i=0; i < 5; i++) {
(function () {
var i2 = i; // copy current i
result.push(function () { return i2 });
}());
}
```
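An equivalent variant of this fix passes i into the IIFE as a parameter instead of copying it manually (shown here as an alternative sketch):

```javascript
var result = [];
for (var i = 0; i < 5; i++) {
    (function (i2) { // i2 receives a snapshot of the current i
        result.push(function () { return i2 });
    }(i));
}
console.log(result[1]()); // 1
console.log(result[3]()); // 3
```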
## Objects and Constructors
This section covers two basic object-oriented mechanisms of JavaScript: single objects and constructors (which are factories for objects, similar to classes in other languages).
### Single Objects
Like all values, objects have properties. You could, in fact, consider an object to be a set of properties, where each property is a (key, value) pair. The key is a string, and the value is any JavaScript value.
In JavaScript, you can directly create plain objects, via object literals:
```
'use strict';
var jane = {
name: 'Jane',
describe: function () {
return 'Person named '+this.name;
}
};
```
The preceding object has the properties name and describe. You can read (“get”) and write (“set”) properties:
```
> jane.name // get
'Jane'
> jane.name = 'John'; // set
> jane.newProperty = 'abc'; // property created automatically
```
Function-valued properties such as describe are called methods. They use this to refer to the object that was used to call them:
```
> jane.describe() // call method
'Person named John'
> jane.name = 'Jane';
> jane.describe()
'Person named Jane'
```
The in operator checks whether a property exists:
```
> 'newProperty' in jane
true
> 'foo' in jane
false
```
If you read a property that does not exist, you get the value undefined. Hence, the previous two checks could also be performed like this:
```
> jane.newProperty !== undefined
true
> jane.foo !== undefined
false
```
The delete operator removes a property:
```
> delete jane.newProperty
true
> 'newProperty' in jane
false
```
### Arbitrary Property Keys
A property key can be any string. So far, we have seen property keys in object literals and after the dot operator. However, you can use them that way only if they are identifiers (see Identifiers and Variable Names). If you want to use other strings as keys, you have to quote them in an object literal and use square brackets to get and set the property:
```
> var obj = { 'not an identifier': 123 };
> obj['not an identifier']
123
> obj['not an identifier'] = 456;
```
Square brackets also allow you to compute the key of a property:
```
> var obj = { hello: 'world' };
> var x = 'hello';
> obj[x]
'world'
> obj['hel'+'lo']
'world'
```
### Extracting Methods
If you extract a method, it loses its connection with the object. On its own, the function is not a method anymore, and this has the value undefined (in strict mode).
As an example, let’s go back to the earlier object jane:
```
'use strict';
var jane = {
name: 'Jane',
describe: function () {
return 'Person named '+this.name;
}
};
```
We want to extract the method describe from jane, put it into a variable func, and call it. However, that doesn’t work:
```
> var func = jane.describe;
> func()
TypeError: Cannot read property 'name' of undefined
```
The solution is to use the method bind() that all functions have. It creates a new function whose this always has the given value:
```
> var func2 = jane.describe.bind(jane);
> func2()
'Person named Jane'
```
### Functions Inside a Method
Every function has its own special variable this. This is inconvenient if you nest a function inside a method, because you can’t access the method’s this from the function. The following is an example where we call forEach with a function to iterate over an array:
```
var jane = {
name: 'Jane',
friends: [ 'Tarzan', 'Cheeta' ],
logHiToFriends: function () {
'use strict';
this.friends.forEach(function (friend) {
// `this` is undefined here
console.log(this.name+' says hi to '+friend);
});
}
}
```
Calling logHiToFriends produces an error:
```
> jane.logHiToFriends()
TypeError: Cannot read property 'name' of undefined
```
Let’s look at two ways of fixing this. First, we could store this in a different variable:
```
logHiToFriends: function () {
'use strict';
var that = this;
this.friends.forEach(function (friend) {
console.log(that.name+' says hi to '+friend);
});
}
```
Or, forEach has a second parameter that allows you to provide a value for this:
```
logHiToFriends: function () {
'use strict';
this.friends.forEach(function (friend) {
console.log(this.name+' says hi to '+friend);
}, this);
}
```
Function expressions are often used as arguments in function calls in JavaScript. Always be careful when you refer to this from one of those function expressions.
## Constructors: Factories for Objects
Until now, you may think that JavaScript objects are only maps from strings to values, a notion suggested by JavaScript’s object literals, which look like the map/dictionary literals of other languages. However, JavaScript objects also support a feature that is truly object-oriented: inheritance. This section does not fully explain how JavaScript inheritance works, but it shows you a simple pattern to get you started. Consult Chapter 17 if you want to know more.
In addition to being “real” functions and methods, functions play another role in JavaScript: they become constructors—factories for objects—if invoked via the new operator. Constructors are thus a rough analog to classes in other languages. By convention, the names of constructors start with capital letters. For example:
```
// Set up instance data
function Point(x, y) {
this.x = x;
this.y = y;
}
// Methods
Point.prototype.dist = function () {
return Math.sqrt(this.x*this.x + this.y*this.y);
};
```
We can see that a constructor has two parts. First, the function Point sets up the instance data. Second, the property Point.prototype contains an object with the methods. The former data is specific to each instance, while the latter data is shared among all instances.
To use Point, we invoke it via the new operator:
```
> var p = new Point(3, 5);
> p.x
3
> p.dist()
5.830951894845301
```
p is an instance of Point:
```
> p instanceof Point
true
```
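To see that prototype methods are shared while instance data is not (an illustrative check, not from the original):

```javascript
function Point(x, y) {
    this.x = x;
    this.y = y;
}
Point.prototype.dist = function () {
    return Math.sqrt(this.x*this.x + this.y*this.y);
};

var p1 = new Point(3, 4);
var p2 = new Point(6, 8);
console.log(p1.dist === p2.dist); // true: both instances share one method
console.log(p1.dist()); // 5
console.log(p2.dist()); // 10
```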
### Arrays
Arrays are sequences of elements that can be accessed via integer indices starting at zero.
### Array Literals
Array literals are handy for creating arrays:
```
> var arr = [ 'a', 'b', 'c' ];
```
The preceding array has three elements: the strings 'a', 'b', and 'c'. You can access them via integer indices:
```
> arr[0]
'a'
> arr[0] = 'x';
> arr
[ 'x', 'b', 'c' ]
```
The length property indicates how many elements an array has. You can use it to append elements and to remove elements:
```
> var arr = ['a', 'b'];
> arr.length
2
> arr[arr.length] = 'c';
> arr
[ 'a', 'b', 'c' ]
> arr.length
3
> arr.length = 1;
> arr
[ 'a' ]
```
The in operator works for arrays, too:
```
> var arr = [ 'a', 'b', 'c' ];
> 1 in arr // is there an element at index 1?
true
> 5 in arr // is there an element at index 5?
false
```
Note that arrays are objects and can thus have object properties:
```
> var arr = [];
> arr.foo = 123;
> arr.foo
123
```
### Array Methods
Arrays have many methods (see Array Prototype Methods). Here are a few examples:
```
> var arr = [ 'a', 'b', 'c' ];
> arr.slice(1, 2) // copy elements
[ 'b' ]
> arr.slice(1)
[ 'b', 'c' ]
> arr.push('x') // append an element
4
> arr
[ 'a', 'b', 'c', 'x' ]
> arr.pop() // remove last element
'x'
> arr
[ 'a', 'b', 'c' ]
> arr.shift() // remove first element
'a'
> arr
[ 'b', 'c' ]
> arr.unshift('x') // prepend an element
3
> arr
[ 'x', 'b', 'c' ]
> arr.indexOf('b') // find the index of an element
1
> arr.indexOf('y')
-1
> arr.join('-') // all elements in a single string
'x-b-c'
> arr.join('')
'xbc'
> arr.join()
'x,b,c'
```
### Iterating over Arrays
There are several array methods for iterating over elements (see Iteration (Nondestructive)). The two most important ones are forEach and map.
forEach iterates over an array and hands the current element and its index to a function:
```
[ 'a', 'b', 'c' ].forEach(
function (elem, index) { // (1)
console.log(index + '. ' + elem);
});
```
The preceding code produces the following output:
```
0. a
1. b
2. c
```
Note that the function in line (1) is free to ignore arguments. It could, for example, only have the parameter elem.
map creates a new array by applying a function to each element of an existing array:
```
> [1,2,3].map(function (x) { return x*x })
[ 1, 4, 9 ]
```
### Regular Expressions
JavaScript has built-in support for regular expressions. They are delimited by slashes:
```
/^abc$/
/[A-Za-z0-9]+/
```
### Method test(): Is There a Match?
```
> /^a+b+$/.test('aaab')
true
> /^a+b+$/.test('aaa')
false
```
### Method exec(): Match and Capture Groups
```
> /a(b+)a/.exec('_abbba_aba_')
[ 'abbba', 'bbb' ]
```
The returned array contains the complete match at index 0, the capture of the first group at index 1, and so on. There is a way (discussed in RegExp.prototype.exec: Capture Groups) to invoke this method repeatedly to get all matches.
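As a sketch of that repeated invocation: with the /g flag, exec() remembers where the last match ended and each call returns the next match, until it returns null:

```javascript
var regex = /a(b+)a/g;
var str = '_abbba_aba_';
var captures = [];
var match;
while ((match = regex.exec(str)) !== null) {
    captures.push(match[1]); // the first group of each match
}
console.log(captures); // [ 'bbb', 'b' ]
```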
### Method replace(): Search and Replace
```
> '<a> <bbb>'.replace(/<(.*?)>/g, '[$1]')
'[a] [bbb]'
```
The first parameter of replace must be a regular expression with a /g flag; otherwise, only the first occurrence is replaced. There is also a way (as discussed in String.prototype.replace: Search and Replace) to use a function to compute the replacement.
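As a sketch of the function-based variant mentioned above, the replacement function receives the complete match followed by the group captures:

```javascript
var result = '<a> <bbb>'.replace(/<(.*?)>/g, function (all, group1) {
    // all is the complete match (e.g. '<a>'), group1 is the first capture
    return '[' + group1.toUpperCase() + ']';
});
console.log(result); // '[A] [BBB]'
```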
## Math
Math is an object with arithmetic functions. Here are some examples:
```
> Math.abs(-2)
2
> Math.pow(3, 2) // 3 to the power of 2
9
> Math.max(2, -1, 5)
5
> Math.round(1.9)
2
> Math.PI // pre-defined constant for π
3.141592653589793
> Math.cos(Math.PI) // compute the cosine for 180°
-1
```
From: https://blog.mathquant.com/2019/04/26/4-1-javascript-language-quick-start.html | fmzquant |
1,877,394 | Innovations at Shanghai Rumi Electromechanical Technology | 1216ade8501ce3c06cb6ee9f82decf63dd8e4eadde06e8ba78254e1d37c398d9.png "Discover the Innovations at... | 0 | 2024-06-05T01:49:05 | https://dev.to/sylvia_joyceke_6f128206ac/innovations-at-shanghai-rumi-electromechanical-technology-37pn | 1216ade8501ce3c06cb6ee9f82decf63dd8e4eadde06e8ba78254e1d37c398d9.png
"Discover the Innovations at Shanghai Rumi Electromechanical Technology "
Introduction:
Shanghai Rumi Electromechanical Technology is a company that concentrates on producing top-quality goods and services. Thanks to its innovations, it has become one of the most sought-after businesses in its field.
Value:
One of the main strengths of Shanghai Rumi Electromechanical Technology is its advanced equipment. The company uses current technology to make top-notch products that are both efficient and dependable. Whether you need equipment for work or for home, Shanghai Rumi Electromechanical Technology has an option for you.
Innovation:
Shanghai Rumi Electromechanical Technology is known for its innovative approach to manufacturing and development. The company is always researching ways to improve its products and services, which is why it invests heavily in research and development. This dedication to innovation is what sets Shanghai Rumi Electromechanical Technology apart from its competition.
Protection:
Safety is a top concern at Shanghai Rumi Electromechanical Technology. The business follows strict procedures to make sure its products and services are safe to work with. Every item is thoroughly tested so that it meets all safety specifications before it is released to the market. This commitment to safety gives customers peace of mind when using Shanghai Rumi Electromechanical Technology solutions.
Use/How to work alongside:
Shanghai Rumi Electromechanical Technology products are user-friendly and require minimal setup. Whether you are a first-time user or an experienced professional, you will find that Shanghai Rumi Electromechanical Technology products are simple and straightforward. Every item comes with clear instructions on how to get the most benefit from it.
Service/Quality:
Shanghai Rumi Electromechanical Technology is committed to serving its customers as well as possible. The business has a team of experts who are always willing to help clients with their needs. It also runs a rigorous quality-control system to make sure its products, such as High Viscosity Mixers, meet the highest standards. This dedication to quality and service is exactly what keeps customers coming back to Shanghai Rumi Electromechanical Technology.
Application:
Shanghai Rumi Electromechanical Technology products are used in a number of different industries, including manufacturing, construction, and farming. The company's items are designed to help businesses improve efficiency and effectiveness while reducing costs. Whatever equipment you need for your premises, Shanghai Rumi Electromechanical Technology has something that will suit you.
If you are looking for top-quality products such as Complete Production Lines, look no further than Shanghai Rumi Electromechanical Technology. The company's dedication to innovation, safety, quality, and service has made it a leader. With its state-of-the-art equipment and innovative product development, you can trust Shanghai Rumi Electromechanical Technology.
Source: https://www.rumiasia.com/Highviscositymixers | sylvia_joyceke_6f128206ac | |
1,877,393 | Overview of Shanghai Rumi Electromechanical Technology Co., L | 74c6cebcd045f665789222bb7e6488bc7e75fcf91216525d541f0d998ef07dea.png Overview of Shanghai Rumi... | 0 | 2024-06-05T01:30:35 | https://dev.to/sylvia_joyceke_6f128206ac/overview-of-shanghai-rumi-electromechanical-technology-co-l-1736 | 74c6cebcd045f665789222bb7e6488bc7e75fcf91216525d541f0d998ef07dea.png
Overview of Shanghai Rumi Electromechanical Technology Co., Ltd
Looking for a company that provides top-quality, innovative, and safe merchandise? Look no further than Shanghai Rumi Electromechanical Technology Co., Ltd! Keep reading to learn about the benefits of choosing this company.
Benefits of Choosing Shanghai Rumi Electromechanical Technology Co., Ltd:
Shanghai Rumi Electromechanical Technology Co., Ltd is one of the best companies offering various electromechanical products such as machines, generators, and transformers. The company has been providing top-quality products and solutions for more than two decades. Consider the advantages that make Shanghai Rumi Electromechanical Technology Co., Ltd the better choice:
1. Quality products: Their products are manufactured from top-notch materials that last longer.
2. Affordable prices: Their rates are quite affordable for the typical customer.
3. Professional service: Their customer support team is available to handle any problems or questions you may have.
4. Timely deliveries: They guarantee the products arrive on time, so there is no delay in your work.
5. Wide range of products: They offer a wide variety of products catering to your unique requirements.
Innovation at Shanghai Rumi Electromechanical Technology Co. Ltd:
Innovation is central to Shanghai Rumi Electromechanical Technology Co., Ltd's success story. They have a team of researchers and developers who work tirelessly to produce new and improved products.
Their innovative nature has led them to develop many types of machines, for instance induction machines, permanent magnet motors, and asynchronous machines. They have also used advanced technology to build up their products and services, ensuring they meet market requirements and making them a great solution for customers in several sectors.
Protection at Shanghai Rumi Electromechanical Technology Co. Ltd:
Safety is always a top concern at Shanghai Rumi Electromechanical Technology Co., Ltd. Their products go through rigorous testing to make certain that they meet safety specifications. They follow strict protocols to ensure their products and services do not cause any problems for customers. Safety features are built into their designs, for example automatic overheat protection and sound insulation.
Creating Use Of Shanghai Rumi Electromechanical Technology Co. Ltd Goods:
Using Shanghai Rumi Electromechanical Technology Co., Ltd products, such as High Viscosity Mixers, is simple. Their products fit into everyday electromechanical uses, from home equipment to transportation. All their products come with clear manuals and directions that walk you through the entire process of installation and operation.
Apart from this, their customer support team is always available to help you with any setup issues you may have, making the experience of using their products and services seamless and enjoyable.
Service at Shanghai Rumi Electromechanical Technology Co. Ltd:
Shanghai Rumi Electromechanical Technology Co., Ltd focuses on providing quality solutions to its clientele. They are backed by an efficient customer team that is consistently willing to answer any questions you may have about their products and services.
They take pride in timely deliveries, easy order placement, and impressive after-sales support. This customer-centric approach is why they have built a dedicated customer base in recent years.
Quality at Shanghai Rumi Electromechanical Technology Co. Ltd:
Quality is at the core of what Shanghai Rumi Electromechanical Technology Co., Ltd does. From their state-of-the-art manufacturing processes to the strict rules they follow while producing their products and services, they make sure that every product that leaves their factory meets the highest standards of quality. This focus on quality has earned them a reputation for excellence, making them a top choice across different businesses.
Applications of Shanghai Rumi Electromechanical Technology Co. Ltd goods:
Shanghai Rumi Electromechanical Technology Co., Ltd's items have a wide range of applications, which is why they appeal to various businesses. Some of the most typical applications include transportation, agriculture, industrial equipment, and machinery.
To close, Shanghai Rumi Electromechanical Technology Co., Ltd is a company you can depend on for all your electromechanical requirements. Their attention to quality, safety, innovation, affordable rates, excellent customer care, and wide range of merchandise, including Complete Production Lines, makes them an excellent selection for any customer shopping for electromechanical products. Trust Shanghai Rumi Electromechanical Technology Co., Ltd, and you will never be disappointed.
Source: https://www.rumiasia.com/Highviscositymixers | sylvia_joyceke_6f128206ac | |
1,877,621 | Pipeline Validación de Código | Siguiendo con la construcción de nuestro pipeline para aplicaciones en react, vamos a agregar... | 0 | 2024-06-05T13:26:37 | https://www.ahioros.info/2024/06/pipeline-validacion-de-codigo.html | azure, cloud, devops, spanish | ---
title: Pipeline Validación de Código
published: true
date: 2024-06-05 01:30:00 UTC
tags: Azure,cloud,DevOps,spanish
canonical_url: https://www.ahioros.info/2024/06/pipeline-validacion-de-codigo.html
---
Continuing with the construction of our pipeline for React applications, we are going to add tools that validate the repository's code every time we make a commit or a PR in the repository.
First of all, let's define each of these concepts.
* Formatting: For this we use tools called linters. These tools usually also include options for code analysis. This is nothing new: in the C programming language, such tools were already used before compilation.
* Analysis: Code analysis can be done with linters or other specialized tools. We can even give the code a score and thereby block code that does not comply with certain rules or that falls below a threshold such as 89/100.
* Tests: In this stage we run the code's tests, which can be unit or integration tests.
* Build: In this stage we compile the code, and if everything works correctly we can push it to the repository.
En nuestro pipeline vamos a agregar cada una de las herramientas.
**Nota** : Estas son solo algunas herramientas, existen muchas más, y sobre el análisi de código más adelanteveremos como usar servicios de terceros, ya sea para analizar el código a nivel de seguridad, para validar la calidad del código, etc.
```yaml
stages:
  - stage: CodeValidation
    jobs:
      - job: CodeValidation
        steps:
          - task: NodeTool@0
            inputs:
              versionSpec: '15.x'
            displayName: 'Install Node.js'
          - script: |
              npm install
            displayName: 'npm install'
          - script: |
              npm run lint
            displayName: 'npm lint'
          - script: |
              npm install prettier && npx prettier . --write && npm run prettier
            displayName: 'npm prettier'
          - script: |
              CI=true npm run test
            displayName: 'npm test'
          - script: |
              npm run build
            displayName: 'npm build'
```
Here is the video for this configuration in case you have any questions:
{% youtube DTFuRxTkpX8 %}
| ahioros |
1,877,350 | How to make basic express api | ExpressJS is a JavaScript framework that allows you to make advanced api for your web app first you... | 0 | 2024-06-05T01:12:41 | https://dev.to/cache/how-to-make-basic-express-api-2o7g | express, javascript, api, webdev | ExpressJS is a JavaScript framework that allows you to build advanced APIs for your web app.
First, you need to make a `my-project` folder.
Then open a terminal and run this command:
```
npm init -y
```
Then you need to install Express; you can do so with this command:
```
npm i express
```
After that, replace the `scripts` section in your `package.json` file with this:
```
"scripts": {
  "start": "node app.js"
}
```
Then make an `app.js` file
and put this code inside it:
```js
const express = require("express")
const app = express()

// /api route
app.get('/api', (req, res) => {
  res.send('hello world!') // returns 'hello world!' when you visit http://localhost:3000/api
})

// make the app listen on port 3000
app.listen(3000, () => {
  console.log("app listening on http://localhost:3000")
})
```
Then run
```
npm start
```
And navigate to http://localhost:3000/api
And you'll see 'hello world!'
That's it.
| cache |
1,877,349 | Understanding Spring Annotations: A Comprehensive Overview | Introduction Working with Spring implies using lots of annotations to configure your... | 27,602 | 2024-06-05T01:02:28 | https://springmasteryhub.com/2024/06/04/understanding-spring-annotations-a-comprehensive-overview/ | java, spring, springboot, programming | # Introduction
Working with Spring implies using lots of annotations to configure your application, link components, and manage behaviors. These annotations can be grouped into several categories: Initialization, Configuration, Stereotype, Behavioral, and Testing annotations.
This overview aims to help you discover some new annotations or to understand briefly what some annotations that you may be seeing in your project do, so you can better understand a little bit what is going on and why they are there. It can help you have some ideas to apply some of those in your project.
In this first article of the series, we will cover an overview of each annotation and briefly explain what each one does. In the following articles, I’ll show you how to use them and some specific use cases for each one.
## Initialization Annotations
### @Value
This annotation is usually used to inject values from your configuration properties.
### @Required
It marks a dependency as mandatory, but it has since been deprecated.
### @Autowired
It allows Spring to link beans from the Spring IoC container.
### @Lazy
It makes a bean lazily initialized: instead of being created with the application context, it is created only when it is first used.
### @AliasFor
It allows you to create different names for the same attribute.
### @Qualifier
It allows you to define which bean to use when you have two beans with the same type.
### @Primary
It defines which of the beans with the same type is the default.
### @DependsOn
It defines beans that the current beans depend on. It guarantees that all the dependencies will be created first.
### @Secured
This annotation specifies which user role (it can be more than one) has access to a method.
### @Resource
It tells Spring to inject a bean into a field or setter method based on its name.
### @PreDestroy
Spring will call the method where this annotation is set before a bean is removed from the context.
### @PostConstruct
Spring will call the method with this annotation to execute after the bean is created.
### @Lookup
It tells Spring to override the annotated method so that it returns a bean matching the method's return type from the container.
## Configuration Specific Annotations
### @Import
Indicates one or more components to import, usually classes with @Configuration in it.
### @Profile
When combined with a bean, it will only make the bean available if the Spring active profile matches the defined value, e.g. `@Profile("prod")`.
### @ComponentScan
Used with @Configuration, this annotation tells Spring which packages to scan for components.
### @Bean
This marks to Spring that the return type of the method is an instance to be created, managed, and injected by the Spring IoC container.
### @PropertySource
An annotation used in conjunction with @Configuration to add property sources to the application environment.
### @Scope
This annotation is used to define the lifetime scope of a bean.
## Stereotypes
### @Component
This annotation tells Spring that our class needs to be created, managed, and injected as an application bean.
### @Configuration
This annotation indicates for Spring that the class is a source of bean definitions.
### @Controller
This annotation is a specialization of @Component and has the objective of handling web requests.
### @RestController
This annotation is a combination of @Controller with @ResponseBody to make it easier to create RESTful web services.
### @Service
This is a specialization of the @Component that aims to hold business logic in it.
### @Repository
This annotation is for classes that deal with data persistence.
## Behavioral Annotations
### @Aspect
This annotation tells Spring that this class is an aspect, meaning this class contains code specific to a cross-cutting concern.
### @Transactional
Spring will create a transaction wrapping the class or method and manage its lifecycle.
## Test Annotations
### @BootstrapWith
You can use this annotation to tell how Spring TestContext should be bootstrapped (you could use this to load only specific configurations to your context).
### @ContextConfiguration
This annotation is used to tell how to load and configure the application context for your tests.
### @WebAppConfiguration
It loads a web application context in testing.
### @ContextHierarchy
It defines the hierarchy of the configurations when configuring the test application context.
### @ActiveProfiles
It defines which profile will be used in your test application context.
### @TestPropertySource
It helps you to configure the property source that Spring will use in your tests.
### @DynamicPropertySource
This allows you to set a method to add dynamic properties to your property source.
### @DirtiesContext
It tells Spring to restart the context in testing. Spring by default reuses the context in testing; this will force it to restart.
### @TestExecutionListeners
Similar to JUnit's lifecycle hooks, Spring provides listeners that execute actions before or after phases of a test's execution (just like JUnit's @Before).
### @RecordApplicationEvents
This annotation will record all the application events published in the application context during a test execution.
### @Commit
This will tell the transaction to commit after the execution of a test method.
### @Rollback
This annotation will make the transaction rollback after the test method execution.
### @BeforeTransaction
Defines a behavior that will run before starting a transaction.
### @AfterTransaction
Defines a behavior that will happen after the transaction ends.
### @Sql
This will run SQL in the database when running an integration test.
### @SqlGroup
Creates a group of @Sql annotations to run when running your integration tests.
## Conclusion
In the upcoming articles, we will explain tips, tricks, and use cases for each of these annotations. Stay tuned for detailed guides on leveraging these annotations to enhance your Spring applications.
Follow me! | tiuwill |
1,875,734 | it's all about the least worst combination of trade-offs | i remember that early in my computer science career in the industry hearing a lot of silver bullet... | 0 | 2024-06-05T01:02:03 | https://dev.to/marcostx/its-all-about-the-least-worst-combination-of-trade-offs-17fc | books, softwareengineering, architecture | i remember that early in my computer science career in the industry hearing a lot of silver bullet frameworks/packages that fit all the cases you want and (apparently) didn't have any disadvantages or drawbacks. "using this MVP framework X will solve your PHP development problems", "the best API interface for Java is Y", and similar pitches were quite common. At the time, the community was infested with solution evangelists and it was so common that providing critical analysis of the usage or its limitation was like desacrating.
these frameworks were often marketed with grand promises and I saw minimal discussion about the context or implications of its usages involved. at first, these claims seemed incredibly attractive, especially for someone new to the field, as they promised a panacea to all the prevalent issues. however, it didn’t take long to realize that every framework or package had its own set of limitations, and the context in which they were used mattered significantly. this background made me more appreciative of any resource that addressed the complexity of architectural decisions honestly.
last week, I read the book "Software Architecture: The Hard Parts" that I really liked. by the way, the hard parts the author refers to are related to the difficult choices, and the foundation parts - should change less compared to the "soft" ones - of a software design. the book then focuses mainly on these architectural decisions involved in modern software development and several guides on how to better evaluate the implications of different types of solutions to solve these problems: trade-off analysis. the rigorous attention to decision-making frameworks ensures that readers are not just passively absorbing information but are actively engaging with the flow of thinking to understand the nuanced consequences of each choice. the author discusses how these decisions can have long-lasting impacts on maintainability, scalability, and performance, emphasizing that what works well in one context might become a bottleneck in another - which for me makes this book special.

the author demystified the concept of a one-size-fits-all solution for software architecture by focusing on identifying the trade-offs inherent in all design decisions. one of the first quotes of the book is "don't try to find the best design in software architecture, instead, strive for the least worst combination of trade-offs". that really caught my eye and made me curious enough to finish the book. the examples and case studies provided are rich with scenarios that demonstrate how even small decisions can have cascading effects on a software system's future evolution. by enforcing this critical mindset, the book arms readers with the tools to make more informed decisions, rather than chasing an elusive "perfect" architecture or becoming "evangelizers".
the good parts of the book are
- real-world examples to follow the theoretical assumptions the author presents
- systematic guidance on how to decompose complex systems
- large and useful amount of common decisions trade-offs on modern software architecture
- personal tips from the author alongside the chapter discussions
- amazing and intuitive diagrams
- comprehensive language and storytelling
the weak parts of the book are
- not so recommended for beginner software developers (but I don't think that's the book's target)
- despite the chapters having a good interconnection, sometimes the author refers to a chapter discussion without making it easier for the reader to remember the context of what was discussed and its implications
- personal tips from the author alongside the chapter discussions (also a weak part here because in some cases I thought it biased the discussion)
overall I think it's definitely an amazing reading and I really recommend it for every programmer who creates modern distributed software that, consequently, involves a lot of design decisions that should be carefully evaluated in order to develop robust systems. The concepts presented in this book are not just applicable to large companies but can also be adapted for use in startups or smaller teams, making it versatile and essential reading for anyone involved in software design and development. | marcostx |
1,877,347 | How to Merge Two Arrays in Java: A Simple Guide | Merging two arrays is a common operation in Java, often encountered in various programming tasks.... | 0 | 2024-06-05T00:50:09 | https://dev.to/raajaryan/how-to-merge-two-arrays-in-java-a-simple-guide-13hc | java, programming, tutorial, beginners |
Merging two arrays is a common operation in Java, often encountered in various programming tasks. This article explores multiple methods to merge two arrays in Java, catering to different preferences and scenarios.
## Method 1: Using Predefined Function
```java
import java.util.Arrays;
public class MergeTwoArrays1 {
public static void main(String[] args) {
int[] a = {10, 20, 30, 40};
int[] b = {50, 60, 70, 80};
int a1 = a.length;
int b1 = b.length;
int c1 = a1 + b1;
int[] c = new int[c1];
System.arraycopy(a, 0, c, 0, a1);
System.arraycopy(b, 0, c, a1, b1);
System.out.println(Arrays.toString(c));
}
}
```
**Output:**
```
[10, 20, 30, 40, 50, 60, 70, 80]
```
**Complexity:**
- Time Complexity: O(M + N)
- Auxiliary Space: O(M + N)
Here, M is the length of array a, and N is the length of array b.
## Method 2: Without Using Predefined Function
```java
public class MergeTwoArrays2 {
public static void main(String[] args) {
int a[] = {30, 25, 40};
int b[] = {45, 50, 55, 60, 65};
int a1 = a.length;
int b1 = b.length;
int c1 = a1 + b1;
int[] c = new int[c1];
for (int i = 0; i < a1; i++) {
c[i] = a[i];
}
for (int i = 0; i < b1; i++) {
c[a1 + i] = b[i];
}
for (int i = 0; i < c1; i++) {
System.out.println(c[i]);
}
}
}
```
**Output:**
```
30
25
40
45
50
55
60
65
```
**Complexity:**
- Time Complexity: O(M + N)
- Auxiliary Space: O(M + N)
Here, M is the length of array a, and N is the length of array b.
## Method 3: Using Java Streams
```java
import java.util.Arrays;
import java.util.stream.IntStream;
public class MergeTwoArraysUsingStreams {
public static void main(String[] args) {
int a[] = {30, 25, 40};
int b[] = {45, 50, 55, 60, 65};
int[] c = mergeArraysUsingStreams(a, b);
Arrays.stream(c).forEach(System.out::println);
}
public static int[] mergeArraysUsingStreams(int[] arr1, int[] arr2) {
return IntStream.concat(Arrays.stream(arr1), Arrays.stream(arr2)).toArray();
}
}
```
**Output:**
```
30
25
40
45
50
55
60
65
```
**Complexity:**
- Time Complexity: O(M + N)
- Auxiliary Space: O(M + N)
Here, M is the length of array a, and N is the length of array b.
## Method 4: Using ArrayList
```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
public class MergeArrays {
public static int[] mergeArraysUsingArrayList(int[] a, int[] b) {
List<Integer> resultList = new ArrayList<>();
for (int num : a) {
resultList.add(num);
}
for (int num : b) {
resultList.add(num);
}
return resultList.stream()
.mapToInt(Integer::intValue).toArray();
}
public static void main(String[] args) {
int a[] = {30, 25, 40};
int b[] = {45, 50, 55, 60, 65};
int[] result = mergeArraysUsingArrayList(a, b);
for (int i = 0; i < result.length; i++) {
System.out.println(result[i]);
}
}
}
```
**Output:**
```
30
25
40
45
50
55
60
65
```
**Complexity:**
- Time Complexity: O(M + N)
- Auxiliary Space: O(M + N)
Here, M is the length of array a, and N is the length of array b.
These methods offer flexibility in merging arrays, catering to different preferences and requirements. Choose the one that best suits your specific scenario and coding style. | raajaryan |
1,877,291 | Building Connections: Exploring Bing Search Engine APIs | In the vast landscape of digital information, search engines play a pivotal role in helping users... | 0 | 2024-06-04T23:08:47 | https://dev.to/ericksmith14/building-connections-exploring-bing-search-engine-apis-2a3b | api, bing | In the vast landscape of digital information, search engines play a pivotal role in helping users navigate through the abundance of data available on the internet. Among the myriad of search engines available, Bing stands out as a robust platform offering powerful APIs (Application Programming Interfaces) that developers can leverage to enhance their applications and services. In this article, we delve into the world of **[Bing Search Engine API](https://zenserp.com/bing-search-api)**, exploring its functionalities, benefits, and potential applications.
## Understanding Bing Search Engine APIs
Bing Search Engine APIs provide developers with access to a wealth of information indexed by the Bing search engine. These APIs allow developers to integrate various search functionalities directly into their applications, enabling users to perform web searches, image searches, video searches, news searches, and more, seamlessly within the application environment.
## Exploring Bing Search APIs
One of the key features of Bing Search Engine APIs is its versatility. Developers can choose from a range of APIs tailored to specific search functionalities. For instance, the Bing Web Search API enables developers to retrieve search results from the web, including webpages, images, news articles, and videos, with customizable search parameters. Similarly, the Bing Image Search API provides access to a vast repository of images indexed by Bing, allowing developers to integrate image search capabilities into their applications effortlessly.
## Utilizing Bing Search APIs in Applications
The applications of Bing Search Engine APIs are diverse and expansive. For instance, developers can integrate the Bing Image Search API into e-commerce platforms to allow users to search for products using images, enhancing the shopping experience. Similarly, news aggregators can leverage the Bing News Search API to fetch real-time news articles and updates, keeping users informed and engaged.
## Comparing Bing Search APIs with Other Providers
While Bing Search Engine APIs offer robust functionality and ease of integration, it's essential to compare them with other providers in the market. Google, for instance, offers a range of APIs, including the Google Custom Search API and the Google Images API, which provide similar functionalities. However, each provider may have its unique features, pricing structures, and usage policies, necessitating careful consideration based on specific project requirements.
## Conclusion: Leveraging Bing Search Engine APIs for Enhanced Functionality
In conclusion, Bing **[Search Engine APIs](https://zenserp.com/)** offer developers a powerful toolkit for integrating search functionalities into their applications seamlessly. From web searches to image searches and news searches, Bing provides a comprehensive suite of APIs to cater to diverse application needs. By leveraging Bing Search Engine APIs, developers can enhance the functionality of their applications, improve user experience, and unlock new possibilities in information retrieval and discovery. Whether it's building a search engine-powered app or integrating search capabilities into existing platforms, Bing Search Engine APIs provide the tools necessary to connect users with the information they seek efficiently and effectively. | ericksmith14 |
1,877,342 | Developing Interactive E-Learning Content with HTML5 and JavaScript | Introduction With the rapid growth of technology, e-learning has become a popular medium... | 0 | 2024-06-05T00:31:51 | https://dev.to/kartikmehta8/developing-interactive-e-learning-content-with-html5-and-javascript-2hio | webdev, javascript, beginners, programming | ## Introduction
With the rapid growth of technology, e-learning has become a popular medium for education and training. In order to keep up with this changing trend, developers are now turning to HTML5 and JavaScript to create interactive and engaging e-learning content. In this article, we will discuss the advantages, disadvantages, and features of developing interactive e-learning content with these two languages.
## Advantages
1. **Cross-platform compatibility:** HTML5 and JavaScript are supported on all major devices and browsers, making it easier for learners to access the content on any device.
2. **Interactive and engaging:** With the use of animations, games, and interactivities, HTML5 and JavaScript allow for the creation of interactive and engaging content that keeps learners interested and motivated.
3. **Cost savings:** Since HTML5 and JavaScript are open-source, they are free to use, reducing the cost of developing e-learning content.
## Disadvantages
1. **Requires programming skills:** Developing e-learning content with HTML5 and JavaScript requires basic programming skills which may be challenging for non-technical users.
2. **Limited browser support:** Some older browsers may not support HTML5 and JavaScript, limiting the reach of the e-learning content.
## Features
1. **Multimedia support:** HTML5 and JavaScript support multimedia elements such as audio, video, and images, making the learning experience more engaging.
```html
<!-- Example of embedding a video in HTML5 -->
<video width="320" height="240" controls>
<source src="movie.mp4" type="video/mp4">
Your browser does not support the video tag.
</video>
```
2. **Responsive design:** E-learning content developed with HTML5 and JavaScript can adapt to different screen sizes and devices, providing a consistent user experience.
```css
/* Example of CSS for responsive design */
@media (max-width: 600px) {
.container {
width: 100%;
padding: 0;
}
}
```
## Conclusion
In conclusion, developing interactive e-learning content with HTML5 and JavaScript has numerous advantages such as cross-platform compatibility and engaging features. However, it also has some drawbacks, such as the requirement for programming skills. Despite these limitations, the use of these languages in creating e-learning content is a step in the right direction towards providing a more effective and engaging learning experience for students.
| kartikmehta8 |
1,877,340 | GraphQL vs. REST: Which Is the Best Choice for Your API? | In the world of API development, REST has been the de facto standard for many years. However, with... | 0 | 2024-06-05T00:21:32 | https://dev.to/thiagohnrt/graphql-vs-rest-qual-e-a-melhor-escolha-para-sua-api-59cj | graphql, restapi, webdev, braziliandevs | In the world of API development, REST has been the de facto standard for many years. However, with the rise of GraphQL, developers now have a powerful and flexible alternative. This article compares GraphQL and REST, highlighting the advantages and disadvantages of each approach, and provides guidance on when to choose one over the other.
### 1. What Is REST?
REST (Representational State Transfer) is a software architecture that uses the principles and protocols of the web. RESTful APIs are resource-based and use standard HTTP methods (GET, POST, PUT, DELETE) to perform CRUD operations (Create, Read, Update, Delete).
#### Advantages of REST:
- **Simplicity**: The URL structure and HTTP methods are easy to understand and use.
- **Scalability**: Designed to be scalable and to support large volumes of traffic.
- **Cacheability**: Responses can be cached, improving performance.
#### Disadvantages of REST:
- **Overfetching and underfetching**: Clients may receive more data than they need (overfetching) or not enough (underfetching).
- **Versioning**: Managing API versions can be complicated and error-prone.
### 2. What Is GraphQL?
GraphQL is a query language for APIs, developed by Facebook, that allows clients to request exactly the data they need. Unlike REST, which is resource-based, GraphQL is based on types and fields.
#### Advantages of GraphQL:
- **Flexibility**: Clients can specify exactly which data they want, avoiding overfetching and underfetching.
- **Typing**: The GraphQL schema is strongly typed, which improves validation and documentation.
- **Performance**: Reduces the number of HTTP requests by allowing multiple queries to be made in a single request.
#### Disadvantages of GraphQL:
- **Complexity**: Can be more complex to implement and configure than REST.
- **Caching**: Implementing caching can be harder in GraphQL than in REST.
- **Server load**: Complex queries can impact server performance if not managed properly.
### 3. Detailed Comparison
#### 3.1. Request and Response Structure
- **REST**:
  - Request: Each resource has its own endpoint.
  - Response: Structured according to the requested resource; may include unnecessary data.
  - Example:
    - `GET /users/1` to fetch data for a specific user.
    - `GET /users/1/posts` to fetch a specific user's posts.
- **GraphQL**:
  - Request: A single URL for all operations, with queries defined by the client.
  - Response: Structured according to the client's query, returning only the requested data.
  - Example:
```graphql
{
user(id: 1) {
name
posts {
title
}
}
}
```
#### 3.2. Data Flexibility
- **REST**: Each endpoint returns a fixed set of data.
- **GraphQL**: Clients can request exactly the data they need, combining multiple queries into a single request.
#### 3.3. Performance
- **REST**: May require multiple requests to fetch related data, increasing latency.
- **GraphQL**: Allows all the necessary data to be fetched in a single request, reducing latency, though it can overload the server if queries are not optimized.
#### 3.4. Versioning
- **REST**: Requires explicit versioning (for example, `/api/v1/users`), which can complicate maintenance.
- **GraphQL**: Avoids the need for explicit versioning by allowing new fields and types to be added to the schema without affecting existing queries.
### 4. When to Use REST?
- **Simplicity**: Small projects, or when simplicity is preferable.
- **Caching**: When caching is crucial for performance.
- **Standardization**: Teams accustomed to the REST standard, or when there is a need to conform to established API practices.
### 5. When to Use GraphQL?
- **Data flexibility**: Projects where it is important for clients to control exactly which data they receive.
- **Rapid development**: When agility in development and API evolution are crucial.
- **Complex applications**: Projects with multiple data sources or complex query requirements.
### 6. Practical Examples
#### Implementing a REST API with Node.js and Express
1. **Initial setup**:
```sh
mkdir rest-api
cd rest-api
npm init -y
npm install express
```
2. **Criação de Endpoints**:
```js
const express = require('express');
const app = express();
const port = 3000;
let users = [
{ id: 1, name: 'John Doe' },
{ id: 2, name: 'Jane Doe' }
];
app.get('/users', (req, res) => {
res.json(users);
});
app.get('/users/:id', (req, res) => {
const user = users.find(u => u.id === parseInt(req.params.id));
if (!user) return res.status(404).send('User not found');
res.json(user);
});
app.listen(port, () => {
console.log(`Server running at http://localhost:${port}`);
});
```
#### Implementing a GraphQL API with Node.js and Apollo Server
1. **Initial setup**:
```sh
mkdir graphql-api
cd graphql-api
npm init -y
npm install apollo-server graphql
```
2. **Creating the schema and resolvers**:
```js
const { ApolloServer, gql } = require('apollo-server');
const typeDefs = gql`
type User {
id: ID!
name: String!
}
type Query {
users: [User]
user(id: ID!): User
}
`;
let users = [
{ id: 1, name: 'John Doe' },
{ id: 2, name: 'Jane Doe' }
];
const resolvers = {
Query: {
users: () => users,
user: (parent, args) => users.find(user => user.id === parseInt(args.id))
}
};
const server = new ApolloServer({ typeDefs, resolvers });
server.listen().then(({ url }) => {
console.log(`Server running at ${url}`);
});
```
### Conclusion
The choice between GraphQL and REST depends on the specific needs of your project. REST is a mature, well-established approach, ideal for simple projects and situations where caching is critical. GraphQL, on the other hand, offers greater flexibility and can be more efficient in terms of requests, making it ideal for complex applications with dynamic data requirements. Evaluate your project's requirements and choose the approach that best fits your needs. | thiagohnrt |
1,878,150 | What is the best web scraping API service? | Web scraping API services are becoming very popular as the demand for web data is sky-rocketing.... | 0 | 2024-06-05T15:18:01 | https://scrapeway.com/blog/what-is-the-best-web-scraping-api-service | benchmark, webscraping, data, datascience | ---
title: What is the best web scraping API service?
published: true
date: 2024-06-05 00:00:55 UTC
tags: benchmark,webscraping,data,datascience
canonical_url: https://scrapeway.com/blog/what-is-the-best-web-scraping-api-service
---

Web scraping API services are becoming very popular as the demand for web data is sky-rocketing. So, which web scraping API is the best and how to choose the right one?
In this article, we’ll explore the **three** primary metrics used by [Scrapeway](https://scrapeway.com/) to determine the most performant web scraping API:
- Success rate — how likely a service is to successfully return page content.
- Speed — how fast scraping is being performed.
- Price — the overall price of the service.
[Check out our newsletter!](https://scrapeway.com/newsletter)
### Start with real benchmarks
The only reliable way to evaluate web scraping services is to actually measure them.

_Scrapeway benchmarks for average performance_
As a reference, we’ll use Scrapeway’s weekly web scraping API [benchmarks](https://scrapeway.com/), which evaluate these 3 metrics. Here are example results for June 2024:
The example above represents the average performance across all [web scraping targets](https://scrapeway.com/targets) covered by Scrapeway.
### Stability & success rate
The primary goal of every web scraping service is to extract web data at scale successfully. This means that scraping APIs have to overcome web scraping blocking to be effective tools at scale.

_Success rate of each service_
We see that this week [Scrapfly](https://scrapeway.com/web-scraping-api/scrapfly) has the highest average success rate, with [Zenrows](https://scrapeway.com/web-scraping-api/zenrows) and [ScrapingBee](https://scrapeway.com/web-scraping-api/scrapingbee) following closely behind.
The success rate indicates the percentage of requests that succeed and when evaluating web scraping APIs we want this number as high as possible as it indicates:
- Service technical capability
As anti-bot bypass is the toughest problem in scraping having a reliable anti-bot bypass is very important. Getting stranded with no more data supply for days is a nightmare for any data-driven business.
- Service stability
For large continuous scraping projects it’s important to have reliable and predictable service.
- Cost reduction
Success rate directly translates to cost reduction as it decreases retrying and increases overall scraping performance.
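The cost effect of success rate is easy to quantify. As a rough sketch (our own helper functions, not from any benchmark), assume failed requests are simply retried until one succeeds, with each attempt succeeding independently at the same rate; the expected number of attempts per successful scrape is then `1 / success_rate`:

```python
def expected_attempts(success_rate: float) -> float:
    """Expected number of requests needed for one successful scrape,
    assuming each retry succeeds independently with the same probability."""
    if not 0 < success_rate <= 1:
        raise ValueError("success_rate must be in (0, 1]")
    return 1 / success_rate

def effective_cost(price_per_request: float, success_rate: float) -> float:
    """Average spend per successfully scraped page, retries included."""
    return price_per_request * expected_attempts(success_rate)

# A service at 98% success is cheaper per result than one at 70%,
# even when the per-request price looks identical:
print(effective_cost(1.0, 0.98))  # ~1.02 credits per successful page
print(effective_cost(1.0, 0.70))  # ~1.43 credits per successful page
```

Real retry behavior varies by client, but the takeaway holds: a lower success rate silently inflates the real price per result.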
Three types of failures can be encountered when using web scraping APIs:
1. Websites use Anti-bot tools that block the scraper.
2. Service encounters technical difficulties when scraping a specific page.
3. Page is not scrapable because of user error.
Various [anti-bot systems](https://scrapeway.com/anti-bot-services) are by far the most common reason why web scraping fails; in our benchmarks, they represent over 95% of all failures.
Service outages and bugs are much rarer with the [top web scraping APIs](https://scrapeway.com/targets) we’ve tested and are usually resolved quickly, representing the remaining 5% of all failures.
As for user errors, these are harder to measure as this data is only available to web scraping API providers.
The primary cause of this issue is that some pages require headers or cookies to be set correctly for a successful scrape. Ideally, web scraping services could adjust requests automatically, but we haven’t seen any features like this except for:
- WebScrapingAPI and ScrapingDog report suggestions for optimal API parameters though not scraping configuration like headers.
- Scrapfly’s “ASP” feature can modify _some_ headers, cookies and browser fingerprints.
Overall, we think that success rate is by far the most important metric when it comes to evaluating web scraping APIs, and it’s the entire reason we founded the Scrapeway benchmarks. Though other metrics are important too.
### Speed
Speed is an essential metric to assess when looking for the best API for real-time web scraping.

_Speed rate average of each service_
In our benchmarks, we see that [ScrapingBee](https://scrapeway.com/web-scraping-api/scrapingbee) and [Scrapfly](https://scrapeway.com/web-scraping-api/scrapfly) are the fastest, at a 4s average scrape time, making these services more favorable for real-time web scraping applications.
Real-time web scrapers are usually integrated with real-time processes like web apps or data retrieval on demand systems. It’s a small niche of the overall web scraping market but here every second counts.
For optimal speed performance consider these factors:
- Avoid using the headless browser feature.
All web scraping APIs offer scraping using headless browsers; however, we found that this often slows down the scraping process significantly, often by 3–10x!
- Aim for a high success rate.
Failed requests need to be retried which means double or even triple the scrape time.
These two are by far the biggest factors that affect web scraping speed. Though, service stability and geographical location can also have minor impacts on overall speed performance.
Overall, speed is an important metric when evaluating real-time web scraping APIs. For data-collection scrapers, the more important metric is often the concurrency limit, which varies by service and plan from 5 to unlimited.
If you’re unfamiliar with concurrency limits: it’s the number of requests that can be sent at the same time. This can be a bottleneck for how quickly a full dataset can be collected, but given the average 5–10 second scrape time, a concurrency of 10 can generate 60–120 scrapes per minute. This is often enough for most scrapers, but it’s worth considering nevertheless.
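That back-of-the-envelope throughput calculation can be written down directly; a minimal sketch (the helper name is our own invention):

```python
def scrapes_per_minute(concurrency: int, avg_scrape_seconds: float) -> float:
    """Upper bound on throughput: each of `concurrency` slots completes
    one scrape every `avg_scrape_seconds` seconds."""
    return concurrency * 60 / avg_scrape_seconds

# The 60-120 scrapes/minute range quoted above:
print(scrapes_per_minute(10, 10))  # 60.0
print(scrapes_per_minute(10, 5))   # 120.0
```

This is a ceiling, not a guarantee: retries and rate limiting will pull real throughput below it.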
### Price
Price is the final metric of the evaluation process though unlike success rate and speed, it’s significantly less technical.

_Cost per mil average of each service_
In this benchmark, we see that [ScrapingDog](https://scrapeway.com/web-scraping-api/scrapingdog) clearly leads on average price, though with a significantly lower success rate than [ScrapingBee](https://scrapeway.com/web-scraping-api/scrapingbee) and [Scrapfly](https://scrapeway.com/web-scraping-api/scrapfly), which are in a similar price range.
Most web scraping services follow the API credit-based model. Here each user subscribes to a monthly plan that gives X amount of credits to spend for the month.
The most important evaluation axis here is the credit cost of additional features. For example here are some features that often cost extra:
- Enabling of better proxies.
- Anti-bot bypass.
- Bandwidth consumed.
- Headless browser usage.
- Screenshot capture.
In our benchmarks, we represent pricing as the bare-minimum spend required for scraping a single page, avoiding as many of these optional features as possible to reduce costs.
Generally, all web scraping services have a similar pricing model where a bare scrape request costs 1 credit and extra features cost extra credits:
- Headless browser is usually +5 to +10 credits
- High quality or residential proxies vary between +10 to +25 credits
This pricing model of “pay for what you use” is very convenient and provides a lot of space for budget optimization.
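Under that credit model, the cost of a single request is just the base credit plus each enabled feature’s surcharge. A sketch using the illustrative credit values above (the surcharge numbers are examples only; check your provider’s pricing page):

```python
# Illustrative surcharges only -- actual values vary per service and plan.
FEATURE_CREDITS = {
    "headless_browser": 5,    # often +5 to +10 credits
    "residential_proxy": 10,  # often +10 to +25 credits
}

def request_credits(features: list[str], base: int = 1) -> int:
    """Total credits for one request: base cost plus enabled feature surcharges."""
    return base + sum(FEATURE_CREDITS[f] for f in features)

print(request_credits([]))                                         # 1
print(request_credits(["headless_browser", "residential_proxy"]))  # 16
```

Running this kind of calculation per target is a quick way to compare plans before committing to a monthly subscription.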
The biggest effect on overall price is the success rate as some services require extra features to be enabled like headless browsers or premium proxies to scrape successfully.
Another important metric to evaluate is entry point price. Some services like ScrapingAnt start as low as $19/month while others like Zenrows require a hefty investment of $69/month as a minimum. Here’s a quick summary:

_Min entry point in USD for each web scraping service_
Note that credits do not carry over month-to-month making this minimum commitment a very important pricing metric.
Finally, the pricing can fluctuate wildly every week so [refer to our weekly benchmarks](https://scrapeway.com/blog/index) for the most up-to-date pricing.
### Other features and support
To wrap this up there are other edge cases that can be important when evaluating web scraping APIs like additional features, support, and documentation.
For features, AI data parsing is a popular one. Unfortunately, it’s not offered by all services and each implementation varies greatly so your experience might vary.
Full browser automation features can be critical for scraping more complex web applications that require user inputs like clicks and form submissions. This feature is also great for developers new to web scraping as it requires no reverse engineering knowledge.
For support, most services offer live chat and extensive documentation as well as an interactive API player for testing. Live chat is incredibly valuable as things tend to change and move quickly in the web scraping field.
For our full coverage of all features see our [services overview](https://scrapeway.com/web-scraping-api) directory.
### Conclusion
So the best web scraping API can clearly be found through real evaluation techniques that measure success rate, speed, and price.
Smaller companies might want to target the price metric more while bigger operations should focus on success rate and reliability.
Either way, each evaluation should be done for each individual web scraping target as scraping conditions vary greatly for each website. For this, see [Scrapeway’s weekly benchmarks](https://scrapeway.com/) for the most up-to-date information every week!
_Originally published at_ [_https://scrapeway.com_](https://scrapeway.com/blog/what-is-the-best-web-scraping-api-service) _on June 5, 2024._
---
title: OCR with tesseract, python and pytesseract
published: true
date: 2024-06-05 00:00:00 UTC
tags: python,ai,ocr,machinelearning
canonical_url: https://coffeebytes.dev/en/ocr-with-tesseract-python-and-pytesseract/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sx0no0vh23re7lra4w6b.jpg
---
Python is super versatile: it has a giant community with libraries that let you achieve great things in a few lines of code, and Optical Character Recognition (OCR) is one of them. You just need to install tesseract and its Python bindings, called pytesseract, and you’ll be ready to convert an image to a string.
## Applications of OCR
OCR is quite useful for social networks, where you can scan the text that appears in the images to read its content and then process it or give it statistical treatment.
Here’s another case, imagine a program that scans image boards or social networks, extracts a couple of images from the posted videos and links them to a Tik Tok account using the watermark that appears on each video.
[Captcha resolution](https://coffeebytes.dev/en/my-analysis-of-anti-bot-captchas-and-their-advantages-and-disadvantages/) is also one of the most interesting uses of OCR.
Or maybe a page that uploads images of its products with the prices written on each of them. With OCR it is possible to download and process those images, extract all the prices, and load them into your database.
Facebook must use some kind of similar technology to censor images that include offensive text, according to its policies, that are uploaded to its social network.

_Facebook is capable of reading the text on its images_
Another of the most common applications is turning a scanned PDF book into images and then into text, ideal for converting old book scans into epub or plain-text files.
As you can see it is quite useful; I think it is one of the AI applications that will not go away [when the AI bubble crashes](https://coffeebytes.dev/en/the-rise-and-fall-of-the-ai-bubble/).
## Installation of tesseract-ocr
To perform OCR with Python we will need tesseract, which is the library that handles all the heavy lifting and image processing.
Make sure you install the newest tesseract-ocr; there is a huge difference between version 3 and versions 4 and later, as neural networks were implemented to improve character recognition. I am using version 5 alpha.
``` bash
sudo apt install tesseract-ocr
tesseract -v
tesseract 5.0.0-alpha-20201224-3-ge1a3
```
Differences in OCR engine efficiency between tesseract 3 and tesseract 5 alpha.

_Comparison between OCR performance of tesseract 3 and tesseract 5_
### Installing languages in tesseract
We can see which languages are installed with _--list-langs_.
``` bash
tesseract --list-langs
```
It is obvious, but it is necessary to mention that the extent to which it recognizes the text will depend on whether we use it in the correct language. Let’s install the Spanish language.
``` bash
sudo apt install tesseract-ocr-spa
tesseract --list-langs
List of available languages (3):
eng
osd
spa
```
You will see that Spanish is now installed and we can use it to detect the text in our images by adding the _-l spa_ option to the end of our command.
## OCR with tesseract
Now let’s put it to the test to recognize text in images, straight from the terminal. I am going to use the following image:

``` bash
tesseract image_with_text.jpg -
Warning: Invalid resolution 0 dpi. Using 70 instead.
Estimating resolution as 139
Do you have the time to listen to me whine
...
```
The “-” at the end of the command tells tesseract to send the results of the analysis to the standard output, so that we can view them in the terminal.
It is possible to tell tesseract which OCR engine to use:
- 0: for the original tesseract
- 1: for neural networks
- 2: tesseract and neural networks
- 3: Default, whichever is available
``` bash
tesseract image_with_text.jpg - --oem 1
```
Consider that **not all language files work with the original tesseract** (0 and 3). Although generally the neural networks one is the one that gives the best result. You can find the models compatible with the original tesseract and neural networks in the [tesseract repository](https://github.com/tesseract-ocr/tessdata).
You can install them manually by downloading them and moving them to the appropriate folder, in my case it is _/usr/local/share/tessdata/_, but it may be different on your system.
``` bash
wget https://github.com/tesseract-ocr/tessdata/raw/main/eng.traineddata
sudo mv eng.traineddata /usr/local/share/tessdata/
```
## Installing pytesseract
After installation we add pytesseract (the python bindings) and pillow (for image management) to our virtual environment.
``` bash
pipenv install pytesseract pillow
```
## Read strings from images with python
First let’s check the languages we have installed.
``` python
import pytesseract
print(pytesseract.get_languages())
# ['eng', 'osd', 'spa']
```
Now that we have the languages, we can read the text that’s in our images and process it as a string in our script.
The code is quite short and self-explanatory. Basically we pass the image as an argument to pytesseract’s _image\_to\_string_ method.
``` python
import pytesseract
from PIL import Image
img = Image.open("image_with_text.jpg") # Open the image with pillow
img.load()
text = pytesseract.image_to_string(img, lang='eng') # Extract image's text
print(text)
# Do you have the time to listen to me whine...
```
The _image\_to\_string_ method can also receive the language in which we want it to detect the text as an argument.
Tesseract comes with a method with which we can obtain much more information from the image, _image\_to\_data_, available for versions higher than 3.05.
``` python
data = pytesseract.image_to_data(img)
print(data)
```
_Sample output of image\_to\_data_
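By default, _image\_to\_data_ returns the results as tab-separated text with one row per detected element, including a confidence column. As a quick illustration of working with that output (the sample below is a made-up two-word string in that layout, not real tesseract output; pytesseract can also return a dict via `output_type=pytesseract.Output.DICT`), the rows can be filtered by confidence like this:

```python
import csv
import io

# Made-up sample in the TSV layout that image_to_data produces.
sample = (
    "level\tpage_num\tblock_num\tpar_num\tline_num\tword_num\t"
    "left\ttop\twidth\theight\tconf\ttext\n"
    "5\t1\t1\t1\t1\t1\t10\t10\t80\t20\t96\tDo\n"
    "5\t1\t1\t1\t1\t2\t95\t10\t90\t20\t42\tyou\n"
)

def confident_words(tsv: str, min_conf: float = 60.0) -> list[str]:
    """Keep only words whose recognition confidence passes a threshold."""
    reader = csv.DictReader(io.StringIO(tsv), delimiter="\t")
    return [row["text"] for row in reader if float(row["conf"]) >= min_conf]

print(confident_words(sample))  # ['Do']
```

Filtering low-confidence rows like this is a common way to discard OCR noise before further processing.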
If you want to learn more visit the [complete tesseract documentation](https://github.com/tesseract-ocr/tesseract).
# React: Design Patterns | Understanding Layout Components

Layout components in React are specialized components designed to arrange other components on a page. Their primary role is to manage the layout, allowing the main content components to remain agnostic about their placement. This separation of concerns enhances flexibility and reusability. Examples include split screens, lists, and modals.
## Split-Screen Components
A split-screen component divides the screen into two sections, each displaying a different component. Here's how you can create one:
1. **Define the Component**: Create a new file, `SplitScreen.js`, and define the component with `left` and `right` props.
2. **Structure the Layout**: Use a container div to hold two child divs for the left and right components.
3. **Style the Layout**: Use styled-components to apply flexbox styles, ensuring both sides take up equal space.
```javascript
import styled from 'styled-components';
export const SplitScreen = ({ left: Left, right: Right }) => {
return (
<Container>
<Pane><Left /></Pane>
<Pane><Right /></Pane>
</Container>
);
};
const Container = styled.div`
display: flex;
`;
const Pane = styled.div`
flex: 1;
`;
```
### Enhancing Split-Screen Components
To make the split-screen component more flexible, you can add weight props to control the space each side occupies:
1. **Add Weight Props**: Introduce `leftWeight` and `rightWeight` props with default values.
2. **Apply Weights**: Pass these weights to the styled components and adjust the flex property accordingly.
```javascript
export const SplitScreen = ({ left: Left, right: Right, leftWeight = 1, rightWeight = 1 }) => {
return (
<Container>
<Pane weight={leftWeight}><Left /></Pane>
<Pane weight={rightWeight}><Right /></Pane>
</Container>
);
};
const Pane = styled.div`
flex: ${props => props.weight};
`;
```
## Lists and List Items
Lists are another common layout pattern. You can create reusable list components that display different types of items:
1. **Define List Items**: Create small and large list item components for different data types (e.g., people, products).
2. **Create a Regular List**: A generic list component that takes items, a resource name, and an item component as props.
```javascript
export const RegularList = ({ items, resourceName, itemComponent: ItemComponent }) => {
return (
<>
{items.map((item, i) => (
<ItemComponent key={i} {...{ [resourceName]: item }} />
))}
</>
);
};
```
### Numbered Lists
To create a numbered list, extend the regular list component to include item numbers:
```javascript
import React from 'react';

export const NumberedList = ({ items, resourceName, itemComponent: ItemComponent }) => {
return (
<>
{items.map((item, i) => (
<React.Fragment key={i}>
<h3>{i + 1}</h3>
<ItemComponent {...{ [resourceName]: item }} />
</React.Fragment>
))}
</>
);
};
```
## Modal Components
Modals are used to display content over the main page. Here's how to create a simple modal component:
1. **Define the Modal**: Create a new file, `Modal.js`, and define the modal component with state management for visibility.
2. **Structure the Modal**: Use styled-components for the modal background and body.
3. **Toggle Visibility**: Add buttons to show and hide the modal.
```javascript
import React, { useState } from 'react';
import styled from 'styled-components';
export const Modal = ({ children }) => {
const [shouldShow, setShouldShow] = useState(false);
return (
<>
<button onClick={() => setShouldShow(true)}>Show Modal</button>
{shouldShow && (
<ModalBackground onClick={() => setShouldShow(false)}>
<ModalBody onClick={e => e.stopPropagation()}>
{children}
<button onClick={() => setShouldShow(false)}>Hide Modal</button>
</ModalBody>
</ModalBackground>
)}
</>
);
};
const ModalBackground = styled.div`
position: fixed;
top: 0;
left: 0;
width: 100%;
height: 100%;
background-color: rgba(0, 0, 0, 0.5);
`;
const ModalBody = styled.div`
position: fixed;
top: 50%;
left: 50%;
transform: translate(-50%, -50%);
background-color: white;
padding: 20px;
`;
```
### Conclusion
Layout components in React help manage the arrangement of other components on a page. By separating layout concerns from content concerns, you gain flexibility and reusability in your codebase. Whether you're creating split screens, lists, or modals, understanding and utilizing layout components can significantly enhance your development workflow.
---
[React: Design Patterns](https://github.com/LinkedInLearning/react-design-patterns-2895130) is a course by Shaun Wassell that you can follow on LinkedIn Learning.
---
title: Welcome Thread - v279
published_at : 2024-06-05 00:00 +0000
---

---
1. Leave a comment below to introduce yourself! You can talk about what brought you here, what you're learning, or just a fun fact about yourself.
2. Reply to someone's comment, either with a question or just a hello. 👋
3. Come back next week to greet our new members so you can one day earn our [Warm Welcome Badge](https://dev.to/community-badges?badge=warm-welcome)!
# Implementing Native Code in React Native

Hey devs!
React Native is an excellent tool for building mobile applications with a single codebase that can run on both iOS and Android. However, sometimes we need to access platform-specific functionalities that are not available in the standard React Native library. In these situations, we can turn to native code implementations.
In this post, we will explore how to add native functionalities to your React Native application using native packages. We will use the example of a Gallery library, which already has native implementations in Swift for iOS and Kotlin for Android.
### Why Use Native Code?
- **Performance**: Native code can be optimized for the specific platform, offering better performance.
- **Access to Specific APIs**: Some functionalities are only available through native APIs.
- **Integration with Native Libraries**: Use of libraries or SDKs that only exist in native form.
### Setting Up the Environment
Before we start, make sure you have your environment set up for React Native development, including Xcode for iOS and Android Studio for Android.
### Project Structure
Assume we have the following basic structure of a React Native project:
```
my-react-native-app
├── android
├── ios
├── src
│ ├── components
│ ├── screens
│ ├── App.js
├── package.json
```
### Adding Native Code
#### Creating the Native Module
Let's create a native module that exposes gallery functionalities. We'll start with Android using Kotlin.
#### Android (Kotlin)
1- **Create a new Kotlin module:**
Navigate to `android/app/src/main/java/com/myreactnativeapp/` and create a new directory `gallery`.
2- **Add the Kotlin class:**
Create a `GalleryModule.kt` file inside the `gallery` directory:
```kotlin
package com.myreactnativeapp.gallery
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.bridge.ReactContextBaseJavaModule
import com.facebook.react.bridge.ReactMethod
import com.facebook.react.bridge.Promise
class GalleryModule(reactContext: ReactApplicationContext) : ReactContextBaseJavaModule(reactContext) {
override fun getName(): String {
return "GalleryModule"
}
@ReactMethod
fun openGallery(promise: Promise) {
// Implementation to open the gallery
promise.resolve("Gallery opened successfully!")
}
}
```
3- **Register the module:**
In the same directory, create a `GalleryPackage.kt` file:
```kotlin
package com.myreactnativeapp.gallery
import com.facebook.react.ReactPackage
import com.facebook.react.bridge.ReactApplicationContext
import com.facebook.react.uimanager.ViewManager
import com.facebook.react.bridge.NativeModule
class GalleryPackage : ReactPackage {
override fun createNativeModules(reactContext: ReactApplicationContext): List<NativeModule> {
return listOf(GalleryModule(reactContext))
}
override fun createViewManagers(reactContext: ReactApplicationContext): List<ViewManager<*, *>> {
return emptyList()
}
}
```
4- **Update `MainApplication.java`:**
Add the new package to the list of registered packages:
```java
import com.myreactnativeapp.gallery.GalleryPackage; // Import the package
@Override
protected List<ReactPackage> getPackages() {
@SuppressWarnings("UnnecessaryLocalVariable")
List<ReactPackage> packages = new PackageList(this).getPackages();
packages.add(new GalleryPackage()); // Add the new package
return packages;
}
```
#### iOS (Swift)
1- **Create the Swift module:**
Navigate to `ios` and open the project in Xcode. In `ios`, create a new Swift file `GalleryModule.swift` inside the `Libraries` directory.
```swift
import Foundation
import React
@objc(GalleryModule)
class GalleryModule: NSObject {
@objc
func openGallery(_ resolve: @escaping RCTPromiseResolveBlock, rejecter reject: @escaping RCTPromiseRejectBlock) {
// Implementation to open the gallery
resolve("Gallery opened successfully!")
}
}
```
2- **Update the bridge header:**
Open `ios/YourProjectName-Bridging-Header.h` and add:
```objc
#import "React/RCTBridgeModule.h"
```
3- **Register the module:**

Expose the Swift class to React Native by creating an Objective-C file, `GalleryModule.m`, that uses the `RCT_EXTERN_MODULE` macro. React Native discovers modules registered this way automatically, so no changes to `AppDelegate.m` are needed:

```objc
#import <React/RCTBridgeModule.h>

@interface RCT_EXTERN_MODULE(GalleryModule, NSObject)

RCT_EXTERN_METHOD(openGallery:(RCTPromiseResolveBlock)resolve
                  rejecter:(RCTPromiseRejectBlock)reject)

@end
```
### Using the Native Module in React Native
Now that we have our native modules set up, we can use them in our JavaScript code.
1- **Create a JavaScript file for the module:**
```javascript
// src/nativeModules/GalleryModule.js
import { NativeModules } from 'react-native';
const { GalleryModule } = NativeModules;
const openGallery = async () => {
try {
const result = await GalleryModule.openGallery();
console.log(result);
} catch (error) {
console.error(error);
}
};
export { openGallery };
```
2- **Use the module in your component:**
```javascript
// src/screens/GalleryScreen.js
import React from 'react';
import { View, Button } from 'react-native';
import { openGallery } from '../nativeModules/GalleryModule';
const GalleryScreen = () => {
return (
<View>
<Button title="Open Gallery" onPress={openGallery} />
</View>
);
};
export default GalleryScreen;
```
### Conclusion
Adding native code to your React Native application can seem challenging at first, but with the right steps, you can easily extend your app's capabilities to include platform-specific functionalities. This guide has shown how to create and integrate simple native modules for both Android and iOS. With this foundation, you can explore further and add complex functionalities as needed.
### References
1. [React Native Documentation on Native Modules](https://reactnative.dev/docs/native-modules-setup)
2. [React Native Documentation on Integrating with Swift](https://reactnative.dev/docs/native-modules-ios)
3. [React Native Documentation on Integrating with Kotlin](https://reactnative.dev/docs/native-modules-android)
This guide should provide a solid foundation for you to start working with native code in React Native. If you have any questions or run into issues, the React Native community is very active and can be an excellent resource for additional support. | paulocappa |
# How to Create a Responsive Card Using Plain HTML & CSS

When you go online and visit different websites, you will always come across websites that have an image with text at either the top or bottom of the image. A profile picture on a platform with bio text at the bottom is a perfect example of this context.
The image with the text is called a CARD in HTML and CSS. A card contains a title, a picture (photo), and descriptive information. This article is about how to create a card (image) that responds by either moving up or down once hovered over. It will serve both beginners and experts in HTML and CSS.
### Preview of the Final Responsive Card Output
We will create the responsive card below. You will notice that once you move your cursor over any of the three images, the image responds by moving up or down: some exciting but basic animation.

### Creating the Responsive Card from Scratch
We will first create the structure of our project using the markup language, HTML, then proceed to style it up using CSS.

### Step-by-Step Process of Creating the Responsive Card
- Download 3 pictures of your choice (click here for free pictures: https://unsplash.com/s/photos/forms).
- Create a folder and give it a name of your choice (I named my folder “Box or Card”).
- Open the folder you just created and inside it create another folder, give it a name of your choice, and move the pictures you just downloaded here.
- Open VS Code, go to File, select Open Folder, then choose the folder you just created.
- While in VS Code, create a new file and name it index.html. The file name must have the extension .html
- Create a new file and name it style.css. The file name must have the extension .css and link the two files as seen in the below syntax
- You should have something like the snip below; I named my pictures; Growth, Development, and Reproduction.

### HTML Syntax
Open index.html and enter the below code;
```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>BOX OR CARD</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<h2>RESPONSIVE SERVICES CARD BY KINGSGEE</h2>
<h3>DISCUSSION ON GROWTH, DEVELOPMENT AND MATURATION</h3>
<div class="container">
<div class="box">
<img src="Growth.jpg" alt="">
<h3>Growth</h3>
<p>Growth is the change in the physical characteristics of an individual,
such as an increase in weight or length
</p>
<button class="button">See more</button>
</div>
```
Output:

### Continuation of the HTML Syntax
To add more images, add the code below:

```
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>BOX OR CARD</title>
<link rel="stylesheet" href="style.css">
</head>
<body>
<h2>RESPONSIVE SERVICES CARD BY KINGSGEE</h2>
<h3>DISCUSSION ON GROWTH, DEVELOPMENT AND MATURATION</h3>
<div class="container">
<div class="box">
<img src="Growth.jpg" alt="">
<h3>Growth</h3>
<p>Growth is the change in the physical characteristics of an individual,
such as an increase in weight or length
</p>
<button class="button">See more</button>
</div>
<div class="box">
<img src="Development.jpg" alt="">
<h3>Development</h3>
<p>This refers to the change in a structure such as thoughts or behavior of an individual
</p>
<button class="button">See more</button>
</div>
<div class="box">
<img src="Reproduction.jpg" alt="">
<h3>Maturation</h3>
<p>It's the emergence or unfolding of an individual's genetic potential as they become older
</p>
<button class="button">See more</button>
</div>
</div>
<br><br>
<h3>™ ©+254700809861 for more inquiries</h3>
</body>
</html>
```
Output:

With the above HTML syntax, our structure is set. The next step is to introduce the powerful CSS to make our images responsive.
### Application and Implementation of different designs and styles on Cards.
Discussing the Asterisk ( *), Margins, Border, Padding, and other CSS styles
The Asterisk (*):➜ is the CSS universal selector that selects all elements in the HTML document
Margin:➜ Used to create extra space around an element.
Border:➜ Used to set an element’s border.
Padding:➜ Used to create space between the content of an element and its border; the initial value is 0, though browser stylesheets add default padding to some elements (such as lists).
CSS Hover, Transform, Transition, and some Animations and how to apply them
```
* {
margin: 0;
padding: 0;
text-align: center;
font-family: arial;
}
/* Adding color on background */
body{
background-color: #146f6f;
padding-top: 50px ;
padding-bottom: 100px;
}
/* Adding color on the headings */
h2{
color: #0c0115;
}
h3 { color: #0c0115;
font-size: 30px;
margin-top: 10px;
}
.container{
padding-top:50px;
gap: 25px ;
margin: auto;
display:flex;
justify-content: center;
}
/* Adding responsiveness to the images */
.box{width: 320px;
height: 500px;
background-color: #3ea9a9;
border-radius: 100px;
transition:transform 0.6s ease;
}
/* Adding hover */
.box:hover{
transform: translateY(30px);
}
.box img{
width: 250px;
height: 250px;
padding-top: 30px;
border-radius: 30%;
}
.box p {
color: #1a082a;
padding-top: 10px;
padding-bottom: 10px;
line-height: 25px;
font-family: Georgia;
font-weight: 5px;
}
.box button{
width: 150px;
height: 40px;
background-color: #1c092e;
border-radius: 10px;
border: none;
color: #b788e2;
font-size: 17px;
font-weight: bold;
margin-top: 2px;}
.box button:hover{
background-color: #1232ea;
color: whitesmoke;
border: 2px solid salmon;
}
```
Output:

### Importance of Cards in Web Development
A card’s importance cannot be overstated, as it is crucial in providing a user-friendly representation of information. A card provides a summary of what the developer intends to present in a visually appealing way. Notice that in our mini project above, we were able to add pictures, put in some text, and define what the pictures are about; they look good, right? 90% of online websites have some aspect of a card, and developers should embrace this exciting skill. While this article has only scratched the surface, developers are encouraged to keep learning different CSS card designs.
### Conclusion
In summary, our mini project above has helped us create a web page that has some animation, is user-friendly, and is easy to read. Design cards are essential for displaying content in an effective and organized manner, making a website user-friendly and visually appealing, as we can see.
This knowledge can be applied to the making of different types of websites, including portfolio websites, e-commerce websites, personal websites, business websites, etc. Thanks to the internet, I can voice my valid opinions and help other developers learn and grow in tech.
| george_kingi |
1,875,908 | Elanat CMS One Year Birthday | Today, June 5, 2024, is the one year anniversary of Elanat CMS. The first version of Elanat CMS... | 0 | 2024-06-04T23:23:01 | https://dev.to/elanatframework/elanat-cms-one-year-birthday-bif | news, opensource, dotnet, github | Today, June 5, 2024, is the one year anniversary of [Elanat CMS](https://github.com/elanatframework/Elanat). The first version of Elanat CMS (1.0.0.0) was released on June 5, 2023. Before the release of version 1, Elanat CMS was developed by Mohammad Rabie over more than 10 years. Elanat CMS is a large CMS-Framework with nearly 600 View files and Controller classes.

> Mohammad Rabie: I want to say that I could never get along with the default web architectures of .NET, and I always criticized the weak structures of .NET architectures. I created Elanat CMS on the Web Forms platform in .NET Standard, without server controls. After a short time, I decided to move Elanat CMS from .NET Standard to .NET Core; unfortunately, the default ASP.NET Core MVC and Razor Pages frameworks did not meet my needs and desires, because they are weak in terms of dynamics and modularity and establish a hard coupling. That's why I decided to build a new web framework on .NET Core to meet all my needs. I named this framework CodeBehind. The migration of Elanat's content management system from .NET Standard to .NET Core using the CodeBehind framework took only two weeks. Migrating to .NET Core brought many benefits for Elanat CMS; in addition to high execution speed, there was no need to restart the application to add a new add-on.
## Version 2.2
Soon we will release version 2.2 of the Elanat content management system. In this version, we have made many changes to give web developers the best possible user experience.
## Elanat logo change
[Elanat](https://elanat.net) is a team that has released and supports two products:
1- Elanat CMS

It is a very large and powerful content management system; it is actually a CMS Framework.
2- CodeBehind Framework

A new, flexible and modular back-end framework based on .NET Core.
As you know, the Elanat logo was the same as the Elanat CMS logo, which is why we at the Elanat team decided to create a new logo for Elanat (elanat.net) on the occasion of the one-year anniversary of Elanat CMS.

This logo consists of three letters, E, L, and F, written one after the other. E and L are the first letters of Elanat, and F is the first letter of Framework. In this logo, the letter F is written in reverse, which shows our focus on the design and development of frameworks.
### Related links
Elanat CMS on GitHub:
https://github.com/elanatframework/Elanat
Elanat CMS website:
https://elanat.net
CodeBehind on GitHub:
https://github.com/elanatframework/Code_behind
CodeBehind in NuGet:
https://www.nuget.org/packages/CodeBehind/
CodeBehind page:
https://elanat.net/page_content/code_behind | elanatframework |
1,877,293 | React: Design Patterns | Container Components | In the world of React, there's a design pattern called Container Components. If you're a beginner or... | 0 | 2024-06-04T23:13:04 | https://dev.to/andresz74/react-design-patterns-container-components-5g70 | react, javascript, designpatterns | In the world of React, there's a design pattern called Container Components. If you're a beginner or intermediate React developer, you might be used to having each child component load its own data. Typically, you'd use hooks like `useState` and `useEffect` along with libraries like Axios or Fetch to get data from a server. This works, but it can get messy when multiple child components need to share the same logic. That's where container components come in.
Container components handle all the data loading and management for their child components. They abstract away the data-fetching logic, allowing child components to focus solely on rendering. This separation of concerns makes your code cleaner and more maintainable.
## The Main Idea
The main idea behind container components is similar to layout components. Just as layout components ensure that child components don't need to know or care about their layout, container components ensure that child components don't need to know where their data is coming from or how to manage it. They just take some props and display whatever they need to display.
## A Simple Example: CurrentUserLoader
Let's start with a simple example. Suppose we have a `CurrentUserLoader` component that loads the current user's data and passes it to a `UserInfo` component.
```jsx
import React, { useState, useEffect } from 'react';
import axios from 'axios';
export const CurrentUserLoader = ({ children }) => {
const [user, setUser] = useState(null);
useEffect(() => {
const fetchData = async () => {
const response = await axios.get('/current-user');
setUser(response.data);
};
fetchData();
}, []);
return (
<>
{React.Children.map(children, child => {
if (React.isValidElement(child)) {
return React.cloneElement(child, { user });
}
return child;
})}
</>
);
};
```
In this example, `CurrentUserLoader` fetches the current user's data and passes it down to its children as a `user` prop. The `UserInfo` component can then use this prop to display the user's information.
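To make the data flow concrete, here is what a minimal presentational child might look like. Note that this `UserInfo` implementation (and its `name`/`age` fields) is an illustrative sketch, not code from the course:

```jsx
// A purely presentational component: it receives props and renders them.
export const UserInfo = ({ user }) => {
  // While the container is still fetching, `user` is null.
  if (!user) {
    return <p>Loading...</p>;
  }
  const { name, age } = user;
  return (
    <>
      <h3>{name}</h3>
      <p>Age: {age}</p>
    </>
  );
};

// Usage: the loader injects the `user` prop into its children.
// <CurrentUserLoader>
//   <UserInfo />
// </CurrentUserLoader>
```

Because the loader clones its children with the extra prop, `UserInfo` can also be reused anywhere else by passing a `user` prop directly.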
## Making It More Flexible: UserLoader
The `CurrentUserLoader` is useful but limited. It only loads the current user's data. What if we want to load any user's data by their ID? We can create a more flexible `UserLoader` component.
```jsx
import React, { useState, useEffect } from 'react';
import axios from 'axios';
export const UserLoader = ({ userId, children }) => {
const [user, setUser] = useState(null);
useEffect(() => {
const fetchData = async () => {
const response = await axios.get(`/users/${userId}`);
setUser(response.data);
};
fetchData();
}, [userId]);
return (
<>
{React.Children.map(children, child => {
if (React.isValidElement(child)) {
return React.cloneElement(child, { user });
}
return child;
})}
</>
);
};
```
Now, `UserLoader` can load any user's data by their ID and pass it down to its children.
## Going Generic: ResourceLoader
We can take this concept even further by creating a generic `ResourceLoader` component that can load any type of resource from the server.
```jsx
import React, { useState, useEffect } from 'react';
import axios from 'axios';
export const ResourceLoader = ({ resourceUrl, resourceName, children }) => {
const [state, setState] = useState(null);
useEffect(() => {
const fetchData = async () => {
const response = await axios.get(resourceUrl);
setState(response.data);
};
fetchData();
}, [resourceUrl]);
return (
<>
{React.Children.map(children, child => {
if (React.isValidElement(child)) {
return React.cloneElement(child, { [resourceName]: state });
}
return child;
})}
</>
);
};
```
With `ResourceLoader`, you can load any resource by specifying its URL and name. This makes the component highly reusable.
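For instance, the earlier user-loading example can be expressed with `ResourceLoader`, and the very same component can load something entirely different (the URLs and the `BookInfo` component here are illustrative):

```jsx
// Loads a user and injects it as the `user` prop:
<ResourceLoader resourceUrl="/users/123" resourceName="user">
  <UserInfo />
</ResourceLoader>

// The same loader, reused for a completely different resource:
<ResourceLoader resourceUrl="/books/456" resourceName="book">
  <BookInfo />
</ResourceLoader>
```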
## The Ultimate Flexibility: DataSource
Finally, let's create a `DataSource` component that doesn't even know where its data is coming from. Instead of hardcoding the data-fetching logic, we'll pass a function that returns the data.
```jsx
import React, { useState, useEffect } from 'react';
export const DataSource = ({ getDataFunction, resourceName, children }) => {
const [state, setState] = useState(null);
useEffect(() => {
const fetchData = async () => {
const data = await getDataFunction();
setState(data);
};
fetchData();
}, [getDataFunction]);
return (
<>
{React.Children.map(children, child => {
if (React.isValidElement(child)) {
return React.cloneElement(child, { [resourceName]: state });
}
return child;
})}
</>
);
};
```
With `DataSource`, you can load data from any source—whether it's an API, local storage, or something else—by passing a function that returns the data.
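As a sketch of that flexibility, the same `DataSource` can be fed from the server or from local storage. The fetch helpers below are hypothetical examples, not part of the course code:

```jsx
import axios from 'axios';

// Any async function that returns data can act as a source.
const getDataFromServer = (url) => async () => {
  const response = await axios.get(url);
  return response.data;
};

const getDataFromLocalStorage = (key) => () => {
  return localStorage.getItem(key);
};

// The children never know where the data came from:
// <DataSource getDataFunction={getDataFromServer('/users/123')} resourceName="user">
//   <UserInfo />
// </DataSource>
//
// <DataSource getDataFunction={getDataFromLocalStorage('message')} resourceName="message">
//   <Message />
// </DataSource>
```

One caveat: since the `useEffect` above depends on `getDataFunction`, creating the function inline during render will retrigger the fetch on every render. In practice you would memoize it (for example with `useCallback`) or define it outside the component.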
## Conclusion
Container components are a powerful pattern in React that help you manage data loading and sharing logic across multiple components. By abstracting away the data-fetching logic into container components like `CurrentUserLoader`, `UserLoader`, `ResourceLoader`, and `DataSource`, you can make your code cleaner and more maintainable. This separation of concerns allows your child components to focus solely on rendering, making your application easier to understand and extend.
---
[React: Design Patterns](https://github.com/LinkedInLearning/react-design-patterns-2895130) is a course by Shaun Wassell that you can follow on LinkedIn Learning.
| andresz74 |
1,873,946 | Deploying static webs apps with the Azure cli and bicep | Deploying static webs apps via the cli and bicep | 0 | 2024-06-04T22:49:08 | https://dev.to/danwright/deploying-static-webs-apps-1dib | azure, devops, bicep, react | ---
title: Deploying static web apps with the Azure CLI and Bicep
published: true
description: Deploying static web apps via the CLI and Bicep
tags: #Azure #devops #Bicep #react
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-02 20:49 +0000
---
## Deploying static web apps
> A brief intro to deploying Azure Static Web Apps using the Azure CLI, and Bicep, Microsoft's domain-specific language (DSL) for deploying Azure resources.
This will deploy a create-react-app (CRA) starter repo, located in a DevOps repo, into a Static Web App resource using a simple .yaml file.
## What is a static web app
Azure Static Web Apps is a hosting service that automatically builds and deploys a web app to Azure from a code repository, serving the static assets (HTML, CSS, and JavaScript) from points distributed globally close to your users.
## Azure cli deployment
As a precursor to deploying via the command line, you will need the Azure CLI installed.
> Use `az version` to determine if you have the cli installed. If not follow these instructions for installing [https://learn.microsoft.com/en-us/cli/azure/install-azure-cli](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli)
Log in using `az login` and set the subscription you wish to work in with `az account set --subscription <SUBSCRIPTION-NAME>`.
Create a resource group to manage any resources deployed using `az group create --name <RESOURCE-NAME> --location <LOCATION>`
Finally, deploy the resource into the resource group at a location: `az staticwebapp create --name <SWA-RESOURCE-NAME> --resource-group <RESOURCE-NAME> --location <LOCATION>`
In the Azure portal under resource groups navigate to the newly deployed resource group which will contain the static web app.
## Bicep
This will deploy 1 resource and 1 module into a resource group within a subscription. Create a new Git repo in Azure DevOps to work in, which can be deployed using a `./azure-pipelines.yaml` file.
In order to create the resource group, set the scope of the `./main.bicep` file from its default of resourceGroup to subscription and then deploy the resource. The parameters needed will be passed in from a `./main.parameters.bicepparam` file that can be altered.
>`./main.bicep`
```
targetScope = 'subscription'
// Parameters imported from the `./main.parameters.bicepparam` file
param globalTags object
param resourceGroupName string
param resourceGroupLocation string
@description('Timestamp last deployment')
param utcShort string = utcNow('d')
var customTags = {
LastDeployed: utcShort
}
resource resourceGroup 'Microsoft.Resources/resourceGroups@2024-03-01' = {
name: resourceGroupName
location: resourceGroupLocation
tags: union(globalTags, customTags)
}
```
The parameters file will hold customizable options needed to deploy the resources. Update the resource group name and location with values and add custom global tags to be used in all resources.
>`./main.parameters.bicepparam`
```
using './main.bicep'
param globalTags = {
Environment: 'Dev'
}
param resourceGroupLocation = '<RESOURCE-GROUP-LOCATION>'
param resourceGroupName = '<RESOURCE-GROUP-NAME>'
```
To add the static web app while keeping its config out of the main file and aiding reuse, add it as a module.
>`./modules/static-web-app.bicep`
```
@description('Global tags')
param tags object
@allowed(['centralus', 'eastus', 'eastus2', 'westus', 'westus2'])
param resourceGroupLocation string
@description('Timestamp last deployment')
param utcShort string = utcNow('d')
var customTags = {
LastDeployed: utcShort
}
resource staticwebapp 'Microsoft.Web/staticSites@2023-12-01' = {
name: 'azStaticWebApp'
location: resourceGroupLocation
tags: union(tags, customTags)
properties: {}
sku: {
name: 'Free'
}
}
```
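One step the snippets above leave implicit: `./main.bicep` also needs a `module` declaration that references this file, otherwise the static web app is never deployed. A minimal sketch (the symbolic and deployment names are arbitrary choices, and the location passed must be one of the values in the module's `@allowed` list):

```bicep
// In ./main.bicep, after the resourceGroup resource:
module staticWebApp './modules/static-web-app.bicep' = {
  name: 'staticWebAppDeploy'
  scope: resourceGroup // deploy into the resource group created above
  params: {
    tags: globalTags
    resourceGroupLocation: resourceGroupLocation
  }
}
```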
From here you can deploy the resources using the Azure CLI: `az deployment sub create --location centralus --template-file ./main.bicep --parameters './main.parameters.bicepparam'`
To deploy the resources through a pipeline, create a `./azure-pipelines.yaml` file and use an inlineScript to run the same CLI command as above.
>`'./azure-pipelines.yaml'`
```
trigger:
- main
name: Bicep deploy
variables:
vmImageName: 'ubuntu-latest'
AzureServiceConnection: <SERVICE-CONNECTION-NAME>
bicepParamFile: './main.parameters.bicepparam'
pool:
vmImage: $(vmImageName)
steps:
- task: AzureCLI@2
inputs:
azureSubscription: $(AzureServiceConnection)
scriptType: bash
scriptLocation: inlineScript
useGlobalConfig: false
inlineScript: az deployment sub create --location <LOCATION> --template-file ./main.bicep --parameters $(bicepParamFile)
```
In Azure DevOps create a service connection that can be used to grant access to Azure for deploying resources into:
1. DevOps -> project settings -> service connections
2. New service connection -> Azure Resource Manager -> workload identity federation (automatic)
3. Scope: subscription -> ResourceGroup: leave blank -> Add a service connection name and description -> Grant access permission to all pipelines
Once created replace `<SERVICE-CONNECTION-NAME>` with the name of the newly created service connection and push the code changes.
## Deploy create react app
Create a new repo with create react app and add a `'./azure-pipelines.yaml'` file that deploys the repo in the static web app any time new changes are pushed to it.
>`'./azure-pipelines.yaml'`
```
trigger:
- main
pool:
vmImage: ubuntu-latest
steps:
- checkout: self
submodules: true
- task: AzureStaticWebApp@0
inputs:
app_location: '/'
output_location: '/build'
env:
azure_static_web_apps_api_token: $(deployment_token)
```
Get the deployment token from the static app resource under the `Manage deployment token` at the top of the overview tab.
Inside Azure DevOps, create a new pipeline from an Azure Repos Git repository with an existing Azure Pipelines YAML file. On the review stage, add a new variable for the deployment_token and paste in the static web app deployment token before running.
Validate the web app has been deployed by navigating to the URL on the overview tab.
Now, any time new changes are pushed to the create-react-app repo, those changes will be deployed to the web app.
Link to the [Git repo](https://github.com/sirBassetDeHound/techTuesday) | danwright |
1,877,285 | Best Practices for Working with Next.js | Next.js has rapidly become one of the most popular frameworks for building React applications,... | 0 | 2024-06-04T22:45:37 | https://dev.to/enitanogun1/best-practices-for-working-with-nextjs-49np |
Next.js has rapidly become one of the most popular frameworks for building React applications, offering a range of features that simplify development and enhance performance. Here are some best practices to follow when working with Next.js:
1. Leverage Static Generation and Server-Side Rendering
Next.js provides powerful data fetching methods like static generation (SSG) and server-side rendering (SSR).
- **Static Generation (`getStaticProps`)**: Use this for pages that can be pre-rendered at build time. This is ideal for static content and offers excellent performance because the content is served from a CDN.
- **Server-Side Rendering (`getServerSideProps`)**: Use this when you need to fetch data at request time. This is useful for content that changes frequently and needs to be up-to-date for every request.
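As a rough sketch of the static-generation side (the page name, endpoint, and fields are placeholders):

```jsx
// pages/posts.js: pre-rendered at build time via getStaticProps
export async function getStaticProps() {
  // This code runs only on the server, at build time.
  const res = await fetch('https://example.com/api/posts');
  const posts = await res.json();
  return {
    props: { posts }, // passed to the page component below
    revalidate: 60,   // optional: re-generate at most once a minute (ISR)
  };
}

export default function Posts({ posts }) {
  return (
    <ul>
      {posts.map((post) => (
        <li key={post.id}>{post.title}</li>
      ))}
    </ul>
  );
}
```

Swapping `getStaticProps` for `getServerSideProps` (same return shape, minus `revalidate`) moves the fetch from build time to request time.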
2. Optimize Images with Next/Image
Next.js includes an optimized image component (`next/image`) that automatically handles resizing, lazy loading, and serving images in modern formats like WebP. This can significantly improve your site's performance and user experience.
```jsx
import Image from 'next/image';
function MyComponent() {
return (
<Image
src="/path/to/image.jpg"
alt="Description"
width={500}
height={500}
/>
);
}
```
3. Utilize Dynamic Imports
Dynamic imports allow you to load components and modules only when they are needed, reducing the initial load time of your application. This can be particularly useful for large libraries or components that are not required immediately.
```jsx
import dynamic from 'next/dynamic';
const HeavyComponent = dynamic(() => import('../components/HeavyComponent'));
function MyPage() {
return <HeavyComponent />;
}
```
4. Implement API Routes
Next.js allows you to create API endpoints within the `pages/api` directory. This feature is ideal for handling form submissions, authentication, or fetching data from an external source without needing a separate backend.
```jsx
// pages/api/hello.js
export default function handler(req, res) {
res.status(200).json({ message: 'Hello, world!' });
}
```
5. Configure Custom Error Pages
Customize your error pages (`404.js` and `_error.js`) to improve user experience and provide more meaningful feedback. This can help with navigation and reducing bounce rates.
```jsx
// pages/404.js
export default function Custom404() {
return <h1>404 - Page Not Found</h1>;
}
```
6. Use Environment Variables
Next.js supports environment variables through `.env.local`, `.env.development`, and `.env.production` files. These variables can be accessed via `process.env`, helping you manage different configurations for various environments.
```bash
// .env.local
NEXT_PUBLIC_API_URL=http://localhost:3000
```
```jsx
// Accessing in code
const apiUrl = process.env.NEXT_PUBLIC_API_URL;
```
7. Enhance Performance with Built-in Analytics
Next.js applications can use Vercel's analytics package to help you monitor and optimize the performance of your application. By analyzing data on how users interact with your site, you can make informed decisions to improve speed and usability.
```jsx
// pages/_app.js
import { Analytics } from '@vercel/analytics/react';
function MyApp({ Component, pageProps }) {
return (
<>
<Component {...pageProps} />
<Analytics />
</>
);
}
export default MyApp;
```
8. Follow SEO Best Practices
Next.js has robust support for SEO through the `next/head` component, which allows you to manage meta tags, titles, and other important SEO elements.
```jsx
import Head from 'next/head';
function MyPage() {
return (
<Head>
<title>My Page Title</title>
<meta name="description" content="My page description" />
</Head>
);
}
```
Conclusion
By following these best practices, you can ensure that your Next.js applications are efficient, scalable, and maintainable. Leveraging the full potential of Next.js features can significantly enhance both developer experience and end-user satisfaction. | enitanogun1 | |
1,877,284 | How to create a Storage Account For a Public Website in Microsoft Azure | Skilling tasks • Create a storage account. • Configure basic settings for security and... | 0 | 2024-06-04T22:33:29 | https://dev.to/atony07/how-to-create-a-storage-account-in-microsoft-azure-44e7 | Skilling tasks
• Create a storage account.
• Configure basic settings for security and networking.
Follow these steps below;
Step:1: Create a Resource group or make use of an existing Resource group.
An existing Resource group was used here.
Step:2: In the Azure portal, search for and select Storage accounts

Step:3: Select Create

Step:4: On the Basics tab, select your Resource group and provide a Storage account name. The storage account name must be unique across Azure. Set the Performance to Standard, select Review, and then Create.

Step:5: Select Create

Step:6: Wait while the deployment is in progress

Step:7: Once the deployment is complete, go to the resource

Step:8: In your storage account, in the Data management section, select the Redundancy blade

Step:9: Select Locally-redundant storage (LRS) in the Redundancy drop-down and select Save

Step:10: In the Settings section, select the Configuration blade.

Step:11: Ensure Secure transfer required is Enabled, the Minimum TLS version is set to Version 1.2, and Allow storage account key access is Disabled, then click Save

Step:12: In the Security + networking section, select the Networking blade.

Step:13: Ensure Public network access is set to Enabled from all networks and Save.

| atony07 | |
1,877,283 | QDAL88 Link Situs Slot Gacor Terbaru Gampang Maxwin 2024 | Daftar KLIK DI SINI BOSKU !! Daftar KLIK DI SINI BOSKU !! QDAL88 menawarkan berbagai macam jenis... | 0 | 2024-06-04T22:32:41 | https://dev.to/qdal88login/qdal88-link-situs-slot-gacor-terbaru-gampang-maxwin-2024-noi | [Daftar KLIK DI SINI BOSKU](https://bigprofitbuzz.com/) !!

[QDAL88](https://bigprofitbuzz.com/) offers a wide variety of "slot gacor" (hot slot) games from leading providers such as Pragmatic Play, Habanero, and Spadegaming. With good service and tempting bonuses, players can enjoy an exciting and profitable QDAL88 [slot gacor](https://bigprofitbuzz.com/) experience. One advantage of QDAL88 is its slot gacor feature, which gives players the opportunity to land big wins easily; by taking advantage of it, players have a greater chance of scoring big wins in a short time. | qdal88login | |
1,877,282 | Mastering Python Comprehensions: Crafting Efficient and Readable Code | There are a lot of things that Python can do, and one of the things Python is good at is making code... | 0 | 2024-06-04T22:31:17 | https://medium.com/gitconnected/mastering-python-comprehensions-d7cef457165a | python, beginners | There are a lot of things that Python can do, and one of the things Python is good at is making code more readable and easy to follow. Python comprehension is one of those patterns that will make your code more efficient and readable, unless you go overboard.
**So what is comprehension?** In technical terms, comprehension is a concise way in Python to create sequences (like lists, sets, and dictionaries) using a single line of code.
So simply put, comprehension will help you create `list`, `set`, `dict`, and `generators` efficiently and in a more readable format.
Let's start then!
# Types of Comprehension
## 1\. List Comprehension
As the name suggests, it is a compact way to create a list. Let's say you want to create a list of all numbers from 1 to 10. How would you do it?
You might say you can just use `range(1, 11)` to get 1 to 10. But now if you want to get only odd numbers, how would you get it?
To get odd numbers:
```python
odd_nums = [x for x in range(1, 11) if x % 2 != 0]
print(odd_nums) # Output: [1, 3, 5, 7, 9]
```
Now with odd numbers, if you want the square of those numbers as well, then you could just do this:
```python
squared_odd_nums = [x**2 for x in range(1, 11) if x % 2 != 0]
print(squared_odd_nums) # Output: [1, 9, 25, 49, 81]
```
Now, if you do it the traditional way, this is how your code will look:
```python
squared_odd_nums = []
for num in range(1, 11):
if num % 2 != 0:
squared_odd_nums.append(num**2)
print(squared_odd_nums) # Output: [1, 9, 25, 49, 81]
```
## 2\. Set Comprehension
No need for more explanation here; set comprehension is the same as list comprehension but instead of composing with `[]` (square brackets), we will use `{}` (curly brackets).
```python
squared_odd_nums = {num**2 for num in range(1, 11) if num % 2 != 0}
print(squared_odd_nums) # Output: {1, 9, 25, 49, 81}
```
## 3\. Dictionary Comprehension
One more benefit of comprehension is you can easily create a dictionary with one line. Let's say you want a dictionary of all odd numbers and their squared values. How would you do it?
It's easy!
```python
squares = {num: num**2 for num in range(1, 11) if num % 2 != 0}
print(squares) # Output: {1: 1, 3: 9, 5: 25, 7: 49, 9: 81}
```
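Because each entry is a `key: value` pair, a dict comprehension can also transform an existing dictionary. For example, inverting the mapping so each square points back to its root:

```python
squares = {num: num**2 for num in range(1, 11) if num % 2 != 0}

# Swap keys and values: each square maps back to its root.
roots = {square: num for num, square in squares.items()}
print(roots)  # Output: {1: 1, 9: 3, 25: 5, 49: 7, 81: 9}
```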
## 4\. Generator Comprehension
Last but not least! Let's see how we can create a generator using comprehension.
> Note: Generator expressions use parentheses like tuples, but they do not create tuples.
```python
squared_odd_nums = (num**2 for num in range(1, 11) if num % 2 != 0)
print(type(squared_odd_nums)) # Output: <class 'generator'>
print(list(squared_odd_nums)) # Output: [1, 9, 25, 49, 81]
```
Now if you really want a tuple, you can do something like this:
```python
squared_odd_nums = tuple(num**2 for num in range(1, 11) if num % 2 != 0)
print(type(squared_odd_nums)) # Output: <class 'tuple'>
print(squared_odd_nums) # Output: (1, 9, 25, 49, 81)
```
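One practical difference worth knowing: a generator produces values lazily, so it stays tiny in memory no matter how many values it will yield, while the equivalent list materializes everything up front. A quick comparison:

```python
import sys

# List comprehension: all one million squares are built up front.
squares_list = [n**2 for n in range(1_000_000)]

# Generator comprehension: values are produced lazily, one at a time.
squares_gen = (n**2 for n in range(1_000_000))

print(sys.getsizeof(squares_list) > sys.getsizeof(squares_gen))  # True
print(next(squares_gen))  # Output: 0
print(next(squares_gen))  # Output: 1
```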
You might ask how we use nested loops and conditions with Python comprehensions, like we do with a normal `for` loop. So here is your answer.
# If/Else with Python Comprehension
I showed you earlier how to use an `if` statement with a comprehension, but what if you want to use both `if` and `else`?
Let's see it with the odd-number example we used earlier. Say we want the odd numbers, and the squares of the even numbers, between 1 and 10. How would we do it?
```python
numbers = [num**2 if num % 2 == 0 else num for num in range(1, 11)]
print(numbers) # Output: [1, 4, 3, 16, 5, 36, 7, 64, 9, 100]
```
Here, we check the condition `num % 2 == 0` (even number): if it is `True`, we square the number; otherwise, we keep the number as it is (odd number).
# Nested loop with Python Comprehension
So now you want to know how the hell we do nested loops. Let's say we want to flatten a 3x3 matrix.
Here is how you would do it with list comprehension.
```python
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
]
flattened_list = [col for row in matrix for col in row]
print(flattened_list)
# Output: [1, 2, 3, 4, 5, 6, 7, 8, 9]
```
Here, it will take each row and then iterate through each element of that row.
Now, if you want to use a condition with a nested-loop comprehension, here is how you would do it.
```python
matrix = [
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9]
]
flattened_list = [col for row in matrix for col in row if col % 2 != 0]
print(flattened_list)
# Output: [1, 3, 5, 7, 9]
```
You can use `if-else` the same way we did for single-loop comprehensions. So, that was a guide to comprehensions in Python.
# Conclusion
From the above, we can conclude that comprehensions are a powerful feature in Python that can help you write cleaner, more efficient code. If you think, "How can this be faster than a normal loop?" check out my blog titled [How to read and write sequential data gracefully in Python](https://dev.to/sahilfruitwala/how-to-read-and-write-sequential-data-gracefully-in-python-1npl) for a better understanding of that.
> Embrace comprehensions to write more Pythonic and maintainable code.
| sahilfruitwala |
1,877,281 | Seeing the world: Think of at least five places you'd like to visit | `let listName:string[]=['hello world','Today is weather is very beatuiful','I going to cenima ','I... | 0 | 2024-06-04T22:30:45 | https://dev.to/osaid4624/seeing-the-world-think-of-at-least-five-places-youd-like-to-visit-bfl | |
```typescript
let listName: string[] = [
  'hello world',
  'Today is weather is very beatuiful',
  'I going to cenima ',
  'I start a learn typscript',
];

// console.log("Original Order", listName);
// console.log("Alphabetic Order", [...listName].sort());
// console.log("Original Order", listName);
// console.log("Reverse Alphabet", [...listName].sort().reverse());
// console.log("Original order", listName);
// listName.reverse();
// console.log("Reverse order", listName);
// listName.reverse();
// console.log("Original order", listName);
// listName.sort();
// console.log("Sorted order", listName);
// listName.reverse();
// console.log("Reverse alphabetic", listName);
```
| osaid4624 | |
1,877,280 | Day 963 : My Computer | liner notes: Professional : Had a day of no meetings so I tried to take full advantage. I finally... | 0 | 2024-06-04T22:29:09 | https://dev.to/dwane/day-963-my-computer-4eog | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Had a day of no meetings so I tried to take full advantage. I finally got a library that I've been working on published. Going to test it out in an application tomorrow. Spent some time responding to community questions.
- Personal : Last night, I worked on a side project. Went through some tracks for the radio show. Don't remember much else.

Going to crack open my computer to work on my side project. I want to try to create a slideshow without using JavaScript. It's a good exercise to try out new stuff in CSS. Go through some tracks for the radio show. If there's time, I want to start playing with the AR glasses I got in.
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube Qkylu0bbsHo %} | dwane |
1,877,279 | AI-Powered Tools to Unleash Your Creativity: From Fiction to Fact | In the realm of creative expression, Artificial Intelligence (AI) is rapidly transforming... | 0 | 2024-06-04T22:28:59 | https://dev.to/malconm/ai-powered-tools-to-unleash-your-creativity-from-fiction-to-fact-312l | ai, productivity, design | In the realm of creative expression, Artificial Intelligence (AI) is rapidly transforming possibilities. From crafting captivating narratives to generating awe-inspiring visuals, AI tools are empowering artists and content creators to push boundaries and streamline their workflows.
Here are 4 compelling AI tools you can explore to enhance your creative endeavors:
**Midjourney (Text-to-Image Generation):**
What it does: [Midjourney](https://www.midjourney.com/) takes your textual descriptions and transforms them into stunning, dreamlike images. Imagine conjuring up a "cyberpunk cityscape bathed in neon light" or a "photorealistic portrait of a majestic griffin soaring through a starry sky."
Benefits: Ideal for creating concept art, illustrations, or sparking visual inspiration for creative projects.
**Jasper (AI Writing Assistant):**
What it does: [Jasper](https://www.jasper.ai/) acts as your AI writing companion, assisting you in crafting compelling narratives, generating creative text formats like poems or scripts, and overcoming writer's block.
Benefits: Streamline your writing process, overcome creative roadblocks, and explore new content formats.
**Murf.io (Text-to-Speech Generation):**
What it does: Murf.io breathes life into your written content by transforming text into realistic and expressive human voices. Ideal for creating engaging audiobooks, voiceovers for videos, or personalized narration tracks.
Benefits: Add a professional touch to your presentations, create captivating audiobooks, or personalize your content with unique narrations.
[Review about Murf.io](https://www.youtube.com/watch?v=UIZt0UlK9PM)
**Aiconvert Online (Free AI Tools & Resources):**
What it does: Aiconvert Online offers a treasure trove of [free AI tools](https://aiconvert.online/) across various functionalities, from image generation, editing and conversion to creating content using AI chatbots. Explore their library to discover tools that suit your specific needs.
Benefits: Experiment with a variety of AI functionalities without breaking the bank. A great resource for beginners or those seeking to explore the potential of AI in their creative process.
**This is just a glimpse into the ever-expanding world of AI tools for creative endeavors. With these resources in your arsenal, you can unlock a new level of creative freedom and efficiency.**

**Ready to dive deeper? Explore the provided links and unleash the power of AI in your creative journey!**
| malconm |
1,877,278 | 🚀What is BXDev Technology? | Hey Dev.to community! I'm thrilled to introduce you to BXDev Technology, a dynamic and... | 0 | 2024-06-04T22:25:12 | https://dev.to/bxdevtech/what-is-bxdev-technology-111d | webdev, programming, opensource, career | ## Hey Dev.to community!
I'm thrilled to introduce you to BXDev Technology, a dynamic and forward-thinking tech company dedicated to pushing the boundaries of innovation and creating impactful digital solutions. As our inaugural post on Dev.to, we're excited to share our story and vision with you.
## Who We Are?
BXDev Technology is a team of passionate developers, designers, and innovators united by a shared mission: to revolutionize the digital landscape and empower individuals and businesses with cutting-edge technology. With a diverse range of expertise and a relentless drive for excellence, we're committed to delivering solutions that exceed expectations and drive real-world impact.
## Our Mission
At BXDev Technology, our mission is simple yet ambitious: to harness the power of technology to solve complex problems, drive innovation, and improve lives. Whether it's developing groundbreaking software applications, designing intuitive user experiences, or leveraging emerging technologies like AI and blockchain, we're committed to making a positive difference in the world through our work.
## What We Do?
BXDev Technology specializes in a wide range of digital solutions, including:
- **Software Development:** From web and mobile applications to enterprise solutions, we create scalable and robust software tailored to our clients' unique needs.
- **UI/UX Design:** Our team of designers crafts intuitive and engaging user experiences that delight users and drive engagement.
- **Emerging Technologies:** We stay at the forefront of innovation, exploring emerging technologies like AI, blockchain, and IoT to develop cutting-edge solutions for tomorrow's challenges.
## Why Choose BXDev Technology?
When you choose BXDev Technology, you're not just getting a technology partner—you're joining forces with a team of dedicated professionals who are passionate about your success. With a focus on collaboration, transparency, and excellence, we work closely with our clients to understand their goals, address their challenges, and deliver solutions that exceed expectations.
## Get in Touch
Interested in learning more about BXDev Technology and how we can help bring your digital vision to life? Visit our website BXDevTech.com to learn more about our services, explore our portfolio, and get in touch with our team. We'd love to hear from you!
## Join the Conversation
Stay connected with BXDev Technology and join the conversation on Dev.to! Follow us for the latest updates, insights, and discussions on technology, innovation, and digital trends. Have questions or want to share your thoughts? Drop us a comment or reach out to us directly—we're here to engage and learn from the amazing Dev.to community.
## About Us
BXDev Technology is more than just a tech company—we're a community of innovators, creators, and problem-solvers committed to shaping the future of technology. Learn more about our team, our values, and our vision on our website and follow us on social media to stay connected.
Thank you for joining us on this exciting journey with BXDev Technology!
Let's innovate, create, and build the future together. 🌟 | cptbuuya |
1,877,351 | The Game-Changing Strategy: How Playables on YouTube, Netflix, LinkedIn, and NYT Are Driving Disruption! | Have you played a Playable yet? If you've played games like Wordle, Connections, Tiles, or Sudoku... | 0 | 2024-06-06T12:12:28 | https://brianchristner.io/the-game-changing-strategy-how-playables-on-youtube-netflix-linkedin-and-nyt-are-driving-disruption/ | business, marketing, gaming | ---
title: The Game-Changing Strategy: How Playables on YouTube, Netflix, LinkedIn, and NYT Are Driving Disruption!
published: true
date: 2024-06-04 22:21:26 UTC
tags: business,marketing,gaming
canonical_url: https://brianchristner.io/the-game-changing-strategy-how-playables-on-youtube-netflix-linkedin-and-nyt-are-driving-disruption/
---

Have you played a Playable yet? If you've played games like [Wordle](https://www.nytimes.com/games/wordle/index.html?ref=brianchristner.io), [Connections](https://www.nytimes.com/games/connections?ref=brianchristner.io), [Tiles](https://www.nytimes.com/puzzles/tiles?ref=brianchristner.io), or [Sudoku](https://www.nytimes.com/puzzles/sudoku?ref=brianchristner.io), you've already experienced Playables, but you didn't even know it. They are now showing up in places you would've never expected.
A "playable game" is a lightweight digital game designed to be playable directly within various media platforms or your browser. These games are typically embedded in websites, advertisements, social media platforms, or other digital ecosystems, making them easily accessible and convenient for users.
> **Playables provide a viable marketing channel to drive new revenue and retention!**
Mini-games, or quick games, as they've been referred to in other segments, have been around for quite some time. What is new is embedding Playables into existing platforms to drive new customer acquisition and retention.
When we mention names like YouTube, Netflix, LinkedIn, or the New York Times (NYT), we don't consider them gaming companies. However, Playables are a key driver in NYT's turnaround success.
### New York Times and Playables

_Playables on New York Times_
Traditional media relies heavily on ads to fund its business. This business model has been suffering across the board, with most newspapers losing money or laying off staff in the last several years.
The New York Times took this challenge into its own hands. In January 2022, [NYT acquired Wordle](https://wordsrated.com/impact-of-wordle-on-nyt/?ref=brianchristner.io), the viral hit game, for a seven-figure sum. The acquisition pivoted the entire strategy of the NYT from selling ads to cross-selling NYT subscriptions from Wordle and similar games to potential subscribers. This change of strategy resulted in a 42.65% increase in revenue and a 42.86% increase in subscriptions.
> **The NYT is now bundling games with other media packages to attract new subscribers...and it's working!!**
[Playables are helping the NYT thrive amid the media chaos](https://www.axios.com/2024/01/29/wordle-nyt-games-news-media-layoffs?ref=brianchristner.io). The NYT has set the benchmark for what is possible when combining Playables with subscription bundles, which enables the NYT to:
1. Acquire new subscribers
2. Retain existing customers and turn them into returning visitors.
Here are some key characteristics of playable games:
### Characteristics of Playables Games
1. **Simplified Gameplay** :
- **Short Sessions** : Designed for short play sessions, typically lasting a few minutes, sometimes with a single game per day. This model drives retention, as players want to return the next day to play the next game, combined with gamification mechanics that inspire them to log in daily to unlock challenges and continue playing streaks.
- **Simple Mechanics** : Easy to understand and play, catering to a wide audience with varying levels of gaming experience.
2. **Engagement and Retention** :
- **High Engagement** : Aimed at quickly capturing the player's attention and keeping them engaged.
- **Retention Strategies** : Often incorporate elements that encourage repeat play, such as leaderboards, achievements, building streaks, or rewards.
3. **Monetization** :
- **Ad Revenue** : Can generate revenue through in-game advertisements or sponsorships.
- **Microtransactions** : These may include options for in-game purchases or microtransactions.
- **Cross-Selling:** The NYT has proven that Playables are a great lever to acquire new customers.
### YouTube launched Playables this week

_YouTube Playables - Angry Birds_
[YouTube announced this week that 75 Playables](https://blog.youtube/news-and-events/youtube-playables/?ref=brianchristner.io) are available to play on the web, iOS, or Android apps. Of course, in Switzerland, where I am, it is not yet available. **_However, here are direct links to YouTube Playable games:_** [**_Angry Birds Showdown_**](https://www.youtube.com/playables/Ugkxb2gxwOZu9QDqSZymy8YWhn_NJbg2BQXH?ref=brianchristner.io) **_, _** [**_Words of Wonders_**](https://www.youtube.com/playables/UgkxPFCtv4YicBUoaAB3qI5p2_k96YzBpj0x?ref=brianchristner.io) **_, _** [**_Cut the Rope_**](https://www.youtube.com/playables/Ugkx2d5qQg5_Snq2mIm4BC1tyrHkdS_2daYs?ref=brianchristner.io) **_, _** [**_Tomb of the Mask_**](https://www.youtube.com/playables/UgkxKXJt3Dp8gutdeca9P-UpEM3LLaeDffpa?ref=brianchristner.io) **_, and _** [**_Trivia Crack_**](https://www.youtube.com/playables/UgkxfKri7hfiRMnRbdGkN-SLmF6YjBPnvism?ref=brianchristner.io) **_._**
Of course, I tried Angry Birds and lost 20 minutes of my life reliving some old playing habits. The games are great and launch directly within the YouTube app, which keeps you inside their ecosystem. I can imagine the next step is building more gamification to unlock new games or features to keep players returning for more.
> **Also, I'm sure Google Ads will be coming for this space...**
The Playables for YouTube will considerably increase user retention and might even open up opportunities to cross-sell the premium YouTube membership to unlock more games or features.
### Netflix Games

Netflix is also flirting with the games space. [Netflix Games](https://www.netflix.com/tudum/games?ref=brianchristner.io) already contains a surprising number of games. However, what sets Netflix apart from NYT, YouTube, and LinkedIn is that the games are all mobile games, requiring you to install them on your phone.
With games such as Sonic, Grand Theft Auto III, and Stranger Things, the brand effect is quite interesting. The advantage for Netflix is that it already owns the rights to many movies and TV shows, so this could become a very interesting space for them.
The games are included in your Netflix subscription, and Netflix states, "No ads. No extra fees. No in-app purchases." However, this doesn't prevent them from learning more about your online playing preferences and building an even better data profile on you.
> **Netflix + Games = the ultimate Netflix and Chill!**
### LinkedIn Playables
The surprising contender in the Playables space is [LinkedIn Games](https://www.linkedin.com/games/?ref=brianchristner.io), which currently only has three games. Playables is actually the best idea for LinkedIn out of all the platforms: LinkedIn is known for great content, but users interact with content less there than on other social media platforms.
LinkedIn has a user base of [1 Billion users, yes, Billion with a "B", folks](https://blog.hootsuite.com/linkedin-statistics-business/?ref=brianchristner.io). Adding Playables to the offering makes absolutely perfect sense.
If LinkedIn can increase Daily Active users by 1% by using Playables, this could enormously impact revenue and interactions. In my eyes, LinkedIn is the sleeper out of the bunch that has the greatest potential to unlock either cross-selling of other products or increasing engagement, maybe both.
> **My only concern is LinkedIn could turn into the old Facebook days, and my entire timeline is spamming me with Farmville/LinkedIn Games updates.**
First impressions: I played Queens quickly to see what the game was. After completing the game, LinkedIn informed me that three of my connections also played it. The social graph aspect could be another way to see who's best in your network or company at a particular game. I could see a lot of value here with the entire social aspect.
### Benefits of Playables Games
- **Wide Reach** : Accessible to a broad audience due to their platform-independent nature.
- **High Engagement** : Interactive and engaging, leading to higher user retention and time spent.
- **Marketing Tool** : Effective for brand engagement and promotional campaigns.
- **Ease of Use** : Simple and quick to play, requiring minimal effort from the user.
In summary, playable games are a versatile and engaging form of digital entertainment that leverages the accessibility and integration capabilities of modern web and mobile platforms. They provide an interactive experience that is easy to access and play, making them popular for casual gaming and marketing.
### FAQ Section: The Game-Changing Strategy with Playables on YouTube, Netflix, LinkedIn, and NYT
**What are Playables and how do they work?**
Playables are lightweight digital games embedded within various media platforms or browsers, designed for short, engaging play sessions. They often feature simple mechanics to cater to a wide audience and aim to drive user retention and engagement through repeat play incentives like leaderboards and daily challenges.
**Why are Playables important for platforms like NYT and YouTube?**
Playables provide an innovative marketing channel for these platforms, helping to drive new revenue and increase user retention. For example, the NYT uses Playables to cross-sell subscriptions, significantly boosting their revenue and subscriber base.
**How have Playables impacted the New York Times?**
The NYT's acquisition of Wordle led to a new strategy of bundling games with media subscriptions, resulting in a 42.65% revenue increase and a 42.86% rise in subscriptions. This strategy has become a benchmark for leveraging Playables to attract and retain subscribers.
**What benefits do Playables offer to users and platforms?**
Playables are accessible and engaging, leading to higher user retention and longer time spent on platforms. They serve as effective marketing tools, driving brand engagement and promotional campaigns. For platforms, they offer new revenue streams through ads and cross-selling.
**What future developments are expected for Playables on platforms like LinkedIn and Netflix?**
LinkedIn aims to increase daily active users by incorporating social aspects into Playables, while Netflix leverages its extensive media library to offer exclusive games, enhancing their subscription value. Both platforms are exploring how Playables can boost user engagement and monetization.
## Follow me
If you liked this article, Follow Me on [Twitter/X](https://x.com/idomyowntricks) to stay updated!
| vegasbrianc |
1,877,274 | some of the great beaches around the world | A post by Prithiwis Das | 0 | 2024-06-04T22:20:01 | https://dev.to/prithiwis_das_39b4df24b0b/some-of-the-great-beaches-around-the-world-188i | prithiwis_das_39b4df24b0b | ||
1,877,273 | Trying to give some new tag lines | A post by Prithiwis Das | 0 | 2024-06-04T22:18:45 | https://dev.to/prithiwis_das_39b4df24b0b/trying-to-give-some-new-tag-lines-46i7 | prithiwis_das_39b4df24b0b | ||
1,877,272 | Introducing PixShuffle | Hey Dev.to community! We're thrilled to introduce PixShuffle, a revolutionary new way to experience... | 0 | 2024-06-04T22:18:01 | https://dev.to/cptbuuya/introducing-pixshuffle-23gj | Hey Dev.to community!
We're thrilled to introduce PixShuffle, a revolutionary new way to experience visual content like never before.
What is PixShuffle?
PixShuffle is an innovative picture slideshow platform that brings your images to life with dynamic shuffling and customizable features. Whether you're a photography enthusiast, a digital artist, or simply someone who loves beautiful visuals, PixShuffle is designed to provide you with an immersive and captivating viewing experience.
Key Features:
🔄 Dynamic Shuffling: Watch as your images seamlessly shuffle and transition, creating a mesmerizing visual journey every time.
📱 Responsive Design: Enjoy PixShuffle on any device, from desktops to smartphones, with its responsive design ensuring optimal viewing on all screen sizes.
⚙️ Customization Options: Personalize your slideshow experience with customizable settings, including transition effects, shuffle speed, and more.
How it Works:
Using PixShuffle is easy and intuitive. Simply visit our website, upload your images, and let the magic happen! Sit back, relax, and watch as your images come to life with PixShuffle's dynamic shuffling feature. With navigation buttons and customization options at your fingertips, you're in control of your visual experience every step of the way.
Why Choose PixShuffle?
PixShuffle isn't just another picture slideshow platform. It's a revolutionary new way to experience visual content, designed to delight and inspire users of all backgrounds. Whether you're looking to showcase your portfolio, create engaging presentations, or simply enjoy beautiful images, PixShuffle has you covered.
Get Started Today!
Ready to experience PixShuffle for yourself? Head over to our website PixShuffle.com and start shuffling your images today! Join the PixShuffle community and discover a whole new world of visual storytelling.
Join the Conversation:
We'd love to hear your thoughts and feedback on PixShuffle! Join the conversation on Dev.to and share your experiences with the community. Have questions or need assistance? Reach out to us via email at support@pixshuffle.com or join our Discord server for live support.
About Us:
PixShuffle is brought to you by [Your Company Name], a team of passionate developers dedicated to creating innovative solutions that enhance the way we experience digital content. Learn more about us on our website and follow us on social media for the latest updates and announcements.
Thank you for joining us on this exciting journey with PixShuffle!
Happy shuffling! 📸✨ | cptbuuya | |
1,877,271 | Add "Login with Passkeys" to your Django app | Passkeys, passkeys, passkeys! Everyone's talking about them. With Amazon rolling out passkeys last... | 0 | 2024-06-04T22:15:40 | https://pangea.cloud/blog/add-login-with-passkeys-to-your-django-app/ | Passkeys, passkeys, passkeys! Everyone's talking about them. With [Amazon](https://www.theverge.com/2023/10/23/23928589/amazon-passkey-support-web-ios-shopping-mobile-app) rolling out passkeys last year and [Google](https://blog.google/technology/safety-security/passkeys-default-google-accounts/) encouraging users to make them the default authentication method, it raises the question: How do I add them to my app?
## How do passkeys work?
If you’re just interested in implementing them, you can skip this section. No offense taken 😉
FIDO2 multi-device credentials, more often referred to as "passkeys", are a standard introduced by the FIDO Alliance. It's an extremely powerful way of using public-key cryptography to verify the identity of a user without passwords, multi-factor codes, etc. The public/private key pair is usually generated and securely stored on a hardware or software authenticator device.
To learn more about how passkeys and authenticators work in detail, check out the [Secure By Design Hub article](https://pangea.cloud/securebydesign/authn-using-passkeys/?utm_source=hashnode&utm_medium=passkeys-django-blog) on passkeys.
## How do I build it in?
In this tutorial, we’re going to leverage Pangea AuthN’s hosted pages to be able to quickly configure passkeys without building all the cryptographic mayhem from scratch 😅. To prove that it’s easy to add passkeys into any application in just a few minutes, I’m going to start with a fresh new Django app and implement passkeys in just a few steps.
### Step 1: Create a new Django app and install needed dependencies
```bash
python -m pip install django
django-admin startproject mysite
cd mysite
python manage.py startapp authme
pip install python-dotenv pangea-django
python manage.py migrate
```
Let's go to `mysite/mysite/settings.py` to allow our app to use these packages.
We need to be able to:
1. load a `.env` file
2. access the `pangea-django` middleware
3. load our HTML pages from a designated `templates` folder
```python
from dotenv import load_dotenv
import os
from pathlib import Path
load_dotenv()
# Skip down to the middleware declaration.
# Comment out the default Django auth middleware.
# Add Pangea's Auth Middleware
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
#"django.contrib.auth.middleware.AuthenticationMiddleware",
"pangea_django.PangeaAuthMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
# Replace the default configuration with the templates below
# templates folder is where we will keep our html
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [os.path.join(BASE_DIR, "templates")],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
```
### Step 2: Create an account on [pangea.cloud](https://l.pangea.cloud/DN5AJWb)
Head over to [pangea.cloud](https://l.pangea.cloud/DN5AJWb) and create an account for free. Then in the developer console, enable the “AuthN” service and grab the following tokens.
These tokens will be pasted into your `.env` config file as shown below. Create a `mysite/.env` file and add the tokens like this:
```bash
PANGEA_AUTHN_TOKEN=<PANGEA_AUTHN_TOKEN>
PANGEA_DOMAIN=<PANGEA_DOMAIN>
PANGEA_HOSTED_LOGIN=<PANGEA_HOSTED_LOGIN>
```
### Step 3: Add the Pangea Auth components to the new Django app
Now we edit our `mysite/authme/views.py` file to use the `pangea_django` package, which maintains authentication context and state across the application.
So our `authme/views.py` file should look like this:
```python
from django.shortcuts import render, redirect
from django.contrib.auth.decorators import login_required
from pangea_django import PangeaAuthentication, generate_state_param
import os
# Create your views here.
def landing(request):
if request.user.is_authenticated:
return redirect("/home")
hosted_login = os.getenv("PANGEA_HOSTED_LOGIN")
redirect_url = hosted_login + "?state=" + generate_state_param(request)
return redirect(redirect_url)
def login_success(request):
user = PangeaAuthentication().authenticate(request=request)
if user:
return redirect("/home")
return redirect("/")
@login_required(login_url="/")
def logout(request):
user = PangeaAuthentication().logout(request)
return redirect("/logout")
@login_required(login_url="/")
def home(request):
context = {}
context["user"] = request.user
return render(request, "home.html", context)
```
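The `state` query parameter passed to the hosted login page in the `landing` view is an anti-CSRF measure common to OAuth-style redirect flows: a random nonce is generated, stashed in the session, sent along to the login page, and checked when the user is redirected back. Conceptually (this is a simplified sketch of the idea, not Pangea's actual `generate_state_param` implementation) it works like this:

```python
import secrets

def generate_state(session: dict) -> str:
    """Create a single-use random nonce and remember it in the session."""
    state = secrets.token_urlsafe(32)
    session["auth_state"] = state
    return state

def validate_state(session: dict, returned_state: str) -> bool:
    """Check the state returned by the login page, consuming the nonce."""
    expected = session.pop("auth_state", None)
    if expected is None:
        return False
    # Constant-time comparison to avoid timing side channels.
    return secrets.compare_digest(expected, returned_state)
```

If the returned state doesn't match, the callback should be rejected, since the redirect may not have originated from your own login request.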
Next we need to map these views to paths. We do this in `mysite/mysite/urls.py`
```python
from django.urls import path
import authme
urlpatterns = [
path('', authme.views.landing),
path('home', authme.views.home),
path('login_success', authme.views.login_success),
path('logout', authme.views.logout),
]
```
The app needs to be able to access `mysite/authme/views.py` from `mysite/mysite/urls.py`. Add this line to `mysite/authme/__init__.py`:
```python
from . import views
```
Lastly, we need a page for this authentication to render to. The way we make pages in Django is by creating a templates folder with HTML pages. Since `authme.views.home` is routed to `<base url>/home`, we need to call our page `mysite/templates/home.html`:
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body,h1,h2,h3,h4,h5,h6 {font-family: "Lato", sans-serif;}
body, html {
height: 100%;
color: #b47b00;
line-height: 1.8;
background-color: antiquewhite;
}
table {
margin: 0px auto;
background-color: white;
padding: 25px;
}
tr {
font-family: Verdana, Tahoma, sans-serif;
font-size: 20px;
color: rgb(158, 108, 0);
}
td {
padding: 2px;
}
</style>
</head>
<body>
<!-- Navbar (sit on top) -->
<div>
<a href="logout">LOGOUT</a>
</div>
<!-- Navbar (sit on top) -->
<h1 style="text-align: center;">You are logged in!</h1>
<table>
<tr>
<td>ID:</td>
<td>{{ user.id }}</td>
</tr>
<tr>
<td>Email:</td>
<td>{{ user.email }}</td>
</tr>
<tr>
<td>First Name:</td>
<td>{{ user.first_name }}</td>
</tr>
<tr>
<td>Last Name:</td>
<td>{{ user.last_name }}</td>
</tr>
<tr>
<td>Last Login:</td>
<td>{{ user.last_login }}</td>
</tr>
</table>
</body>
</html>
```
### Step 4: Add Redirect and Enable Passkeys in Pangea console
We need to add an authorized redirect so that Pangea's AuthN hosted pages can successfully redirect us back to [http://127.0.0.1:8000/home](http://127.0.0.1:8000/home) when a user is done authenticating.
So first, let's go under `General > Redirect (Callback) Settings`, add [http://127.0.0.1:8000/home](http://127.0.0.1:8000/home) as a redirect, and save it.
Here comes the last step: let's enable Passkeys! Head over to `Single Sign On > Passkeys` and enable it. Optionally, you can choose to enable fallback authentication options based on your desired user experience.
Here’s a quick video on how you can enable it in settings:
<iframe width="560" height="315" src="https://www.youtube.com/embed/M2kPx1WteEE?si=HxqHT_gvoNQNfoGp"></iframe>
Let's try it out! You can find the [code in Github](https://github.com/pangeacyber/pangea-django-authn-app) and watch the demo below.
<iframe width="560" height="315" src="https://www.youtube.com/embed/2pfrc1YI7nk?si=7TgceOr5Z7LJIbE9"></iframe>
---
If you are looking to add AuthN to other languages or frameworks, follow these tutorials:
* [React.js](https://pangea.cloud/blog/add-passkeys-to-reactjs-in-2mins/)
* [Next.js (page-router mode)](https://pangea.cloud/blog/add-login-with-passkeys-to-nextjs-in-2-mins/) | vmvilla | |
1,877,269 | exciting beaches all over the world | This is a submission for [Frontend Challenge... | 0 | 2024-06-04T22:13:16 | https://dev.to/prithiwis_das_39b4df24b0b/exciting-beaches-all-over-the-world-493n | devchallenge, frontendchallenge, css | _This is a submission for [Frontend Challenge v24.04.17]((https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_
## What I Built
<!-- Tell us what you built and what you were looking to achieve. -->
## Demo
<!-- Show us your project! You can directly embed an editor into this post (see the FAQ section from the challenge page) or you can share an image of your project and share a public link to the code. -->
## Journey
<!-- Tell us about your process, what you learned, anything you are particularly proud of, what you hope to do next, etc. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- We encourage you to consider adding a license for your code. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
<!-- Thanks for participating! --> | prithiwis_das_39b4df24b0b |
1,877,268 | Ruby function design and other stuff I like ranting about | Why? If the posts title looks weird: it is. This is a small rant about code-functionality... | 0 | 2024-06-04T22:09:31 | https://dev.to/aneshodza/ruby-function-design-and-other-stuff-i-like-ranting-about-32oe | ruby, watercooler | ## Why?
If the post's title looks weird: it is. This is a small rant about code functionality that makes little to no sense (in my eyes).
This whole article can be summed up like this:

## How?
I was talking to a coworker about getting the numerical value of a character, like you can in Java, because a `char` stores the number behind that character.
An example of that behavior is this function, which counts the occurrences of every character:
```java
public static int[] countCharacters(String input) {
int[] summand = new int[256];
for (char c : input.toCharArray()) {
summand[c]++;
}
return summand;
}
```
My *naive* suggestion was that you could just call `to_i` on the character to convert it to (what I thought was) its ASCII number.
The problem with that is cases like `"49".to_i`, where Ruby, as a dynamically typed language, converts the string to the number it represents:
```ruby
irb(main):017> "12".to_i
=> 12
irb(main):018> "12".sum
=> 99
```
On the one hand there is `"12".to_i`, which translates to `12`. Fair enough, that makes sense. In the other case, where the `String` doesn't represent a number, it just translates to `0`:
```ruby
irb(main):012> "a".to_i
=> 0
irb(main):013> "fdsgsdf".to_i
=> 0
```
That leads to some weird (but expected) behaviour:
```ruby
irb(main):003> summand = []
=> []
irb(main):006> summand["a".to_i] = 'test'
=> "test"
irb(main):008> summand["4".to_i] = 'test'
=> "test"
irb(main):009> summand
=> ["test", nil, nil, nil, "test"]
```
Now look *closely* at that first codeblock. You might realize that there are two operations used:
```ruby
irb(main):017> "12".to_i
=> 12
irb(main):018> "12".sum
=> 99
```
The weird part is `"12".sum`: as Ruby loves translating strings to the numbers they represent, you would expect it to translate to something like this:
```ruby
irb(main):018> "12".sum
=> 12
irb(main):018> "12".sum
=> 3 # Because 1 + 2 = 3
```
But you get something completely different:
```ruby
irb(main):018> "12".sum
=> 99
```
And now why does this happen, you might ask?
Because of this:
```ruby
irb(main):027> "2".sum
=> 50
irb(main):028> "1".sum
=> 49
```
Because it takes their **ASCII** representation, the same as Java does!
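For what it's worth, Ruby does have a direct equivalent to Java's implicit char-to-int conversion: `String#ord` (and `String#bytes` for all the bytes of a string). Using it, the Java character-counting function from the beginning can be ported. This is my own sketch, and like the Java version it assumes single-byte (ASCII) characters:

```ruby
"a".ord      # => 97
"12".bytes   # => [49, 50]

# Port of the Java countCharacters function, using ord instead of
# Java's implicit char-to-int conversion:
def count_characters(input)
  summand = Array.new(256, 0)
  input.each_char { |c| summand[c.ord] += 1 }
  summand
end
```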
### Okay, so shouldn't this work for accessing arrays?
This (obviously) doesn't work out of the box, as you need to convert the string to a number:
```ruby
irb(main):004> summand["1"] = 'test'
(irb):4:in `[]=': no implicit conversion of String into Integer (TypeError)
summand["1"] = 'test'
^^^^^^^^^^^^^
from (irb):4:in `<main>'
irb(main):005> summand["a"] = 'test'
(irb):5:in `[]=': no implicit conversion of String into Integer (TypeError)
summand["a"] = 'test'
^^^^^^^^^^^^^
from (irb):5:in `<main>'
```
### What about scientific notation?
*"What about scientific notation?"* you might ask. That's also what I was wondering:
```
irb(main):022> 2e10
=> 20000000000.0
irb(main):024> "2e10".to_i
=> 2
```
#### ?!
Yes, you are seeing this right: it converts `"2e10"` to `2`, taking all the digits until the first non-digit. This behavior is seen in longer strings too:
```
irb(main):036> "21f10".to_i
=> 21
```
Here it stops the `to_i` conversion after reaching the `f`.
#### Actually converting scientific notation to a number
Now, how can we make this work? This is how:
```
irb(main):037> "2e10".to_f.to_i
=> 20000000000
```
And this only works because numbers in scientific notation have the class `Float` by default:
```
irb(main):038> 2e10.class
=> Float
```
For a dynamically typed language, it still seems to be unable to do a lot of casting for you!
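If you'd rather have Ruby fail loudly than silently guess, the stricter `Kernel#Integer` and `Kernel#Float` conversion methods raise an `ArgumentError` on anything that isn't a clean number:

```ruby
Integer("12")    # => 12
Float("2e10")    # => 20000000000.0

begin
  Integer("a")   # no silent 0 here; "2e10" would also raise
rescue ArgumentError
  puts "not a number!"
end
```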
## Where is this leading?
Nowhere. Everything I mentioned before is really obscure and won't be important 99% of the time. This article is just for those 1% of cases.
| aneshodza |
1,877,267 | I Babaji labaran Adamu sole proprietor/product manager ABU ABDULRAHMAN Communication, | T**his is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration ... | 0 | 2024-06-04T22:08:28 | https://dev.to/1b2a3b4j5i/i-babaji-labaran-adamu-sole-proprietorproduct-manager-abu-abdulrahman-communication-197d | frontendchallenge, devchallenge, css | _T**his is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._
## Inspiration
<!-- What are you highlighting today? -->
## Demo
<!-- Show us your CSS Art! You can directly embed an editor into this post (see the FAQ section of the challenge page) or you can share an image of your project and share a public link to the code. -->
## Journey
<!-- Tell us about your process, what you learned, anything you are particularly proud of, what you hope to do next, etc. -->
<!-- Team Submissions: Please pick one member to publish the submission and credit teammates by listing their DEV usernames directly in the body of the post. -->
<!-- We encourage you to consider adding a license for your code. -->
<!-- Don't forget to add a cover image to your post (if you want). -->
<!-- Thanks for participating! --> | 1b2a3b4j5i |
1,877,263 | Babylon.js Browser MMO - DevLog - Update #4 - Entering / Leaving game synchronization | Hello, This time, a short presentation on the synchronization of entering and leaving zones. As you... | 0 | 2024-06-04T22:03:58 | https://dev.to/maiu/babylonjs-browser-mmo-devlog-update-4-entering-leaving-game-synchronization-3no6 | babylonjs, gamedev, indie, mmo | Hello,
This time, a short presentation on the synchronization of entering and leaving zones. As you may have noticed, when a player joins a zone, a message is displayed on the UI and the player is rendered. The synchronization of position and rotation is currently functioning smoothly without any apparent issues, aside from the absence of player prediction/reconciliation and entity interpolation.
Upon leaving the game, the player is removed from other game clients, and a message is displayed in the chat.
{% youtube noqt2_aF-ic %} | maiu |
1,854,413 | Dev: Blockchain | A Blockchain Developer is a specialized professional responsible for designing, developing, and... | 27,373 | 2024-06-04T22:00:00 | https://dev.to/r4nd3l/dev-blockchain-akg | blockchain, developer | A **Blockchain Developer** is a specialized professional responsible for designing, developing, and implementing blockchain-based solutions and decentralized applications (DApps). Here's a detailed description of the role:
1. **Understanding Blockchain Technology:**
- Blockchain Developers possess a deep understanding of blockchain technology, which is a distributed ledger system that enables secure and transparent transactions without the need for intermediaries.
- They comprehend the fundamental concepts of blockchain, including blocks, consensus mechanisms, cryptographic hashing, smart contracts, and decentralized networks.
2. **Blockchain Platforms and Protocols:**
- Blockchain Developers are proficient in working with different blockchain platforms and protocols, such as Ethereum, Hyperledger Fabric, Corda, and Binance Smart Chain.
- They choose the appropriate blockchain platform based on project requirements, scalability, privacy, consensus mechanism, and development ecosystem.
3. **Smart Contract Development:**
- Blockchain Developers create smart contracts, which are self-executing contracts with the terms of the agreement directly written into code.
- They use programming languages such as Solidity (for Ethereum), Go (for Hyperledger Fabric), and JavaScript (for Binance Smart Chain) to develop smart contracts that automate business logic and enforce rules on the blockchain.
4. **Decentralized Application (DApp) Development:**
- Blockchain Developers build decentralized applications (DApps) that run on blockchain networks, offering transparency, security, and immutability.
- They design frontend interfaces and backend logic for DApps, integrating with smart contracts and interacting with blockchain nodes through APIs.
5. **Tokenization and Cryptocurrency Development:**
- Blockchain Developers create digital tokens and cryptocurrencies on blockchain platforms, enabling tokenization of assets, fundraising through initial coin offerings (ICOs) or token sales, and developing token-based economies.
- They implement token standards such as ERC-20 (for fungible tokens), ERC-721 (for non-fungible tokens or NFTs), and BEP-20 (for Binance Smart Chain tokens).
6. **Consensus Mechanisms and Network Security:**
- Blockchain Developers understand various consensus mechanisms used in blockchain networks, such as proof of work (PoW), proof of stake (PoS), delegated proof of stake (DPoS), and practical Byzantine fault tolerance (PBFT).
- They ensure the security and integrity of blockchain networks by participating in consensus processes, validating transactions, and preventing double-spending attacks.
7. **Blockchain Integration and Interoperability:**
- Blockchain Developers integrate blockchain technology with existing systems and applications, enabling data exchange, interoperability, and seamless interaction between blockchain and off-chain systems.
- They use APIs, middleware, and interoperability protocols to connect blockchain networks with enterprise systems, databases, and external services.
8. **Blockchain Scalability and Performance Optimization:**
- Blockchain Developers address scalability and performance challenges in blockchain networks by implementing solutions such as sharding, sidechains, off-chain scaling, and layer 2 protocols.
- They optimize blockchain performance, transaction throughput, and latency to support large-scale deployments and mass adoption of blockchain-based applications.
9. **Security Auditing and Testing:**
- Blockchain Developers conduct security audits and testing of smart contracts, DApps, and blockchain networks to identify vulnerabilities, bugs, and security threats.
- They follow best practices for secure coding, code review, static analysis, and penetration testing to mitigate security risks and ensure the integrity and robustness of blockchain-based solutions.
10. **Community Engagement and Collaboration:**
- Blockchain Developers actively participate in blockchain communities, forums, and events to stay updated on industry trends, share knowledge, and collaborate with peers and experts.
- They contribute to open-source projects, forums, and developer communities, fostering innovation, collaboration, and knowledge sharing in the blockchain ecosystem.
In summary, a Blockchain Developer plays a critical role in designing, developing, and deploying blockchain-based solutions and decentralized applications (DApps) that leverage the transformative potential of blockchain technology. By combining expertise in blockchain platforms, smart contract development, DApp development, security, and scalability, they drive innovation and adoption of blockchain technology across industries and applications. | r4nd3l |
1,865,859 | SAST Scanning with SonarQube and Docker | Learn how to set up and use SonarQube for Static Application Security Testing (SAST) with Docker. | 27,513 | 2024-06-04T21:59:45 | https://damienjburks.hashnode.dev/sast-scanning-with-sonarqube-and-docker | docker, cybersecurity, owasp, tutorial | ## Table of Contents
- [Introduction](#introduction)
- [Prerequisites](#prerequisites)
- [Understanding SonarQube](#understanding-sonarqube)
- [What is SonarQube?](#what-is-sonarqube)
- [Understanding Docker Compose](#understanding-docker-compose)
- [Key Features and Benefits](#key-features-and-benefits)
- [Scanning and Inspecting Sonar Results](#scanning-and-inspecting-sonar-results)
- [Cloning Vulnerable Web Application (TIWAP)](#cloning-vulnerable-web-application-tiwap)
- [Verifying the Setup](#verifying-the-setup)
- [Setting Up SonarQube with Docker Compose](#setting-up-sonarqube-with-docker-compose)
- [Logging into SonarQube](#logging-into-sonarqube)
- [Creating a Project in SonarQube](#creating-a-project-in-sonarqube)
- [Running Sonar Scanner using Docker](#running-sonar-scanner-using-docker)
- [Reviewing Results in SonarQube](#reviewing-results-in-sonarqube)
- [Cleaning Up](#cleaning-up)
- [Conclusion](#conclusion)
## Introduction
As a seasoned Cloud DevSecOps Engineer with a keen interest in integrating robust security practices into the development lifecycle, I am thrilled to share insights and practical knowledge on enhancing code security. In this article, we will delve into the powerful combination of SAST (Static Application Security Testing) using SonarQube and Docker, and explore how these tools can fortify your applications against vulnerabilities. This is a technical blog post, so get ready for some code to be shared and repositories to be cloned using Git.
## Prerequisites
Before you can start scanning the vulnerable web application with SonarQube and inspecting the results, you'll want to ensure you have the following list of applications and software packages installed:

- **[Docker](https://www.docker.com/products/docker-desktop)** - We're going to be launching the application from Docker and also running the scans using Docker as well.

- **[VS Code](https://code.visualstudio.com/)** - This is not a hard requirement but highly recommended for viewing and editing files like the Docker Compose YAML file.

- **[Git](https://git-scm.com)** - We're going to need this to clone and check out the repositories.
## Understanding SonarQube
### What is SonarQube?
SonarQube is a self-managed automatic code review tool that systematically helps you deliver clean code. I've used SonarQube several times in the past for DevSecOps-related work, or simply to scan my code.

The application can be integrated with various IDEs and CI/CD pipelines to build, test, and deploy your code, and it can scan your code for all kinds of issues, not just security issues: it can flag refactoring problems and many other things. Here are some key features of SonarQube:
1. **Quality Gates**: A score that defines how well-maintained and secure your entire application code base is.
2. **Supports 30+ Languages**: Including popular languages and frameworks.
3. **Integration with DevOps & CI Platforms**: GitHub, GitLab, BitBucket, Azure, etc.
4. **Fast Analysis and Unified Configurations**
5. **SonarLint IDE Integration**
For our SAST scanning purposes, we are going to focus on leveraging SonarQube and the Sonar Scanner to identify security vulnerabilities within our code.
No need to worry about paying, though. SonarQube is free and open source _unless_ you opt in for the enterprise or data center editions. We will use the Community Edition, which still covers many features that help us scan our code for various issues.
## Understanding Docker Compose
Docker Compose is a powerful tool designed to define and manage multi-container Docker applications with ease. By using a single YAML file, Docker Compose allows developers to configure the various services, networks, and volumes that comprise their application stack, streamlining the deployment and management processes.
### Key Features and Benefits
#### Simplified Configuration
Docker Compose uses a YAML file, commonly named `docker-compose.yml`, to define the entire application stack in a human-readable format. This file outlines each service in your application, the Docker images they use, and any dependencies between them. This simplifies the setup process, making it easy to replicate the environment across different machines or teams.
#### Multi-Container Applications
With Docker Compose, you can define multiple services that work together, such as a web server, a database, and a cache service. Each service runs in its own container, ensuring isolation and consistency. This modular approach allows you to scale individual components independently and manage complex applications with ease.
#### Network Management
Docker Compose automatically handles the creation and management of networks for your containers. It allows different services to communicate with each other seamlessly, using service names as hostnames. This built-in networking capability eliminates the need for manual network configuration and simplifies inter-service communication.
#### Volume Management
Persisting data is crucial for many applications. Docker Compose allows you to define volumes in your configuration file, ensuring that data is not lost when containers are stopped or recreated. Volumes can be shared between services, enabling data persistence and easy access across different components of your application.
#### Environment Configuration
Docker Compose supports environment variables, making it easy to manage different configurations for various environments (development, testing, production). By defining environment variables in the YAML file or in an `.env` file, you can customize service behavior without modifying the application code.
## Scanning and Inspecting Sonar Results
Now that the conceptual part is over, let's get into the technical activity and cloning and setting up the vulnerable web application and SonarQube using Docker Compose and Git.
### Cloning Vulnerable Web Application (TIWAP)
First, you’ll need to clone the **TIWAP** web application repository. I've specified the URL in the command below:
```bash
git clone https://github.com/The-DevSec-Blueprint/TIWAP
# Once it's cloned, you want to CD into the directory.
cd TIWAP
```
>**NOTE**: If you're looking for a reference repository with the completed files and more directions, feel free to view the completed code here: [Completed Code w/ more steps and explanations](https://github.com/The-DevSec-Blueprint/sonarqube-scanning-dockercompose)
#### Spinning Up the Environment
We will use Docker Compose to set up the necessary environment:
```bash
docker-compose up -d
```
> **NOTE**: The `-d` flag runs the container in detached mode, but if you want to see real-time logs, you can run the following command below:
```bash
docker-compose up
```
### Verifying the Setup
To ensure the environment is up and running, navigate to `http://localhost:8000` in your web browser.
### Setting Up SonarQube with Docker Compose
- **Clear your terminal**:
```bash
clear
```
- **Create the Docker Compose YAML File**
You'll want to create another `docker-compose.yml` file. I'd highly recommend you create another folder or subdirectory within the TIWAP project, create the file, and then copy and paste the contents below in it:
```yaml
version: "3"
services:
sonarqube:
image: sonarqube:lts-community
depends_on:
- sonar_db
environment:
SONAR_JDBC_URL: jdbc:postgresql://sonar_db:5432/sonar
SONAR_JDBC_USERNAME: sonar
SONAR_JDBC_PASSWORD: sonar
ports:
- "9001:9000"
volumes:
- sonarqube_conf:/opt/sonarqube/conf
- sonarqube_data:/opt/sonarqube/data
- sonarqube_extensions:/opt/sonarqube/extensions
- sonarqube_logs:/opt/sonarqube/logs
- sonarqube_temp:/opt/sonarqube/temp
sonar_db:
image: postgres:13
environment:
POSTGRES_USER: sonar
POSTGRES_PASSWORD: sonar
POSTGRES_DB: sonar
volumes:
- sonar_db:/var/lib/postgresql
- sonar_db_data:/var/lib/postgresql/data
volumes:
sonarqube_conf:
sonarqube_data:
sonarqube_extensions:
sonarqube_logs:
sonarqube_temp:
sonar_db:
sonar_db_data:
```
The breakdown below refers to the `docker run` command we'll use later (in the "Running Sonar Scanner using Docker" section), which runs a container based on the `sonarsource/sonar-scanner-cli` image with the specified parameters. Here's an explanation of each part:
1. `docker run`: This command is used to run a Docker container.
2. `--rm`: This flag ensures that the container is removed after it stops running. It helps in keeping your system clean by automatically removing the container once it's done executing.
3. `--network=host`: This flag specifies that the container should share the network namespace with the Docker host, allowing it to access services running on the host's network. In this case, it's often used to allow the container to access services running on localhost.
4. `-e SONAR_HOST_URL="http://localhost:9001"`: This sets an environment variable `SONAR_HOST_URL` with the value `http://localhost:9001`. It defines the URL of the SonarQube server that the Sonar scanner should connect to for analysis.
5. `-v "<your_absolute_path>:/usr/src"`: This mounts a volume from the host machine to the container. It maps the local directory `<your_absolute_path>` to the directory `/usr/src` inside the container. This allows the Sonar scanner to access the project files located on the host machine.
6. `sonarsource/sonar-scanner-cli`: This specifies the Docker image to be used for running the container. In this case, it's the official Sonar scanner CLI image provided by SonarSource.
7. `-D` flags: These are parameters passed to the Sonar scanner CLI within the container. They provide configuration options for the SonarQube analysis:
   - `sonar.projectKey`: Specifies the unique key for the project in SonarQube.
   - `sonar.projectVersion`: Specifies the version of the project.
   - `sonar.language`: Specifies the programming language of the project (Python in this case).
   - `sonar.sourceEncoding`: Specifies the encoding of the source files.
   - `sonar.login`: Specifies the authentication token or credentials required to connect to the SonarQube server.
   - `sonar.projectBaseDir`: Specifies the base directory of the project within the container.
   - `sonar.sources=.`: Specifies the directory containing the source files to be analyzed. In this case, it's set to `.` which typically represents the current directory.
- **Deploy SonarQube using Docker Compose:**
```bash
docker-compose up -d
```
Give it about 5-10 minutes to download and set up the necessary Docker container images for SonarQube and its database.
### Logging into SonarQube
Once SonarQube is fully operational, navigate to `http://localhost:9001` in your web browser.
- **Login Credentials**:
- **Username**: `admin`
- **Password**: `admin`
- **Change the default password** when prompted.

### Creating a Project in SonarQube
Create a new project in SonarQube to scan. For this example, we'll use a **vulnerable web application from GitHub**.
- **Project Key**: `test-vulnerable-app`
- **Project Name**: `Test Vulnerable App`
You'll want to generate a token for Sonar Scanner and keep it safe as you'll need it for the scanning process.

>**NOTE**: The highlighted token will not be valid; this is an example. Your token will be different and will be generated automatically.
### Running Sonar Scanner using Docker
Navigate to your project directory in the terminal where you have cloned the vulnerable web application repository.
```bash
cd TIWAP
```
Run the following command to scan your project with SonarQube. Be sure to replace the `<your_sonar_token>` string with your generated token and the `<your_absolute_path>` string with the absolute path to the TIWAP codebase:
```bash
docker run \
--rm \
--network=host \
-e SONAR_HOST_URL="http://localhost:9001" \
-v "<your_absolute_path>:/usr/src" \
sonarsource/sonar-scanner-cli \
    -Dsonar.projectKey=test-vulnerable-app \
    -Dsonar.projectVersion=1.0 \
    -Dsonar.language=py \
    -Dsonar.sourceEncoding=UTF-8 \
    -Dsonar.login=<your_sonar_token> \
    -Dsonar.projectBaseDir=/usr/src \
    -Dsonar.sources=.
```
The scan will take some time _(roughly 4-8 minutes)_, so feel free to take a stretch break! Once completed, the results will be published to your SonarQube project as shown below:

### Reviewing Results in SonarQube
Log into the SonarQube console and navigate to your project to review the results. The scan results will be categorized into different sections like Bugs, Vulnerabilities, Code Smells, etc.
#### Security Vulnerabilities
Security vulnerabilities are critical issues within your code that can be exploited by malicious actors to compromise the integrity, confidentiality, or availability of your application. Identifying and addressing these vulnerabilities is paramount to maintaining a secure codebase. SonarQube's SAST (Static Application Security Testing) capabilities help detect these issues early in the development lifecycle, enabling you to fix them before they become significant problems.
- **Example: Enable SSL Certification Validation**:
One common security vulnerability is having SSL certification validation disabled in your application. SSL (Secure Sockets Layer) certificates are essential for establishing encrypted connections between clients and servers, ensuring that data transferred over the network is secure and cannot be intercepted by unauthorized parties. If SSL certification validation is set to false, your application is vulnerable to man-in-the-middle (MITM) attacks, where attackers can intercept and manipulate the data being exchanged.

**Solution**:
Ensure that SSL certification validation is enabled in all your connections. This can usually be done by configuring the appropriate settings in your application's connection properties or environment variables. For example, in a Python application using the `requests` library, you should ensure SSL verification is enabled:
```python
import requests
# Correct way with SSL verification enabled
response = requests.get('https://0.0.0.0:5001/api/stock/product?product=' + product, verify=True)
```
By enabling SSL certification validation, you can protect your application from potential security breaches and ensure that your data remains secure.
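The same enabled-by-default stance shows up in Python's standard library too (my own illustration, not part of the scan results): `ssl.create_default_context()` verifies certificates and hostnames out of the box, and disabling validation requires an explicit, dangerous opt-out, which is exactly the kind of pattern SAST tools flag.

```python
import ssl

# Secure default: certificate and hostname verification are both on
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True
print(ctx.check_hostname)                    # True

# The insecure opt-out that SAST tools flag: verification disabled
ctx.check_hostname = False       # must be disabled before changing verify_mode
ctx.verify_mode = ssl.CERT_NONE  # now vulnerable to MITM; avoid this in production
```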
#### Code Smells
Code smells are indicators of potential problems in your code that, while not necessarily bugs, can lead to maintainability issues and increased technical debt. SonarQube identifies code smells and provides recommendations to improve the quality, readability, and maintainability of your code. Addressing code smells helps ensure that your codebase remains clean and efficient, making it easier to manage and extend over time.
- **Example: Avoid Duplicating Strings or Literals**:
A common code smell is the duplication of strings or literals multiple times throughout your code. This practice can lead to inconsistencies and make your code harder to maintain. For instance, if a string value changes, you will need to update it in multiple places, increasing the risk of errors and inconsistencies.

**Solution**:
Use constants or configuration files to store commonly used strings or literals. This approach centralizes the values, making your code more manageable and reducing the risk of errors. For example, in a Python application, you might define constants in a separate module:
```python
# constants.py
MAIN_TEMPLATE = "index.html"
```
Then, you can use these constants throughout your code instead of duplicating the string values:
```python
# main.py
import constants
...
return render_template(constants.MAIN_TEMPLATE)
```
By avoiding the duplication of strings or literals, you can improve the maintainability and readability of your code, making it easier to update and extend in the future.
By addressing both security vulnerabilities and code smells, you can ensure that your codebase is not only secure but also clean and maintainable, leading to more robust and reliable software development practices.
## Cleaning Up
Now that we're done with everything, let's clean up behind ourselves. You'll want to run the following commands to remove all of the attached volumes and purge all of the containers and images:
```bash
# Delete volumes (database, etc.)
docker-compose down --volumes
# Optional: Purge all images, networks, containers
docker system prune -a -f
```
## Conclusion
To conclude, SonarQube is a powerful tool for static application security testing (SAST). It allows you to identify vulnerabilities and code smells efficiently, ensuring that your application codebase is both secure and maintainable.
Thank you so much for reading! I hope you were able to take away valuable insights about setting up and using SonarQube, the Sonar Scanner, and Docker. Until next time, keep scanning your code, and do your best to ensure it is secure and maintainable.
---
**Disclaimer:** This blog post reflects my personal experiences and opinions. This blog's original content is based on the following video:
[](https://youtu.be/UoAfU5iAhl0?si=x0gaP54T717ds9JJ)
_All images located in the blog post have been sourced from different places. Click on the image to get redirected to the original source._
| damienjburks |
1,877,266 | Daily Notes | Durable Skills - Networking Quality and Quantity in opportunities, numbers game Reuben... | 0 | 2024-06-04T21:58:59 | https://dev.to/dylansarikas/daily-notes-1m8f | ## Durable Skills - Networking
1. Quality and Quantity in opportunities, numbers game
2. Reuben has a list of updated networking events, email for list
3. Do your research before events, Name, Title, Company
4. Find connection, ask questions, get the person to talk
5. Singular Focus, make the person feel like the only person in the room
6. Break into conversation - Do you mind if I join you
7. Digital card on your phone, QR Code on LinkedIn
8. Seize the Initiative
9. Network sooner rather than later - Linkedin for virtual
## Dev Environment
1. Set up Postgres.app instead of installing PostgreSQL through Homebrew | dylansarikas | |
1,877,265 | Babylon.js Browser MMO - DevLog - Update #5 - Area of interest with spatial hash grid | Hello, Next short presentation with the current progress. Today I'm presenting effect of working... | 0 | 2024-06-04T21:51:52 | https://dev.to/maiu/babylonjs-browser-mmo-devlog-update-5-area-of-interest-with-spatial-hash-grid-ifd | babylonjs, gamedev, indie, mmo | Hello,
Here's another short presentation of the current progress. Today I'm presenting the effect of a working spatial hash grid on the game server side. The server sends two types of packets: the first says that someone is in range and should be visible, and the second says that an entity is out of range and should no longer be visible. Underneath, all the info needed for rendering (model name, character class, etc.) is sent in order to display the entity. In the future I'll probably add some caching so that unneeded data isn't loaded.
This algorithm does not work on a radius but rather on rectangles, so it might be the case that player 1 sees player 2 while player 2 doesn't see player 1. There's also the concept of a grace range: a player is added when they come closer than e.g. 50 units, and stays visible until they move away by more than 100 units.
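That add-below-50 / remove-above-100 rule is a classic hysteresis band. A minimal sketch of the idea (my own illustration, using the thresholds from the post):

```python
ENTER_RANGE = 50.0   # become visible when closer than this
LEAVE_RANGE = 100.0  # stay visible until farther than this

def update_visibility(visible: bool, distance: float) -> bool:
    """Hysteresis keeps entities from flickering in and out near one boundary."""
    if not visible and distance < ENTER_RANGE:
        return True   # entered range: send the "in range / visible" packet
    if visible and distance > LEAVE_RANGE:
        return False  # left range: send the "out of range / remove" packet
    return visible    # inside the grace band: no state change, no packet

print(update_visibility(False, 40))  # True  (came into range)
print(update_visibility(True, 75))   # True  (grace band, still visible)
print(update_visibility(True, 120))  # False (out of range)
```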
Hope You like it!
{% youtube MVmKWP30jkw %} | maiu |
1,877,264 | [Game of Purpose] Day 17 | Today I created a new project called Sky Patrol. I added it to a Perforce repository, configured my... | 27,434 | 2024-06-04T21:49:31 | https://dev.to/humberd/game-of-purpose-day-17-ajm | gamedev | Today I created a new project called Sky Patrol. I added it to a Perforce repository, configured my user, set up stream depot, added Main stream and submitted initial commit.
I want to create a game where you are a drone operator and drop grenades on enemy troops or vehicles. You could have different kinds of drones with different mobility. Each drone can have various upgradable accessories. They could be kamikaze drones, surveillance drones, or drones dropping payloads. They could also pick up some stuff left by enemy troops.
There could be missions where you are deployed and need to fly low to avoid radars, or high to avoid generating noise. You would need to prevent enemies from advancing. Each kill gains you reputation, which grants you access to better equipment.
I followed [this tutorial](https://www.youtube.com/watch?v=F8HNzXAdcGc), which allowed me to make my model fly. It was very straightforward and I learned about some Event Graph nodes:
* Flip Flop - each trigger is routed to the A or B output, alternating between them. 
* Retriggerable Delay - it looks like it's something similar to a debounce function, where the event is retransmitted after a certain period passes without it being interrupted by another event.

* Movement Mode - character behaves differently in different modes out of the box. Not sure how to change it for now.

Triggering flying mode with double space bar:

Holding Space bar while flying makes the character go higher, whereas holding left Ctrl makes it go lower:

Demo:
{% youtube https://youtu.be/shk6E0HGB4M %} | humberd |
1,877,262 | My Journey of Programming Life | MY PROGRAMMING JOURNEY SO FAR. Every programmer's journey is unique, filled with challenges, moments... | 0 | 2024-06-04T21:48:11 | https://dev.to/collins_bright_d813903974/my-journey-of-programming-life-hbj |
**MY PROGRAMMING JOURNEY SO FAR.**
Every programmer's journey is unique, filled with challenges, moments of clarity, and continuous learning. My journey into web development started three weeks ago, and despite this short time, it has been an exciting one. This article is a reflection on my initial steps, the hurdles crossed, and the progress made in learning HTML and CSS.
Three weeks ago, I began my journey into web development. With no prior experience, I dived into HTML and CSS, the building blocks of web design.
At first, understanding the syntax and semantics of HTML and CSS was challenging. The structure of tags (the syntax) and the cascading style rules (the semantics) were confusing and tiring.
Despite the initial hurdles, I kept pushing forward. As the saying goes, 'the more you look at a thing, the clearer it becomes'. Gradually, things started making sense, and the process became enjoyable and interesting.
Learning to create web pages became a curious and interesting venture, because I have always loved what goes on behind the scenes.
Although I'm still learning HTML and CSS, I'm excited about what lies ahead; my hopes and expectations are high, so high! WOW! My early struggles have strengthened my determination, and the advice from my coaches keeps me going. I look forward to advancing my skills in web development.
In just three weeks, my journey from confusion to clarity in web development has been incredibly rewarding. I'm excited and eager to continue learning and growing in this field. Amen.

---
| collins_bright_d813903974 | |
1,875,347 | Let's Build HTTP Parser From Scratch In Rust | A week ago, I stumbled upon this fantastic article about learning Rust's iterators and pattern... | 0 | 2024-06-04T21:47:08 | https://dev.to/hiro_111/lets-build-http-parser-from-scratch-in-rust-5gih | rust, http | A week ago, I stumbled upon [this fantastic article](https://www.freecodecamp.org/news/rust-tutorial-build-a-json-parser/) about learning Rust's iterators and pattern matching by building a JSON parser. What really resonated with me was how it explained using Rust's built-in traits. If you're also learning Rust, I highly recommend checking it out!
And now, I want to apply what I learned to another problem.
Well, technically it's not a problem but, you know... anyway I decided to write my own HTTP parser in Rust!
> 2024-06-06 Update:
> I realized that I didn't use some of the things I learned from the blog post above (say, a lexer). So my apologies if that disappoints you. However, I think my version is more performant. Enjoy :)
## Program Description
In this article we're going to implement two public structs: `HTTPRequest` and `HTTPResponse`.
[You can see the full code here](https://github.com/hirotake111/rust_http_parser)
## HTTP Request
So what does our `HTTPRequest` look like? According to [MDN](https://developer.mozilla.org/en-US/docs/Web/HTTP/Messages#http_requests), each HTTP request consists of the following elements:
- Request line
- Headers
- And body (this could be empty)
So our `HTTPRequest` struct should look as follows:
```rust
#[derive(Debug, Clone)]
pub struct HTTPRequest {
request_line: RequestLine,
headers: HTTPHeaders,
body: Option<String>,
}
```
And the corresponding elements should look as follows:
```rust
#[derive(Debug, Clone)]
pub struct RequestLine {
method: Method,
request_target: String,
http_version: String,
}
#[derive(Debug, Clone)]
struct HTTPHeaders(HashMap<String, String>);
#[derive(Debug, Clone)]
pub enum Method {
GET,
POST,
HEAD,
OPTIONS,
DELETE,
PUT,
CONNECT,
TRACE,
}
```
## TryFrom trait
If we implement the `TryFrom` trait for our `HTTPRequest` struct, we can initialize the struct by performing `HTTPRequest::try_from(input);`.
Or, we could get the same result with `input.try_into()` (I prefer the latter).
Also, our input for the HTTP request should be a byte stream, as it comes through a TCP socket (or another byte stream producer). So it would be great to have a `BufReader<T>` as the parameter for the method that initializes the struct.
Overall, our `HTTPRequest` implementation should look like:
```rust
impl<R: Read> TryFrom<BufReader<R>> for HTTPRequest {
type Error = String;
fn try_from(reader: BufReader<R>) -> Result<Self, Self::Error> {
let mut iterator = reader.lines().map_while(Result::ok).peekable();
let request_line = iterator
.next()
.ok_or("failed to get request line")?
.parse()?;
let headers = HTTPHeaders::new(&mut iterator)?;
let body = if iterator.peek().is_some() {
Some(iterator.collect())
} else {
None
};
Ok(HTTPRequest {
request_line,
headers,
body,
})
}
}
```
It looks clean and tidy :)
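One piece the snippet above relies on but doesn't show is `HTTPHeaders::new`, which consumes header lines until the blank line that separates headers from the body. Here is a minimal self-contained sketch; this is my own guess at the implementation, so see the linked repo for the real one:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone)]
struct HTTPHeaders(HashMap<String, String>);

impl HTTPHeaders {
    // Consumes lines until the blank line that ends the header section.
    fn new<I: Iterator<Item = String>>(lines: &mut I) -> Result<Self, String> {
        let mut map = HashMap::new();
        for line in lines {
            if line.is_empty() {
                break; // blank line: headers are done, body (if any) follows
            }
            let (key, value) = line
                .split_once(':')
                .ok_or_else(|| format!("malformed header line: {line}"))?;
            map.insert(key.trim().to_string(), value.trim().to_string());
        }
        Ok(HTTPHeaders(map))
    }

    fn get(&self, key: &str) -> Option<&str> {
        self.0.get(key).map(|s| s.as_str())
    }
}
```

Because it takes any `Iterator<Item = String>`, it works with the `Peekable` lines iterator used in `try_from` above, and it leaves the body lines unconsumed.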
And here is an example of the implementation of another element under `HTTPRequest`:
```rust
impl FromStr for RequestLine {
type Err = String;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let mut iterator = s.split(' ');
let method: Method = iterator
.next()
.ok_or("failed to get HTTP method")?
.parse()?;
let request_target = iterator
.next()
.ok_or("failed to get request target")?
.to_string();
let http_version = iterator
.next()
.ok_or("failed to get HTTP version")?
.to_string();
Ok(RequestLine {
method,
request_target,
http_version,
})
}
}
```
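The `method: Method = iterator.next()...parse()?` call above needs `FromStr` on `Method` too. Here is one plausible implementation in the same style; the enum is repeated (with `PartialEq` added) so the snippet compiles on its own, and the repo's actual code may differ:

```rust
use std::str::FromStr;

#[derive(Debug, Clone, PartialEq)]
pub enum Method {
    GET,
    POST,
    HEAD,
    OPTIONS,
    DELETE,
    PUT,
    CONNECT,
    TRACE,
}

impl FromStr for Method {
    type Err = String;

    fn from_str(s: &str) -> Result<Self, Self::Err> {
        // HTTP method names are case-sensitive, so we match them verbatim.
        match s {
            "GET" => Ok(Method::GET),
            "POST" => Ok(Method::POST),
            "HEAD" => Ok(Method::HEAD),
            "OPTIONS" => Ok(Method::OPTIONS),
            "DELETE" => Ok(Method::DELETE),
            "PUT" => Ok(Method::PUT),
            "CONNECT" => Ok(Method::CONNECT),
            "TRACE" => Ok(Method::TRACE),
            other => Err(format!("unknown HTTP method: {other}")),
        }
    }
}
```

With this in place, `"GET".parse::<Method>()` works, which is exactly the shape the `RequestLine` parser relies on.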
## HTTP Response
The implementation of `HTTPResponse` looks almost identical to `HTTPRequest`, so allow me to omit the details :)
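For completeness, here is one plausible shape for the response types. The main difference is that a response starts with a status line instead of a request line; the field names here are my own, not necessarily the repo's:

```rust
use std::collections::HashMap;

#[derive(Debug, Clone)]
pub struct HTTPResponse {
    status_line: StatusLine,
    headers: HashMap<String, String>,
    body: Option<String>,
}

// e.g. "HTTP/1.1 200 OK" splits into the three fields below.
#[derive(Debug, Clone)]
pub struct StatusLine {
    http_version: String,
    status_code: u16,
    reason_phrase: String,
}
```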
## Check to see if it works
I prepared a txt file containing an HTTP message. So let's try it and see if it works as expected:
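The article doesn't show the file's contents; a `request_post.txt` along these lines (my own guess, not taken from the repo) would exercise the parser:

```
POST /users HTTP/1.1
Host: localhost:8080
Content-Type: application/json

{"name": "Alice"}
```

Note the blank line separating the headers from the body, which is what the header parser stops on.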
```rust
fn main() {
let file = std::fs::File::open("examples/request_post.txt").unwrap();
let reader = BufReader::new(file);
let request: HTTPRequest = reader.try_into().unwrap();
dbg!(request);
}
```
And then run it.
```bash
cargo run --example request_get # GET request
```
You should see the output as follows:

## Summary
Implementing an HTTP parser is definitely easier than implementing a JSON one, but I learned a lot by making my own from scratch.
Thanks! | hiro_111 |
1,877,261 | Architecting an AI App for Appointment Confirmation: Node.js, AWS & AI Calling Agent API | In this video, I'll walk you through the system design of a project that I have built that integrates... | 0 | 2024-06-04T21:40:36 | https://dev.to/gkhan205/architecting-an-ai-app-for-appointment-confirmation-nodejs-aws-ai-calling-agent-api-5820 | aws, node, javascript, ai | In this video, I'll walk you through the system design of a project that I have built that integrates Node.js, AWS, third-party APIs, and an AI Calling Agent (Talks like a real person). You'll learn how to efficiently fetch data from an external API using cron jobs, process and store it in a database, and then use another API for AI-driven calls.
Using diagrams and detailed explanations, I'll break down each component and highlight best practices for building a reliable and scalable system. Whether you're a software engineer or a tech enthusiast, this video will provide valuable insights into designing complex workflows without diving into the code.
For some reason, I can't show the code or the real workflow, but I discuss the architecture in this video. Comment down your suggestions or thoughts on this architecture.
{%youtube uuzQLaZKRBo %}
👍 Like, share, and subscribe if you find this video helpful! | gkhan205 |
1,877,260 | NextJS Project: How to create QR code from any URL | In this video, I’m excited to share my latest project – a QR code generator app with built-in... | 0 | 2024-06-04T21:36:40 | https://dev.to/gkhan205/build-advanced-qr-code-generator-app-with-nextjs-1kol | nextjs, webdev, javascript, beginners | In this video, I’m excited to share my latest project – a QR code generator app with built-in analytics. After finding existing solutions costly, I decided to create my own. This app will help you generate QR codes easily, and track their usage effectively.
Let's build this app together! I'll be developing it right here on YouTube and then using it myself for some time to ensure it's working perfectly. After that, I'll deploy the app and make it available for public use.
📋 Agenda for this video:
- Taking a URL as input – Learn how the app accepts any URL you want to convert into a QR code.
- Creating the QR code – See the process behind generating a functional QR code.
- Downloading the QR code as an image – Discover how to save your QR code for easy sharing.
- Copying the QR code to the clipboard – Quickly copy your QR code for instant use.
In this video, I covered topics:
1. Using hooks (useState, useRef)
2. Client Component in NextJS
3. ShadCN UI Integration and Usage
4. Form handling with React Hook Form
5. Form schema validation with Zod
6. Event handling
7. Copy to Clipboard functionality
📹 Stay tuned for more videos where I’ll share the development journey, new features, and future enhancements, such as a URL shortener, link in bio, and advanced analytics.
{%youtube R3phIupkMBM%}
👍 Like, share, and subscribe if you find this video helpful!
| gkhan205 |
1,877,259 | Why Servicenow is better IT Career | ServiceNow is widely regarded as a strong choice for an IT career due to several compelling... | 0 | 2024-06-04T21:35:14 | https://dev.to/priya_rao_123/why-servicenow-is-better-it-career-450a | servicenow, technology, it, javascript | ServiceNow is widely regarded as a strong choice for an IT career due to several compelling reasons:

**1. High Demand for ServiceNow Professionals**
Growing Market: ServiceNow is a leading platform in IT service management (ITSM) and other areas like IT operations management (ITOM), IT business management (ITBM), and beyond. Its adoption is increasing across industries.
Career Opportunities: There is a strong demand for skilled ServiceNow professionals, including administrators, developers, consultants, and architects.
**2. Competitive Salaries**
Attractive Compensation: Due to the specialized skill set required, ServiceNow professionals often command high salaries compared to other IT roles.
Salary Growth: As professionals gain more experience and certifications in ServiceNow, their earning potential increases significantly. Become a ServiceNow developer by taking [ServiceNow training online](https://hkrtrainings.com/servicenow-training).
**3. Versatile Career Paths**
Diverse Roles: ServiceNow offers multiple career paths, including roles in development, administration, consulting, and project management.
Cross-Industry Applicability: Skills in ServiceNow are applicable across various industries such as finance, healthcare, manufacturing, and government.
**4. Comprehensive Platform**
Wide Range of Applications: ServiceNow provides solutions not only for ITSM but also for HR service delivery, customer service management, security operations, and more.
Integration Capabilities: The platform integrates well with other enterprise systems, offering extensive opportunities for professionals to work on integration projects.
**5. Continuous Learning and Certification**
Certification Programs: ServiceNow offers a range of certifications (Certified System Administrator, Certified Application Developer, Certified Implementation Specialist, etc.) that help professionals validate their skills and knowledge.
Learning Resources: There are ample training resources available, including online courses, boot camps, and official training programs provided by ServiceNow and other training partners.
**6. Innovative and Evolving Platform**
Regular Updates: ServiceNow releases regular updates with new features and functionalities, ensuring that professionals stay on the cutting edge of technology.
Focus on Innovation: ServiceNow invests heavily in research and development, particularly in areas like artificial intelligence and machine learning, enhancing the platform's capabilities.
**7. Community and Support**
Active Community: There is a vibrant and supportive community of ServiceNow professionals, including forums, user groups, and events.
Knowledge Sharing: The community offers ample opportunities for networking, knowledge sharing, and professional development.
**8. Strategic Importance**
Critical Business Functions: ServiceNow solutions are critical for business operations, helping organizations improve efficiency, reduce costs, and enhance service delivery.
Visibility and Impact: Working with ServiceNow allows professionals to have a significant impact on the business processes and outcomes of their organizations.
**9. Remote Work Opportunities**
Flexibility: Many ServiceNow roles, especially those in development and consulting, offer opportunities for remote work, providing flexibility and work-life balance.
**Conclusion**
Overall, ServiceNow offers a robust and promising career path with numerous benefits, including high demand, competitive salaries, diverse opportunities, and the chance to work with a comprehensive and evolving platform. For IT professionals looking to specialize in a niche area with significant growth potential, ServiceNow is an excellent choice. | priya_rao_123 |
1,876,968 | 2024 Complete Full-Stack Developers Roadmap | Getting into tech may be nutcracking especially if you don't know your way about it. Things may be a... | 0 | 2024-06-04T21:33:16 | https://dev.to/code_duchess/2024-complete-full-stack-developers-roadmap-1co9 | webdev, javascript, beginners, programming | Getting into tech may be nutcracking especially if you don't know your way about it. Things may be a bit confusing and most Developers would wish they could go back in time and start their tech journey from afresh. I have saved you from your future what-ifs by writing this complete Full-Stack Developer road map, which involves possible stacks you can learn and where you can learn them. In this article, I am going to be covering stacks from the Frontend to the Backend and continuous things you can do to keep yourself busy while looking for that big job.
## FrontEnd Development
Frontend development has to do with the interactive UI (user interface) that communicates with the user. This is the part of the web the user can see and interact with. According to the Stack Overflow 2020 Annual Developer Survey, 37% of developers do frontend development. People going into tech often prefer learning a stack that is new and not overpopulated. Here are some recommended frontend stacks to get you started on your developer journey.
**Programming Language for Frontend Web Development**
1. **HTML (Hyper Text Markup Language):** I recommend HTML for anyone transitioning into frontend development. It is the ABC of the frontend. HTML can be seen as the skeleton of the website; it provides the basic structure that makes up a website. The HTML tells the browser how to display elements like images, texts, and links. A typical HTML file contains various tags that give the webpage structure. Here is a link to kickstart your journey in understanding HTML: [w3schools](https://www.w3schools.com/html/html_intro.asp)
2. **CSS (Cascading Style Sheets):** CSS is the next step after learning HTML. I said earlier that "HTML can be seen as the skeleton of the website"; well, CSS is the flesh of the website. CSS is used to style the various tags/elements of the HTML, making the website more attractive, interactive, and user-friendly. Here is a link to learn CSS in depth: [w3schools](https://www.w3schools.com/css/css_intro.asp)
3. **JavaScript:** Many new developers mistake JavaScript for a short form of the Java programming language. JavaScript is a different programming language that was introduced into programming later. JavaScript has grown so vast that people now consider it a backend language as well. JavaScript is what makes the website functional; it brings the website to life. Take a house, for example: the structure of the house is the HTML; the CSS is the paint, the doors and windows, and the interior decoration; JavaScript is what powers the house so that it has running water and electricity. Here is a YouTube video to expand your knowledge of JavaScript: [SuperSimpleDev](https://www.youtube.com/watch?v=EerdGm-ehJQ&pp=ygUWamF2YXNjcmlwdCBmdWxsIGNvdXJzZQ%3D%3D)
4. **CSS Frameworks**: To make writing code easier, faster, and more efficient, I recommend you learn a CSS framework. There are many CSS frameworks, but the most used are Bootstrap and Tailwind CSS. Bootstrap helps with grid systems and responsive designs, but compared to Tailwind CSS it can feel heavy and complex to write. Tailwind CSS is widely preferred and is excellent for styling. Both have excellent documentation and community support. [Tailwind CSS Documentation](https://v2.tailwindcss.com/docs) and [Bootstrap Documentation](https://getbootstrap.com/docs/4.1/getting-started/introduction/)
5. **React**: React is a famous JavaScript library used for building dynamic user interfaces. React lets you break your code into components, keeping the codebase clean and well organized. Learn React on YouTube for free with [FreeCodeCamp](https://www.youtube.com/watch?v=bMknfKXIFA8&pp=ygUFcmVhY3Q%3D)
6. **Next.js:** After mastering React, learn Next.js for server-side rendering and static site generation. The Next.js documentation offers a getting-started guide. [NextJS Docs](https://nextjs.org/docs)
## Backend Development
Back-end Development means working on server-side software, which focuses on everything you can't see on a website. Back-end developers ensure the website performs correctly, focusing on databases, back-end logic, application programming interface (APIs), architecture, and servers.
1. **Node.js:** Node.js is your go-to starting point in backend development. Node.js is a runtime environment, that is, an environment in which a program or an application is executed. Node.js allows you to run JavaScript outside browsers like Firefox or Chrome, which lets frontend developers build backend programs with a language they are already familiar with. Node.js can be seen as the backbone of the backend, running JavaScript on the server.
2. **Database:** MongoDB is one of the most common databases, used to store documents in a JSON-like format. MongoDB makes it easy for developers to store structured or unstructured data. Learn [MongoDB](https://www.mongodb.com/resources/products/fundamentals/why-use-mongodb#:~:text=MongoDB%20is%20built%20on%20a,like%20format%20to%20store%20documents.).
3. **PHP:** PHP is a dynamic, interpreted scripting language for building interactive websites on the server. Despite haters pronouncing it dead, it remains one of the most popular languages for backend development, powering content management systems like WordPress and Wikipedia. PHP is a widely used open-source general-purpose scripting language that is especially suited for web development and can be embedded into HTML. Learn [PHP](https://www.php.net/manual/en/intro-whatis.php)
4. **Python:** Python is a high-level interpreted language famous for its zen-like code. Python's simple, easy-to-learn syntax emphasizes readability and therefore reduces the cost of program maintenance. It is commonly used to build server-side applications, such as web apps with the Django framework, and it is a language of choice for big data analysis and machine learning. Learn [Python](https://www.python.org/doc/essays/blurb/)
## Full-Stack Development
A full-stack developer is a developer or engineer who can build both the front end and the back end of a website. By integrating front-end and back-end knowledge, you can focus on creating full-stack applications. Use the technologies you have learned to build robust, scalable web applications.
## Mobile Application Development
**1. React Native:** Transition into Mobile Development with React Native, allowing you to reuse JavaScript knowledge. The official [React Native documentation](https://reactnative.dev/) is a good starting point.
**Continuous Learning and Specialization**
The tech landscape evolves rapidly, so continuous learning is crucial. Focus on areas of interest or where job opportunities are growing. Participate in open-source projects, contribute to forums, and attend meetups or conferences to stay current.
This roadmap provides a structured approach to becoming a versatile full-stack developer capable of handling a wide range of web and Mobile Development challenges. Remember, the key to success in this field is consistent practice and a willingness to adapt to new technologies and methodologies.
**Resources**
https://survey.stackoverflow.co/2020
https://www.coursera.org/articles/back-end-developer#:~:text=What%20is%20back%2Dend%20development,)%2C%20architecture%2C%20and%20servers
https://www.python.org/doc/essays/blurb/
https://www.youtube.com/@Fireship
https://youtu.be/q-xS25lsN3I?si=KpdKsN1f35fV8Df2 | code_duchess |
1,877,256 | Welcome Me | Hi everyone, I'm excited to have discovered this space, I'm hoping to learn from everyone here and... | 0 | 2024-06-04T21:25:09 | https://dev.to/hacksoul/welcome-me-2oa3 | Hi everyone, I'm excited to have discovered this space, I'm hoping to learn from everyone here and also contribute to on going discussion in areas such as Infomation Security, Machine Learning, Compliance, and many more.
Thanks for having me Guys!☺️ | hacksoul | |
1,877,252 | Revolutionizing Business Intelligence: Unicloud’s Pioneering Approach to Cloud-Enabled Data Analytics and AI | The fusion of cloud computing with data analytics and AI is a significant milestone in the... | 0 | 2024-06-04T21:16:55 | https://dev.to/unicloud/revolutionizing-business-intelligence-uniclouds-pioneering-approach-to-cloud-enabled-data-analytics-and-ai-5hge | ai | The fusion of cloud computing with data analytics and AI is a significant milestone in the technological evolution, marking a new era in business data handling. This integration is pivotal in transforming the way organizations process, analyze, and utilize data for strategic decisions. Unicloud is a key player in this transformative journey, providing sophisticated cloud solutions that significantly enhance the capabilities of data analytics and AI. Their offerings are not just about storing and processing vast amounts of data but also about leveraging the cloud’s computational power to unlock innovative possibilities in [AI and analytics](https://unicloud.co/blog/revolutionizing-business-intelligence-uniclouds-pioneering-approach-to-cloud-enabled-data-analytics-and-ai/). This approach enables businesses to harness the full potential of their data, leading to improved operational efficiencies, better customer insights, and a competitive edge in the market. With [Unicloud’s advanced solutions](https://unicloud.co/), companies can efficiently navigate the complexities of big data, apply AI-driven analytics, and achieve new levels of agility and insight in their operations.
**Unleashing the Potential of Cloud Computing: Advanced Analytics and AI Solution**
**1. Enhanced Data Processing Capabilities:** Unicloud’s advanced cloud infrastructure is specifically designed to manage large volumes of data and intricate AI algorithms. Its scalable resources allow for efficient and effective data analysis, critical for AI-driven operations. This feature not only enhances the speed of data processing but also ensures accuracy and reliability, which are essential for AI and machine learning applications.
**2. Agile AI Development Environment:** Unicloud’s cloud platform supports a rapid and flexible AI development process. It enables developers to quickly iterate and deploy AI models, reducing time-to-market. The platform is equipped with a comprehensive suite of AI tools and frameworks, encouraging innovation and allowing developers to experiment with new AI technologies in a secure and robust environment.
**3. Real-time Data Analytics:** The capability to process and analyze data in real-time is a standout feature of Unicloud’s solutions. It empowers organizations to make swift, informed decisions based on current data, thereby enhancing operational efficiency and enriching customer experiences. This real-time processing is crucial for applications that require immediate data interpretation, such as predictive analytics and dynamic market analysis.
**4. Advanced Machine Learning Tools:** Unicloud provides a wide array of sophisticated machine learning tools that cater to both seasoned data scientists and those with limited technical expertise. These tools facilitate the creation of advanced predictive models, making AI more accessible and usable across various business functions. They are designed to simplify complex data modeling processes, making it easier for businesses to [leverage AI for predictive analytics](https://unicloud.co/data-&-ai.html) and decision-making.
**5. Seamless Integration of Data Sources:** The cloud platform is engineered to seamlessly integrate a variety of data sources, including emerging technologies like IoT devices. This integration is crucial for conducting comprehensive analytics, as it combines disparate data sets to provide a more holistic view. This unified approach to data analysis enables businesses to gain deeper insights, driving more effective decision-making processes.
**6. Data Security and Compliance:** Data security and regulatory compliance are top priorities for Unicloud. The cloud solutions are meticulously designed to safeguard sensitive data, adhering to the latest industry standards and regulations. This commitment to security ensures that businesses can trust Unicloud with their critical data, knowing it is protected against threats and compliant with regulatory requirements.
**7. Customizable AI Solutions:** Understanding that each business has unique needs, Unicloud offers customizable AI solutions. These tailor-made solutions allow businesses to adapt the AI and analytics capabilities to their specific operational requirements. This flexibility ensures that the AI solutions provided by Unicloud are not only effective but also highly relevant to the specific challenges and opportunities faced by each business.
**Conclusion**
Unicloud’s cloud-based solutions go beyond mere data storage and processing; they create an ecosystem where data analytics and AI thrive. By leveraging Unicloud’s robust cloud computing power, businesses can explore new horizons of data intelligence and unlock groundbreaking possibilities. This leads to transformative business strategies, informed decision-making, and a distinct competitive advantage in a data-driven world. Unicloud’s commitment to this integration marks a pivotal step in advancing how businesses use technology for growth and innovation. | unicloud |
1,877,251 | rest-api-with-express-typescript | https://rsbh.dev/blogs/rest-api-with-express-typescript | 0 | 2024-06-04T21:15:43 | https://dev.to/dingzhanjun/rest-api-with-express-typescript-563d | node, rest, express, swagger | https://rsbh.dev/blogs/rest-api-with-express-typescript | dingzhanjun |
1,877,250 | Upstream preview: Government carrot, government stick: Exploring two contrasting approaches to improving open source security | Upstream is tomorrow on June 5, and wow, our schedule is brillant. We’re giving you a sneak preview... | 0 | 2024-06-04T21:15:03 | https://dev.to/tidelift/upstream-preview-government-carrot-government-stick-exploring-two-contrasting-approaches-to-improving-open-source-security-3g16 | upstream, opensource, government, security | <p style="font-size: 18px;"><em>Upstream is </em><strong><em>tomorrow</em></strong><em> on June 5, and wow, our schedule is brillant. We’re giving you a sneak preview into some of the talks and the speakers giving them via posts like these. RSVP </em><a href="https://upstream.live/register?__hstc=23643813.d1ddc767e9f4955f3bdd2f1c64c72f8c.1654699542897.1716388421586.1716392544277.1287&__hssc=23643813.2.1716392544277&__hsfp=1649118565" rel="noopener" target="_blank"><em><span>now</span></em></a><em>!</em></p>
<p style="font-size: 18px;">Governments are starting to believe that their traditional hands-off approach to open source no longer makes sense. But what then? Europe is providing examples of both “carrot” and “stick”: providing incentives to people and organizations to do more security work (i.e. the carrot) or penalizing them for not doing the work or after security incidents happen (i.e. the stick).</p>
<p style="font-size: 18px;">In this fireside chat, Tidelift co-founder and Upstream host Luis Villa sits down with <a href="https://upstream.live/speaker-2024/fiona-krakenbuerger" rel="noopener" target="_blank"><span>Fiona Krakenbürger</span></a> from the Sovereign Tech Fund and <a href="https://upstream.live/speaker-2024/mirko-boehm?hsLang=en" rel="noopener" target="_blank"><span>Mirko Boehm</span></a> from the Linux Foundation Europe to discuss the impending CRA legislation in the EU (the biggest government stick to date) and the Sovereign Tech Fund’s “carrot” approach to funding open security.</p>
<p style="font-size: 18px;">If this conversation about the contrast between the “carrot” of support and the “stick” of regulation piques your interest, be sure to join us at <a href="https://upstream.live/"><span>Upstream</span></a> on June 5! </p>
<p style="text-align: center;"><a href="https://upstream.live/" style="padding: 10px 30px; background-image: linear-gradient(#22C994 0%, #22C994 100%); color: #ffffff; border-radius: 30px; text-decoration: none; font-weight: bold;" rel="noopener" target="_blank">RSVP now</a></p>
<h4 style="font-size: 18px;">About the speakers</h4>
<p style="font-size: 18px;">Mirko Boehm is a free and open source software contributor, community manager, licensing expert and researcher, with contributions to major open source projects like the KDE Desktop (since 1997, including several years on the KDE e.V. board), the Open Invention Network, the Open Source Initiative and others. He is a visiting lecturer and researcher on free and open source software at the Technical University of Berlin. He joined the Linux Foundation in June 2023 as senior director for community development for Linux Foundation Europe, where he focuses on driving engagement and collaboration between all European open source stakeholders. </p>
<p style="font-size: 18px;">Fiona Krakenbürger is the co-founder of the Sovereign Tech Fund, an initiative funded by the German Federal Ministry of Economic Affairs and Climate Action, to support Open Source Infrastructure in the Public Interest. Fiona has a background in Open Source Funding and has helped bootstrap and implement programs in Germany and the US. Besides her career in Open Source Funding, Fiona supported and founded various initiatives for more diversity in tech communities. She serves as a member on several boards and committees in the open source and technology ecosystem.</p> | kristinatidelift |
1,877,697 | You need to know about these amazing ai websites | In the digital age, high-quality images are essential for everything from professional photography to... | 0 | 2024-06-04T21:00:00 | https://dev.to/valterseu/you-need-to-know-about-these-amazing-ai-websites-1f84 | ai, development, devops, design |
In the digital age, high-quality images are essential for everything from professional photography to social media posts. However, enhancing image resolution can often be a challenging and costly endeavor. Enter Upscayl, a free AI-powered image upscaler that promises to make high-quality image enhancement accessible to everyone. In this blog post, we'll explore how Upscayl works, its key features, and why it's a game-changer for anyone needing superior image quality.
{% embed https://www.youtube.com/shorts/aS_tJy5L8Pw %}
Follow for more: https://www.youtube.com/@valters_eu
X: https://x.com/valters_eu
Website: https://www.valters.eu
What is https://upscayl.org?
Upscayl is a cutting-edge AI image upscaling tool that allows users to improve the resolution of their images effortlessly. Utilizing advanced artificial intelligence algorithms, Upscayl enhances image quality by increasing resolution and refining details, all while maintaining the integrity of the original image.
Key Features:
1. AI-Powered Upscaling: Upscayl employs sophisticated AI models to upscale images up to four times their original resolution.
2. User-Friendly Interface: The platform is designed for ease of use, making it accessible to both tech-savvy users and beginners.
3. Batch Processing: Upscayl supports batch processing, allowing users to upscale multiple images simultaneously, saving time and effort.
4. High-Quality Output: The AI technology ensures that the upscaled images retain high quality and sharpness, making them suitable for professional use.
5. Free to Use: Unlike many image upscaling tools that come with a hefty price tag, Upscayl offers its services for free, making it a cost-effective solution for all.
How to Use Upscayl:
Using Upscayl is straightforward. Simply upload the image you wish to enhance, select the desired upscaling factor, and let the AI work its magic. The platform provides a preview of the upscaled image, allowing users to compare it with the original and make adjustments if necessary. Once satisfied, users can download the high-resolution image directly from the site.
Conclusion:
Upscayl stands out as a powerful, user-friendly, and free AI image upscaler that meets the needs of various users, from professional photographers to casual users. Its advanced AI technology, combined with a straightforward interface and high-quality output, makes it a must-have tool for anyone looking to enhance their images. Visit Upscayl today to experience the future of image upscaling.
https://replicate.com
Replicate is a powerful platform that simplifies the process of running, fine-tuning, and deploying AI models. With just a single line of code, users can leverage thousands of open-source models for various applications such as generating images, text, videos, music, and speech. The platform also allows for easy fine-tuning of models with custom data, making it possible to create models tailored to specific needs. Additionally, Replicate offers scalable deployment, handling the complexities of infrastructure and ensuring efficient resource usage. Learn more about how you can integrate AI into your projects at Replicate.
Overview of Replicate:
Replicate is designed to democratize access to cutting-edge AI technologies. The platform hosts a vast collection of open-source models contributed by a vibrant community. These models cover a wide range of applications, including image generation, text generation, video creation, music composition, and speech synthesis. With just one line of code, developers can run these models and incorporate their functionalities into their applications.
1. Wide Range of Models: Replicate offers thousands of models that are ready to use in production. These models are not just demos; they are fully functional and come with production-ready APIs.
2. Ease of Use: Running a model on Replicate requires minimal setup. Users can start with a single line of code, making it accessible even for those who are not machine learning experts.
3. Customization: Developers can fine-tune existing models with their own data to create bespoke models tailored to specific tasks. This capability is crucial for businesses looking to develop unique solutions.
4. Scalability: Replicate handles the complexities of scaling AI models. Whether a model needs to handle a few requests or millions, Replicate scales up and down automatically to meet demand.
5. Cost Efficiency: Users only pay for the compute resources they use, making it a cost-effective solution for deploying AI models.
Getting started with Replicate is straightforward. The platform supports various programming languages, including Python and JavaScript, making it versatile for different development environments. | valterseu |
1,877,247 | Entendendo o Uso de Offset em Tópicos do Kafka | O Apache Kafka é uma plataforma de streaming distribuída que permite a publicação, armazenamento e... | 0 | 2024-06-04T20:59:22 | https://dev.to/romerodias/entendendo-o-uso-de-offset-em-topicos-do-kafka-82 | O Apache Kafka é uma plataforma de streaming distribuída que permite a publicação, armazenamento e processamento de fluxos de registros em tempo real. Um dos conceitos fundamentais no Kafka é o "offset", que desempenha um papel crucial na forma como os dados são consumidos e gerenciados dentro dos tópicos. Neste post, vamos explorar o que é um offset, como ele funciona e por que é importante para o processamento de dados no Kafka.
What is an Offset?
In Kafka, data is organized into topics, which are divided into partitions. Each message within a partition is assigned a unique sequential number called an "offset". The offset is essentially an identifier that marks the position of a message within a partition. It is immutable and always increasing, which means that each new message received in a partition will have a higher offset than the previous one.
How Does the Offset Work?
When a producer sends a message to a topic, that message is stored in one of the topic's partitions and receives an offset. For example, if a producer sends three messages to a partition, those messages may receive the offsets 0, 1, and 2, respectively.
Consumers, in turn, use offsets to read messages from the partitions. Each consumer keeps track of the last offset read in each partition, allowing it to know where to resume reading the next time it fetches messages. This is crucial to ensure that messages are processed in order and that no message is lost or processed twice.
Why Offsets Matter
Flow Control: Offsets allow consumers to read messages at their own pace. They can pause, resume, or restart reading from a specific offset, providing precise control over the data flow.
Failure Recovery: In case of failure, consumers can resume reading from the last committed offset, ensuring that no message is lost. This is especially important in distributed systems, where failures can occur at any time.
Parallel Processing: Since topics are divided into partitions and each partition has its own offsets, multiple consumers can process messages in parallel, increasing the efficiency and scalability of the system.
Data Reprocessing: In some cases, it may be necessary to reprocess old messages. With offsets, consumers can go back to a specific point in time and reread messages from that point, making data reprocessing easier.
Offset Management
Kafka offers several options for managing offsets:
Automatic Storage in Kafka: By default, Kafka stores consumer offsets in an internal topic called __consumer_offsets. This makes it easy to recover offsets in case of failures.
External Storage: Consumers can also choose to store offsets in an external database or a distributed storage system, providing greater flexibility and control.
Manual and Automatic Commit: Consumers can commit offsets manually or configure Kafka to do so automatically at regular intervals. Manual commits offer more control, while automatic commits simplify the implementation.
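The mechanics described above can be illustrated with a tiny in-memory sketch (plain JavaScript, not a real Kafka client): a partition is an append-only log, and a consumer only remembers the next offset it wants to read, which it can commit or rewind.

```javascript
// A partition is an append-only log; each message's offset is its index.
class Partition {
  constructor() {
    this.log = [];
  }
  append(message) {
    this.log.push(message);
    return this.log.length - 1; // the offset assigned to this message
  }
  read(fromOffset) {
    return this.log.slice(fromOffset);
  }
}

// A consumer only has to remember the next offset it wants to read.
class Consumer {
  constructor(partition) {
    this.partition = partition;
    this.committedOffset = 0;
  }
  poll() {
    return this.partition.read(this.committedOffset);
  }
  commit(count) {
    this.committedOffset += count; // confirm these messages as processed
  }
  seek(offset) {
    this.committedOffset = offset; // rewind (or skip ahead) to reprocess
  }
}

const partition = new Partition();
["order-1", "order-2", "order-3"].forEach((m) => partition.append(m));

const consumer = new Consumer(partition);
console.log(consumer.poll()); // all three messages (offsets 0..2)
consumer.commit(3);
console.log(consumer.poll()); // [] , nothing new to read
consumer.seek(0);
console.log(consumer.poll()); // all three again: reprocessing from offset 0
```

Real Kafka adds replication, consumer groups, and durable commit storage on top of this, but the core bookkeeping is the same: an ever-growing log plus a per-consumer position in it.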
Conclusion
The offset is a fundamental component of Apache Kafka, enabling precise control over how messages are read and processed. It guarantees ordering, failure recovery, and system scalability, making Kafka a robust platform for real-time data streaming. Understanding and managing offsets effectively is essential to get the most out of Kafka's capabilities and to build resilient, efficient data processing systems.
We hope this post has helped clarify the role and importance of offsets in Kafka.
#kafka #offset | romerodias | |
1,877,246 | SHBJSKSBJBDAKDABJKABADA | SHBJSKSBJBDAKDABJKABADASHBJSKSBJBDAKDABJKABADASHBJSKSBJBDAKDABJKABADASHBJSKSBJBDAKDABJKABADASHBJSKSBJ... | 0 | 2024-06-04T20:59:16 | https://dev.to/brucezelda/shbjsksbjbdakdabjkabada-1e4n | SHBJSKSBJBDAKDABJKABADASHBJSKSBJBDAKDABJKABADASHBJSKSBJBDAKDABJKABADASHBJSKSBJBDAKDABJKABADASHBJSKSBJBDAKDABJKABADA
SHBJSKSBJBDAKDABJKABADA
[https://github.com/h4ikyu-du3p5t3r-b4ttl3-index](https://github.com/h4ikyu-du3p5t3r-b4ttl3-index)
[https://github.com/m4mel-kannnja-crawel-index](https://github.com/m4mel-kannnja-crawel-index)
[https://github.com/hAikYu-an1m3-f1Lm-index](https://github.com/hAikYu-an1m3-f1Lm-index)
[https://github.com/kingD0m-0f-m0ny3t-index](https://github.com/kingD0m-0f-m0ny3t-index)
[https://github.com/maxfariosa-s4Ga-jjkuikk-aleX4dria-index](https://github.com/maxfariosa-s4Ga-jjkuikk-aleX4dria-index)
[https://github.com/garf1eLd-uc1n6-orenn-index](https://github.com/garf1eLd-uc1n6-orenn-index)
[https://github.com/asyduiab-sabufmyboobduas-najsn-index](https://github.com/asyduiab-sabufmyboobduas-najsn-index)
| brucezelda | |
1,877,245 | Golang REST API boilerplate | Upon meticulous investigation into the top Golang RESTful API project architectures, I have... | 0 | 2024-06-04T20:58:49 | https://dev.to/haithambh/golang-rest-api-boilerplate-165k | go, restapi, webdev |
Upon meticulous investigation into the top Golang RESTful API project architectures, I have discovered that the Gophers community shares a number of common structures. I'll share my choices with you, along with a boilerplate that makes use of **MongoDB** and the **Golang Echo framework**.
## 1. Project initialization
First, we need to initialize the Go project with:
```
$ go mod init github.com/HichOoMa/golang-boilerplate
```
This command will initialize Go modules and help you manage your project's dependencies.
## 2. Files Structure
This is the folder structure of the code that I usually work with:
```
.
└── golang-boilerplate
├── cmd
│ ├── api
│ │ └── main.go
│ └── ...
├── api
│ ├── handlers
│ ├── middlewares
│ ├── routes
│ └── init.go
├── assets
│ └── ...
├── internal
│ ├── app
│ │ ├── config
│ │ ├── database
│ │ ├── models
│ │ ├── enums
│ │ ├── business
│ │ └── ...
│ └── lib
│ └── ...
├── pkg
│ ├── cloud
│ ├── jwt
│ ├── mailer
│ └── ...
├── test
│ └── ...
├── go.mod
├── go.sum
└── Makefile
```
### a. `/cmd`
In a Go project, the "cmd" folder is a commonly used convention for organizing the entry points or main packages of your application. It stands for "command" and is typically used to house executable commands or programs within your project.
The purpose of organizing your main packages under "cmd" is to separate them from the reusable packages stored in the "pkg" or "internal" directories. This makes it clearer which parts of your code base are intended to be standalone commands or programs versus shared libraries or packages.
Using the "cmd" folder structure also aligns well with the Go ecosystem's conventions for package organization and helps keep your project modular and maintainable.
### b. `/api`
The api folder contains the necessary elements of our API, such as handlers, middlewares and routes.
#### Server Creation
First of all, we need to create our TCP server to listen for the HTTP requests coming to our local port. As already mentioned, we will use the Echo framework.
First we need to install our dependencies:
```
$ go get github.com/labstack/echo/v4
```
Then we can start our server by creating our `init()` function in `api/init.go`
```go
package api
import (
	"github.com/labstack/echo/v4"
)
func init() {
e := echo.New()
// add server routes and middleware
e.Logger.Fatal(e.Start(":1323"))
}
```
Now we are listening to incoming requests on `localhost:1323`.
#### `/handlers`
In REST API projects, the "handlers" folder is commonly used to organize HTTP request handlers or controllers. This folder typically contains Go files that define the logic for handling incoming HTTP requests, parsing request parameters, validating input, interacting with the database, and returning responses.
This is a base example of an echo framework handler
```go
func hello(c echo.Context) error {
// you can do some business here
return c.String(http.StatusOK, "Hello, World!")
}
```
#### `/middlewares`
Middlewares are likewise used to wrap HTTP request handling and add layers to specific routes for different purposes such as security, presentation, error handling, and so on.
```go
func helloMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
return func(c echo.Context) error {
// do some business here
return next(c)
}
}
```
#### `/routes`
Routes are the part of the code that attaches a path ( "/example", "/order/edit", ... ) to the middlewares and handlers that process HTTP requests.
> It can be just a file, or part of the init function, if your app contains a small number of routes
```go
e.GET("/order", getOrder)
e.POST("/order", createOrder)
e.PUT("/order", updateOrder)
e.DELETE("/order", deleteOrder)
```
### c. `/assets`
You can store any assets or resources you need in your application, such as images, CSV files, logos, etc.
### d. `/internal`
This folder contains mainly private application and library code. This is the code you don't want others importing in their applications or libraries. Note that this layout pattern is enforced by the Go compiler itself. See the Go 1.4 [`release notes`](https://golang.org/doc/go1.4#internalpackages) for more details. Note that you are not limited to the top level `internal` directory. You can have more than one `internal` directory at any level of your project tree.
You can optionally add a bit of extra structure to your internal packages to separate your shared and non-shared internal code. It's not required (especially for smaller projects), but it's nice to have visual clues showing the intended package use. Your actual application code can go in the `/internal/app` directory (e.g., `/internal/app/myapp`) and the code shared by those apps in the `/internal/pkg` directory (e.g., `/internal/pkg/myprivlib`).
Examples:
* https://github.com/hashicorp/terraform/tree/main/internal
* https://github.com/influxdata/influxdb/tree/master/internal
* https://github.com/perkeep/perkeep/tree/master/internal
* https://github.com/jaegertracing/jaeger/tree/main/internal
* https://github.com/moby/moby/tree/master/internal
* https://github.com/satellity/satellity/tree/main/internal
* https://github.com/minio/minio/tree/master/internal
#### `/internal/pkg`
Examples:
* https://github.com/hashicorp/waypoint/tree/main/internal/pkg
### e. `/pkg`
This one contains library code that's OK for external applications to use (e.g., `/pkg/jwt`). Other projects will import these libraries expecting them to work, so think twice before you put something here :-) Note that the `internal` directory is a better way to ensure your private packages are not importable because it's enforced by Go. The `/pkg` directory is still a good way to explicitly communicate that the code in that directory is safe for use by others. The [`I'll take pkg over internal`](https://travisjeffery.com/b/2019/11/i-ll-take-pkg-over-internal/) blog post by Travis Jeffery provides a good overview of the `pkg` and `internal` directories and when it might make sense to use them.
It's also a way to group Go code in one place when your root directory contains lots of non-Go components and directories making it easier to run various Go tools (as mentioned in these talks: [`Best Practices for Industrial Programming`](https://www.youtube.com/watch?v=PTE4VJIdHPg) from GopherCon EU 2018, [GopherCon 2018: Kat Zien - How Do You Structure Your Go Apps](https://www.youtube.com/watch?v=oL6JBUk6tj0) and [GoLab 2018 - Massimiliano Pippi - Project layout patterns in Go](https://www.youtube.com/watch?v=3gQa1LWwuzk)).
Note that this is not a universally accepted pattern, and for every popular repo that uses it you can find 10 that don't. It's up to you to decide if you want to use this pattern or not. Regardless of whether or not it's a good pattern, more people will know what you mean than not. It might be a bit confusing for some new Go devs, but it's a pretty simple confusion to resolve, and that's one of the goals of this project layout.
It's OK not to use it if your app project is really small and an extra level of nesting doesn't add much value (unless you really want to). Think about it when it's getting big enough and your root directory gets pretty busy (especially if you have a lot of non-Go app components).
The `pkg` directory origins: The old Go source code used to use `pkg` for its packages and then various Go projects in the community started copying the pattern (see [`this`](https://twitter.com/bradfitz/status/1039512487538970624) Brad Fitzpatrick's tweet for more context).
Examples :
* https://github.com/containerd/containerd/tree/main/pkg
* https://github.com/slimtoolkit/slim/tree/master/pkg
* https://github.com/telepresenceio/telepresence/tree/release/v2/pkg
### f. `/test`
Additional external test apps and test data. Feel free to structure the `/test` directory any way you want. For bigger projects it makes sense to have a data subdirectory. For example, you can have `/test/data` or `/test/testdata` if you need Go to ignore what's in that directory. Note that Go will also ignore directories or files that begin with "." or "_", so you have more flexibility in terms of how you name your test data directory.
Examples:
* https://github.com/openshift/origin/tree/master/test (test data is in the `/testdata` subdirectory) | haithambh |
1,877,244 | How to use Notion as a backend in a project using Next.js | How to use Notion as backend In the world of web development, the choice of backend is... | 27,625 | 2024-06-04T20:56:24 | https://www.johnatanortiz.tech/blog/how-to-use-notion-as-a-backend-in-a-project-using-next.js | # How to use Notion as a backend
In the world of web development, the choice of backend is crucial for the functioning and scalability of an application. However, what if we could harness the capabilities of a versatile tool like Notion as the backend for our applications? In this article, we will explore how we can integrate Notion as the backend in Next.js, a combination that offers flexibility, ease of use, and creative potential for web projects.
## Why Notion and Next.js?
- Notion: Notion is an all-in-one platform that allows creating documents, databases, wikis, and more, all in one place. Its intuitive interface and powerful features make it an attractive option for managing content.
- Next.js: Next.js is a React framework that allows building fast and scalable web applications with ease. Its focus on server-side rendering and static page generation makes it a popular choice for modern web projects.
## Steps to incorporate Notion as Backend in Next.js:
### 1. Step 1: Notion Setup:
- Create an account on Notion if you haven't already.
- Create a database in Notion to store your application data.
- Define database properties according to your application needs, such as text, numbers, options, etc.
### 2. Step 2: Notion API Setup:
- Access the Notion developer area and create an integration for your application.
- Get your Notion integration token, which will be used to authenticate requests to the API.
### 3. Step 3: Next.js Project Setup:
- Create a new Next.js project using the Next.js CLI or your preferred method.
- Install necessary dependencies to make requests to the Notion API, such as axios or another similar library.
### 4. Step 4: Integration with Notion API:
- Use the Notion integration token to make requests to the API and fetch or modify data in your database.
- Implement functions in your Next.js project to perform CRUD (Create, Read, Update, Delete) operations on the Notion database as needed.
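To make step 4 concrete, here is a minimal sketch of how such a request could be assembled and sent with axios. The database ID and token are placeholders you would load from environment variables, and the endpoint shown is Notion's public REST API database query (`POST /v1/databases/{database_id}/query`); treat the exact filter fields as assumptions to verify against the official Notion docs.

```javascript
const NOTION_API_BASE = "https://api.notion.com/v1";

// Pure helper: builds an axios request config for querying a Notion database.
// databaseId and token are placeholders (assumptions for this example).
function buildDatabaseQuery(databaseId, token, filter) {
  return {
    method: "POST",
    url: `${NOTION_API_BASE}/databases/${databaseId}/query`,
    headers: {
      Authorization: `Bearer ${token}`,
      "Notion-Version": "2022-06-28", // one published Notion API version
      "Content-Type": "application/json",
    },
    data: filter ? { filter } : {},
  };
}

// Usage (requires a real integration token, so it is left commented out):
// const axios = require("axios");
// const res = await axios(
//   buildDatabaseQuery(process.env.NOTION_DB_ID, process.env.NOTION_TOKEN)
// );
// console.log(res.data.results); // the pages stored in the database
```

Keeping the request construction in a pure helper like this makes it easy to unit-test without hitting the network, and to reuse across your CRUD functions.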
### 5. Step 5: Data Consumption in Next.js:
- Utilize data obtained from Notion to render dynamic content on your Next.js pages.
- Design and style your React components using data retrieved from Notion to create an engaging user experience.
## Conclusion:
The combination of Notion as the backend and Next.js as the frontend offers a powerful and flexible way to build modern web applications. By leveraging Notion's capabilities to manage data and content, along with the speed and scalability of Next.js, developers can efficiently and effectively create sophisticated digital experiences. By following the steps mentioned above, you can incorporate Notion as the backend into your Next.js projects and unlock a new level of creativity and functionality in your web applications. Start experimenting today! | johnatan_stevenortizsal | |
1,877,243 | Shadcn-ui codebase analysis: How is the hero section built on ui.shadcn.com website? | I wanted to find out how the hero section is developed on ui.shadcn.com, so I looked at its source... | 0 | 2024-06-04T20:50:16 | https://dev.to/ramunarasinga/shadcn-ui-codebase-analysis-how-is-the-hero-section-built-on-uishadcncom-website-1hom | javascript, opensource, nextjs, shadcnui | I wanted to find out how the hero section is developed on [ui.shadcn.com](http://ui.shadcn.com), so I looked at its [source code](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx). Because shadcn-ui is built using app router, the files I was interested in were [Layout.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx) and [page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/page.tsx).
In this article, we will find out the below items:
1. Where the code related to the hero section is located.
2\. Hero section code snippet.
3\. PageHeader component.
> _Want to learn how to build shadcn-ui/ui from scratch?_ [_Solve challenges_](https://tthroo.com/build-from-scratch) _to build shadcn-ui/ui from scratch. If you are stuck or need help,_ [_there is a solution_](https://tthroo.com/build-from-scratch)_. Check out_ [_build-from-scratch_](https://github.com/Ramu-Narasinga/build-from-scratch) _and give it a star if you like it._
Where is the code related to the hero section?
----------------------------------------------
Hero section code is in [page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/page.tsx) in a route group named (app)

Hero section code snippet
-------------------------
```
<PageHeader>
<Announcement />
<PageHeaderHeading>Build your component library</PageHeaderHeading>
<PageHeaderDescription>
Beautifully designed components that you can copy and paste into your
apps. Accessible. Customizable. Open Source.
</PageHeaderDescription>
<PageActions>
<Link href="/docs" className={cn(buttonVariants())}>
Get Started
</Link>
<Link
target="_blank"
rel="noreferrer"
href={siteConfig.links.github}
className={cn(buttonVariants({ variant: "outline" }))}
>
<Icons.gitHub className="mr-2 h-4 w-4" />
GitHub
</Link>
</PageActions>
</PageHeader>
```
PageHeader, Announcement, PageHeaderHeading, PageHeaderDescription, PageActions are imported as shown below
```
import { Announcement } from "@/components/announcement"
import {
PageActions,
PageHeader,
PageHeaderDescription,
PageHeaderHeading,
} from "@/components/page-header"
```
PageHeader component.
---------------------
PageHeader is more like a wrapper for h1 for heading, Balance (more on this one in some other article) for description etc.,
```
import Balance from "react-wrap-balancer"
import { cn } from "@/lib/utils"
function PageHeader({
className,
children,
...props
}: React.HTMLAttributes<HTMLDivElement>) {
return (
<section
className={cn(
"mx-auto flex max-w-[980px] flex-col items-center gap-2 py-8 md:py-12 md:pb-8 lg:py-24 lg:pb-20",
className
)}
{...props}
>
{children}
</section>
)
}
function PageHeaderHeading({
className,
...props
}: React.HTMLAttributes<HTMLHeadingElement>) {
return (
<h1
className={cn(
"text-center text-3xl font-bold leading-tight tracking-tighter md:text-5xl lg:leading-[1.1]",
className
)}
{...props}
/>
)
}
function PageHeaderDescription({
className,
...props
}: React.HTMLAttributes<HTMLParagraphElement>) {
return (
<Balance
className={cn(
"max-w-[750px] text-center text-lg font-light text-foreground",
className
)}
{...props}
/>
)
}
function PageActions({
className,
...props
}: React.HTMLAttributes<HTMLDivElement>) {
return (
<div
className={cn(
"flex w-full items-center justify-center space-x-4 py-4 md:pb-10",
className
)}
{...props}
/>
)
}
export { PageHeader, PageHeaderHeading, PageHeaderDescription, PageActions }
```
Conclusion:
-----------
Shadcn-ui website's hero section code is found in page.tsx, and this code uses page-header.tsx components such as PageHeader, PageHeaderHeading, PageHeaderDescription, and PageActions.
About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx)
2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/page.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/page.tsx) | ramunarasinga |
1,874,782 | Quick Guide | MERN Stack Application Development | Developing a full-stack application offers a comprehensive understanding of application architecture.... | 0 | 2024-06-04T20:29:17 | https://dev.to/ishi_hisashi/quick-guide-mern-stack-application-development-4noe | webdev, react, mongodb, node | Developing a full-stack application offers a comprehensive understanding of application architecture. In this article, we will explore the development process of a full-stack application using the MERN stack, one of the most popular tech stacks today. This article will provide a hands-on guide for the steps to build a MERN stack application from scratch.
This article will aim to be as concise as possible to focus on providing an overview. So, let's get started!!🚀
### Introduction
This tutorial will begin with setting up the server using Node.js and Express.js (Step 1), connecting to MongoDB (Step 2), and creating APIs (Steps 3 and 4). Next, we will work on the frontend with React.js to retrieve data through the server and render it in the browser (Steps 5 and 6)
#### File structure
- frontend (folder)
- backend (folder)
- config.env
- .gitignore (_*to ignore .env and node_modules_)
### Step1 : Set up server with Node.js
Go to backend folder
```
$ cd backend
```
Create json file to set up server
```
$ npm init
```
Install packages
```
$ npm install express cors
```
Create index.js in backend folder and configure server
```
import express from "express";
import cors from "cors";
const app = express();
const PORT = 5500;
app.use(cors());
app.use(express.json());
app.listen(PORT, () => {
console.log(`App is listening on port ${PORT}`);
});
Make sure your package.json file contains the lines below.
I recommend using nodemon (`npm install --save-dev nodemon`), which restarts the server automatically on file changes.
```
"type": "module",
"scripts": {
"start": "node index.js",
"dev": "nodemon index.js"
},
```
Then, run the npm
```
$ npm run start
```
or
```
$ npm run dev
```
Finally, you will see the startup message in the console confirming the server is listening on port 5500.
### Step2 : Connect mongoDB
First, go to the MongoDB website and create a cluster.
Then install the dependencies.
_*'dotenv' is used to retrieve environment variables from an env file. This helps to hide passwords and API keys (such as the MongoDB password) when you push this project to a public repository on GitHub. Any information in the env file is ignored by Git in this case._
```
$ npm install mongoose dotenv
```
Write mongoDB password on config.env
```
PASSWORD=************
```
In index.js,
```
import mongoose from "mongoose";
import dotenv from "dotenv";
dotenv.config({ path: "../config.env" });
.....
mongoose
.connect( `mongodb+srv://<YOUR USER NAME>:${process.env.PASSWORD}@cluster0.jwm7sal.mongodb.net/?retryWrites=true&w=majority&appName=Cluster0`,
{}
)
.then(() => {
console.log("connected with DB successfully!");
})
.catch((error) => {
console.log(error);
});
```
If you're connected to MongoDB, you will see the console message 'connected with DB successfully!'. Congratulations🎉
### Step3 : Define Data model and create API
In this step, you will write the code that interacts with the DB. In this tutorial, the folders are divided into the following three:
- model : to define schema
- route : to define endpoint
- controller : to define function (action) based on particular HTTP request
Then, the `model`, `route`, and `controller` folders sit alongside `index.js` inside the `backend` folder.
##### model
Under the model folder, create testModel.js;
```
import mongoose from "mongoose";
// Shcema
const Schema = mongoose.Schema;
const testSchema = new Schema({
name: String,
age: Number,
});
export const Testmodel = mongoose.model("Testmodel", testSchema);
```
##### route
Under the route folder, create testRoute.js;
```
import express from "express";
import * as testController from "../controller/testController.js";
// Define route to execute it
const router = express.Router();
router.get("/", testController.readTests);
router.post("/", testController.createTest);
export default router;
```
##### controller
Under the controller folder, create testController.js. All the handlings related to the test collection are here.
```
import { Testmodel } from "../model/testModel.js";
export const createTest = async (req, res) => {
try {
const newTest = await Testmodel.create(req.body);
res.status(201).json({
status: "success",
data: {
newTest,
},
});
} catch (err) {
res.status(400).json({
status: "fail",
message: err,
});
}
};
export const readTests = async (req, res) => {
try {
const docs = await Testmodel.find({});
res.status(201).json({
status: "success",
data: {
docs,
},
});
} catch (err) {
res.status(400).json({
status: "fail",
message: err,
});
}
};
```
Then, give an endpoint which is corresponding to this collection at index.js
```
import testRoute from "./route/testRoute.js";
.....
app.use("/test", testRoute);
```
### Step 4 : Examine if the API is working
To examine it, this tutorial uses Postman as an API platform.
##### test 1 : Create a document
Send an HTTP POST request to `http://localhost:5500/test` with a JSON body containing a name and an age, and the document is created successfully.
The new document can also be seen on MongoDB Atlas.
##### test 2 : Read documents
Likewise, the documents are read with a GET request to the same endpoint.
Now we have completed the simplest possible setup for the server and DB. Next, let's build the frontend.
### Step 5 : Set up React.js
At your root folder, create and start the React app with the commands below
```
$ npx create-react-app frontend
$ cd frontend
$ npm start
```
(clean up default code on App.js, for the sake of simplicity)
### Step 6 : Interact with server
Once React is running, let's try to connect to the server. First, install axios.
```
$ npm i axios
```
Then, in your App.js, import axios and the React Hooks:
```
import React, {useState, useEffect} from "react";
import axios from "axios";
```
Call the API we created in Step 3 using useEffect. The retrieved data is stored in state.
```
const [testArr, setTestArr] = useState([]);
useEffect(() => {
axios
.get(`http://localhost:5500/test`)
.then((res) => {
setTestArr(res.data.data.docs);
})
.catch(() => console.log("failed"));
}, []);
```
Then, try to render it.
```
return (
<>
{testArr.length > 0 && (
<>
<p>{testArr[0].name}</p>
<p>{testArr[0].age}</p>
</>
)}
</>
);
```

Now the frontend is connected with the server we created and the data is successfully rendered.
This tutorial ends here, but the possibilities for creating your own applications are endless, thanks to the versatile React Hooks and the flexibility of MongoDB, among other things. I hope this provides a clear and comprehensive overview of building an app with the MERN stack.
| ishi_hisashi |
1,877,165 | Market Weekly Recap: Notcoin Doubles In Price, Bitcoin Struggles, Base Enters Memecoin Race | NOT achieving milestone and memecoins updating the capitalisation mark community-driven... | 0 | 2024-06-04T20:46:32 | https://dev.to/endeo/market-weekly-recap-notcoin-doubles-in-price-bitcoin-struggles-base-enters-memecoin-race-3ohi | webdev, javascript, web3, blockchain | #### NOT achieving milestone and memecoins updating the capitalisation mark community-driven tokens as the ruling tendency in the market
As Bitcoin struggles to overcome the $70,000 barrier, community-driven altcoins enter the market spotlight with a continuous surge in trend momentum.
How The Open Network's token is rallying through the market, and what awaits BTC in the near future – below.
## Notcoin Adds 289% to the Price; TON Poised for a Breakout
Leading through May and following the listing on the WhiteBIT exchange, TON-based Notcoin (NOT) demonstrated a strong upward performance in the first days of June, surging 89% over the weekend and securing almost 3x growth in the 7-day timeframe.
During European trading hours on June 3, NOT extended previous gains with a 22.3% increase, taking it to $0.029349 during the day. The rebound followed a 33% dump to the $0.019892 support, triggered by a local selloff and ongoing liquidations.

Despite the correction, the market remains optimistic about NOT, as evidenced by the coin’s Open Interest (OI) rates steady uptick.

By contrast, Tryrex, a crypto trader and market analyst, takes a contrarian view on the Notcoin dynamics. In his post on X, he called the end of NOT's pump while expecting a bounceback:
> “The pump on $NOT is now over. Price made a blow-off top at 0.0294, indicating the end of the uptrend. I am looking for a possible short when price bounces a bit.”
Conversely, a closer look at the 4-hour chart reveals a bullish perspective for NOT, revealing that the price is trending far above the 50-day simple moving average (SMA). In the meantime, the price has retraced to the 0.382 Fibonacci retracement level that coincides with the $0.021.

A bounce from the level could move NOT to the 0.236 Fibonacci extension around $0.034. On the way, the altcoin may experience some resistance around $0.032, $0.043, and $0.057. By contrast, strong support may be evidenced around the 50-day SMA ($0.014).
While Notcoin keeps on achieving local milestones, its predecessor – Toncoin (TON) – demonstrates more modest performance, yet indicates a strong bullish outlook. Overnight, the token sprang 8%, boosting the price to $6.7 with an intraday pullback of 0.73% at the writing time.
As per the 1-day chart, the bullish recovery aims to conclude the lower-high formation with a trendline breakout. However, the intraday pullback warns of a bear cycle to test the long-standing support trendline.
The bullish engulfing candle bolsters the uptrend continuation, yet fails to surpass the overhead trendline. Thus, sidelined traders wait for price-action confirmation in the daily chart.

Still, the fundamental expansion of The Open Network and the staggering interest towards Notcoin may become the points in favor of TON’s bullish continuation. As per the trend-based Fibonacci levels, the bullish trend in the TON price could aim for the $10 milestone with the $7.50 breakout.
## Bitcoin Faces Struggles While Memecoins Take Advantage
Contrary to the investors’ hopes, Bitcoin’s bullish strength turned out to remain bearish. The first cryptocurrency retraced almost all of its gains after breaking past the $67,000 resistance point on May 20. It extended upwards to $71,900, but fell to test $67k again on May 23.
Nevertheless, the 1-day chart offers a bullish perspective for Bitcoin, revealing a distinct bullish triangle.
A close above the triangle would suggest that the uncertainty has turned in the bulls' favor. Bitcoin may eventually attempt a rally to the strong overhead resistance at $73,777.

If the bears take over, BTC may see a possible drop to the critical support at $59,600.
Still, data from Santiment reveals that the Bitcoin ETF inflows have been positive lately, hinting at the bullish impact on BTC.

Regardless of the data-supported optimism, market narratives quickly switched to the memecoins.
As per Santiment data, memecoins have captured more of the public’s attention since mid-April due to their superior performance as a sector.

Top memecoins registered decent performances over the past week. Pepe (PEPE) saw remarkable gains, reaching another milestone of $0.000017 in 94% 7-day momentum.

Shiba Inu (SHIB), dogwifhat (WIF), and Floki (FLOKI) indicated 23%, 41%, and 40% growth over the past week respectively.
The memecoin mania brought Base into the spotlight. Thus, its meme-inspired token Brett (BRETT) crossed $1 billion market capitalisation over the recent week and achieved an all-time high of $0.128.

The heavy engagement with memecoins could be a sign of the market's prevailing greed and speculative momentum, which has overtaken organic development.
Combined with Bitcoin’s declining volatility and its steady $60,000-$72,000 range, which fell short of investors’ expectations, this created a perfect opportunity for memecoins to thrive. It also shows that investors ought to be patient and watch for range formations instead of getting caught out by abrupt breakouts. | endeo |
1,872,081 | 💣Not Learning New Things at Work is Destroying Your Career | Goals with this article In this article, I want to focus on these points: Explain the... | 0 | 2024-06-04T20:44:56 | https://dev.to/lucaschitolina/not-learning-new-things-at-work-is-destroying-your-career-34g9 | programming, productivity, career | ## Goals with this article
In this article, I want to focus on these points:
- Explain the rule of hooks and how this motivated me to write this article.
- Sometimes we don't have time to learn at work, and this is a problem
- Identifying and noting new concepts for future study is a key skill.
- Share my approach of compiling new themes on Saturday mornings and studying the most relevant ones.
- Talk about the importance of continuous learning to avoid becoming complacent with routine workflows.
### The motivation behind this article
I was working on a project at my job this week and came across a common React error that some of you have probably seen before.
Can you guess the error by looking at the example code below?

I've lost count of how many times I've caused this error and how many times I've managed to resolve it.

The error happens because, in React, hooks must be called in the same order every time a component renders to maintain consistent state management.
This happens because React uses a linked list to keep track of the hooks in a component. When hooks are called in the same order each time a component renders, React can correctly associate the hook calls with the state and effects they manage. If the order changes, React can’t reliably match hook calls with their previous state, leading to bugs.
So calling hooks at the top level of your React function ensures ordering is maintained.
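The bookkeeping described above can be sketched in plain JavaScript. This is an illustrative assumption, not React's real source: we model hook state as an array of slots claimed in call order, and show how a conditional hook shifts every later hook onto the wrong slot.

```javascript
// Minimal sketch (assumed model, not React's implementation):
// state lives in an array, each useState call claims the next slot.
const slots = [];
let cursor = 0;

function useState(initial) {
  if (slots[cursor] === undefined) slots[cursor] = initial;
  return slots[cursor++]; // consume the next slot in call order
}

function render(showExtra) {
  cursor = 0; // the cursor resets on every render
  const name = useState("Ada");
  if (showExtra) useState("extra"); // conditional hook: breaks the ordering
  const age = useState(30);
  return { name, age };
}

render(true);                 // "age" is stored in slot 2
const result = render(false); // conditional hook skipped: "age" now reads slot 1
console.log(result.age);      // prints "extra", not 30 — state is mismatched
```

Because the conditional call disappeared on the second render, the `age` hook silently reads the slot that used to belong to the skipped hook, which is exactly the class of bug the rule of hooks prevents.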
## But what does this have to do with the title of the article?
As I mentioned earlier, this is not the first time I've encountered this error. Since the first time I resolved it, the situation has become almost automatic: I encounter this error, I fix it, and I move on until the next time.
The problem is that, even though I know how to fix it, I never truly understood why it happened. I never took the time to study and thoroughly read the official documentation [Rules of Hooks](https://react.dev/warnings/invalid-hook-call-warning) to understand it.
And this is the main issue:
> In the workplace, our main goal is to complete tasks and deliver results.
This isn’t exclusive to hooks, and it isn’t rare. Almost every day we find opportunities to do something better or learn something new, but often, in the rush of routine, we can’t stop and study. If the task is done, in that moment, that’s what matters.
## The AI era
This situation has been increasing with the rise of AI in recent years. Nowadays, it’s much easier to find an error and, in a moment of haste, ask ChatGPT to solve it. We no longer need to spend so much time searching forums or documentation to try to understand and resolve issues.
I'm not naive enough to think that every programmer knows 100% of what they’re doing or that they deliver the best possible version; many deliver code that could be improved or done in better ways. Using AI to fill these gaps is a very important skill today, but you can’t allow yourself to fall behind.
## You Fall Behind
This is the main way to differentiate a professional who will succeed in the future from one who will not. The successful professional can learn amidst the daily workflow, even if it's just a little, and most of the time can find time to understand what they are doing.
## What You Can Start Doing Today to Improve This
One tip I have is to keep a document with frequent notes to study later.
It’s important not to get lost and let this document get too big. Curate it whenever you can so that your list doesn’t get too long and you end up never studying anything.
I like to use Obsidian (you can use any To-do app, even a calendar) because I can add notes by day; for example, I add to the following Saturday's daily note all the content I see during the week:

With this, I'm able to sit at my desk on Saturday and see each of the concepts I took note of. If it's just curiosity or a different way of doing something I already knew, I'll do it right there and give it a little study. If it's something more complex that will take a lot of time, I create a study plan or spend the whole morning delving deeper into the subject.
Remember that the main goal here is to keep track of your discoveries when you don't have time to go deep and understand.

## More Things
I believe I have many cool things to share about how I organize my study and make the most of it. One of the main tools I use is Obsidian.
Let me know in the comments if you want an article about dev study organization in Obsidian.
Also, check my post about how you can learn new things: https://dev.to/lucaschitolina/my-approach-to-learn-tailwind-css-and-new-things-1n46
| lucaschitolina |
1,875,928 | Design for programmers - Colors | Design is an area that many programmers end up struggling with quite a bit; in some... | 0 | 2024-06-04T20:41:53 | https://dev.to/terminalcoffee/design-para-programadores-1m28 | braziliandevs, beginners, tutorial, design | Design is an area that many programmers end up struggling with quite a bit. In some cases, a person may think they have no knack for it, that they lack creativity, that they can't draw (even though those skills don't have that much in common with design), or that they simply work better with logic and numbers; in short, many different reasons. I myself never considered myself as having a great artistic streak. However, there are indeed things we can learn about design (many, in fact), and we can even use logic to deal with it, so that at the very least we don't ship something that hurts users' eyes, or so that we can better understand the work handed to us by the designers we work with.
Something the design field has in common with programming is that it also follows principles and patterns, and one of the main pillars of visual design is color, which is what we will explore today.
# The color wheel
During basic school art classes, you surely had a lesson about colors, learning that there are primary colors; secondary colors, formed by combining the primaries; and, depending on the class, even tertiary colors, formed by combining primaries and secondaries, and so on.
So, if we organize how these colors relate to one another in the shape of a circle, we get the color wheel, something like:

Organizing them this way, we can use these relationships between the colors to build our color palettes. A tip here is to keep the number of colors between 1 and 3, not counting the variants of a specific color. For example, if we choose blue, green, and purple, those would be our 3 colors, but we could also have dark blue and light green in the palette; in fact, monochromatic palettes are based on picking one color and deriving variations from it.
Keep in mind that different color representation systems map to different color wheels, since they are just coordinate systems so the computer can represent colors, and that is why we have different systems in CSS such as RGB, HSL, hexadecimal, etc. The wheel presented above follows the idea that the primary colors are blue, yellow, and red; on the computer, however, colors are represented by the RGB system, so the primaries are red, green, and blue, which is why the color wheel used by CSS looks closer to this one:

# Building a color palette
Deciding on a color palette is not an easy task, but it is also not something completely abstract where we need to feel something in order to create it. There are countless techniques for choosing which colors will be part of it.
## Monochromatic palette
The simplest choice is to just pick a single color, for example the main color of the logo of the system being worked on, and derive all the other variations from it using calculations, something we can do thanks to one of the color systems supported by CSS: HSL.
### Deriving variations of a color
HSL is a system where we say which color we want by specifying the **H**ue, meaning which color on the wheel we want to use, via a value in degrees; then the **S**aturation, which is how much of that color we want to apply (which is why setting saturation to 0 turns photos gray in Photoshop, and why we say a photo with very strong colors is oversaturated); and finally the **L**ightness, which is how light or dark that color will be (0 is black and 100 is white for any color). E.g.:
```css
div {
background: hsl(0, 100%, 50%);
}
```
In this example we are representing pure red: 0 corresponds to red, we apply as much red as possible with 100% saturation, and with 50% lightness we sit right in the middle between black and white.
A trick here is to treat saturation as inversely proportional to lightness: the darker a color, the more saturated it should be, and the lighter, the less saturated.
This way it's possible to create a natural transition between tones, to the point that placing each tone side by side reveals a gradient forming.
This is very useful for generating variants, but also for obtaining variations of black and white, since a very common technique among designers is to never use pure black or white, but rather versions blended with the colors in use, so that when they are combined in the interface, the final composition looks more harmonious.
## Complementary colors
Here it's like the saying goes: opposites attract. We can combine colors that sit opposite each other on the color wheel, for example red and green in the traditional system, or blue and yellow in RGB.
Keep in mind that because these colors are complete opposites of each other, it's worth using the technique from the previous section to combine colors with different saturation and lightness values; that way the difference between them is softened, avoiding combinations famous for assaulting the eyes. If the saturations are equal, the effect tends to be the opposite, making the colors clash instead of blending in the interface, causing discomfort for the viewer.
This combination is particularly useful for building two-color palettes and can be formed from any color: considering the circle has 360 degrees, we can take any hue value and add 180, since half of 360 is 180, landing exactly opposite that color on the color wheel.
## Triad, Square, and beyond
Following the increase in the number of hues chosen, if you want a 3-color palette, one option is to follow the logic of complementary colors and divide the 360 degrees by 3, getting 120; then start with 1 color and keep adding 120 until you have the 3 colors.
With this approach you get the 3 primary colors if you start from one of them, or the 3 secondaries if you start from a secondary.
If you have 4, you can form a square by adding 90 degrees at a time. In fact, if you've followed the logic so far, you can scale this to however many colors your palette has by simply dividing 360 (the whole circle) by the number of colors you have.
## Analogous colors
Another way to handle several colors at once is to use neighbors on the wheel to build your palette; going back to art class, we learned about this kind of grouping when thinking about warm colors and cool colors.
In this model, a step size that works well is adding or subtracting 30 degrees at a time, and it works well because each color is the next step on the wheel, making it a natural combination for our eyes.
# Applying the palette
Now that we've built our color palette, we can apply it to our project following a technique known as the 60-30-10 rule. In this model we choose a primary color, a secondary color, and an accent color. That's why palettes of up to 3 colors are preferable, but if you have 4 or more, you can split the leftover colors between the secondary and accent roles.
This part requires a bit more sensitivity to apply, since there isn't really a way to calculate the values, but with a bit of practice and this principle in mind, things tend to get easier, especially because you don't need to split things perfectly.
The primary color will be the predominant color in your interface, the main one; it will be the most visible color when looking at the screen, and so you should use it on about 60% of your interface.
The secondary color is used to give contrast to the primary, and while it will still be quite noticeable on screen, it should be used sparingly, on about 30% of things.
The accent color is the one you use for details, and it should be used even more carefully than the secondary, appearing on only 10% of the screen.
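The degree arithmetic above can be sketched as a small helper. Hues are expressed in degrees on the wheel; the function name and signature are mine, for illustration:

```javascript
// Evenly spaced hues for an n-color palette (complementary pair,
// triad, square, ...): divide the 360-degree wheel by the palette
// size and step from a base hue, wrapping around with modulo.
function paletteHues(baseHue, count) {
  const step = 360 / count;
  return Array.from({ length: count }, (_, i) => (baseHue + i * step) % 360);
}

// A complementary pair is just the 2-color case (a step of 180 degrees):
console.log(paletteHues(0, 2));  // [0, 180]
// A triad starting from red on the RGB wheel:
console.log(paletteHues(0, 3));  // [0, 120, 240]
```

Each result can be dropped straight into `hsl()`, picking saturation and lightness per the previous section.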

(Be sure to check out the original article for this image at [uxmisfit.com])
# Conclusion
I hope you enjoyed today's article. Don't forget to share it with that friend of yours who struggles to pick a color palette, and see you next time.
# Links you might find interesting
- [Using colors in practice, from uxmisfit.com]
- [The 60-30-10 rule on UX Planet](https://uxplanet.org/the-60-30-10-rule-a-foolproof-way-to-choose-colors-for-your-ui-design-d15625e56d25)
- [Adobe Color - A site that creates color palettes](https://color.adobe.com/pt/create)
- [A more detailed guide for designers](https://medium.com/@imperavi/how-to-create-a-color-palette-for-design-systems-a803e90adc15)
- [On creating shades via HSL](https://bootcamp.uxdesign.cc/why-you-should-ditch-hex-and-use-hsl-to-make-picking-colors-easy-c376f69ae34e)
- [HSL on MDN](https://developer.mozilla.org/en-US/docs/Web/CSS/color_value/hsl)
- [About color systems](https://www.dio.me/articles/modelos-de-cores-e-porque-o-sol-e-branco)
Signed: Tired support... | terminalcoffee |
1,877,155 | The Ultimate Guide to Hibernate: Importance, Usage, and Best Practices in Java Development | Exploring Hibernate: In the realm of Java development, managing databases efficiently is... | 0 | 2024-06-04T20:38:38 | https://dev.to/fullstackjava/importance-and-usage-in-java-development-56bm | webdev, beginners, programming, tutorial | # Exploring Hibernate:

In the realm of Java development, managing databases efficiently is crucial for building robust and scalable applications. Hibernate, a popular Object-Relational Mapping (ORM) framework, addresses this need by simplifying database interactions and providing a seamless bridge between Java objects and relational databases. This blog will delve into the importance and usage of Hibernate, exploring its features, benefits, and practical implementation.
## Table of Contents
1. [What is Hibernate?](#what-is-hibernate)
2. [Importance of Hibernate](#importance-of-hibernate)
- [Simplified Database Interaction](#simplified-database-interaction)
- [Portability](#portability)
- [Performance Optimization](#performance-optimization)
- [Transaction Management](#transaction-management)
- [Lazy Loading and Eager Loading](#lazy-loading-and-eager-loading)
3. [Core Concepts of Hibernate](#core-concepts-of-hibernate)
- [Configuration](#configuration)
- [SessionFactory](#sessionfactory)
- [Session](#session)
- [Transaction](#transaction)
- [Query Language](#query-language)
4. [Setting Up Hibernate](#setting-up-hibernate)
- [Adding Hibernate to a Project](#adding-hibernate-to-a-project)
- [Configuration File](#configuration-file)
- [Mapping File](#mapping-file)
5. [Hibernate Annotations](#hibernate-annotations)
6. [CRUD Operations with Hibernate](#crud-operations-with-hibernate)
7. [Best Practices](#best-practices)
## What is Hibernate?
Hibernate is an ORM framework for Java, which simplifies the development of Java applications to interact with a database. By mapping Java classes to database tables, Hibernate automates the process of converting data between incompatible systems, such as relational databases and Java objects.
## Importance of Hibernate
### Simplified Database Interaction
Hibernate abstracts the complexities of database interactions, allowing developers to work with high-level Java objects instead of raw SQL queries. This abstraction reduces the likelihood of errors and makes the code more readable and maintainable.
### Portability
Hibernate is database-agnostic, meaning it supports a variety of relational databases such as MySQL, PostgreSQL, Oracle, and many others. This portability allows developers to switch databases with minimal changes to the code.
### Performance Optimization
Hibernate includes several performance optimization features, such as caching mechanisms and batch processing. The first-level cache (Session-level) and second-level cache (SessionFactory-level) significantly enhance application performance by reducing database hits.
### Transaction Management
Hibernate manages transactions efficiently, ensuring data integrity and consistency. It integrates seamlessly with Java Transaction API (JTA) and Java Persistence API (JPA), offering robust transaction management capabilities.
### Lazy Loading and Eager Loading
Hibernate supports lazy loading and eager loading, which optimize data retrieval strategies. Lazy loading fetches related data on-demand, reducing initial load times, while eager loading retrieves all necessary data in a single query, reducing the number of database hits.
## Core Concepts of Hibernate
### Configuration
Hibernate configuration involves setting up database connection properties, mapping classes, and other settings in a configuration file (typically `hibernate.cfg.xml`).
### SessionFactory
The `SessionFactory` is a thread-safe object that provides `Session` instances. It is created once during application initialization and is responsible for managing Hibernate sessions.
### Session
A `Session` represents a single unit of work with the database. It is used to perform create, read, update, and delete operations for instances of mapped entity classes.
### Transaction
Transactions in Hibernate ensure that a series of operations are executed in an atomic, consistent, isolated, and durable (ACID) manner. Hibernate provides methods to begin, commit, and rollback transactions.
### Query Language
Hibernate Query Language (HQL) is an object-oriented query language, similar to SQL but operates on the entity objects. It supports polymorphic queries, which allow querying across the inheritance hierarchies.
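A brief sketch of an HQL query, reusing the `YourEntity` class and `sessionFactory` from this article's examples (the `minAge` parameter and the filter itself are illustrative, not part of any earlier snippet):

```java
// HQL refers to the entity class and its fields, not the table/columns.
// ":minAge" is a named parameter bound before execution.
Session session = sessionFactory.openSession();
List<YourEntity> adults = session
    .createQuery("from YourEntity e where e.age >= :minAge", YourEntity.class)
    .setParameter("minAge", 18)
    .getResultList();
session.close();
```

Because HQL operates on mapped entities, the same query keeps working if you switch databases, with Hibernate translating it to the appropriate SQL dialect.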
## Setting Up Hibernate
### Adding Hibernate to a Project
To use Hibernate in a project, you need to add the necessary dependencies. If you are using Maven, you can include Hibernate and its dependencies in the `pom.xml` file:
```xml
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>5.6.3.Final</version>
</dependency>
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-entitymanager</artifactId>
<version>5.6.3.Final</version>
</dependency>
<dependency>
<groupId>mysql</groupId>
<artifactId>mysql-connector-java</artifactId>
<version>8.0.26</version>
</dependency>
```
### Configuration File
The Hibernate configuration file (`hibernate.cfg.xml`) contains the database connection settings and mapping resources:
```xml
<hibernate-configuration>
<session-factory>
<property name="hibernate.dialect">org.hibernate.dialect.MySQLDialect</property>
<property name="hibernate.connection.driver_class">com.mysql.cj.jdbc.Driver</property>
<property name="hibernate.connection.url">jdbc:mysql://localhost:3306/yourdatabase</property>
<property name="hibernate.connection.username">yourusername</property>
<property name="hibernate.connection.password">yourpassword</property>
<property name="hibernate.hbm2ddl.auto">update</property>
<property name="hibernate.show_sql">true</property>
<property name="hibernate.format_sql">true</property>
<mapping resource="com/yourpackage/YourEntity.hbm.xml"/>
</session-factory>
</hibernate-configuration>
```
### Mapping File
The mapping file (`YourEntity.hbm.xml`) defines the mapping between the Java class and the database table:
```xml
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN" "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
<class name="com.yourpackage.YourEntity" table="your_table">
<id name="id" column="id">
<generator class="native"/>
</id>
<property name="name" column="name"/>
<property name="age" column="age"/>
</class>
</hibernate-mapping>
```
## Hibernate Annotations
Hibernate also supports annotations for mapping, which is a more modern and concise approach than XML-based configuration. You can use annotations directly in the entity class:
```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
@Entity
public class YourEntity {
@Id
@GeneratedValue(strategy = GenerationType.IDENTITY)
private Long id;
private String name;
private int age;
// Getters and setters
}
```
## CRUD Operations with Hibernate
Here are examples of basic CRUD operations using Hibernate:
### Create
```java
SessionFactory sessionFactory = new Configuration().configure().buildSessionFactory();
Session session = sessionFactory.openSession();
Transaction transaction = session.beginTransaction();
YourEntity entity = new YourEntity();
entity.setName("John Doe");
entity.setAge(30);
session.save(entity);
transaction.commit();
session.close();
```
### Read
```java
Session session = sessionFactory.openSession();
YourEntity entity = session.get(YourEntity.class, 1L);
System.out.println("Name: " + entity.getName());
session.close();
```
### Update
```java
Session session = sessionFactory.openSession();
Transaction transaction = session.beginTransaction();
YourEntity entity = session.get(YourEntity.class, 1L);
entity.setAge(31);
session.update(entity);
transaction.commit();
session.close();
```
### Delete
```java
Session session = sessionFactory.openSession();
Transaction transaction = session.beginTransaction();
YourEntity entity = session.get(YourEntity.class, 1L);
session.delete(entity);
transaction.commit();
session.close();
```
## Best Practices
1. **Use Annotations**: Prefer annotations over XML for entity mappings for better readability and maintainability.
2. **Optimize Queries**: Use HQL or Criteria API to write efficient and optimized queries.
3. **Caching**: Utilize Hibernate's caching mechanisms to improve performance.
4. **Batch Processing**: Implement batch processing for bulk operations to reduce the number of database calls.
5. **Transaction Management**: Always use transactions to ensure data integrity and consistency.
6. **Lazy Loading**: Use lazy loading for associations to improve performance and avoid unnecessary data retrieval.
7. **Exception Handling**: Handle exceptions properly to ensure graceful application behavior.
In conclusion, Hibernate is a powerful ORM framework that simplifies database interactions and boosts productivity in Java development. By understanding its core concepts, setting up the environment correctly, and following best practices, developers can harness the full potential of Hibernate to build efficient and scalable applications. | fullstackjava |
1,877,153 | Why "Chat over Your Data" Is Harder Than You Think | Overview While Large Language Models (LLMs) are increasingly capable of performing... | 0 | 2024-06-04T20:34:28 | https://dev.to/sahinera/why-chat-over-your-data-is-harder-than-you-think-1e69 | ai, llm, programming |

## Overview
While Large Language Models (LLMs) are increasingly capable of performing general reasoning and solving everyday tasks, they typically lack the context to address specialized use cases, like answering questions about your proprietary data. Without that context, they often default to saying they don’t know, or worse, hallucinate wrong information. A better solution is to ground your LLM on the data it needs to generate highly effective responses, rather than forcing it to guess on topics it has no context for.

While this process sounds simple, when it comes to chat applications, there are many pitfalls in ensuring that you retrieve the right information from your knowledge base. At Arcus, we’ve built and run information retrieval systems at [planet scale](https://www.arcus.co/blog/rag-at-planet-scale) to discover and incorporate the most relevant context from your data into your LLMs, grounding them with real data to prevent hallucinations. Our information retrieval capabilities are customizable to your data and domain, enabling users to power personalized LLM applications, such as domain-specific, [chat-based copilots](https://www.arcus.co/blog/pump).
## The Challenge of Building “Chat over Your Data” Applications

A simple solution to building an LLM chat application that’s grounded on your data works as follows:
1.
[Index and structure](https://www.arcus.co/blog/rag-at-planet-scale) your data in a way that can be used for semantic retrieval. This requires large-scale data processing and intelligent methods for extracting metadata while preserving the overall structure and content of your data.
2.
Build or configure an information retrieval system that is capable of retrieving relevant context for a given set of chat messages. This context will represent snippets of data from your index that, when synthesized, can provide answers to users’ prompts.
3.
Configure a “post-processing” step that combines retrieved context with a user’s chat history to send to an LLM. This will ensure that your LLM can synthesize and understand your data in order to generate accurate responses to users’ chat queries.
While steps 1 and 3 above are required for building any LLM application grounded on additional data, and are challenges in their own right, step 2 can be especially tricky when dealing with chat applications.
Typically, information retrieval systems built for LLM applications take in a snippet of text as a query and retrieve indexed data that’s highly similar to the text provided. These information retrieval algorithms usually rely on vector semantic search, keyword-based searches, or [more intelligent approaches](https://www.arcus.co/blog/rag-at-planet-scale). However, in the context of chat applications, deciding what text to use for your query is surprisingly difficult.
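As a rough sketch of the vector-search flavor of retrieval described above (the embeddings here are hand-picked stand-in vectors, not the output of a real embedding model, and the index structure is assumed for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve(query_vec, index, top_k=1):
    """Return the top_k indexed snippets most similar to the query vector."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item["vec"]), reverse=True)
    return [item["text"] for item in scored[:top_k]]

# Toy index: each snippet carries a pre-computed (stand-in) embedding.
index = [
    {"text": "Arcus company summary", "vec": [0.9, 0.1, 0.0]},
    {"text": "Apple company summary", "vec": [0.1, 0.9, 0.0]},
]
print(retrieve([0.8, 0.2, 0.0], index))  # snippet closest to the query
```

Whatever the similarity function, the quality of the result hinges entirely on what text is embedded as the query, which is exactly the problem the next section explores.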
### Challenge: What data from the chat history should you use for retrieval?
Consider the scenario where we build a chatbot to answer questions about various tech companies, dynamically retrieving context from a database of company summaries to enhance the knowledge of our LLM. Below is an example interaction that a user might have with our chatbot:

When the user asks their final prompt, our goal is to retrieve the company summary relating to Arcus and use that to answer the user’s question. However, the final prompt doesn’t mention Arcus directly, and to the model the “it” in the prompt is ambiguous, since it refers back to earlier messages in the chat. This means simply attempting to retrieve data based on the user’s final prompt won’t give us the data we need to answer their question. Here are some simple strategies for how we can use chat history for retrieval, and why they might not work as expected:
1.
**Use only the final user prompt as the query.** This approach works poorly in chat applications when there is a lot of back-and-forth discussion, like the one above. In these scenarios individual prompts might not have all the context necessary to inform accurate retrieval. In the example above, the final user prompt “when was it founded?” doesn’t make sense on its own as it isn’t clear that the “it” refers to “Arcus” in this case. This ambiguity in the chat prevents an information retrieval system from retrieving the right context.
2.
**Use the entire chat history as the query.** This can be problematic in cases where the topic of conversation in the chat application changes over time, because irrelevant context will likely be retrieved. For example, in the example above, the chat history includes both information about Apple and Arcus. While the final user prompt requires information to be retrieved about Arcus’ founding date, the unrelated discussion about Apple in the chat history means that we will likely retrieve information that’s also related to Apple which isn’t topical to the subject at hand. This will potentially confuse your LLM with irrelevant context and lead to irrelevant or incorrect responses.
Since simple solutions to forming the data retrieval query from chat applications often fall short for a wide variety of user prompts, we need to find a more robust solution to this problem to build performant, production-worthy chat-based copilots over our data.
## Arcus’ Solution to Building Chat-Based Copilots over Your Data
At Arcus, we’ve architected a solution that decouples the chat history of the application from the query used for the retrieval system to solve the challenges above. Our solution relies on intelligent and automatic **query transformations**, which transform the user’s chat history into queries that get to the heart of the user’s prompt and can be used as single units of retrieval against our data. Using these transformed queries results in better retrieval performance and more accurate LLM responses.
### The Basics of Query Transformations For Chat Applications
By using query transformations to decouple the user's chat history from the specific queries used to retrieve information, we can ensure that the retrieval process focuses on the most pertinent data, minimizing the risk of irrelevant or conflicting information being retrieved. This approach is similar in spirit [to our approach for indexing data](https://www.arcus.co/blog/rag-at-planet-scale), which decouples the raw data we intend to retrieve from the information we use to index the data.
In the context of chat applications, determining the right query for information retrieval is a nuanced task. A simple and straightforward approach to using query transformations for chat applications is to **ask an LLM to re-write the chat history into a simple query.** This query can then be used to retrieve relevant information over our index. For example, we can ask ChatGPT the following question and use ChatGPT’s response as the query we use for our retrieval system:
> "Incorporate the above context to rephrase the user's final prompt into a single self-contained question."
For the example we gave previously, the generated query is a re-stated question that incorporates all the necessary information to retrieve the right context from our index:
> “When was Arcus Inc., the seed-stage startup based in New York City that focuses on building a data platform for LLMs, founded?”
This is now a self-contained prompt that can be used for the retrieval system and ensures that we retrieve the right context for our LLM to provide the correct response to the user’s prompt.
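The rewrite step described above can be sketched as a thin wrapper around any LLM callable. The `llm` argument here is a placeholder you would supply (e.g. a call into your model provider's SDK), not part of any specific API:

```python
REWRITE_INSTRUCTION = (
    "Incorporate the above context to rephrase the user's final prompt "
    "into a single self-contained question."
)

def transform_query(chat_history, llm):
    """Collapse a chat history into one standalone retrieval query.

    chat_history: list of (role, text) tuples.
    llm: any callable mapping a prompt string to a completion string
    (a placeholder for your actual model call).
    """
    transcript = "\n".join(f"{role}: {text}" for role, text in chat_history)
    prompt = f"{transcript}\n\n{REWRITE_INSTRUCTION}"
    return llm(prompt)

# With a stub LLM we can at least inspect the prompt that gets assembled:
history = [("user", "Tell me about Arcus."), ("user", "When was it founded?")]
standalone = transform_query(history, llm=lambda p: p.splitlines()[-1])
```

The returned string, rather than the raw chat history, then becomes the query fed to the retrieval system.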
### Query Transformations for Chat: Key Considerations and Tradeoffs
Simply using an out-of-the-box LLM for query transformations is often not a production-ready solution due to poor reliability and performance. Here are some factors to consider when deciding how to use query transformations for chat applications:
1.
**Cost and latency.** Adding an LLM call to the critical path of the retrieval workflow can potentially double both the latency and the cost of generating responses for your chat application. Because the naive approach to query transformations requires this additional LLM call, it can become prohibitive for latency- or cost-sensitive applications, where it's important to consider alternatives that are cheaper and faster.
2.
**Preserving important information from the chat history.** While LLMs are highly capable of performing open-ended, unstructured tasks, they can sometimes be unreliable in preserving information, [especially along long context windows](https://arxiv.org/pdf/2307.03172). Ensuring that your query transformation algorithms incorporate the entire chat history and distill it into the right query is vital to ensuring reliable retrieval performance.
3.
**More complex queries.** Some prompts may require information from multiple disparate sources at once, which necessitates more complex query patterns against your data. Query transformations for these more advanced use cases can’t consist of just re-stating user prompts into single queries, but rather into multiple queries that can be stated in succession.
### Arcus’ Solution

At Arcus, we’ve built a **Query Transformation Engine (QTE)** specifically designed to solve the key requirements and tradeoffs of using query transformations for chat applications. Our engine is built on the following core steps to solve the main challenges above:
1.
**Use faster and cheaper models for distilling chat history wherever possible.** The most capable models available today like OpenAI’s GPT-4 are great for exhibiting intelligence across a broad variety of tasks, but are relatively expensive and slow. Instead, QTE can achieve better performance at a fraction of the cost with models that are small and specialized only to distilling chat histories. QTE uses a combination of multiple small, specialized LLMs specifically meant for resolving ambiguities in text and transforming chat histories into single, succinctly stated questions.
2.
**Recursively break down long chat histories.** Because longer chat histories can be hard for LLMs to process, QTE only processes small portions of the chat history at a time as part of step 1 above. QTE then aggregates the intermediate results of these smaller tasks recursively into a single query. The aggregation process ensures that key entities or details are not omitted or “washed out” in the recursive steps required to aggregate intermediate results.
3.
**Compute and use sub-queries to answer more complex questions.** After distilling all of a given chat history into a single question, the resulting question may not be something that can be answered by a single query against the data. This may need to be broken down into smaller queries that need to be aggregated back together. In this case, QTE can use a set of smaller, more specialized LLMs to break down the complex query into sub-queries that can individually be used to query the data. After these sub-queries are run, QTE runs a recursive aggregation to summarize and process what data is most relevant to the user’s prompt.
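The recursive breakdown in step 2 can be made concrete with a rough Python sketch. This is illustrative only, not the real QTE: `summarize` stands in for one of the small specialized models, and the fixed window size is an assumption.

```python
# Illustrative only: recursively distill a long chat history into a
# single query by summarizing fixed-size windows of messages, then
# summarizing the intermediate summaries until one query remains.
# `summarize` stands in for a small, specialized model call.
def distill(history: list[str], summarize, window: int = 4) -> str:
    if len(history) <= window:
        return summarize(history)
    # Summarize each small window, then recurse on the intermediate results
    # so no single model call ever sees the entire history at once.
    chunks = [history[i:i + window] for i in range(0, len(history), window)]
    intermediate = [summarize(chunk) for chunk in chunks]
    return distill(intermediate, summarize, window)
```

The key property is that each model call only ever handles a small input, which is what lets small specialized models replace a single large-context call.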
## Summary
Building a chat application that provides valuable insights using your data presents many unique challenges. At Arcus, we’ve designed an approach that gets to the heart of users’ questions and retrieves the most relevant data to answer them. As we continue to improve and iterate on the core challenges of chat-based copilots, we’re pushing the frontier of what’s possible for using your data intelligently to build LLM applications. [Request early access](https://app.arcus.co/account/early-access) to see how Arcus can help you ground LLMs on your data to build domain-specific copilots and AI applications!
Arcus is also hiring! We’re actively working on building LLM applications grounded on complex data, using advanced indexing and retrieval algorithms to answer complex user queries, large scale systems for processing heterogeneous data at scale, and understanding the performance of LLMs in the context of your data. Check out our [careers page](https://www.arcus.co/careers) or reach out to us at [recruiting@arcus.co](mailto:recruiting@arcus.co)!
_Originally published at [https://www.arcus.co](https://www.arcus.co)._ | sahinera |
1,877,152 | Understanding the Difference between Google Ads and PPC Advertising | Our digital IT world is growing rapidly day by day and it has given us many benefits. When we talk... | 0 | 2024-06-04T20:28:57 | https://dev.to/liong/understanding-the-difference-between-google-ads-and-ppc-advertising-34mh | googleads, clicks, malays, kualalumpur | The digital world is growing rapidly and has given us many benefits. Within it sits the digital marketing space, where many businesses thrive because they can reach a larger audience. The audience that sees and engages with a brand makes up its online visibility, which is essentially the traffic that reaches a business's website. This traffic plays a major role in a business's success and earnings.
Two strategies or techniques come up most often in this space: [Google Ads and PPC](https://ithubtechnologies.com/google-advertising-malaysia/?utm_source=dev.to&utm_campaign=googleadsandppc&utm_id=Offpageseo+2024) (Pay-Per-Click).
## What is PPC?
PPC stands for Pay-Per-Click, a broad advertising model in which advertisers pay each time one of their ads is clicked. Essentially, it is a way of buying visits and visibility for your website rather than earning that traffic organically. PPC platforms support many ad types and techniques, making it an accessible and versatile tool for marketers.
## Key Features of PPC:
Additionally, PPC has its own set of features that make it such an effective technique for driving visibility to a business's website:
1. **Payment Model:** As the name suggests, you pay each time someone clicks on your ad. This keeps advertising costs manageable and makes the performance of your campaigns easy to measure.
2. **Platforms:** PPC runs on top platforms like Google, Bing, Facebook, Instagram, LinkedIn, and many more. Each platform has its own audience, trends, and analytics.
3. **Ad Formats:** PPC supports search ads, display ads, social media ads, video ads, and shopping ads. This variety of formats allows many kinds of businesses to reach their target audience quickly.
4. **Targets:** Each PPC platform provides audience-targeting options such as demographics, interests, behaviors, location, and device type.
## What is Google Ads?
Google Ads, formerly known as Google AdWords, is Google's online advertising platform, where advertisers bid to have their ads displayed across Google's properties. It is a specialized form of PPC advertising focused on the Google ecosystem.
## Key Features of Google Ads:
1. **Search Network:** Google Ads can place your ads directly in Google search results, so whenever someone searches for a related term, your ad can appear alongside the results.
2. **Display Network:** Google Ads lets you run display ads for your products or business across a network of over two million websites and apps, using image or video formats.
3. **YouTube Ads:** With Google Ads you can advertise your content on YouTube, the second largest search engine in the world, using formats such as bumper ads and many others.
4. **Shopping Ads:** Shopping ads showcase your product catalog to a wide audience, displaying an image, title, price, and store name for each product.
5. **Analytics:** Google Ads offers detailed reporting and analytics that let advertisers measure performance and adjust their strategies.
## Differences Between Google Ads and PPC
1. **Scope:**
(i) Google Ads is a specific PPC platform operating within Google's reach.
(ii) PPC is a broader model used across platforms like Bing, Facebook, LinkedIn, and many more.
2. **Platform Characteristics:**
(i) Google Ads has features like Google Search ads, YouTube ads, and Shopping ads; Shopping ads are exclusive to Google.
(ii) PPC on other platforms (e.g., Facebook) has features like social engagement and audience insights that suit social media advertising.
3. **Ad Formats:**
(i) Google Ads focuses on search, display, video, and shopping ads.
(ii) PPC spans many more formats, like sponsored posts on social media and promoted pins on Pinterest.
4. **Targeting Capacity:**
(i) Google Ads targets users based on search intent, which is good for capturing high-conversion traffic.
(ii) Other PPC platforms like Facebook and LinkedIn offer behavior-based targeting that is effective for brand awareness and engagement campaigns.
## Conclusion
From the points above we can conclude that Google Ads and PPC both play an important role in the advertising world for businesses, and by now you have an idea of the major differences between the two. Whether a person uses Google Ads or another PPC platform, both support digital marketing in their own way: Google Ads is limited to Google's ecosystem, while PPC covers a wider range of platforms and techniques, and each has its own benefits and use cases.
| liong |
1,877,150 | Title | Test Title Test Content | 0 | 2024-06-04T20:26:34 | https://dev.to/tyler_kendrick_b8e3240e37/title-55np | test | # Test Title
## Test Content | tyler_kendrick_b8e3240e37 |
1,877,148 | comment on i need a hacker to recover money from binary trading | It is worthy of the world to hear this testimony. I'm here to spread the word about Captain... | 0 | 2024-06-04T20:25:55 | https://dev.to/martin_luther_8cb8da3c599/comment-on-i-need-a-hacker-to-recover-money-from-binary-trading-4klb | It is worthy of the world to hear this testimony. I'm here to spread the word about Captain WebGenesis's wonderful deeds. My name is Martin Luther. I lost USD 232,000.00 in a binary investment trading scam and didn't realize it until a few weeks later. The site and services I utilized appeared authentic, and everything appeared legitimate until I emailed them to request a withdrawal of my weekly gains and they did not reply. I looked online for a specialized expert to help me get my money back. After reading countless testimonials about how Fastfund Recovery has assisted numerous con victims in getting their money back from fraudulent investment firms, I decided to give the expert a trial. I contacted Fastfund Recovery and submitted my case to the expert. He assured me that all my lost money would be retrieved and returned to my wallet address. Fastfund Recovery worked on my case, and to my amazement, my lost funds were returned to my wallet within 72 hours. Contact Fastfund Recovery right away to have your lost money returned to you.
Contact Info
Tele gram ; Fastfundsrecovery
G mail ; { Fastfundrecovery8 (AT) Gmail com } | martin_luther_8cb8da3c599 | |
1,877,147 | Nginx Alias Path Traversal | Path Traversal Overview of the Vulnerability Path traversal uses a server misconfiguration to access... | 0 | 2024-06-04T20:22:50 | https://dev.to/c4ng4c31r0/nginx-alias-path-traversal-1498 | **Path Traversal**
Overview of the Vulnerability
Path traversal uses a server misconfiguration to access hidden files and directories that are stored on the served web application. This can include sensitive operating files, code and data that runs the application, or in some cases, user credentials.
An attacker can leverage the path traversal vulnerability in this application to gain access to system files in directories that are not intended for public access.
**Business Impact**
Path traversal can lead to reputational damage for the business due to a loss of confidence and trust by users. It can also result in data theft and indirect financial losses through the costs of notifying users and rectifying any breached PII if an attacker successfully exfiltrates user data.
**Steps to Reproduce**
Use Burp Suite to replay this request:
```
GET /api../README.md HTTP/2
Host: site.com
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip,deflate,br
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4512.0 Safari/537.36
Connection: Keep-alive
```
Note that it was possible to read the contents of the file.
I performed other checks but was unable to read other commonly targeted files; you can validate further by requesting the names of files that actually exist on the server to properly confirm the vulnerability.
Also note that the README file exposes internal environment details: it mentions internal files, shows a cron job being executed, and displays the name of an internal server, among other information.
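The root cause of this class of bug is usually a prefix `location` with no trailing slash combined with an `alias` directive, so a request for `/api../` maps outside the aliased directory. As a rough illustration (a regex heuristic over config text, not a full nginx config parser), a check that flags such pairs might look like:

```python
import re

# Rough heuristic for the classic nginx alias traversal pattern: a prefix
# `location` with no trailing slash whose block uses `alias`. A request
# like /api../secret then resolves outside the aliased directory.
# This is a regex sketch, not a full nginx configuration parser.
LOCATION_RE = re.compile(r"location\s+([^\s{]+)\s*\{([^}]*)\}")

def find_alias_traversals(config: str) -> list[str]:
    vulnerable = []
    for match in LOCATION_RE.finditer(config):
        prefix, body = match.group(1), match.group(2)
        if "alias" in body and not prefix.endswith("/"):
            vulnerable.append(prefix)
    return vulnerable
```

The usual fix is to give the `location` prefix a trailing slash (or use `root` instead of `alias`) so the traversal sequence can no longer escape the mapped directory.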
PoC:

Reward/Status:

| c4ng4c31r0 | |
1,877,146 | FlowerShop | The Flower Shop NYC offers an incredible selection of tops that combine style and comfort perfectly.... | 0 | 2024-06-04T20:21:56 | https://dev.to/valikoole24/flowershop-4iaj | The Flower Shop NYC offers an incredible selection of tops that combine style and comfort perfectly. Whether you're dressing up for a special occasion or looking for something casual, their collection has something for everyone. For the latest in fashionable tops, make sure to [visit](https://www.theflowershopnyc.com/tops/) their website and explore the variety they have to offer.
| valikoole24 | |
1,877,145 | Understanding Java Variables: A Comprehensive Guide | Java, as a statically typed language, requires the declaration of variables before they can be used.... | 0 | 2024-06-04T20:21:38 | https://dev.to/fullstackjava/understanding-java-variables-a-comprehensive-guide-3dme | webdev, beginners, programming, tutorial | Java, as a statically typed language, requires the declaration of variables before they can be used. This aspect of Java programming ensures type safety and improves code readability and maintainability. In this blog, we will delve into the different types of variables in Java, their scopes, and provide examples to illustrate their use.
## Table of Contents
1. [What are Variables?](#what-are-variables)
2. [Types of Variables](#types-of-variables)
- [Instance Variables](#instance-variables)
- [Class Variables](#class-variables)
- [Local Variables](#local-variables)
- [Parameters](#parameters)
3. [Variable Declaration and Initialization](#variable-declaration-and-initialization)
4. [Variable Scope](#variable-scope)
5. [Data Types](#data-types)
6. [Examples](#examples)
7. [Best Practices](#best-practices)
## What are Variables?
In Java, a variable is a container that holds data which can be changed during the execution of a program. Each variable in Java has a specific type, which determines the size and layout of the variable's memory, the range of values that can be stored, and the set of operations that can be applied to the variable.
## Types of Variables
### Instance Variables
Instance variables are non-static variables that are declared in a class but outside any method, constructor, or block. They are associated with an instance of the class and are initialized when the class is instantiated. Each instance of the class has its own copy of the instance variables.
```java
public class Car {
// Instance variable
private String color;
private String model;
// Constructor
public Car(String color, String model) {
this.color = color;
this.model = model;
}
// Getter method
public String getColor() {
return color;
}
// Getter method
public String getModel() {
return model;
}
}
```
### Class Variables
Class variables are declared with the `static` keyword in a class but outside any method, constructor, or block. They are also known as static variables and are shared among all instances of the class. Only one copy of the static variable exists, regardless of the number of instances of the class.
```java
public class Car {
// Class variable
private static int numberOfCars;
// Constructor
public Car() {
numberOfCars++;
}
// Getter method
public static int getNumberOfCars() {
return numberOfCars;
}
}
```
### Local Variables
Local variables are declared within a method, constructor, or block. They are created when the method, constructor, or block is entered and destroyed once it exits. Local variables are not accessible outside the method, constructor, or block in which they are declared.
```java
public class Car {
public void displayCarInfo() {
// Local variable
String info = "This is a car.";
System.out.println(info);
}
}
```
### Parameters
Parameters are variables that are passed to methods or constructors. They act as input to the method or constructor and can be used within it.
```java
public class Car {
    // Field backing the setter below
    private String color;

    public void setColor(String color) {
        // Parameter 'color' shadows the field, so 'this.color' refers to the field
        this.color = color;
    }
}
```
## Variable Declaration and Initialization
Variables in Java must be declared with a data type before they can be used. Declaration is the process of defining a variable's type and name, while initialization assigns a value to the variable.
```java
public class VariableExample {
public static void main(String[] args) {
// Declaration
int number;
// Initialization
number = 10;
// Declaration and initialization
String greeting = "Hello, World!";
System.out.println(number); // Output: 10
System.out.println(greeting); // Output: Hello, World!
}
}
```
## Variable Scope
The scope of a variable determines where it can be accessed within the code.
- **Instance variables**: Accessible within any non-static method or block of the class.
- **Class variables**: Accessible within any static method or block of the class.
- **Local variables**: Accessible only within the method, constructor, or block where they are declared.
- **Parameters**: Accessible within the method or constructor where they are declared.
## Data Types
Java is a strongly typed language, which means each variable must be declared with a data type. The main data types in Java are:
- **Primitive Data Types**: byte, short, int, long, float, double, char, boolean.
- **Reference Data Types**: Arrays, Classes, Interfaces, Strings, etc.
```java
public class DataTypesExample {
public static void main(String[] args) {
// Primitive data types
int age = 25;
double salary = 55000.50;
char grade = 'A';
boolean isEmployed = true;
// Reference data type
String name = "John Doe";
System.out.println("Age: " + age);
System.out.println("Salary: " + salary);
System.out.println("Grade: " + grade);
System.out.println("Is Employed: " + isEmployed);
System.out.println("Name: " + name);
}
}
```
## Examples
### Instance Variable Example
```java
public class Dog {
// Instance variable
private String breed;
// Constructor
public Dog(String breed) {
this.breed = breed;
}
// Getter method
public String getBreed() {
return breed;
}
public static void main(String[] args) {
Dog myDog = new Dog("Golden Retriever");
System.out.println("Breed: " + myDog.getBreed());
}
}
```
### Class Variable Example
```java
public class Employee {
// Class variable
private static int employeeCount = 0;
// Constructor
public Employee() {
employeeCount++;
}
// Getter method
public static int getEmployeeCount() {
return employeeCount;
}
public static void main(String[] args) {
new Employee();
new Employee();
System.out.println("Total Employees: " + Employee.getEmployeeCount());
}
}
```
### Local Variable Example
```java
public class Calculator {
public int add(int a, int b) {
// Local variable
int sum = a + b;
return sum;
}
public static void main(String[] args) {
Calculator calc = new Calculator();
int result = calc.add(5, 3);
System.out.println("Sum: " + result);
}
}
```
### Parameter Example
```java
public class Printer {
public void printMessage(String message) {
// Parameter 'message'
System.out.println(message);
}
public static void main(String[] args) {
Printer printer = new Printer();
printer.printMessage("Hello, Java!");
}
}
```
## Best Practices
1. **Use meaningful names**: Variable names should be descriptive and meaningful.
2. **Follow naming conventions**: Use camelCase for variable names and follow Java naming conventions.
3. **Keep scope as narrow as possible**: Declare variables in the smallest scope possible.
4. **Initialize variables**: Always initialize variables to avoid unexpected behavior.
5. **Avoid magic numbers**: Use named constants instead of hard-coding values.
```java
public class BestPracticesExample {
public static void main(String[] args) {
final int MAX_USERS = 100; // Named constant
int currentUsers = 50;
if (currentUsers < MAX_USERS) {
System.out.println("There is room for more users.");
} else {
System.out.println("User limit reached.");
}
}
}
```
In conclusion, understanding Java variables, their types, scope, and best practices is fundamental for writing clean and efficient Java code. By following the guidelines and examples provided, you can improve your programming skills and produce robust Java applications. | fullstackjava |
1,877,099 | Event Driven services using Kafka, SurrealDB, Rust, and Go. | Kafka? To understand the purpose of Kafka you need to experiment with it in various... | 0 | 2024-06-04T20:18:36 | https://dev.to/sourabpramanik/event-driven-services-using-kafka-surrealdb-rust-and-go-43k1 | eventdriven, kafka, rust, go | ## Kafka?
To understand the purpose of Kafka you need to experiment with it in various contexts and reason about it. This small project is a brief overview of one of Kafka's use cases: streaming messages to drive multiple applications independently of one another. When streaming messages with Kafka, the producer does not specify the destination where the messages should land; likewise, the receiver does not need to be aware of the sender unless you wire that in manually.
Kafka assures the integrity of the messages in the stream in the face of system failure, saving us from inconsistent data, data loss, or redundancy. It is much more powerful when used across systems and applications that are built on different tech stacks and architectures, serve different purposes, and are distributed over the network. To read more about it check out this [eBook](https://www.confluent.io/resources/ebook/kafka-the-definitive-guide/).
So to get a basic understanding of what Kafka is and how it operates, we will create two different applications and see how they can communicate with each other without making any internal HTTP requests.
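Before wiring up the real services, here is a toy in-memory model (Python, illustration only) of the core abstraction Kafka provides: an append-only log that producers write to, with each consumer group reading at its own offset, so neither side needs to know about the other. Real Kafka adds partitioning, replication, and durable storage on top of this idea.

```python
# Toy in-memory model of a Kafka topic: an append-only log. Producers
# append; each consumer group keeps its own offset, so consumers are
# decoupled from producers and from one another.
class Topic:
    def __init__(self):
        self.log = []          # append-only message log
        self.offsets = {}      # consumer-group name -> next offset to read

    def produce(self, message):
        self.log.append(message)

    def consume(self, group: str):
        """Return unread messages for `group` and advance its offset."""
        start = self.offsets.get(group, 0)
        messages = self.log[start:]
        self.offsets[group] = len(self.log)
        return messages
```

Notice that a new consumer group (say, an audit service added months later) can replay the whole log without the producer changing at all; this is the decoupling our inventory and shipment services will rely on.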
## Event-driven services?
A service is a logical unit within a larger application, responsible for executing specific tasks based on given inputs. Each service can communicate with other services using either synchronous communication methods (such as HTTP/REST, gRPC, and GraphQL) or asynchronous communication methods (such as webhooks, message queues, and event streaming). In this article, we will focus on using an event-streaming communication pattern to coordinate various services and perform the necessary operations to achieve the desired outcomes.
Event-driven services are those that communicate with each other in response to specific events being triggered. This communication is independent of the sources generating the events and the entities reacting to them.
## Overview
To simulate a real-life example we will build an inventory service using Rust and Actix. It will receive an HTTP request to check whether a product with the required units is available in the inventory; if it is, we subtract that many units from the product's stock, and once the application has successfully finished its job we produce a message using Kafka.
Also, to store the products and their available units we will use a database called [SurrealDB](https://surrealdb.com/). I have chosen SurrealDB for a specific reason, which we will explore later in this article.
Now that we have produced a message from the inventory service, we need a consumer to consume it by connecting to the Kafka broker. For this we will create a shipment service using Go to simulate the shipping process when products are released from the inventory, but to keep this project short and concise we will not build the whole shipment system.
## Prerequisites
- Rust: [Install](https://www.rust-lang.org/tools/install), [Book](https://doc.rust-lang.org/stable/book/)
- Go: [Install](https://go.dev/doc/install), [Docs](https://go.dev/doc/)
- Docker: [Install](https://docs.docker.com/engine/install/), [Docs](https://docs.docker.com/get-started/)
- SurrealDB CLI: [Install](https://surrealdb.com/docs/surrealdb/installation/), [Docs](https://surrealdb.com/docs/)
Basic understanding of programming and a huge amount of curiosity to know the whys and hows. Nothing else.
> If you want to refer to the source code while reading this article then here is the [repository](https://github.com/sourabpramanik/eda-kafka-rust-go-surrealdb)
>
## Get started
### Docker compose
We will be pulling the docker images of Kafka, Zookeeper, and SurrealDB to keep the setup process light and easy.
```yaml
version: "3.8"
services:
surrealdb:
image: surrealdb/surrealdb:latest
container_name: surrealdb
command: start --auth --user root --pass root file:/container-dir/dev.db
ports:
- "8000:8000"
volumes:
- surrealdb-data:/container-dir
user: root
environment:
- SURREALDB_ENV_USER=root
- SURREALDB_ENV_PASS=root
networks:
- net
zookeeper-1:
container_name: zookeeper-1
image: zookeeper
restart: always
ports:
- 2181:2181
environment:
- ZOOKEEPER_CLIENT_PORT=2181
volumes:
- ./config/zookeeper-1/zookeeper.properties:/kafka/config/zookeeper.properties
kafka-1:
container_name: kafka-1
image: bitnami/kafka
restart: always
depends_on:
- zookeeper-1
ports:
- 9092:9092
environment:
- KAFKA_ZOOKEEPER_CONNECT=zookeeper-1:2181
- KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
- KAFKA_AUTO_CREATE_TOPICS_ENABLE=true
- KAFKA_CREATE_TOPICS=stock_update:1:3
healthcheck:
      test: ["CMD-SHELL", "kafka-topics.sh --bootstrap-server localhost:9092 --list || exit 1"]
interval: 5s
timeout: 10s
retries: 5
networks:
net:
name: "net"
driver: bridge
volumes:
surrealdb-data:
```
Upon executing the compose file with this command in the same directory:
```bash
docker compose up -d
```
the Docker engine will start a container for each of the three images, and you will have Kafka on port `9092` and SurrealDB on port `8000` running on your local machine.
### Inventory service
Create a new Rust binary application and you can name it `inventory_service`. Add these dependencies to your `Cargo.toml` file:
```toml
actix-web = "4"
dotenv = "0.15.0"
futures-util = "0.3"
rdkafka = { version = "0.36", features = ["ssl-vendored"] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
surrealdb = "1.5.1"
tokio = { version = "1", features = ["full", "macros", "rt-multi-thread"] }
```
To install them run this command:
```bash
cargo build
```
Create a new `.env` file in the same path, and add these variables:
```
SURREAL_URL=127.0.0.1:8000
SURREAL_DB=ecommerce
SURREAL_NS=foo
SURREAL_USER=root
SURREAL_PW=root
KAFKA_BROKER=localhost:9092
KAFKA_TOPIC=stock_update
```
> If you have changed any of these values then make sure to change them in your `.env` file.
>
Now being in the root path create a file `init.surql` which will have all the queries to set up the `inventory` table, `inventory_stock_events` table, and seed some dummy products. **Check this file [here](https://github.com/sourabpramanik/eda-kafka-rust-go-surrealdb) to get the queries you need to add.** And once that is done run this command to execute all the statements by importing them using SurrealDB CLI
```bash
surreal import --conn http://localhost:8000 --user root --pass root --ns foo --db ecommerce ./init.surql
```
One important note: we have created an `inventory_stock_events` table that records a log entry whenever the units of any product change. This logging happens autonomously, so we don't have to trigger the logging process manually.
Now let's create the schema of the tables inside the `src/` directory. Add this schema to your `schema.rs` file:
```rust
use serde::{Deserialize, Serialize};
use surrealdb::sql::{Datetime, Id};
#[allow(dead_code)]
#[derive(Debug, Deserialize, Serialize)]
pub struct ProductThing {
id: Id,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct Product {
pub id: ProductThing,
pub name: String,
pub price: u16,
pub units: u16,
}
#[derive(Debug, Deserialize)]
pub struct UpdateProductStock {
pub units: u16,
}
#[allow(dead_code)]
#[derive(Debug, Deserialize, Serialize)]
pub struct EventThing {
id: Id,
}
#[derive(Debug, Deserialize, Serialize)]
pub struct StockEvent {
pub id: EventThing,
pub time: Datetime,
pub action: String,
pub product: ProductThing,
pub before_update: u16,
pub after_update: u16,
}
```
Create a `kafka.rs` file and add the Kafka producer function. This can be used with different topics and brokers:
```rust
use rdkafka::{producer::FutureProducer, ClientConfig};
pub async fn producer(brokers: &str) -> FutureProducer {
ClientConfig::new()
.set("bootstrap.servers", brokers)
.set("message.timeout.ms", "5000")
.set("allow.auto.create.topics", "true")
.create()
.expect("Producer creation error")
}
```
Now that we have the base setup, let's create the handler functions in `main.rs`:
- **Get inventory products handler**
This handler function returns the list of products and their related attributes.
```rust
async fn get_inventory_products(state: web::Data<State>) -> impl Responder {
let db = &state.db;
let products: Vec<Product> = match db.select("inventory").await {
Ok(val) => val,
Err(e) => {
dbg!(e);
return HttpResponse::InternalServerError().body("Server problems!!");
}
};
HttpResponse::Ok().json(products)
}
```
- **Update stock handler:**
This handler function takes the `product_id` in the path and `units` in the payload. It checks whether a product with this `product_id` exists with at least that many units, updates the stock accordingly, and responds with an appropriate message and status.
```rust
async fn update_stock(
    product_id: web::Path<String>,
    state: web::Data<State>,
    payload: web::Json<UpdateProductStock>,
) -> impl Responder {
    if product_id.is_empty() {
        return HttpResponse::BadRequest().body("Invalid Product Id");
    }

    let db = &state.db;
    let mut available_units: u16 = 0;

    if let Ok(mut query_product) = db
        .query(format!(
            "SELECT units FROM inventory:{} WHERE units>={}",
            product_id, payload.units
        ))
        .await
    {
        if let Ok(Value::Array(arr)) = query_product.take(0) {
            if !arr.is_empty() {
                if let Value::Object(obj) = &arr[0] {
                    if let Some(Value::Number(units)) = obj.get("units") {
                        available_units = units.to_usize() as u16;
                    }
                }
            } else {
                return HttpResponse::NotFound().body("Product not found or insufficient units");
            }
        } else {
            return HttpResponse::InternalServerError().body("Unexpected query response format");
        }
    } else {
        return HttpResponse::InternalServerError().body("Server Error");
    }

    if let Ok(mut update_product) = db
        .query(format!(
            "UPDATE inventory:{} SET units={}",
            product_id,
            available_units - payload.units,
        ))
        .await
    {
        if let Ok(Value::Array(arr)) = update_product.take(0) {
            if !arr.is_empty() {
                HttpResponse::Ok().body("Product stock updated")
            } else {
                HttpResponse::NotFound().body("Product not found or insufficient units")
            }
        } else {
            HttpResponse::InternalServerError().body("Unexpected query response format")
        }
    } else {
        HttpResponse::InternalServerError().body("Server Error")
    }
}
```
- **Stream stock change event handler**
```rust
async fn stream_stock_changes(
    db: &Surreal<Client>,
    stock_producer: &FutureProducer,
) -> surrealdb::Result<()> {
    let kafka_topic = env::var("KAFKA_TOPIC").expect("KAFKA_TOPIC must be set.");

    if let Ok(mut stream) = db.select("inventory_stock_events").live().await {
        while let Some(result) = stream.next().await {
            let res: Result<Notification<StockEvent>> = result;
            let data = &res.unwrap().data;

            stock_producer
                .send(
                    FutureRecord::to(&kafka_topic)
                        .payload(&format!(
                            "Message {}",
                            &serde_json::to_string(data).unwrap()
                        ))
                        .key(&format!("Key {}", 1))
                        .headers(OwnedHeaders::new().insert(Header {
                            key: "header_key",
                            value: Some("header_value"),
                        })),
                    Duration::from_secs(0),
                )
                .await
                .expect("FAILED TO PRODUCE THE MESSAGE");
        }
    } else {
        println!("Failed to stream")
    }

    Ok(())
}
```
In this handler function, we have leveraged the power of SurrealDB's live query SELECT statement. Using a live query, we can capture real-time data changes in the `inventory_stock_events` table without any polling or manual triggers.
Here’s how it operates:
When the user changes the stock units of any valid product in the inventory table, the change gets logged in the inventory event log table. Once the record change is logged successfully, the application automatically triggers an event, and a message describing the specific changes is produced and streamed to the Kafka broker.
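The article doesn't show how changes to `inventory` end up in `inventory_stock_events`. One way to wire this up in SurrealDB is a table event; the sketch below is an assumption about what such a definition could look like (the event name `log_stock_change` is hypothetical, and the fields follow the `StockEvent` struct from `schema.rs`):

```sql
-- Hypothetical table event: whenever units change on an inventory record,
-- append a row to inventory_stock_events for the live query to pick up
DEFINE EVENT log_stock_change ON TABLE inventory
    WHEN $before.units != $after.units
    THEN (
        CREATE inventory_stock_events SET
            time = time::now(),
            action = $event,
            product = $after.id,
            before_update = $before.units,
            after_update = $after.units
    );
```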
Using all the handler functions we can update the main function to this:
```rust
async fn main() -> std::io::Result<()> {
    dotenv().ok();

    // LOAD ENV VARS
    let surreal_url = env::var("SURREAL_URL").expect("SURREAL_URL must be set.");
    let surreal_ns = env::var("SURREAL_NS").expect("SURREAL_NS must be set.");
    let surreal_db = env::var("SURREAL_DB").expect("SURREAL_DB must be set.");
    let surreal_user = env::var("SURREAL_USER").expect("SURREAL_USER must be set.");
    let surreal_password = env::var("SURREAL_PW").expect("SURREAL_PW must be set.");
    let kafka_broker = env::var("KAFKA_BROKER").expect("KAFKA_BROKER must be set.");

    // INIT DATABASE
    let db = Surreal::new::<Ws>(&surreal_url)
        .await
        .expect("Failed to connect to the Surreal client");

    db.signin(Root {
        username: &surreal_user,
        password: &surreal_password,
    })
    .await
    .expect("Failed to authenticate");

    db.use_ns(surreal_ns)
        .use_db(surreal_db)
        .await
        .expect("Failed to access the Database");

    let db_clone = db.clone();

    // CREATE KAFKA PRODUCER
    let stock_producer = producer(&kafka_broker).await;

    // SPAWN A NEW TASK TO EXECUTE THE KAFKA PRODUCER
    task::spawn(async move {
        stream_stock_changes(&db_clone, &stock_producer)
            .await
            .expect("failed to stream");
    });

    // EXECUTE SERVER
    HttpServer::new(move || {
        App::new()
            .app_data(web::Data::new(State { db: db.to_owned() }))
            .service(
                web::scope("/inventory")
                    .service(web::resource("").route(web::get().to(get_inventory_products)))
                    .service(
                        web::scope("/{product_id}")
                            .service(web::resource("").route(web::patch().to(update_stock))),
                    ),
            )
    })
    .bind(("127.0.0.1", 3000))?
    .run()
    .await
}
```
Finally, your `main.rs` file should look like this [here](https://github.com/sourabpramanik/eda-kafka-rust-go-surrealdb/blob/d06fd99e112fe3652f7bc6a1ad107f927e827171/inventory_service/src/main.rs#L1C1-L193C2).
### Shipping service
This service is not that functional: it just consumes the messages by connecting to the Kafka broker. I have kept this application as simple as possible for this article, but in real projects you would be doing much more logical operations with the messages that this application receives.
Create a directory `shipment_service`, and inside it initialize a new Go module:
```bash
go mod init shipment_service
```
Install packages:
```bash
go get -u github.com/segmentio/kafka-go
go get -u github.com/joho/godotenv
```
Create a `.env` file:
```
KAFKA_BROKER=localhost:9092
KAFKA_TOPIC=stock_update
KAFKA_GROUP_ID=shipment_service_group
POSTGRES_USER=yourusername
POSTGRES_PASSWORD=yourpassword
POSTGRES_DB=yourdatabase
POSTGRES_HOST=localhost
POSTGRES_PORT=5432
```
> If you have changed any of these values then make sure to change them in your `.env` file.
>
Create a `main.go` file and add the following code:
```go
package main

import (
    "context"
    "encoding/json"
    "log"
    "os"
    "os/signal"
    "strings"
    "syscall"

    "github.com/joho/godotenv"
    "github.com/segmentio/kafka-go"
)

type OriginalMessage struct {
    ID           IDWrapper `json:"id"`
    Time         string    `json:"time"`
    Action       string    `json:"action"`
    Product      IDWrapper `json:"product"`
    BeforeUpdate int       `json:"before_update"`
    AfterUpdate  int       `json:"after_update"`
}

type IDWrapper struct {
    ID IDStringWrapper `json:"id"`
}

type IDStringWrapper struct {
    String string `json:"String"`
}

type TransformedMessage struct {
    ID           string `json:"id"`
    ProductID    string `json:"productId"`
    BeforeUpdate int    `json:"before_update"`
    AfterUpdate  int    `json:"after_update"`
    Action       string `json:"action"`
    Time         string `json:"time"`
}

func main() {
    // Load environment variables
    err := godotenv.Load()
    if err != nil {
        log.Fatalf("Error loading .env file: %v", err)
    }

    // Kafka configuration
    kafkaBroker := os.Getenv("KAFKA_BROKER")
    kafkaTopic := os.Getenv("KAFKA_TOPIC")
    kafkaGroupID := os.Getenv("KAFKA_GROUP_ID")

    // Set up Kafka reader
    reader := kafka.NewReader(kafka.ReaderConfig{
        Brokers:  []string{kafkaBroker},
        GroupID:  kafkaGroupID,
        Topic:    kafkaTopic,
        MinBytes: 10e3, // 10KB
        MaxBytes: 10e6, // 10MB
    })

    // Create a channel to handle OS signals
    sigchan := make(chan os.Signal, 1)
    signal.Notify(sigchan, os.Interrupt, syscall.SIGTERM)

    // Start consuming messages
    go func() {
        for {
            m, err := reader.FetchMessage(context.Background())
            if err != nil {
                log.Printf("Error fetching message: %v", err)
                continue
            }

            rawMessage := string(m.Value)
            jsonMessage := strings.TrimPrefix(rawMessage, "Message ")

            var originalMessage OriginalMessage
            if err := json.Unmarshal([]byte(jsonMessage), &originalMessage); err != nil {
                log.Printf("Error unmarshaling message: %v", err)
                continue
            }

            transformedMessage := transformMessage(originalMessage)

            transformedMessageJSON, err := json.Marshal(transformedMessage)
            if err != nil {
                log.Printf("Error marshaling transformed message: %v", err)
                continue
            }

            log.Printf("Incoming message: %s", transformedMessageJSON)

            // Commit the message to mark it as processed
            if err := reader.CommitMessages(context.Background(), m); err != nil {
                log.Printf("Error committing message: %v", err)
            }
        }
    }()

    // Wait for a termination signal
    <-sigchan
    log.Println("Shutting down...")
    reader.Close()
}

func transformMessage(orig OriginalMessage) TransformedMessage {
    return TransformedMessage{
        ID:           orig.ID.ID.String,
        ProductID:    orig.Product.ID.String,
        BeforeUpdate: orig.BeforeUpdate,
        AfterUpdate:  orig.AfterUpdate,
        Action:       orig.Action,
        Time:         orig.Time,
    }
}
```
## Running the services
Go inside the `inventory_service` directory and run the binary:
```bash
cargo run
```
Open another terminal window, go inside the `shipment_service` directory and run the binary:
```bash
go run main.go
```
## Execution
Now, in a new terminal window, execute this cURL request (replace `{product_id}` with the ID of an existing product):
```bash
curl --location --globoff --request PATCH 'http://localhost:3000/inventory/{product_id}' \
--header 'Content-Type: application/json' \
--data '{
"units":2
}'
```
You will notice that messages are getting printed in the terminal window where you are running your Go binary (the shipment service). It is consuming the stock changes in real time without any HTTP communication between the services.
## Final Output

## Conclusion
I hope you enjoyed reading this article and learned something new. Though I have tried to explain one of the use cases of Kafka and the features of SurrealDB, there is so much more that can be achieved, and highly efficient applications can be built using tools and technologies like these, more than can be covered in a single article.
Signing Off!! | sourabpramanik |
1,877,098 | MediaPipe HandGestureRecognizer Task for web | Check out this Pen I made! | 0 | 2024-06-04T20:16:40 | https://dev.to/ankkerbintang/mediapipe-handgesturerecognizer-task-for-web-2bni | codepen | Check out this Pen I made!
{% codepen https://codepen.io/mediapipe-preview/pen/zYamdVd %} | ankkerbintang |
1,877,097 | E-commerce: The History and Future of online shopping | The e-commerce setup has been revolutionized so that it can amaze millions of people with its... | 0 | 2024-06-04T20:15:39 | https://dev.to/liong/e-commerce-the-history-and-future-of-online-shopping-1lgm | socialmedia, onlineshopping, malaysia, kualalumpur | The e-commerce setup has been revolutionized so that it can amaze millions of people with its services. It can ease people's lives by giving them shopping options in their comfort zone. This is what e-commerce online shopping feels like. When we remember the old times, there were mostly thick catalogs and the excitement of waiting for our parcel or package; this kind of method is still loved by the majority of people, and because of this, online shopping has become the king of the shopping era, done from the comfort zone of many people, and it is still loved in 2024. Now, when we talk about the beginnings of e-commerce, which has changed the lives of billions of people, you need to buckle up to immerse yourself in the deep history and the future of e-commerce.
In this blog, you are going to get basic, starting-level guidance and knowledge about e-commerce and its future in the coming years. The story of e-commerce has just started, so don't get bored, and let's get to the end without wasting any more time.
## The Seeds were Sown of the E-commerce:
The story of the dynamic and powerful platform known as e-commerce started decades ago, so let's go back in time. Imagine a world of early computers and dial-up connections: this was not yet the era of e-commerce as we know it, but the era of amazing thinkers like Michael Aldrich. Michael Aldrich was not just an ordinary person but a person with a vision that was about to change the lives of billions of people. In 1979, he developed a system for "electronic shopping", which pioneered the idea of doing transactions electronically. This created not only the concept of online shopping but also of online payments. These were the contributions of our one and only Michael Aldrich.
Let's move forward a couple of decades to the internet era. In the 1990s, with the help of the internet, people got the ability to do online shopping successfully. This period is also known as the "e-commerce revolution". At this point, many businesses started finding success by establishing themselves in the e-commerce world.
## The Dawn of the E-commerce World
Here comes the entry of the game changers: top multinational companies like Amazon and eBay. Amazon, founded in 1994, started as an online bookstore. It was one of the first online bookstores at that time, and it spread and expanded rapidly. Its success showed how online retail could forever change the retail landscape, and it was followed by the establishment of eBay in 1995. eBay introduced the concept of online auctions, which allowed people to easily buy and sell directly with each other.
Let me give you an idea about the late 1990s and early 2000s. These years saw the rapid growth of businesses thanks to the internet and e-commerce. During this period, top companies solidified their positions as major players in the e-commerce world. This era also produced the "click and mortar" model: brick-and-mortar physical stores started getting attention and success through e-commerce as well. This model gave many businesses an online presence alongside their physical stores, opening the door for customers to enjoy the best of both worlds: browsing options online and touching the products in real physical stores.
## The Mobile Revolution and Social Commerce:
The 2010s truly witnessed a drastic change, driven by the rise of smartphones and mobile shopping. Smartphones played such an important role that they pushed many online businesses to adapt to mobile shopping through mobile apps with secure payment methods, so people can do transactions easily. Mobile phones have made shopping easier by letting you do it from anywhere, at any time.
## Conclusion: The Future of Online Shopping
Based on the points highlighted above, it can be concluded that the e-commerce world has a deep history and has revolutionized itself in such a way that it can make billions of people's lives easier with its services. The future of e-commerce is just getting started, and as technology modernizes day by day, e-commerce will keep opening doors of possibilities for all of us. The best product is the one that has the best price. So, say hi to shopping from your comfort zone.
| liong |
1,877,096 | My New Full-Stack Developer Portfolio | I've always admired the stunning portfolios created by designers, so I decided to try my hand at... | 0 | 2024-06-04T20:14:32 | https://dev.to/kiraaziz/my-new-portfolio-1h9k | webdev, javascript, beginners, programming | I've always admired the stunning portfolios created by designers, so I decided to try my hand at creating my own fancy portfolio as a full-stack developer. I'd love to get your feedback and suggestions on how I can improve it!
You can check out my portfolio here: [My Portfolio](http://coolkira.vercel.app/)
If you're interested in the code behind it, you can find the repository on GitHub: [GitHub Repository](https://github.com/kiraaziz/new-me) | kiraaziz |
1,877,095 | Frontend axios middleware | import axios from 'axios' import jwt_decode from "jwt-decode"; import dayjs from 'dayjs' import {... | 0 | 2024-06-04T20:10:00 | https://dev.to/vimal_adithan/frontend-axios-middleware-3nd4 | interceptor, webdev, beginners, programming | ```javascript
import axios from 'axios'
import jwt_decode from "jwt-decode";
import dayjs from 'dayjs'
import { useContext } from 'react'
import AuthContext from '../context/AuthContext'

const baseURL = 'http://127.0.0.1:8000'

const useAxios = () => {
    const { authTokens, setUser, setAuthTokens } = useContext(AuthContext)

    const axiosInstance = axios.create({
        baseURL,
        headers: { Authorization: `Bearer ${authTokens?.access}` }
    });

    axiosInstance.interceptors.request.use(async req => {
        const user = jwt_decode(authTokens.access)
        const isExpired = dayjs.unix(user.exp).diff(dayjs()) < 1;

        // Access token still valid: send the request as-is
        if (!isExpired) return req

        // Access token expired: refresh it before the request goes out
        const response = await axios.post(`${baseURL}/api/token/refresh/`, {
            refresh: authTokens.refresh
        });

        localStorage.setItem('authTokens', JSON.stringify(response.data))
        setAuthTokens(response.data)
        setUser(jwt_decode(response.data.access))

        req.headers.Authorization = `Bearer ${response.data.access}`
        return req
    })

    return axiosInstance
}

export default useAxios;
```
| vimal_adithan |
1,877,093 | E-commerce: Shopping From Your Comfort Zone | You need to remember those days when you used to just randomly do planned shopping trips, moving... | 0 | 2024-06-04T20:07:23 | https://dev.to/liong/e-commerce-shopping-from-your-comfort-zone-1nkm | shopping, online, malaysia, kualalumpur | Remember those days when you used to make planned shopping trips, moving through the crowds and even hauling those heavy bags back home? Thankfully, those stressful times are over. E-commerce is the art of buying or selling goods or products online. It is a platform where thousands of products, each in a different variety and with unlimited space, can be easily managed, and it has changed the old ways of doing shopping.
Let's get straight to the point: in this blog, you are going to gain knowledge about making an online store and, the best part, shipping. You will explore examples and the latest trends of e-commerce. So what are you waiting for? Let's dive into the enchanting world of e-commerce.
## So, what is E-commerce?
First, imagine a virtual marketplace where any type of online store can exist under one single roof. E-commerce includes top websites like Amazon and eBay, along with many other types of platforms. A business establishes its online presence by creating an online store, where it showcases its products with images, videos, and a complete description attached to each listing. From the customer's point of view, they browse the virtual aisles, add items to their "cart", and check out after a few clicks to finalize what to buy. Stores also offer a "wishlist" option where you can save products to buy in the future, acting as a reminder to buy them later.
Additionally, payment methods are built into the store to ensure successful selling of products and smooth transactions for purchased goods.
## But e-commerce isn't just about convenience. Here's why it's become such a major trend:
**- Endless Selection:**
When you shop online, you can select from an unlimited number of goods compared to physical stores. You have access to a mind-blowing variety of products, from niche items to top international brands.
**- Competitive Prices:**
While browsing products, you can easily compare prices across multiple sellers, whether cheap or expensive. This makes sure that you get the best deals.
**- 24/7 Access:**
Online stores are available 24/7, without any restrictions on place or time, while physical stores are open only for limited hours.
**- Personalized Shopping:**
Many websites use cookies and sometimes your purchase history to suggest products you may like according to your preferences or needs. This creates a better shopping experience.
**- Customer Reviews:**
When you are scrolling a site and unsure whether to choose this or that, you can read customer reviews on different products to help you make the final decision.
## E-commerce is constantly evolving, embracing new trends that shape the future of online shopping:
**- Mobile Shopping:**
This is a way of doing online shopping from your mobile phone without depending on a computer. Mobile shopping is friendly, easy, and convenient.
**- Social Commerce:**
The "social" comes from social media, meaning platforms like Instagram or Facebook. These platforms combine social features with e-commerce features, allowing individuals to shop directly from the app.
**- Voice Commerce:**
Voice commerce means shopping through voice commands given to voice assistants. The top voice assistants, Alexa and Google Assistant, are changing how we interact with e-commerce platforms.
**- Augmented Reality (AR):**
Suppose you are choosing a new study desk and want to check whether it looks nice in your room or not. This technology allows you to virtually "place" the products in your space before making any purchase.
## Examples of E-commerce:
There are hundreds of examples of e-commerce being used in everyday life. But here are a few real-life examples:
**- Example 1:**
You need a new pair of joggers and you want them a little cheaper, so you browse the online sellers, compare different brands and models, read some reviews to check the quality of the product, and at last choose a perfect pair of joggers according to your liking.
**- Example 2:**
Suppose you are planning a birthday for your loved ones: e-commerce websites offer decoration pieces, party favors, and even cakes. The chosen goods are then delivered straight to your doorstep.
**- Example 3:**
If you are a bookworm and you want the latest new releases, e-commerce offers downloadable e-book versions that go directly to your phone.
## E-commerce isn't without its challenges:
While incredibly convenient, e-commerce comes with a few challenges. The following are the challenges faced by e-commerce:
1. **No inspection:** You cannot touch the products in the online world.
2. **Shipment:** The shipment or delivery cost of the product changes according to the location where it has to be delivered.
3. **Security:** When you are trying to shop for a product and now it's time to add a payment method then you need to choose reputable websites as trust matters.
4. **Returns:** Returning unwanted products depends upon the seller's policy, and the process can be time-consuming.
## Conclusion:
Based on the points highlighted above, it can be stated that e-commerce is an important part of our system, with the latest and most updated features. The examples here show us how to use e-commerce anytime, without wasting time in a physical store.
| liong |
1,854,416 | Utilização e Benefícios do Rerender em Testes de Componentes React com Jest e React Testing Library | Introdução No desenvolvimento de aplicações React, garantir que os... | 27,693 | 2024-06-04T20:06:35 | https://dev.to/vitorrios1001/utilizacao-e-beneficios-do-rerender-em-testes-de-componentes-react-com-jest-e-react-testing-library-5glp | testing, react, webdev, jest | ## Introduction
In React application development, ensuring that components behave as expected after state or prop updates is crucial. Using testing libraries such as Jest and React Testing Library is essential for validating rendering logic and responses to user interactions. This article will focus specifically on the `rerender` function from React Testing Library, explaining how it can be used to write more robust and reliable tests.
## What is Rerender?
`rerender` is a function provided by React Testing Library that allows you to update a component with new props or state, simulating a re-render as the result of a change in the component. This is particularly useful in tests where user interactions or context updates cause changes in the component that need to be tested.
## Setting Up the Test Environment
Before diving into the code examples, it is essential to set up a proper testing environment. Here is a basic example of how to configure Jest and React Testing Library in your project:
```bash
npm install --save-dev jest @testing-library/react @testing-library/jest-dom
```
Add the following script to your `package.json`:
```json
"scripts": {
"test": "jest"
}
```
## Practical Example: Testing a Component with Rerender
Consider a `Counter` component that increments a counter each time a button is pressed. Let's test how the component reacts to multiple interactions using `rerender`.
### The `Counter` Component
```jsx
import React, { useState } from 'react';

function Counter() {
  const [count, setCount] = useState(0);

  return (
    <div>
      <p>Count: {count}</p>
      <button onClick={() => setCount(count + 1)}>Increment</button>
    </div>
  );
}

export default Counter;
```
### Testing the Component with Rerender
```jsx
import { render, fireEvent } from '@testing-library/react';
import Counter from './Counter';

describe('Counter Component', () => {
  test('should increment count on button click', () => {
    const { getByText, rerender } = render(<Counter />);

    fireEvent.click(getByText('Increment'));
    expect(getByText('Count: 1')).toBeInTheDocument();

    // Rerender with the same props to simulate a state update
    rerender(<Counter />);
    fireEvent.click(getByText('Increment'));
    expect(getByText('Count: 2')).toBeInTheDocument();
  });
});
```
## Benefits of Using Rerender
1. **Realistic User Flow Tests:** Allows you to simulate and test how a user interacts with the component over time.
2. **State Accuracy:** Ensures that the component handles state correctly after multiple updates, which is crucial for components with complex state logic.
3. **Better Test Coverage:** Allows you to test multiple update scenarios without needing to reload the entire component, improving test coverage.
## Differences and Advantages over Other Approaches
Unlike the snapshot testing approach, which only checks for visual changes between renders, `rerender` allows you to test the component's internal logic and its dynamic behavior. This offers a deeper view of how state and props are managed internally by the component.
## Conclusion
The `rerender` function from React Testing Library is a powerful tool for developers looking to ensure that their React components behave as expected after updates. By integrating this function into tests, it is possible to get a more rigorous verification of state logic and user interactions, leading to a more robust and reliable codebase. | vitorrios1001 |
1,877,090 | Linux Date Commands | Explaining different date formats according to your needs 1.Date date -u :Displays the time in... | 0 | 2024-06-04T20:01:48 | https://dev.to/abdallah_kordy_94db275ef5/linux-commands-22mk | linux, gnu, commands | Explaining different date formats according to your needs
1. date
- date -u :Displays the time in Coordinated Universal Time (UTC).
```
04 يون, 2024 UTC 07:32:39 م
```
- date -d "next Friday"
```
07 يون, 2024 EEST 12:00:00 ص
```
- date -R Outputs the date and time in RFC 2822 format.
output
```
Tue, 04 Jun 2024 22:37:39 +0300
```
- date -I : Outputs the date in ISO 8601 format.
output
```
2024-06-04
```
- date "+%Y-%m-%d %H:%M:%S" customizable formate
```
2024-06-04 22:49:23
```
- %Y: Year (4 digits)
- %m: Month (2 digits)
- %d: Day (2 digits)
- %H: Hour (00-23)
- %M: Minute (00-59)
- %S: Second (00-59)
- %A: Full weekday name (e.g., Monday)
- %a: Abbreviated weekday name (e.g., Mon)
- %B: Full month name (e.g., January)
- %b: Abbreviated month name (e.g., Jan)
- date -s (or --set) : Sets the system date and time, e.g. date -s "2023-12-31 23:59:59"
- date --rfc-3339=seconds : Outputs the date and time in RFC 3339 format.
```
2024-06-04 23:10:08+03:00
```
- date -d "2023-12-31 23:59:59" Display the date and time for a specific date:
```
31 ديس, 2023 EET 11:59:59 م
```
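Two more GNU `date` features worth knowing: `-d @N` converts a Unix epoch timestamp, and the `-d` string accepts simple relative arithmetic. Both examples below pin UTC with `-u`, so the output is deterministic:

```shell
# Epoch 0 rendered in UTC
date -u -d @0 "+%Y-%m-%d %H:%M:%S"    # → 1970-01-01 00:00:00

# Simple date arithmetic relative to a fixed date
date -u -d "2024-01-01 +7 days" "+%Y-%m-%d"    # → 2024-01-08
```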
| abdallah_kordy_94db275ef5 |
1,877,089 | Top Video Processing APIs for 2024 | What are your top Video Processing APIs? Video processing APIs allow developers to automate tasks... | 0 | 2024-06-04T20:01:20 | https://dev.to/nikoldimit/top-video-processing-apis-for-2024-93e | What are your top Video Processing APIs?
Video processing APIs allow developers to automate tasks like video encoding, video decoding, video editing, video resizing, video watermarking, and extracting video metadata from video files.
There are 4 types of video processing APIs:
1. Video Preview APIs
2. Video Encoding APIs
3. Video Decoding APIs, and
4. Video Manipulation APIs
Video Preview API:
- Generate Video Thumbnail API

Video Encoding API:
- Compress Video File API
- Convert Video Formats API
- Video Storage API
- api.video's CDN API

Video Decoding API:
- Generate GIF from Video API
- Generate Watermark For Videos API

Video Manipulation API:
- Extract Video Metadata API
- Extract Audio from Video API
Read here for more: https://apyhub.com/blog/top-video-processing-apis-2024 | nikoldimit | |
1,877,088 | Benefits of E-commerce for Customers and Businesses | E-commerce is the miracle that happened to us in this world. This miracle has blessed us with nonstop... | 0 | 2024-06-04T19:57:29 | https://dev.to/liong/benefits-of-e-commerce-for-customers-and-businesses-26b | ecommerce, customers, malaysia, kualalumpur | E-commerce is a miracle that has happened to this world. This miracle has blessed us with tremendous nonstop benefits, and it is not limited in what it gives. The benefits go both ways: e-commerce benefits both loyal e-commerce store customers and e-commerce businesses. And it doesn't end here; it has proven itself by helping businesses reach double or triple the audience on their websites or stores with more traffic.
E-commerce has changed the way we live by revolutionizing the way we shop and the way we conduct our businesses. It is all about a few clicks for the customer, and tada, now you can buy anything from anywhere in the world through an online e-commerce store. There were times when shopping happened only at the local level, but now online shopping happens from the local to the national level.
In this blog post, you'll get a clear picture of the benefits of e-commerce for customers and businesses. So what are you waiting for? Buckle up, and let's explore the world of e-commerce.
## 1. Convenience and Accessibility
Let's start with benefit number one: convenience and accessibility. E-commerce is known for making shopping accessible to everyone. In the past, a single shopping trip could take hours: we waited in line to pay, spent ages flipping through the same catalogs, and, most stressful of all, dealt with the crowds.
With e-commerce, stores are open 24/7, so you can shop whenever you need to without worrying about closing hours, a constant limitation of physical stores. That round-the-clock availability is remarkable for both sides: customers shop on their own schedule, and stores collect more traffic, more orders, and more sales. It is especially valuable for people who work long hours or live in remote areas far from well-stocked shops; with nothing more than an internet connection, those barriers disappear.
## 2. Wider Variety and Better Choices
Online stores offer a far wider variety of products than physical stores can. A physical store is limited by its shelf space, while an e-commerce platform can list a virtually unlimited catalog of goods. For customers, that means access to a broader selection, plus the ability to read reviews and ratings and make better-informed decisions. This breadth is especially valuable for niche products, which are often unavailable in local stores; national-level products, for example, may never reach a small local market.
Additionally, this wide range of options lets customers find whatever they need without depending on physical or local stores. That flexibility cuts both ways: it also allows businesses to offer a broader catalog and adapt quickly to market trends.
## 3. Cost Savings and Budgeting
Next, money management. E-commerce lets us shop within our budget and opens a clear path to cost savings. Most people hunt for the lowest price on the products they need, and online stores are built to deliver exactly that. Many e-commerce businesses have lower operating costs than traditional brick-and-mortar stores, which lets them pass the savings on to customers through promotional discounts, vouchers, sales, and other price cuts. And when you run a successful online store, you can skip the rent on a physical storefront entirely.
## 4. A Personalized Shopping Experience
Finally, e-commerce excels at giving customers a personalized experience through analytics. As people shop online, data is collected about their needs, browsing, and searches, and that information is used to surface the products each shopper is most likely to want.
## Conclusion
E-commerce has transformed the landscape of shopping and business operations, offering numerous benefits to both customers and businesses. For customers, it means convenience, access to a wider variety of products, cost savings, personalized experiences, and better service. For businesses, it opens up new markets, reduces operational costs, provides valuable customer insights, and offers flexibility and scalability.
As technology advances and e-commerce platforms evolve, these benefits are only expected to grow. The future of shopping is undoubtedly digital, and embracing e-commerce is key to staying ahead in this rapidly changing world. Whether you’re a consumer enjoying the ease of online shopping or a business looking to expand your reach, the advantages of e-commerce are clear and compelling.
| liong |
1,877,086 | Understanding the SQL ORDER BY Clause | Introduction The ORDER BY clause in SQL is a powerful tool used to sort the result set of... | 0 | 2024-06-04T19:49:43 | https://dev.to/kellyblaire/understanding-the-sql-order-by-clause-llg | sql, sqlserver, database, sorting | ## Introduction
The `ORDER BY` clause in SQL is a powerful tool used to sort the result set of a query. Sorting data is a common requirement when retrieving records from a database, whether you want to order products by price, employees by name, or records by date. In this article, we will explore the `ORDER BY` clause, its syntax, and various use cases to help you effectively sort data in your SQL queries.
## Basic Syntax of ORDER BY
The `ORDER BY` clause allows you to sort the result set of a query by one or more columns, either in ascending (`ASC`) or descending (`DESC`) order. The basic syntax is as follows:
```sql
SELECT column1, column2, ...
FROM table_name
ORDER BY column1 [ASC|DESC], column2 [ASC|DESC], ...;
```
- **column1, column2, ...**: The columns by which you want to sort the results.
- **ASC**: Ascending order (default).
- **DESC**: Descending order.
### Example 1: Simple ORDER BY
Suppose you have a table `employees` with the following data:
```sql
+----+---------+--------+------------+
| id | name    | salary | hire_date  |
+----+---------+--------+------------+
|  1 | Alice   |  70000 | 2019-03-01 |
|  2 | Bob     |  60000 | 2018-07-15 |
|  3 | Charlie |  80000 | 2020-10-25 |
|  4 | Diana   |  75000 | 2017-06-10 |
+----+---------+--------+------------+
```
To sort these employees by their salary in ascending order:
```sql
SELECT id, name, salary, hire_date
FROM employees
ORDER BY salary ASC;
```
The result set will be:
```sql
+----+---------+--------+------------+
| id | name    | salary | hire_date  |
+----+---------+--------+------------+
|  2 | Bob     |  60000 | 2018-07-15 |
|  1 | Alice   |  70000 | 2019-03-01 |
|  4 | Diana   |  75000 | 2017-06-10 |
|  3 | Charlie |  80000 | 2020-10-25 |
+----+---------+--------+------------+
```
### Example 2: ORDER BY Multiple Columns
You can sort by multiple columns by specifying them in the `ORDER BY` clause; later columns are used only to break ties in the earlier ones. For instance, to sort employees by salary in ascending order and then by name in descending order (since every salary in this table is unique, the secondary sort has no visible effect here):
```sql
SELECT id, name, salary, hire_date
FROM employees
ORDER BY salary ASC, name DESC;
```
The result set will be:
```sql
+----+---------+--------+------------+
| id | name    | salary | hire_date  |
+----+---------+--------+------------+
|  2 | Bob     |  60000 | 2018-07-15 |
|  1 | Alice   |  70000 | 2019-03-01 |
|  4 | Diana   |  75000 | 2017-06-10 |
|  3 | Charlie |  80000 | 2020-10-25 |
+----+---------+--------+------------+
```
### Example 3: Using Column Aliases
You can also use column aliases in the `ORDER BY` clause. Suppose you calculate a derived column in your query:
```sql
SELECT id, name, salary, hire_date, salary * 1.1 AS adjusted_salary
FROM employees
ORDER BY adjusted_salary DESC;
```
The result set will sort the employees by their adjusted salary in descending order.
## Advanced Usage of ORDER BY
### Ordering by Expressions
You can use expressions in the `ORDER BY` clause. For example, if you want to order employees by the year they were hired:
```sql
SELECT id, name, salary, hire_date
FROM employees
ORDER BY YEAR(hire_date) ASC;
```
### Ordering by Positions
Instead of specifying column names, you can use the column positions in the `ORDER BY` clause. This is useful when dealing with complex queries. For example:
```sql
SELECT id, name, salary, hire_date
FROM employees
ORDER BY 3 ASC, 2 DESC;
```
This query orders the result set by the third column (`salary`) in ascending order and the second column (`name`) in descending order.
### ORDER BY with NULL Values
Handling `NULL` values in sorting can vary between SQL implementations. By default, `NULL` values might appear first or last. Some databases allow specifying this explicitly:
```sql
SELECT id, name, salary, hire_date
FROM employees
ORDER BY salary ASC NULLS LAST;
```
This query places `NULL` salary values at the end of the result set.
### Limiting Results with ORDER BY
Often, you may want to sort data and retrieve only a subset of rows. You can combine `ORDER BY` with the `LIMIT` clause:
```sql
SELECT id, name, salary, hire_date
FROM employees
ORDER BY salary DESC
LIMIT 5;
```
This query returns the top 5 highest-paid employees.
### ORDER BY in Subqueries
You can use `ORDER BY` in subqueries, but the ordering applies only to the subquery's result; unless the outer query has its own `ORDER BY`, the final row order is not guaranteed. For example:
```sql
SELECT *
FROM (SELECT id, name, salary FROM employees ORDER BY salary DESC) AS sorted_employees;
```
## Best Practices for Using ORDER BY
1. **Indexing**: Ensure the columns used in the `ORDER BY` clause are indexed to improve performance.
2. **Column Order**: Order columns thoughtfully to achieve the desired result set.
3. **Avoid Using Positions**: Use column names or aliases instead of positions for better readability and maintainability.
4. **Limit Rows**: Combine `ORDER BY` with `LIMIT` to optimize query performance when only a subset of rows is needed.
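To illustrate the first point, an index on the sort column (the index names below are hypothetical) lets the database read rows in sorted order instead of sorting at query time:

```sql
-- Hypothetical indexes to support the ORDER BY examples above
CREATE INDEX idx_employees_salary ON employees (salary);

-- A composite index can serve a multi-column sort such as
-- ORDER BY salary, name (exact capabilities vary by database)
CREATE INDEX idx_employees_salary_name ON employees (salary, name);
```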
## Conclusion
The `ORDER BY` clause in SQL is an essential tool for organizing and presenting data. By mastering its syntax and understanding its various use cases, you can efficiently sort your query results to meet your specific needs. Whether you're dealing with simple or complex queries, proper use of `ORDER BY` can enhance the readability and functionality of your SQL operations. | kellyblaire |
1,877,082 | Top 7 Featured DEV Posts of the Week | Welcome to this week's Top 7, where the DEV Team handpicks our favorite posts from the previous... | 0 | 2024-06-04T19:47:12 | https://dev.to/devteam/top-7-featured-dev-posts-of-the-week-pg0 | top7 | _Welcome to this week's Top 7, where the DEV Team handpicks our favorite posts from the previous week._
Congrats to all the authors that made it onto the list 👏
{% embed https://dev.to/fsh02/the-long-path-of-javascript-from-es6-until-today-3gc3 %}
Farid walks us through how JavaScript has transformed since the launch of ES6, the second major and biggest release for the language back in 2015, until today.
{% embed https://dev.to/jderochervlk/rescript-has-come-a-long-way-maybe-its-time-to-switch-from-typescript-29he %}
Josh presents a compelling argument for trying out ReScript for your next project.
{% embed https://dev.to/opensauced/how-to-assess-your-skill-level-before-contributing-to-open-source-4pn3 %}
Bekah designed a self-assessment guide that helps developers prepare to make meaningful and impactful contributions to open source.
{% embed https://dev.to/rogiia/how-to-build-a-basic-rag-app-h9p %}
Roger shares a tutorial on building a basic Retrieval-Augmented Generation (RAG) app, complete with a Google Colab notebook with step-by-step examples.
{% embed https://dev.to/abbeyperini/share-your-experience-loudly-and-often-36ah %}
Abbey encourages community members to share their knowledge and experiences.
{% embed https://dev.to/kibumpng/strategies-for-effective-urgent-ticket-classification-3ian %}
Laura helps us define and understand the meaning of urgency, and provides a guideline for classification and communication with stakeholders.
{% embed https://dev.to/arjunrao87/the-art-of-complaining-effectively-3kal %}
Arjun challenges readers to voice your complaints more constructively.
_And that's a wrap for this week's Top 7 roundup! 🎬 We hope you enjoyed this eclectic mix of insights, stories, and tips from our talented authors. Keep coding, keep learning, and stay tuned to DEV for more captivating content and [make sure you’re opted in to our Weekly Newsletter](https://dev.to/settings/notifications) 📩 for all the best articles, discussions, and updates._ | thepracticaldev
1,877,081 | NULL Handling in SQL | Introduction Dealing with NULL values in SQL is a fundamental aspect that every database... | 0 | 2024-06-04T19:44:22 | https://dev.to/kellyblaire/null-handling-in-sql-1mp2 | sql, sqlserver, database, mysql | ## Introduction
Dealing with `NULL` values in SQL is a fundamental aspect that every database professional must grasp. `NULL` represents missing or undefined values in a database table, and it is crucial to handle these values correctly to ensure the integrity and accuracy of your data operations. This article will delve into the concept of `NULL` in SQL, its implications, and various strategies for handling `NULL` values.
## Understanding NULL in SQL
`NULL` in SQL is a special marker used to indicate that a data value does not exist in the database. It is not equivalent to an empty string or a zero value. Instead, `NULL` signifies the absence of any value. The presence of `NULL` values in a database table can affect the results of queries, especially when performing operations like comparisons, aggregations, and joins.
### Key Characteristics of NULL
1. **Indeterminate Value**: `NULL` represents an unknown or indeterminate value.
2. **Non-comparability**: Comparisons involving `NULL` (e.g., `=`, `<`, `>`) yield `UNKNOWN` rather than `TRUE` or `FALSE`, which is why such rows are filtered out by `WHERE`.
3. **Special Handling in Functions**: Many SQL functions have specific behavior when dealing with `NULL` values.
## Working with NULL in SQL
### Checking for NULL
To check if a value is `NULL`, you use the `IS NULL` or `IS NOT NULL` operators. For example:
```sql
SELECT * FROM employees WHERE manager_id IS NULL;
```
This query retrieves all employees who do not have a manager.
### NULL and Comparison Operators
Direct comparisons with `NULL` using the standard comparison operators (`=`, `!=`, `<`, etc.) do not work as expected. For example:
```sql
SELECT * FROM employees WHERE manager_id = NULL;
```
This query will not return any rows, even if some `manager_id` values are `NULL`. Instead, you should use:
```sql
SELECT * FROM employees WHERE manager_id IS NULL;
```
### NULL in Aggregations
Aggregation functions like `SUM`, `AVG`, `COUNT`, etc., handle `NULL` values differently. For example, `SUM` and `AVG` ignore `NULL` values, while `COUNT` can count them if explicitly specified.
```sql
SELECT SUM(salary) FROM employees; -- NULL values are ignored
SELECT AVG(salary) FROM employees; -- NULL values are ignored
SELECT COUNT(manager_id) FROM employees; -- NULL values are ignored
SELECT COUNT(*) FROM employees WHERE manager_id IS NULL; -- Counts rows where manager_id is NULL
```
### Dealing with NULL in Joins
When performing joins, `NULL` values can lead to unexpected results. For instance, `INNER JOIN` excludes rows with `NULL` values in the join condition, while `LEFT JOIN` includes them.
```sql
SELECT e.name, d.department_name
FROM employees e
LEFT JOIN departments d ON e.department_id = d.id;
```
This query retrieves all employees, including those without a department, because of the `LEFT JOIN`.
## Handling NULL in SQL Queries
### Using COALESCE
The `COALESCE` function returns the first non-`NULL` value in a list of expressions. It is useful for replacing `NULL` with a default value.
```sql
SELECT name, COALESCE(manager_id, 'No Manager') AS manager_id
FROM employees;
```
This query replaces `NULL` `manager_id` values with the string 'No Manager'.
### Using IFNULL or ISNULL
Many SQL dialects provide functions like `IFNULL` (MySQL) or `ISNULL` (SQL Server) to handle `NULL` values.
```sql
-- MySQL
SELECT name, IFNULL(manager_id, 'No Manager') AS manager_id FROM employees;
-- SQL Server
SELECT name, ISNULL(manager_id, 'No Manager') AS manager_id FROM employees;
```
### Using NULLIF
The `NULLIF` function returns `NULL` if the two arguments are equal; otherwise, it returns the first argument. It is useful for avoiding division by zero errors.
```sql
SELECT price / NULLIF(quantity, 0) AS unit_price
FROM products;
```
This query prevents division by zero by returning `NULL` if `quantity` is zero.
## Best Practices for NULL Handling
1. **Define Default Values**: When creating tables, define default values for columns to minimize `NULL` occurrences.
2. **Use Appropriate Functions**: Utilize functions like `COALESCE`, `IFNULL`, and `NULLIF` to handle `NULL` values effectively.
3. **Validate Data**: Implement data validation rules to prevent `NULL` values where they are not desired.
4. **Consider Database Design**: Design your database schema to handle `NULL` values appropriately, considering the impact on queries and performance.
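As a sketch of the first practice, defaults and `NOT NULL` constraints can be declared in the table definition itself (this schema is illustrative):

```sql
-- Illustrative schema: NOT NULL and DEFAULT reduce unexpected NULLs
CREATE TABLE employees (
    id         INT PRIMARY KEY,
    name       VARCHAR(100) NOT NULL,       -- a value must always be supplied
    salary     DECIMAL(10, 2) DEFAULT 0.00, -- falls back to 0 when omitted
    manager_id INT                          -- NULL allowed: top-level employees
);
```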
## Conclusion
Handling `NULL` values in SQL requires careful consideration and understanding of their behavior in different operations. By using the appropriate SQL functions and adhering to best practices, you can ensure that your database queries produce accurate and reliable results. Whether you're checking for `NULL`, performing aggregations, or joining tables, proper `NULL` handling is crucial for maintaining data integrity and achieving the desired outcomes in your SQL operations. | kellyblaire |
1,877,078 | Technical Summary of the "Artificial Intelligence Risk Management Framework (AI RMF 1.0)" | The "Artificial Intelligence Risk Management Framework (AI RMF 1.0)", developed by the National... | 0 | 2024-06-04T19:34:59 | https://dev.to/gcjordi/resum-tecnic-de-lartificial-intelligence-risk-management-framework-ai-rmf-10-4aia | ai, ia, ciberseguretat, nist | The "[Artificial Intelligence Risk Management Framework (AI RMF 1.0)](https://www.nist.gov/itl/ai-risk-management-framework)", developed by the National Institute of Standards and Technology (NIST), is a foundational document for managing the risks associated with artificial intelligence (AI) systems. The framework's main goal is to provide tools and guidance for designing, developing, deploying, and using AI systems in a responsible and trustworthy way, while managing the potential risks these systems can pose to individuals, organizations, and society at large.
**Introduction and Fundamental Concepts**
The document begins by highlighting the importance of AI across many sectors, from commerce to cybersecurity, emphasizing its transformative potential but also its inherent risks.
In the context of AI, "risk" is defined as a combined measure of the probability that an event will occur and the magnitude of its consequences. Risks can be both positive and negative, and managing them is essential to developing trustworthy AI systems.
**Characteristics of Trustworthy AI Systems**
The framework identifies several key characteristics that define a trustworthy AI system:
_Valid and Reliable:_ AI systems must be accurate, robust, and generalizable, capable of maintaining their performance under varied conditions.
_Safe:_ Systems must guarantee user safety, avoiding situations that could endanger life, health, or property.
_Resilient and Secure:_ Resilience refers to the system's ability to maintain its functionality in the face of unexpected changes, while security means protecting against unauthorized access and other threats.
_Accountable and Transparent:_ Transparency means providing clear information about how the system works and the decisions it makes, while accountability refers to the ability to answer for the system's actions and outcomes.
_Explainable and Interpretable:_ AI systems must be able to explain their operations and results in a way that users can understand.
_Privacy-Enhanced:_ AI systems must respect user privacy, adopting technologies and practices that safeguard personal information.
_Fair, with Harmful Bias Managed:_ AI systems must mitigate harmful biases, guaranteeing fair treatment for all users and affected groups.
**Functions of the Risk Management Framework**
The core of the AI RMF is divided into four main functions that help organizations manage the risks associated with AI systems:
_Govern:_ Establish policies, processes, and procedures for managing AI risks within the organization, ensuring that legal and regulatory requirements are met.
_Map:_ Identify and understand the context in which the AI system will be deployed, including its objectives, intended users, and potential impacts.
_Measure:_ Apply quantitative and qualitative methods to assess and monitor AI risks, ensuring that systems are tested and evaluated before and during their use.
_Manage:_ Develop and implement actions to mitigate the identified risks, ensuring that AI systems are managed continuously throughout their lifecycle.
**Effectiveness of the Framework**
NIST highlights the importance of continuously evaluating the framework's effectiveness, including the development of metrics and methodologies to measure improvements in the trustworthiness of AI systems.
Organizations are encouraged to periodically review the AI RMF's impact on their policies and processes, fostering an organizational culture that prioritizes identifying and managing AI risks.
In summary, the "Artificial Intelligence Risk Management Framework (AI RMF 1.0)" provides a robust structure for managing the risks associated with AI systems, promoting the responsible development and use of this emerging technology.
This framework not only helps mitigate potential risks, but also fosters public trust in AI systems, contributing to their safe and ethical deployment in society.
[Jordi G. Castillón](https://jordigarcia.eu/) | gcjordi |
1,877,077 | What are Shopify theme app extensions? (and how to build with them) | When you think of a Shopify app, there’s a good chance you’re thinking of one of two things: an app... | 0 | 2024-06-04T19:33:10 | https://gadget.dev/blog/understanding-shopify-theme-app-extensions-what-they-are-and-how-to-build-with-them | shopify, liquid, javascript, themes | When you think of a Shopify app, there’s a good chance you’re thinking of one of two things: an app that helps a merchant optimize things behind the scenes, like an analytics tool, or an app that adds buyer-facing functionality to the storefront. If you think of the latter, then you’re probably thinking of a theme app extension.
## What are theme app extensions?
Shopify theme app extensions are bundles of Liquid blocks, static assets, and JavaScript snippets that allow merchants to extend and customize the functionalities of their Shopify themes. These extensions allow developers to easily add dynamic modules and additional features into a store without the need for extensive theme modifications.
Because they exist outside of the core theme, theme app extensions enable developers to maintain the integrity of the core theme while making updates or changes. This ensures that modifications are easier to manage, reducing the likelihood of conflicts and errors. Merchants can place and edit the theme extension anywhere in their store using the theme editor, instead of diving into a theme’s Liquid template.
## Themes vs apps
In order to fully understand theme app extensions, we have to go back to Shopify basics, because it’s important to really understand the difference between apps and themes. Themes, essentially, are the face of a Shopify store. They provide the layout and overall look and feel of the interface a shopper interacts with as they browse through products. Apps, on the other hand, give a Shopify store added features and functionality.
For a long time, any apps that wanted to inject new functionality into a theme would require merchants to manually edit the theme code upon install. This, obviously, was not ideal. If the app was uninstalled, entire sections of a store could break. Or, if the code was edited incorrectly, the merchant would run into problems that they likely didn’t know how to fix. Overall, it was a necessity, but we should all be relieved that there’s now a better way, and that is where Shopify theme app extensions come in.
Theme app extensions are the way Shopify plans to have themes and apps overlap moving forward. They provide developers with a controlled way to customize a theme, without requiring merchants to manually edit their theme code. This avoids any potential integration bugs, maintaining the integrity of the theme so all future updates are supported without issue, and the app is automatically removed from the theme when uninstalled, without the need for manual code cleanup. In order to use theme app extensions, it’s important to identify [whether a merchant is using a vintage theme or Online Store V2](https://gadget.dev/blog/how-to-work-with-shopify-themes-as-an-app-developer-part-1), as they are only compatible with 2.0 themes.
## Using theme app extensions
Just like with other extension types, theme extensions allow Shopify to standardize the experience for merchants. They offer a cleaner onboarding experience for public apps, because everything is standardized and automated, and for custom apps, it means less hand-holding.
There are two primary types of theme app extensions: App blocks and app embed blocks.
**App blocks** are blocks of code that inject **inline** content onto a page to extend a theme. Merchants can add app blocks to a compatible theme section, or as wrapped app blocks that add your app’s functionality directly into a theme. They allow merchants to inject their apps directly into a theme, while giving the merchant control to modify, add, remove, or reposition the app block using Shopify’s built-in storefront editor. To add app blocks to a section, a theme will need to include a generic block of type `@app` in the section schema. [You can find an example in Shopify’s documentation.](https://shopify.dev/docs/storefronts/themes/architecture/blocks/app-blocks#support-app-blocks-in-the-section) App blocks only work for themes on Online Store 2.0. Merchants using vintage themes still need to manually copy and paste Liquid and JavaScript code snippets to their theme.
**App embed blocks** are used when storefront apps do **not** require a UI component, or use a popup or floating element instead of inline functionality. Keep in mind that by default, app embed blocks are deactivated upon install, and merchants will need to manually activate them. If this is your first time working with app embed blocks, read our guide to [make sure your app embed blocks are running properly](https://gadget.dev/blog/how-to-work-with-shopify-themes-as-an-app-developer-part-2-embedding-your-frontend). App embed blocks work for both vintage and Online Store 2.0 themes, no copy-paste is required by the merchant.
When using either type of theme app extensions, your application can integrate Javascript, CSS files, or other static assets such as images, directly within an `/assets` subdirectory, which allows Shopify's CDN to distribute your content on your behalf, so you don’t have to worry about performant load times for your app components or managing your assets.
## Examples of theme app extensions
Whether it's improving navigation or optimizing product pages, extensions allow you to customize a Shopify store to a merchant’s specific needs. If you’re looking for ideas on how you can use theme app extensions in your apps, we’ve got a few suggestions!
**Product recommendation quizzes** can be a great way to help merchants increase conversion rates. By suggesting products based on a buyer’s own preferences or interests, you’re able to offer much more personalized recommendations, and they’re more likely to buy. You could even go a step further by creating a bundle based on the quiz results to increase AOV. You can check out our forkable [Shopify product quiz template](https://gadget.dev/resources/templates/product-quiz-template) to get a jump start.
**Chatbots** are a great way to help merchants hop on the AI bandwagon, and provide personalized recommendations to shoppers as they browse. By leveraging store knowledge, prompts, and context, you can have a chatbot that’s tailor-made for a specific store. We have a step-by-step tutorial to [build a storefront chatbot with OpenAI](https://docs.gadget.dev/guides/tutorials/ai-product-recommender).
**Customer reviews** will quickly provide social proof on product listings to build trust and increase conversion. If you support buyer-submitted photos, it can create a sense of authenticity, and has been shown to improve conversion rates.
---
Shopify theme app extensions have substantially changed the way developers can enhance Shopify themes. They provide better integration, smoother updates, and simple app installs and removals.
If you want to build using theme app extensions for your next app, Gadget makes it easy. You can [sign up for free to start building](https://app.gadget.dev/auth/signup), or connect with the community over on [the Gadget Discord](https://discord.com/invite/tY4ZjWdMcB). | gadget |
1,877,075 | FastAPI Beyond CRUD Part 6 - CRUD With Async SQLModel (An Introduction to Dependency Injection) | In this video, we build our CRUD functionality replacing the in-memory database (Python-list) with... | 0 | 2024-06-04T19:32:34 | https://dev.to/jod35/fastapi-beyond-crud-part-6-crud-with-async-sqlmodel-an-introduction-to-dependency-injection-5d1e | fastapi, api, python, programming | In this video, we build our CRUD functionality replacing the in-memory database (Python-list) with persistence from a real database (PostgreSQL). We also look at organizing our CRUD code to separate database logic from the route code.
{%youtube AtHEX76Wysw%} | jod35 |
1,877,074 | localforage vs localstorage | Localforage vs Localstorage So, whats the difference between localforage and localstorage?... | 0 | 2024-06-04T19:31:13 | https://dev.to/dumorando/localforage-vs-localstorage-43ap | javascript, webdev | # Localforage vs Localstorage
So, what's the difference between localforage and localStorage? And why do people recommend localforage rather than localStorage?
It's quite simple, actually.
## Localforage
Localforage is an asynchronous local storage library, similar to localStorage, but internally it uses IndexedDB.
IndexedDB is arguably a better alternative to localStorage; the problem is that it's kinda complex.
Localforage gives you a simple, localStorage-like syntax on top of IndexedDB.
Localforage also falls back to other storage methods on older browsers.
It's good for saving large data, as IndexedDB offers much more storage than localStorage.
## Localstorage
Localstorage is a synchronous key-value store built into the browser.
You don't need promises or async syntax to use it.
It has a lot less storage than IndexedDB.
It's great for saving user settings without using cookies.
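To make the contrast concrete, here's a sketch of the two calling styles. It uses tiny in-memory stand-ins so the snippet runs anywhere; in a real page you'd call the browser's `localStorage` and the localforage library instead.

```javascript
// Minimal in-memory stand-ins so this snippet runs anywhere; in a real
// page you'd use the browser's localStorage and the localforage library.
const localStorageLike = {
  store: {},
  setItem(key, value) { this.store[key] = String(value); }, // coerces to string
  getItem(key) { return key in this.store ? this.store[key] : null; },
};

const localforageLike = {
  store: new Map(),
  async setItem(key, value) { this.store.set(key, value); return value; },
  async getItem(key) { return this.store.has(key) ? this.store.get(key) : null; },
};

// localStorage style: synchronous, and every value becomes a string.
localStorageLike.setItem('darkMode', true);
const darkMode = localStorageLike.getItem('darkMode'); // "true" (a string!)

// localforage style: asynchronous, and values keep their structure.
(async () => {
  await localforageLike.setItem('settings', { darkMode: true });
  const settings = await localforageLike.getItem('settings');
  console.log(typeof darkMode, typeof settings.darkMode); // "string boolean"
})();
```

That string coercion is why booleans and objects saved with plain localStorage need `JSON.stringify`/`JSON.parse`, while localforage can store them directly.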
## Conclusion
It's ultimately your choice.
I personally recommend localStorage for simply saving settings like dark mode, etc., but I recommend localforage for saving data that is large in size, or when you can only use async code in the current situation. | dumorando
1,877,073 | How to Debug Using the Developer Console | The developer console in modern web browsers provides a powerful suite of tools for debugging web... | 0 | 2024-06-04T19:29:12 | https://dev.to/vidyarathna/how-to-debug-using-the-developer-console-1o0d | devtools, debugging, webdebugging, techcommunity |
The developer console in modern web browsers provides a powerful suite of tools for debugging web applications. Whether you're inspecting elements, monitoring network activity, or adjusting device settings, the developer console has you covered. Here’s how you can make the most of these tools:
### 1. **Opening the Developer Console**
- **Chrome**: Right-click on the webpage and select "Inspect" or press `Ctrl+Shift+I` (Windows/Linux) or `Cmd+Option+I` (Mac).
- **Firefox**: Right-click on the webpage and select "Inspect Element" or press `Ctrl+Shift+I` (Windows/Linux) or `Cmd+Option+I` (Mac).
- **Edge**: Right-click on the webpage and select "Inspect" or press `F12`.
- **Safari**: Enable the "Develop" menu in Preferences > Advanced, then select "Show Web Inspector" from the Develop menu or press `Cmd+Option+I`.
### 2. **Elements Panel**
The Elements panel allows you to inspect and manipulate the DOM and CSS.
- **Inspect Elements**: Right-click on any element on the page and select "Inspect" to jump to its position in the DOM tree.
- **Edit HTML**: Double-click an element or right-click and choose "Edit as HTML" to modify the HTML structure.
- **Edit CSS**: Modify CSS rules directly in the "Styles" pane. You can add new properties, change existing ones, and see the effects immediately.
### 3. **Console Panel**
The Console panel is used for logging diagnostic information and running JavaScript code.
- **Logging Messages**: Use `console.log()`, `console.error()`, `console.warn()`, and `console.info()` in your JavaScript code to output messages to the console.
- **Running JavaScript**: Enter JavaScript expressions directly into the console to test code snippets.
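For instance, each method from the standard Console API tags its output differently in the panel:

```javascript
// Each method gets distinct styling and filtering in the Console panel.
const user = { name: 'Ada', role: 'admin' };

console.log('user loaded:', user);       // plain diagnostic output
console.info('cache warmed');            // informational message
console.warn('legacy endpoint in use');  // highlighted yellow warning
console.error('request failed: 500');    // red error entry with a stack trace
console.table([{ id: 1 }, { id: 2 }]);   // renders rows as an interactive table
```

You can also type any of these calls directly into the Console prompt to try them on a live page.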
### 4. **Sources Panel**
The Sources panel is used for debugging JavaScript.
- **Set Breakpoints**: Click on the line number in the source code to set breakpoints. The execution will pause when it reaches these points.
- **Step Through Code**: Use the buttons for stepping over, into, and out of functions to navigate through your code line by line.
- **Watch Expressions**: Add expressions to the "Watch" panel to monitor their values as you debug.
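Besides clicking a line number, you can pause execution programmatically with a `debugger` statement. A minimal sketch:

```javascript
// When DevTools is open, execution pauses on the `debugger` statement,
// letting you step through the loop and inspect `sum` in the Scope pane.
function total(items) {
  debugger; // no-op when no debugger is attached
  let sum = 0;
  for (const item of items) {
    sum += item.price;
  }
  return sum;
}

console.log(total([{ price: 2 }, { price: 3 }])); // 5
```

This is handy when the line you want to break on is hard to find in minified or generated sources; just remember to remove the statement before shipping.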
### 5. **Network Panel**
The Network panel helps you monitor and analyze network activity.
- **Inspect Requests**: View detailed information about each network request, including headers, payload, and timing.
- **Simulate Network Conditions**: Click on the "Online" dropdown (usually located at the top-right of the Network panel) and select "Slow 3G", "Fast 3G", or add a custom speed profile to simulate different network conditions.
- **Throttling**: Enable network throttling to test how your application behaves under various network speeds.
### 6. **Performance Panel**
The Performance panel allows you to record and analyze runtime performance.
- **Record**: Click the record button to start capturing performance data. Interact with your page and then stop recording.
- **Analyze**: View the timeline, frames per second, and detailed call stacks to identify performance bottlenecks.
### 7. **Application Panel**
The Application panel provides access to various storage and service worker features.
- **Local Storage, Session Storage, and Cookies**: Inspect and modify data stored in these storage types.
- **Service Workers**: Manage service workers registered by your web application.
- **Clear Storage**: Clear all site data including cookies, local storage, and caches.
### 8. **Responsive Design Mode**
You can simulate different screen sizes and resolutions using the responsive design mode.
- **Toggle Device Mode**: Click the device icon in the top-left corner of the Developer Tools or press `Ctrl+Shift+M` (Windows/Linux) or `Cmd+Option+M` (Mac).
- **Choose a Device**: Select a predefined device from the device toolbar or add custom dimensions to see how your website looks on different screens.
- **Rotate Screen**: Click the rotate button to switch between portrait and landscape orientations.
Using these features in the developer console, you can effectively debug and optimize your web applications, ensuring a better user experience across various devices and network conditions.
Happy Coding! 😄 | vidyarathna |
1,877,072 | Registrations for Eastern India's largest Hackathon are officially OPEN! 🔥 | Hack4Bengal 3.0: Registrations Now Open! Hack4Bengal 3.0, Eastern India's largest... | 0 | 2024-06-04T19:28:51 | https://dev.to/arup_matabber/registrations-for-eastern-indias-largest-hackathon-are-officially-open-2l4m | ## Hack4Bengal 3.0: Registrations Now Open!
Hack4Bengal 3.0, Eastern India's largest hackathon, is now open for registrations! This premier event, set to take place in Kalyani, West Bengal on June 28, 2024, is a golden opportunity for coders of all levels to showcase their skills, collaborate on innovative solutions, and compete for glory. Whether you're a seasoned developer or a beginner, Hack4Bengal promises a supportive environment with mentors, workshops, and numerous resources.
## Why Join Hack4Bengal 3.0?
- **Inclusive Participation:** Open to all skill levels, with dedicated workshops for beginners.
- **Team Building:** Form teams of 2-4 members or join as a solo hacker and find your team on Discord.
- **Comprehensive Support:** Enjoy free participation, meals, beverages, and sleeping arrangements.
- **Real-world Challenges:** Address pressing societal and environmental issues through technology.
## Event Highlights
**Free Participation:** No cost to join, with meals, snacks, and swags provided.
**Expert Mentorship:** Guidance and workshops to help beginners and experienced hackers alike.
**Exciting Prizes:** Compete for top prizes and recognition.
**Networking Opportunities:** Meet like-minded tech enthusiasts and industry leaders.
## Frequently Asked Questions
**Is participation free?** Yes, it's absolutely free.
**Can beginners join?** Definitely, with ample support and resources available.
**Team size?** Teams of 2-4 are encouraged, with options for solo hackers to join teams via Discord.
**Under 18?** Allowed with parental consent.
**Registrations link :** https://lu.ma/0nruupo3?tk=6X86Ta | arup_matabber | |
1,877,071 | Sell Backstage like a salesperson | I've chatted with many platform/infra teams who are struggling to get buy-in for internal tools... | 0 | 2024-06-04T19:28:40 | https://dev.to/timnichols/sell-backstage-like-a-salesperson-83a | platform, idp, devex | I've chatted with many platform/infra teams who are struggling to get buy-in for internal tools (ex:[backstage](https://backstage.io/)) in the current market.
The problem is that they're using the wrong sales process!
Typically, someone sets up backstage during a hackweek, wins 1-2 HC and then after a year of mixed adoption and 'good' impact, management pulls the plug.
The mistake is to treat an internal tool like a bottom-up experiment (roll it out and hope DORA metrics tick up) rather than a top-down sales process (where you are mapping the needs and criteria of your ‘buyer’ from day one).
If you look at commercial alternatives to backstage ([Cortex](https://www.cortex.io/), [Port](https://www.getport.io/), etc), they tell a very different story (scorecards, service creation) to a different audience (Directors/VPs). Vendors know that there is no point engaging unless they've aligned on KPIs & Q/Q goals with a purchaser.
My view? To get year 2 for your internal tool you need exec buy-in from day 1 (and you should borrow marketing/ROI language from a commercial solution 😛 ) | timnichols |
**Install Notion on Ubuntu effortlessly!** This guide provides clear steps to set up Notion on your Ubuntu system and boost your productivity.
[Notion](https://www.notion.so/) can be your hero! This powerful workspace goes beyond just a database, tackling your note-taking, tasks, projects, and keeping everything organized in one place. But wait, there's good news! Even if you're an Ubuntu user, getting started with Notion is a breeze! This guide will walk you through the process in a few simple steps.
#### 1. Open the Terminal
Press Ctrl + Alt + T simultaneously, and a terminal window will appear.
#### 2. Update and Upgrade
Before installing Notion, it's good practice to ensure your system has the latest updates.
```Bash
sudo apt update
sudo apt upgrade
```
#### 3. Install Notion
Now comes the fun part! Run the following command in the terminal to install the necessary Notion packages:
```Bash
sudo apt install notion-app-enhanced
sudo apt install notion-app
```
#### 4. Launch Notion
Once the installation is complete, you can launch Notion directly from the terminal by typing:
```Bash
notion-app
```
If you still face issues after installing the desktop version (some people have had problems with the apt packages), remove them and install the snap version instead:
```bash
sudo apt purge notion*
sudo snap install notion-snap-reborn
```
Want to learn more? Feel free to check out my website (link in bio) or connect with me on [LinkedIn](https://www.linkedin.com/in/a4sa/) for more Ubuntu tips and tricks!
| abdul_sattar |
1,877,015 | 13 Best OS for Old Laptops - Revive Your Hardware! | If there's one thing people dread more than work, it's change. Especially when it comes from planned... | 0 | 2024-06-04T19:19:22 | https://dev.to/denis_valcu_b31f71d81032a/13-best-os-for-old-laptops-revive-your-hardware-54mn | webdev | If there's one thing people dread more than work, it's change. Especially when it comes from planned obsolescence. Picture this: you've got a perfectly functioning laptop, but suddenly it's too old to keep up with the latest OS. Sound familiar?

Instead of tossing your trusty old laptop, why not give it a new lease on life with a lightweight OS? In this article, we'll review 13 operating systems that can transform your old machine into a speedy, efficient device once more.
## Table of Contents

- What Is a Lightweight Operating System?
- Identifying Resource Constraints on Your Laptop
- Changing Windows 11 for a Lightweight OS
- Windows 10
- Tiny Core
- Lubuntu
- ArchBang Linux
- Linux Mint
- Puppy Linux
- Elementary OS
- Chromium OS
- Zorin OS Lite
- Phoenix OS
- Prime OS
- Peppermint OS
- Linux Lite

## What Is a Lightweight Operating System?

A lightweight operating system (OS) is designed to run efficiently on older or lower-spec hardware. These OS options strip away non-essential features, focusing on basic user needs like internet browsing, text editing, and simple connectivity tools. Ideal for old laptops, these OS options can make your machine feel almost new again.
## Identifying Resource Constraints on Your Laptop

Here are a few signs your laptop might need a lightweight OS:

- Sluggish performance.
- Frequent software crashes.
- Random freezes.
- Task manager showing high CPU or memory usage.
- Your laptop is over five years old.

## Changing Windows 11 for a Lightweight OS

Windows 11's high system requirements can be too much for older laptops. Switching to a lightweight OS can significantly improve your machine's performance. Let's explore the best options available.
### 1. Windows 10

Surprised to see Windows 10 here? Despite being a robust OS, it's more forgiving on older hardware compared to Windows 11. Key benefits include:

- Windows Defender
- Cortana integration
- Edge browser
- Virtual desktops

Minimum requirements are lower than Windows 11's, making it a solid choice if you prefer sticking with Windows. Get your Windows 10 license at the best prices from RoyalCDKeys.
### 2. Tiny Core

Requiring just 16 MB of space, Tiny Core Linux is incredibly lightweight. It offers a minimalist interface and fast booting speed, perfect for making your old laptop zippy again. However, its simplicity can be limiting if you need more features.

### 3. Lubuntu

Lubuntu is a lightweight Linux distro using the LXDE desktop environment. It's perfect for low RAM and 32-bit CPUs. With just 128 MB of RAM and 700 MB of storage, Lubuntu can revitalize your old machine. It offers a user-friendly interface and supports Microsoft programs via Wine.

### 4. ArchBang Linux

ArchBang is based on Arch Linux, known for its minimalism and customization. It requires 200 MB of RAM and 800 MB of storage. It's highly configurable, making it ideal for both desktops and servers.

### 5. Linux Mint

Linux Mint offers a traditional interface with a balance between lightweight and feature-rich. It requires 2 GB of RAM and 20 GB of storage. Benefits include:

- Pre-installed applications
- Customization
- High security
- Open-source community support

### 6. Puppy Linux

Puppy Linux is ultra-portable and can be run from a USB stick or live CD. It's small (about 300 MB) but fully functional, making it ideal for extremely old hardware.

### 7. Elementary OS

Elementary OS is known for its beautiful, minimalist design. Based on Ubuntu, it works better on newer hardware but can be a sleek option for reasonably old laptops. Key features include multitasking views, media management, and user-friendly interfaces.

### 8. Chromium OS

Chromium OS is perfect for internet-heavy users. It's a lightweight version of Chrome OS, requiring 4 GB of RAM and 16 GB of storage. It's simple, secure, and great for web-based tasks.

### 9. Zorin OS Lite

Zorin OS Lite blends features from Ubuntu, Mac OS, and Windows. It's designed for older hardware, requiring 512 MB of RAM and 8 GB of storage. It offers constant updates and a familiar interface.

### 10. Phoenix OS

Phoenix OS brings an Android experience to your laptop. It requires 2 GB of RAM and 1 GB of storage. It offers a desktop-like environment with Android apps, ideal for light usage and gaming.

### 11. Prime OS

Similar to Phoenix OS, Prime OS is Android-based and great for gaming. It requires 2 GB of RAM, 3 GB of storage, and a dedicated GPU. It offers a full desktop experience with access to the Google Play Store.

### 12. Peppermint OS

Peppermint OS uses cloud computing technologies to remain light and fast. It requires 1 GB of RAM and 4 GB of storage. It includes minimal pre-installed apps, allowing you to customize it to your needs.

### 13. Linux Lite

Linux Lite is designed for old hardware, offering a familiar Windows-like interface. It requires 1 GB of RAM and a 1.5 GHz CPU. It includes essential apps like LibreOffice and VLC Media Player, making it a comprehensive yet light solution.

For the best prices on software licenses, visit [RoyalCDKeys](https://royalcdkeys.com/). | denis_valcu_b31f71d81032a |
1,877,042 | Curso CCNAv7 Gratuito CISCO E NIC.br: Introdução Às Redes | O Curso CCNAv7: Introdução às Redes é uma iniciativa online gratuita, fruto da parceria entre o... | 0 | 2024-06-23T13:51:22 | https://guiadeti.com.br/curso-ccnav7-gratuito-introducao-as-redes/ | cursogratuito, cisco, cursosgratuitos, redes | ---
title: Curso CCNAv7 Gratuito CISCO E NIC.br: Introdução Às Redes
published: true
date: 2024-06-04 19:18:54 UTC
tags: CursoGratuito,cisco,cursosgratuitos,redes
canonical_url: https://guiadeti.com.br/curso-ccnav7-gratuito-introducao-as-redes/
---
The CCNAv7 Course: Introduction to Networks is a free online initiative born from a partnership between NIC.br and CISCO. It is the first preparatory module for the CISCO CCNA certification, designed to be taken remotely.
Over the course, participants will learn about the architecture, structure, functions, components, and operating models of the Internet and other computer networks.
It explores the principles and structure of IP addressing, as well as Ethernet fundamentals, including physical-media concepts and operations.
The estimated workload is 70 hours, which may vary according to the student's pace and familiarity with the content.
## Introduction to Networks
This free online course is offered in partnership between NIC.br and CISCO, representing the first preparatory stage toward the CISCO CCNA certification.

_Screenshot of the course page_
Ideal for those interested in deepening their knowledge of the architecture, structure, functions, components, and models of computer networks, including the Internet. Registration is open until 30/06/2024.
### Syllabus
Participants will explore the principles and structure of IP addressing, along with fundamental concepts of Ethernet's physical media and operations.
By the end of the course, students will be able to build simple local area networks (LANs), perform basic router and switch configuration, and implement IP addressing schemes.
### Methodology and Learning Resources
The course includes official Cisco material available in several languages, including Portuguese, English, Spanish, and French.
All classes are delivered as video lessons recorded by renowned instructors, complemented by hands-on practice in Cisco's official simulator, Cisco Packet Tracer.
Lessons can be accessed freely, allowing students to study whenever they are available.
### Assessment and Certification
Learning is assessed through six theory exams based on the course content, one practical exam in Packet Tracer, and a final theory exam.
To earn the certificate of completion, students must achieve a final grade of 70% or higher.
### Registration and Access
Once registration is confirmed, students receive a link to sign up on Cisco's Netacad platform, where the course is delivered.
Students must accept the partner institutions' privacy policy and access the platform regularly, logging in at least once every two weeks.
The CCNAv7 course must be completed within a maximum of three months, respecting the class end date for submitting activities.
### Student Support
Throughout the course, students are supported by a NIC.br instructor, available to provide guidance and answer questions through the Q&A Forum, ensuring continuous learning support.
## CCNAv7
The Cisco Certified Network Associate version 7 (CCNAv7) certification is offered by Cisco Systems, a global authority in networking technologies.
This credential validates professionals' proficiency in the fundamental concepts and practices of computer networking.
### Networking Fundamentals
CCNAv7 covers the basics of networking, from network architecture to the essential communication protocols, providing a solid foundation in the fundamentals of the field.
### Switching and Routing Proficiency
As a vital skill set for professionals in the field, CCNAv7 focuses heavily on switching and routing technologies, equipping certificate holders with the competencies needed to configure and administer switches and routers effectively.
### Wireless LANs (WLAN) and Security
The program also dedicates a significant portion to wireless local area networks (WLANs), preparing participants to understand and improve WLANs from both an operational and a security standpoint.
### Focus on Cybersecurity
Given the growing relevance of cybersecurity, CCNAv7 deepens knowledge of network protection strategies, enabling professionals to identify and neutralize security threats.
The CCNAv7 course seeks to fully prepare students in the essential fundamentals and practices of network security, ensuring a robust and applicable understanding of the field.
## NIC.br
NIC.br plays a central role in managing the “.br” domains, being responsible for registering and maintaining these resources, which are essential to Brazil's digital presence.
This effective management guarantees the integrity and continuous operation of the country's digital infrastructure, consolidating its technological foundation.
### Promoting a Free and Universal Internet
Committed to making the Internet free and accessible to everyone, NIC.br implements a variety of initiatives and projects aimed at digital inclusion, education, and the sustainable growth of the network in Brazil.
### Innovation through Research and Development
The institution maintains an active commitment to research and development, focusing its efforts on advancing Internet infrastructure and security in Brazil, with the goal of continuously adapting and improving the network.
### Digital Education and Awareness
NIC.br invests in education and awareness programs, promoting good Internet usage practices, online security, and the responsible use of technology.
### Building Strategic Partnerships
Through active collaboration with other entities and organizations, NIC.br establishes strategic partnerships that strengthen Brazil's Internet ecosystem, ensuring the network's sustainable and inclusive development.
## Enroll now in the free CCNAv7 course from Cisco and NIC.br and transform your future in networking!
[Registration for the CCNAv7 Course: Introduction to Networks – NIC.br and CISCO partnership – distance learning](https://cursoseventos.nic.br/curso/curso-ccna-intro-cisco-nicbr/) must be completed on the NIC.br website.
## Share this opportunity – get trained and advance your networking career!
Enjoyed this content about the free networking course? Then share it with everyone!
The post [Curso CCNAv7 Gratuito CISCO E NIC.br: Introdução Às Redes](https://guiadeti.com.br/curso-ccnav7-gratuito-introducao-as-redes/) appeared first on [Guia de TI](https://guiadeti.com.br). | guiadeti |
1,877,014 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-04T19:17:31 | https://dev.to/vevet15033/buy-verified-cash-app-account-54bp | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. 
If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. 
How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. 
# Building a customizable dashboard with Dashy

**Written by [Shalitha Suranga](https://blog.logrocket.com/author/shalithasuranga/)✏️**
With the Web 3.0 era, the broad availability of the internet and the evolution of web technologies helped web developers implement platform-independent, web-based software. Before this, developers could only build platform-dependent, standalone applications.
Nowadays, computer users can find a cloud-based app for nearly any general-purpose need on the internet. To make this possible, developers run these cloud-based apps on web servers, typically running multiple backend services, frontend apps, and server-control portals. But how do they efficiently access the web service links of each web server during development? A traditional approach is to create spreadsheets or documents with links, or to use web browser bookmarks.
Though practical, these solutions for storing web service links don’t offer centralized editing flexibility, a productive searching feature, clear visuals, and health-check-like networking features. On the flip side, self-hosted dashboard apps help you productively manage web service links and offer efficient ways to access links with keyboard shortcuts and instant searching features.
In this article, I’ll introduce Dashy, a free, open source, self-hosted dashboard app developed with Vue and Node.js. Dashy helps us create beautiful, customizable, modern dashboard pages with web service links and widgets. It also comes with inbuilt multi-language support, authentication, themes, integrated icon packs, and more productivity-focused features. We’ll discuss Dashy’s use cases and explore its features by installing it on the computer via Docker.
## Dashy’s use cases
According to Dashy’s [official website](https://dashy.to/), its primary goal is to create customizable dashboard interfaces for web servers by adding self-hosted web service links. But, it is a fully-featured generic dashboard app that we can use in various scenarios, such as:
### Creating a dashboard for web servers
Dashy’s primary use case is creating a dashboard with self-hosted web service links. It lets you aggregate all the self-hosted web service links of a specific web server into one web page, similar to tools like Homer, Organizr, and Heimdall.
If you use Dashy in a web server, you can navigate to the Dashy port of the web server IP/URL to browse its available web services. Most Dashy users use Dashy to create dashboards for their home lab servers.
### A better alternative for bookmarks
The inbuilt URL bookmarking feature in web browsers helps us easily access websites that we frequently use. Chromium-based browsers support tab grouping and let users save tab groups within bookmarks. However, browser bookmarks won’t offer a productive dashboard with quick editing features. Anyone can use a Dashy instance to manage their bookmarks, regardless of their professional role.
### All-in-one client details dashboard for web developers
Web developers can use a Dashy instance to store all app and admin page URLs of clients within one place. For example, WordPress developers can create a dashboard links group for each client and add websites, admin pages, and server management (i.e., cPanel) portal URLs. A full-stack developer can even include GitHub repositories, API documentation, etc., per client.
## Highlighted features of Dashy
A Dashy project lets us create modern, beautiful, fully-featured web-based dashboards with the following highlighted features:
* Sectioned link items with different views
* Productive ways to configure the app
* A vast widget ecosystem
* Easy installation for bare-metal and managed servers
### Sectioned link items with different views
Dashy lets you add web service links into sections or groups that get rendered in switchable views. You can see links in auto(tiled), vertical, horizontal, and workspace views. It enables productive keyboard navigation, hotkeys support, and an instant search feature to navigate into any listed web service entry. Dashy also lets us enable health-checking for each URL and display web service status with an indicator element.
### Productive ways to configure the app
This dashboard app uses a single YAML file to store website configuration and offers three configuration editing strategies:
* Using the web-based interactive editor right through the dashboard app
* Editing the JSON configuration object via the inbuilt visual JSON editor (which uses [the `v-jsoneditor` package](https://www.npmjs.com/package/v-jsoneditor))
* Editing the `public/conf.yml` configuration file directly from the server
Besides, the Dashy development team plans to offer a RESTful API to handle dashboard configuration through HTTP. Then, we’ll be able to update dashboard configurations via a CLI program from any computer without accessing the server filesystem or the Dashy frontend!
### A vast widget ecosystem
Apart from sectioned URL items, the Dashy dashboard can contain widgets. Dashy comes with many inbuilt widgets that let you display time, weather, exchange rates, news, system information, etc.
Its inbuilt widgets also let you display information from third-party tools like Mullvad VPN, HealthChecks, Addy, Nextcloud, AdGuard, and Pi-hole. Moreover, web developers can create custom widgets easily with Vue and Dashy APIs.
### Easy installation for bare-metal and managed servers
The Dashy project is a full-stack app built with Vue and Node.js. Its codebase contains a Dockerfile and deployment configuration for Netlify-like cloud app hosting services. Further, Dashy has an official Docker image on Docker Hub, so you can easily install Dashy into any on-premise web server or managed cloud platform in record time.
Apart from these highlighted features, Dashy also supports authentication, backup/restore, a minimal mode, instant web search, and theme customization.
## How to install Dashy
Now that we’ve talked about Dashy’s use cases and highlighted features, let’s explore it practically and learn how to use it. We can undoubtedly run its source directly by cloning [its GitHub repository](https://github.com/lissy93/dashy) or downloading it from GitHub releases, but you can run it with just one command if you use Docker.
First, install Dashy with the official Docker image:
```bash
docker run -d -p 8080:80 --name my-dashboard --restart=always lissy93/dashy:latest
```
The above command pulls the Docker image, creates a new container named `my-dashboard`, and runs the container in detached mode. Give Docker some time to start Dashy and visit `localhost:8080` from your web browser. You’ll see your very first Dashy dashboard: the default dashboard that comes with the installation process.
If you have a home lab server, start installing Dashy on it to get the real experience of a self-hosted dashboard!
## Getting started with Dashy
Let’s become familiar with our dashboard by learning its screen layout and keyboard actions. Dashy comes with a minimal — but highly customizable — dashboard interface with productive keyboard support and hotkey configuration.
### UI elements and actions
Dashy has three main segments within its interface: header, toolbar, and dashboard items.
#### Header
The header segment contains the dashboard name, logo, description, and navigation links. All these elements are customizable based on our requirements: 
#### Toolbar
The left side of the toolbar holds a search box element that lets you filter web service links and use a specific web search engine. The right side of the toolbar contains a toolbox that holds various controls, such as the theme selector, layout switcher, configuration editor, etc.: 
#### Dashboard items
The third segment of the dashboard interface contains dashboard items. Dashboard items can either be web service link groups or widgets. A fresh Dashy installation creates a sample link group, as shown in the below screenshot: 
### Productive keyboard support
You can search and click on specific dashboard link items with the mouse or touchpad. You can do this faster by using the keyboard: enter a search query, press the arrow keys, and press `Enter` on the selected entry.

Dashy lets you do a quick web search using the search query by pressing the enter key on the search box. You can also set up hotkeys to open links instantly by pressing a specific key. We’ll discuss setting up hotkeys after learning how to add new sections and links.
## Configuring the dashboard website
Every element you see on the dashboard is customizable according to your requirements. Dashy uses a single configuration file, `conf.yml`, to store the dashboard configuration. You can edit it easily with the interactive GUI edit mode or the GUI JSON editor, or you can edit the YAML configuration file directly in a text editor.
Choose one configuration editing method as you wish. I’ll use the inbuilt JSON editor in this tutorial, so you can copy/paste configurations easily into your Dashy instance.
Let’s get started with basic website configuration by updating the dashboard title, logo, description, footer, and theme.
### Updating basic website details
First, open the JSON configuration editor by clicking on the tool-looking button under **Config**. Then, select the **EDIT CONFIG** tab and switch the editor to JSON mode.

Next, change the dashboard website title and description, add a logo, and remove the navigation links to build a minimal artistic dashboard:
```json
"pageInfo": {
"title": "MyDashboard",
"description": "Welcome to my awesome dashboard!",
"footerText": "MyDashboard - Made with <em>Dashy</em>",
"logo": "https://dashy.to/img/dashy.png"
},
```
Preview changes and click on the disk icon button to save. Note that direct saving doesn’t work at the time of writing this article due to a [minor bug](https://github.com/Lissy93/dashy/issues/1147#issuecomment-2058845300). Saving the configuration file triggers a new application build, so check the app after some time to see the updated website details.
### Selecting a theme
Dashy comes with more than 30 inbuilt themes. You can browse the available theme list from [this GitHub link](https://github.com/Lissy93/dashy/blob/6f94ac876419ffb80f7f6cfb5372d33106e2a59b/src/utils/defaults.js#L48). You can switch the theme from the theme selector in the toolbox, but it saves the selected theme to the client side using the `localStorage` API, so we can update the theme for all dashboard users as follows:
```json
"appConfig": {
"theme": "adventure",
...
...
},
```
The above theme configuration activates the Adventure theme.

Dashy also lets you customize existing themes and create your own. Check the [official theming documentation](https://dashy.to/docs/theming) to learn more.
## Adding new web service link sections
Right now, we only have the default **Getting Started** link group that gets created during installation. Let’s add a new link group to learn how to add new dashboard items.
### Using the interactive edit mode
You can create new sections and links with GUI controls by activating interactive edit mode. Turn on edit mode and click on the **Add New Section** button.

Create a new section by adding the section name “Apps” and icon `fas fa-file`. Here, we used a [FontAwesome](https://blog.logrocket.com/font-awesome-icons-vue-js-complete-guide/) icon name, but you can use icons from other [supported icon libraries](https://dashy.to/docs/icons) or add a PNG icon.
Now, add new links to the newly created section by clicking on the **Add New Item** button, and add several entries to fill out the section.

After all edits, save the configuration file to the server to activate the new section for all dashboard users.
### Using the JSON configuration editor
The interactive editor approach is great for non-technical users — they can leisurely add new sections and items using GUI controls. However, developers would love to edit the JSON or YAML configuration to add new sections and links.
Open the JSON configuration editor and add the following object to the `sections` array to add the same Apps section without using the interactive edit mode:
```json
{
"name": "Apps",
"icon": "fas fa-file",
"items": [
{
"title": "Chat",
"icon": "fas fa-envelope",
"url": "http://192.168.1.5:5000"
},
{
"title": "Media Server",
"icon": "fas fa-music",
"url": "http://192.168.1.5:5001"
},
{
"title": "Process API",
"icon": "fas fa-microchip",
"url": "http://192.168.1.5:5002"
},
{
"title": "Smart Home",
"icon": "fas fa-house",
"url": "http://192.168.1.5:5003"
}
]
}
```
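If you manage many services, hand-writing these JSON entries gets repetitive. As a rough sketch (the service names, ports, and the `makeSection` helper below are illustrative examples, not part of Dashy), you can generate section objects programmatically and paste the output into the JSON editor:

```javascript
// Illustrative helper for generating a Dashy section object programmatically,
// useful when one host exposes many services on different ports.
// The host, service names, and ports are examples, not part of Dashy itself.
function makeSection(name, icon, host, services) {
  return {
    name,
    icon,
    items: services.map(({ title, icon: itemIcon, port }) => ({
      title,
      icon: itemIcon,
      url: `http://${host}:${port}`,
    })),
  };
}

const apps = makeSection('Apps', 'fas fa-file', '192.168.1.5', [
  { title: 'Chat', icon: 'fas fa-envelope', port: 5000 },
  { title: 'Media Server', icon: 'fas fa-music', port: 5001 },
]);

// Print the section as JSON, ready to paste into the `sections` array
console.log(JSON.stringify(apps, null, 2));
```

This keeps the URL scheme consistent across entries and makes it easy to regenerate the section when ports change.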
You can also add hotkeys for specific links by adding the `hotkey` property to a specific section item object. For example, the following configuration lets us open the chat web service by pressing the `2` numeric key:
```json
{
"title": "Chat",
"icon": "fas fa-envelope",
"url": "http://192.168.1.5:5000",
"hotkey": 2
},
```
### Adding status indicators
Every dashboard typically offers a feature to add web service status indicators that show whether a specific service is up or down. In Dashy, you can activate status indicators for all web service entries by using the following app configuration:
```json
"appConfig": {
...
...
"statusCheck": true,
"statusCheckInterval": 10
},
```
This configuration checks the status of all web services every ten seconds and updates the indicators. Here, you would see red indicators for all links in the Apps section because those web services don’t exist on the network.
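Conceptually, each status check boils down to an HTTP request with a timeout, where a successful response marks the service as up and a network error or timeout marks it as down. A minimal sketch of that idea in Node.js (illustrative only; Dashy’s actual health-check implementation differs in detail, and `checkService` is a made-up helper):

```javascript
// Illustrative sketch of a web service status check, in the spirit of
// Dashy's statusCheck feature. Requires Node 18+ for the built-in fetch.
async function checkService(url, timeoutMs = 5000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const res = await fetch(url, { signal: controller.signal });
    return res.ok ? 'up' : 'down'; // 2xx responses count as "up"
  } catch {
    return 'down'; // network errors and timeouts count as "down"
  } finally {
    clearTimeout(timer);
  }
}

// A local port with nothing listening is reported as down almost immediately
checkService('http://127.0.0.1:59999').then((status) => console.log(status));
```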
## Using Dashy widgets
A generic dashboard interface typically consists of widgets that provide helpful summarized details. Dashy offers many inbuilt widgets to display time, weather, trending GitHub repositories, networking information, host hardware resources usage, etc., for dashboard users.
### Adding and removing widgets
Let’s create a new section in the dashboard and add two widgets. We’ll add the `clock` and `joke` widgets (the `joke` widget renders a random programming joke!).
Create a new section with the following JSON:
```json
{
"name": "Widgets",
"icon": "fas fa-cube",
"widgets": [
{
"type": "clock",
"options": {
"timeZone": "America/Chicago",
"hideSeconds": true
}
},
{
"type": "joke",
"options": {
"language": "en",
"category": "programming",
"safeMode": true
}
}
]
}
```
As you can see, every widget entry has two JSON properties:
* `type`: Widget identifier, i.e., `clock`
* `options`: Widget configuration options
Here, we used `clock` and `joke` widget identifiers with options inside a new section. Save the updated configuration, and you’ll see the widgets in a new section.

You can edit the `widgets` array and remove entries to remove specific widgets from the dashboard. Browse all supported widgets from the [official widgets documentation](https://dashy.to/docs/widgets).
### Creating your own Dashy widget with Vue
Dashy offers many pre-developed widgets with every fresh installation, and you can request new widgets from the Dashy development team too. You can also create your own widget if you are familiar with Vue-based frontend development.
The Dashy project loads widgets from its codebase itself and won’t offer a centralized web store to install widgets yet. You can easily modify the source code of the Dashy instance and add a new widget. If you need to re-use a widget in several Dashy installations, you can maintain your own GitHub repository fork since Dashy is an open source project released under the MIT license.
Let’s learn how to create a custom Dashy widget by updating the Docker container’s source files. We’ll create a new simple widget called Cats that displays some facts about cats via [the `meowfacts.herokuapp.com` public RESTful API](https://meowfacts.herokuapp.com/).
First, enter into the Dashy Docker container’s shell interpreter:
```bash
docker exec -it my-dashboard sh
```
Now, create a new widget component named `Cats.vue` within the container:
```bash
touch src/components/Widgets/Cats.vue
```
Add the following Vue component implementation to the `Cats.vue` file:
```javascript
<template>
<div v-if="catFact" class="cats-wrapper">
<p class="cat-fact">{{ catFact }}</p>
</div>
</template>
<script>
import axios from 'axios';
import WidgetMixin from '@/mixins/WidgetMixin';
import { widgetApiEndpoints } from '@/utils/defaults';
export default {
mixins: [WidgetMixin],
components: {},
data() {
return {
catFact: null,
};
},
methods: {
/* Make GET request to Cat Facts API endpoint */
fetchData() {
axios.get(widgetApiEndpoints.catFacts)
.then((response) => {
this.processData(response.data);
})
.catch((dataFetchError) => {
this.error('Unable to fetch any cat facts', dataFetchError);
})
.finally(() => {
this.finishLoading();
});
},
/* Assign data variables to the returned data */
processData(data) {
[this.catFact] = data.data;
},
},
};
</script>
<style scoped lang="scss">
.cats-wrapper {
p.cat-fact {
color: var(--widget-text-color);
font-size: 1.2rem;
}
}
</style>
```
_**N.B.**: You can use the_ `vi` _editor program to add this code content to the file. Open the_ `Cats.vue` _file from the_ `vi` _editor, copy this code segment, and press the_ `Ctrl + Shift + V` _shortcut to paste._

Next, add the cat facts API endpoint to the widget API endpoints in the `src/utils/defaults.js` file:
```javascript
widgetApiEndpoints: {
// ---
// ---
catFacts: 'https://meowfacts.herokuapp.com/',
},
```
Finally, register the newly created widget by modifying the `src/components/Widgets/WidgetBase.vue` file, as shown in the following code snippet:
```javascript
const COMPAT = {
// ---
// ---
cats: 'Cats',
};
```
Visit the Dashy instance from the web browser and rebuild the app. Then, refresh the dashboard page, and you will see the newly created Cats widget.

Similarly, you can create any widget for your dashboard using Vue. I demonstrated creating a custom widget by directly editing the Dashy codebase in the Docker container. However, you can fork and clone the Dashy GitHub repository for creating complex widgets.
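One practical tip when developing widgets: the data-processing logic is plain JavaScript, so you can exercise it outside Vue before wiring it into a component. For example, the `processData` logic of the Cats widget can be extracted as a standalone function and run against a sample payload (the payload below is hypothetical):

```javascript
// The Cats widget's processData method takes the first fact from the API's
// `data` array. Extracted as a plain function, it can be checked without
// mounting the Vue component. The sample payload below is hypothetical.
function extractCatFact(responseBody) {
  const [firstFact] = responseBody.data;
  return firstFact;
}

const sample = { data: ['Cats sleep for around 16 hours a day.'] };
console.log(extractCatFact(sample));
// → Cats sleep for around 16 hours a day.
```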
## Creating private dashboards
By default, the Dashy app is publicly accessible, so anyone in the network can access your dashboard if they know the server’s hostname and Dashy port. Similarly, if you deploy your dashboard into a managed server, anyone on the internet can access it if they know the dashboard website URL or public IP. We can disable dashboard configuration and prevent dashboard interface modifications, but it doesn’t make your dashboard truly private.
Dashy has inbuilt basic authentication support with hashed passwords. It also supports the [KeyCloak](https://en.wikipedia.org/wiki/Keycloak) authentication server integration.
You can make your dashboard private with basic authentication by updating the `appConfig` object with an `auth` object as follows:
```json
"appConfig": {
...
...
"auth": {
"users": [
{
"user": "admin",
"type": "admin",
"hash": "8C6976E5B5410415BDE908BD4DEE15DFB167A9C873FC4BB8A81F6F2AB448A918"
},
{
"user": "user",
"hash": "04F8996DA763B7A969B1028EE3007569EAF3A635486DDAB211D512C85B9DF8FB"
}
]
}
}
```
The above basic authentication configuration creates an admin account and a normal user account. Here, we used [SHA](https://en.wikipedia.org/wiki/Secure_Hash_Algorithms)-256 hash strings for passwords. Note that the unencrypted passwords are the same as the usernames in the above JSON configuration segment.
Save the config and reload the app. Now, you have to log in to see your private dashboard.

The `user` account login won’t allow editing the configuration file on the server, but you can use the `admin` account for updating the Dashy configuration.
The hash-based authentication method is a good authentication strategy for home labs since its configuration is so easy. But, if you need to enable authentication for a dashboard that is open publicly to the internet, you can implement a more secure alternative authentication method, as explained in the [official documentation](https://dashy.to/docs/authentication/#alternative-authentication-methods).
## Conclusion
In this article, we learned about Dashy and its features by installing it on a computer via Docker. Dashy helps home lab or production server users aggregate running web services into one web-based interface, allowing administrators or users to browse available services from one place. Also, anyone can use their own Dashy instance as an alternative to traditional web browser bookmarks.
Moreover, web developers can use Dashy to store the project details of each client. Popular projects like Homer, Organizr, Flame, and Heimdall also help us create self-hosted dashboards, but Dashy competitively offers more features for creating fully-featured, beautiful dashboards. Its Vue codebase architecture also motivates developers to extend it with custom widgets and themes.
Create a beautiful, customizable dashboard with Dashy and experience how it boosts your daily productivity!
---
## Get set up with LogRocket's modern error tracking in minutes:
1. Visit https://logrocket.com/signup/ to get an app ID.
2. Install LogRocket via NPM or script tag. `LogRocket.init()` must be called client-side, not server-side.
NPM:
```bash
$ npm i --save logrocket
```

Code:

```javascript
import LogRocket from 'logrocket';
LogRocket.init('app/id');
```
Script Tag:
Add to your HTML:

```html
<script src="https://cdn.lr-ingest.com/LogRocket.min.js"></script>
<script>window.LogRocket && window.LogRocket.init('app/id');</script>
```
3. (Optional) Install plugins for deeper integrations with your stack:
* Redux middleware
* ngrx middleware
* Vuex plugin
[Get started now](https://lp.logrocket.com/blg/signup) | leemeganj |