id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,913,022 | Using Async in Ruby on Rails for CSV export | In this article, we'll go over the methods used to achieve async CSV export with Ruby on... | 0 | 2024-07-09T11:10:14 | https://dev.to/mharut/using-async-in-ruby-on-rails-for-csv-export-n69 | csv, ruby, async, webdev | In this article, we'll go over the methods used to achieve async CSV export with Ruby on Rails.
Problem: When a large amount of data is exported, the server may be unable to process it quickly enough after receiving the client request, and the client occasionally hits a timeout error during CSV export.
The solution is to move the CSV export to a worker thread, process it there so that it doesn't slow down the main thread, and notify the client when the desired CSV file is prepared.
We implemented a general solution using the Command design pattern, which enables us to run the same code for many types of CSV outputs.
Technology used
- Heroku
- Ruby On Rails
- Sidekiq as a background worker
- Redis for keeping the state of CSV export processing
- Filestack or any other cloud storage service
Here, two enums with the following structure are used.
```ruby
EXPORTABLE_REDIS_STATUSES = {
  processing: 'Processing',
  complete: 'Processed',
}.freeze

EXPORTABLE_REDIS_KEYS = {
  members_csv: 'MEMBERS_CSV_GENERATORS',
  all_tasks_csv: 'ALL_TASKS_CSV',
  my_tasks_csv: 'MY_TASKS_CSV',
}.freeze
```
Here, we have two states: `processing` and `complete` (displayed as 'Processing' and 'Processed'). The first marks an export that has been handed off to a worker; the second marks that the CSV file is ready.
The second enum is only used as a filter, to reject CSV export types that aren't registered with the application.
This is how the command invoker class, which acts as a Sidekiq worker, looks:
```ruby
class Exports::ExportableCommandJob < ApplicationJob
  after_enqueue do |job|
    uuid = job.arguments.first[:uuid]
    redis_key = redis_collection_key(job.arguments.first[:redis_key])
    REDIS.hset(
      redis_key,
      uuid,
      { status: Constants::EXPORTABLE_REDIS_STATUSES[:processing] }.to_json
    )
  end

  def perform(uuid:, redis_key:, command:, params: {}, cleanup_interval: nil)
    params = JSON.parse(params).symbolize_keys unless params.is_a?(Hash)
    command = command.constantize.new(params)
    redis_key = redis_collection_key(redis_key)

    file_data = command.call

    tmp_file = Tempfile.new('upload', encoding: 'ascii-8bit')
    tmp_file << file_data
    tmp_file.flush
    tmp_file.rewind

    file_name = command.file_name
    uploaded_file = UploadFileService::UploadableFile.new(file: tmp_file, filename: file_name)
    details = UploadFileService.upload_file(uploaded_file)
    tmp_file.unlink

    file_path = details.metadata[:fileurl]
    generator = JSON.parse(REDIS.hget(redis_key, uuid))
    generator['status'] = Constants::EXPORTABLE_REDIS_STATUSES[:complete]
    generator['exportable'] = file_path
    REDIS.hset(redis_key, uuid, generator.to_json)
  end

  after_perform do |job|
    uuid = job.arguments.first[:uuid]
    redis_key = redis_collection_key(job.arguments.first[:redis_key])
    ExportableCleanup.set(wait: job.arguments.first[:cleanup_interval] || 1.hour)
                     .perform_later(uuid: uuid, redis_key: redis_key)
  end

  private

  def redis_collection_key(key)
    redis_key = key.to_sym
    Constants::EXPORTABLE_REDIS_KEYS[redis_key] || key
  end
end
```
We use the uuid and redis_key to set the job's status to processing right after it is enqueued, which lets us monitor its progress at any time.
In the perform method, we accept a command class name and its arguments via params, allowing us to invoke the command and expect it to return CSV data. After that, we store the data in a temporary file and upload it using the Filestack service or any cloud storage service.
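The command classes themselves aren't shown in the article. Here is a rough sketch of what one could look like, using Ruby's standard `CSV` library. The class name matches the invoker example below, but the parameters and columns are made up for illustration:

```ruby
require 'csv'

# Illustrative sketch: a command only needs to expose #call (returning the
# raw CSV data) and #file_name, matching the interface the worker expects.
# The :headers and :rows parameters are hypothetical.
class CsvExportDataGenerator
  def initialize(params)
    @headers = params.fetch(:headers, [])
    @rows = params.fetch(:rows, [])
  end

  # Returns the CSV data the worker writes to a tempfile and uploads.
  def call
    CSV.generate do |csv|
      csv << @headers
      @rows.each { |row| csv << row }
    end
  end

  def file_name
    "export-#{Time.now.to_i}.csv"
  end
end
```

Because the worker `constantize`s the class name it receives, any class exposing this interface can be plugged in without changing the job.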
After uploading the file to the cloud, we obtain its URL and update Redis to change the task state from `processing` to `processed`. The client can now poll to learn when the CSV export is completed and receive the generated URL for downloading.
The private method `redis_collection_key` is used here for filtering purposes.
Finally, `after_perform` schedules a cleanup task, as shown in this example.
```ruby
class ExportableCleanup < ApplicationJob
  def perform(uuid:, redis_key:)
    exportable_json = REDIS.hget(redis_key, uuid)

    unless exportable_json.nil?
      generator = JSON.parse(exportable_json)
      file_url = generator['exportable']
      UploadFileService.remove_file(file_url) unless file_url.blank?
    end

    REDIS.hdel(redis_key, uuid)
  end
end
```
It simply removes the data from Redis and the file from cloud storage.
The invoker class call looks like this:
```ruby
def export_csv_async(args, redis_key)
  uuid = SecureRandom.uuid

  Exports::ExportableCommandJob.perform_later(
    uuid: uuid,
    redis_key: redis_key,
    command: 'CsvExportDataGenerator',
    params: args.to_h.to_json,
  )

  uuid
end
```
which returns the uuid to the client so it can make the state-checking calls described above.
Here is a simple controller action that checks the process state in Redis:
```ruby
class ExportableGeneratorsController < ActionController::API
  include HttpErrorHandling

  before_action :load_resource

  def show
    render json: { status: @generator['status'], fileUrl: @generator['exportable'] }
  end

  private

  def load_resource
    @exportable_key = Constants::EXPORTABLE_REDIS_KEYS[params[:key].to_sym]
    gen = REDIS.hget(@exportable_key, params[:uuid])
    return not_found('Process not found') if gen.nil?

    @generator = JSON.parse(gen)
  end
end
```
| mharut |
1,913,076 | Latest Gemini features support in LangChain4j 0.32.0 | LangChain4j 0.32.0 was released yesterday, including my pull request with the support for lots of new... | 0 | 2024-07-12T18:31:49 | https://glaforge.dev/posts/2024/07/05/latest-gemini-features-support-in-langchain4j/ | ---
title: Latest Gemini features support in LangChain4j 0.32.0
published: true
date: 2024-07-05 09:53:30 UTC
tags:
canonical_url: https://glaforge.dev/posts/2024/07/05/latest-gemini-features-support-in-langchain4j/
---
[LangChain4j](https://docs.langchain4j.dev/) 0.32.0 was released yesterday, including my [pull request](https://github.com/langchain4j/langchain4j/pull/1278) with the support for lots of new Gemini features:
- **JSON output mode**, to force Gemini to reply using JSON, without any markup,
- **JSON schema**, to control and constrain the JSON output to comply with a schema,
- **Response grounding** with Google Search web results and with private data in Vertex AI datastores,
- Easier debugging, thanks to new builder methods to **log requests and responses**,
- **Function calling mode** (none, automatic, or a subset of functions),
- **Safety settings** to catch harmful prompts and responses.
Let’s explore those new features together, thanks to some code examples! And at the end of the article, if you make it through, you’ll also discover **2 extra bonus points**.
## JSON output mode
Creating LLM-powered applications means working with text, as this is what LLMs return. But to facilitate this integration between LLM responses and your code, the text format of choice is usually JSON, as it’s human-readable, and easy to parse programmatically.
However, LLMs are a bit chatty: rather than sending you back a nice raw JSON document, they reply with an extra sentence or two, plus some markdown markup wrapping the piece of JSON.
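Before this feature, a common workaround was to strip that wrapping by hand. As an illustration (a rough, dependency-free sketch; this helper is not part of LangChain4j):

```java
public class Main {

    // Naive extraction of the first {...} span from a chatty LLM reply.
    // Deliberately brittle: this is exactly the glue code that the JSON
    // output mode makes unnecessary.
    static String extractJson(String reply) {
        int start = reply.indexOf('{');
        int end = reply.lastIndexOf('}');
        if (start < 0 || end < start) {
            throw new IllegalArgumentException("no JSON object found");
        }
        return reply.substring(start, end + 1);
    }

    public static void main(String[] args) {
        String reply = "Sure! Here is your JSON: {\"roll\": 3} Hope that helps!";
        System.out.println(extractJson(reply)); // {"roll": 3}
    }
}
```

This kind of brittle post-processing is what the response MIME type removes the need for.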
Fortunately, Gemini 1.5 (Flash and Pro) allows you to specify the response MIME type. Currently, only `application/json` is supported, but other formats may come later.
To do that, when instantiating the Gemini model, use the `responseMimeType()` builder method:
```java
var model = VertexAiGeminiChatModel.builder()
    .project(PROJECT_ID)
    .location(LOCATION)
    .modelName("gemini-1.5-flash")
    .responseMimeType("application/json")
    .build();

String response = model.generate("Roll a dice");
System.out.println(response);
```
No sentence, no markdown markup, nothing, just pure JSON:
```
{"roll": 3}
```
We didn’t even need to say in the prompt we wanted to get a JSON response!
However, the JSON key of that document may vary from time to time, so you may still wish to be a bit more prescriptive in your prompt, and ask the model to return JSON explicitly, give it an example of the JSON output you expect, etc. That’s the usual prompting approach…
But now there’s more!
## JSON Schema output
This is quite unique in the LLM ecosystem, as I believe it’s the only model out there that allows you to specify a JSON schema for constraining the JSON output. This works for Gemini 1.5 Pro only, not with Gemini 1.5 Flash.
Let’s have another look at our previous dice roll example, and let’s update it to specify a JSON schema for the output generation:
```java
import static dev.langchain4j.model.vertexai.SchemaHelper.fromClass;
//...

record DiceRoll(int roll) {}

var model = VertexAiGeminiChatModel.builder()
    .project("genai-java-demos")
    .location("us-central1")
    .modelName("gemini-1.5-pro")
    .responseSchema(fromClass(DiceRoll.class))
    .build();

String response = model.generate("Roll a dice");
System.out.println(response);
```
The generated JSON document will always contain the `roll` key:
```
{ "roll": 5 }
```
In this example, we used a convenience method called `fromClass()` that creates a JSON schema that corresponds to a Java type (here a Java record).
But there’s also another convenient method that lets us pass a JSON schema string, called `fromJsonSchema()`:
```java
var model = VertexAiGeminiChatModel.builder()
    .project("genai-java-demos")
    .location("us-central1")
    .modelName("gemini-1.5-pro")
    .responseSchema(fromJsonSchema("""
        {
          "type": "object",
          "properties": {
            "roll": {
              "type": "integer"
            }
          }
        }
        """))
    .build();
```
It’s also possible to construct a JSON schema programmatically:
```java
var model = VertexAiGeminiChatModel.builder()
    .project("genai-java-demos")
    .location("us-central1")
    .modelName("gemini-1.5-pro")
    .responseSchema(Schema.newBuilder()
        .setType(Type.OBJECT)
        .putProperties("roll",
            Schema.newBuilder()
                .setType(Type.INTEGER)
                .build())
        .build())
    .build();
```
Now you always get consistent JSON outputs!
## Response grounding with Google Search web results and Vertex AI datastores
Large Language Models are wonderful creative machines, but rather than benefiting from their high degree of creativity, we’d prefer having factual responses grounded on data and documents.
Gemini offers the ability to [ground responses](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/ground-gemini):
- against Google Search web results,
- against Vertex AI search datastores.
### Use Google Search to ground responses
The training of an LLM ended at a certain date: its _cut-off_ date. So it doesn’t know about news that happened after that date. But you can request Gemini to use Google Search to find more up-to-date information.
For example, if we ask Gemini about the current elections going on in France, it could reply with something like this:
```
There is no current national election happening in France right now.
The last major national election in France was the **Presidential
election in April and May 2022**, where Emmanuel Macron won a second
term.
There are, however, **local elections** happening regularly in
different regions of France.
To stay updated on French elections, you can check the website of
the **French Ministry of the Interior** or reputable news sources
like **The Guardian, BBC, CNN, or Le Monde**.
```
Now, let’s enable the use of Google Search web result with the `useGoogleSearch(true)` method:
```java
var model = VertexAiGeminiChatModel.builder()
    .project(PROJECT_ID)
    .location(LOCATION)
    .modelName("gemini-1.5-flash")
    .useGoogleSearch(true)
    .build();

String response = model.generate(
    "What is the current election going on in France?");
System.out.println(response);
```
The answer will be much different, and indeed factual and up-to-date:
```
France held the first round of a parliamentary election on July 4,
2024. The second round will be on July 7, 2024. The election is
significant because it could result in the first far-right government
in France since World War II. The National Rally, President Emmanuel
Macron’s centrist alliance, and the New Popular Front coalition are
the three major political blocs competing in the election. The
outcome of the election is highly uncertain, with the far-right
National Rally potentially gaining a parliamentary majority. If the
National Rally wins a majority, Macron would be expected to appoint
Jordan Bardella, the party's president, as prime minister.
```
There’s indeed a parliamentary election going on right now in France. Those elections were decided only a month ago, thus past the cut-of-date of the knowledge of the model.
> For my French audience, don’t forget to go voting next Sunday!
### Grounding with Vertex AI Search
The idea is that we want to ground responses on our own data. This is particularly important when the knowledge required is actually private information, like our internal docs, or our customers’ docs.
My colleague Mete wrote a great [article explaining how to set up grounding with private data](https://atamel.dev/posts/2024/07-01_grounding_with_own_data_vertexai_search/). Below, I'll assume that we created a Vertex AI search app with a datastore backed by a Google Cloud Storage bucket that contains a fictitious document: a car manual for the _Cymbal Starlight_ car model! I'm taking the same example as in Mete's article.
This time, we specify the search location to point at the Vertex AI search datastore with `vertexSearchDatastore()`:
```java
var model = VertexAiGeminiChatModel.builder()
    .project(PROJECT_ID)
    .location(LOCATION)
    .modelName("gemini-1.5-flash")
    .vertexSearchDatastore(String.format(
        "projects/%s/locations/%s/collections/%s/dataStores/%s",
        PROJECT_ID, "global", "default_collection",
        "cymbal-datastore_1720169982142"))
    .build();

String response = model.generate(
    "What is the cargo capacity of Cymbal Starlight?");
System.out.println(response);
```
It’s a fictious car that doesn’t exist, but it’s covered in that private document, and indeed, Gemini is now able to respond to that question:
```
The Cymbal Starlight 2024 has a cargo capacity of 13.5 cubic feet.
```
What’s interesting as well is that the response returned by Gemini provides some context about the source document that helped it answer the user query (we’ll see in the next section how to enable logging requests and responses):
```
grounding_metadata {
2: {
1: {
3: 66
}
2: 0x3f7deee0
}
5: {
2: {
1: "gs://genai-java-demos-documents/cymbal-starlight-2024.pdf"
2: "cymbal-starlight-2024"
}
}
6: {
1: {
3: 66
4: "The Cymbal Starlight 2024 has a cargo capacity of 13.5 cubic feet."
}
2: "\000"
3: {
257772: 63
}
}
```
However, to be honest, I’m not quite sure what the numbers exactly mean, but this metadata mentions that the PDF uploaded in cloud storage is the one that was used to shape the answer of the LLM, and gives an excerpt of the sentence that was found in the document.
## Request and response logging
To better understand what’s going on under the hood, you can enable request and response logging. That way, you’re able to see exactly what is sent to Gemini, and what Gemini replies.
To enable logging, there are two methods we can use:
- `logRequests(true)` to log the requests sent to Gemini,
- `logResponses(true)` to log the responses received from Gemini.
Let’s see that in action:
```java
var model = VertexAiGeminiChatModel.builder()
    .project(PROJECT_ID)
    .location(LOCATION)
    .modelName("gemini-1.5-flash")
    .logRequests(true)
    .logResponses(true)
    .build();

String response = model.generate("Why is the sky blue?");
System.out.println(response);
```
Here’s what’s logged:
```
[main] DEBUG dev.langchain4j.model.vertexai.VertexAiGeminiChatModel -
GEMINI (gemini-1.5-flash) request: InstructionAndContent {
systemInstruction = null,
contents = [role: "user"
parts {
text: "Why is the sky blue?"
}
]
} tools: []
[main] DEBUG dev.langchain4j.model.vertexai.VertexAiGeminiChatModel -
GEMINI (gemini-1.5-flash) response: candidates {
content {
role: "model"
parts {
text: "The sky appears blue due to a phenomenon called
**Rayleigh scattering**. Here\'s a breakdown:\n\n* **Sunlight
is made up of all colors of the rainbow.** When sunlight enters
the Earth\'s atmosphere, it encounters tiny particles like
nitrogen and oxygen molecules.\n* **These particles scatter the
sunlight in all directions.** However, shorter wavelengths of
light, like blue and violet, scatter more strongly than longer
wavelengths, like red and orange.\n* **This preferential
scattering of shorter wavelengths is called Rayleigh
scattering.**
As a result, we see more blue light scattered throughout the sky,
making it appear blue.\n\n **Why is the sky not violet?** \n\nEven
though violet light scatters even more strongly than blue, our
eyes are more sensitive to blue light. This is why we perceive
the sky as blue rather than violet.\n\n**Other factors that
affect sky color: **\n\n*** Time of day:** The sky appears more
red or orange at sunrise and sunset because the sunlight has to
travel through more of the atmosphere, scattering away most of
the blue light.\n* **Clouds:** Clouds are made up of larger water
droplets or ice crystals, which scatter all wavelengths of light
equally. This is why clouds appear white.\n* **Pollution:**
Pollution particles can scatter light differently, sometimes
making the sky appear hazy or even reddish.\n\nLet me know if
you have any other questions about the sky! \n"
}
}
finish_reason: STOP
safety_ratings {
category: HARM_CATEGORY_HATE_SPEECH
probability: NEGLIGIBLE
probability_score: 0.054802597
severity: HARM_SEVERITY_NEGLIGIBLE
severity_score: 0.03314852
}
safety_ratings {
category: HARM_CATEGORY_DANGEROUS_CONTENT
probability: NEGLIGIBLE
probability_score: 0.100348406
severity: HARM_SEVERITY_NEGLIGIBLE
severity_score: 0.06359858
}
safety_ratings {
category: HARM_CATEGORY_HARASSMENT
probability: NEGLIGIBLE
probability_score: 0.10837755
severity: HARM_SEVERITY_NEGLIGIBLE
severity_score: 0.021491764
}
safety_ratings {
category: HARM_CATEGORY_SEXUALLY_EXPLICIT
probability: NEGLIGIBLE
probability_score: 0.10338596
severity: HARM_SEVERITY_NEGLIGIBLE
severity_score: 0.020410307
}
}
usage_metadata {
prompt_token_count: 6
candidates_token_count: 288
total_token_count: 294
}
```
Let me give you a bit more detail about the logging. LangChain4j uses Slf4j by default. Requests and responses are logged at the `DEBUG` level, so we have to configure our logger and/or logging façade accordingly.
In my test project for this article, I configured the following `Maven` dependencies for `Slf4j` and the `Simple` logger:
```xml
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-api</artifactId>
  <version>2.0.13</version>
</dependency>
<dependency>
  <groupId>org.slf4j</groupId>
  <artifactId>slf4j-simple</artifactId>
  <version>2.0.13</version>
</dependency>
```
I created a properties file to configure the loggers: `src/main/resources/simplelogger.properties`, which contains the following configuration:
```
org.slf4j.simpleLogger.defaultLogLevel=debug
org.slf4j.simpleLogger.log.io.grpc.netty.shaded=info
```
I set the default logging level to `debug`. But there's also Netty, the networking library used under the hood by the Gemini Java SDK, which logs at the debug level. So I specified that logging for this library should only be at `info` and above, otherwise the output is super chatty.
## Function calling mode
So far, when using Gemini for [function calling](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling), the model would decide on its own whether a function call would be useful, and which function to call.
But Gemini introduces the ability to [control the function or tool choice](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/function-calling#tool-config).
There are 3 options:
- `AUTO` — The familiar and default mode, where Gemini decides on its own if a function call is necessary and which one should be made,
- `ANY` — Lets you specify a subset of all the available functions, and forces the model to pick one of them (only supported by Gemini 1.5 Pro),
- `NONE` — Even if tools are defined and available, prevents Gemini from using any of them.
Let’s have a look at this example:
```java
var model = VertexAiGeminiChatModel.builder()
    .project(PROJECT_ID)
    .location(LOCATION)
    .modelName("gemini-1.5-pro")
    .logRequests(true)
    .logResponses(true)
    .toolCallingMode(ToolCallingMode.ANY)
    .allowedFunctionNames(Arrays.asList("add"))
    .build();

ToolSpecification adder = ToolSpecification.builder()
    .description("adds two numbers")
    .name("add")
    .addParameter("a", JsonSchemaProperty.INTEGER)
    .addParameter("b", JsonSchemaProperty.INTEGER)
    .build();

UserMessage message = UserMessage.from("How much is 3 + 4?");
Response<AiMessage> answer = model.generate(asList(message), adder);

System.out.println(
    answer.content().toolExecutionRequests().getFirst());
```
We specify the `ToolCallingMode.ANY` mode, and we list the allowed function names of the functions that the model must pick in order to reply to the request (with the `allowedFunctionNames()` builder method).
We describe the tool that can be called. We create a message. And when calling `generate()`, we pass the tool specification corresponding to the function we want to be called.
The output will show that the model replied with the mandatory tool execution request:
```
ToolExecutionRequest { id = null, name = "add",
arguments = "{"a":3.0,"b":4.0}" }
```
Now it’s our turn to call the `add` function with the arguments. And then send back the function execution result back to Gemini.
> **Warning** : Currently, it is not possible to use the `ANY` forced function calling mode when using LangChain4j’s `AiServices` class.
>
> `AiServices` takes care of automatic function calling. But the process is a two-step request / response mechanism:
>
> - First, we ask the model the math question and pass the tool specification along.
> - The model replies with a `ToolExecutionRequest`.
> - Then `AiServices` makes the function call locally, and replies to the model with the function execution result. However, since the `ANY` calling mode is specified at the model level, the model still wants to reply with yet another tool execution request. Although at this point, the second call made to the model was _just_ to pass the function execution result, not to request another tool execution.
> - So `AiServices` enters an infinite loop as the model requests a function execution again and again, not taking into account the execution result it received.
>
> When using `AiServices`, it's better to let Gemini operate under the default `AUTO` tool mode, so it knows when it needs to request a tool execution, or when it just needs to handle the tool execution response.
>
> If you want to use the `ANY` mode with `allowedFunctionNames()`, don't use `AiServices`; handle the function calls on your own in your code, to avoid such infinite loop situations.
## Specify safety settings
In LLM-powered applications, where users can enter any kind of weird textual inputs, you may want to limit harmful content that may be ingested. To do so, you can specify some safety settings, for different categories of content, with different thresholds of acceptance:
```java
import static dev.langchain4j.model.vertexai.HarmCategory.*;
import static dev.langchain4j.model.vertexai.SafetyThreshold.*;
//...

var model = VertexAiGeminiChatModel.builder()
    .project(PROJECT_ID)
    .location(LOCATION)
    .modelName("gemini-1.5-flash")
    .safetySettings(Map.of(
        HARM_CATEGORY_DANGEROUS_CONTENT, BLOCK_LOW_AND_ABOVE,
        HARM_CATEGORY_SEXUALLY_EXPLICIT, BLOCK_MEDIUM_AND_ABOVE,
        HARM_CATEGORY_HARASSMENT, BLOCK_ONLY_HIGH,
        HARM_CATEGORY_HATE_SPEECH, BLOCK_MEDIUM_AND_ABOVE
    ))
    .build();
```
If you want to make your app safer for your end-users, and to avoid malicious or ill-disposed users, that’s the way to go!
## Bonus point #1: Streaming responses with lambda functions
I’ll round up the review of Gemini-focused features with one little addition I contributed to the project: the ability to pass a lambda instead of a streaming content handler, when using a streaming model.
This is not Gemini-related, you can use it with any model!
More concretely, if you want to use Gemini or another model in streaming mode, to see the response being printed as it’s generated by the model, you would usually write the following code:
```java
var model = VertexAiGeminiStreamingChatModel.builder()
    .project(PROJECT_ID)
    .location(LOCATION)
    .modelName("gemini-1.5-flash")
    .build();

model.generate("Why is the sky blue?", new StreamingResponseHandler<>() {
    @Override
    public void onNext(String aFewTokens) {
        System.out.print(aFewTokens);
    }

    @Override
    public void onError(Throwable throwable) {
        throw new RuntimeException(throwable);
    }
});
```
Using an anonymous inner class implementing the `StreamingResponseHandler` interface is quite verbose. Fortunately, I contributed a couple of static methods you can import to make the code a little more concise:
```java
import static dev.langchain4j.model.LambdaStreamingResponseHandler.onNext;
import static dev.langchain4j.model.LambdaStreamingResponseHandler.onNextAndError;
//...

// onNext
model.generate("Why is the sky blue?",
    onNext(System.out::println));

// onNextAndError
model.generate("Why is the sky blue?",
    onNextAndError(
        System.out::println,
        ex -> { throw new RuntimeException(ex); }
    ));
```
Now you can stream your LLM output in a single instruction!
## Bonus point #2: Generating stunning images with Imagen v3
A second bonus point in this new LangChain4j release is that the Vertex AI image model now supports [Imagen v3](https://deepmind.google/technologies/imagen-3/) (Google DeepMind's latest high-quality image generation model).
> **Warning:** To use the Imagen model, you'll still have to be allow-listed for now. You'll need to [fill this form](https://docs.google.com/forms/d/1cqt9padvfMgqn23W5FMPTqh7bW1KLkEOsC5G6uC-uuM/viewform) to request access to the model.
There are a few new parameters that are available that you can take advantage of when generating pictures. Let’s have a look at the following image generation code:
```java
var imagenModel = VertexAiImageModel.builder()
    .project(PROJECT)
    .location(LOCATION)
    .endpoint(ENDPOINT)
    .publisher("google")
    .modelName("imagen-3.0-generate-preview-0611")
    .aspectRatio(VertexAiImageModel.AspectRatio.LANDSCAPE)
    .mimeType(VertexAiImageModel.MimeType.JPEG)
    .compressionQuality(80)
    .watermark(true) // true by default with Imagen v3
    .withPersisting()
    .logRequests(true)
    .logResponses(true)
    .build();

String prompt = """
    An oil painting close-up, with heavy brush strokes full of
    paint, of two hands shaking together, a young one, and an
    old one conveying a sense of heartfelt thanks and connection
    between generations
    """;

Response<Image> imageResponse = imagenModel.generate(prompt);
System.out.println(imageResponse.content().url());
```
Let’s see the resulting picture?

In the code above, you certainly noticed the new builder methods:
- `aspectRatio()` — not only square, but wide and narrow landscape and portrait modes are available,
- `mimeType()` — in addition to PNG, you can request JPEG image generation,
- `compressionQuality()` — when requesting JPEG, you can choose the level of compression for encoding the image,
- `watermark()` — to have all your generated images watermarked with [SynthId](https://deepmind.google/technologies/synthid/),
- `logRequests()` / `logResponses()` — to see what is exchanged with the model, in and out,
- `persistToCloudStorage()` — to specify that you want the image saved in a cloud storage bucket (not used in this example).
If you get a chance, and request access to Imagen v3, you’ll notice really great quality improvements compared to v2!
## Conclusion
Lots of new Gemini-related features in this [release of LangChain4j](https://github.com/langchain4j/langchain4j/releases/tag/0.32.0)! I hope this article helped you learn about them, and makes you want to use them in your projects.
If you want to go hands-on with Gemini and LangChain4j, don't forget to check out my self-paced codelab: [Gemini codelab for Java developers, using LangChain4j](https://dev.to/glaforge/gemini-codelab-for-java-developers-using-langchain4j-g5n-temp-slug-1278985). | glaforge | |
1,913,185 | Which Map Transformation Should I Use? | Map transformation functions find common usage in Android development. They are part of the Kotlin... | 0 | 2024-07-10T10:08:43 | https://darrylbayliss.net/which-map-transformation-should-i-use/ | kotlin, android, androiddev, functional | ---
title: Which Map Transformation Should I Use?
published: true
date: 2024-07-04 05:00:00 UTC
tags: kotlin,android, androiddev, functional
canonical_url: https://darrylbayliss.net/which-map-transformation-should-i-use/
---
Map transformation functions find common usage in Android development. They are part of the [Kotlin Standard Library](https://kotlinlang.org/api/latest/jvm/stdlib/), a library built by JetBrains to provide standard functionality across Kotlin codebases.
Inside the Standard Library is a package called [kotlin.collections](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/), containing the building blocks for different collections. These include [Lists](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/-list/), [Maps](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/-map/#kotlin.collections.Map), and [Sets](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/-set/).
The collections package also contains the map transformation functions. These functions take the contents of a collection and transform them into another collection containing the transformed state.

Let’s take a look at some examples.
## The Map Transformation
The first transformation is the `map()` function.
```kotlin
val numbersList = listOf(1, 2, 3)
numbersList.map { it + 1 }.also(::println) // listOf(2, 3, 4)
val numbersMap = mapOf("one" to 1,
                       "two" to 2,
                       "three" to 3)
numbersMap.map { it.value + 1 }.also(::println) // listOf(2, 3, 4)
val numbersSet = setOf(1, 2, 3)
numbersSet.map { it + 1 }.also(::println) // listOf(2, 3, 4)
```
This function iterates over a collection and applies the transformation to each value within a lambda, before returning a new collection containing the transformed values.
`map()` is available across different types of collections. This is because `map()` is an extension function on the [Iterable](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/-iterable/) interface, and most collections implement `Iterable`, meaning they can make use of the map function.
Map collection types are different. They don't implement `Iterable`; instead, they have a separate [extension method](https://github.com/JetBrains/kotlin/blob/037b3697ed635a52c283da7b2bf6ecd0961ce8f4/libraries/stdlib/common/src/generated/_Maps.kt#L125) providing a map function that iterates over each entry and returns a List of results.
## Map Transformations and Null Values
Map transformations come in different forms. Another useful transformation is the `mapNotNull()` function.
```kotlin
val numbersList = listOf(1, 2, 3)
numbersList.mapNotNull { if (it + 1 == 3) null else it + 1 }.also(::println) // listOf(2, 4)
val numbersMap = mapOf("one" to 1,
                       "two" to 2,
                       "three" to 3)
numbersMap.mapNotNull { if (it.value + 1 == 3) null else it.value + 1 }.also(::println) // listOf(2, 4)
val numbersSet = setOf(1, 2, 3)
numbersSet.mapNotNull { if (it + 1 == 3) null else it + 1 }.also(::println) // listOf(2, 4)
```
This function acts both as a transformation function and a filter for null values. If the transformation inside the lambda results in `null`, the value is not added to the new list.
`mapNotNull()` is available to collections implementing the `Iterable` interface.
Map types have their own [extension function](https://github.com/JetBrains/kotlin/blob/037b3697ed635a52c283da7b2bf6ecd0961ce8f4/libraries/stdlib/common/src/generated/_Maps.kt#L135) that provides similar functionality, returning a list of results.
## Acquiring an Index with Map Transformations
If you need to know the location of the value within the collection being transformed you can use `mapIndexed()`.
```kotlin
val numbersList = listOf(1, 2, 3)
numbersList.mapIndexed { index, number -> number + index + 1 }.also(::println) // listOf(2, 4, 6)
val numbersMap = mapOf("one" to 1,
"two" to 2,
"three" to 3)
numbersMap.asIterable().mapIndexed { index, entry -> entry.value + index + 1 }.also(::println) // listOf(2, 4, 6)
val numbersSet = setOf(1, 2, 3)
numbersSet.mapIndexed { index, number -> number + index + 1 }.also(::println) // listOf(2, 4, 6)
```
Here the location of the value (the index) within the collection is passed alongside the value being transformed. `mapIndexed()` is available to collections implementing the `Iterable` interface.
Map types don’t have a `mapIndexed()` extension function. What you can do though is use the [asIterable()](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/as-iterable.html) extension to wrap the Map inside an Iterable instance. Then you can use `mapIndexed()` without issue.
If you need to check for null values and also require an index you can also use [mapIndexedNotNull()](https://github.com/JetBrains/kotlin/blob/037b3697ed635a52c283da7b2bf6ecd0961ce8f4/libraries/stdlib/common/src/generated/_Collections.kt#L1576).
```kotlin
val numbersList = listOf(1, 2, 3)
numbersList.mapIndexedNotNull { index, number -> if (number + 1 == 3) null else number + index + 1 }.also(::println) // listOf(2, 6)
val numbersMap = mapOf("one" to 1,
"two" to 2,
"three" to 3)
numbersMap.asIterable().mapIndexedNotNull { index, entry -> if (entry.value + 1 == 3) null else entry.value + index + 1 }.also(::println) // listOf(2, 6)
val numbersSet = setOf(1, 2, 3)
numbersSet.mapIndexedNotNull { index, number -> if (number + 1 == 3) null else number + index + 1 }.also(::println) //listOf(2, 6)
```
`mapIndexedNotNull()` works similarly to `mapNotNull()`. It filters away null values within the transformation lambda whilst also passing in the index for the value from the collection. Like other map transformations it exists on all types implementing Iterable.
Map types can use the `asIterable()` function to gain access to `mapIndexedNotNull()`.
## Other Transformations for Map Types
Map types work differently than other collections due to not implementing the [Collection](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/-collection/) or Iterable interfaces. Because of this they have a few of their own transformation functions not available to other types. The first is called [mapKeys()](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/map-keys.html).
```kotlin
val numbersMap = mapOf("one" to 1,
"two" to 2,
"three" to 3)
numbersMap.mapKeys { entry -> entry.key.capitalize() }.also(::println) // mapOf("One" to 1, "Two" to 2, "Three" to 3)
```
`mapKeys()` transforms each key within the map by passing each Entry of the map to the lambda. Once all the transformations are complete, a new Map containing the transformed keys and the original values is returned.
The second function is called [mapValues()](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/map-values.html).
```kotlin
val numbersMap = mapOf("one" to 1,
"two" to 2,
"three" to 3)
numbersMap.mapValues { entry -> entry.value + 1 }.also(::println) // mapOf("one" to 2, "two" to 3, "three" to 4)
```
`mapValues()` works in a similar way. It passes each `Entry` of the map to the lambda and transforms each value. Once all the transformations are complete, a new map containing the original keys and the transformed values is returned.
## Passing Map Transformations to a Destination
If you want to write applied transformations to a collection other than the source, there are a few functions to help.
```kotlin
val numbersList = listOf(1, 2, 3)
val numbersDestinationSet = mutableSetOf<Int>()
numbersList.mapTo(numbersDestinationSet) { it + 1 }
println(numbersDestinationSet) // setOf(2, 3, 4)
val numbersMap = mapOf("one" to 1,
"two" to 2,
"three" to 3)
val numbersDestinationList2 = mutableListOf<Int>()
numbersMap.asIterable().mapTo(numbersDestinationList2) { it.value + 1 }
println(numbersDestinationList2) // listOf(2, 3, 4)
val numbersSet = setOf(1, 2, 3)
val numbersDestinationList = mutableListOf<Int>()
numbersSet.mapTo(numbersDestinationList) { it + 1 }
println(numbersDestinationList) // listOf(2, 3, 4)
```
The `mapTo()` functions work similarly to `map()`. The difference is that they write the transformed values into the collection passed in. The destination doesn’t have to be the same type as the source collection, which is useful when a different collection type is more suitable for your use case.
Map types can’t be used as a `mapTo()` destination. This is because `mapTo()` expects to write to a [MutableCollection](https://kotlinlang.org/api/latest/jvm/stdlib/kotlin.collections/-mutable-collection/#kotlin.collections.MutableCollection), which Map types don’t inherit from.
There is a `MutableMap` type, however because it doesn’t inherit from `MutableCollection` there is no `mapTo()` extension available.
## More Resources
If you’d like to learn more about Kotlin’s transformation methods. I highly recommend [this page](https://kotlinlang.org/docs/collection-transformations.html) on collection transformations from the Kotlin language documentation.
As well as covering map transformations it also covers other methods like zipping, association and flattening. These topics are beyond the scope of this post however are nevertheless useful to understand. | darrylbayliss |
1,913,188 | Getting started with Amazon SageMaker: Building your first machine learning model | This article provides a guide on using Amazon SageMaker to build, train, and deploy a machine... | 0 | 2024-07-08T05:00:00 | https://dev.to/dhoang1905/getting-started-with-amazon-sagemaker-building-your-first-machine-learning-model-2m7n | aws, ai, machinelearning, sagemaker | This article provides a guide on using Amazon SageMaker to build, train, and deploy a machine learning model for predicting house prices using the Ames Housing dataset. It covers the key features of SageMaker, data preprocessing steps, model training, and deployment, and demonstrates how to test the deployed model. The guide also includes important steps to clean up resources to avoid unnecessary costs.
---
# Overview of Amazon SageMaker
<p> </p>
Amazon SageMaker is a fully managed service provided by AWS (Amazon Web Services) that enables developers and data scientists to build, train, and deploy machine learning models at scale. It simplifies the machine learning workflow by offering a suite of tools and services designed to handle various stages of the machine learning lifecycle, from data preparation to model deployment and monitoring.
<p> </p>
## Key Features
<p> </p>
### Integrated Development Environment:
<p> </p>
- **SageMaker Studio:** An integrated development environment (IDE) for machine learning that provides a web-based interface to build, train, and deploy models. It offers a collaborative environment with support for notebooks, debugging, and monitoring.
<p> </p>
### Data Preparation:
<p> </p>
- **Data Wrangler:** Simplifies data preparation and feature engineering with a visual interface that integrates with various data sources.
- **Feature Store:** A repository to store, share, and manage features for machine learning models, ensuring consistency and reusability across projects.
<p> </p>
### Model Building:
<p> </p>
- **Built-in Algorithms:** Provides a collection of pre-built machine learning algorithms optimized for performance and scalability.
- **Custom Algorithms:** Supports bringing your own algorithms and frameworks, including TensorFlow, PyTorch, and Scikit-learn.
<p> </p>
### Model Training:
<p> </p>
- **Managed Training:** Automatically provisions and manages the underlying infrastructure for training machine learning models.
- **Distributed Training:** Supports distributed training for large datasets and complex models, reducing training time.
- **Automatic Model Tuning:** Also known as hyperparameter optimization, it helps find the best version of a model by automatically adjusting hyperparameters.
<p> </p>
### Model Deployment:
<p> </p>
- **Real-time Inference:** Deploy models as scalable, secure, and high-performance endpoints for real-time predictions.
- **Batch Transform:** Allows for batch processing of large datasets for inference.
- **Multi-Model Endpoints:** Supports deploying multiple models on a single endpoint, optimizing resource utilization.
<p> </p>
### Model Monitoring and Management:
<p> </p>
- **Model Monitor:** Automatically monitors deployed models for data drift and performance degradation, triggering alerts and actions when necessary.
- **Pipelines:** Enables the creation and management of end-to-end machine learning workflows, from data preparation to deployment and monitoring.
<p> </p>
# Step-by-Step guide
<p> </p>
**AWS Free Tier:**
First, let's talk about the AWS Free Tier for SageMaker. What is interesting in our case is the "Studio notebooks, and notebook instances" and "Training" sections. Based on what it offers, we are going to use `ml.t2.medium` as the Notebook instance type and `ml.m5.large` as the Training instance (check the availability of these types in the region where you will provision the resources; I am currently using the eu-west-3 region).

<p> </p>
## Storing datasets
<p> </p>
First, create a basic S3 bucket that will be used to store our raw dataset and, later, the formatted training and test datasets.
For this example, I will use the "Ames Housing" dataset, which is well known in machine learning for predictive modeling. It contains information about various houses in Ames, Iowa, including features such as the size of the house, the year it was built, the type of roof, and the sale price. The goal is to predict the sale price of houses based on these features.
<p> </p>
## IAM Role
<p> </p>
Next, create an IAM role with the `AmazonSageMakerFullAccess` policy, as well as permissions to get and put objects in the dataset S3 bucket.

<p> </p>
## Notebook instance
<p> </p>
Create a SageMaker Notebook instance with the following parameters:
- type: `ml.t2.medium`
- attach the previously created IAM role

<p> </p>
## Load and explore dataset
<p> </p>
Once your instance appears as "InService", you can click on "Open Jupyter". (It can take a few minutes for your instance to become ready to use.)

This will open a new page with the Jupyter Notebook interface. Now create a new notebook of type `conda_python3`.
**Dependencies and dataset loading**
Add this code in the first block and run it.
```python
import boto3
import pandas as pd
import numpy as np
import sagemaker
from sagemaker import get_execution_role
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer
from sklearn.model_selection import train_test_split
# Load Data from S3
s3 = boto3.client('s3')
bucket_name = 'dhoang-sagemaker-datasets'
file_key = 'AmesHousing.csv'
obj = s3.get_object(Bucket=bucket_name, Key=file_key)
df = pd.read_csv(obj['Body'])
df
```
It will import all the libraries that we need to access the S3 bucket, pre-process the dataset, train our model and deploy it. You should have a result like this:

**Pre-processing the dataset**
Because we want to avoid empty or non-numerical values, we need to pre-process the dataset. In addition, we need to split the formatted dataset into a train and a test part to validate the model.
Add this code and run it:
```python
# Preprocess Data
print(df.isnull().sum())
numeric_cols = df.select_dtypes(include=['int64', 'float64']).columns
df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())
categorical_cols = df.select_dtypes(include=['object']).columns
df[categorical_cols] = df[categorical_cols].fillna(df[categorical_cols].mode().iloc[0])
# Encode Categorical Features
df = pd.get_dummies(df)
# Ensure all boolean columns are converted to numeric
df = df.applymap(lambda x: 1 if x is True else 0 if x is False else x)
# Split the Data
train, test = train_test_split(df, test_size=0.2, random_state=42)
train.to_csv('train.csv', index=False, header=False)
test.to_csv('test.csv', index=False, header=False)
# Upload the Processed Data to S3
s3_resource = boto3.resource('s3')
s3_resource.Bucket(bucket_name).upload_file('train.csv', 'train.csv')
s3_resource.Bucket(bucket_name).upload_file('test.csv', 'test.csv')
```
**Train and deploy the model**
Now this is where we get to explore SageMaker's features. In our case, we use SageMaker's `linear-learner` algorithm, which allows us to run a linear regression on our dataset.
Next, we define the instance type that will be used for our SageMaker Training Job, here `ml.m5.large` to benefit from the Free Tier.
Then, after being trained, the model is deployed as a SageMaker Endpoint so we can use it afterward.
Add this code and run it:
```python
# Train Model
role = get_execution_role()
sess = sagemaker.Session()
output_location = 's3://{}/output'.format(bucket_name)
container = sagemaker.image_uris.retrieve("linear-learner", sess.boto_region_name, "1.0-1")
linear = sagemaker.estimator.Estimator(
container,
role,
instance_count=1,
instance_type='ml.m5.large',
output_path=output_location,
sagemaker_session=sess
)
linear.set_hyperparameters(
predictor_type='regressor',
mini_batch_size=100
)
train_data = 's3://{}/train.csv'.format(bucket_name)
test_data = 's3://{}/test.csv'.format(bucket_name)
data_channels = {
'train': sagemaker.inputs.TrainingInput(train_data, content_type='text/csv'),
'validation': sagemaker.inputs.TrainingInput(test_data, content_type='text/csv')
}
linear.fit(inputs=data_channels)
# Deploy Model
predictor = linear.deploy(initial_instance_count=1, instance_type='ml.t2.medium')
```
It should take a few minutes to execute. During this time, you can go to the console to see the Training Job status:

with the logs in your notebook:

and, after the job finishes, the Model Endpoint deployment:

Wait until the Endpoint status appears as `InService`.
**Test the model**
The following code uses the freshly deployed Endpoint to predict SalePrice on the test dataset. I will run it in the notebook, but you can actually run it from anywhere (as long as you have access to your endpoint):
```python
# Test Model
test_data_no_target = test.drop(columns=['SalePrice'])
# Ensure all data is numeric
assert test_data_no_target.applymap(np.isreal).all().all(), "Test data contains non-numeric values"
# Convert test data to CSV string
csv_input = test_data_no_target.to_csv(header=False, index=False).strip()
# Initialize the predictor with correct serializers
predictor = sagemaker.predictor.Predictor(
endpoint_name=predictor.endpoint_name,
sagemaker_session=sagemaker.Session(),
serializer=CSVSerializer(),
deserializer=JSONDeserializer()
)
# Make predictions
predictions = predictor.predict(csv_input)
print(predictions)
```
You should get a set of prediction results. Precision is not the point here, as I just want to provide you with a guide to using SageMaker.

<p> </p>
## Cleaning
<p> </p>
To avoid unwanted costs and keep your account clean, do not forget to delete:
- SageMaker Endpoint
- SageMaker Notebook instance
- S3 bucket
- IAM role
<p> </p>
---
<p> </p>
Thanks for reading! Hope this helped you understand how to train a model with Amazon SageMaker, from a raw dataset to a ready-to-use endpoint. Don’t hesitate to give me your feedback or suggestions.
| dhoang1905 |
1,913,189 | Going Further with Styling | Hey there, welcome back to Learn As You Code: HTML & CSS! Today, we’re diving deeper into the... | 27,613 | 2024-07-08T14:00:00 | https://dev.to/nmiller15/going-further-with-styling-1dnp | html, css, beginners, webdev | Hey there, welcome back to **Learn As You Code: HTML & CSS**! Today, we’re diving deeper into the world of styling. Up until now, we’ve been styling elements directly. But what if you have two `<h2>` elements and want each to look different? Enter CSS selectors!
## Element Selectors
You’re already familiar with these, but let’s recap:
```css
h1 {
font-size: 32px;
font-family: Arial;
font-weight: 500;
}
```
This ruleset targets all `<h1>` elements, setting their font size, family, and weight. Element selectors are great for broad strokes, like setting a style guide for your whole page. But let’s face it, not all `<p>` tags should look the same. For more specific styling, we need to up our game!
## Class Selectors
Classes to the rescue! Want two `<p>` tags to look different? Add classes:
```html
<p class="big red">This text is BIG and red.</p>
<p class="small blue">This text is small and blue.</p>
```
Each `<p>` tag has two classes. In your CSS, target these classes with a `.`:
```css
.big {
font-size: 100px;
}
.small {
font-size: 9px;
}
.red {
color: red;
}
.blue {
color: blue;
}
```
Boom! Styles applied. You might ask, “Why not combine styles into fewer classes?” Good question! I like to keep classes flexible. You never know when you might want to reuse `small` without `blue`.
## Id Selectors
For unique elements, use IDs. Check this out:
```html
<p id="name">My Name is Nolan!</p>
```
Use IDs sparingly, only once per page. Target them in CSS with `#`:
```css
#name {
text-decoration: underline;
}
```
Simple, right?
## Conflicting Styles
Now, what if an element has both a class and an ID? Like this:
```html
<p id="red" class="blue">Will I be red or blue?</p>
```
It’ll be red! Why? Because IDs are more specific than classes. Here’s a quick example:
```html
<p id="red" class="underline">I’m styled by three rulesets!</p>
```
```css
p {
font-size: 12px;
color: black;
text-decoration: none;
}
.underline {
text-decoration: underline;
}
#red {
color: red;
}
```
The text turns red and gets underlined, with a font size of 12px. IDs trump classes, which in turn override element selectors. This cascade of styles makes your page look polished without repeated code.
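You can also make a selector more specific by combining an element with a class. In this sketch (the class name is just an example), `p.highlight` counts one element plus one class, so it beats the lone `.highlight` class no matter which rule comes first:

```css
p.highlight {
  color: green; /* wins: one element + one class is more specific */
}

.highlight {
  color: orange; /* loses: one class only */
}
```

A `<p class="highlight">` element would render green here, even though the orange rule appears later in the stylesheet.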
## Challenge
Time to level up your About Me page! Here’s your mission:
- Set default styles for `<h1>`, `<h2>`, and `<p>` using element selectors.
- Add a tagline under your name with a `<p>` tag and style it using an ID.
- Jazz up other text using class selectors.
Play around with conflicting styles and see which rules win. Can you figure out why?
Thanks for reading! Let me know if there are any other topics you’d like me to explore in this series in the comments, or just let me know how you’re enjoying the series! | nmiller15
1,913,274 | Angular Tutorial: Creating a Custom Loading Screen | If you’ve built apps in angular in the past, I’m sure you’ve experienced the blank screen while you... | 0 | 2024-07-12T18:25:31 | https://briantree.se/angular-tutorial-creating-a-custom-loading-screen/angular-tutorial-creating-a-custom-loading-screen/ | angular, angulardevelopers, tutorial, webdev | If you’ve built apps in angular in the past, I’m sure you’ve experienced the blank screen while you wait for the app to be bootstrapped. In smaller applications, it’s not as noticeable but in larger, more complex applications, we may need to wait for a little bit before we see the actual content loaded. And staring at a blank screen while we wait is not ideal. Well, we can upgrade this experience by adding our own custom loading screen and it’s pretty easy to do too. In this example that’s exactly what we’re going to do.
{% embed https://www.youtube.com/embed/C6XGJlusNqY %}
## How to Keep the Loading Screen Visible During Development
So, in order to work on our loading screen, we’re going to need to be able to see it, right? We’ll need to do something to make it visible and keep it that way.
Well, I’ve found that the easiest way to do this is to simply comment out the `bootstrapApplication()` function.
#### main.ts
```typescript
...
// bootstrapApplication(App,{
// providers: [
// provideAnimations()
// ]
// });
```
This is the function that basically creates the Angular application, so by removing it, the app component and everything within it shouldn’t load.
If we save, we'll see that the loading screen will remain visible now.
So now we’re ready to work on it.
## Adding Markup and Styles for a More Captivating Loading Screen
The concept for this loading screen is pretty basic. If we look at the markup in the index document, we can see that we have the `app-root` element here, which is the root component for our app, and within it there’s the word “loading…”.
#### index.html
```html
<app-root>Loading...</app-root>
```
So, what happens here is that whatever content we place within the opening and closing tags of the `app-root` element will be visible while the application is bootstrapped.
Then, once it has been bootstrapped, that content will be replaced with the content from the app root component itself.
So, all we need to do is add some styles and mark-up here to make this look more in line with our branding and application overall. We will even include the styles for this loading page right here in an embedded stylesheet too.
```html
<app-root>
<style>
html,
body {
height: 100%;
}
body {
color: #6244b0;
display: grid;
place-items: center;
text-align: center;
}
</style>
<section>
<img src="assets/loader.png" />
<h1>PETPIX</h1>
</section>
</app-root>
```
Ok, I think that should be everything that we need so let’s save and see how it looks.
<div style="text-align: center">
<img src="https://briantree.se/assets/img/content/uploads/2024/07-12/demo-1.gif" alt="Example of a custom loading screen in Angular" width="592" height="980" style="width: 100%; height: auto; max-width: 592px;">
</div>
And there it is, pretty cool right? Much better than the old blank loading screen.
Now we can go and add back our bootstrap function, but I’m also going to wrap it in a `setTimeout()` to delay it a little bit.
#### main.ts
```typescript
...
setTimeout(() => {
bootstrapApplication(App,{
providers: [
provideAnimations()
]
});
}, 3000);
```
Now I wouldn’t normally want to do this, but this demo app is really small and loads super fast so I just need to slow it down a little so that we can actually see the loading screen before the app loads.
Ok, now when we save, we'll see the new loading screen for three seconds and then the app loads.
<div style="text-align: center">
<img src="https://briantree.se/assets/img/content/uploads/2024/07-12/demo-2.gif" alt="Example of a custom loading screen in Angular with the bootstrap delayed to test the experience" width="592" height="978" style="width: 100%; height: auto; max-width: 592px;">
</div>
This is better than it was, but feels a little abrupt when it switches between the two screens.
## Adding a Basic Enter Animation to Your Component Content
I think we can make this feel a little better by adding an enter animation to ease the app content when it loads in.
Before we do this, I just want to point out that I’ve created several YouTube videos and even a [playlist](https://www.youtube.com/playlist?list=PLp-SHngyo0_ikgEN5d9VpwzwXA-eWewSM) all about the animation framework in Angular, so you should totally check those out too!
If any of what you’re about to see is unclear, hopefully those videos will help.
Ok, back to this example.
Let’s add the animations array. Then we’ll need to add a trigger with the [trigger()](https://angular.dev/api/animations/trigger) function, let’s call it “enter”.
#### main.ts
```typescript
import { trigger } from '@angular/animations';
@Component({
selector: 'app-root',
...
animations: [
trigger('enter', [
])
]
})
export class App {
}
```
Next we need a transition using the [transition()](https://angular.dev/api/animations/transition) function, and we’ll be transitioning the “enter” state of our content.
```typescript
import { ..., transition } from '@angular/animations';
@Component({
selector: 'app-root',
...
animations: [
trigger('enter', [
transition(':enter', [
])
])
]
})
export class App {
}
```
Ok, now we can add the starting state of our enter animation with the [style()](https://angular.dev/api/animations/style) function. Let’s start from an opacity of zero and a scale of point seven.
```typescript
import { ..., style } from '@angular/animations';
@Component({
selector: 'app-root',
...
animations: [
trigger('enter', [
transition(':enter', [
style({ opacity: 0, scale: 0.7 })
])
])
]
})
export class App {
}
```
And for the last piece, we’ll animate to our final state with the [animate()](https://angular.dev/api/animations/animate) function. Let’s go with a duration of four hundred milliseconds and an easing function of ease-in.
Then we just need to add the final style with another [style()](https://angular.dev/api/animations/style) function. It will animate to an opacity of one and a scale of one too.
```typescript
import { ..., animate } from '@angular/animations';
@Component({
selector: 'app-root',
...
animations: [
trigger('enter', [
transition(':enter', [
style({ opacity: 0, scale: 0.7 }),
animate('400ms ease-in', style({ opacity: 1, scale: 1 }))
])
])
]
})
export class App {
}
```
Ok, so that’s the animation, now we can add the trigger on this div that wraps the rest of the content in this component.
```html
<div @enter>
<app-header></app-header>
<app-slider></app-slider>
</div>
```
So when that div enters, this animation will run. And that’s it, so let’s save and see how it looks now.
<div style="text-align: center">
<img src="https://briantree.se/assets/img/content/uploads/2024/07-12/demo-3.gif" alt="Example of a custom loading screen in Angular with an easing transition after the app has bootstrapped" width="588" height="978" style="width: 100%; height: auto; max-width: 592px;">
</div>
Nice, that’s a lot better.
## In Conclusion
Now, we could probably keep going on this if we wanted, but I’ll go ahead and stop here because I’m sure you get the idea by now.
It’s pretty easy to create a much more intriguing loading screen with very little effort, and now you know exactly how to do it.
I hope you found this tutorial helpful, and if you did, check out [my YouTube channel](https://www.youtube.com/@briantreese) for more tutorials about various topics and features within Angular.
## Want to See It in Action?
Check out the demo code and examples of these techniques in the in the Stackblitz example below. If you have any questions or thoughts, don’t hesitate to leave a comment.
{% embed https://stackblitz.com/edit/stackblitz-starters-ee1aen?ctl=1&embed=1&file=src%2Findex.html %}
---
## Found This Helpful?
If you found this article helpful and want to show some love, you can always [buy me a coffee!]( https://buymeacoffee.com/briantreese)
| brianmtreese |
1,913,351 | Review: Fifine Ampligame AM6 Condenser Mic | Before we get started, please note that Fifine sent me a microphone for this review. ... | 28,027 | 2024-07-12T03:43:26 | https://www.nickyt.co/blog/review-fifine-ampligame-am6-condenser-mic-714/ | productreview, microphone, livestreaming, contentcreation | Before we get started, please note that Fifine sent me a microphone for this review.
## Packaging
The AM6 comes in a nice box and is very well protected with foam.



You probably could have dropped it from a few floors and the mic probably would've still been fine in the box. 😅
## Construction Quality & Controls
I didn't realize it initially, but the AM6 is mainly hard plastic compared to other microphones like my previous one, the Blue Yeti. That means, if you're not careful, you could damage or chip it.
Because its casing is hard plastic, it's much lighter. I had to tighten my mic boom arm a bit to prevent it from popping up, which wasn't an issue with my previous, heavier mic. Not a big deal, but just something to be aware of.
On the top of the microphone, you tap it to mute/unmute. This works really well. The only minor issue I have with this is I can't see the light indicating whether it's muted (red) or not (green). I need to look just under the pop filter, as it obscures the light with the way it's on my mic boom arm.
On the bottom there's a noise-cancelling button, button for changing the microphone lighting color, and a headphone jack.
Aside from that, there's a dial for the gain, another dial for the headphone volume and one last dial that allows you to switch between gaming and chat mode. I don't really game, so I have it turned all the way to the right for chat.
It's a USB microphone that comes with a USB-A to USB-C cord.
The one thing the microphone doesn't have is an off button, but for my setup this is a non-issue as I have a USB hub with physical buttons to turn ports on and off.

## The Look
The microphone looks fantastic.

I love the aesthetic of the pop filter and the fact that it lights up, and you can change the colour of the mic.
## Sound
I'm not an audiophile, but this mic is a condenser mic which is typically better than a dynamic mic from what I've understood especially for something like what I do, coding on live streams and interviewing people.
You can hear it in this clip from a recent livestream of mine. Someone asked if the gain was at different levels between the Blue Yeti and the Fifine AM6, but they were both cranked to the max, so I honestly think the AM6 sounds better.
{% embed https://www.twitch.tv/videos/2187792388 %}
## Wrapping up
If you're looking for a budget microphone, the AM6 packs a lot of punch for the price point (80$ CAD currently). As someone who is not a sound guy, I really like the look, the controls and the sound quality. It definitely suits my needs as a content creator.
## Links to purchase the Fifine AM6
* [AM6 on amazon.ca](https://www.amazon.ca/Microphone-Streaming-Cancellation-Twitch-AMPLIGAME-AM6/dp/B0CSFZF62Y)
* [AM6 on FIFINE official website](https://fifinemicrophone.com/products/fifine-ampligame-am6)
| nickytonline |
1,913,362 | Internationalizing Next.js: A Comprehensive Guide to i18n Integration | Internationalization (i18n) is a crucial feature for any modern web application that seeks to reach a... | 0 | 2024-07-07T18:00:00 | https://www.coderamrin.com/blog/nextjs-14-i18n-integration | nextjs, i18n, beginners, webdev | Internationalization (i18n) is a crucial feature for any modern web application that seeks to reach a global audience. It enables you to deliver content in various languages, making your application accessible and user-friendly to people from different linguistic backgrounds.
In today’s guide, you will learn how to integrate internationalization to a Next.js application using `next-intl`.
Let’s get started.
### Prerequisite
To make the most out of this guide, here’s what you should have under your belt:
1. Understanding of Next.js app router and TypeScript.
2. Knowledge of how Next.js Middleware works.
## Install Next.js
To integrate multi-language support, you need a Next.js project.
Let’s create a brand-new project with create-next-app.
```jsx
npx create-next-app@latest
```
Run the above command and go with all the default options. You can choose the options according to your requirements.
Once you are done, you will have the Next.js project ready to integrate multi-language support.
## Write the Translations
Before we proceed to configure `next-intl` we need the translation for the different languages.
We will keep all the translations in the **messages** folder. You can name it anything you want.
Translations are stored as JSON objects. We only have a title for this example.
Go ahead and add all your text and their corresponding translations in these files.

## Configure next-intl
First, you have to install next-intl
Run this command to install the package:
```jsx
npm install next-intl
```
Once done, create a new route named `[locale]` inside the app directory.
Then move all the routes under `[locale]`; we will use this segment to get the locale and show the text in that language.

Now that we have all set up, let’s configure next-intl.
**#1:** Create a config file on the root of your project's directory and name it: **i18n.ts.**
Paste this code snippet into that file:
```jsx
import { notFound } from "next/navigation";
import { getRequestConfig } from "next-intl/server";
export const locales = ["en", "de", "fr"];
export default getRequestConfig(async ({ locale }) => {
// Validate that the incoming `locale` parameter is valid
if (!locales.includes(locale as any)) notFound();
return {
messages: (await import(`../messages/${locale}.json`)).default,
};
});
```
In this code snippet, we set up the locales for the project and validate the incoming `locale` parameter. If the locale is not on the list, the `notFound()` call renders the not-found page. If it matches one of the locales, we return the messages for that locale.
**#2**: Create **middleware.ts** file on the root of your project.
Copy this code snippet over to your middleware
```jsx
import createMiddleware from "next-intl/middleware";
import { locales } from "./i18n";
export default createMiddleware({
// A list of all locales that are supported
locales: locales,
// Used when no locale matches
defaultLocale: "en",
// Removes the locale when it's default
localePrefix: 'as-needed',
});
export const config = {
// Match only internationalized pathnames
matcher: ["/", "/(de|en|fr)/:path*"],
};
```
Import the locales from the i18n config file you created earlier and the `createMiddleware` function from next-intl.
Inside the `createMiddleware` call, you can specify all the options you want to have.
Check out the next-intl [documentation](https://next-intl-docs.vercel.app/docs/routing/middleware) for more information.
## Use the Translation
All the configurations are done.
Now, let’s see how to use the translation on a Client Component.
### Client component
To use the translation on a Client Component, you must wrap your Layout with `NextIntlClientProvider`.
And pass the messages as props to use on the Client components.
```jsx
import "./globals.css"
import { NextIntlClientProvider } from "next-intl";
import { getMessages } from "next-intl/server";
import LangSwitcher from "@/components/LangSwitcher";
export default async function LocaleLayout({
children,
params: { locale },
}: {
children: React.ReactNode;
params: { locale: string };
}) {
// Providing all messages to the client
// side is the easiest way to get started
const messages = await getMessages();
return (
<html lang={locale}>
<body>
<NextIntlClientProvider messages={messages}>
<LangSwitcher />
{children}
</NextIntlClientProvider>
</body>
</html>
);
}
```
Here is an example of using translation on a client component.
```jsx
import { useTranslations } from "next-intl";
export default function Index() {
const t = useTranslations("Index");
return (
<div className="container text-center my-5 mx-auto">
<h1 className="text-3xl font-bold">{t("title")}</h1>
</div>
);
}
```
First, import the **`useTranslations`** hook from **next-intl** and then call it with the name of the translation object.
It will provide the associated translations.
You can add more translations like this:
```json
{
  "Index": {
    "title": "Hallo Welt"
  },
  "About": {
    "title": "About Us"
  }
}
```
This way you can use the translations on a Client Component.
Let’s see how to use the translation on a Server Component.
### Server component
For the server component, you don’t have to configure anything else.
Just import `getTranslations` and call it with the translation object key.
```jsx
import React from "react";
import { getTranslations } from "next-intl/server";
const AboutPage = async () => {
const t = await getTranslations("About");
return (
<div className="container text-center my-5 mx-auto">
<h1 className="text-3xl font-bold">{t("title")}</h1>
</div>
);
};
export default AboutPage;
```
This is how you can use translations on a Server Component.
Now to test our translations let’s create a language switcher.
## Language switcher
Create a file named **LangSwitcher.tsx** in the components folder.
And paste this code into that file.
```jsx
"use client";
import Link from "next/link";
import { usePathname } from "next/navigation";
import { locales } from "@/i18n";
const LangSwitcher = () => {
const pathName = usePathname();
const redirectedPathName = (locale: string) => {
if (!pathName) return "/";
const segments = pathName.split("/");
segments[1] = locale;
return segments.join("/");
};
return (
<div className="flex space-x-10 justify-center my-5">
{locales.map((locale) => (
<Link
key={locale}
href={redirectedPathName(locale)}
className="capitalize hover:underline hover:text-blue-500"
>
{locale}
</Link>
))}
</div>
);
};
export default LangSwitcher;
```
This is the language switcher for your Next.js application. It dynamically generates links for each locale in the `locales` array, modifying the current path to reflect the selected locale.
When a user clicks on a locale link, they are redirected to the same page but in the chosen language.
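The path-rewriting logic can be illustrated in isolation. Below is a minimal sketch of it as a plain JavaScript function, extracted from the component above without React:

```javascript
// Sketch of the switcher's path rewriting: replace the first
// path segment (the locale) with the target locale.
function redirectedPathName(pathName, locale) {
  if (!pathName) return "/";
  const segments = pathName.split("/");
  segments[1] = locale; // segments[0] is "" because paths start with "/"
  return segments.join("/");
}

console.log(redirectedPathName("/en/about", "de")); // → "/de/about"
```

Note that with `localePrefix: 'as-needed'` the current path may not contain a locale segment at all for the default locale (e.g. `/about`), in which case this simple rewrite would replace a real segment — something to keep in mind when testing the switcher.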
## Resources:
**Documentations:** https://next-intl-docs.vercel.app/docs/getting-started
**Source code:** https://github.com/Coderamrin/next14-i18n-integration
## **Conclusion**
This article showed you how to integrate multi-language support into your Next.js projects.
If you followed this article step by step, you should have i18n integrated in your Next.js project.
Comment below if you have any questions or suggestions.
**Connect With Me**
[Twitter/x](https://x.com/CoderAmrin)
[Github](https://github.com/coderamrin/)
[LinkedIn](https://www.linkedin.com/in/coderamrin/)
Happy Coding.
| coderamrin |
1,913,592 | nth-child ninja power | In this article, I will share a small example of the :nth-child() pseudo-class usage that allows one... | 0 | 2024-07-08T16:07:05 | https://dev.to/titovmx/nth-child-ninja-power-5ebc | frontend, webdev, css, tutorial | In this article, I will share a small example of the `:nth-child()` pseudo-class usage that allows one to efficiently work with dynamic lists.
## Problem
Let's first look at the problem I had. My task was to render a tree with different types of nodes. One of the node types presents a list, and I wanted to add alternating zebra-like backgrounds to them. Below is the visual representation of the UI I've been aiming for.

Ok, it seems that I should use a common pattern to match even nodes with an offset - `2n + 4` works for this example.
However, the difficulty is that this list of nodes is dynamic and the nodes can differ. For instance, not every node has a description, and in this case the offset changes. If a user has permission to edit nodes, there will be an additional node after the description to select all node items for bulk editing. All of this makes the offset of node items unpredictable.
According to its definition, the pseudo-class `:nth-child()` matches elements based on their indexes in the child list of their parents. It means that the selector `.nodeItem:nth-child(even)` will not work as might be expected: it will highlight only `.nodeItem` nodes, but it counts all siblings when deciding whether an element is even.

## Solution
However, `:nth-child()` also has syntax to match selectors. Its argument can not only describe the formula to calculate indices but also restrict them to any selector with the `of <selector>` notation. For the node example, it looks the following way:
```css
.nodeItem:nth-child(even of .nodeItem) {}
```
Now it truly scans only `.nodeItem` nodes and applies styles to even nodes among them!

While it is very useful, major browsers have only supported this syntax since the spring of 2023, so please still be careful and consider JavaScript solutions if you need to apply a similar approach to styling your dynamic lists with support for older browsers. | titovmx |
1,913,677 | Mastering Software Development: Essential Tips for Success | Becoming a proficient software developer is a journey that requires dedication, continuous learning,... | 0 | 2024-07-09T08:19:38 | https://dev.to/helloworldttj/mastering-software-development-essential-tips-for-success-49ee | software, developer, webdev, beginners |

Becoming a proficient software developer is a journey that requires dedication, continuous learning, and a passion for problem-solving. In this blog post, we will explore essential strategies and best practices to help you excel in your programming career.
## Key Strategies to Excel as a Software Developer
### 1. Focus on a Single Domain
Specializing in a single domain allows you to build deep expertise and become a go-to expert in that area. Here's how you can achieve this:
- **Identify Your Interest**: Choose a domain that excites you, such as web development, mobile app development, or data science.
- **Deep Dive**: Learn the core technologies and tools used in your chosen domain.
- **Stay Updated**: Keep abreast of the latest trends and advancements in your field.
### 2. Understand the Basics
A strong foundation in the basics of programming is crucial for long-term success. Focus on the following:
- **Data Structures and Algorithms**: Master these fundamental concepts to solve complex problems efficiently.
- **Programming Languages**: Gain proficiency in at least one or two programming languages relevant to your domain.
- **Code Quality**: Write clean, readable, and maintainable code.
### 3. Love Your Code
Develop a passion for coding and strive for excellence in every project you undertake:
- **Refactor Regularly**: Continuously improve your code by refactoring and optimizing it.
- **Code Reviews**: Participate in code reviews to learn from others and improve your coding standards.
- **Documentation**: Write clear and concise documentation for your code.
### 4. Problem-Oriented Study
Approach learning with a problem-solving mindset:
- **Project-Based Learning**: Work on real-world projects to apply your knowledge and gain practical experience.
- **Challenges and Competitions**: Participate in coding challenges and hackathons to test your skills.
- **Debugging Skills**: Develop strong debugging skills to quickly identify and fix issues in your code.
### 5. Take Risks
Don’t be afraid to step out of your comfort zone:
- **Experiment**: Try new technologies and frameworks to broaden your skill set.
- **Side Projects**: Work on side projects to explore new ideas and concepts.
- **Learn from Failures**: Embrace failures as learning opportunities and keep pushing forward.
### 6. Keep Coding Standards High
Maintain high standards in your coding practices:
- **Consistency**: Follow consistent coding conventions and best practices.
- **Testing**: Write unit tests and perform thorough testing to ensure code quality.
- **Version Control**: Use version control systems like Git to manage your codebase effectively.
### 7. Embrace Teamwork
Collaboration and teamwork are essential in software development:
- **Git and Project Management**: Use Git for version control and project management tools to track progress and collaborate with team members.
- **Communication**: Develop strong communication skills to work effectively with your team.
- **Mentorship**: Seek mentorship from experienced developers and mentor others in return.
## Conclusion
Becoming a good software developer requires a blend of technical skills, a problem-solving mindset, and a commitment to continuous learning. By focusing on a single domain, understanding the basics, loving your code, adopting a problem-oriented study approach, taking risks, maintaining high coding standards, and embracing teamwork, you can excel in your programming career.
| helloworldttj |
1,913,804 | Make Rust Object Oriented with the dual-trait pattern | Hi! Today I'll tell you about a cool Rust 🦀 trick: dual-trait pattern! It's especially useful when... | 0 | 2024-07-09T05:34:52 | https://dev.to/mslapek/make-rust-object-oriented-with-the-dual-trait-pattern-1kea | rust, architecture, howto, programming | Hi! Today I'll tell you about a cool Rust 🦀 trick: **dual-trait pattern!** It's especially useful when you're dealing with the `dyn` keyword and want to simulate OOP features in Rust.
The key idea is that we'll consider **two perspectives** (dual!) about a trait:
* **implementer** of the trait,
* **user** of the trait.
In the end, we will take a look at the dual-trait pattern in the wild - I’ve successfully applied the dual-trait pattern to an [Apache project](https://github.com/apache/datafusion/pull/5521). 😉
## Animal zoo
Let's start with a classic OOP example with animals. Each animal will:
* have a defined species (parrot, monkey, etc.),
* respond to a text command.
Management of the zoo favors diversity, so we must make sure, that each new animal is distinct from the already owned ones (it implies the use of the `Eq` trait).
As a final requirement, our Rust library cannot assume a predefined set of species in the zoo. This (artificial) restriction precludes the use of `enum` and forces us to use `dyn Animal` in the software.
## Let's do it!
First, we'll define the `Animal` trait:
```rust
pub trait Animal: Eq + Debug {
fn species(&self) -> &'static str;
fn react(&mut self, command: &str) -> String;
}
```
Notice, that aside from the species and reaction we implement `Eq` and `Debug`.
The zoo has a beautiful garden with palm trees, let's bring some 🦜 parrots:
```rust
#[derive(PartialEq, Eq, Debug)]
pub enum FeatherColor {
Red,
Green,
Blue,
}
#[derive(PartialEq, Eq, Debug)]
pub struct Parrot {
feather_color: FeatherColor,
}
```
Let's see, how it's natural 🌿 to implement an `Animal` trait. The `Eq` trait was automatically implemented through `#[derive(...)]`.
```rust
impl Animal for Parrot {
fn species(&self) -> &'static str {
"Parrot"
}
fn react(&mut self, command: &str) -> String {
match command {
"repeat" => "Polly want a cracker".to_string(),
_ => "Squawk!".to_string(),
}
}
}
```
The zoo has empty cages. Let's fill them with 🐵 monkeys:
```rust
#[derive(PartialEq, Eq, Debug)]
pub enum FurColor {
Brown,
Black,
White,
}
#[derive(PartialEq, Eq, Debug)]
pub struct Monkey {
fur_color: FurColor,
}
impl Animal for Monkey {
fn species(&self) -> &'static str {
"Monkey"
}
fn react(&mut self, command: &str) -> String {
if command.starts_with("invite") {
let who = command.split_whitespace().last().unwrap_or_default();
format!("Oooh oooh aah aah {}", who)
} else {
"Aaaah!".to_string()
}
}
}
```
## The zoo doesn't compile!
The zoo is just an array of animals:
```rust
pub struct Zoo {
animals: Vec<Box<dyn Animal>>,
}
```
However, it gives a compiler error 🛑:
```
the trait `Animal` cannot be made into an object
```
The problem is that with the `dyn` keyword we requested a polymorphic version of `Animal` trait, however, it's impossible, because the equality `eq` from `PartialEq` uses `Self` type:
```rust
fn eq(&self, other: &Self) -> bool;
```
But in polymorphic dispatch, we don't know the type of `Self`...
At first glance, it might look like a limitation of Rust. However, in Java and C# we have a similar problem. They solved it by taking `Object` as an argument, not a specific type:
```csharp
public bool Equals(Object obj)
```
So each implementation must cast the `obj` to the `Self` type manually, just like in [C# documentation](https://learn.microsoft.com/en-us/dotnet/fundamentals/runtime-libraries/system-object-equals):
```csharp
// C# code below, kind of a better Java
// 1. Self is Person6 class
public class Person6
{
// some person fields...
private string idNumber;
// 2. Taking Object, not Person6 as an argument
public override bool Equals(Object obj)
{
// 3. Casting to Self
Person6 personObj = obj as Person6;
if (personObj == null) {
// 4. Not Person6
return false;
} else {
// 5. Got another Person6. Comparing all fields manually.
return idNumber.Equals(personObj.idNumber);
}
}
}
```
## Object safe animals

We must introduce a new version of the `Animal` trait - an **object safe** one (it means it can be used with the `dyn` keyword)!
```rust
pub trait Animal: Debug {
fn dyn_eq(&self, other: &dyn Animal) -> bool;
fn as_any(&self) -> &dyn Any;
fn species(&self) -> &'static str;
fn react(&mut self, command: &str) -> String;
}
```
This differs from the original `Animal` trait:
* there is no `Eq` trait, so we have no method with a `Self` type parameter,
* we've introduced `dyn_eq`, taking *any* animal,
* the `as_any` returns us an instance of the [`Any`](https://doc.rust-lang.org/std/any/index.html) trait - which allows us to perform downcasting.
As you'll notice, the equality implementation will be similar to the one in the C# example above. Let's implement the trait for parrot:
```rust
#[derive(PartialEq, Eq, Debug)]
pub struct Parrot {
// the same...
}
impl Animal for Parrot {
fn dyn_eq(&self, other: &dyn Animal) -> bool {
// 1. Downcasting, to check whether the other animal is a parrot
match other.as_any().downcast_ref::<Self>() {
// 2. It's a parrot, let's use a comparison from Eq
Some(o) => self == o,
// 3. Not a parrot
None => false,
}
}
fn as_any(&self) -> &dyn Any {
self
}
fn species(&self) -> &'static str {
// the same...
}
fn react(&mut self, command: &str) -> String {
// the same...
}
}
```
It'll work, however, it has a major drawback - we've lost the simplicity of just deriving an `Eq` and calling it a day. If we had more traits with `Self` to support, like `Hash` and `Clone`, the `Parrot` implementation would become cluttered.
What is worse, each new `Animal` must implement this routine:
```rust
#[derive(PartialEq, Eq, Debug)]
pub struct Monkey {
// the same...
}
impl Animal for Monkey {
// 1. Copy and paste from Parrot
fn dyn_eq(&self, other: &dyn Animal) -> bool {
match other.as_any().downcast_ref::<Self>() {
Some(o) => self == o,
None => false,
}
}
// 2. Copy and paste from Parrot
fn as_any(&self) -> &dyn Any {
self
}
fn species(&self) -> &'static str {
// the same...
}
fn react(&mut self, command: &str) -> String {
// the same...
}
}
```
## Zoo
We can finally implement our zoo! Diversity requirements are satisfied with `dyn_eq`.
```rust
pub struct Zoo {
// 1. This time it compiles
animals: Vec<Box<dyn Animal>>,
}
pub struct InvalidAnimalError {
pub animal: Box<dyn Animal>,
}
impl Zoo {
pub fn new() -> Self {
Zoo {
animals: Vec::new(),
}
}
pub fn add(&mut self, animal: Box<dyn Animal>) -> Result<(), InvalidAnimalError> {
// 2. Notice, how dyn_eq is used
let already_exists = self.animals.iter().any(|a| a.dyn_eq(animal.as_ref()));
if already_exists {
// 3. Tigers will have a feast
Err(InvalidAnimalError { animal })
} else {
// 4. Go to a cage for the rest of your life!
self.animals.push(animal);
Ok(())
}
}
}
```
## The dual-trait pattern

There we have a collision between an *implementer* of the trait and a *user* of the trait.
* `Animal` developer wants to use cool `#[derive(Eq)]` features and strong static typing.
* The `Zoo` developer wants to have a more OOP-like style approach, supporting `dyn` and requiring tedious downcasting from animals.
The solution is to have... two traits! And we've already written them in this article!
First, let's make the **easy to implement** variant of the `Animal` trait:
```rust
/// This trait facilitates the implementation of the [`Animal`] trait.
pub trait AnimalCore: Eq + Debug + 'static {
fn species(&self) -> &'static str;
fn react(&mut self, command: &str) -> String;
}
```
Notice, that it's *not* object safe, because we use `Eq`. Another change is the `Core` suffix in the trait's name. However, it's easy to implement, just like in the "Let's do it!" section.
Let's make the **easy to use** `Animal` trait:
```rust
/// The [`AnimalCore`] trait is *the recommended way to implement* this trait.
pub trait Animal: Debug {
fn dyn_eq(&self, other: &dyn Animal) -> bool;
fn as_any(&self) -> &dyn Any;
fn species(&self) -> &'static str;
fn react(&mut self, command: &str) -> String;
}
```
It's the same trait as in the "Object safe animals" section. Of course, this *is* object safe.
And now the final trick 🎩🪄 - with Rust's *blanket implementation*, the language will automatically implement an object-safe variant for each `AnimalCore`:
```rust
// 1. For each AnimalCore, we'll implement Animal
impl<T: AnimalCore> Animal for T {
fn dyn_eq(&self, other: &dyn Animal) -> bool {
// 2. The OOP downcasting hell
match other.as_any().downcast_ref::<Self>() {
Some(o) => self == o,
None => false,
}
}
fn as_any(&self) -> &dyn Any {
self
}
fn species(&self) -> &'static str {
// 3. Delegate to the original implementation
AnimalCore::species(self)
}
fn react(&mut self, command: &str) -> String {
// 4. Delegate to the original implementation
AnimalCore::react(self, command)
}
}
```
So... that's it! With that, you just implement the `AnimalCore` trait with idiomatic `#[derive(Eq)]`, and Rust will automatically provide you with an object-safe variant, which can be put in a zoo.
As an exercise, you can rewrite the zoo using [`HashSet`](https://doc.rust-lang.org/std/collections/struct.HashSet.html), so the addition of an animal will take a constant time. This will require you to support hashing with the `dyn_hash` function. You can try the exercise in 🏀 [the Rust playground!](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=0f27ce1ce554f7c13b9b76f97a56be09)
## One more thing...
As a last whiff, we should make it possible to compare the `&dyn Animal` with the `==` operator.
```rust
// 1. Notice the dyn keyword
impl PartialEq for dyn Animal {
fn eq(&self, other: &Self) -> bool {
// 2. Delegating to the polymorphic dyn_eq
self.dyn_eq(other)
}
}
impl Eq for dyn Animal {}
```
So now you can use the `==` operator in the `add` function:
```rust
let already_exists = self.animals.iter().any(|a| a == &animal);
```
Or even better, use the [`contains`](https://doc.rust-lang.org/std/primitive.slice.html#method.contains) algorithm from the standard library compatible with the `Eq` trait:
```rust
let already_exists = self.animals.contains(&animal);
```
## Drawbacks
The dual-trait pattern makes maintenance of the trait itself more complex. Fortunately, this complexity does not impact implementers or users of the pattern.
The performance shouldn't be impaired: except for the polymorphic calls themselves, the rest of the code will probably be inlined.
Using the `dyn` keyword for file interfaces and I/O is usually a good choice. However, if you need to support complex traits like `Eq` or `Hash`, then you should first try to use `enum` and generics.
As a last resort, go for the dual-trait pattern to simulate classic OOP.
## In the wild

Animals somehow escaped into the wild... 🐾
It turns out that the dual-trait pattern is used in a *production Rust* software.
**Apache DataFusion** is an SQL query engine developed in Rust, used by [Apple](https://arrow.apache.org/blog/2024/03/06/comet-donation/) and [InfluxDB](https://www.influxdata.com/glossary/apache-datafusion/).
I've invented 😎 this dual-trait pattern for the purposes of the logical planner, as seen in [this merged PR](https://github.com/apache/datafusion/pull/5521). The problem was that the nodes in the plan (filter, select, etc.) had to support at the same time:
* equality `Eq` and hashing `Hash`,
* custom nodes with `dyn` keyword.
This prompted the use of the dual-trait pattern. Therefore there are two traits:
* `UserDefinedLogicalNodeCore` for an implementer,
* `UserDefinedLogicalNode` object-safe variant for a user.
There is a neat example of how a third-party project belonging to [the Linux Foundation](https://delta.io/) implements `UserDefinedLogicalNodeCore`: [`MetricObserver` in delta-rs](https://github.com/delta-io/delta-rs/blob/937015198df144a91e6ff73b1429d2fd1e23a588/crates/core/src/delta_datafusion/logical.rs#L5). The developer had to use only `#[derive(Debug, Hash, Eq, PartialEq)]` to get `dyn_eq` and `dyn_hash` implemented.
## Conclusions
Usually, your Rust types should be modeled with `struct` and `enum` from *functional paradigm*. However, when there is a need for OOP classes, just like in the logical planner example, then the dual-trait pattern should resolve this 🎯 *object-functional impedance*.
If you're into 🦀 Rust, then you might enjoy my other *dev.to* article [Array duality in Go and Rust](https://dev.to/mslapek/array-duality-in-go-and-rust-4pep), comparing various ways to allocate an array in Rust, like `Vec` or `Cow`.
Comments 💬 and questions are welcome! Don't forget to check out 🏀 [the Rust playground!](https://play.rust-lang.org/?version=stable&mode=debug&edition=2021&gist=0f27ce1ce554f7c13b9b76f97a56be09)
| mslapek |
1,913,869 | Understanding Asynchronous JavaScript: Callbacks, Promises, and Async/Await | Hello and welcome to the first post of the JavaScript: From Novice to Expert Series ! My goal with... | 27,941 | 2024-07-08T01:00:00 | https://dev.to/buildwebcrumbs/understanding-asynchronous-javascript-callbacks-promises-and-asyncawait-cdc | Hello and welcome to the first post of the **JavaScript: From Novice to Expert Series**!
My goal with this series is to review some concepts myself, and while doing so, share my learning to help you learn as well 🌱

---
Asynchronous JavaScript is an essential part of the language. It allows us to manage operations that involve waiting—like API requests, file reading, or any task that would otherwise block the execution thread—**without compromising the user experience**.
In this article we will explore the core concepts of asynchronous JavaScript, walking through the use of callbacks, promises, and async/await.
---
## Why Asynchronous JavaScript?
Asynchronous operations are vital in JavaScript, especially in web environments where blocking operations can severely affect user experience and performance. By understanding asynchronous programming, you can ensure your applications remain responsive and efficient, no matter the load.

_[Image from this freecodecamp post.](https://www.freecodecamp.org/news/synchronous-vs-asynchronous-in-javascript/)_
---
## 1. Understanding Callbacks
### Definition and Usage
Callbacks are functions passed into another function as arguments, which are then invoked to continue code execution after an asynchronous operation has been completed. This pattern is fundamental in handling tasks like I/O operations.
Imagine you order a pizza.
You call the pizza place (function call) and tell them your order (arguments passed to the function).
They tell you it will take 20 minutes (simulates an asynchronous operation) and they will call you back (callback function) when it's ready for pickup.
**Example Code:**
``` JavaScript
function fetchData(callback) {
setTimeout(() => {
callback('Data fetched');
}, 1000);
}
fetchData(data => {
console.log(data); // Outputs: Data fetched
});
```
### Limitations
Despite their utility, callbacks can lead to complex and unmanageable code structures known as "callback hell," especially when several asynchronous operations are chained together.

_[Callback Hell and How to Rescue it?](https://dev.to/jerrycode06/callback-hell-and-how-to-rescue-it-ggj) by @jerrycode06_
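To make the nesting concrete, here is a small hypothetical pipeline (the step functions invoke their callbacks synchronously only to keep the sketch short — real ones would use timers or I/O). Each result is visible only inside the previous callback, so the indentation grows with every step:

```javascript
// Made-up data-fetching steps, each handing its result to a callback.
function getUser(id, cb) { cb({ id, name: "Ada" }); }
function getOrders(user, cb) { cb([`order-1-${user.id}`, `order-2-${user.id}`]); }
function getTotal(orders, cb) { cb(orders.length * 10); }

// The "pyramid of doom": three levels deep for just three steps.
function report(id, done) {
  getUser(id, (user) => {
    getOrders(user, (orders) => {
      getTotal(orders, (total) => {
        done(`${user.name} has ${orders.length} orders totalling ${total}`);
      });
    });
  });
}

report(7, console.log); // → "Ada has 2 orders totalling 20"
```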
---
## 2. Mastering Promises
### Introduction to Promises
Promises represent the completion or failure of an asynchronous operation and its resulting value. They simplify handling asynchronous operations by providing a more manageable and robust approach than raw callbacks.
**Example Code:**
``` JavaScript
function fetchData() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve('Data fetched');
}, 1000);
});
}
fetchData().then(data => {
console.log(data); // Outputs: Data fetched
});
```
### Chaining Promises
Promises can be chained to perform a series of asynchronous operations in a cleaner and more readable manner. Here's an example:
``` JavaScript
function getUser() {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve({ name: 'John Doe' });
}, 500);
});
}
function getPosts(user) {
return new Promise((resolve, reject) => {
setTimeout(() => {
resolve(['Post 1', 'Post 2']);
}, 1000);
});
}
getUser()
  .then(user =>
    getPosts(user).then(posts => {
      console.log(`Hello, ${user.name}! Here are your posts:`, posts);
    })
  )
  .catch(error => console.error('Error:', error));
```
This example fetches a user and then their posts using chained promises. Notice how errors can also be handled using the `.catch` method.
---
**Would you help us? ⭐**
Interested in supporting our free, Open Source initiative?
At Webcrumbs we are building an ecosystem of plugins and themes for JavaScript, and your support fuels our progress and helps bring more awesome tools and content to cool developers like you 😎
[👉 Star Webcrumbs on GitHub 🙏⭐](https://github.com/webcrumbs-community/webcrumbs)
---
## 3. The Power of Async/Await
### Simplifying Asynchrony with Async/Await
The async/await syntax introduced in ES2017 has revolutionized the way developers write asynchronous code, making it even cleaner and more intuitive than using promises alone. Async/await allows you to write asynchronous code that resembles synchronous code.
**Example Code:**
``` JavaScript
async function fetchData() {
let promise = new Promise((resolve, reject) => {
setTimeout(() => resolve("Data fetched"), 1000);
});
let result = await promise; // wait until the promise resolves
console.log(result); // "Data fetched"
}
```
In this example, the `async` keyword defines the function as asynchronous, and the `await` keyword pauses the execution of the function until `promise` resolves.
---
## 4. Best Practices
### Error Handling
Always implement error handling when dealing with asynchronous operations. This ensures that failures are gracefully managed and can significantly enhance the reliability of your application. Techniques like `.catch` for promises and `try...catch` with async/await are commonly used for error handling.
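As a hedged sketch of the `try...catch` approach (the `fetchConfig` function and its failure mode are made up for illustration), a rejected promise can be turned into a safe default instead of an unhandled rejection:

```javascript
// Stand-in for any async operation that may fail.
async function fetchConfig(shouldFail) {
  if (shouldFail) throw new Error("network down");
  return { retries: 3 };
}

async function loadConfig(shouldFail) {
  try {
    const config = await fetchConfig(shouldFail);
    return config.retries;
  } catch (err) {
    // The rejection is caught here; fall back to a safe default.
    return 0;
  }
}
```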
### Performance Considerations
Understand the impact of asynchronous operations on application performance. Use tools and techniques to monitor and optimize the performance of your async operations. This might involve techniques like debouncing or throttling for frequently occurring asynchronous calls.
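A throttle, for instance, drops calls that arrive too soon after the last accepted one. This is a simplified sketch (the clock is injectable so the behaviour is easy to verify without real timers; production code would usually reach for a utility library):

```javascript
// Wrap `fn` so it runs at most once per `intervalMs`.
function throttle(fn, intervalMs, now = Date.now) {
  let last = -Infinity;
  return (...args) => {
    const t = now();
    if (t - last >= intervalMs) {
      last = t;
      return fn(...args); // call accepted
    }
    return undefined; // call dropped
  };
}
```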
---
## Have you learned something new?
Understanding and implementing asynchronous JavaScript effectively is super important for any developer looking to build fast, responsive, and efficient web applications.
By mastering callbacks, promises, and async/await, you'll be well-equipped to deal with modern web development challenges.
Experiment with the examples, convert some synchronous operations in your projects to asynchronous ones and share your experiences! 🚀
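One practical exercise is promisifying a callback-style function by hand (Node also ships `util.promisify` for this). A minimal sketch, with `fetchDataCb` as a made-up callback API:

```javascript
// A callback-style API following the Node (err, data) convention.
function fetchDataCb(cb) {
  cb(null, "Data fetched");
}

// Wrap it so callers can use .then or await instead of callbacks.
function fetchDataAsync() {
  return new Promise((resolve, reject) => {
    fetchDataCb((err, data) => (err ? reject(err) : resolve(data)));
  });
}

fetchDataAsync().then(console.log); // → "Data fetched"
```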
---
### Show Your Support for Webcrumbs
We are building an Ecosystem of plugins and themes for the JavaScript community!
Your support means a lot for us to continue developing innovative tools and content that make a difference.
{% cta https://github.com/webcrumbs-community/webcrumbs %} ⭐👉 Star Webcrumbs on GitHub 🙏 ⭐ {% endcta %}
 | pachicodes | |
1,913,907 | Spring Boot 3 application on AWS Lambda - Part 9 Develop application with Spring Cloud Function AWS | Introduction In the part 8 we introduced concepts behind Spring Cloud Function (AWS). In... | 26,522 | 2024-07-08T15:07:52 | https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-9-develop-application-with-spring-cloud-function-aws-i1c | java, aws, springboot, serverless | ## Introduction
In the [part 8](https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-8-introduction-to-spring-cloud-function-99a) we introduced the concepts behind Spring Cloud Function (AWS). In this article we'll take a look at how to write an AWS Lambda function with the Java 21 runtime and Spring Cloud Function AWS using Spring Boot 3.2. To use a newer version of Spring Boot (e.g. 3.3), it may be enough to update the version in pom.xml.
## How to write AWS Lambda with Spring Cloud Function AWS using Spring Boot 3.2
For the sake of explanation, we'll use our Spring Boot 3.2 [sample application](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/tree/master/spring-boot-3.2-with-spring-cloud-function) and use Java 21 runtime for our Lambda functions.

There are multiple ways to write an AWS Lambda with Spring Cloud Function AWS using Spring Boot 3.2:
- Broadly speaking, an AWS Lambda built with AWS Serverless Java Container, introduced in the article [Develop application with AWS Serverless Java Container](https://dev.to/aws-builders/spring-boot-3-application-on-aws-lambda-part-3-develop-application-with-aws-serverless-java-container-2901), can also be considered a Spring Cloud Function AWS application because of the [spring-cloud-function-serverless-web](https://github.com/spring-cloud/spring-cloud-function/tree/main/spring-cloud-function-adapters/spring-cloud-function-serverless-web) dependency that aws-serverless-java-container-springboot3 requires. This is a collaboration effort between the Spring and AWS Serverless developers. It provides [Spring Cloud Function on AWS Lambda](https://docs.spring.io/spring-cloud-function/reference/adapters/aws-intro.html) functionality.
- We can also define Spring Beans (annotated with @ Bean) directly in the main Spring Boot Application class (annotated with @SpringBootApplication) and map the method name annotated with @ Bean (which is the default bean name) so that it exactly matches the Lambda function name in the Infrastructure as Code (i.e. the AWS SAM template). An example of this approach can be found [here](https://github.com/olegz/spring-aws-2023/tree/main/scf-aws).
- Another approach is to use the org.springframework.cloud.function.adapter.aws.web.WebProxyInvoker::handleRequest Lambda handler and a normal Spring Boot RestController (annotated with @RestController). An example of this approach can be found [here](https://github.com/spring-cloud/spring-cloud-function/tree/main/spring-cloud-function-adapters/spring-cloud-function-serverless-web/sample/pet-store).
- In this article I'd like to show a more classical approach: converting an implementation of the Java 8 Function interface into an AWS Lambda function.
In the [sample application](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/tree/master/spring-boot-3.2-with-spring-cloud-function) we'll create and retrieve [products](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/blob/master/spring-boot-3.2-with-spring-cloud-function/src/main/java/software/amazonaws/example/product/entity/Product.java) and use DynamoDB as the NoSQL database. You can find the DynamoProductDao.java implementation [here](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/blob/master/spring-boot-3.2-with-spring-cloud-function/src/main/java/software/amazonaws/example/product/dao/DynamoProductDao.java). We also put Amazon API Gateway in front of it as defined in [AWS SAM template](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/blob/master/spring-boot-3.2-with-spring-cloud-function/template.yaml).
In the [pom.xml](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/blob/master/spring-boot-3.2-with-spring-cloud-function/pom.xml) we need to define these dependencies among others:
```
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-adapter-aws</artifactId>
</dependency>
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-function-web</artifactId>
</dependency>
```
which will convert the Spring Cloud Function into an AWS Lambda and expose it as a web application.
This is our [Spring Boot main class](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/blob/master/spring-boot-3.2-with-spring-cloud-function/src/main/java/software/amazonaws/Application.java), which we also have to define in the [SAM template](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/blob/master/spring-boot-3.2-with-spring-cloud-function/template.yaml) in the environment variables section:
```
Globals:
Function:
.....
Environment:
Variables:
MAIN_CLASS: software.amazonaws.Application
```
Next we implement our [GetProductByIdHandler](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/blob/master/spring-boot-3.2-with-spring-cloud-function/src/main/java/software/amazonaws/example/product/handler/GetProductByIdHandler.java), which implements the java.util.function.Function interface and has to be annotated with Spring's @Component annotation.
```
@Component
public class GetProductByIdHandler implements
Function<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
@Autowired
private DynamoProductDao productDao;
...
public APIGatewayProxyResponseEvent apply(APIGatewayProxyRequestEvent requestEvent) {
String id = requestEvent.getPathParameters().get("id");
Optional<Product> optionalProduct = productDao.getProduct(id);
return new APIGatewayProxyResponseEvent().
withStatusCode(HttpStatusCode.OK).
withBody(....);
}
```
@Component is an annotation that allows Spring to detect our custom beans automatically. By default, Spring takes the simple name of the type (Java class) declaring the bean, changes the first letter to lowercase, and uses the resulting value to name the bean.
We can also implement this Lambda function without certain AWS dependencies by returning Product or Optional<Product> directly instead of wrapping it in an APIGatewayProxyResponseEvent object.
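To illustrate, a stripped-down sketch of such a handler might look like the following. The Product fields and the in-memory lookup are simplified stand-ins for the sample application's entity and DynamoProductDao, and the Spring @Component annotation is omitted to keep the sketch self-contained:

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.Function;

// Simplified stand-in for the sample application's Product entity
class Product {
    final String id;
    final String name;
    Product(String id, String name) { this.id = id; this.name = name; }
}

// Hypothetical handler variant without API Gateway event types:
// it takes the id directly and returns Optional<Product>
class GetProductByIdPlain implements Function<String, Optional<Product>> {
    // In the real application this lookup is DynamoProductDao.getProduct(id)
    private final Map<String, Product> products =
        Map.of("1", new Product("1", "Sample product"));

    @Override
    public Optional<Product> apply(String id) {
        return Optional.ofNullable(products.get(id));
    }
}
```

Spring Cloud Function takes care of serializing the returned object to the response body in this style.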
Next we define the Lambda function mapping in the [SAM template](https://github.com/Vadym79/AWSLambdaJavaWithSpringBoot/blob/master/spring-boot-3.2-with-spring-cloud-function/template.yaml):
```
GetProductByIdFunction:
Type: AWS::Serverless::Function
Properties:
Environment:
Variables:
SPRING_CLOUD_FUNCTION_DEFINITION: getProductByIdHandler
FunctionName: GetProductByIdWithSpringBoot32SCF
...
Events:
GetRequestById:
Type: Api
Properties:
RestApiId: !Ref MyApi
Path: /products/{id}
Method: get
```
The relevant part here is the environment variable SPRING_CLOUD_FUNCTION_DEFINITION, which holds the Spring bean name; in our case this corresponds to the Lambda function class name with the first letter changed to lowercase. In our example, the Lambda function with the environment variable SPRING_CLOUD_FUNCTION_DEFINITION = **getProductByIdHandler** is mapped to the Lambda Java class **GetProductByIdHandler**.
Note that since AWS does not allow dots (.) or hyphens (-) in environment variable names, we can benefit from Spring Boot's relaxed binding support and simply substitute dots with underscores and hyphens with camel case. So, for example, spring.cloud.function.definition becomes spring_cloud_function_definition. We can also write variable names such as spring.cloud.function.definition and spring.cloud.function.routing-expression using all capital letters.
Also in the SAM template, in the global Lambda function properties, we define the generic Spring Cloud Function AWS Lambda handler
```
Globals:
Function:
Handler: org.springframework.cloud.function.adapter.aws.FunctionInvoker::handleRequest
```
which routes all requests to the correct Lambda implementation (we defined a separate handler Java class for each Lambda function implementation).
You can learn more about Spring Cloud Function routing and filtering capabilities in this [article](https://docs.spring.io/spring-cloud-function/docs/4.0.5/reference/html/spring-cloud-function.html#_function_routing_and_filtering).
Then we deploy the application with **sam deploy -g**. To retrieve an existing product, we invoke the following:
```
curl -H "X-API-Key: a6ZbcDefQW12BN56WEA7" \
  https://${API_GATEWAY_URL}/prod/products/1
```
## Conclusion
In this article we took a look at how to write an AWS Lambda function with the Java 21 runtime and Spring Cloud Function AWS, using Spring Boot 3. In comparison to AWS Serverless Java Container and AWS Lambda Web Adapter, Spring Cloud Function AWS doesn't strictly require a Spring Boot Rest Controller, but supports reusing one. In our main example of using Spring Cloud Function AWS we exposed a Java 8 Function as a Lambda function.
In the next article of the series we'll measure the cold and warm start times for this sample application including enabling SnapStart on the Lambda function and also apply various priming techniques for the DynamoDB invocation. | vkazulkin |
1,913,940 | Django: difference between MEDIA_ROOT and STATIC_ROOT | Introduction STATIC_ROOT vs. MEDIA_ROOT In web development, particularly when... | 0 | 2024-07-10T09:10:47 | https://dev.to/doridoro/django-difference-between-mediaroot-and-staticroot-5cb | django | ## Introduction
### `STATIC_ROOT` vs. `MEDIA_ROOT`
In web development, particularly when working with Django, managing assets such as images, videos, CSS, and JavaScript files is a critical component of building a robust and efficient web application. Two key settings in Django, `MEDIA_ROOT` and `STATIC_ROOT`, often cause confusion among developers, especially those new to the framework. Despite their seemingly similar roles in handling files, these settings serve distinct purposes and are configured differently within a Django project. Understanding the differences between `MEDIA_ROOT` and `STATIC_ROOT` is essential for organizing and deploying your web application's static and media content effectively. In this discussion, we will delve into the definitions, uses, and distinctions of `MEDIA_ROOT` and `STATIC_ROOT`, providing clarity on how each contributes to the file management strategy of a Django project.
- **`STATIC_ROOT`**:
- **Purpose**: It is the directory where all static files are collected (using the `collectstatic` management command: `python manage.py collectstatic`) for deployment.
- **Use Case**: Static files are typically assets like CSS, JavaScript, and images that are part of your application's front-end and do not change frequently.
- **Settings**:
```python
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
```
- **`MEDIA_ROOT`**:
- **Purpose**: It is the directory where user-uploaded files are stored. This includes files uploaded via forms, such as profile pictures or documents.
- **Use Case**: Media files are usually content that can change, such as user uploads or files generated by the application.
- **Settings**:
```python
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
```
### Key Differences
1. **Nature of Files**:
- `STATIC_ROOT`: Contains static assets that are part of your codebase and are typically versioned with your application.
- `MEDIA_ROOT`: Contains media files uploaded by users or dynamically generated by the application.
2. **File Management**:
- `STATIC_ROOT`: Files are collected here during deployment using `collectstatic`.
- `MEDIA_ROOT`: Files are stored directly when uploaded, without needing a collection step.
3. **URL Serving**:
- `STATIC_URL`: URL prefix for accessing static files.
- `MEDIA_URL`: URL prefix for accessing media files.
4. **Access Control**:
- `STATIC_ROOT`: Usually public and accessible to all users.
- `MEDIA_ROOT`: Access can be restricted and managed more granularly depending on the application needs.
### Using `upload_to` with `MEDIA_ROOT`
When you specify `upload_to` in a model field, it creates a subdirectory in the `MEDIA_ROOT` where the file will be stored.
#### Example with Model
```python
#models.py
from django.db import models
class Picture(models.Model):
photo = models.ImageField(upload_to='images/')
```
### Settings Example
Here’s an example of how you might set up `MEDIA_ROOT` in your `settings.py`:
```python
# settings.py
import os
BASE_DIR = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
MEDIA_URL = '/media/'
MEDIA_ROOT = os.path.join(BASE_DIR, 'media')
```
In this setup:
- `MEDIA_ROOT` points to the `media` directory in your project’s base directory.
- `upload_to='images/'` means the file will be saved in `media/images/`.
### How it Works with File Uploads
1. **Saving a File**:
When a user uploads a file, Django will save it in the `MEDIA_ROOT` directory, under the subdirectory specified by `upload_to`.
2. **Path Example**:
If a file named `example.jpg` is uploaded through a model field with `upload_to='images/'`, the file path would be:
```
media/images/example.jpg
```
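A quick plain-Python sketch of how that path is composed: `MEDIA_ROOT` plus the field's `upload_to` prefix plus the uploaded file's name. This is simplified (Django's `FileSystemStorage` also renames files on name collisions), and the `MEDIA_ROOT` value here is illustrative:

```python
import os

# How the final storage path is composed: MEDIA_ROOT + upload_to + file name
MEDIA_ROOT = "/srv/myproject/media"   # illustrative absolute path
upload_to = "images/"                 # from the ImageField definition
filename = "example.jpg"              # name of the uploaded file

stored_path = os.path.join(MEDIA_ROOT, upload_to, filename)
print(stored_path)  # /srv/myproject/media/images/example.jpg
```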
### Serving Media Files
In **development**, Django can serve static and media files using the following settings in your `urls.py`:
```python
# urls.py
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
# Your URL patterns...
]
if settings.DEBUG:
urlpatterns += static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
urlpatterns += static(settings.MEDIA_URL, document_root=settings.MEDIA_ROOT)
```
In **production**, you typically configure your web server (e.g., Nginx or Apache) to serve files from `MEDIA_ROOT`.
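For instance, a minimal Nginx configuration for this might look like the following (the paths are illustrative, not from the example project, and should point at your actual `MEDIA_ROOT` and `STATIC_ROOT` directories):

```nginx
server {
    # ... other server configuration ...

    location /media/ {
        # Serve user-uploaded files directly from MEDIA_ROOT
        alias /path/to/myproject/media/;
    }

    location /static/ {
        # Serve collected static files from STATIC_ROOT
        alias /path/to/myproject/staticfiles/;
    }
}
```

Letting the web server handle these files keeps Django out of the request path for assets entirely.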
### Recommended Structure
It’s typically recommended to separate media files from static files:
- **Static Files**:
- Directory: `static/`
- Path: `STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')`
- **Media Files**:
- Directory: `media/`
- Path: `MEDIA_ROOT = os.path.join(BASE_DIR, 'media')`
This helps maintain a clean and manageable structure for your project’s file assets.
### Directory Structure Example
```
# project tree
myproject/
│
├── staticfiles/ # Collected static files (after running `collectstatic`, i.e. `STATIC_ROOT`)
│
├── media/
│ └── images/ # Uploaded media files (via `MEDIA_ROOT`)
│
├── manage.py
├── myproject/
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
|
├── app/
│ ├── __init__.py
│ ├── admin.py
│ ├── apps.py
│ ├── models.py
│ ├── views.py
│ └── templates/
│ └── projects.html
```
### Example Model with Media Files
Assuming you have the following models:
```python
# models.py
from django.db import models
class Project(models.Model):
title = models.CharField(max_length=200)
category = models.CharField(max_length=100)
# Other fields...
class Picture(models.Model):
project = models.ForeignKey(Project, related_name='pictures', on_delete=models.CASCADE)
photo = models.ImageField(upload_to='images/')
description = models.TextField()
# Other fields...
```
### Create a Class-Based-View (CBV) for the template
```python
#views.py
from django.views.generic import ListView
from app.models import Project
class PortfolioView(ListView):
model = Project
template_name = "app/projects.html" # default Django name: project_list.html
context_object_name = "projects"
```
### Using the Template to Display Media Files
Here’s how you can update your template to display the media files:
```html
<!-- projects.html -->
{% for project in projects %}
<div>
<h4>{{ project.title }}</h4>
<p>{{ project.category }}</p>
{% for picture in project.pictures.all %}
<img src="{{ picture.photo.url }}" alt="{{ project.title }}">
{% endfor %}
</div>
{% endfor %}
```
In this setup:
- You access the media file URL in the template using `{{ picture.photo.url }}`, which Django handles based on the `MEDIA_URL`.
### Final Considerations
1. **Development vs. Production**:
- During development, Django can serve static and media files directly.
- In production, it’s recommended to use a web server (like Nginx or Apache) to serve static and media files.
2. **Security**:
- Ensure that media files are served securely, especially if they contain sensitive information.
- You may need to set appropriate permissions and access controls based on your application’s requirements.
By following these guidelines, you can efficiently manage and serve static and media files in your Django project. | doridoro |
1,913,949 | Basic Git and GitHub commands | Learning about version control system is an important part of your developer journey. As I started to... | 0 | 2024-07-09T09:57:13 | https://dev.to/sxryadipta/basic-git-and-github-commands-9jk | webdev, github, git, beginners | Learning about version control system is an important part of your developer journey. As I started to code, one of the first things that I learnt was about Git and GitHub. Straight from the horse’s mouth, “Git is a free and open-source distributed version control system designed to handle everything from small to very large projects with speed and efficiency.” It was created by Linus Torvalds in 2005 and has become the standard for version control in software development due to its efficiency and flexibility.
On the other hand, GitHub is a web-based hosting service for Git repositories. It provides a platform where developers can store their Git repositories in the cloud, making it easier to collaborate with others. GitHub adds features on top of Git, such as issue tracking, pull requests, code review, and project management tools. Open-source projects and teams widely use it for managing and sharing code.
Here are some of the basic commands of git that would help you get started using it:
1. git init
- Initializes a new Git repository in the current directory.
2. git clone [repository_url]
- Clones a remote repository from GitHub or another Git hosting service to your local machine.
3. git add [file(s)]
- Adds file(s) to the staging area to prepare them for a commit.
4. git commit -m "commit message"
- Commits the staged changes to the repository with a descriptive message.
5. git status
- Shows the current state of the working directory and staging area.
6. git pull
- Fetches changes from the remote repository and merges them into the current branch.
7. git push
- Pushes your commits to the remote repository.
8. git branch
- Lists all local branches in the repository.
9. git checkout [branch_name]
- Switches to the specified branch.
10. git merge [branch_name]
- Merges the specified branch into the current branch.
11. git remote -v
- Lists the remote repositories associated with the local repository.
Now let’s learn about some of the GitHub Commands:
1. git remote add origin [repository_url]
- Sets up a remote repository on GitHub as the origin for your local repository.
2. git push -u origin [branch_name]
- Pushes the specified branch to GitHub and sets it as the upstream branch.
3. git pull origin [branch_name]
- Fetches changes from GitHub and merges them into the current branch.
4. git clone [repository_url]
- Clones a repository from GitHub to your local machine.
5. gh repo fork (GitHub CLI)
- Creates a copy of a repository on GitHub under your GitHub account. Note that forking is a GitHub feature rather than a core Git command, so it is done through the GitHub web UI or the GitHub CLI.
6. gh pr create (GitHub CLI)
- Opens a pull request on GitHub for merging changes from a branch.
7. git fetch --all
- Fetches all branches from the remote repository to your local machine.
These commands cover the basic operations you'll perform when using Git and GitHub for version control and collaboration in software development.
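To see how the basic Git commands fit together, here is a small local workflow you can try in an empty directory (the repository name, file name, and user details are just examples):

```shell
mkdir -p demo-repo
cd demo-repo
git init
echo "# Demo" > README.md
git add README.md
git -c user.name="Demo" -c user.email="demo@example.com" commit -m "Initial commit"
git status
git branch
cd ..
```

From there, `git remote add origin [repository_url]` and `git push -u origin [branch_name]` would connect the repository to GitHub.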
| sxryadipta |
1,914,052 | What is the Difference between "sequel" and "SQL"? | As you talk to different people, you will hear two different pronunciations of SQL. SQUEL or... | 0 | 2024-07-08T02:50:00 | https://dev.to/thekarlesi/what-is-the-difference-between-sequel-and-sql-oh0 | webdev, beginners, programming, tutorial |

As you talk to different people, you will hear two different pronunciations of SQL: "sequel" or "S-Q-L".
What is the correct way? Well, it depends on who you ask, and of course, everybody thinks that their way of pronouncing this word is the right way. But, here is a little history about this language.
SQL was originally developed at IBM in the 1970s, and back then it was initially called SEQUEL, short for Structured English Query Language. But they changed the acronym to SQL because SEQUEL was a trademark of an airplane company.
So, to this day, there has been an argument about the right way to pronounce this language.
Generally speaking, people in non-English speaking countries call it "S-Q-L". I'm used to calling it "sequel" because it is shorter and sweeter than "S-Q-L". But if you prefer to call it "S-Q-L", that's fine with me; I'm not going to get mad at you.
So, that's the history behind this language.
Happy Coding!
Karl 🤛
P.S. If you want me to help you personally, get [the 2 hour web developer course](https://karlgusta.gumroad.com/l/eofdr) and I'll guide you when you are stuck to become a web expert and apply for web jobs.
| thekarlesi |
1,914,069 | JavaScript Decorators and Auto-Accessors | A walkthrough of how to create JavaScript decorators and how using auto-accessors helps improve your... | 27,975 | 2024-07-10T01:30:30 | https://dev.to/frehner/javascript-decorators-and-auto-accessors-437i | javascript, webdev | A walkthrough of how to create JavaScript decorators and how using auto-accessors helps improve your developer experience.
## Table of Contents
- [Context and Specification](#context-and-specification)
- [Preface](#preface)
- [Auto-Accessors](#autoaccessors)
- [Creating Decorators](#creating-decorators)
- [A Simple Decorator](#a-simple-decorator)
- [Validation With Decorators](#validation-with-decorators)
- [Decorator Options](#decorator-options)
- [Metadata](#metadata)
## Context and Specification
The [Decorators Proposal on GitHub](https://github.com/tc39/proposal-decorators) already does a great job of breaking down the basic use-cases of decorators. My goal isn't to recreate those examples there, but instead to highlight some lesser-known features and interactions. Additionally, in the next article in this series I'll highlight how to compose or chain multiple decorators on a single class property.
## Preface
Each code sample will come with a link to an interactive [Babel REPL playground](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=Q&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D), so you can try it for yourself without needing to set up a polyfill or spin up a repo. The "Evaluate" option in the top left (under `Settings`) should be checked in all my examples, which means that you will be able to see the code, edit it, open your browser's dev console, and see the logs / results there.
**You don't need to pay attention to the transpiled code on the right-hand side of the Babel REPL**, unless you want to dig into the polyfill for decorators. The left-hand side of the Babel REPL is where you can edit and write code to try out for yourself.
**To emphasize, your developer tools' console should show console logs. If it doesn't, make sure that `Evaluate` is checked in the top left.**
## Auto-Accessors
An important feature of the Decorators spec is auto-accessors. We'll start by learning what they are and how using auto-accessors makes writing decorators easier.
The Decorators Proposal [outlines auto-accessors here](https://github.com/tc39/proposal-decorators?tab=readme-ov-file#class-auto-accessors). But ultimately it's a simple feature; let's look at a basic working example: [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=MYGwhgzhAECyCeBhcVoG8BQ1pmMAplAPYBO0AtvAEJFEj5gB20AvNAGZggT4YC-GDMCKMIAF2hj85AA6tojfAHc4SFBAAUASiEiIdfADoQRAOYaA5OwCWJcRYA0k6TMOUaBpjoxTZb6rT0TPJiJACu-ADcuqIGxmaWPMKMACaOzn7ugQyMWkA&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D).
```js
class MyClass {
accessor myBoolean = false
}
```
In this class definition the `accessor` keyword goes before the property name. However, this hasn't really changed anything about the property yet - next, we'll see how useful auto-accessors are when combined with decorators.
(Note that you can also use `static` with auto-accessors, such as `static accessor myBoolean = false`)
## Creating Decorators
To better understand why we're using an auto-accessor, let's build some decorators.
### A Simple Decorator
We'll start by combining auto-accessors with a decorator that doesn't actually do much, in order to get an idea of the syntax.
Here's a functional decorator that keeps an internal variable, and allows you to get and set that variable through the property on the class: [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=GYVwdgxgLglg9mABAZxgWwA4BsCmARHCOAJwEMoSAKAN1KxBwBpEiwocAPKASkQG8AUIkS4oiGGxzEwdAGp0GiALyJgdZDiGJiOKCGn8twgOa7KvQcKvbd-pBPbS5CzdYC-jIyjO0sFr8IOUjJY8vQ4yoi-ATZ6BkFOoS5eblqpqQIQWKTIyIgAsgCeAMLZuYbCAAKomLgERGQUxFqkEBA4uSSIaIUAQnBwuKRgAhmsyGLsmJFgOADuBSVlyOaZCMiDOAB0WHDGlADkwDDEEyJ7B8xTGFs9_ZvD3ALXt30DQ0gqUMQMANxrYA2uB2e0OGlYABNzsZLogXnd3jhHgIBEA&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D)
```js
function simpleDecorator(value, context) {
let internalValue = false
return {
get() {
return internalValue
},
set(val) {
internalValue = val
return internalValue
}
}
}
class MyClass {
@simpleDecorator
accessor myBoolean
}
```
This decorator returns an object with two methods: `get()` and `set()`. This is how a decorator for an auto-accessor can "decorate" or wrap both the setter and the getter for a property in a single place; we don't have to create a `simpleGetterDecorator` and `simpleSetterDecorator`. Instead, we've combined them into a single definition with auto-accessors, which is easier.
In the end, this looks like a fairly normal function so far - which is great for an introduction!
### Validation With Decorators
To set us up for the rest of the article, let's update our decorator so that it actually does some sort of validation. We'll make a decorator that only allows you to set even numbers and nothing else. Here's what that would look like: [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=GYVwdgxgLglg9mABAgNgTwKIDcCmYByIAtgEY4BOAzgBRYCGKIOANIhAlDgB5QCUiAbwBQiRChxREMMJ3JgGhUhUQBeRAAYRichJBzBW0QHMJ1fsNGXtu_dNnyUisuUOIAvs1eVT9FOdei7GCUkmDEqohOFLQMvAFSwNQwlPh0-NRhRLz-VlYA9HmIACYIAOSS3pJQABY4iL5MCVJQpZSIYHCSdO3EzvGiOlB6SHYUDlEuue7xMImZiACkiABMiACEKmrqOVOIBcVlXSgocADuyEVFPUpUQgCQd4PDUjJjCr0U8W4zr3LvNxFfPEnrZfuMPpNLN9RN9vkIICg6JQ2gBZNAAYURyIMogAAqhMLgCBDKFo6BAIDhkXByIgiIS8BMhHCgiFEJwiAAHCJgHDnNGYpE0OKsuDiAB0JyM1FKRRwwDoIBQkgaOFKrA5nPF9OwjIhcSEmu1DOJALUUHITHhCEoYpwkrg0tKdGAsnZ5DQ0iM7LgiEqL18MCuqvV7JwXONutNzgNRp1RImEQAzNbgnaHU6XW6LZ6wN6oL7_QWLldMs5Q3GTRNY-GtfG9WbEAAWVO2iVSmVZ5SVWB5n2IHBE67ljW1yMJ_VCIRAA&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D)
```js
function onlyEvenNumbers(value, context) {
let internalNumber = 0
return {
get() {
return internalNumber
},
set(val) {
const num = Number(val)
if(isNaN(num)) {
// don't set the value if it can't be coerced to a number
return internalNumber
}
if(num % 2 !== 0) {
// don't allow odd numbers
return internalNumber
}
internalNumber = val
return internalNumber
}
}
}
class MyClass {
@onlyEvenNumbers
accessor myEvenNumber
}
```
So we add logic to the `set()` method and now anyone trying to set the `myEvenNumber` property on our class will go through that validation logic. Nice.
### Decorator Options
Now that we have a nice only-evens decorator, let's make it handle both even and odd numbers with an option to configure which type of number we want!
Fortunately, because this is fairly-normal-looking JavaScript we're writing here, it's not too hard to configure it to work this way. We wrap the original decorator with a function that takes in an option, and then return the decorator. [Babel REPL](https://babeljs.io/repl#?browsers=defaults%2C%20not%20ie%2011%2C%20not%20ie_mob%2011&build=&builtIns=false&corejs=3.21&spec=false&loose=false&code_lz=GYVwdgxgLglg9mABAUwG7LAZwPICdsAmBmAFAgDYCeAoulogLyJS4jICUiA3gFCKK5kUELiShIsBIgLIIcXAEMo8kqgXk2AGkRywUZAA8onXvwCQ5IYhh7ko9QDkQAWwBGdxogAMfRGcHCoty-_PwA5kIkJiGh_kIiSDb69uRObnYx_AC-mpmImJFq5NGhsbqYUIhgLp5p7riq6ux5ZjDAJDCYDgoOJNXO7CWlsQD0I9IIAOSVBZVQABbIiEVs1sDWUJOYVXCVClUu9XnmAQnWtil1GcN-WS1tfTUApIgATIgAhAxMZGBUtBhtgB-byIABciAAjINgmY4acgkk7GBHIdrsMzHcbq0LijUmjcJ4iscBPFEbjUelcHksbdfHc7jwIOQFJhtgBZSgAYRZbOC_AAAmhAXhCMQSCw2M1-AoIBBkGz5IhnDQ6FdqfxfEK6Dh8ERSMB1AVpYhZfLFYSVWL1TxGeU5shnAAHTxgZAAd0QnJ5rNIzXKcEsADpyHAwiRJsKwBCZIaQORKitkJNtPpnUGVQCwOrmjw006M6qMOrPJNFuRQ5MmQhMIHkCGwxGoxCFMBksxcJQbGFmHB8lYbEUYARluo2CnmI6C5m1QTc_nC1mS0wAMzVrB1hvhyN0FttjwsLtgHvKftzPtwIgHKkThcz4tznh5qeL2dUzwAFnXteDoe3zdNfdCVmWBj17FA6Gvepbxfe9s0fHgxkQABaVC0LQp8A1_RtJkvAgY2QOME1HDRk1TWDKGtBC70oohl0QMtkArOAqyw-s_wjPC93bQ9u3A2ZziHEckxg9MrTo6iKKo99V2_TcONwohuIPTs-NPATTzwqC7FE6daIIHMnxo6T6k_OTsO3LjAPbEC1L7KNtNwXTCxMux2AAbiAA&debug=false&forceAllTransforms=false&modules=false&shippedProposals=false&circleciRepo=&evaluate=true&fileSize=false&timeTravel=false&sourceType=module&lineWrap=false&presets=env%2Creact%2Cstage-2&prettier=false&targets=&version=7.24.7&externalPlugins=&assumptions=%7B%7D)
```js
function evensOrOdds(onlyEvens = true) {
return function decorator(value, context) {
let internalNumber = 0
return {
get() {
return internalNumber
},
set(val) {
const num = Number(val)
if(isNaN(num)) {
// don't set the value if it's not a number
return internalNumber
}
if(num % 2 !== (onlyEvens ? 0 : 1)) {
return internalNumber
}
internalNumber = val
return internalNumber
}
}
}
}
class MyClass {
@evensOrOdds(true)
accessor myEvenNumber
@evensOrOdds(false)
accessor myOddNumber
}
```
We've now configured our decorator to take in arbitrary options, which allows users of our decorator to customize its behavior. Yay.
## Metadata
One additional tool your decorators can utilize is `context.metadata`. This object is passed to each decorator, and you could use it for a variety of things, but you need to be careful because the same metadata object is shared by every decorator applied to a given class, so decorators can overwrite each other's entries.
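For example, a decorator could use the metadata object to record which class members it was applied to. As with the earlier examples, this sketch needs the Babel REPL (or a decorators transform) to run, and depending on your environment you may also need a `Symbol.metadata` polyfill; the `collectName` decorator here is my own illustration:

```js
function collectName(value, context) {
  // context.metadata is shared by every decorator applied to the same class,
  // so we append to an array instead of overwriting the key
  (context.metadata.decorated ??= []).push(context.name)
}

class MyClass {
  @collectName accessor first
  @collectName accessor second
}

console.log(MyClass[Symbol.metadata].decorated) // ["first", "second"]
```

Because the object is shared, namespacing your keys (or appending to arrays as above) helps avoid collisions between unrelated decorators.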
## Continue Learning
Continue to [the next post in the series](https://dev.to/frehner/composing-javascript-decorators-2o38) to learn how to compose (or apply multiple) decorators to a single property! | frehner |
1,914,121 | Tech Debt: Code Todos Never Get Done? | Let's talk tech debt and code improvements. We've all been on a development team where ToDo comments... | 0 | 2024-07-08T11:56:36 | https://dev.to/grantdotdev/tech-debt-code-todos-never-get-done-2i31 | programming, productivity, discuss, developer | Let's talk tech debt and code improvements. We've all been on a development team where ToDo comments are left in the code, hoping they'll be picked up during downtime when no active tickets need attention.
## Table of Contents
[The Downtime Never Comes](#the-downtime-never-comes)
[How It Is Normally Handled](#how-it-is-normally-handled)
[Tech Debt Tickets](#tech-debt-tickets)
[ToDo Comments in Code](#todo-comments-in-code)
[What's the Problem?](#whats-the-problem)
[There Is a Solution Though!](#there-is-a-solution-though)
[Forging a Tech Debt Strategy](#forging-a-tech-debt-strategy)
[There's Tools to Help You!](#theres-tools-to-help-you)
[Benefits Of This System](#benefits-of-this-system)
[Conclusion](#conclusion)
## The Downtime Never Comes
However, downtime never arrives. Product owners constantly demand new features, and developers' worklists keep growing. Tech debt tickets often get lost among other priorities, leaving no time for refactors or ToDo improvements.
## How It Is Normally Handled
In my experience, there are two ways tech debt or coding improvements are handled in a development team:
### Tech Debt Tickets
When a developer identifies an improvement or rogue code, they create a ticket (for this article we'll use JIRA) with instructions/details.
First, not all developers are skilled in creating JIRA tickets, leading to poorly titled tickets, missing tags, or overly detailed technical descriptions.
Secondly, these poorly created tickets get lost in a large product backlog. This method only works if tickets are correctly made and tagged as "tech debt" for easy retrieval.
### ToDo Comments in Code (the reason we're here today)
When developers encounter rogue code unrelated to their current work, they leave a comment like:
`//ToDo: Improve this code for performance - use an object literal.`
These comments are committed to the code base and visible to all future viewers.
**Important:** Avoid refactoring code unrelated to your current work. Mixing changes makes future reverts harder, as commits will include both task-related and unrelated code.
#### What's the Problem?
Many developers complain that tech debt rarely gets addressed because there is no documentation or allocated time in the sprint. These comments are often forgotten, only rediscovered by chance.

They never see the light of day and the work is rarely improved, because the team isn't given time for tech debt, or may not be working in that area of the code base for a while (the area where the requested changes are relevant).
#### There Is a Solution Though!
Combine the two methods: add ToDo comments as you go, then use them to power sprint work. Let's take a look at this in more detail.

## Forging a Tech Debt Strategy
To address ToDo comments, commit to a tech debt strategy. I propose the following solution:
1. Decide within your team how often you'll tackle tech debt, depending on your sprint cycles.
2. Add ToDo comments to the codebase as needed.
3. Before the sprint you've committed to tackling tech debt (based on step 1) review ToDo comments. Pick 3 ToDo comments to address and complete in the Sprint and create detailed tickets for them. This ensures all relevant parties are involved and tickets are well-documented with titles, tags, and necessary technical details.
## There's Tools to Help You!
Many IDE's (Integrated Development Environments) and code editors have plugins or built-in functionality to view `ToDo:` comments.
For example, JetBrains' IDEs view them like this:

As you can see, there are clear details around
- how many todo comments were found
- where they are located (filename and code line)
- the comment left.
If Jetbrains' software is a bit pricey for you, and you're utilising something like VS Code, there's a perfect extension called `ToDo Tree` which is free in the extensions library.
Name: Todo Tree
VS Marketplace Link: https://marketplace.visualstudio.com/items?itemName=Gruntfuggly.todo-tree
The extension displays the results similarly:

There are many tools, plugins/extensions for many other editors too, so you're not restricted.
## Benefits Of This System
*Easy to add at the time:* - Comments are simple to write and add whilst at that place in the code, creating a virtual bookmark.
*Doesn't break concentration:* - Adding the comment the moment you spot the issue allows a fire-and-forget approach to the change/improvement. Rather than breaking off to log into JIRA, create a ticket, remember the required tags and fill in all the details, you just leave a comment and continue with your thought process.
Often "code smell" doesn't get recorded because developers don't want to go through the laborious task of creating a ticket when in the middle of a coding flow.
*Add Prefixes to categorise:* - An approach I've implemented in previous teams is adding a tag/prefix to your ToDo comments to help prioritise or group comments (these would be pre-determined by your team, so as not to create repeated or meaningless tags).
Some examples could be:
- High / Medium / Low => priority-based improvements
- "Security" => indicating a security improvement
- "Styling" => improvement around code style
- "Performance" => improve performance, e.g switch case to object literal
- "BP" => best practice e.g. variable naming, code layout, usage of particular functions etc.
Example of how these would be used:
```ts
// Single tag
//TODO: Performance - update this switch case return, to utilise an object literal lookup

// Combined tags
//TODO: High, Security - update this function to factor in exposing API secrets
```
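As a concrete illustration of the `Performance` tag above, here is a hypothetical sketch of the switch-to-object-literal refactor such a comment might be asking for (all names invented for the example):

```typescript
// Before: the switch statement a "//TODO: Performance" comment might flag
function statusColorSwitch(status: string): string {
  switch (status) {
    case "open":
      return "green";
    case "closed":
      return "red";
    default:
      return "grey";
  }
}

// After: an object-literal lookup - shorter, and new statuses are one line each
const statusColors: Record<string, string> = {
  open: "green",
  closed: "red",
};

function statusColor(status: string): string {
  return statusColors[status] ?? "grey";
}
```

Both functions return the same results; the lookup version is just easier to extend and scan.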
Using various tools' filtering ability means sprints can be focussed on tackling particular types of tech debt, or higher priority issues using the pre-determined tags.
## Conclusion
Addressing tech debt and code improvements is a constant challenge for development teams. The traditional methods of handling tech debt, such as creating JIRA tickets or leaving ToDo comments in the code, often fail due to a lack of documentation, prioritization, or effective follow-up. However, combining these methods can create a more efficient strategy, meaning that tech debt is dealt with regularly.
By integrating ToDo comments with a structured approach to tech debt during backlog grooming sessions, teams can ensure these improvements are systematically addressed rather than on an ad-hoc basis. Utilising IDEs' built-in tools or add-on extensions can further streamline this process by providing easy visibility and management of these comments.
Ultimately, a committed tech debt strategy that includes regular review cycles, prioritisation, and commitment to resolving comments can help teams manage their workload more effectively. This approach not only maintains code quality but also ensures that necessary improvements are not overlooked, fostering a more sustainable development process.
As always I'd love to hear your thoughts and opinions on this topic. Feel free to follow for future posts.
For more tips, discussions, and to hear about other posts I make elsewhere drop me a follow on [Twitter](https://twitter.com/grantdotdev). | grantdotdev |
1,914,126 | Deploy a Static Website with Route53, CloudFront and AWS Certificate using a Terraform Script | Automation is queen in this side of our world, the more of your work you can automate, the... | 0 | 2024-07-10T13:51:05 | https://dev.to/chigozieco/deploy-a-static-website-with-route53-cloudfront-and-aws-certificate-using-a-terraform-script-25i8 | terraform, aws, devops, automation | Automation is queen in this side of our world, the more of your work you can automate, the better.
The use of Terraform is very necessary for cloud engineers in order to automate deployments of your infrastructure. Terraform is an infrastructure as code tool that lets you define infrastructure resources in human-readable configuration files that you can version, reuse, and share. You can then use a consistent workflow to safely and efficiently provision and manage your infrastructure throughout its lifecycle.
<hr>
# Terraform Provider and Initialize Terraform
You can find the complete terraform configuration code for the infrastructure we will be building today [here](https://github.com/ChigozieCO/altschool-3rd-semester/tree/main/03-Assignment-02).
To begin I will create a `main.tf` file and a `provider.tf` file for my root module. The first order of business is to configure the AWS provider for Terraform and initialize to get it ready to deploy AWS resources.
A provider is used by Terraform to interface with the API of whatever infrastructure you're trying to build. Since we are building AWS infrastructure, we will use the AWS provider in our configuration. If you were building on GCP or Azure, you would use the provider for that cloud service.

In the `provider.tf` file I will add the terraform block as well as the provider block; the terraform block lets Terraform use the AWS API to build our infrastructure, while the provider block configures the AWS provider with the necessary credentials.
In the `provider.tf` file add the following lines of code:
```hcl
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
}
}
}
provider "aws" {
region = "us-east-1"
shared_credentials_files = ["~/.aws/credentials"]
}
```
The great thing about terraform is that you do not have to have any of these codes memorized as you just start out with terraform, you can always refer to the [Terraform documentation](https://developer.hashicorp.com/terraform/docs) for help.
In the terraform block I specified AWS as the required provider with source as `hashicorp/aws` but I omitted the `version` argument as I would like terraform to download the latest version whenever it is initialized.
The provider block provides the information needed to access AWS specifically. In it I specified the `region` argument; that is the only credential I will be hardcoding in my configuration. As I have already set up my AWS credentials using `aws configure` with the AWS CLI, I added the `shared_credentials_files` argument (if you have multiple profiles, ensure you include the `profile` argument and supply the profile name) so Terraform will pick up these credentials and use them to build our infrastructure.
For a guide on how to configure your AWS credentials in the AWS CLI, check out [this post](https://dev.to/chigozieco/host-a-static-website-using-amazon-s3-and-serve-it-through-amazon-cloudfront-3om8#configure-aws-cli) of mine where I take you through the process.
Now I am ready to run a `terraform init` to initialize the project so that terraform can download the provider required for this project and connect to AWS.
Ensure you are in your project directory and run the below command in your terminal:
```sh
terraform init
```

<hr>
# Terraform Modules
Terraform modules are a powerful feature that allows you to organize and reuse infrastructure configurations. Modules encapsulate groups of resources that are used together and can be referenced and instantiated multiple times within your Terraform configuration.
Modules are a great way to follow the `Don't Repeat Yourself` (DRY) principle of software development which states that code should be written once and not repeated. Modules encapsulate a set of Terraform config files created to serve a specific purpose.
Modules are used to create reusable components inside your infrastructure. Depending on how they are written, there are two types of modules (root and child), and depending on whether they are published, another two (local and published).

Reusability is the key consideration when writing Terraform code. Repeating the same configuration is laborious because HCL is a declarative language and can be very wordy, so for optimal reusability we should use modules as much as possible and define them from the beginning.
We will be writing our configuration as modules and then run the modules to build the configuration.
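For instance, once a module exists, the same code can stand up any number of similar resources just by instantiating it again; a hypothetical sketch (bucket names invented, using the S3 module built later in this post):

```hcl
# Sketch: one module definition, reused for two different buckets
module "site-bucket" {
  source      = "./Modules/s3-bucket"
  bucket-name = "example-site-bucket-0001"
}

module "logs-bucket" {
  source      = "./Modules/s3-bucket"
  bucket-name = "example-logs-bucket-0001"
}
```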
It is recommended to place locally developed modules in a `modules` directory, but you can name it whatever you like.
To begin I created the `Modules` directory, this is where all my modules will reside.
<hr>
# Create S3 Bucket Module
For my S3 bucket module, I created a directory named `s3-bucket` in the `Modules` directory. In this directory I created the following files: `main.tf`, `variables.tf`, `outputs.tf`.
## Create S3 Bucket
#### **modules/s3-bucket/variables.tf**
In the `variables.tf` I define the bucket name as a variable with the code below:
```hcl
variable "bucket-name" {
description = "The name of the S3 bucket"
type = string
validation {
condition = (
length(var.bucket-name) >= 3 && length(var.bucket-name) <= 63 &&
can(regex("^[a-z0-9][a-z0-9-.]*[a-z0-9]$", var.bucket-name))
)
error_message = "The bucket name must be between 3 and 63 characters, start and end with a lowercase letter or number, and can contain only lowercase letters, numbers, hyphens, and dots."
}
}
```
The validation simply checks that the bucket name is between 3 and 63 characters, start and end with a lowercase letter or number, and contains only lowercase letters, numbers, hyphens, and dots. This is necessary to prevent any error that AWS might throw as a result of wrong bucket naming convention.
The significance of using a variables file is simplicity and easy refactoring of the code. If we need to change the value of a variable, we only change it in one place, and the change is picked up everywhere the variable is referenced via the `var.` prefix.
#### **modules/s3-bucket/main.tf**
Now to the creation of the S3 bucket, we will add the `aws_s3_bucket` resource block to the module's `main.tf` file as shown below.
```hcl
# Create S3 Bucket
resource "aws_s3_bucket" "site-bucket" {
bucket = var.bucket-name
force_destroy = true
}
```
The bucket name is supplied by a variable and will be substituted with the value at creation.
#### **modules/s3-bucket/outputs.tf**
This is the file in which we will output some of the values we will use in the rest of our configurations.
Go ahead and create a new file called `outputs.tf` in the s3-bucket module and add the below code to the file:
```hcl
output "bucket_regional_domain_name" {
description = "This is the bucket domain name including the region name."
value = aws_s3_bucket.site-bucket.bucket_regional_domain_name
}
```
## Add the S3 Bucket to the Root Module
#### **main.tf**
To test that this module works we will create an s3 bucket using this module we have just written. Head to the `main.tf` file of your root module, outside your `Module` directory and enter the below piece of code in the file.
```hcl
module "s3-bucket" {
source = "./Modules/s3-bucket"
bucket-name = var.bucket-name
}
```
#### **variables.tf**
Create three new files also in your root module called `variables.tf`, `outputs.tf` and `terraform.tfvars`.
In the `variables.tf` add the code below
```hcl
variable "bucket-name" {
type = string
}
```
#### **outputs.tf**
In the `outputs.tf` add the following code:
```hcl
output "bucket-name" {
  value = module.s3-bucket.bucket_regional_domain_name
}
```
We will make use of this output when creating our cloudfront distribution.
#### **terraform.tfvars**
In the `terraform.tfvars` file, enter the code below:
```
bucket-name = "<your unique bucket name>
```
:warning: **NOTE**
Your `.tfvars` file should never be committed to version control, add this file to your `.gitignore` file. Check out my [`.gitignore` file](https://github.com/ChigozieCO/altschool-3rd-semester/blob/main/03-Assignment-02/.gitignore) for files to add to yours.
You can also use [this site](https://www.toptal.com/developers/gitignore/) to generate your gitignore files for this project and future projects.
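A minimal `.gitignore` for a Terraform project might look like this (a sketch covering the usual Terraform entries; extend it to suit your setup):

```
# Local .terraform directories and provider binaries
.terraform/

# State files, which can contain secrets
*.tfstate
*.tfstate.*

# Variable files with sensitive values
*.tfvars
*.tfvars.json

# Crash logs
crash.log
```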
In your terminal, run the `terraform init` command again, you must rerun the command when you add a module or change provider. If you fail to run it and run any other terraform command you will get the below error message.

Now you can run `terraform plan` to see what terraform plans to create in your AWS account.
To create the bucket, run
```sh
terraform apply
```
Whenever you run this command Terraform will ask whether you want to carry out the action, and you can answer yes or no. To skip the question, include the `--auto-approve` flag directly in the command, as shown below:
```sh
terraform apply --auto-approve
```
If you followed along correctly, you will have successfully created an S3 bucket. We will destroy it and continue writing our Terraform script, since we are doing more than merely creating a bucket.
Run the command below to destroy the created bucket:
```sh
terraform destroy
```
<hr>
## TF alias for Terraform
Before we continue, I want to set a short alias for Terraform, since we will be calling it a lot. Aliasing `terraform` to `tf` shortens our commands: we no longer need to spell out `terraform`, we just run e.g. `tf apply` instead of `terraform apply`.
We can do this by setting up an alias in the bash profile.
To open the bash profile for the terminal, I used the below command:
```sh
vi ~/.bash_profile
```
This is where we can set bash configurations, we set our alias by calling alias with the short form as seen:
```sh
alias tf="terraform"
```
To now use the command we set as alias we need to run that bash profile script first so that the change is applied.
```sh
source ~/.bash_profile
```
Now we can use tf instead of terraform
## Upload Assets Into S3 Bucket
Before writing the code to upload our website assets into the bucket we should create a directory and save our assets. I will save this in our root module as `web-assets` and add my website assets in there.
#### **modules/s3-bucket/main.tf**
We will use the `for_each` meta-argument to upload our bucket assets; this approach is useful when you create multiple resources with similar configurations, and we have multiple files to upload.

It does not make sense to copy and paste Terraform resource blocks with minor tweaks in each one. Doing so hurts readability and unnecessarily lengthens the IaC configuration files. Add the below code to your s3-bucket module `main.tf` file:
```hcl
# Upload objects into the s3 Bucket
resource "aws_s3_object" "upload-assets" {
for_each = fileset("${var.web-assets-path}", "**/*")
bucket = aws_s3_bucket.site-bucket.bucket
key = each.value
source = "${var.web-assets-path}/${each.value}"
content_type = lookup(var.mime_types, regex("\\.[^.]+$", each.value), "application/octet-stream")
}
```
The `for_each` will iterate through the files in the website directory. I used the `fileset` function to iterate over all files and directories in the specified path, making each file/directory available to the `for_each` loop in the resource definition.
The path isn't hardcoded, it is defined as a variable in the `variable.tf` file as you will see below.
The `for_each` loop over `fileset` returns file paths, not key-value pairs, which is why we use `each.value` as our key rather than `each.key`.

We want each file to be served with its correct MIME type so the website displays properly, which is why we used the `lookup` function in the `content_type` argument. `lookup(map, key, default)` searches for `key` in `map` and returns the associated value if found; if the key is not found, it returns `default`.

The `regex` function extracts the file extension from `each.value`, the file name obtained from `fileset`, in order to determine a more accurate MIME type.
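You can sanity-check the expression in `terraform console`; for a hypothetical file `css/styles.css` it resolves roughly like this (note that `regex` raises an error for a file with no extension, so wrapping it in `try()` may be worth considering):

```sh
$ terraform console
> regex("\\.[^.]+$", "css/styles.css")
".css"
> lookup({ ".css" = "text/css" }, ".css", "application/octet-stream")
"text/css"
```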
#### **modules/s3-bucket/variables.tf**
Here we will define the variables we called in the piece of code above, add the below code to the file:
```hcl
# Set the variable for the file path of the files to be uploaded to the bucket
variable "web-assets-path" {
description = "This is the location of our website files"
type = string
}
variable "mime_types" {
description = "Map of file extensions to MIME types"
type = map(string)
default = {
".html" = "text/html"
".css" = "text/css"
".png" = "image/png"
".jpg" = "image/jpeg"
".jpeg" = "image/jpeg"
".pdf" = "application/pdf"
"json" = "application/json"
"js" = "application/javascript"
"gif" = "image/gif"
# Add more extensions and MIME types as needed
}
}
```
## Update Root Module `main.tf`, `variable.tf` and `terraform.tfvars` Files
#### **main.tf**
We updated our module, so we need to update our root module configuration as well. Your root module's `main.tf` file should now look like this:
```hcl
module "s3-bucket" {
source = "./Modules/s3-bucket"
bucket-name = var.bucket-name
web-assets-path = var.web-assets-path
}
```
#### **variable.tf**
Your root module's `variable.tf` file should now look like this:
```hcl
variable "bucket-name" {
type = string
}
variable "web-assets-path" {
type = string
}
```
#### **terraform.tfvars**
Your root module's `terraform.tfvars` file should now look like this:
```
bucket-name = "<your unique bucket name>
web-assets-path = "<the path to your website files (best to supply the absolute path)>
```
<hr>
# Create Hosted Zone in Route53
:warning: **Note**
>Even if you are still eligible for the AWS free tier, the Route53 service is never free. This hosted zone will attract a charge of $0.50 per month.
You need a custom domain name for this step, so if you don't already have one, pause, get one and continue along.
This step will be completed manually. Initially I was going to create the hosted zone manually and then import it into Terraform, but on further consideration there is no reason to, as I wouldn't want Terraform deleting the hosted zone.

The reason for creating the hosted zone manually is simply that when you create one, you are given a new set of name servers which you need to add to your custom domain's configuration at your registrar. Terraform cannot complete that step for you, so your configuration would fail until you add the name servers to your custom domain yourself.
Since we already know this, we will manually create the hosted zone, add the name servers to our custom domain and then, using the terraform `data` resource, retrieve details of the created hosted zone into terraform to avoid any issues that might have arisen.
## Create Hosted Zone
- Open your [AWS management console](https://aws.amazon.com/).
- In `Services` under the `Network and Content delivery` category choose `Route53`
- Select `create hosted zone`
- It's pretty straightforward from there, enter your domain name in the space for `domain name`.
- Select `public hosted zone` under `type`.
- You can add a tag and description if you want.
- At the bottom of the page, click on `create hosted zone`.

- Once your hosted zone has been created, open it to view details and copy the name servers supplied by AWS.
- Copy each name server and replace those already in our domain name with these new ones.
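You can verify the delegation took effect with `dig`, substituting your own domain (propagation can take a while):

```sh
# Should return the four awsdns name servers from your hosted zone
dig +short NS example.com
```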
## Retrieve the Details of the Hosted Zone Resource into Terraform Configuration
To add details of our hosted zone resource to Terraform we will create a new module in the `Modules` directory called `route53`.
#### **Modules/route53/main.tf**
Add the following code to the file:
```hcl
# Retrieve information about your hosted zone from AWS
data "aws_route53_zone" "created" {
name = var.domain_name
}
```
The above code might not look like much, but it retrieves the details of the hosted zone in our AWS account that matches the supplied name and makes them available wherever we reference that `data` resource in our configuration.
#### **Modules/route53/variables.tf**
You know the drill, add the declared variables to keep your code reusable.
```hcl
# domain name variable
variable "domain_name" {
description = "This is the name of the hosted zone."
type = string
}
```
<hr>
# Create TLS/SSL Certificate and Validate it
This is not a one-stage process in Terraform: we first create the certificate resource and then validate it with another resource block.
## Create Certificate
We will create the certificate before our CloudFront distribution, since the distribution will reference the SSL certificate.
As usual, create a `certificate` directory in the `Modules` directory to house our certificate module, and create 3 new files in it: `main.tf`, `variables.tf` and `outputs.tf`.
#### **Modules/certificate/main.tf**
```hcl
# Create the TLS/SSL certificate
resource "aws_acm_certificate" "cert" {
domain_name = var.domain_name
validation_method = var.validation_method
subject_alternative_names = var.subject_alternative_names
# Ensure that the resource is rebuilt before destruction when running an update
lifecycle {
create_before_destroy = true
}
}
```
#### **Modules/certificate/variables.tf**
Add the necessary variables to your variables file:
```hcl
variable "domain_name" {
description = "Domain name for which the certificate should be issued"
type = string
}
variable "validation_method" {
description = "Which method to use for validation."
type = string
default = "DNS"
}
variable "subject_alternative_names" {
description = "Set of domains that should be SANs in the issued certificate."
type = list(string)
default = []
}
```
#### **Modules/certificate/outputs.tf**
Define the outputs we will need to reference in other modules
```hcl
output "cert-arn" {
value = aws_acm_certificate.cert.arn
}
output "domain_validation_options" {
value = aws_acm_certificate.cert.domain_validation_options
}
```
## Create the ACM Certificate Validation Record
Before we create the resource to validate the certificate, we need to create a DNS record in AWS Route 53, which is used to validate the domain ownership for an AWS ACM certificate. The DNS record details (name, value, type) are obtained from the ACM certificate's domain validation options.
We will create this record in route53 so head on to your `Modules/route53/main.tf` file. Add the following to your file:
#### **Modules/route53/main.tf**
```hcl
# Create DNS record that will be used for our certificate validation
resource "aws_route53_record" "cert_validation" {
for_each = { for dvo in var.domain_validation_options : dvo.domain_name => {
name = dvo.resource_record_name
type = dvo.resource_record_type
record = dvo.resource_record_value
} }
name = each.value.name
type = each.value.type
records = [each.value.record]
ttl = 60
zone_id = data.aws_route53_zone.created.zone_id
}
```
The code above will create a CNAME record in your domain's hosted zone, which will be used to validate the certificate you created. However, if you try to create the certificate and the record in the same apply, you will get an error message like the one below.

This is why we will run the `terraform apply` command in two stages as you will see eventually.
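The two stages could look like this, assuming the root module instantiates the certificate module as `module.certificate` (a name not shown in this article, so adjust it to match yours):

```sh
# Stage 1: create only the certificate, so domain_validation_options becomes known
terraform apply -target=module.certificate --auto-approve

# Stage 2: apply the rest of the configuration, including the validation records
terraform apply --auto-approve
```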
#### **Modules/route53/variables.tf**
Add the following to your route53 module variables file
```hcl
variable "domain_validation_options" {
description = "The domain validation options from the ACM certificate."
type = list(object({
domain_name = string
resource_record_name = string
resource_record_type = string
resource_record_value = string
}))
}
```
## Validate the Certificate
The `aws_acm_certificate` resource does not handle the certificate validation in terraform, we need to use the `aws_acm_certificate_validation` resource to accomplish that.
As I explained earlier and as you saw from the error message, the certificate must exist and the value of `domain_validation_options` must be known before Terraform will honour our `for_each` statement; this is why we first create the certificate, then the record, and then validate.
For the above reason we won't put the validation step in the certificate module but in the route53 module, so open your `route53` module.
The code for the actual validation is seen below:
#### **Modules/route53/main.tf**
```hcl
# Validate the certificate
resource "aws_acm_certificate_validation" "validate-cert" {
certificate_arn = var.certificate_arn
validation_record_fqdns = [for record in aws_route53_record.cert_validation : record.fqdn]
depends_on = [aws_route53_record.cert_validation]
}
```
The `depends_on` argument will force terraform to create the `aws_route53_record.cert_validation` resource first before attempting to validate our certificate.
#### **Modules/route53/variables.tf**
Add the required variables to the module's variables.tf file
```hcl
variable "certificate_arn" {
type = string
}
```
#### **Modules/route53/outputs.tf**
Add the following to your route53 module outputs.tf file
```hcl
output "dns_records" {
value = aws_route53_record.cert_validation
}
```
We still have to create the CloudFront distribution alias record; this is the record where we set our CloudFront distribution domain name as an alias for our custom domain name. We will do this after creating our CloudFront distribution.
<hr>
# Create CloudFront Module
Now we can go ahead and create our CloudFront distribution. Create a `cloudfront` directory in the `Modules` directory and create `main.tf`, `variables.tf` and `outputs.tf` files in it.
## Create Origin Access Control - OAC
The first thing we need to do is to create the `Origin Access Control` we will use in the configuration of our distribution. Do this by adding the code below to your `main.tf` file:
#### **Modules/cloudfront/main.tf**
```hcl
# Create the access origin control that will be used in creating our cloudfront distribution with s3 origin
resource "aws_cloudfront_origin_access_control" "assign-oac" {
name = var.oac-name
description = "An origin access control with s3 origin domain for cloudfront"
origin_access_control_origin_type = var.origin_access_control_origin_type
signing_behavior = var.signing_behavior
signing_protocol = var.signing_protocol
}
```
#### **Modules/cloudfront/variables.tf**
Declare the variables:
```hcl
variable "oac-name" {
description = "This is the name of the cloudfront origin Access control with s3 bucket origin domain"
type = string
default = "s3-bucket-oac"
}
variable "origin_access_control_origin_type" {
description = "The origin type must be the same as the origin domain"
type = string
default = "s3"
}
variable "signing_behavior" {
description = "Specifies which requests CloudFront signs."
type = string
default = "always"
}
variable "signing_protocol" {
description = "Determines how CloudFront signs (authenticates) requests."
type = string
default = "sigv4" # The only valid value
}
```
## Create Distribution
Now we can create our distribution. Add the following in the `main.tf` file:
#### **Modules/cloudfront/main.tf**
```hcl
# Create CloudFront Distribution
resource "aws_cloudfront_distribution" "cdn" {
origin {
domain_name = var.cdn-domain_name-and-origin_id
origin_id = var.cdn-domain_name-and-origin_id
origin_access_control_id = aws_cloudfront_origin_access_control.assign-oac.id
}
default_cache_behavior {
compress = true
viewer_protocol_policy = "redirect-to-https"
allowed_methods = [ "GET", "HEAD" ]
cached_methods = [ "GET", "HEAD" ]
target_origin_id = var.cdn-domain_name-and-origin_id
min_ttl = 0
default_ttl = 3600
max_ttl = 86400
forwarded_values {
query_string = false
cookies {
forward = "all"
}
}
}
restrictions {
geo_restriction {
restriction_type = var.restriction_type
}
}
viewer_certificate {
acm_certificate_arn = var.acm_certificate_arn
ssl_support_method = "sni-only"
minimum_protocol_version = "TLSv1.2_2021"
cloudfront_default_certificate = false
}
enabled = true
is_ipv6_enabled = true
default_root_object = var.default_root_object
aliases = [var.domain_name, "www.${var.domain_name}"]
}
```
#### **Modules/cloudfront/variables.tf**
Add the required variables
```hcl
variable "restriction_type" {
description = "Method that you want to use to restrict distribution of your content by country"
type = string
default = "none"
}
variable "default_root_object" {
description = "Object that you want CloudFront to return when an end user requests the root URL."
type = string
default = "index.html"
}
variable "domain_name" {
description = "your custom Domain name for which the certificate should be issued"
type = string
}
variable "cdn-domain_name-and-origin_id" {
type = string
}
variable "acm_certificate_arn" {
type = string
}
```
#### **Modules/cloudfront/outputs.tf**
```hcl
output "cloudfront-arn" {
value = aws_cloudfront_distribution.cdn.arn
}
output "cloudfront_domain_name" {
value = aws_cloudfront_distribution.cdn.domain_name
}
output "cloudfront_hosted-zone_id" {
value = aws_cloudfront_distribution.cdn.hosted_zone_id
}
```
<hr>
# Configure S3 Bucket Permission
Now we need to add to our S3 bucket the specific permissions CloudFront needs to interact with the bucket and its objects.
Head back to your s3 bucket module.
This is the policy that will allow our CloudFront distribution access to our S3 bucket and its objects through its origin access control. Add the code below to our s3-bucket module's `main.tf` file:

#### **Modules/s3-bucket/main.tf**
```hcl
# Add the permissions needed by cloudfront's origin access control to access the bucket and it's objects
resource "aws_s3_bucket_policy" "cloudfront-oac-policy" {
bucket = aws_s3_bucket.site-bucket.bucket
policy = jsonencode({
Version = "2012-10-17",
Statement = [
{
Sid = "AllowCloudFrontServicePrincipal",
Effect = "Allow",
Principal = {
Service = "cloudfront.amazonaws.com"
},
Action = "s3:GetObject",
Resource = "${aws_s3_bucket.site-bucket.arn}/*",
Condition = {
StringLike = {
"aws:UserAgent" = "Amazon CloudFront"
}
}
}
]
})
}
```
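As a side note, the bucket policy AWS currently documents for origin access control conditions on the distribution's ARN (`aws:SourceArn`) rather than on the `aws:UserAgent` header. A sketch of that variant, assuming a hypothetical `cloudfront_distribution_arn` variable fed from the cloudfront module's `cloudfront-arn` output (note that wiring it this way adds a dependency from the bucket policy to the distribution):

```hcl
# Alternative policy (sketch): scope access to one specific distribution.
# var.cloudfront_distribution_arn is hypothetical and not declared elsewhere in
# this project; it would be passed in from module.cloudfront.cloudfront-arn.
resource "aws_s3_bucket_policy" "cloudfront-oac-policy" {
  bucket = aws_s3_bucket.site-bucket.bucket
  policy = jsonencode({
    Version = "2012-10-17",
    Statement = [
      {
        Sid       = "AllowCloudFrontServicePrincipal",
        Effect    = "Allow",
        Principal = { Service = "cloudfront.amazonaws.com" },
        Action    = "s3:GetObject",
        Resource  = "${aws_s3_bucket.site-bucket.arn}/*",
        Condition = {
          StringEquals = {
            "AWS:SourceArn" = var.cloudfront_distribution_arn
          }
        }
      }
    ]
  })
}
```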
<hr>
# Create CloudFront Distribution Alias Record
We will create a new module specially for this: create a module called `alias` containing two files, `main.tf` and `variables.tf`.
#### **Modules/alias/main.tf**
```hcl
# Retrieve information about your hosted zone from AWS
data "aws_route53_zone" "created" {
name = var.domain_name
}
# Create an alias that will point to the cloudfront distribution domain name
resource "aws_route53_record" "alias" {
zone_id = data.aws_route53_zone.created.zone_id
name = var.domain_name
type = "A"
alias {
name = var.cloudfront_domain_name
zone_id = var.cloudfront-zone-id
evaluate_target_health = false
}
}
```
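The CloudFront distribution above also lists `www.${var.domain_name}` as an alias, so unless your route53 module already creates a www record, a second alias record is needed. A sketch reusing the same variables:

```hcl
# Sketch: alias record for the www subdomain, pointing at the same distribution.
resource "aws_route53_record" "www-alias" {
  zone_id = data.aws_route53_zone.created.zone_id
  name    = "www.${var.domain_name}"
  type    = "A"

  alias {
    name                   = var.cloudfront_domain_name
    zone_id                = var.cloudfront-zone-id
    evaluate_target_health = false
  }
}
```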
#### **Modules/alias/variables.tf**
Declare the necessary variables as usual:
```hcl
variable "domain_name" {
description = "your custom domain name"
type = string
}
variable "cloudfront_domain_name" {
type = string
}
variable "cloudfront-zone-id" {
type = string
}
```
<hr>
# Putting it all Together: Move Modules into the Root Module
It's now time to put our modules to use in building our infrastructure. We do this by calling each module in the `main.tf` file of our root module, which is our main configuration file.
We previously added the s3-bucket module to `main.tf` when testing it out; now we will add the rest of our modules.
#### **main.tf**
Your final configuration in your `main.tf` of your root module should look like this:
```hcl
# Create S3 bucket, upload objects into the bucket and set bucket policy.
module "s3-bucket" {
source = "./Modules/s3-bucket"
bucket-name = var.bucket-name
web-assets-path = var.web-assets-path
}
# Create and validate TLS/SSL certificate
module "certificate" {
source = "./Modules/certificate"
domain_name = var.domain_name
subject_alternative_names = ["www.${var.domain_name}"]
}
# Create OAC and cloudfront distribution,
module "cloudfront" {
source = "./Modules/cloudfront"
domain_name = var.domain_name
cdn-domain_name-and-origin_id = module.s3-bucket.bucket_regional_domain_name
acm_certificate_arn = module.certificate.cert-arn
depends_on = [ module.route53 ]
}
# Import the hosted zone from AWS, create dns records for certificate validation, and create A and CNAME records.
module "route53" {
source = "./Modules/route53"
domain_name = var.domain_name
domain_validation_options = module.certificate.domain_validation_options
certificate_arn = module.certificate.cert-arn
}
# Create an alias to point the cloudfront cdn to our domain name.
module "alias" {
source = "./Modules/alias"
domain_name = var.domain_name
cloudfront_domain_name = module.cloudfront.cloudfront_domain_name
cloudfront-zone-id = module.cloudfront.cloudfront_hosted-zone_id
depends_on = [ module.cloudfront ]
}
```
#### **variables.tf**
Now we will declare the necessary variables; your final `variables.tf` file should look like this:
```hcl
variable "bucket-name" {
type = string
}
variable "web-assets-path" {
type = string
}
variable "domain_name" {
type = string
}
```
#### **terraform.tfvars**
Add your secrets to your `*.tfvars` file like so:
```hcl
bucket-name = "<your unique bucket name>"
web-assets-path = "<the path to your website files (best to supply the absolute path)>"
domain_name = "<your custom domain name>"
```
Now we are all set to deploy our application.
<hr>
# Create the Infrastructure
### Install Modules
First run `tf init` to install all the added modules.
```sh
tf init
```

### Validate Configuration
Next you can run the validate command to validate your configuration
```sh
tf validate
```

### Create Infrastructure
As explained earlier, `for_each` can only iterate over values that are already known at the time the `apply` command is run; therefore, if we were to apply everything before creating our certificate, Terraform would throw an error.
To avoid this error we will apply in two stages, first with the `--target` flag and then apply the whole configuration.
First run:
```sh
tf apply --target module.certificate
```



Lastly, create the remaining resources:
```sh
tf apply
```


### Confirm Build
You can open your AWS console to see that the resources have been built.
Open your browser, navigate to your custom domain, and you will see your website; here is mine.

<hr>
# Cleanup
Remember to clean up your environment when you are done. Don't leave the resources running in AWS to avoid unnecessary billing.
Use the destroy command:
```sh
tf destroy
```


| chigozieco |
1,914,239 | Ambient Mesh with Istio like a boss! | Why ambient mesh and not sidecar? Ambient mesh uses a shared agent on each Kubernetes... | 28,023 | 2024-07-10T17:59:14 | https://matthewdavis.io/ambient-mesh-with-istio | istio, kubernetes, networking, servicemesh | ## Why ambient mesh and not sidecar?
Ambient mesh uses a shared agent on each Kubernetes node, called a ***ztunnel***. This zero-trust tunnel securely connects and authenticates elements within the mesh, redirecting all traffic through the local ztunnel agent.
This separation allows operators to manage the data plane independently from applications, enabling easier scaling and upgrades.
Ztunnels provide core service mesh functions: zero trust, mTLS, telemetry, authentication, and L4 authorization, without parsing HTTP.
Because ztunnel doesn't perform L7 processing, it is leaner than a sidecar and well suited to run as shared infrastructure. The traditional sidecar, a staple of Istio for years, now faces a significant competitor: a model that promises better performance, security, and overall efficiency in service mesh architectures.
## Pre-requisites
This post assumes you already have istio set up in `ambient` mode. In case you don’t, this will get you up and running in two minutes:
```bash
helm repo add istio https://istio-release.storage.googleapis.com/charts --force-update
helm install -n istio-system istio-base istio/base --create-namespace
helm install -n istio-system istio-cni istio/cni --set profile=ambient
helm install -n istio-system istiod istio/istiod --set profile=ambient
helm install -n istio-system ztunnel istio/ztunnel
helm install -n istio-ingress istio-ingress istio/gateway --create-namespace
kubectl get crd gateways.gateway.networking.k8s.io &> /dev/null || { kubectl kustomize "github.com/kubernetes-sigs/gateway-api/config/crd?ref=v1.1.0" | kubectl apply -f -; }
```
## The `Gateway`
The Gateway API simplifies the configuration of ingress and egress traffic, providing a unified approach to managing traffic routing within the mesh. This helps in maintaining a clear and manageable structure for directing traffic to the appropriate services.
### Setting Up the Gateway
To set up the Gateway, you need to create a `Gateway` resource that defines how traffic enters the mesh.
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: gateway
namespace: ingress-internal
spec:
gatewayClassName: istio
addresses: #
- value: 10.0.16.3 # This block is optional
type: IPAddress #
listeners:
- name: http
hostname: "*.matthewdavis.io"
port: 80
protocol: HTTP
allowedRoutes:
namespaces:
from: Selector
selector:
matchLabels:
gateway-internal-access: "true"
- name: https
hostname: "*.matthewdavis.io"
port: 443
protocol: HTTPS
tls:
mode: Terminate
certificateRefs:
- name: ingress-internal # Ensure secret is in the same namespace!
allowedRoutes:
namespaces:
from: Selector
selector:
matchLabels:
gateway-internal-access: "true"
```
### Labeling the Namespace
<aside>
💡 If you do not label your namespace(s) to match the selector above your routes will not be added!
</aside>
Next, label the namespace that will host your internal services to allow the Gateway to route traffic to them. This is a crucial step: it tells Istio to look for `HTTPRoute` objects in this namespace.
```bash
kubectl label ns internal-services gateway-internal-access=true
```
## Create `HTTPRoute` Objects
Create `HTTPRoute` resources to define how traffic should be routed to your services. Here’s an example for two different services:
### Tool A
Define an `HTTPRoute` for `tool-a`:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: tool-a
namespace: internal-services
spec:
hostnames:
- "tool-a.matthewdavis.io"
parentRefs:
- name: gateway
namespace: ingress-internal
rules:
- backendRefs:
- name: tool-a
port: 8080
```
### Tool B
Define an `HTTPRoute` for `tool-b`:
```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
name: tool-b
namespace: internal-services
spec:
hostnames:
- "tool-b.matthewdavis.io"
parentRefs:
- name: gateway
namespace: ingress-internal
rules:
- backendRefs:
- name: tool-b
port: 9090
```
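As an aside, each entry in `backendRefs` also accepts a `weight` field, which lets one hostname split traffic across several services. A hypothetical example (the `tools.matthewdavis.io` hostname and the 90/10 split are made up for illustration):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: tool-split
  namespace: internal-services
spec:
  hostnames:
    - "tools.matthewdavis.io"
  parentRefs:
    - name: gateway
      namespace: ingress-internal
  rules:
    - backendRefs:
        - name: tool-a
          port: 8080
          weight: 90   # ~90% of requests
        - name: tool-b
          port: 9090
          weight: 10   # ~10% of requests
```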
## Learn more
- https://istio.io/latest/docs/tasks/traffic-management/ingress/gateway-api/
- https://gateway-api.sigs.k8s.io/ | mateothegreat |
1,914,271 | Simple Arduino Framework Photo Frame Implementation with Photos Downloaded from the Internet via DumbDisplay | Simple Arduino Framework Raspberry Pi Pico / ESP32 TFT LCD Photo Frame Implementation with Photos Downloaded from the Internet via DumbDisplay | 0 | 2024-07-08T10:06:04 | https://dev.to/trevorwslee/simple-arduino-framework-photo-frame-implementation-with-photos-downloaded-from-the-internet-via-dumbdisplay-170o | raspberrypipico, esp32, spitftlcd | ---
title: Simple Arduino Framework Photo Frame Implementation with Photos Downloaded from the Internet via DumbDisplay
description: Simple Arduino Framework Raspberry Pi Pico / ESP32 TFT LCD Photo Frame Implementation with Photos Downloaded from the Internet via DumbDisplay
tags: 'raspberrypipico, esp32, spitftlcd'
cover_image: 'https://raw.githubusercontent.com/trevorwslee/TFTImageShow/main/imgs/MAIN.jpg'
published: true
id: 1914271
---
# Simple Arduino Framework Raspberry Pi Pico / ESP32 TFT LCD Photo Frame Implementation with Photos Downloaded from the Internet via DumbDisplay
The target of this [project](https://github.com/trevorwslee/TFTImageShow) is to implement, mostly on the software side, a simple Arduino-framework "photo frame" that shows photos / images, using a Raspberry Pi Pico or ESP32, with the photos / images downloaded from the Internet via DumbDisplay, an Android app running on your Android phone.
The microcontroller program here is developed in Arduino framework using VS Code and PlatformIO, in the similar fashion as described by the post -- [A Way to Run Arduino Sketch With VSCode PlatformIO Directly](https://www.instructables.com/A-Way-to-Run-Arduino-Sketch-With-VSCode-PlatformIO/)
The simple remote UI for downloading photos / images from the Internet is realized with the help of the DumbDisplay Android app. For a brief description of DumbDisplay, you may want to refer to the post -- [Blink Test With Virtual Display, DumbDisplay](https://www.instructables.com/Blink-Test-With-Virtual-Display-DumbDisplay/)
Please note that the UI is driven by the microcontroller program; i.e., the control flow of the UI is programmed in the sketch. Additionally, please note that the downloaded image, be it in **Jpeg** or **PNG** format, will be transferred to the microcontroller board in **Jpeg** format, scaled to fit inside the TFT LCD screen.
For Raspberry Pi Pico board (WiFi), a ST7789 2.8 inch 240x320 SPI TFT LCD screen is attached to a Raspberry Pi Pico board.
The TFT LCD module library used is the `Adafruit-ST7735-Library` Arduino library.
For ESP32, LiLyGo TDisplay / TCamera Plus board is used.
The TFT LCD module library used is the `bodmer/TFT_eSPI` Arduino library.

In all cases, the **Jpeg** library used is the `bodmer/TJpg_Decoder` Arduino library.
A simple flash-based **LittleFS** file-system is allocated for storing the saved **Jpeg** images.
The microcontroller board has two running modes:
1) When connected to the DumbDisplay Android app (using WiFi), a simple UI is provided for downloading images from some predefined sites,
as well as for transferring the downloaded image in **Jpeg** format to the microcontroller board.
Note that the predefined sites are hardcoded in the sketch, and you can conveniently change them as desired.
2) When not connected to the DumbDisplay Android app, the microcontroller cycles through the saved **Jpeg** images displaying them to the TFT LCD
screen one by one like a simple "photo frame". Note that since the images are stored in **LittleFS**, they will survive even after reboot of the
microcontroller board.
***Connect for the UI; disconnect to enjoy "photo frame" slide show.***
# The UI
| | |
|--|--|
|||
The first time connected, an image download will be initiated automatically.
After downloading an image the image will be transferred to the microcontroller board in **Jpeg** format and be displayed to the TFT LCD screen.
You can choose to save the transferred image by clicking the ***💾Save*** button.
Notice that the [7-segment] *number* displayed next to the ***Saved🗂️*** label will be bumped up after saving.
The *number* indicates the number of images saved to the microcontroller's **LittleFS** storage.
If you so desired, you can turn on auto-save by clicking the ***Auto*** save button. (You turn off auto-save by clicking the button again.)
If you want to initiate another download of image, click on the canvas that shows the downloaded image.
If you want to delete all the saved images, double-click on the [7-segment] *number*.
After you are done with downloading and saving images, disconnect the microcontroller.
After disconnection, the "photo frame" slide show begins on the microcontroller side.
Anytime you want to change the saved images, reconnect to DumbDisplay Android app.
***Connect for the UI; disconnect to enjoy "photo frame" slide show.***
# Wiring TFT LCD Module
For the LiLyGo TDisplay / TCamera Plus boards, the TFT LCD screen is built into the microcontroller board; hence, no additional wiring is needed.
For the Raspberry Pi Pico board, as mentioned previously, an ST7789 2.8 inch 240x320 SPI TFT LCD module is used; hence, some wiring is necessary

|Raspberry Pi Pico|SPI TFT LCD |
|-----------------|------------|
| 3V3 | VCC |
| GND | GND |
| GP21 | BL |
| GP17 | CS |
| GP16 | RS / DC |
| GP18 | CLK / SCLK |
| GP19 | SDA / MOSI |
| GP20 | RST |
# Developing and Building
As mentioned previously, the sketch will be developed using VS Code and PlatformIO.
Please clone the PlatformIO project [TFTImageShow](https://github.com/trevorwslee/TFTImageShow) GitHub repository.
The configurations for developing and building of the sketch in basically captured in the `platformio.ini` file
```
[env]
monitor_speed = 115200
[env:PICOW] ; ensure long file name support ... git config --system core.longpaths true
platform = https://github.com/maxgerhardt/platform-raspberrypi.git
board = rpipicow
framework = arduino
board_build.core = earlephilhower
board_build.filesystem = littlefs
board_build.filesystem_size = 1m
lib_deps =
https://github.com/trevorwslee/Arduino-DumbDisplay
https://github.com/adafruit/Adafruit-ST7735-Library.git
https://github.com/adafruit/Adafruit-GFX-Library
https://github.com/Bodmer/TJpg_Decoder.git
Wire
SPI
https://github.com/adafruit/Adafruit_BusIO
build_flags =
-D FOR_PICOW
[env:TDISPLAY]
platform = espressif32
board = esp32dev
framework = arduino
board_build.filesystem = littlefs
lib_deps =
https://github.com/trevorwslee/Arduino-DumbDisplay
bodmer/TFT_eSPI ; Setup25_TTGO_T_Display
bodmer/TJpg_Decoder
LittleFS
build_flags =
-D FOR_TDISPLAY
[env:TCAMERAPLUS]
platform = espressif32
board = esp32dev
framework = arduino
board_build.filesystem = littlefs
lib_deps =
https://github.com/trevorwslee/Arduino-DumbDisplay
bodmer/TFT_eSPI ; modify User_Setup_Select.h ... Setup44_TTGO_CameraPlus
bodmer/TJpg_Decoder
LittleFS
Wire
SPI
SPIFFS
build_flags =
-D FOR_TCAMERAPLUS
```
***Please make sure you select the correct PlatformIO project environment*** -- `PICOW` / `TDISPLAY` / `TCAMERAPLUS`
For `PICOW`, the platform core is downloaded from `https://github.com/maxgerhardt/platform-raspberrypi.git`.
(As far as I know, this is the only PlatformIO platform core that supports the use of Raspberry Pi PicoW WiFi capability.)
It might take a long time for PlatformIO to download and install it.
If PlatformIO fails to download and install the platform core, it might be that your system doesn't have long file name support enabled; in such a case, try
```
git config --system core.longpaths true
```
For `TDISPLAY`, which uses `bodmer/TFT_eSPI`, you will need to modify the installed `.pio/libdeps/TDISPLAY/TFT_eSPI/User_Setup_Select.h`
to use `User_Setups/Setup25_TTGO_T_Display.h` rather than the default `User_Setup.h` like
```
...
//#include <User_Setup.h> // Default setup is root library folder
...
#include <User_Setups/Setup25_TTGO_T_Display.h> // Setup file for ESP32 and TTGO T-Display ST7789V SPI bus TFT
...
```
For `TCAMERAPLUS`, which also uses `bodmer/TFT_eSPI`, modify `User_Setup_Select.h` similarly
```
...
//#include <User_Setup.h> // Default setup is root library folder
...
#include <User_Setups/Setup44_TTGO_CameraPlus.h> // Setup file for ESP32 and TTGO T-CameraPlus ST7789 SPI bus TFT 240x240
...
```
The program entry point is `src/main.cpp`
```
// ***
// the below _secret.h just define macros like:
// #define WIFI_SSID "your-wifi-ssid"
// #define WIFI_PASSWORD "your-wifi-password"
// ***
#include "_secret.h"
#include "tft_image_show/tft_image_show.ino"
```
Notice there are two **included** files -- `_secret.h` and `tft_image_show/tft_image_show.ino` -- in the `src` directory
You will need to create the `_secret.h` with content like
```
#define WIFI_SSID "your-wifi-ssid"
#define WIFI_PASSWORD "your-wifi-password"
```
With these macros for accessing your WiFi, the microcontroller board will connect to DumbDisplay Android app using WiFi.
If you do not want to use WiFi, simply don't provide them.
In such a case, connection to DumbDisplay Android app is assumed to be using serial UART (slower) via an OTG adapter.
Please refer to the above mentioned post -- [Blink Test With Virtual Display, DumbDisplay](https://www.instructables.com/Blink-Test-With-Virtual-Display-DumbDisplay/)
# The Sketch
The sketch of the project is `tft_image_show/tft_image_show.ino`. You can [easily] customize some aspects of the sketch
```
...
// NEXT_S defines the delay (in seconds) to show next saved image
#define NEXT_S 5
...
// MAX_IMAGE_COUNT defines the maximum number of images that can be saved
// set MAX_IMAGE_COUNT to 0 to force reformat the storage
#define MAX_IMAGE_COUNT 10
...
// getDownloadImageURL() returns a URL to download an image; add / remove sites as needed
// download image bigger than needed (on purpose)
const String urls[] = {
String("https://loremflickr.com/") + String(2 * TFT_WIDTH) + String("/") + String(2 * TFT_HEIGHT),
String("https://picsum.photos/") + String(2 * TFT_WIDTH) + String("/") + String(2 * TFT_HEIGHT),
};
const char* getDownloadImageURL() {
int idx = random(2);
return urls[idx].c_str();
}
...
```
* The slide show delay is defined by the macro `NEXT_S`, which default to 5 seconds
* The maximum number of saved images is defined by the macro `MAX_IMAGE_COUNT`, which default to 10.
Note that if you set `MAX_IMAGE_COUNT` to 0, flash and run the sketch, the **LittleFS** storage will be reformatted.
For normal running, `MAX_IMAGE_COUNT` should be at least 1.
* You can modify `urls` / `getDownloadImageURL()` to add / remove Internet sites for downloading images.
# Sketch Highlight -- TFT LCD Library `Adafruit-ST7735-Library`
Here is how `Adafruit-ST7735-Library` used in the sketch.
First a global `tft` object is defined like
```
#define A_TFT_BL 21
#define A_TFT_CS 17
#define A_TFT_DC 16
#define A_TFT_SCLK 18
#define A_TFT_MOSI 19
#define A_TFT_RST 20
#define TFT_WIDTH 320
#define TFT_HEIGHT 240
#include <Adafruit_ST7789.h>
Adafruit_ST7789 tft(A_TFT_CS, A_TFT_DC, A_TFT_RST);
```
Notice the pin assignments exactly match the wiring described previously.
Then in `setup()`
```
pinMode(A_TFT_BL, OUTPUT);
digitalWrite(A_TFT_BL, 1); // light it up
tft.init(240, 320, SPI_MODE0);
tft.invertDisplay(false);
tft.setRotation(1);
tft.setSPISpeed(40000000);
```
* The back-light part is obvious.
* The TFT LCD screen size is 240x320.
* Why `SPI_MODE0` and other settings? Simply, they work for me.
# Sketch Highlight -- TFT LCD Library `bodmer/TFT_eSPI`
Here is how `bodmer/TFT_eSPI` used in the sketch.
First, a global `tft` object is defined like
```
#include <TFT_eSPI.h>
TFT_eSPI tft = TFT_eSPI();
```
Then in `setup()`
```
tft.init();
tft.setRotation(0);
```
# Sketch Highlight -- Jpeg Library `bodmer/TJpg_Decoder`
You might be wondering why use **Jpeg** but not RGB565 directly. Simply because of the very high data compression ratio of **Jpeg**.
Anyway, here is how `bodmer/TJpg_Decoder` used in the sketch.
Include the needed headers
```
#include <TJpg_Decoder.h>
```
In `setup()`
```
#if defined(TFT_ESPI_VERSION)
TJpgDec.setSwapBytes(true);
#endif
TJpgDec.setCallback(tft_output);
```
Why `setSwapBytes(true)`? Since it seems to work that way.
And here is the *callback* `tft_output`, which is mostly copied from an example of `bodmer/TJpg_Decoder`
```
bool tft_output(int16_t x, int16_t y, uint16_t w, uint16_t h, uint16_t* bitmap) {
// Stop further decoding as image is running off bottom of screen
if ( y >= tft.height() ) return 0;
// This function will clip the image block rendering automatically at the TFT boundaries
#if defined(TFT_ESPI_VERSION)
tft.pushRect(x, y, w, h, bitmap);
#else
tft.drawRGBBitmap(x, y, bitmap, w, h);
#endif
// Return 1 to decode next block
return 1;
}
```
Notice that in case of using `bodmer/TFT_eSPI`, the way to draw the decoded **Jpeg** chunk is
```
tft.pushRect(x, y, w, h, bitmap);
```
If `Adafruit-ST7735-Library` is used, the way to draw the decoded **Jpeg** chunk is
```
tft.drawRGBBitmap(x, y, bitmap, w, h);
```
After setting `TJpgDec` up, a **Jpeg** image can be drawn like
```
TJpgDec.drawJpg(x, y, jpegImage.bytes, jpegImage.byteCount);
```
# Sketch Highlight -- **LittleFS**
Include the needed headers
```
#include <FS.h>
#include <LittleFS.h>
```
In `setup()`
```
LittleFS.begin();
```
If you want to format the **LittleFS**, call `format()` like
```
LittleFS.format()
```
**Jpeg** image can be saved like
```
File f = LittleFS.open(fileName, "w");
if (f) {
f.println(currentJpegImage.width);
f.println(currentJpegImage.height);
f.println(currentJpegImage.byteCount);
f.write(currentJpegImage.bytes, currentJpegImage.byteCount);
f.close();
}
```
Notice that not only are the **Jpeg** image bytes written to the file, but various metadata are saved first. (I.e. the file is not a plain **JPG** file, but a customized one.)
And **Jpeg** image can be read back like
```
File f = LittleFS.open(fileName, "r");
if (f) {
int width = f.readStringUntil('\n').toInt();
int height = f.readStringUntil('\n').toInt();
int byteCount = f.readStringUntil('\n').toInt();
uint8_t* bytes = new uint8_t[byteCount];
f.readBytes((char*) bytes, byteCount);
f.close();
tempImage.width = width;
tempImage.height = height;
tempImage.byteCount = byteCount;
tempImage.bytes = bytes;
}
```
# Sketch Highlight -- DumbDisplay
Like all other use cases of using DumbDisplay, you first declare a global `DumbDisplay` object `dumbdisplay`
```
#if defined(WIFI_SSID)
#include "wifidumbdisplay.h"
DumbDisplay dumbdisplay(new DDWiFiServerIO(WIFI_SSID, WIFI_PASSWORD));
#else
#include "dumbdisplay.h"
DumbDisplay dumbdisplay(new DDInputOutput());
#endif
```
There, the configured `DDWiFiServerIO` is one of several ways to establish a connection with the DumbDisplay Android app.
In the "no WiFi" case, serial UART via `DDInputOutput` is used instead.
Then, several global helper objects / pointers are declared
```
DDMasterResetPassiveConnectionHelper pdd(dumbdisplay);
GraphicalDDLayer* imageLayer;
LcdDDLayer* saveButtonLayer;
LcdDDLayer* autoSaveOptionLayer;
LcdDDLayer* savedCountLabelLayer;
SevenSegmentRowDDLayer* savedCountLayer;
SimpleToolDDTunnel* webImageTunnel;
ImageRetrieverDDTunnel* imageRetrieverTunnel = NULL;
```
* The `DDMasterResetPassiveConnectionHelper` global object `pdd` is a helper for managing connection and reconnection with the DumbDisplay app.
* The `GraphicalDDLayer` pointer `imageLayer` is the canvas to which the downloaded image is drawn.
Like the other layers, it is created later, in this case, when the DumbDisplay app connects.
* The `LcdDDLayer` pointers `saveButtonLayer`, `autoSaveOptionLayer` and `savedCountLabelLayer` are for **save** button, *auto-save* button, and *saved-count* label respectively.
* The `SevenSegmentRowDDLayer` pointer `savedCountLayer` is for showing the saved image count.
* The `SimpleToolDDTunnel` pointer `webImageTunnel` is for downloading an image to the DumbDisplay Android app. It will be created together with the other layers and `tunnels`.
* The `ImageRetrieverDDTunnel` pointer `imageRetrieverTunnel` is for retrieving the data of the downloaded image, in **Jpeg** format.
The life-cycle of the above DumbDisplay layers and "tunnels" is managed by the global `pdd` object, which monitors connection and disconnection of the
DumbDisplay app, calling the appropriate user-defined functions as well as DumbDisplay functions at the appropriate times. It is cooperatively given "time slices" in the `loop()` block like
```
void loop() {
...
pdd.loop(initializeDD, updateDD);
...
}
```
The `initializeDD` is the function defined in the sketch that is supposed to create the various layer and tunnel objects.
```
void initializeDD() {
tft.fillScreen(COLOR_BG);
// create a graphical layer for drawing the downloaded web image to
imageLayer = dumbdisplay.createGraphicalLayer(2 * TFT_WIDTH, 2 * TFT_HEIGHT);
...
// create a LCD layer for the save button
saveButtonLayer = dumbdisplay.createLcdLayer(6, 2);
...
// create a LCD layer for the auto save option
autoSaveOptionLayer = dumbdisplay.createLcdLayer(6, 1);
...
// create a LCD layer as the label for the number of saved images
savedCountLabelLayer = dumbdisplay.createLcdLayer(8, 1);
...
// create a 7-segment layer for showing the number of saved images
savedCountLayer = dumbdisplay.create7SegmentRowLayer(2);
...
// create a tunnel for downloading web image ... initially, no URL yet ... downloaded.png is the name of the image to save
webImageTunnel = dumbdisplay.createImageDownloadTunnel("", "downloaded.png");
...
// create a tunnel for retrieving JPEG image data from DumbDisplay app storage
imageRetrieverTunnel = dumbdisplay.createImageRetrieverTunnel();
// auto pin the layers
dumbdisplay.configAutoPin(DDAutoPinConfig('V')
.addLayer(imageLayer)
.beginGroup('H')
...
.endGroup()
.build());
}
```
The `updateDD` is the function defined in the sketch that is supposed to receive "time slices" to update / act on the layer and tunnel objects.
```
bool isFirstUpdate = !pdd.firstUpdated();
bool updateUI = isFirstUpdate;
if (autoSaveOptionLayer->getFeedback() != NULL) {
// toggle auto save
autoSave = !autoSave;
updateUI = true;
}
if (updateUI) {
if (autoSave) {
autoSaveOptionLayer->writeLine("Auto✅️");
} else {
autoSaveOptionLayer->writeLine("Auto⛔");
}
}
...
if (isFirstUpdate || state == NOTHING) {
if (isFirstUpdate || imageLayer->getFeedback() != NULL) {
// trigger download image
saveButtonLayer->disabled(true);
imageLayer->noBackgroundColor();
state = DOWNLOADING_FOR_IMAGE;
}
return;
}
if (state == DOWNLOADING_FOR_IMAGE) {
// set the URL to download web image
currentJpegImage.release();
String url = getDownloadImageURL();
webImageTunnel->reconnectTo(url);
imageLayer->clear();
imageLayer->write("downloading image ...");
state = WAITING_FOR_IMAGE_DOWNLOADED;
return;
}
if (state == WAITING_FOR_IMAGE_DOWNLOADED) {
int result = webImageTunnel->checkResult();
if (result == 1) {
// web image downloaded ... retrieve JPEG data of the image
imageRetrieverTunnel->reconnectForJpegImage("downloaded.png", TFT_WIDTH, TFT_HEIGHT);
imageLayer->clear();
imageLayer->drawImageFileFit("downloaded.png");
state = RETRIEVING_IMAGE;
retrieveStartMillis = millis();
} else if (result == -1) {
// failed to download the image
imageLayer->clear();
imageLayer->write("!!! failed to download image !!!");
dumbdisplay.writeComment("XXX failed to download XXX");
state = NOTHING;
}
return;
}
if (state == RETRIEVING_IMAGE) {
// read the retrieve image (if it is available)
DDJpegImage jpegImage;
bool retrievedImage = imageRetrieverTunnel->readJpegImage(jpegImage);
if (retrievedImage) {
unsigned long retrieveTakenMillis = millis() - retrieveStartMillis;
dumbdisplay.writeComment(String("* ") + jpegImage.width + "x" + jpegImage.height + " (" + String(jpegImage.byteCount / 1024.0) + " KB) in " + String(retrieveTakenMillis / 1000.0) + "s");
if (jpegImage.isValid()) {
...
} else {
...
}
...
state = NOTHING;
}
    return;
}
```
Notice
* how layer "feedback" (e.g. clicking) is received using `getFeedback()`
* how the `webImageTunnel` "tunnel" is used to download image with call to `reconnectTo()`
* how the `imageRetrieverTunnel` "tunnel" is used to initiate retrieving of download image data with call to `reconnectForJpegImage()`
* how the image data (`DDJpegImage`) is received via the tunnel `imageRetrieverTunnel` with call to `readJpegImage()`
The whole `updateDD` basically is a "state-machine" that handles the different states (`state`) of the UI processing:
- `NOTHING` -- just started, or finished downloading / saving an image; will wait for `imageLayer` being clicked to initiate an image download
- `DOWNLOADING_FOR_IMAGE` -- this state could have been merged with the previous one; anyway, it reconnects `webImageTunnel` to activate an image download
- `WAITING_FOR_IMAGE_DOWNLOADED` -- waiting for download image to complete; then will retrieve the download image to be displayed to the TFT LCD screen
- `RETRIEVING_IMAGE` -- retrieving the download image data to be transferred to the microcontroller; once retrieved, display the image to the TFT LCD screen
The slide show is carried out when "idle" (not connected to DumbDisplay app)
```
void loop() {
pdd.loop(initializeDD, updateDD);
if (pdd.isIdle()) {
if (pdd.justBecameIdle()) {
// re-start slide show
...
}
unsigned long now = millis();
if (now >= nextMillis) {
if (MAX_IMAGE_COUNT > 0 && savedImageCount > 0) {
...
} else {
...
}
...
}
}
}
```
# Build and Upload
Build, upload the sketch and try it out!
For WiFi connectivity, you will need to find out the IP address of the microcontroller board. Simply connect the microcontroller board to a serial monitor (set to baud rate 115200), and you should see lines like
```
binded WIFI TrevorWireless
listening on 192.168.0.218:10201 ...
listening on 192.168.0.218:10201 ...
```
See that the IP address of the microcontroller board is printed out.
On DumbDisplay Android app side
|||
|--|--|
|You will need the microcontroller's IP address to configure DumbDisplay Android app to connect it with your microcontroller||
* Start the DumbDisplay app.
* Click on the Establish Connection icon.
* In the "establish connection" dialog, you should see the "add WIFI device" icon at the bottom right of the dialog. Click on it.
* A popup for you to enter WIFI IP will be shown. Enter the IP address of your ESP board as Network Host. Click OK when done.
* Back to the "establish connection" dialog, a new entry will be added, click on it to establish WIFI connection.
Have fun with it!
# Enjoy!
> Peace be with you!
> May God bless you!
> Jesus loves you!
> Amazing Grace!
| trevorwslee |
1,914,292 | Mattermost: Free Open-source Alternative to Slack | In today's fast-paced digital world, efficient communication within teams is paramount. Many... | 0 | 2024-07-08T12:21:07 | https://blog.elest.io/mattermost-free-open-source-alternative-to-slack/ | opensourcesoftwares, elestio, mattermost | ---
title: Mattermost: Free Open-source Alternative to Slack
published: true
date: 2024-07-07 03:03:41 UTC
tags: Opensourcesoftwares,Elestio,Mattermost
canonical_url: https://blog.elest.io/mattermost-free-open-source-alternative-to-slack/
cover_image: https://blog.elest.io/content/images/2024/07/mattermost-thumbnail.png
---
In today's fast-paced digital world, efficient communication within teams is paramount. Many organizations rely on platforms like Slack for this purpose. However, with rising concerns over cost, data privacy, and customization, open-source alternatives like Mattermost are gaining traction.
[Mattermost](https://elest.io/open-source/mattermost?ref=blog.elest.io) offers robust features, flexibility, and an open-source model, making it a compelling choice for businesses and teams of all sizes. This article delves into the various aspects of Mattermost, showcasing why it stands out as a premier alternative to Slack.
<iframe width="200" height="113" src="https://www.youtube.com/embed/E3yowWCDC9c?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen title="Mattermost: Free Open-source Alternative to Slack"></iframe>
_Watch our Mattermost platform overview_
## Communication Platform
Mattermost provides a powerful communication platform designed to streamline team collaboration. Similar to Slack, it offers channels for team conversations, direct messaging, and group chats. What sets Mattermost apart is its high degree of customization. Users can tailor the platform to fit their specific needs, integrating it seamlessly with existing workflows. The interface is intuitive, ensuring a smooth user experience, and supports rich text formatting, file sharing, and multimedia attachments.
Moreover, Mattermost's open-source nature means organizations can host it on their own servers, giving them complete control over their data. This is a significant advantage for companies with strict compliance and security requirements.
## Playbooks
One of the standout features of Mattermost is its Playbooks. Playbooks are predefined processes that can be executed within the platform, streamlining repetitive tasks and ensuring consistency across projects. They are particularly useful for incident response, project management, and onboarding processes.
With Playbooks, teams can document and automate their workflows, reducing the potential for human error and enhancing efficiency. The Playbooks feature supports real-time collaboration, allowing team members to contribute and update processes as needed. This flexibility ensures that Playbooks remain relevant and effective, adapting to the evolving needs of the team.
## AI
Incorporating artificial intelligence into communication platforms can significantly enhance productivity, and Mattermost is no exception. Mattermost leverages AI to provide smart suggestions, automate routine tasks, and offer insights into team interactions. These AI-driven features help in managing workloads, identifying bottlenecks, and optimizing team performance.
For instance, AI can be used to prioritize messages, suggest relevant channels or Playbooks, and even predict potential issues before they escalate. This proactive approach enables teams to stay ahead of challenges, ensuring smooth and efficient operations.
## Security
Security is a critical concern for any communication platform, and Mattermost excels in this area. As an open-source solution, it allows organizations to scrutinize the code and ensure there are no vulnerabilities. Additionally, hosting Mattermost on-premises means companies have full control over their data, reducing the risk of third-party breaches.
Mattermost supports end-to-end encryption, multi-factor authentication, and single sign-on (SSO) to safeguard communications. Regular security audits and a transparent development process further enhance the platform's reliability. These features make Mattermost a trusted choice for organizations with stringent security requirements.
## Conclusion
Mattermost emerges as a robust, flexible, and secure alternative to Slack, offering a comprehensive communication platform tailored to meet the needs of modern teams. Its open-source nature, coupled with advanced features like Playbooks and AI integration, makes it an excellent choice for organizations seeking control, customization, and efficiency.
[Start using Mattermost with Elestio](https://elest.io/open-source/mattermost?ref=blog.elest.io) | kaiwalyakoparkar |
1,914,293 | Part 2: Learning HTML | July 6, 2024 I am working on Codeacademy's HTML Fundamentals Lesson and have learned a few HTML... | 0 | 2024-07-12T03:54:37 | https://dev.to/dgarcia1399/part-2-learning-html-ide | html | July 6, 2024
I am working on Codeacademy's HTML Fundamentals Lesson and have learned a few HTML concepts:
- Start an HTML file with the `<!DOCTYPE html>` declaration. This tells the browser to interpret the file as HTML content and to use the correct version of HTML.
- The next line should contain the `<html>` element. Add all of the code between the `<html>` opening and closing tags.
- Information that will not be shown/rendered on the web page is placed between the `<head>` opening and closing tags.
- Use the `<title>` element inside of the head to display the web page name on the browser's tab.
- Use `<a>` anchor elements to add links to internal/external pages.
- You can organize your code (spacing/indentation) however you like, and it won't affect the rendered web page (unless you are nesting incorrectly, which is why indenting and spacing to create readable code is important!).
- Use the following syntax to comment out code or information in the HTML file: `<!-- content -->`. The content inside a comment will not be visible on the web page. | dgarcia1399 |
1,914,410 | State Management in React: A Beginner's Guide | Introduction State management is a critical concept in React, essential for building... | 0 | 2024-07-10T15:39:32 | https://dev.to/lovishduggal/state-management-in-react-a-beginners-guide-40mg | webdev, javascript, beginners, react | ## Introduction
State management is a critical concept in React, essential for building dynamic and interactive applications. As a junior developer, understanding how to effectively manage state can significantly enhance your ability to create responsive and maintainable React applications. In this blog post, we'll cover the basics of state management in React, with simple examples and explanations for beginners.
## What is State in React?
In React, state refers to data that belongs to a component and can change over time. State allows components to create and manage their own data, and changes to that data influence the component's rendering. Unlike props, which are passed down from parent to child components, state is managed within the component itself.
State is essential because it allows React components to respond to user inputs, server responses, or other events by re-rendering with new data. For instance, a component that displays a list of items might use state to keep track of the currently selected item.
## Basic State Management with useState Hook
The `useState` hook is the simplest way to add state to a functional component in React. Introduced in React 16.8, the `useState` hook lets you declare a state variable and a function to update it.
**Syntax**:
```javascript
const [state, setState] = useState(initialState);
```
Here's a simple example of a counter component using `useState`:
**Example: Counter Component**
```javascript
import { useState } from 'react';
function Counter() {
const [count, setCount] = useState(0);
return (
<div>
<p>You clicked {count} times</p>
<button onClick={() => setCount(count + 1)}>
Click me
</button>
</div>
);
}
```
In this example, the `count` state variable is initialized to `0`. Each time the button is clicked, the `setCount` function updates the `count` value, causing the component to re-render and display the new count.
State updates with `useState` are not immediately reflected in the component's state. Instead, React schedules these updates and processes them before the next render. This batching behaviour helps optimize performance by reducing the number of re-renders.
Understanding this behaviour is important, especially in more complex scenarios where multiple state updates might occur. It helps to know that the state value may not change immediately after calling the update function.
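A minimal plain-JavaScript simulation of this behaviour (this is not React's real implementation — the `createState` helper and its names are made up for illustration) shows why the updater-function form of `setState` matters when several updates are batched:

```javascript
// Simulate React's batching: updates queued during one event are
// applied together before the next "render".
function createState(initial) {
  let state = initial;
  const queue = [];
  const setState = (update) => queue.push(update);
  const flush = () => { // the "re-render": apply all queued updates
    for (const update of queue) {
      state = typeof update === 'function' ? update(state) : update;
    }
    queue.length = 0;
    return state;
  };
  return { setState, flush, get: () => state };
}

const counter = createState(0);

// Passing a value computed from the stale state: both calls read 0.
counter.setState(counter.get() + 1);
counter.setState(counter.get() + 1);
console.log(counter.flush()); // 1, not 2

// Passing an updater function: each call receives the latest state.
counter.setState((c) => c + 1);
counter.setState((c) => c + 1);
console.log(counter.flush()); // 3
```

This is why `setCount((prev) => prev + 1)` is the safer form whenever the new state depends on the previous one.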
[CodeSandBox](https://codesandbox.io/embed/7xg4y2?view=editor+%2B+preview&module=%2Fsrc%2FApp.js)
## When to Lift State Up
Lifting state up refers to moving state from a child component to a common parent component, so that it can be shared among multiple child components. This technique is useful when several components need to reflect the same changing data.
**Example: Sharing State Between Parent and Child Components**
```javascript
function ParentComponent() {
const [sharedState, setSharedState] = useState('');
return (
<div>
<ChildComponent1 sharedState={sharedState} setSharedState={setSharedState} />
<ChildComponent2 sharedState={sharedState} />
</div>
);
}
function ChildComponent1({ sharedState, setSharedState }) {
return (
<input
type="text"
value={sharedState}
onChange={(e) => setSharedState(e.target.value)}
/>
);
}
function ChildComponent2({ sharedState }) {
return (
<p>The shared state is: {sharedState}</p>
);
}
```
In this example, the `sharedState` is managed in the `ParentComponent`, and passed down to `ChildComponent1` and `ChildComponent2` via props. This ensures that both child components are in sync with the same state.
[CodeSandBox](https://codesandbox.io/embed/g94sh4?view=editor+%2B+preview&module=%2Fsrc%2FApp.js)
## Context API for Global State Management
The Context API provides a way to pass data through the component tree without having to pass props down manually at every level. This is particularly useful for global state management, such as user authentication status or theme settings.
**When to Use Context API**
* When state needs to be accessible by many components at different levels.
* To avoid prop drilling (passing props through many layers of components).
**Example: Theme Toggler Using Context API**
```javascript
import { createContext, useContext, useState } from 'react';
const ThemeContext = createContext();
function ThemeProvider({ children }) {
const [theme, setTheme] = useState('light');
const toggleTheme = () => {
setTheme((prevTheme) => (prevTheme === 'light' ? 'dark' : 'light'));
};
return (
<ThemeContext.Provider value={{ theme, toggleTheme }}>
{children}
</ThemeContext.Provider>
);
}
function ThemeToggler() {
const { theme, toggleTheme } = useContext(ThemeContext);
return (
<button onClick={toggleTheme}>
Switch to {theme === 'light' ? 'dark' : 'light'} theme
</button>
);
}
function App() {
return (
<ThemeProvider>
<ThemeToggler />
</ThemeProvider>
);
}
```
In this example, the `ThemeProvider` component manages the theme state and provides it to any child components via the `ThemeContext`. The `ThemeToggler` component consumes the context to display and toggle the theme.
While the Context API is powerful, it's not always the best solution for all state management needs, especially in larger applications where performance can be an issue.
[CodeSandBox](https://codesandbox.io/embed/t8sdtg?view=editor+%2B+preview&module=%2Fsrc%2FApp.js)
## Advanced State Management with Redux
Redux is a popular state management library for React, designed to handle complex state interactions in large applications. Redux provides a predictable state container, making state management more consistent and easier to debug.
**Key Concepts:**
* **Store:** The single source of truth for the application's state.
* **Actions:** Plain objects describing the type of change.
* **Reducers:** Functions that determine how the state changes in response to actions.
**Example: Counter Application Using Redux**
1. **Install Redux and React-Redux:**
```bash
npm install redux react-redux
```
2. **Create Actions:**
```javascript
export const increment = () => ({ type: 'INCREMENT' });
export const decrement = () => ({ type: 'DECREMENT' });
```
3. **Create Reducer:**
```javascript
const initialState = { count: 0 };

const counterReducer = (state = initialState, action) => {
switch (action.type) {
case 'INCREMENT':
return { ...state, count: state.count + 1 };
case 'DECREMENT':
return { ...state, count: state.count - 1 };
default:
return state;
}
};
export default counterReducer;
```
4. **Set Up Store:**
```javascript
import { createStore } from 'redux';
import counterReducer from './reducer';
const store = createStore(counterReducer);
export default store;
```
5. **Connect React Components:**
```javascript
import React from 'react';
import { Provider, useDispatch, useSelector } from 'react-redux';
import store from './store';
import { increment, decrement } from './actions';
function Counter() {
const count = useSelector((state) => state.count);
const dispatch = useDispatch();
return (
<div>
<p>Count: {count}</p>
<button onClick={() => dispatch(increment())}>+</button>
<button onClick={() => dispatch(decrement())}>-</button>
</div>
);
}
function App() {
return (
<Provider store={store}>
<Counter />
</Provider>
);
}
export default App;
```
Using Redux adds an extra layer of structure and predictability to your state management. However, it also introduces complexity and boilerplate code, making it more suitable for large applications with complex state requirements.
[CodeSandBox](https://codesandbox.io/embed/qk7s85?view=editor+%2B+preview&module=%2Fsrc%2FApp.js)
## Choosing the Right State Management Approach
Choosing the right state management solution depends on several factors:
* **Application size:** Smaller apps might be fine with `useState` and lifting state up.
* **State complexity:** More complex state interactions might benefit from Context API or Redux.
* **Performance considerations:** Context API can lead to unnecessary re-renders, while Redux provides more control over state updates.
**Use Cases:**
* **useState:** Simple, component-level state.
* **Context API:** Medium complexity, app-wide state.
* **Redux:** High complexity, large-scale applications.
## Conclusion
State management is a foundational concept in React, essential for creating dynamic and interactive applications. By understanding and utilizing different state management techniques, you can build more robust and maintainable React applications. Experiment with `useState`, Context API, and Redux to find the best fit for your projects, and continue exploring additional resources to deepen your knowledge.
For further learning, consider exploring the official React documentation, tutorials, and community resources. If you'd like, you can connect with me on [**Twitter**](https://twitter.com/lovishdtwts). Happy coding!
Thank you for Reading :) | lovishduggal |
1,914,454 | Understanding the Monad Design Pattern | Monads are a powerful concept in functional programming that help manage side effects and maintain... | 0 | 2024-07-11T02:11:27 | https://rmauro.dev/understanding-the-monad-design-pattern/ | javascript, architecture, designpatterns | Monads are a powerful concept in functional programming that help manage side effects and maintain clean, composable code.
In this post, we'll explore the `Maybe` monad design pattern using JavaScript, which is used to handle operations that might fail or return null/undefined.
### What is a Monad?
In simple terms, a monad is a design pattern that allows you to wrap values, chain operations, and handle side effects in a consistent way.
The `Maybe` monad is particularly useful for dealing with null or undefined values without littering your code with null checks.
### Implementing the Maybe Monad
This monad will wrap a value and provide methods to apply functions to that value if it exists.
```javascript
// Maybe Monad Implementation
class Maybe {
constructor(value) {
this.value = value;
}
static of(value) {
return new Maybe(value);
}
isNothing() {
return this.value === null || this.value === undefined;
}
  map(fn) {
    return this.isNothing() ? Maybe.of(null) : Maybe.of(fn(this.value));
  }

  // chain (also called flatMap) is for functions that already return a
  // Maybe: unlike map, it does not wrap the result again, so you never
  // end up with a nested Maybe(Maybe(...)).
  chain(fn) {
    return this.isNothing() ? Maybe.of(null) : fn(this.value);
  }

  getOrElse(defaultValue) {
    return this.isNothing() ? defaultValue : this.value;
  }
}
```

### Using the Maybe Monad

Let's consider a function that performs division but needs to handle division by zero.

```javascript
const safeDivide = (numerator, denominator) => {
  return denominator === 0 ? Maybe.of(null) : Maybe.of(numerator / denominator);
};

const result = Maybe.of(10)
  .map(x => x * 2)              // Maybe(20)
  .map(x => x + 1)              // Maybe(21)
  .chain(x => safeDivide(x, 0)) // Maybe(null)
  .getOrElse('Division by zero');

console.log(result); // Output: Division by zero
```

The `Maybe` monad wraps each intermediate value, applying transformations only if the value is not null or undefined.

Because `safeDivide` itself returns a `Maybe`, it is applied with `chain` rather than `map`; using `map` here would wrap the result in a second `Maybe` and break `getOrElse`.
### Benefits of Using the Maybe Monad
1. **Composability:** Chain multiple operations cleanly without worrying about null checks.
2. **Readability:** Simplify code by avoiding repetitive null checks.
3. **Safety:** Handle potential null or undefined values gracefully, reducing runtime errors.
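As a second, self-contained sketch (the `Maybe` class is repeated here so the snippet runs on its own, and the sample user objects are made up), the same pattern makes reading optional nested properties safe:

```javascript
class Maybe {
  constructor(value) { this.value = value; }
  static of(value) { return new Maybe(value); }
  isNothing() { return this.value === null || this.value === undefined; }
  map(fn) { return this.isNothing() ? Maybe.of(null) : Maybe.of(fn(this.value)); }
  getOrElse(defaultValue) { return this.isNothing() ? defaultValue : this.value; }
}

// Hypothetical API responses: the address block may be missing.
const withAddress = { name: 'Ada', address: { city: 'London' } };
const withoutAddress = { name: 'Grace' };

const cityOf = (user) =>
  Maybe.of(user)
    .map(u => u.address) // Maybe(undefined) when there is no address
    .map(a => a.city)    // skipped entirely once the value is nothing
    .getOrElse('Unknown city');

console.log(cityOf(withAddress));    // London
console.log(cityOf(withoutAddress)); // Unknown city
```

Without the monad, `cityOf` would need an explicit `user && user.address && user.address.city` chain at every call site.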
### Conclusion
The `Maybe` monad is a powerful tool for managing null or undefined values in JavaScript. By wrapping values in a monad, you can chain operations safely and maintain cleaner, more readable code. This straightforward approach to monads can greatly enhance your functional programming toolkit in JavaScript.
For more technical insights and hands-on tutorials, visit [rmauro.dev](https://rmauro.dev/). Happy coding! | rmaurodev |
1,914,548 | Concurrency & Async programming in C# | Hello everyone! Today, we're diving into some crucial concepts in C# that will help you write... | 0 | 2024-07-09T10:38:09 | https://dev.to/ipazooki/concurrency-async-programming-in-c-1eda | csharp, tutorial, dotnet, community |
{% embed https://youtu.be/UNLdSsCWIXs %}
Hello everyone! Today, we're diving into some crucial concepts in C# that will help you write more efficient applications. We'll explore processes, threads, concurrency, and asynchronous programming. These topics might seem daunting initially, but don't worry; we'll break them down step by step. Without further ado, let's dive right in. 🏊♂️
### What is a Process? 💡
A process is essentially an instance of a program currently being executed. Think of it as a container that holds a program's code and its current activity. A process can be any application, such as Visual Studio or a web browser. Each process has its own memory space, ensuring that one process does not interfere with another.
### Process Components 🔧
- **Main Thread**: Each process has a main thread.
- **Memory Stack and Heap**: Each process has its own stack and heap, isolated from other processes.
- **Program Counter (PC)**: Keeps track of the next instruction to execute, which is essential during context switching.
### What are Threads? 🧵
A process may consist of one or more threads. Threads share the memory space of their process, but each thread has its own stack. Threads other than the main thread are often referred to as worker threads.
- **Value Type Variables**: Saved in the stack.
- **Reference Type Variables**: Reference saved in the stack, value stored in the heap.
### Garbage Collection (GC)
As discussed in previous sessions, the garbage collector manages the heap. When a process is completed, its allocated space is returned, and the GC takes care of memory management.
## CPU Scheduling Algorithms 🖥️
### First Come First Serve (FCFS) 📋
The CPU executes processes in the order they arrive. While simple, this method can make short processes wait a long time behind a long-running one (the convoy effect).
### Shortest Job First (SJF) 🏃
The CPU executes the shortest job first. While this can be efficient, it can also lead to starvation if shorter processes continually arrive, blocking longer ones.
### Round Robin (RR) 🎡
Each process is assigned a fixed time slot in a cyclical manner. This prevents starvation and ensures fair CPU time distribution.
## Round Robin Algorithm in Detail
Consider we have many processes, from P1 to many others. They go to the ready status and get queued in a first-come, first-served manner. The CPU begins running the first process. If the process is completed within the designated time period, it is terminated. If not, it is moved to the end of the queue.
### Synchronous Programming 🤔
In C#, there are two types of threads: the worker thread and the main thread. In each thread, we output something - "MT" for the main thread and "T1" for the worker thread.
Here's an example code to demonstrate this:
```csharp
Thread thread1 = new Thread(() =>
{
for (int i = 0; i < 1000; i++)
{
Console.Write("T1 ");
}
});
thread1.Start();
thread1.Join();
for (int i = 0; i < 1000; i++)
{
Console.Write("MT ");
}
```
In this example, the worker thread writes "T1" 1000 times, and the main thread writes "MT" 1000 times. Because of the `thread1.Join()` call, the main thread blocks until the worker thread has finished, so you'll see all the "T1" output first, followed by all the "MT" output.

### Asynchronous Programming 🤔
In asynchronous programming, the main thread does not wait for the worker thread to complete before it continues working. This results in no blocked threads and ensures that your application remains responsive during long-running operations.
```csharp
Thread thread1 = new Thread(() =>
{
for (int i = 0; i < 1000; i++)
{
Console.Write("T1 ");
}
});
thread1.Start();
for (int i = 0; i < 1000; i++)
{
Console.Write("MT ");
}
```
In this asynchronous example, we have context switching between the main and worker threads, so threads are not blocking each other.

## Conclusion ✨
Understanding processes, threads, concurrency, and asynchronous programming is crucial for developing efficient and responsive C# applications. Processes are containers for your running programs; threads are the execution units within these processes; concurrency lets multiple tasks make progress at overlapping times; and asynchronous programming helps keep your application responsive during long-running operations.
Thank you for reading! I hope this guide has clarified these important concepts for you. If you have any questions, please leave a comment below. Don't forget to like, and subscribe. Cheers, and happy coding!
| ipazooki |
1,914,605 | Announcing Crawlee Python: Now you can use Python to build reliable web crawlers | We launched Crawlee in August 2022 and got an amazing response from the JavaScript community. With... | 0 | 2024-07-09T07:02:41 | https://crawlee.dev/blog/launching-crawlee-python | webdev, python, webscraping, programming |
We launched Crawlee in [August 2022](https://blog.apify.com/announcing-crawlee-the-web-scraping-and-browser-automation-library/) and got an amazing response from the JavaScript community. With many early adopters in its initial days, we got valuable feedback, which gave Crawlee a strong base for its success.
Today, [Crawlee built-in TypeScript](https://github.com/apify/crawlee) has nearly **13,000 stars on GitHub**, with 90 open-source contributors worldwide building the best web scraping and automation library.
Since the launch, the feedback we’ve received most often [[1]](https://discord.com/channels/801163717915574323/999250964554981446/1138826582581059585)[[2]](https://discord.com/channels/801163717915574323/801163719198638092/1137702376267059290)[[3]](https://discord.com/channels/801163717915574323/1090592836044476426/1103977818221719584) has been to build Crawlee in Python so that the Python community can use all the features the JavaScript community does.
With all these requests in mind and to simplify the life of Python web scraping developers, **we’re launching [Crawlee Python](https://github.com/apify/crawlee-python) today.**
The new library is still in **beta**, and we are looking for **early adopters**.

{% embed https://github.com/apify/crawlee-python/ %}
Crawlee for Python has some amazing initial features, such as a unified interface for HTTP and headless browser crawling, automatic retries, and much more.
## Why use Crawlee instead of a random HTTP library with an HTML parser?
- Unified interface for HTTP & headless browser crawling.
- HTTP - HTTPX with BeautifulSoup,
- Headless browser - Playwright.
- Automatic parallel crawling based on available system resources.
- Written in Python with type hints - enhances DX (IDE autocompletion) and reduces bugs (static type checking).
- Automatic retries on errors or when you’re getting blocked.
- Integrated proxy rotation and session management.
- Configurable request routing - direct URLs to the appropriate handlers.
- Persistent queue for URLs to crawl.
- Pluggable storage of both tabular data and files.
## Understanding the why behind the features of Crawlee
### Out-of-the-box support for headless browser crawling (Playwright).
While libraries like Scrapy require the additional installation of middleware such as [`scrapy-playwright`](https://github.com/scrapy-plugins/scrapy-playwright) (which still doesn't work on Windows), Crawlee for Python supports a unified interface for HTTP and headless browsers.
Using a headless browser to download web pages and extract data, `PlaywrightCrawler` is ideal for crawling websites that require JavaScript execution.
For websites that don’t require JavaScript, consider using the `BeautifulSoupCrawler`, which utilizes raw HTTP requests and will be much faster.
```python
import asyncio
from crawlee.playwright_crawler import PlaywrightCrawler, PlaywrightCrawlingContext
async def main() -> None:
# Create a crawler instance
crawler = PlaywrightCrawler(
# headless=False,
# browser_type='firefox',
)
@crawler.router.default_handler
async def request_handler(context: PlaywrightCrawlingContext) -> None:
data = {
"request_url": context.request.url,
"page_url": context.page.url,
"page_title": await context.page.title(),
"page_content": (await context.page.content())[:10000],
}
await context.push_data(data)
await crawler.run(["https://crawlee.dev"])
if __name__ == "__main__":
asyncio.run(main())
```
The above example uses Crawlee’s built-in `PlaywrightCrawler` to crawl the [https://crawlee.dev/](https://crawlee.dev/) website title and its content.
### Small learning curve
In other libraries like Scrapy, when you run a command to create a new project, you get many files. Then you need to learn about the architecture, including various components (spiders, middlewares, pipelines, etc.). [The learning curve is very steep](https://crawlee.dev/blog/scrapy-vs-crawlee#language-and-development-environments).
While building Crawlee, we made sure that the learning curve and the setup would be as fast as possible.
With [ready-made templates](https://github.com/apify/crawlee-python/tree/master/templates), and having only a single file to add the code, it's very easy to start building a scraper, you might need to learn a little about request handlers and storage, but that’s all.
### Complete type hint coverage
We know how much developers like their code to be high-quality, readable, and maintainable.
That's why the whole code base of Crawlee is fully type-hinted.
Thanks to that, you should have better autocompletion in your IDE, enhancing developer experience while developing your scrapers using Crawlee.
Type hinting should also reduce the number of bugs thanks to static type checking.

### Based on Asyncio
Crawlee is fully asynchronous and based on [Asyncio](https://docs.python.org/3/library/asyncio.html). For scraping frameworks, where many IO-bounds operations occur, this should be crucial to achieving high performance.
Also, thanks to Asyncio, integration with other applications or the rest of your system should be easy.
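As a library-free illustration of why this matters for IO-bound scraping (the URLs and the `fake_download` helper below are made up; `asyncio.sleep` stands in for network latency), three concurrent "downloads" take about as long as one:

```python
import asyncio
import time

async def fake_download(url: str) -> str:
    # Stand-in for an HTTP request: an IO-bound wait of 0.1 s.
    await asyncio.sleep(0.1)
    return f"contents of {url}"

async def main() -> list[str]:
    urls = [f"https://example.com/page/{i}" for i in range(3)]
    # gather() runs the coroutines concurrently on one event loop.
    return await asyncio.gather(*(fake_download(u) for u in urls))

start = time.perf_counter()
pages = asyncio.run(main())
elapsed = time.perf_counter() - start
print(len(pages), round(elapsed, 2))  # ~0.1 s total, not 0.3 s
```

While one coroutine waits on the "network", the event loop runs the others, which is exactly the workload profile of a crawler.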
How is this different from the Scrapy framework, which is also asynchronous?
Scrapy relies on the "legacy" Twisted framework. Integrating Scrapy with modern Asyncio-based applications can be challenging, often requiring more effort and debugging [[1]](https://stackoverflow.com/questions/49201915/debugging-scrapy-project-in-visual-studio-code).
## Power of open source community and early adopters giveaway
Crawlee for Python is fully open-sourced and the codebase is available on the [GitHub repository of Crawlee Python](https://github.com/apify/crawlee-python).
We have already started receiving initial and very [valuable contributions from the Python community](https://github.com/apify/crawlee-python/pull/226).
> “Crawlee for Python development team did a great job in building the product, it makes things faster for a Python developer.” ~ [Maksym Bohomolov](https://apify.com/mantisus)
There’s still room for improvement. Feel free to open issues, make pull requests, and [star the repository](https://github.com/apify/crawlee-python/) to spread the work to other developers.
**We will award the first 10 pieces of feedback** that add value and are accepted by our team with an exclusive Crawlee for Python swag (The first Crawlee for Python swag ever). Check out the [GitHub issue here](https://github.com/apify/crawlee-python/issues/269/).
With such contributions, we’re excited and looking forward to building an amazing library for the Python community.
{% embed https://github.com/apify/crawlee-python/ %}
[Join our Discord community](https://apify.com/discord) with nearly 8,000 web scraping developers, where our team would be happy to help you with any problems or discuss any use case for Crawlee Python. | sauain |
1,914,636 | Fighting the imposter syndrome | I recently asked ChatGPT what is the definition of the term "Imposter Syndrome", and I have got the... | 0 | 2024-07-08T17:29:36 | https://eyal-estrin.medium.com/fighting-the-imposter-syndrome-7a0ec53d4e80 | career, careerdevelopment, learning, impostersyndrome |

I recently asked ChatGPT what is the definition of the term "Imposter Syndrome", and I have got the following answer:
*Imposter syndrome refers to feelings of inadequacy and self-doubt that persist despite evident success and accomplishments. People experiencing imposter syndrome often attribute their achievements to luck or external factors rather than their abilities, and they may fear being exposed as a "fraud."*
Many people suffer from imposter syndrome as part of their career journey, without even realizing it.
In this blog post, I will try to discuss types of imposter syndrome, my personal experience on the topic, and what each of you can do to overcome the feelings of being a fraud.
### Basic types of imposter syndrome
Research by [Dr. Valerie Young](https://impostorsyndrome.com/valerie-young/) found five basic types of imposter syndrome:
1. **The Perfectionist** - The Perfectionist subtype of imposter syndrome entails the belief that anything less than absolute perfection means you could have performed better.
2. **The Expert** - The Expert subtype of imposter syndrome arises when individuals feel like imposters because they have not acquired complete knowledge of a specific subject or have not mastered every aspect of a process.
3. **The Natural Genius** - In the Natural Genius subtype of imposter syndrome, individuals may experience feelings of fraudulence simply because they doubt their inherent intelligence or competence.
4. **The Soloist** - The Soloist subtype of imposter syndrome can occur when individuals feel fraudulent for seeking assistance to achieve a particular level or status.
5. **The Super-person** - The Super-person subtype of imposter syndrome entails believing that one must be the hardest worker and achieve the highest levels of success possible. Failure to do so can lead to feelings of being a fraud.
### Personal feelings and beliefs
For many years I have struggled with this concept, without even knowing it is a real thing.
Officially, I have been working in the technology industry since 1998, and I have gained a lot of experience in many fields – from infrastructure and cybersecurity to, for the past almost 10 years, cloud computing.
Regardless, of what I did in my career, and the experience I have, in the back of my mind, I always question myself – am I good enough?
For many years, I struggled with the idea of standing on stage and providing in-person lectures.
I did not have any problem having conversations with a small number of people, but standing on stage and talking was a huge blocker for me, since I was always afraid that someone would ask me a question I did not have an answer to.
Even when I wanted to change workplaces and apply for a new job, I struggled when asked how much I wanted to earn, since I did not want to ask for a lot of money, fearing that one day my managers would figure out I had less knowledge or experience than I claimed to have during the interview process.
Looking back at my study experience, I can admit that I was an average student at school. I was not able to complete an academic degree since I struggled with the materials, the assignments, and the exams, and I quit after a couple of courses.
On the other hand, I was able to successfully pass top cybersecurity and cloud certifications such as CISSP, CCSP, CISA, CISM, and more, simply because I was able to connect with the study materials.
I am a huge fan of the written word, and I gained a lot of experience writing technical documents and blog posts on many topics, up to a level where I published two books called [Cloud Security Handbook](https://www.amazon.com/Cloud-Security-Handbook-effectively-environments/dp/180056919X) and [Security for Cloud-Native Applications](https://www.amazon.com/dp/B0CYYFNSSQ).
### How to deal with imposter syndrome?
Although what I am about to say is not a step-by-step guide, I recommend you follow the recommendations below and take it to your personal life:
* **Acknowledge it** – Realize that the internal conversation about being a fraud stops you from fully expressing yourself and gaining achievements in life.
* **Stop and do a reality check** – Whenever you feel something is holding you back, ask yourself – is it real? Is it a matter of life or death? Am I going to fail?
* **Share your feelings** with your family, friends and even work colleagues. There are many things that we can resolve by simply being honest with the people around us.
* **Challenge self-doubt** – There is no single person who has all the knowledge, not even in a specific field. It is ok to make mistakes, acknowledge them, and learn from experience.
* **Stop comparing yourself to others** – Everyone has their cons and pros. Look for the things that make you happy, and follow your dreams, regardless of the time it will take to reach them.
* **Focus on others** – Share your experience in any topic you feel you are an expert in or knowledgeable about, and support others in learning and gaining more experience of their own.
* **Celebrate success** – Look at what you have achieved in your life and your career, and celebrate it, no matter if it is a small or big achievement.
* **Keep on learning** – Dedicate time every week to learn something new, expand your knowledge, and do not forget to share it with others – people love to learn, even if it is from somebody else's experience.
### Summary
For many years I thought that my career would begin to decline once I was over 40. I was wrong – my career is only progressing. I get acknowledgment from people I do not even know in real life, for the knowledge I am sharing on social media and other platforms – both at work and outside it.
I dedicate a large portion of my life to learning new stuff and sharing knowledge through blog posts, writing books, providing Zoom lectures, and lately video recording with friends on YouTube, Spotify, and other platforms.
The fear of being an imposter will probably remain with me, but I refuse to let it stop me from advancing my career.
If you are reading this blog post, and begin to realize you might be suffering from imposter syndrome in your career or in life, do not feel bad about it. Recognize it, share your feelings with people in your life, and be ready to act.
### Acknowledgments
For a long time, I wanted to write this blog post and share my thoughts on the topic, but there were always other more urgent topics to write about.
I would personally like to thank [Dr Milan Milanović](https://www.linkedin.com/in/milanmilanovic) and [Taimur Ijlal](https://www.linkedin.com/in/taimurijlal), for sharing their thoughts on the topic and inspiring me to share my thoughts as well.
* [How to fight impostor syndrome?](https://x.com/milan_milanovic/status/1806949747548164280)
* [How To Battle Imposter Syndrome in Cybersecurity?](https://www.linkedin.com/posts/taimurijlal_how-to-battle-imposter-syndrome-in-cybersecurity-activity-7183735572862033921-8Fw3/)
If you want to read more on the topic from an academic perspective, feel free to review the articles below:
* [Imposter Syndrome: Why You May Feel Like a Fraud](https://www.verywellmind.com/imposter-syndrome-and-social-anxiety-disorder-4156469)
* [Belongingness: The Antidote to Workplace Imposter Syndrome](https://www.knowledgecity.com/blog/belongingness-the-antidote-to-workplace-imposter-syndrome/)
#### About the Author
Eyal Estrin is a cloud and information security architect, and the author of the books Cloud Security Handbook and Security for Cloud-Native Applications, with more than 20 years in the IT industry.
You can connect with him on social media ([https://linktr.ee/eyalestrin](https://linktr.ee/eyalestrin)).
Opinions are his own and not the views of his employer.
👇Help to support my authoring👇
☕Buy me a coffee☕
| eyalestrin |
1,914,646 | Apple Vision Pro: Redefining Augmented Reality With Cutting-Edge Technology | The Apple Vision Pro has quickly become a standout with its impressive features and presence. The... | 0 | 2024-07-07T20:50:52 | https://dev.to/kwan/apple-vision-pro-redefining-augmented-reality-with-cutting-edge-technology-28gj | apple, applevisionpro, technology | **_The Apple Vision Pro has quickly become a standout with its impressive features and presence. The innovative headset merges virtual reality with the physical world through spatial computing, allowing for seamless and intuitive interactions. In this article, we’ll delve into the key features of Apple Vision Pro as well as discuss the potential applications and future implications of this groundbreaking technology._**
You’ve probably heard about the Apple Vision Pro because its features are amazing, and it’s hard not to notice when someone’s using it out and about.

Apple recently introduced its latest innovation, a headset that marks a significant advancement in digital interface technology. This new device integrates virtual reality capabilities directly into physical space, a culmination of progress in virtual reality (VR) technology since 2013.
Spatial computing is at the core of the Apple Vision Pro. This revolutionary approach allows digital content to seamlessly integrate with the physical world, enabling users to interact with virtual environments in a natural and intuitive way. Unlike traditional computing, which confines interactions to screens, spatial computing places digital elements in our physical surroundings, creating immersive and interactive experiences.
Formerly referred to under different names, the device has been thrust into the spotlight by Apple's rebranding and subsequent widespread adoption. Central to its appeal is its intuitive user interface, which eliminates the need for the separate joysticks traditionally used for controlling and interacting with virtual environments. Instead, the headset leverages advanced eye-tracking and hand-motion technologies to seamlessly translate physical movements into virtual interactions.
Powering this innovative device is VisionOS, the operating system specifically designed for the Vision Pro. VisionOS manages the complex tasks of spatial computing, providing a robust and fluid user experience that feels both futuristic and surprisingly natural.
{% embed https://www.youtube.com/watch?v=IY4x85zqoJM&t=1s %}
This integration not only enhances user experience by simplifying interaction but also underscores Apple’s commitment to pushing the boundaries of immersive technology. By merging virtual elements with the real world, the device represents a new era in consumer technology, promising a more natural and responsive digital experience. As such, it has quickly gained traction among tech enthusiasts and consumers alike, signaling a transformative shift in how we engage with digital content.

## Apple Vision Pro’s Main Features
### SwiftUI
Apple already introduced the SwiftUI framework, which simplifies app development across all Apple devices, including support for spatial computing. This framework has revolutionized our approach to macOS development, overcoming challenges such as sparse documentation and limited APIs in the past. Now, developing for macOS is as straightforward as developing for iOS, thanks to SwiftUI’s user-friendly design principles.
Beyond the ability to write code once for multiple platforms (iOS, macOS, watchOS, tvOS), SwiftUI offers live previews that accelerate the development process. You can instantly see changes to a single screen without rebuilding the entire app, including support for visionOS. However, the most significant improvement is SwiftUI’s shift to being declarative and state-driven, making it reactive, easier to understand, and simpler to maintain.
### Design
The Apple Vision Pro was designed with productivity in mind but is also versatile for entertainment purposes. It seamlessly integrates with your Mac, enhancing apps and games by offering immersive spatial experiences.

In visionOS, the “Space” is a defined area where virtual content like windows, volumes, and 3D objects can be placed. This integration allows users to view the real world alongside virtual objects. It’s possible to create a more immersive experience using “Immersion”, where users can interact with and explore virtual environments.

The “Passthrough” feature provides a mixed reality approach, allowing users to easily switch between the real and virtual worlds using the physical crown button. This feature enhances comfort by enabling users to adjust opacity levels, allowing them to remain partially connected to the real world while interacting in virtual environments.
Lastly, Spatial Audio enhances the immersive experience by combining acoustic and visual cues to simulate realistic sound distances and directions. This technology aims to make audio interactions within virtual environments more lifelike and engaging.
### Object Interaction
Currently, users can interact with virtual objects in a variety of ways: rotating them, zooming in and out, dragging them around, and even touching them to trigger animations or transformations. However, at WWDC24, Apple announced object tracking, which takes interaction to the next level by allowing users to interact with real-world objects.
This includes attaching labels, adding virtual objects to real ones, and using touch gestures to transform or animate them, which significantly enhances the capabilities of the Apple Vision Pro headset.
## Potential Practical Applications
This technology opens up a world of possibilities by utilizing the entire space around us without the need for traditional screens. Users can move windows and objects freely, interact seamlessly with both virtual and real objects, and transition between different worlds.
To illustrate its potential, consider some practical applications:
**Medicine**: Surgeons could use the headset during surgeries to visualize internal anatomy, including layers of muscle, fat, and veins. They could quickly find solutions to problems and consult with specialists remotely, all while wearing the headset.
**Fire Department**: During a fire in a building, firefighters could use the headset to understand the structure behind walls without physically seeing them. They could plan rescue operations and share real-time information with colleagues on the scene.
**Engineering**: Engineers could create virtual prototypes of projects before implementation, validating designs with stakeholders. Interacting with these virtual models makes it easier and more cost-effective to make changes and improvements.
These examples demonstrate how the Apple Vision Pro headset can revolutionize industries by enhancing visualization, collaboration, and problem-solving capabilities in unprecedented ways.
## Apple Vision Pro: Redefining Augmented Reality with Cutting-Edge Technology – Final Thoughts
The Apple Vision Pro headset is poised to revolutionize various industries by enhancing visualization, collaboration, and problem-solving capabilities.
Are you curious about the technology and want to see how an app is developed for it? Let us know! Like, share, or comment through our social media channels.
Article written by Jonatha Lima, and originally published at https://kwan.com/blog/apple-vision-pro-redefining-augmented-reality-with-cutting-edge-technology/ on July 3, 2024.
See you in the next article! | kwan |
1,914,659 | Infinite list loading 🤔, with React Query - useInfiniteQuery hook ! | Sometimes getting all the data in a single API query is worse than getting it in the form of... | 0 | 2024-07-09T03:18:11 | https://dev.to/delisrey/infinite-list-loading-with-react-query-useinfinitequery-hook--19i | react, reactquery, webdev, javascript | Sometimes getting all the data in a single API query is worse than getting it in paginated form, but not in every case. What if you need the API to be more optimized and efficient? Implementing an infinite-loading feature from scratch might be not only complicated but also overwhelming if you are just a beginner in React. Let's see how react-query makes it possible with the help of infinite queries.
Hello, I am [SHREY](https://x.com/shreykoradia), your author and guide, here to take you a step closer to mastering react-query by implementing an infinite-loading list feature from scratch.
So now if we look at a real-world application like [Dribbble](https://dribbble.com/), we may find a `Browse More Inspiration` kind of button at the bottom of the screen after some amount of web mockups is displayed; on clicking that button, more data is displayed again, and the cycle continues.
So one way is to give a button, and on clicking it a new set of data is sent to the client from the server; otherwise, we could make use of infinite queries with the help of scroll, in which case we may need to use [react-intersection-observer](https://www.npmjs.com/package/react-intersection-observer) to get new data based on the viewport and a scroll trigger.
Let's build the feature with a button saying "load more data", without further ado 😅.
So let us define how we could get the data from the backend. Different APIs handle pagination differently; in my case I have a limit-and-offset kind of setup, where the frontend developer controls how much data they want with the help of limit and offset. My response structure from the backend would be somewhat like the example below:
```
{
"feedbacks": {
"totalPages": 1,
"feedbacks": [
{
"_id": "6624d529440d8153b9967515",
"feedback": "Hehe cool think to do",
"feedback_author_id": "6621e2621755214cca2e300b",
"feedback_author_name": "test1",
"tale_id": "65f6bbd6feec14b3e8804846",
"created_at": "2024-04-21T08:58:17.963Z",
"__v": 0
},
{
"_id": "662afb1caca2a90e470bc78f",
"feedback": "sddsdsdd",
"feedback_author_id": "65b38594f8211ed0962fe067",
"feedback_author_name": "shrey",
"tale_id": "65f6bbd6feec14b3e8804846",
"created_at": "2024-04-26T00:53:48.653Z",
"__v": 0
},
{
"_id": "662afb34aca2a90e470bc79c",
"feedback": "xoxo",
"feedback_author_id": "65b38594f8211ed0962fe067",
"feedback_author_name": "shrey",
"tale_id": "65f6bbd6feec14b3e8804846",
"created_at": "2024-04-26T00:54:12.671Z",
"__v": 0
}
]
}
}
```
Now I will have a backend API endpoint in somewhat this fashion, where the user can attach limit and offset in the form of query params:
```
http://localhost:3000/v1/feedback/get-feedbacks?taleId=123&offset=12&limit=6
```
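For illustration, here is a minimal sketch of how a `getFeedbacks`-style helper could build that endpoint URL; the function name, base URL constant, and default limit are my own assumptions, not taken from the original client code:

```typescript
// Hypothetical helper: the name, base URL, and defaults are illustrative only.
const BASE_URL = "http://localhost:3000/v1/feedback/get-feedbacks";

function buildFeedbacksUrl(taleId: string, offset: number, limit = 6): string {
  // URLSearchParams handles encoding of the query string for us.
  const search = new URLSearchParams({
    taleId,
    offset: String(offset),
    limit: String(limit),
  });
  return `${BASE_URL}?${search.toString()}`;
}

// A getFeedbacks implementation would then just fetch this URL:
// const res = await fetch(buildFeedbacksUrl("123", 12));
```

Keeping the URL construction in one pure function also makes the offset handling trivial to unit test.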
Okay, so my only point in stating this was to make it easier to think about how our hasNextPage params work on the frontend when we apply them using infinite queries.
Let's see how we need to handle the react-query hook called `useInfiniteQuery`:
```
import { useInfiniteQuery } from "@tanstack/react-query";

const LIMIT = 6; // page size, matching the limit query param

// pageParam is the offset into the full list of feedbacks
const fetchFeedbacks = ({ pageParam = 0 }) => {
  return getFeedbacks({ taleId: params.taleId, offset: pageParam });
};

const query = useInfiniteQuery({
  queryKey: ["get-feedbacks", params?.taleId],
  queryFn: fetchFeedbacks,
  initialPageParam: 0,
  // Despite its name here, the second argument is the array of ALL
  // pages fetched so far, so its length gives the number of pages.
  getNextPageParam: (lastPage, lastPageParam) => {
    if (lastPage.data.feedbacks.feedbacks.length === 0) {
      return undefined; // no more data on the server
    }
    return lastPageParam.length * LIMIT; // next offset
  },
  enabled: !!params?.taleId,
});
```
```
In the above piece of code, we have assigned the values returned from `useInfiniteQuery` to a variable. We already know about `queryKey`, `queryFn`, etc. from the previous blogs, but here we see `initialPageParam`. This is a parameter required by default when using this hook, and it helps in fetching the first page; we set it to 0, and hence this will get us the first page of data from the backend server. When the API endpoint gets fired, the offset in the endpoint is 0, and the data is actually fetched from the 0th index, or in layman's terms, the first page.
`getNextPageParam` is a callable function which returns a single value; in the docs it is described something like this:
When new data is received for this query, this function receives both the last page of the infinite list of data and the full array of all pages, as well as pageParam information.
So how I used getNextPageParam is similar to what is written above. `lastPage` holds the data returned by the API endpoint: **if its length === 0, then I return undefined, stating that there is no more data that can be sent back to the client from the server**; otherwise I return the next page param, which will get us the next chunk of data from the backend server. Note that the second argument (named `lastPageParam` here) is actually the array of all pages fetched so far, so the offset is what is returned by `lastPageParam.length * LIMIT`.
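Stripped of the React wiring, the offset arithmetic can be reduced to a pure function. This is just a sketch of the same idea, with a `LIMIT` of 6 assumed to match the endpoint shown earlier:

```typescript
const LIMIT = 6; // assumed page size

// pagesFetched: how many pages we already have; lastPageSize: items in the last page.
// Returns the next offset to request, or undefined when the server is exhausted.
function nextOffset(pagesFetched: number, lastPageSize: number): number | undefined {
  if (lastPageSize === 0) {
    return undefined; // an empty page means there is nothing left to load
  }
  return pagesFetched * LIMIT; // e.g. after 2 pages of 6, the next offset is 12
}
```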

And you already know what `enabled` is, how we use it, and why we use it. If you are not familiar with the `enabled` keyword, you know the drill => get to my previous [blogs](https://dev.to/delisrey/series/24656) and dig the docs to find out what `enabled` does.
There is also another set of parameters you could use with useInfiniteQuery, such as getPreviousPageParam and maxPages; you could dig deeper into them in the react-query docs. getPreviousPageParam does a similar thing to getNextPageParam, but it takes `firstPage, allPages, firstPageParam, allPageParams` (the first page and its params), while maxPages is the maximum number of pages to be stored in the infinite query data.
This is how it will display. I have recorded a prime use case from one of my products, which I am soon going to enable for new users for brainstorming and creating story boards for the application.
https://jam.dev/c/0b3c18fd-d250-4bc5-96fb-69d01ce7f915
This was all for now. I know I am not curating a blog every week the way I used to, but I will try my best. Also, let's not forget to like and share the blog with others, and comment on any good or optimized way to handle the react-query useInfiniteQuery hook; I would love to know your feedback as well.
Btw, hello once again, I am SHREY, part-time SDE at Talez (hacking on my product); otherwise I do full-time software development at an early-stage start-up. If you reached till here reading the blog, do follow me, and yeah, have a happy working week. Bye :)
| delisrey |
1,914,735 | Clean Architecture in Node.js: An Approach with TypeScript and Dependency Injection. | A Clean Architecture What is clean architecture and why do we even care about it? The... | 0 | 2024-07-08T14:39:23 | https://dev.to/evangunawan/clean-architecture-in-nodejs-an-approach-with-typescript-and-dependency-injection-16o | node, cleancode, microservices, architecture | ## A Clean Architecture
What is clean architecture and why do we even care about it? The clean architecture approach is a software design pattern and a guideline proposed by Robert C. Martin (Uncle Bob). This architecture urges us to build cleaner and more structured code.
So why do we care about it, why is it a good fit (at least, for me) to be used with a Node.js project, especially with TypeScript?
While there is a catch, like more complex code, and it may be overkill for some simple or quick projects, there are benefits we can get from following this guideline, like great maintainability, testability, and flexibility.
There are some layers of clean architecture: infrastructure layer, adapter/controller layer, application, and domain layer.
The infrastructure layer consists of any kind of framework and infrastructure that we use in our application. For example, database connections and instances, message brokers, caches, and even external API clients.
The adapter/controller layer is like a bridge between our infrastructure and application layers. In this case, the infrastructure layer provides the API listeners (for example, express.js request handlers or message broker subscribers). The listened API will then be handled by this controller layer. Please be aware that in this layer, we don't want to have any business rules or logic. It is purely there to receive inputs from the infrastructure layer, maybe transform them, and call the use case from the application layer.
## A Great Match for Microservice
A microservice consists of operations and logic that are scoped and contained within a boundary. As the name suggests, the service is small in size.
Using clean architecture for a microservice can be a good option, since the microservice can stay tidy and clean. Of course, it will be well structured and easily understood by other developers.
One of the best benefits of using microservice is that the business logic and rules are encapsulated and separated from the infrastructure and whatever we do outside them. We can change, move, and replace any infrastructure we want without touching and interfering with any code inside the business rules. This one key of clean architecture can be suited perfectly by using Dependency Injection (DI).
By using a clean architecture, we can improve the service maintainability and testability. In this case, we can easily test and maintain them without interfering with other projects and even with other use cases or business logic.
## Dependency Injection (DI)
Dependency Injection is a design pattern that we can use for this matter. As the name suggests, we can inject any service and dependencies into a class or an entity inside our application. In this case, we can inject any kind of dependencies into our business use cases. A business use case can be contained in a class inside our application layer.
DI sometimes also relies on interface usage. Every use case class should have an interface that works as a template or mold of what kind of dependencies we can inject into a use case class.
So what we can conclude is that we can inject any dependencies into a use case, and these will be used within the application and business flow. Often, injection occurs through the class constructor of our use case class.
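As a minimal sketch of constructor injection in plain TypeScript (without `tsyringe`'s decorators, and with hypothetical names), a use case can depend only on an interface while any implementation is injected from outside:

```typescript
// The interface is the "mold" describing the dependency the use case needs.
interface PostRepository {
  save(title: string): string; // returns the id of the created post
}

// Application-layer use case: pure business rules, with its dependency
// injected through the constructor.
class CreatePostUseCase {
  constructor(private readonly repo: PostRepository) {}

  execute(title: string): string {
    if (title.trim().length === 0) {
      throw new Error("title must not be empty");
    }
    return this.repo.save(title.trim());
  }
}

// Any implementation can be injected: a real database adapter from the
// infrastructure layer, or an in-memory fake for tests.
class InMemoryPostRepository implements PostRepository {
  posts: string[] = [];
  save(title: string): string {
    this.posts.push(title);
    return `post-${this.posts.length}`;
  }
}

const useCase = new CreatePostUseCase(new InMemoryPostRepository());
const id = useCase.execute("Hello clean architecture");
```

With `tsyringe`, the wiring in the last two lines is essentially what the container does for you; the use case class itself stays unchanged.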
Of course, there are a lot of resources online that you can read to learn more about this pattern.
## A Clean Architecture Example with Node.js
Since we use DI and OOP patterns for this architecture, it will be the best fit to use TypeScript as our project language. While it is strict and has a lot of benefits for this project, TypeScript has a huge community and is an actively maintained technology that we can use for a long time.
Here is an example of a Node.js project with TypeScript using a clean architecture approach and DI pattern using `tsyringe`.
[node-clean-architecture](https://github.com/evangunawan/node-clean-architecture)
There is an alternative to using NestJS to implement our approach. But, sometimes it may be an overkill approach since NestJS has a higher level of complexity and may add some complication for the developers.
In the project, we are using `tsyringe` since it is a library maintained by Microsoft and of course, it should be compatible and be a great fit with TypeScript.
This project uses a clean architecture approach by separating them into three major directories: application, controller, and infrastructure. All entities (domains) are included inside the application layer.
We are using express.js as the web server listening for API requests. In this case, all the handlers will be placed inside the controller directory which has a class that points into a use case class. Dependency injection mostly happened here with the creation of use case classes while injecting them with their required dependencies.
The application layer consists of some use cases. In this project, each use case has its separate class file. For example, creating and fetching posts has different use case classes. This is done for a cleaner approach and structured dependency injections.
The infrastructure layer has an example of using a mock class too. In this case, I have an example of a use case implementing an interface that needs a repository. The repository can be switched in the controller class with an infrastructure with the same interface implementation.
There is also an implementation of DTO uses. We use DTO to make sure the business domain and entities are decoupled with other objects. DTOs are primarily used as the use case input and output. It also can be used to regulate and standardize API response structure.
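As a small sketch of this idea (with hypothetical entity and field names), a DTO mapper keeps internal fields out of the response and standardizes its shape:

```typescript
// Domain entity: the internal shape used inside the application layer.
interface Post {
  id: string;
  title: string;
  createdAt: Date;
  internalNotes: string; // something we never want to expose to clients
}

// Output DTO: the standardized shape returned by the use case.
interface PostDTO {
  id: string;
  title: string;
  createdAt: string; // ISO string instead of a Date object
}

function toPostDTO(post: Post): PostDTO {
  return {
    id: post.id,
    title: post.title,
    createdAt: post.createdAt.toISOString(),
  };
}

const dto = toPostDTO({
  id: "p1",
  title: "Hello",
  createdAt: new Date("2024-07-08T00:00:00Z"),
  internalNotes: "draft notes",
});
```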
I happily share this project with you all for your inspiration and use. Of course, this is far from a perfect project that includes all the rules and flows by the guidelines. But, regardless I hope you like it. I am also very open to some feedback and improvement. Please share your thoughts in the comments!
| evangunawan |
1,914,752 | Destiny or Coincidence: Based on Shannon Perry s Experiment. | TL;DR: I developed a project to determine if you have ever crossed paths with ‘that’ person before... | 26,735 | 2024-07-11T00:50:12 | https://chriisduran.medium.com/destiny-or-coincidence-based-on-shannon-perry-s-experiment-d43da0ac5529 | python, javascript, replit, codepen | **TL;DR:** I developed a project to determine if you have ever crossed paths with ‘that’ person before meeting them in your life based on your mobile.
## Background
I often read [Xataka](https://www.xataka.com/privacidad/cuantas-veces-coincidiste-tu-pareja-antes-conocerla-asi-se-pueden-usar-datos-google-maps-para-descubrirlo) to stay updated on the latest technological developments and news. One thing I read that has intrigued me the most is [Shannon Perry](https://twitter.com/Channon_Perry)'s experiment – she is a data analyst who used location data from her and her boyfriend's histories to determine if they had ever been close before meeting each other.
## How did she do it?
She exported her and her boyfriend’s data from _Google Takeout_. For more information, you can visit her website. The approach is simple yet fascinating: finding all instances where two people were close within a specific distance, on the same day, and at the same time.
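As a rough sketch of that matching idea (my own simplification, not Shannon's actual code), each pair of location records boils down to a distance check plus a time-window check; the 100 m / 15 min thresholds here are arbitrary assumptions:

```typescript
interface LocationPoint {
  lat: number; // degrees
  lon: number; // degrees
  timestamp: number; // milliseconds since epoch
}

// Haversine great-circle distance between two points, in meters.
function distanceMeters(a: LocationPoint, b: LocationPoint): number {
  const R = 6371000; // mean Earth radius in meters
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  const dLat = toRad(b.lat - a.lat);
  const dLon = toRad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(a.lat)) * Math.cos(toRad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

// Two records "coincide" when they are close both in space and in time.
function coincides(
  a: LocationPoint,
  b: LocationPoint,
  maxMeters = 100,
  maxMinutes = 15
): boolean {
  const closeInTime = Math.abs(a.timestamp - b.timestamp) <= maxMinutes * 60 * 1000;
  return closeInTime && distanceMeters(a, b) <= maxMeters;
}
```

Running `coincides` over every cross-pair of two exported histories (or, better, after bucketing points by day) yields the candidate near-misses.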
## Now it’s my turn
Considering the approach is straightforward, I was intrigued to develop a similar tool – easy to use and accessible for anyone interested in experimenting with it.
Although Shannon explains the procedures to obtain results, as a programmer, I set myself the challenge of doing it on my own, using my own logic.
## Tools Used:
- **Google Takeout**: Used Google Takeout to export my location history data. Once the export process was completed, Google provided a download link.
- **Replit**: Opted to run the code in the cloud to avoid overloading my laptop. While other options like Google Colab exist, I found Replit worked best for me with the free account. It's important to interact with the page regularly to prevent the session from timing out.
- **Firebase**: Since my JSON location records file was approximately 400 MB, I uploaded it to Firebase. I chose to process the JSON files via a URL instead of downloading the file directly.
- **CodePen**: Used CodePen to visualize the information obtained from Replit. I decided to use a heatmap to visualize most of the matches, along with options like lists and checkboxes to examine the positions of the two individuals in detail.
Let’s get to work.
1. Go to...
Visit my Medium [post](https://chriisduran.medium.com/destiny-or-coincidence-based-on-shannon-perry-s-experiment-d43da0ac5529) to keep reading... 😊 | chriisduran |
1,914,908 | Understanding self-assumption and scoped-down policy in AWS IAM | AWS IAM is a fundamental service for all 200+ AWS services, as it enables interaction with AWS... | 0 | 2024-07-08T21:13:55 | https://dev.to/aws-builders/understanding-self-assumption-and-scoped-down-policy-in-aws-iam-2io | aws, iam, security, githubactions | AWS IAM is a fundamental service for all 200+ AWS services, as it enables interaction with AWS principals. An AWS IAM role consists of two components: a policy and a trust relationship. The trust relationship handles authentication, while the policy is for authorization.

The trust relationship has rules specifying which AWS principals are allowed to assume the role. What does it mean to assume a role? In a nutshell, entities can use AWS STS to assume the role by calling `aws sts assume-role`. If an entity is able to assume the role, it can execute the actions specified in the attached policy. Therefore, it's important to follow best practices and choose suitable patterns when implementing IAM.
-
**Self-assumption**
Have you ever encountered a scenario where an IAM role assumes itself? It may sound awkward, yet it's real. An IAM role needs to be explicitly allowed to assume itself as it doesn't have self-assumption capabilities by default. It is to improve consistency and visibility of a role's privileges.
To elaborate, I have an IAM role `GHAction-Role` with `AssumeRoleWithWebIdentity` in its trust policy to authenticate GitHub Actions in AWS. Below are the role's trust policy and a GitHub Actions workflow, respectively.
```
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "GithubOidcAuth",
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
},
"Action": [
"sts:AssumeRole",
"sts:AssumeRoleWithWebIdentity"
],
"Condition": {
"StringLike": {
"token.actions.githubusercontent.com:sub": "repo:harik8/services:*"
}
}
}
]
}
```
```
STS:
runs-on: ubuntu-latest
needs: [CI]
steps:
- name: Git clone the repository
uses: actions/checkout@v4
- name: configure aws credentials
uses: aws-actions/configure-aws-credentials@v4
with:
role-to-assume: ${{ vars.IAM_ROLE_ARN }} # ARN of the GHAction-Role
aws-region: ${{ vars.AWS_REGION }}
```
Output of above action.
```
Run aws-actions/configure-aws-credentials@v4
with:
role-to-assume: arn:aws:iam::123456789012:role/GHAction-Role
aws-region: eu-north-1
audience: sts.amazonaws.com
Assuming role with OIDC
Authenticated as assumedRoleId AROASIGA2HTHJOXZFKTPL:GitHubActions
```
The GitHub workflow is able to assume the role using WebIdentity. However, if the GitHub workflow tries to perform `sts:AssumeRole` against the `GHAction-Role`, it will encounter an issue.
```
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/GHAction-Role --role-session-name GitHubActions
```
```
An error occurred (AccessDenied) when calling the AssumeRole operation: User: arn:aws:sts::12345678912:assumed-role/GHAction-Role/GitHubActions is not authorized to perform: sts:AssumeRole on resource: arn:aws:sts::12345678912:assumed-role/GHAction-Role/GitHubActions
```
The trust policy of `GHAction-Role` doesn't currently allow the role to assume itself. To resolve this, the `GHAction-Role`'s own ARN, `arn:aws:iam::123456789012:role/GHAction-Role`, must be trusted in the role's trust policy, as shown below.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GithubOidcAuth",
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::123456789012:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "token.actions.githubusercontent.com:sub": "repo:harik8/services:*"
        }
      }
    },
    {
      "Sid": "SelfAssumption",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/GHAction-Role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
Note that the self-assumption permission is a separate statement: the `token.actions.githubusercontent.com:sub` condition key is only present on web-identity requests, so a statement that applied it to a plain `sts:AssumeRole` call would never match.
**Scoped down policy**

If a given GitHub Action runs more than one job and requires different permissions for each, using a single `GHAction-Role` with maximum permissions is not a good design practice as it violates IAM's principle of least privilege. This is where scoped-down policies come into play.
A scoped-down policy refers to a policy that grants the minimum set of permissions required for a user, group, or role to perform their necessary tasks.
Instead of having one generic role, `GHAction-Role`, with all required policies attached, we should create a specific role for each job with the necessary least privileges. For example, we would have roles like `GHAction-Role-S3`, `GHAction-Role-EC2`, and `GHAction-Role-EKS`. These roles would be assumed by `GHAction-Role`.

So, the trust policy for the above roles will look like this:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "GHAction-S3",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:role/GHAction-Role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```
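A workflow job can then authenticate as the base role via OIDC and chain into the job-specific role. A sketch of such a job (the job name, region, and `role-chaining` input are illustrative; check the `aws-actions/configure-aws-credentials` documentation for your version):

```yaml
GHAction-S3-Job:
  runs-on: ubuntu-latest
  steps:
    - name: Authenticate as the base role via OIDC
      uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: arn:aws:iam::123456789012:role/GHAction-Role
        aws-region: eu-north-1
    - name: Chain into the scoped-down S3 role
      uses: aws-actions/configure-aws-credentials@v4
      with:
        role-to-assume: arn:aws:iam::123456789012:role/GHAction-Role-S3
        aws-region: eu-north-1
        role-chaining: true
```

This way, each step after the chaining runs with only the least privileges of the scoped role.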
Even though self-assumption is suitable for certain use cases, scoped-down policies generally provide more secure, controlled, and manageable permissions.
HAPPY ASSUMING! | harik8 |
1,914,799 | Enabling Controlled Folder Access in Windows 11! | Controlled Folder Access in Windows 11 : It is a security feature in Windows 11 designed to protect... | 0 | 2024-07-09T14:24:49 | https://winsides.com/how-to-enable-controlled-folder-access-in-windows-11/ | windows11, beginners, tutorials, tips | ---
title: Enabling Controlled Folder Access in Windows 11!
published: true
date: 2024-07-07 14:35:28 UTC
tags: Windows11,beginners, tutorials, tips
canonical_url: https://winsides.com/how-to-enable-controlled-folder-access-in-windows-11/
cover_image: https://winsides.com/wp-content/uploads/2024/07/Controlled-Folder-Access-in-Windows-11.webp
---
**Controlled Folder Access in Windows 11**: It is a **security feature** in Windows 11 designed to protect your files and folders from unauthorized changes by malicious applications, such as **[ransomware](https://dev.to/winsides/how-to-manage-ransomware-protection-in-windows-11-3780)**. A **protected folder** is a directory safeguarded from unauthorized changes by untrusted applications, especially malware and ransomware. When a folder is protected, only trusted and **whitelisted applications** can modify, delete, or create files within that folder. This helps prevent malicious software from encrypting or corrupting important data. Controlled Folder Access is a part of **Windows Defender Exploit Guard** and helps ensure that only trusted apps can access protected folders. This article walks through the simple steps to turn on Controlled Folder Access in Windows 11.
> ## Key Points
>
> - Open **Windows Settings**.
> - Navigate to **Privacy & Security**.
> - Click on **Manage Settings** under the **Virus & Threat Protection settings** section.
> - Enable **Controlled Folder Access**.
- Go to **Windows Settings** using the key combination **<kbd>Win Key</kbd> + <kbd>I</kbd>**.
- You can find **Privacy & Security** on the left pane. Click on that.

_Privacy & Security_
- Under **Security** , click on **Windows Security**.

_Windows Security_
- Now, click on **Virus & Threat Protection** under **Protection areas**.

_Virus & threat protection_
- Click on **Manage Settings** under Virus & Threat Protection Settings.

_Manage Settings_
- Scroll down and locate **Controlled Folder Access**. Click on **Manage Controlled Folder Access**.

_Manage Controlled Folder Access_
- Toggle the Controlled Folder Access switch to **ON**.
- **User Account Control** will open and ask for confirmation. “ **Do you want to allow this app to make changes in this Computer?** “. Click **Yes**.
- That is it. Controlled Folder Access is now enabled in Windows 11.

_Enable Controlled Folder Access_
## How to Allow an App through Controlled Folder Access in Windows 11?
This option allows users to **explicitly permit specific applications** to access and modify files within **protected folders**. This is essential when you have trusted applications that need to interact with the files in these protected folders but are being blocked by the Controlled Folder Access feature. **Some legitimate applications** may need access to files within protected folders to function correctly. By allowing these apps, you ensure they can perform necessary tasks without being hindered by security restrictions. The following are the steps.
- Click on “ **Allow an App through Controlled Folder Access** ” under Controlled Folder Access.

_Allow an app through Controlled folder access_
- User Account Control will prompt for confirmation. Click **Yes**.
- Now, click on **Add an allowed app**.

_Add an allowed app_
- You can find two options, “ **Recently Blocked Apps** “, and “ **Browse All Apps** “.

_Choose an App_
- If you have encountered any apps blocked recently, then click on Recently Blocked Apps, you can find the app over there, making it easy to choose. If not, click on Browse All Apps.
- Finally, choose the App that you want to allow through Controlled Folder Access. That is it!
### Controlled Folder Access – Things to know:

_Controlled Folder Access_
- By default, Controlled Folder Access protects common folders such as Documents, Pictures, Videos, Music, Favorites, and Desktop.
- Users can add **additional folders** to be protected or remove default-protected folders as needed.
- If an **unauthorized app** attempts to modify files in a protected folder, you will receive a notification from Windows Security.
- This feature is integrated with **Windows Defender Antivirus** , providing a **comprehensive security** solution without needing third-party software. **Check out: [How to Enable Real-Time Protection in Windows 11?](https://dev.to/winsides/enabling-real-time-protection-in-windows-11-15ha)**
## Take away:
**Controlled Folder Access in Windows 11** is an essential feature for enhancing your system’s security and safeguarding your data from potential threats. By enabling this feature, you can add an **extra layer** of protection to your valuable files and ensure they remain safe from malicious attacks. For more interesting articles, stay tuned to Winsides. **Safe Computing! Peace out!** | vigneshwaran_vijayakumar |
1,914,800 | How to Enable SmartScreen for Edge in Windows 11? | Turn on SmartScreen using Microsoft Edge Browser: Open Microsoft Edge Browser in Windows... | 0 | 2024-07-09T14:17:28 | https://winsides.com/enable-smartscreen-for-microsoft-edge-browser/ | windowssecurity, smartscreenformicros | ---
title: How to Enable SmartScreen for Edge in Windows 11?
published: true
date: 2024-07-07 16:06:39 UTC
tags: WindowsSecurity,SmartScreenforMicros
canonical_url: https://winsides.com/enable-smartscreen-for-microsoft-edge-browser/
cover_image: https://winsides.com/wp-content/uploads/2024/07/Windows-Defender-SmartScreen-for-Edge-Browser.webp
---
## Turn on SmartScreen using Microsoft Edge Browser:
- Open Microsoft Edge Browser in Windows 11.
- Click on **Settings and more** that is available in the top right corner of the Microsoft Edge. You can also use the shortcut **<kbd>Alt</kbd> + <kbd>F</kbd>**.

_Settings and more_
- Now, click on **Settings**.

_Settings_
- From the left pane, click on **Privacy, Search, and Services**.

_Privacy, Search, and Services_
- Scroll down and locate Microsoft Defender SmartScreen under Security. Toggle the switch to **enable Microsoft Defender SmartScreen in Edge Browser**.

_Enable Microsoft Defender SmartScreen in Edge Browser_
## Enable SmartScreen for Microsoft Edge Browser using Windows Settings:
This is an alternative, somewhat longer way to enable this feature in the Edge Browser. However, if you face any issues with the method above, you can try this one. SmartScreen for Apps is an integrated security feature available in the Windows 11 OS.
- Go to **Windows Settings** using the key combination **<kbd>Win Key</kbd> + <kbd>I</kbd>**.
- Click on **Privacy & Security**.

_Privacy-Settings_
- Under **Security** , click on **Windows Security**.

_Windows-Security_
- Open **App & Browser Control** that is available under **Protection areas**.

_App and Browser Control_
- App & Browser Control windows will open. Click on **Reputation-based protection settings**.

_Reputation-based Protection Settings_
- Scroll down and locate SmartScreen for Microsoft Edge. Toggle the switch to **ON** to **Enable SmartScreen for Microsoft Edge Browser**. That is it!

_Turn on SmartScreen for Edge Browser_
## Takeaway:
**SmartScreen for Microsoft Edge** is seamlessly integrated into Microsoft Edge Browser, providing a smooth and **unobtrusive security experience**. It significantly enhances your security by blocking access to **malicious websites and preventing harmful downloads**. This offers **real-time protection** by constantly updating its **database of threats**. **Happy Browsing! Peace out!** | vigneshwaran_vijayakumar |
1,914,890 | Browser locally uses AI to remove image backgrounds | Yo, so I've been digging into this whole AI thing for front-end development lately, and stumbled upon... | 0 | 2024-07-08T08:56:11 | https://chi.miantiao.me/posts/ai-remove-image-background/ | browser, ai | ---
title: Browser locally uses AI to remove image backgrounds
published: true
date: 2024-07-07 13:28:51 UTC
tags: Browser,AI
canonical_url: https://chi.miantiao.me/posts/ai-remove-image-background/
---
Yo, so I've been digging into this whole AI thing for front-end development lately, and stumbled upon this cool Transformers.js example. Turned it into a sweet little tool, check it out!
Basically, it uses Transformers.js in a WebWorker to tap into WebGPU and run this RMBG-1.4 model. Long story short, you can now use AI to nuke image backgrounds right in your browser. And get this, it only takes half a second to process a 4K image on my M1 PRO!
Here's the link to the tool: [https://html.zone/background-remover](https://html.zone/background-remover)
[](https://html.zone/background-remover)
* * *
Wanna build it yourself? Head over to [https://github.com/xenova/transformers.js/tree/main/examples/remove-background-client](https://github.com/xenova/transformers.js/tree/main/examples/remove-background-client) for the source code. Oh, and heads up, you gotta be on Transformers.js V3 to mess with WebGPU.
 | ccbikai |
1,914,892 | How to Replace Google Safe Browsing with Cloudflare Zero Trust | So, get this, right? I built the first version of L(O*62).ONG using server-side redirects, but Google... | 0 | 2024-07-08T09:03:56 | https://chi.miantiao.me/posts/google-safe-browsing-alternative/ | google, cloudflare, safebrowsing, zerotrust | ---
title: How to Replace Google Safe Browsing with Cloudflare Zero Trust
published: true
date: 2024-07-07 14:48:53 UTC
tags: Google,Cloudflare,SafeBrowsing,ZeroTrust
canonical_url: https://chi.miantiao.me/posts/google-safe-browsing-alternative/
---
So, get this, right? I built the first version of [L(O\*62).ONG](https://loooooooooooooooooooooooooooooooooooooooooooooooooooooooooooooo.ong/) using server-side redirects, but Google slapped me with a security warning the very next day. Talk about a buzzkill! I had to scramble and switch to local redirects with a warning message before sending folks on their way. Then came the fun part – begging Google for forgiveness.
Now, the smart money would've been on using Google Safe Browsing for redirects. But here's the catch: Safe Browsing's got a daily limit – 10,000 calls, and that's it. Plus, no custom lists. And since I'm all about keeping things simple and sticking with Cloudflare, Safe Browsing was a no-go.
Fast forward to a while back, I was chewing the fat with someone online, and bam! It hit me like a bolt of lightning. Why not use a secure DNS server with built-in filters for adult content and all that shady stuff to check if a domain's on the up-and-up? Figured I'd give [Family 1.1.1.1](https://blog.cloudflare.com/zh-cn/introducing-1-1-1-1-for-families-zh-cn/) a shot, and guess what? It actually worked! Problem was, no custom lists there either. Then I remembered messing around with Cloudflare Zero Trust Gateway back in my [HomeLab](https://www.awesome-homelab.com/) days. Turns out, that was the golden ticket – a solution so good, it's almost criminal.
**Here's the deal: Cloudflare Zero Trust's Gateway comes packing a built-in DNS (DoH) server and lets you set up firewall rules like a boss. You can block stuff based on how risky a domain is, what kind of content it has, and even use your own custom naughty-and-nice lists. And get this – it pulls data from Cloudflare's own stash, over 30 open intelligence sources, fancy machine learning models, and even feedback from the community. Talk about covering all the bases! Want the nitty-gritty? Hit up the [official documentation](https://developers.cloudflare.com/cloudflare-one/policies/gateway/domain-categories/#docs-content).**
So, I went ahead and blocked all the high-risk categories – adult stuff, gambling sites, government domains, anything NSFW, newly registered domains, you name it. Plus, I've got my own little blacklists and whitelists that I keep nice and tidy.

Once I was done tweaking the settings, I got myself a shiny new DoH address:

To hook it up to my project, I used this handy-dandy code:
```javascript
async function isSafeUrl(
  url,
  DoH = "https://family.cloudflare-dns.com/dns-query"
) {
  let safe = false;
  try {
    const { hostname } = new URL(url);
    const res = await fetch(`${DoH}?type=A&name=${hostname}`, {
      headers: {
        accept: "application/dns-json",
      },
      cf: {
        cacheEverything: true,
        cacheTtlByStatus: { "200-299": 86400 },
      },
    });
    const dnsResult = await res.json();
    if (dnsResult && Array.isArray(dnsResult.Answer)) {
      const isBlock = dnsResult.Answer.some(
        answer => answer.data === "0.0.0.0"
      );
      safe = !isBlock;
    }
  } catch (e) {
    console.warn("isSafeUrl fail: ", url, e);
  }
  return safe;
}
```
And here's the kicker: Cloudflare Zero Trust's management panel has this sweet visualization interface that lets you see what's getting blocked and what's not. You can see for yourself – it's got the kibosh on some adult sites and those brand-spanking-new domains.

Oh, and if a domain ends up on the wrong side of the tracks, you can always check the log to see what went down.

 | ccbikai |
1,914,902 | It's time to ditch the complicated and slow Python setup | A guide and tutorial on Rye | 0 | 2024-07-11T14:03:06 | https://dev.to/assertnotnull/its-time-to-ditch-the-complicated-and-slow-python-setup-5ee8 | python | ---
title: It's time to ditch the complicated and slow Python setup
published: true
description: A guide and tutorial on Rye
tags: python
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-07-07 19:20 +0000
---
At my work I was faced with a complicated installation process for Python. The Python version of the project was older than the one shipped by my OS, and I wasted a morning setting it right. So I wondered whether there was a more straightforward way to install it, and a more developer-friendly approach to managing dependencies than manually pinning versions in requirements.txt. I looked for a solution and I found one. It's called Rye.
## Rye: The Modern Python Project Management Tool
Rye is an awesome project management tool for Python that aims to simplify dependency management and project set-up. It offers a user-friendly approach to handling Python projects, making it easier for developers to manage their work efficiently.
## Key Features of Rye
- **Simplified Dependency Management**: Rye streamlines the process of managing project dependencies.
- **Virtual Environment Handling**: It automatically creates and manages virtual environments for your projects.
- **Project Initialization**: Rye provides easy-to-use commands for setting up new projects or integrating with existing ones.
- **Consistent Environments**: Ensures consistency across different development environments.
## Setting it up in a New Project
To start a new project with Rye, follow these steps:
1. **Install it**: [Follow your OS instructions](https://rye.astral.sh/guide/installation/). It will ask which shims you want to use (choose Rye's own) and then which Python version to use, as Rye manages Python versions for you.
2. **Initialize a python project**:
```
rye init <directory_name>
```
This command will create some files including the new `pyproject.toml` configuration file.
3. **Install the initial dependencies**:
```
rye sync
```
Running this command sets up a virtual environment for your project in `.venv` and installs the dependencies specified in the pyproject.toml. By default it uses `uv` (a fast Python packaging tool written in Rust) or, alternatively, `pip-tools`. Notice that there's no requirements.txt; it's all handled by Rye now using lock files.
4. **Add Dependencies**: Use Rye to add any necessary dependencies:
```
rye add package_name
```
5. **Sync again to install dependencies**
```
rye sync
```
6. **That's it!**: Just repeat the `sync` step when adding dependencies or modifying the project file!
## Integrating Rye into an Existing Project
If you have an existing Python project that uses a `requirements.txt` file, you can easily move to Rye. Here's how:
1. **Initialize Rye with Existing Requirements**:
```
rye init -r requirements.txt
```
This command will:
- Create a `pyproject.toml` file and include your dependencies in it.
- Set up a `.venv` virtual environment (or it will ask if one exists already)
2. **Review and Adjust**: Check the newly created `pyproject.toml` file and make any necessary adjustments. Delete the `requirements.txt`, as it is now unused.
3. **Update Your Development Process**: Start using Rye commands for managing dependencies and your development environment.
## Using Rye to manage Python versions
Rye is your main tool for managing the Python ecosystem: it downloads and pins interpreter versions for you (for example, `rye pin 3.12` records the version a project should use).
## Best Practices When Using Rye
- Use `rye run` to execute scripts in your project's environment
- Leverage `rye add` and `rye remove` for managing project dependencies
- Commit your `pyproject.toml` file to version control, but exclude the `.venv` directory
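For instance, `rye run` executes commands declared under `[tool.rye.scripts]` in `pyproject.toml`; a minimal sketch (the script names and dev dependency here are illustrative):

```toml
[tool.rye]
managed = true
dev-dependencies = ["pytest>=8.0"]

[tool.rye.scripts]
test = "pytest"
lint = "ruff check ."
```

After a `rye sync`, `rye run test` runs pytest inside the project's `.venv` without activating it manually.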
## A 15 min walkthrough by the author
{% embed https://youtu.be/q99TYA7LnuA %}
## Conclusion
Rye offers a modern and efficient approach to Python project management. Whether you're starting a new project or integrating it into an existing one, Rye simplifies many aspects of Python development. By following the steps outlined above, you can harness the power of Rye to streamline your Python project workflow. | assertnotnull |
1,915,061 | 4 Database tips for Java Developers 💡 | Introduction 🔍 Understanding databases is essential for Java developers, as they... | 0 | 2024-07-08T00:18:44 | https://dev.to/joaomarques/4-database-tips-for-java-developers-2hg9 | ### Introduction
🔍 Understanding databases is essential for Java developers, as they frequently handle critical data such as user information, past actions, application settings, and feature flags. As applications expand, databases must scale to meet performance demands and ensure efficient data management. For insights into scaling databases effectively, refer to my [article on optimizing bank data growth with sharding architecture.](https://dev.to/joaomarques/optimizing-bank-data-growth-with-sharding-architecture-3pnf)
🚀 Drawing from my experience, I've written 4 essential tips to enhance your database practices in Java.
### 1. Always use an ORM Framework (Please do that, think about your future dev colleagues 😅)
💡 **Why:** ORM frameworks like Hibernate or JPA help manage database interactions efficiently, reduce boilerplate code, and help protect against SQL injection through parameterized queries.
Trust me, large applications without an ORM are very hard to maintain, and even simple changes take more time than you'd expect.
💻 **Example:** Using Hibernate with JPA annotations.
```java
import javax.persistence.*;

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "username", nullable = false, unique = true)
    private String username;

    @Column(name = "email", nullable = false, unique = true)
    private String email;

    // Getters and setters omitted (I'd rather use Lombok)
}
```
**Repository:**
```java
// Repository class using Spring Data JPA
import org.springframework.data.jpa.repository.JpaRepository;

public interface UserRepository extends JpaRepository<User, Long> {
    User findByUsername(String username);
}
```
### 2. Validate Input Data
💡 **Why:** Validating data ensures that only correct and expected data is stored, preventing corruption and errors.
Make sure your services are ready to catch validation and constraint exceptions coming from the database.
💻 **Example:** Using Hibernate Validator.
```java
import javax.persistence.*;
import javax.validation.constraints.*;

@Entity
@Table(name = "users")
public class User {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @NotNull
    @Size(min = 5, max = 15)
    private String username;

    @NotNull
    @Email
    private String email;

    // Getters and setters omitted (I'd rather use Lombok)
}
```
### 3. Implement Database Transactions
💡 **Why:** Transactions ensure that a group of database operations either commits as a whole or rolls back entirely if an exception occurs during execution.
💻 **Example:** Using Spring's `@Transactional` annotation.
In this example, if `userActivityRepository.save(activity)` throws an exception, `userRepository.save(user)` will be rolled back.
```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class UserService {

    @Autowired
    private UserRepository userRepository;

    @Autowired
    private UserActivityRepository userActivityRepository;

    @Transactional
    public void createUserAndSaveActivity(User user, UserActivity activity) {
        userRepository.save(user);
        userActivityRepository.save(activity);
    }
}
```
### 4. Monitor Queries
💡 **Why:** Monitoring queries shows your team which parts of the application are causing bottlenecks.
The team can then create Jira tickets to work on the performance of those bottleneck queries.
💻 **Example:** Using the `@Timed` annotation (from Micrometer, the metrics library used by Spring) to monitor query performance.
```java
import io.micrometer.core.annotation.Timed;
import org.springframework.stereotype.Repository;

import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;
import java.util.List;

@Repository
public class UserRepository {

    // PersistenceContext is used here only for the example
    @PersistenceContext
    private EntityManager entityManager;

    @Timed
    public List<User> getUsersWithOptimizedQuery(String role) {
        return entityManager.createQuery(
                "SELECT u FROM User u WHERE u.role.name = :roleName", User.class)
            .setParameter("roleName", role)
            .getResultList();
    }
}
```
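To surface query timings without writing any code, Hibernate's built-in statistics can also be enabled. A possible Spring Boot configuration (verify the property names against your Spring Boot and Hibernate versions; the 250 ms threshold is an arbitrary example):

```properties
# application.properties
spring.jpa.properties.hibernate.generate_statistics=true
logging.level.org.hibernate.stat=DEBUG

# Hibernate 5.4+: log any query slower than the given threshold (ms)
spring.jpa.properties.hibernate.session.events.log.LOG_QUERIES_SLOWER_THAN_MS=250
```

With these in place, slow queries show up directly in the application log, which is a cheap first step before wiring up full metrics dashboards.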
📈 Implementing these strategies is just the basis for keeping your database clean and efficient when working with Java applications, which can grow at any time.
| joaomarques | |
1,915,068 | (JavaScript Proxy vs Object.defineProperty) What lies behind reactivity in Vue.js, and how does it work? | Assalamu alaykum, Uzbek Vue community! Let's look today at how reactivity works in Vue.js,... | 0 | 2024-07-08T00:20:53 | https://dev.to/mukhriddinweb/vuejs-dagi-reaktivlikni-ortida-nima-turadi-va-qanday-ishlaydi--p4g | javascript, vue, nuxt, mukhriddinweb |

Assalamu alaykum, Uzbek Vue community! Today, let me explain to you how reactivity works in Vue.js :)
Bismillah!
It all comes down to **JavaScript**: `Object.defineProperty` and `Proxy` are two different mechanisms used in JavaScript to manipulate objects and listen for changes on them. Let's talk about how each of these is applied in the **Vue 2** and **Vue 3** reactivity systems.
### `Object.defineProperty`
#### Characteristics
- **Tracking property state**: `Object.defineProperty` is used to observe properties that already exist on an object. It lets you define getters and setters for reading and writing those properties.
- **This method has a limitation**: it only works for existing properties, and properties added later are not tracked automatically.
- **Usage in Vue 2**: The Vue 2 reactivity system is built on `Object.defineProperty`. It tracks changes by creating a getter and a setter for every property.

#### Example
```javascript
let data = {};
let count; // backing storage for the tracked property

Object.defineProperty(data, 'count', {
  get() {
    console.log('Getting value');
    return count;
  },
  set(newValue) {
    console.log('Setting value');
    count = newValue;
  }
});

data.count = 5; // Setting value
console.log(data.count); // Getting value
```
### `Proxy`

#### Characteristics
- **Observing the whole object**: `Proxy` enables observation at the object level: you can "proxy" the entire object, and any access to or modification of its properties is observed.
- **Coverage**: `Proxy` also ensures that newly added or removed properties are observed.
- **Usage in Vue 3**: The Vue 3 reactivity system is built on `Proxy`. Compared to Vue 2, this enables more efficient and broader observation.
#### Example
```javascript
let data = {
  count: 0
};

let proxyData = new Proxy(data, {
  get(target, property) {
    console.log(`Getting ${property}`);
    return target[property];
  },
  set(target, property, value) {
    console.log(`Setting ${property} to ${value}`);
    target[property] = value;
    return true;
  }
});

proxyData.count = 5; // Setting count to 5
console.log(proxyData.count); // Getting count
```
### Comparing `Object.defineProperty` and `Proxy`
- **Scope of observation**: `Object.defineProperty` only observes existing properties; newly added properties are hard to track automatically. `Proxy` observes the entire object, including newly added properties.
- **Performance**: `Proxy` can be more efficient, since it proxies the object once rather than handling it property by property.
- **Flexibility**: `Proxy` is more flexible, allowing a wider range of operations (for example, deleting a property) to be observed.
### Differences in the Vue reactivity system
- **Vue 2**: built on `Object.defineProperty`. This brings some limitations to Vue 2's reactivity system; for example, tracking newly added properties is problematic.
- **Vue 3**: built on `Proxy`. This makes Vue 3's reactivity system much more powerful and efficient, including easy observation of new properties and complex structures.
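This scope difference can be demonstrated with a small runnable sketch in plain JavaScript (an illustration of the mechanism, not Vue's actual source code):

```javascript
// Vue 2-style: wrap only the keys that exist at observation time.
const dpLog = [];
function observeWithDefineProperty(obj) {
  for (const key of Object.keys(obj)) {
    let value = obj[key];
    Object.defineProperty(obj, key, {
      get() { return value; },
      set(v) { dpLog.push(`set ${key}`); value = v; }
    });
  }
  return obj;
}

// Vue 3-style: one Proxy trap covers every key, present or future.
const proxyLog = [];
function observeWithProxy(obj) {
  return new Proxy(obj, {
    set(target, key, value) {
      proxyLog.push(`set ${String(key)}`);
      target[key] = value;
      return true;
    }
  });
}

const a = observeWithDefineProperty({ count: 0 });
a.count = 1; // tracked
a.extra = 2; // NOT tracked: the key did not exist when observe ran

const b = observeWithProxy({ count: 0 });
b.count = 1; // tracked
b.extra = 2; // also tracked

console.log(dpLog);    // [ 'set count' ]
console.log(proxyLog); // [ 'set count', 'set extra' ]
```

This is exactly why Vue 2 needed `Vue.set` for newly added properties, while Vue 3's `reactive()` does not.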
The examples above show how the Vue 2 and Vue 3 reactivity systems work and which technologies they use. These differences make Vue 3 more powerful and flexible than Vue 2.
Below, let's go through this in more detail with pictures!
Barakalloh fiikum!

































Bizni tarmoqlarda kuzatishingiz mumkin va maqola foydali bo'lsa izoh va Vuechi do'stlaringizga ulashing. 🫡
🔗 https://t.me/mukhriddinweb
🔗 https://medium.com/@mukhriddinweb
🔗 https://dev.to/mukhriddinweb
🔗 https://khodieff.uz
🔗 https://github.com/mukhriddin-dev
🔗 https://linkedin.com/in/mukhriddin-khodiev
🔗 https://youtube.com/@mukhriddinweb
| mukhriddinweb |
1,915,069 | Building Flexible and Maintainable Go-Lang Apps | In software development, Dependency Injection (DI) is one of the fundamental principles that help... | 0 | 2024-07-08T00:26:19 | https://dev.to/dyaksaa_/building-flexible-and-maintainable-go-lang-apps-56kn | webdev, go, programming, tutorial | In software development, Dependency Injection (DI) is one of the fundamental principles that help build flexible and maintainable applications. In this article, we will discuss the use of Dependency Injection in Go-Lang and how the Wire tool can help us configure dependencies easily.
**What is Dependency Injection?**
Dependency Injection (DI) is a commonly used software design pattern for managing dependencies between the components that make up an application. When we build software, we often break our code into smaller, isolated components that interact with each other to provide certain functionality. These components have dependencies on each other, called dependencies.
First of all, let us understand why we need to use Dependency Injection. As an application grows, the dependency graph becomes increasingly complex. This can lead to cumbersome initialisation and it is difficult to split the code cleanly, especially when some dependencies are used multiple times. In addition, managing dependencies manually can be time-consuming and difficult to make changes to code, test functionality with different dependencies, and follow code traces.
Dependency Injection allows us to separate the logic of building objects from the logic of using those objects. Basically, dependencies are provided or injected into objects through constructors or parameters. This allows us to build applications that are better managed, easier to test, and more flexible.
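Before looking at Wire, it helps to see what constructor injection looks like in plain Go. A small self-contained sketch (the types and names here are illustrative, not from a real project):

```go
package main

import "fmt"

// Notifier is the dependency's contract.
type Notifier interface {
	Send(msg string) string
}

// EmailNotifier is one concrete implementation.
type EmailNotifier struct{}

func (EmailNotifier) Send(msg string) string { return "email: " + msg }

// UserService receives its dependency through the constructor
// instead of building it internally.
type UserService struct {
	notifier Notifier
}

func NewUserService(n Notifier) *UserService {
	return &UserService{notifier: n}
}

func (s *UserService) Welcome(name string) string {
	return s.notifier.Send("welcome " + name)
}

func main() {
	svc := NewUserService(EmailNotifier{}) // the dependency is injected here
	fmt.Println(svc.Welcome("alice"))
}
```

Because `UserService` depends only on the `Notifier` interface, a test can inject a fake implementation without touching `UserService` itself. Wire automates exactly this kind of wiring once the object graph grows.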
**Using Dependency Injection in Go-Lang**
Go-Lang, or Go, is a programming language designed to build efficient, simple, and maintainable applications. Go-Lang has inbuilt support for Dependency Injection and provides tools like Wire that can help us configure dependencies easily.
**Why Use Wire?**
Wire is a Dependency Injection tool developed by the Google team. It is based on compile-time code processing, which means we can configure dependencies at compile-time and avoid using complex reflection. In this sense, Wire can help us produce more efficient and maintainable code.
Wire also provides features such as code static analysis, cyclic dependency detection, and organised dependency grouping. This allows us to better manage dependencies and make our code more structured.
**Installing Wire**
The first step to using Wire is to install it. To install Wire, we can use the go get command:
`go get github.com/google/wire`
Once Wire is installed, we can start configuring the dependencies in our Go-Lang application.
**Configuring Dependencies with Wire**
To configure dependencies using Wire, we need to create a wire.go file in our project directory. This file will be used by Wire to generate the code required to configure dependencies.
Here are the steps to configure dependencies using Wire:
**1. Make File wire.go**
Create a new file called wire.go in your project directory. This file will be the configuration file that will be used by Wire.
**2. Import Package Wire**
Add the following line at the top of the wire.go file to import the Wire package:
`import "github.com/google/wire"`
**3. Define Dependency Injection Function**
Next, we need to define a function that will be used by Wire to inject dependencies. This function must have the name Initialize and return the data type of the object that the dependency will be injected into.
For example, if we want to inject dependencies into struct UserService, we can define the InitializeUserService function as follows:
```
func InitializeUserService() *UserService {
	// Configure dependencies here
	return &UserService{}
}
```
**4. Using the Build() Function**
After defining the Initialize function, we need to use the Build() function of the Wire package to generate the code needed to configure the dependencies.
Add the following line at the end of the wire.go file:
```
func main() {
wire.Build(InitializeUserService)
}
```
**5. Running Wire**
Once the wire.go file is configured, we can run Wire to generate the necessary code.
Open a terminal or command prompt, navigate to your project directory, and run the following command:
`wire`
Wire will generate a wire_gen.go file that contains the code needed to configure the dependencies.
**Using Configured Dependencies**
Once Wire generates the wire_gen.go file, we can use the configured dependencies.
The following example shows how to use the configured UserService dependencies using Wire:
```
func main() {
userService := InitializeUserService()
 // Use userService here
}
```
We can use the userService object configured by Wire according to our application needs.
**Conclusion**
Using Dependency Injection in Go-Lang application development can help us build more flexible, maintainable and well-organised applications. Tools like Wire can help us configure dependencies easily and generate more efficient code.
By using Dependency Injection, we can separate the logic of building objects from the logic of using those objects. This allows us to make changes to dependencies more easily, test code with different dependencies, and make our code more structured and maintainable.
So, if you’re building a Go-Lang application, consider using Dependency Injection and tools like Wire to better manage your dependencies. This way, you’ll be able to build more flexible, maintainable, and efficient applications.
| dyaksaa_ |
1,915,070 | Deploying a windows 11 VM, Windows Server & a Linux VM | Step 1: To access the Azure portal, we must provide our login credentials which include our username... | 0 | 2024-07-08T01:43:30 | https://dev.to/bdporomon/deploying-a-windows-11-vm-windows-server-a-linux-vm-1m9b | webdev, beginners, programming, devops | Step 1: To access the Azure portal, we must provide our login credentials, which include our username and password.
Step 2: In the "Search resources, services and docs" field, type and click “virtual machines”.
Step 3: Click the “Create” button to start the virtual machine creation process then Create a virtual machine hosted by Azure.
Step 4: Select the appropriate subscription and create a resource group by clicking the create resource group button and giving it a name.

Step 5: Give the VM a name and choose either Ubuntu Server 20.04 or Windows 11; leave every other option as default.

Step 6: Create an Administrator Account. We need to provide a username and a password, which we will use to connect to the Virtual Machine.
Step 7: Select the inbound port rule as SSH if we selected a Linux VM, and RDP if it's a Windows VM.

Step 8: Check the licensing. By default, this is unchecked, click the box to have it checked.
Step 9: Click “Next” till we get to boot diagnostics in the Monitoring tab, and click on “disable”.
Step 10: Click on “Review + Create” button, if the validation passes, the deployment will go on.

Step 11: Once the virtual machine has been deployed, we can access it by clicking on the "Connect" button in the virtual machine blade in the Azure portal.

For Windows: Click Native RDP, click Select, wait for the "Configured" status to be displayed on the right-hand side, and then download the RDP file.

Open the RDP file from the local computer and click "Connect". Enter the Admin details created during the process of creating the VM.
Click on the “connect” button displayed to initiate connection.
Enter the username and password created for the Admin section.
Wait for the remote pc to be configured.
Open Powershell in the remote pc.
Run the following command to install the IIS role and management tools:
Install-WindowsFeature -name Web-Server -IncludeManagementTools
If you need additional role services, you can specify them by feature name, for example: Install-WindowsFeature -Name Web-Server, Web-Asp-Net45 (here Web-Asp-Net45 adds ASP.NET 4.5 support).
You can verify that IIS has been installed by opening a web browser and pasting the IP address of the VM into the address bar.

For a Linux Server: Open Powershell.
Log in with the initially created username and password.
Make sure that you are logged in as root by using the command - sudo su
Type the command - apt install nginx -y
apt = the package manager on Debian/Ubuntu-based systems, used here to install nginx
install = the verb: the action we want the package manager to perform
nginx = the package we want to install on the VM
-y = a flag that automatically answers "yes" to any confirmation prompts

We can verify this installation by pasting the IP address of the VM into a browser

For AWS: type "EC2" in the search bar and select "EC2" from the dropdown.
Click on the "Launch Instance" button.
Name the server.

Choose an Amazon Machine Image (AMI):
In the "Choose an Amazon Machine Image (AMI)" step, select "Ubuntu Server 20.04 LTS"

Select an instance type based on your requirements. The "t2.micro" instance is a good option for testing and is eligible for the free tier.
Create a New Key Pair: Enter a key pair name and then use the defaults for the key pair type and private key file format.
Scroll down to the firewall section and check the box next to allow HTTP traffic from the internet.

Click "Launch Instance."

Connect to Your EC2 Instance: When the terminal opens, make sure that you are logged in as root by using the command - sudo su
Type the command - apt install nginx -y
apt = the package manager on Debian/Ubuntu-based systems, used here to install nginx
install = the verb: the action we want the package manager to perform
nginx = the package we want to install on the VM
-y = a flag that automatically answers "yes" to any confirmation prompts

We can verify this installation by pasting the IP address of the VM into a browser

| bdporomon |
1,915,071 | Exploring Advanced Features of TypeScript | Introduction: TypeScript is a superset of JavaScript that offers advanced features and advantages to... | 0 | 2024-07-08T00:34:52 | https://dev.to/kartikmehta8/exploring-advanced-features-of-typescript-807 | Introduction:
TypeScript is a superset of JavaScript that offers advanced features and advantages to enhance the development experience. It provides static typing, interfaces, classes, and other features to help developers write more robust and error-free code. In this article, we will explore some of the advanced features of TypeScript and the benefits they offer.
Advantages:
1. Static Typing: TypeScript allows developers to define types for variables, parameters, and return values, providing better code predictability and reducing runtime errors.
2. Code Organization: Using interfaces and classes, TypeScript provides a more structured approach for organizing code, making it easier to maintain and understand.
3. Tool Support: TypeScript integrates with popular code editors, providing features like code completion, refactoring, and error highlighting, making development more efficient.
Disadvantages:
1. Steep Learning Curve: TypeScript has a steeper learning curve compared to JavaScript, as it adds new syntax and concepts.
2. Complex Configurations: Setting up and configuring TypeScript projects can be more complex and time-consuming.
Features:
1. Decorators: TypeScript offers decorators, a language feature for annotating and modifying class declarations and their members, allowing developers to add extra functionality to a class without changing its implementation.
2. Generics: Generics allow developers to write reusable code for different types, similar to templates in other programming languages.
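As a quick, hedged illustration of generics, here is a minimal sketch (the `Box` interface and `firstElement` function are illustrative names, not part of any library):

```
function firstElement<T>(arr: T[]): T | undefined {
  // Works for arrays of any element type T
  return arr[0];
}

interface Box<T> {
  // A reusable container whose payload type is chosen by the caller
  value: T;
}

const numberBox: Box<number> = { value: 42 };
console.log(firstElement(["Ada", "Grace"])); // "Ada"
console.log(numberBox.value); // 42
```

The compiler infers `T` from the argument, so `firstElement` stays type-safe without a separate version per element type.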
Conclusion:
In conclusion, TypeScript offers powerful features, such as static typing, interfaces, classes, and decorators, providing developers with a more structured and efficient approach to building applications. While it may have a steeper learning curve and require more complex configurations, the benefits it offers make it a valuable tool for developers to explore and utilize. | kartikmehta8 | |
1,915,073 | Vue3 da ( ref va reactive) farqi | Vue3-daref va reactive hook-larini tanlashda qaysi biri qulayroq ekanligini aniqlashda, ularning... | 0 | 2024-07-08T00:53:30 | https://dev.to/mukhriddinweb/vue3-da-ref-va-reactive-farqi-1bme | vue, javascript, php, mukhriddinweb | **Vue3-da**`ref` va `reactive` hook-larini tanlashda qaysi biri qulayroq ekanligini aniqlashda, ularning farqlarini va qanday holatlarda foydalanishni tushunish kerak . Har ikkala hook ham reaktiv ma'lumotlar yaratish uchun ishlatiladi, lekin ularning ishlash usuli va qo'llanilishi jichcha farq qiladi.
### `ref`
#### Advantages
1. **Suited to simple values**: `ref` is mainly convenient for primitive types (string, number, boolean), for example simple values such as `count` or `message`.
2. **Referencing DOM elements**: `ref` is used to store DOM elements and refer to them, e.g. `<div ref="myDiv"></div>`.
3. **Easy access to the value**: when working with `ref`, you read and change the value through `.value`.
#### Example
```javascript
import { ref } from 'vue';
const count = ref(0);
count.value++; // Increment the value
```
### `reactive`
#### Advantages
1. **Suited to complex data structures**: `reactive` is convenient for data with a complex structure, such as an object or an array. It makes the entire object or array reactive.
2. **Working with objects**: `reactive` makes all of an object's properties reactive, so you can access and change the properties directly.
#### Example
```javascript
import { reactive } from 'vue';
const state = reactive({
count: 0,
name: 'Vue'
});
state.count++; // Increment the value
state.name = 'Vue 3'; // Change the property
```
### Differences between `ref` and `reactive`
1. **Value type**:
   - `ref` suits simple values, which are accessed through `.value`.
   - `reactive` suits objects or arrays with complex state, whose properties are accessed directly.
2. **Use cases**:
   - `ref` is used for primitive types (string, number, boolean) and DOM elements.
   - `reactive` is used for complex structures such as objects or arrays.
3. **Reactivity**:
   - `ref` makes only a single value reactive.
   - `reactive` makes a whole object or array reactive, including all of its properties.
### When each choice is convenient
- If you have a simple value, or you need a reference to a DOM element, use `ref`.
- If you have an object or an array with many properties, use `reactive`.
### A combined example
Below is an example of using `ref` and `reactive` together:
```vue
<template>
<div>
<p>Message: {{ message }}</p>
<p>Todos:</p>
<ul>
<li v-for="todo in todos" :key="todo.id">{{ todo.text }}</li>
</ul>
<input v-model="newTodoText" placeholder="New todo" />
<button @click="addTodo">Add Todo</button>
</div>
</template>
<script setup>
import { ref, reactive } from 'vue';
const message = ref('Hello, Vue 3!');
const todos = reactive([
{ id: 1, text: 'Learn Vue 3' },
{ id: 2, text: 'Build something awesome' }
]);
const newTodoText = ref('');
function addTodo() {
todos.push({ id: todos.length + 1, text: newTodoText.value });
newTodoText.value = '';
}
</script>
```
This example shows how the `ref` and `reactive` hooks can be used together. The choice depends on the kind of data you are working with.

**PS: Why does the image above say that? 🤔🤔🫢🫢🙄🙄🙄😩😫😫 I'll answer that in the video lesson :)**
_You can follow us on the channels below, and if this article was useful, leave a comment and share it with your Vue friends. 🫡_
🔗 https://t.me/mukhriddinweb
🔗 https://medium.com/@mukhriddinweb
🔗 https://dev.to/mukhriddinweb
🔗 https://khodieff.uz
🔗 https://github.com/mukhriddin-dev
🔗 https://linkedin.com/in/mukhriddin-khodiev
🔗 https://youtube.com/@mukhriddinweb | mukhriddinweb |
1,915,074 | Mastering Multithreading in C Programming: A Deep Dive with In-Depth Explanations and Advanced Concepts | Introduction: Multithreading in C programming enables developers to harness the full... | 0 | 2024-07-08T01:06:39 | https://dev.to/vivekyadav200988/mastering-multithreading-in-c-programming-a-deep-dive-with-in-depth-explanations-and-advanced-concepts-245g | multithreading, posix, c, embedded | ## Introduction:
Multithreading in C programming enables developers to harness the full potential of modern multicore processors, facilitating concurrent execution of tasks within a single process. This comprehensive guide explores fundamental multithreading concepts, synchronization mechanisms, and advanced topics, providing detailed explanations and sample code for each concept.
### 1. Understanding Threads:
Threads are independent sequences of execution within a process, allowing for concurrent execution of tasks. Understanding thread creation, management, and states is crucial for effective multithreading.
**Thread Creation:**
**pthread_create()**: Initializes a new thread and starts its execution.
**pthread_join()**: Waits for a thread to terminate before proceeding.
```
#include <stdio.h>
#include <pthread.h>
void *threadFunc(void *arg) {
printf("Hello from the new thread!\n");
pthread_exit(NULL);
}
int main() {
pthread_t tid;
pthread_create(&tid, NULL, threadFunc, NULL);
pthread_join(tid, NULL);
printf("Back to the main thread.\n");
return 0;
}
```
### 2. Synchronization and Mutual Exclusion:
Race conditions occur when multiple threads access shared resources concurrently, leading to unpredictable behavior. Synchronization mechanisms such as mutexes, semaphores, and condition variables ensure thread safety.
**Mutexes (Mutual Exclusion):**
Mutexes provide mutual exclusion, allowing only one thread to access a shared resource at a time. They prevent data corruption and ensure consistent behavior.
```
#include <stdio.h>
#include <pthread.h>
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
int sharedVariable = 0;
void *threadFunc(void *arg) {
pthread_mutex_lock(&mutex);
sharedVariable++;
printf("Thread incremented sharedVariable to: %d\n", sharedVariable);
pthread_mutex_unlock(&mutex);
pthread_exit(NULL);
}
int main() {
pthread_t tid;
pthread_create(&tid, NULL, threadFunc, NULL);
pthread_mutex_lock(&mutex);
sharedVariable--;
printf("Main thread decremented sharedVariable to: %d\n", sharedVariable);
pthread_mutex_unlock(&mutex);
pthread_join(tid, NULL);
return 0;
}
```
**Semaphores:**
Semaphores are synchronization primitives used to control access to shared resources and coordinate the execution of multiple threads. They maintain a count to limit the number of threads accessing the resource simultaneously.
```
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>
sem_t semaphore;
void *threadFunc(void *arg) {
sem_wait(&semaphore);
printf("Thread acquired semaphore\n");
// Critical section
sem_post(&semaphore);
pthread_exit(NULL);
}
int main() {
pthread_t tid;
sem_init(&semaphore, 0, 1); // Initialize semaphore with value 1
pthread_create(&tid, NULL, threadFunc, NULL);
// Main thread
sem_wait(&semaphore);
printf("Main thread acquired semaphore\n");
// Critical section
sem_post(&semaphore);
pthread_join(tid, NULL);
return 0;
}
```
### 3. Thread Communication:
Thread communication facilitates coordination and synchronization between threads. Condition variables allow threads to wait for specific conditions to be met.
**Condition Variables:**
Condition variables enable threads to wait for a specific condition to occur. They are commonly used in producer-consumer scenarios, where a thread waits for data availability before proceeding.
```
#include <stdio.h>
#include <pthread.h>
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t condVar = PTHREAD_COND_INITIALIZER;
int dataReady = 0;
void *producer(void *arg) {
pthread_mutex_lock(&mutex);
dataReady = 1;
pthread_cond_signal(&condVar);
pthread_mutex_unlock(&mutex);
pthread_exit(NULL);
}
void *consumer(void *arg) {
pthread_mutex_lock(&mutex);
while (!dataReady) {
pthread_cond_wait(&condVar, &mutex);
}
printf("Consumer: Data is ready!\n");
pthread_mutex_unlock(&mutex);
pthread_exit(NULL);
}
int main() {
pthread_t producerThread, consumerThread;
pthread_create(&producerThread, NULL, producer, NULL);
pthread_create(&consumerThread, NULL, consumer, NULL);
pthread_join(producerThread, NULL);
pthread_join(consumerThread, NULL);
return 0;
}
```
### 4. Advanced Concepts:
Advanced topics such as priority inversion, starvation, deadlock, and spinlock are critical for building robust multithreaded applications.
**Priority Inversion:**
Priority inversion occurs when a low-priority thread holds a resource required by a high-priority thread, so the high-priority thread is effectively forced to wait behind lower-priority work. The priority inheritance protocol helps mitigate this issue by temporarily raising the priority of the low-priority thread to that of the highest-priority thread waiting on the resource.
```
#include <stdio.h>
#include <pthread.h>
pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;
void *highPriorityThread(void *arg) {
pthread_mutex_lock(&mutex1);
pthread_mutex_lock(&mutex2);
// Perform high-priority task
pthread_mutex_unlock(&mutex2);
pthread_mutex_unlock(&mutex1);
pthread_exit(NULL);
}
void *lowPriorityThread(void *arg) {
pthread_mutex_lock(&mutex2);
pthread_mutex_lock(&mutex1);
// Perform low-priority task
pthread_mutex_unlock(&mutex1);
pthread_mutex_unlock(&mutex2);
pthread_exit(NULL);
}
int main() {
pthread_t highPrioTid, lowPrioTid;
pthread_create(&highPrioTid, NULL, highPriorityThread, NULL);
pthread_create(&lowPrioTid, NULL, lowPriorityThread, NULL);
pthread_join(highPrioTid, NULL);
pthread_join(lowPrioTid, NULL);
return 0;
}
```
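The example above shows the two threads contending for shared locks; POSIX also lets us attach the priority-inheritance protocol directly to a mutex. A minimal sketch, assuming a platform that supports `PTHREAD_PRIO_INHERIT` such as most Linux systems (the helper name `init_pi_mutex` is illustrative):

```
#define _GNU_SOURCE            /* for pthread_mutexattr_setprotocol on glibc */
#include <pthread.h>
#include <assert.h>

/* Create a mutex that uses the POSIX priority-inheritance protocol:
 * while a thread holds it, that thread's priority is boosted to the
 * highest priority of any thread blocked on the mutex, which bounds
 * the inversion window. Returns 0 on success. */
int init_pi_mutex(pthread_mutex_t *m) {
    pthread_mutexattr_t attr;
    int rc = pthread_mutexattr_init(&attr);
    if (rc != 0)
        return rc;
    rc = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
    if (rc == 0)
        rc = pthread_mutex_init(m, &attr);
    pthread_mutexattr_destroy(&attr);
    return rc;
}
```

A high-priority thread blocked on such a mutex temporarily lends its priority to the current holder, so the holder cannot be preempted indefinitely by medium-priority threads.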
**Starvation:**
Starvation occurs when a thread is unable to gain access to required resources due to other threads continuously acquiring those resources. Fair scheduling policies ensure that all threads have a fair chance of resource allocation, preventing starvation.
```
#include <stdio.h>
#include <pthread.h>
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
int sharedResource = 0;
void *threadFunc(void *arg) {
pthread_mutex_lock(&mutex);
// Increment shared resource
sharedResource++;
printf("Thread incremented sharedResource to: %d\n", sharedResource);
pthread_mutex_unlock(&mutex);
pthread_exit(NULL);
}
int main() {
pthread_t tid1, tid2;
// Create two threads
pthread_create(&tid1, NULL, threadFunc, NULL);
pthread_create(&tid2, NULL, threadFunc, NULL);
// Wait for both threads to finish
pthread_join(tid1, NULL);
pthread_join(tid2, NULL);
// Main thread
pthread_mutex_lock(&mutex);
// Access shared resource
printf("Main thread accessed sharedResource: %d\n", sharedResource);
pthread_mutex_unlock(&mutex);
return 0;
}
```
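The mutex example above leaves fairness to the scheduler. A classic way to guarantee FIFO fairness among waiters, and thus rule out starvation, is a ticket lock. A minimal sketch using C11 atomics (`ticket_lock_t` is an illustrative type, not a POSIX API):

```
#include <stdatomic.h>
#include <assert.h>

/* A FIFO "ticket lock": each waiter takes the next ticket number and
 * spins until that number is served, so threads acquire the lock in
 * strict arrival order and no waiter can be bypassed indefinitely. */
typedef struct {
    atomic_uint next_ticket;  /* next ticket to hand out */
    atomic_uint now_serving;  /* ticket currently allowed to enter */
} ticket_lock_t;

void ticket_lock(ticket_lock_t *l) {
    unsigned my = atomic_fetch_add(&l->next_ticket, 1);
    while (atomic_load(&l->now_serving) != my)
        ;  /* busy-wait until it is our turn */
}

void ticket_unlock(ticket_lock_t *l) {
    atomic_fetch_add(&l->now_serving, 1);  /* pass the turn on */
}
```

Because tickets are served in the order they were taken, no waiting thread can be overtaken forever, at the cost of busy-waiting.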
**Deadlock:**
Deadlock occurs when two or more threads are waiting indefinitely for each other to release resources they need. Avoiding circular wait and implementing deadlock detection and recovery mechanisms help mitigate deadlock situations.
```
#include <stdio.h>
#include <pthread.h>
pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;
void *thread1(void *arg) {
pthread_mutex_lock(&mutex1);
pthread_mutex_lock(&mutex2);
// Critical section
pthread_mutex_unlock(&mutex2);
pthread_mutex_unlock(&mutex1);
pthread_exit(NULL);
}
void *thread2(void *arg) {
pthread_mutex_lock(&mutex2);
pthread_mutex_lock(&mutex1); // Potential deadlock point
// Critical section
pthread_mutex_unlock(&mutex1);
pthread_mutex_unlock(&mutex2);
pthread_exit(NULL);
}
int main() {
pthread_t tid1, tid2;
pthread_create(&tid1, NULL, thread1, NULL);
pthread_create(&tid2, NULL, thread2, NULL);
pthread_join(tid1, NULL);
pthread_join(tid2, NULL);
return 0;
}
```
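Two common remedies are to impose a single global lock order (both threads take `mutex1` before `mutex2`) or to back off when the second lock is unavailable. A hedged sketch of the back-off approach using `pthread_mutex_trylock` (the helper name `lock_both` is illustrative):

```
#include <pthread.h>
#include <sched.h>
#include <assert.h>

/* Acquire two mutexes without risking deadlock: take the first, then
 * only *try* the second. If the second is busy, release the first and
 * retry, so we never block on one lock while holding the other. */
void lock_both(pthread_mutex_t *a, pthread_mutex_t *b) {
    for (;;) {
        pthread_mutex_lock(a);
        if (pthread_mutex_trylock(b) == 0)
            return;               /* got both locks */
        pthread_mutex_unlock(a);  /* back off... */
        sched_yield();            /* ...and let the other thread run */
    }
}
```

Because neither caller ever blocks while holding a lock, the circular-wait condition required for deadlock can never arise.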
**Spinlock:**
Spinlocks are synchronization primitives where a thread continuously polls for the availability of a resource. They are efficient for short critical sections and low contention scenarios.
```
#include <stdio.h>
#include <pthread.h>
pthread_spinlock_t spinlock;
void *threadFunc(void *arg) {
pthread_spin_lock(&spinlock);
// Critical section
printf("Thread acquired spinlock\n");
// Perform some task
pthread_spin_unlock(&spinlock);
pthread_exit(NULL);
}
int main() {
pthread_t tid1, tid2;
pthread_spin_init(&spinlock, 0);
pthread_create(&tid1, NULL, threadFunc, NULL);
pthread_create(&tid2, NULL, threadFunc, NULL);
pthread_join(tid1, NULL);
pthread_join(tid2, NULL);
pthread_spin_destroy(&spinlock);
return 0;
}
```
## Conclusion:
Mastering multithreading in C programming requires a deep understanding of fundamental concepts, synchronization mechanisms, and advanced topics. By delving into these concepts and exploring sample code, developers can build robust, efficient, and responsive multithreaded applications. Continuous practice, experimentation, and adherence to best practices are key to becoming proficient in multithreading and developing reliable software systems that fully utilize the capabilities of modern hardware. | vivekyadav200988 |
1,915,075 | Understanding Blocking and Non-blocking Sockets in C Programming: A Comprehensive Guide | Introduction: In the realm of network programming with C, mastering the intricacies of... | 0 | 2024-07-08T01:11:02 | https://dev.to/vivekyadav200988/understanding-blocking-and-non-blocking-sockets-in-c-programming-a-comprehensive-guide-2ien | ## Introduction:
In the realm of network programming with C, mastering the intricacies of socket operations is paramount. Among the fundamental concepts in this domain are blocking and non-blocking sockets, which significantly influence the behavior and performance of networked applications. In this comprehensive guide, we delve into the nuanced differences between blocking and non-blocking sockets, explore their respective advantages and disadvantages, and provide practical examples to illustrate their usage in C programming.
### Blocking Sockets:
Blocking sockets, also known as synchronous sockets, adhere to a straightforward paradigm: I/O operations halt the execution of the program until they are completed. When you read from or write to a blocking socket, your program will pause until data is available to be read or the write operation finishes. This synchronous behavior simplifies the flow of the program, making it intuitive for developers, especially those new to network programming.
**Key characteristics of blocking sockets include:**
**Blocking Behavior**: I/O operations block the program's execution until they conclude.
**Synchronous Operation**: Operations are performed in a synchronous manner, meaning the program waits until each operation finishes before proceeding.
**Simplicity**: Blocking sockets offer simplicity and ease of understanding, making them an attractive choice for beginners in network programming.
However, the simplicity of blocking sockets comes at a cost. Consider a scenario where a blocking socket is used to communicate with multiple clients simultaneously. If one client's operation takes an unexpectedly long time to complete, it may block the entire program, potentially causing delays in serving other clients.
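One way to cap that risk while keeping the simple blocking model is to put a receive timeout on the socket with `setsockopt(SO_RCVTIMEO)`: a stalled `read` then fails with `EAGAIN`/`EWOULDBLOCK` instead of hanging forever. A minimal sketch (the helper name `set_recv_timeout` is illustrative):

```
#include <sys/socket.h>
#include <sys/time.h>
#include <assert.h>

/* Give a blocking socket a receive timeout of `seconds` seconds.
 * After this, a read()/recv() that waits longer than the timeout
 * returns -1 with errno set to EAGAIN/EWOULDBLOCK instead of
 * blocking forever. Returns 0 on success. */
int set_recv_timeout(int sockfd, int seconds) {
    struct timeval tv;
    tv.tv_sec = seconds;
    tv.tv_usec = 0;
    return setsockopt(sockfd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
}
```

Calling this once after `accept()` lets a server bound how long any single slow client can stall the serving loop.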
To illustrate, let's consider a basic example of using blocking sockets in a TCP client-server application:
**Server (TCP Server)**
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#define PORT 8080
#define MAX_PENDING_CONNECTIONS 5
#define BUFFER_SIZE 1024
int main() {
int server_fd, new_socket;
struct sockaddr_in address;
int addrlen = sizeof(address);
char buffer[BUFFER_SIZE] = {0};
const char *message = "Hello from server";
// Create socket file descriptor
if ((server_fd = socket(AF_INET, SOCK_STREAM, 0)) == 0) {
perror("socket failed");
exit(EXIT_FAILURE);
}
// Set server address parameters
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port = htons(PORT);
// Bind the socket to the specified port
if (bind(server_fd, (struct sockaddr *)&address, sizeof(address)) < 0) {
perror("bind failed");
exit(EXIT_FAILURE);
}
// Listen for incoming connections
if (listen(server_fd, MAX_PENDING_CONNECTIONS) < 0) {
perror("listen failed");
exit(EXIT_FAILURE);
}
printf("Server listening on port %d...\n", PORT);
// Accept incoming connection
if ((new_socket = accept(server_fd, (struct sockaddr *)&address, (socklen_t*)&addrlen)) < 0) {
perror("accept failed");
exit(EXIT_FAILURE);
}
// Read client message
read(new_socket, buffer, BUFFER_SIZE);
printf("Client message: %s\n", buffer);
// Send response to client
send(new_socket, message, strlen(message), 0);
printf("Response sent to client.\n");
// Close sockets
close(new_socket);
close(server_fd);
return 0;
}
```
**Client (TCP Client)**
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#define PORT 8080
#define SERVER_ADDRESS "127.0.0.1"
#define BUFFER_SIZE 1024
int main() {
int sock = 0;
struct sockaddr_in serv_addr;
char buffer[BUFFER_SIZE] = {0};
const char *message = "Hello from client";
// Create socket file descriptor
if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
perror("socket creation failed");
exit(EXIT_FAILURE);
}
// Set server address parameters
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(PORT);
// Convert IPv4 and IPv6 addresses from text to binary form
if (inet_pton(AF_INET, SERVER_ADDRESS, &serv_addr.sin_addr) <= 0) {
perror("invalid address / address not supported");
exit(EXIT_FAILURE);
}
// Connect to server
if (connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
perror("connection failed");
exit(EXIT_FAILURE);
}
// Send message to server
send(sock, message, strlen(message), 0);
printf("Message sent to server.\n");
// Read response from server
read(sock, buffer, BUFFER_SIZE);
printf("Server response: %s\n", buffer);
// Close socket
close(sock);
return 0;
}
```
### Non-blocking Sockets:
In contrast to blocking sockets, non-blocking sockets operate asynchronously. When an I/O operation is initiated on a non-blocking socket, the program continues its execution immediately, regardless of whether the operation succeeds or not. This asynchronous behavior allows the program to perform other tasks while waiting for I/O operations to complete, enhancing overall efficiency and responsiveness.
**Key characteristics of non-blocking sockets include:**
**Non-blocking Behavior**: I/O operations return immediately, even if they cannot be completed immediately.
**Asynchronous Operation**: Operations are performed asynchronously, enabling the program to continue executing without waiting for each operation to finish.
**Increased Complexity**: Non-blocking sockets introduce additional complexity into the program logic, as it needs to handle situations where operations may not complete immediately.
While non-blocking sockets offer improved responsiveness and better resource utilization, they require careful handling of asynchronous events. Developers must implement mechanisms to manage the asynchronous nature of non-blocking sockets effectively, such as employing event loops or using multiplexing techniques like select() or poll().
Let's examine a practical example demonstrating the use of non-blocking sockets in a TCP client-server application:
**Server (Non-blocking TCP Server)**
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <fcntl.h>
#define PORT 8080
#define MAX_PENDING_CONNECTIONS 5
#define BUFFER_SIZE 1024
int main() {
int server_fd, new_socket;
struct sockaddr_in address;
int addrlen = sizeof(address);
char buffer[BUFFER_SIZE] = {0};
const char *message = "Hello from server";
// Create socket file descriptor
if ((server_fd = socket(AF_INET, SOCK_STREAM, 0)) == 0) {
perror("socket failed");
exit(EXIT_FAILURE);
}
// Set server address parameters
address.sin_family = AF_INET;
address.sin_addr.s_addr = INADDR_ANY;
address.sin_port = htons(PORT);
// Bind the socket to the specified port
if (bind(server_fd, (struct sockaddr *)&address, sizeof(address)) < 0) {
perror("bind failed");
exit(EXIT_FAILURE);
}
// Set the server socket to non-blocking mode
if (fcntl(server_fd, F_SETFL, O_NONBLOCK) < 0) {
perror("fcntl failed");
exit(EXIT_FAILURE);
}
// Listen for incoming connections
if (listen(server_fd, MAX_PENDING_CONNECTIONS) < 0) {
perror("listen failed");
exit(EXIT_FAILURE);
}
printf("Server listening on port %d...\n", PORT);
while (1) {
fd_set readfds;
int max_sd, activity;
// Clear the socket set
FD_ZERO(&readfds);
// Add server socket to the set
FD_SET(server_fd, &readfds);
max_sd = server_fd;
// Wait for activity on any socket
activity = select(max_sd + 1, &readfds, NULL, NULL, NULL);
if (activity < 0) {
perror("select error");
exit(EXIT_FAILURE);
}
// If server socket has activity, it's a new connection
if (FD_ISSET(server_fd, &readfds)) {
if ((new_socket = accept(server_fd, (struct sockaddr *)&address, (socklen_t*)&addrlen)) < 0) {
perror("accept failed");
exit(EXIT_FAILURE);
}
printf("New connection, socket fd is %d\n", new_socket);
// Send message to client
if (send(new_socket, message, strlen(message), 0) != strlen(message)) {
perror("send failed");
}
close(new_socket); // Close the connection
}
}
return 0;
}
```
**Client (Non-blocking TCP Client)**
```
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <fcntl.h>
#include <errno.h> // needed for the errno / EINPROGRESS check below
#define PORT 8080
#define SERVER_ADDRESS "127.0.0.1"
#define BUFFER_SIZE 1024
int main() {
int sock = 0;
struct sockaddr_in serv_addr;
char buffer[BUFFER_SIZE] = {0};
const char *message = "Hello from client";
// Create socket file descriptor
if ((sock = socket(AF_INET, SOCK_STREAM, 0)) < 0) {
perror("socket creation failed");
exit(EXIT_FAILURE);
}
// Set server address parameters
serv_addr.sin_family = AF_INET;
serv_addr.sin_port = htons(PORT);
// Convert IPv4 and IPv6 addresses from text to binary form
if (inet_pton(AF_INET, SERVER_ADDRESS, &serv_addr.sin_addr) <= 0) {
perror("invalid address / address not supported");
exit(EXIT_FAILURE);
}
// Set the socket to non-blocking mode
if (fcntl(sock, F_SETFL, O_NONBLOCK) < 0) {
perror("fcntl failed");
exit(EXIT_FAILURE);
}
// Connect to server
if (connect(sock, (struct sockaddr *)&serv_addr, sizeof(serv_addr)) < 0) {
// Non-blocking connect will return immediately
// Check errno to distinguish between connection in progress and connection failed
if (errno != EINPROGRESS) {
perror("connection failed");
exit(EXIT_FAILURE);
}
}
// Wait for connection to complete
sleep(1);
// Send message to server
send(sock, message, strlen(message), 0);
printf("Message sent to server.\n");
// Read response from server
read(sock, buffer, BUFFER_SIZE);
printf("Server response: %s\n", buffer);
// Close socket
close(sock);
return 0;
}
```
## Conclusion:
In conclusion, understanding the distinctions between blocking and non-blocking sockets is essential for proficient network programming in C. While blocking sockets offer simplicity and straightforward operation, non-blocking sockets provide greater flexibility and efficiency by enabling asynchronous I/O operations. When selecting the appropriate socket mode for your application, consider the specific requirements, scalability, and performance constraints. With a solid grasp of blocking and non-blocking socket concepts, developers can architect robust and responsive networked applications tailored to their unique needs. | vivekyadav200988 | |
1,915,076 | Working on Something You Hate for 7 Years - How to Escape a Professional Crisis | Have you ever found yourself with a feeling of emptiness, feeling some kind of melancholy that you... | 0 | 2024-07-08T01:13:08 | https://dev.to/juanemilio31323/working-on-something-you-hate-for-7-years-how-to-escape-a-professional-crisis-5892 | career, help, productivity, learning | Have you ever found yourself with a feeling of emptiness, feeling some kind of melancholy that you can't explain? If the answer is yes, let me tell you that I have felt it too, and it hasn't been easy to turn it off. In this post, I want to share the personal and professional crisis that I'm going through and the things that I'm doing to escape it.
Before continuing, let me tell you that I know how crazy this story is going to sound and how hard it is to believe it, but it's mine and it's the only one I've got.
## Context and Guide
Hey, if you are looking for a **bullet-list or guide**, jump straight to the part of this article called: **"What I'm doing to solve it"**. If you want to know my story and maybe get some context from there, just keep reading.
I'm Juan, and I'm a few days away from celebrating my birthday. Let me tell you that the last three years have been crazy for me. I've changed my job, almost died, and found myself deep in a professional crisis. Now I live with my girlfriend, and I think I'm pretty close to getting what I want for my life, but before going there, let me give you some context about how I am.
## How Everything Started
If you'd asked me what I wanted to be when I was just 12 years old, I would have answered that I wanted to be a physicist. I loved science and especially physics. I spent most of my childhood playing video games and reading physics books, hoping someday to make a scientific contribution.
Things didn't go that way. When I was 13 years old, my parents asked me what I wanted to study, and I gave that same answer. To my surprise, my parents gently explained to me that they couldn't afford such an education and that I should forget it. Anyway, physicists don't make any money, or at least that's what my parents taught me back then.
Just one year later, I got to know one of my now best friends, Natz (this is a nickname, not his real name). I was playing video games, and he was streaming the match. I entered his Twitch channel, and we became good friends; that's the story in a nutshell. We got so close that we talked almost daily. Natz was around 30 years old back then and worked from home, so most of the time, he was available to talk and play. I was a teenager with nothing to do—perfect friends. At that time, I also realized for the first time that the economic state of my country, Argentina, was going through a hard time, and it didn't seem like it was going to change anytime soon. So, like any teenager who gets terrified with his country and doesn't know what he is going to do with his life, I decided that I wanted to emigrate to Canada (it was a good country to emigrate to if you were an Argentinean at that time).
I investigated, and I figured that one of the jobs with the best salaries and relocation possibilities was software development. Surprise, surprise, Natz was a software developer. I ran to my Discord account and wrote: "Hey Natz, can you teach me how to code?" His answer was something like: "Complete these courses, and if you do it, we'll talk again about this." Just one week later, I finished them. I asked him again about it, and this was his answer: "Normally, no one finishes them and asks me back." He gave me more courses and helped me develop my first JavaScript bot for Discord. Just a few months later, Natz came to me with this message: "Hey Juan, I have a project, and I need help. Do you want to help me code it?"
There I was, I got my first project with basic knowledge of HTML, CSS, and JavaScript. I couldn't believe it, and he was even going to pay me. We finished it; nothing special, just a web page with basic CSS and JS for an e-sport team that I think now doesn't even exist. The important part is, that's when I realized that I could really do this—coding for a living.
I decided to get more projects for myself, and that's how I started. Buying courses on Udemy, reading books from the internet, and watching every possible course along the way. I started to develop my first side project, Smart-Restaurant, and I was working with any local business that wanted a web page, system, or application. I wasn't afraid to tackle projects way over my head. I kind of loved the pressure it gives you, knowing that you need to finish something that you don't even know how to start (this is something that I do until now, and sometimes it is great and sometimes it destroys me).
My parents couldn't understand it; they thought I was getting money from God knows where. When I was 16 years old, I landed my first "big" job with a start-up from Buenos Aires. That was my first time experiencing the feeling of having a real job—coding on demand, doing things how they asked me, and working on things that maybe were out of my scope of interest. I was lucky. My parents never forced me to go to school, and I would stay at home as much as I wanted. Obviously, there was a minimum number of times that I had to fulfill if I wanted to finish school, and the guys I worked for understood it. For that reason, I was working for objectives and not hours. They were really open-minded, and we are friends until now. They taught me most of the things that helped me land my next job.
Just a couple of months later, after my 18th birthday, I landed one of my biggest jobs at an international company. It was my first full-time job as a full-stack developer working on a USA project. Also, it was a face-to-face job, so I had to attend the office. Due to that and the fact that I couldn't study what I wanted, I decided to drop out of university and dedicate myself full-time to my job (I was studying computer science). The pay was really good at that time. I was a junior trainee, and they were paying me something around $1500. That amount in Argentina is a bunch of money.
The conflict in my house persisted, so I decided it was time to get out of my parents' house and live on my own. That's how, at just 18 years old, I quit university, separated from my family (not in the most peaceful way), and moved out. It took me just a couple of months to get my first raise that year. My performance was so good that they had to move me from junior trainee to junior advanced. That year, I got nine raises, going from junior trainee to project lead in just one year. I know, extreme, but you need to take into consideration that at this point, I had been doing this for almost five years. Also, I knew how to speak English better than most of my team members, something really helpful in a company that gave us close contact with the client.
And that's the story of how, at just 19 years old, I was completely alone, having a high-paying job and living entirely on my own.
### Then is When it Struck Me
I can remember it as if it happened yesterday. I woke up to prepare myself to go to work, but I felt different—tired, sad. I felt an emptiness like I had never felt before. I went to my job anyway (like I could not do it). I finished my day and went home. I worked out. I didn't feel quite good, but I ignored it. I was really stressed. I sat on my couch and ate dinner. Then I started crying. I couldn't explain it. I wasn't where I wanted to be. This feeling wasn't going anywhere; it came to stay for a long time. Every week it was becoming worse. Until one week, I felt completely empty. Then is when I said to myself that this had been enough, and I decided to go to therapy.
My therapist is fantastic, and even now that I don't necessarily need to go, I visit her from time to time just to talk. If you are feeling things like the ones I'm describing, I strongly encourage you to go and find a therapist.
My therapist helped me, and I tried to help myself by watching YouTube videos, reading books, and listening to podcasts. I was feeling worse and worse every day. I didn't want to do anything at all, but I forced myself to keep doing it through willpower. After a time (I couldn't say exactly when), my therapist came to the conclusion that I was going through a personal and professional crisis.
I was lacking purpose in my life, feeling that what I was doing wasn't interesting at all—"leading" a team and developing projects for a company that I didn't care about. I wanted to do something more interesting, more challenging, something that helped people.
Then is when I tried to do things outside of my job, only to realize I had something around three hours free in my day during the week. I woke up at 8:45 to go to work around 9:00, got out at 18:00, arrived home at 18:15, did my necessities, it's 19:00, worked out, it's 20:00, made dinner, and there you go—you are too tired to do anything, and the day is almost over.
That destroyed me. The math was against me. There weren't enough hours in the day to do anything. But I didn't give up. I put in all the effort that I could to push as many activities as I could to get something out of that time. I was studying, trying things, making plans, developing side projects. At the same time, my therapist was asking me to be more
social, so I started to code after my job at a coffee shop (good idea, you'll understand why later).
It's worth mentioning that in my job, I did all that I could to get even more out of those hours—reading, watching courses, and secretly programming my own things. Even with all that effort and neglecting my health, I couldn't do enough to escape from it. To add even more complexity to the process, I suffered a medical problem (I don't like talking about this) that required a medical intervention that almost made me lose a leg and was kind of complex (I insist, I don't like talking about this). Because of that, I was feeling so physically sick, and the stress and psychological problems didn't help. I had to go through an extensive time of recovery to be able to walk again and an even longer process to be able to work out again. Thanks to this, I'll always have some pain in my leg, but I feel much better now.
### It's Coming to an End
Yep, I know you are probably asking yourself how old I am now. The answer: 20, almost 21. In two weeks, it's my birthday. The last three years have been a roller coaster. If you are anything like me, you love a happy ending. I really do. And that's the case. Now I work much more relaxed, I have a personal and social life, I feel physically and psychologically better, I have my partner who I've been dating for a year and a half now, and we are already living together (very happily). So if you want to know what I did to get to this point, just keep reading.
## What I Did and Am Still Doing to Solve It
As you read before, it took me a while to transform my life into a mess, and fixing it is also taking a considerable amount of time. I'm saying this because if you are going through the same process, you need to understand that this is not going to be easy.
### What Do You Want?
This is probably the question I should have asked myself back then when I started studying software development. I didn't decide to study software development because I really loved it, but instead, I did it out of necessity. Now I invite you to ask yourself what you really want and be honest; no one is judging you.
Ready? I hope so. Below is my answer.
Now, after seven years, I can say that software development can be extremely boring and also extremely exciting. But after so much time, it is already part of me, so I know for sure that I want to keep doing this (with some changes, obviously). At the same time, I want something else. While leading a project for the company where I used to work before, I discovered that I'm really good at teaching people and I'm also really good at finding talent. So I would like to do something that connects both of those things. Last but not least, I want to do something related to science. I'll probably try to get a degree in math or physics once all this is more stable.
### Refine the Details
Okay, I hope you answered the previous question. Now we are going to polish the details. This is something like trying to create a mind map. Please write down the things you like about the things you are already doing and also write down the things that you want to do.
This is mine:
#### Right Now:
1. Love coding
2. Love solving hard problems
3. Love communicating with people in other languages
#### In the Future:
1. Science
2. Leading projects and teaching people
3. Having freedom
Ok, we have our list. Now I invite you to think of the easiest solution for this problem. Most people will jump straight to entrepreneurship and stuff like that, but most of the time, that's the hardest solution. So hopefully, you can come up with a different answer. Here are some solutions that may apply:
1. Changing jobs
2. Renegotiating in your current job
The previous two options are pretty good for most people, and many of them don't even consider them.
In my case, and most likely in your case too, the best answer is entrepreneurship. Given that it's the hardest solution, we'll have to make many sacrifices, and we'll need a good plan.
### It's Not About Yourself
Sadly, if you are inclined towards entrepreneurship, let me tell you the harsh truth. Nobody cares about your plan or your business. And don't get me wrong; I would love to hear your idea, and probably many people out there will too. But most of the time, the ideas that we think are going to rock it don't end up that way.
If you are picking entrepreneurship instead of changing jobs or renegotiating, it's probably because you want the ideal freedom that comes with it. Sadly, that's the fruit that comes from sacrificing many things. If we want to achieve that, we'll need to understand something really basic, but that evades most of us. We'll need someone's money. If we want someone to give us money, we'll need to offer something in exchange, and it ideally will be something that our customer wants. So it's not about ourselves or our idea; it's about what our potential customer wants. So don't waste too much time working on something that nobody wants to pay for. If you want more information about this, read my post [48hs is all you need for your project](https://medium.com/@theprof301/48hs-is-all-you-need-15083345c5d5).
And finally, when you are pricing something, think about this. Let's say your barber is asking you for $10,000 for a haircut. What would you feel? Probably you'll feel that this guy is trying to make himself rich with you. I hate when something like that happens. So often, when I have to put a price on something, I take that into consideration.
### It's Going to Be a Long Run
Whatever you choose and whatever you are doing, if you want to change your life, you'll need time, plenty of time. If you are thinking that you can do this on the first try or just in one month, sorry, my boy, you are not being realistic. It will be better if you accept upfront that it is going to be hard.
Once you accept it, you can make a plan. Right now, I'm still trying to stabilize my businesses. I have a coffee shop, I'm developing two projects, writing blogs, and working for a big company as a senior developer. I'm not telling you this because I want to show off how full my schedule is, but instead because I know my plan is going to take a lot of time, and because I know I'm doing too many things. Next month, I'll have to reduce the number of activities that I'm doing because I'm a human being, and I need to breathe, eat, do sports, and have a life in general, and you do too. So when you are making your plan, be realistic. Estimate for the long term, don't give yourself too many options to quit, and take into consideration that you need to have a life outside of your projects and your "dreams."
### Give Yourself Space to Breathe
I know I mention this too many times, but most people don't realize that this is a possibility. In the previous point, I mentioned that you have to plan for the long term and take it easy. Most likely, if you are in a situation similar to the one I was in, you want to run from your job and change your life tomorrow. A really good option is to start looking for a job that is less demanding. Even if you have to do the "same" kind of job, I did exactly that.
One day I was exhausted. My boss threw the blame for something I didn't do at me. The company was going through a hard time, and my team was feeling the pressure. So when I got home, I updated my curriculum and looked for places where I could do a similar job for similar pay and work fewer hours. Now I'm working there, and from time to time, I want to run because, let's be honest, I don't want to do what I'm doing for a living, but it's much more relaxed, and I have enough time to work on my escape plan, and at the same time, I have a job that is paying the bills.
So go out there and look for a better option. Probably someone else is willing to offer you a better deal.
### Having Ways to Socialize
If you pay enough attention, you'll notice that I mentioned that I have a girlfriend and that it's a good idea to go to a coffee shop to do your things. That's how I met my girlfriend; she was working there, and I was a regular customer because I was going there to code.
What I'm trying to share with this is that socializing doesn't imply going to a club or a party. You can find ways to put yourself out there and at the same time keep doing your stuff. Anything that puts you close to other human beings and forces you to interact with other people is a good idea. We are extremely social beings, and it's amazing how much our interpersonal relationships can affect our lives.
### Having Your Escape Number
A good way to structure your plan is first to find your escape number. This is pretty easy. It varies a little bit from country to country because of the taxes, so I'll just give you the concept, and later you can polish the number. It goes something like this: for one month, keep track of how much you need to pay for rent, your food, and everything else that you strictly need to live.
That's your escape number. Find ways to produce that every month, and you are out.
### Being Logical About the Future
When you are making decisions like this, it's important to stay calm, control your emotions, and think as logically as you can about it. Let's say you have your plan, you know what you truly want to do, how much time it's going to take you, and how much money you need. Ok, now let's be calm and realistic.
You need to save money and be prepared for it to fail. What are you going to do if everything goes wrong? In my case, I have enough savings to survive for three months. In the meantime, if everything fails, I'm looking for another job, and if I can't get it in that time, I have enough money to go back to my parents' house to start all over again. I know, I know, it's easy for me to say—I'm just 21, and I don't have children, but believe me, no one wants to fail.
### Not Giving Yourself a Way to Fail
Normally, most people quit
before getting to the end. Have you ever noticed that your phone takes almost the same time to go from 80% to 100% as it does from 0% to 80%? That's because the potential difference of the battery is going down the closer you get to 100%. We can extrapolate that to any project. What took you to go from 0% to 80% is probably the same amount of effort and time that you are going to need to go from 80% to 100%. So if you apply the same rules for both parts, it's most likely that you are going to lose.
Then let's change the rules. Let me give you another example. I love playing the piano, but I suck at it. So I started taking classes. I already want to quit, but I put a challenge to myself. I can only quit after 100 lessons. If I don't want to play after 100 lessons, I'm more than free to quit. The thing is, I'm pretty sure that after the 100th lesson, I'll want to play.
In that way, there is no way for me to lose because I changed the rules of the game, and just like a casino, the chance of the house losing is extremely low.
### You Don't Need to Do All of This
Finally, probably the best advice of all: you don't need to do all of this. I would love to be rich, and I used to think that everybody wants to be rich. To my surprise, one day I decided to ask my cousin (I'm really close with my cousin). You know what he said? He said: "No Juan, I don't want to be rich, I just want to be a medic and help people, I don't care about that." I replied: "But you understand that if you are not wealthy enough, you'll have to work for the rest of your life under someone else's desires and rules." He answered: "I don't care. I have no problem working for hours if I'm doing something good and productive that can be admired." Finally, I asked: "Ok, that's great. But then, how do you want to live?" His answer: "I just want to have enough time to be with my girlfriend, enough money to not worry about the bills, and some nice vacations and time for my children." I think my cousin's answer was beautiful.
Maybe you are more like my cousin. Just because everybody wants to be rich or because everybody wants to be an entrepreneur or a software developer doesn't mean you have to do the same. So be free to decide what you think is better for yourself. Also, don't repeat my mistake and don't let others tell you what is worth doing and what's not, and don't let the circumstances of your surroundings determine what you are going to do, as if we were capable of predicting the future.
## Wrapping Up
I hope you find this post useful. It has been an adventure for me to write about this. It's something extremely personal, but I really wanted to share it. If you found it helpful, please leave me a comment telling me what's going on with your life.
## Before You Go
I'm thinking of posting on X. Would you follow me? Thank you in advance, and if you really enjoyed the post, would you help me pay my rent?
[---------------------------------------------------------------------------] 0% of $400, [let's pay my rent](https://buymeacoffee.com/juanemilio) | juanemilio31323 |
1,915,093 | Invalid Date in Safari | Hello everyone, Today, I encountered a weird bug that only appears in the Safari browser. It works... | 0 | 2024-07-08T01:43:13 | https://dev.to/deni_sugiarto_1a01ad7c3fb/invalid-date-in-safari-4ff6 | safari, webdev, javascript, programming | Hello everyone,
Today, I encountered a weird bug that only appears in the Safari browser. It works fine in other browsers. After debugging the code in Safari, I found that filtering data by date was resulting in an empty array. I have been using dayjs as my date library for formatting and filtering.
Here is the source date I use: "2024-7-1,6:0:0".
After some research, I discovered that Safari requires dates to be in ISO 8601 format. To handle this, I created a function `dateStringToISO` that converts a date string into the ISO 8601 format. Here is the code:
```javascript
function dateStringToISO(dateString) {
const date = new Date(dateString);
// Check if the date is valid
if (isNaN(date.getTime())) {
throw new Error("Invalid date");
}
const year = date.getFullYear();
const month = (date.getMonth() + 1).toString().padStart(2, '0');
const day = date.getDate().toString().padStart(2, '0');
return `${year}-${month}-${day}`;
}
// Example usage:
const date = "2024-7-1,6:0:0";
console.log(dateStringToISO(date)); // Output: 2024-07-01
```
By using this function, you can ensure that your date strings are properly formatted for Safari, avoiding issues with invalid dates.
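One caveat: the helper above still calls `new Date(dateString)` to parse the input, and that string parse is exactly the step Safari is strict about, so in Safari itself a non-ISO input may already come back as an invalid date. A more defensive variant (my own sketch, not from the original fix; the function names are illustrative) parses the string manually and builds the `Date` from numeric parts:

```javascript
// Parse "YYYY-M-D,H:m:s"-style strings without relying on the
// engine's Date string parsing (which is where Safari is strict)
function parseLooseDate(dateString) {
  const match = dateString.match(
    /^(\d{4})-(\d{1,2})-(\d{1,2})(?:,(\d{1,2}):(\d{1,2}):(\d{1,2}))?$/
  );
  if (!match) {
    throw new Error("Invalid date");
  }
  const [, year, month, day, hour = "0", minute = "0", second = "0"] = match;
  // The numeric Date constructor is portable; months are 0-based
  return new Date(+year, +month - 1, +day, +hour, +minute, +second);
}

function toISODateString(date) {
  const pad = (n) => String(n).padStart(2, "0");
  return `${date.getFullYear()}-${pad(date.getMonth() + 1)}-${pad(date.getDate())}`;
}

// Example usage:
console.log(toISODateString(parseLooseDate("2024-7-1,6:0:0"))); // 2024-07-01
```

Because the numeric `Date` constructor avoids engine-specific string parsing entirely, the same input behaves identically in Chrome, Firefox, and Safari.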
Update function name regarding @jayantbh suggestions. Thanks for your suggestion | deni_sugiarto_1a01ad7c3fb |
1,915,095 | HackerRank 3 Months Preparation Kit(JavaScript) - Plus Minus | Given an array of integers, calculate the ratios of its elements that are positive, negative, and... | 0 | 2024-07-08T01:52:07 | https://dev.to/saiteja_amshala_035a7d7f1/hackerrank-3-months-preparation-kit-plus-minus-3cgn | webdev, javascript, beginners, learning | Given an array of integers, calculate the ratios of its elements that are positive, negative, and zero. Print the decimal value of each fraction on a new line with _6_ places after the decimal.
Note: This challenge introduces precision problems. The test cases are scaled to six decimal places, though answers with absolute error of up to 10^-4 are acceptable.
**Example**
arr = [1,1,0,-1,-1]
There are n = 5 elements, two positive, two negative and one zero. Their ratios are 2/5, 2/5 and 1/5. Results are printed as:
0.400000
0.400000
0.200000
**Function Description**
Complete the plusMinus function in the editor below.
plusMinus has the following parameter(s):
- int arr[n]: an array of integers
**Print**
Print the ratios of positive, negative and zero values in the array. Each value should be printed on a separate line with 6 digits after the decimal. The function should not return a value.
**Input Format**
The first line contains an integer, n, the size of the array.
The second line contains n space-separated integers that describe _arr[n]_
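Since the published solution below is only a screenshot, here is a text version of one possible `plusMinus` implementation in JavaScript (a sketch following the problem statement; it may differ from the screenshot):

```javascript
function plusMinus(arr) {
  const n = arr.length;
  let positive = 0;
  let negative = 0;
  let zero = 0;

  // Count how many elements fall into each category
  for (const value of arr) {
    if (value > 0) positive++;
    else if (value < 0) negative++;
    else zero++;
  }

  // Print each ratio on its own line with 6 digits after the decimal
  console.log((positive / n).toFixed(6));
  console.log((negative / n).toFixed(6));
  console.log((zero / n).toFixed(6));
}

// Example:
plusMinus([1, 1, 0, -1, -1]);
// 0.400000
// 0.400000
// 0.200000
```

`toFixed(6)` handles both the rounding and the zero-padding, so no manual formatting is needed.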
**SOLUTION**
 | saiteja_amshala_035a7d7f1 |
1,915,096 | A new beginning | Hi, Good morning This is umaganesh and I am much thrilled to learn python through my native language... | 0 | 2024-07-08T01:56:32 | https://dev.to/yuga/a-new-beginning-3glf | python | Hi, Good morning
This is umaganesh, and I am thrilled to be learning Python in my native language. Thanks to the Kaniyam Foundation. | yuga |
1,915,097 | Concatenate Column Values and Perform Grouping & Aggregation | Problem description & analysis: In the table below, the 1st column is person’s name, and the... | 0 | 2024-07-08T02:16:29 | https://dev.to/judith677/concatenate-column-values-and-perform-grouping-aggregation-3b3 | programming, beginners, tutorial, productivity | **Problem description & analysis**:
In the table below, the 1st column is person’s name, and the multiple columns after it are items they purchased. There are people who sometimes buy multiple same items in one purchase and who place multiple orders at different times.

We need to rearrange the table into a crosstab, where the column headers are items and the row headers are people’s names, as shown below:

**Solution**:
Use _**SPL XLL**_ to do this:
```
=spl("=?.groupc@r(~1;~.m(2:);1).pivot@s(~1:Name; ~2,count(~2))",A1:D5)
```
As shown in the picture below:

**Explanation**:
groupc@r groups members of a sequence by a specified number and transposes columns to rows; ~1 represents the 1st child member of the current member, and ~.m(2:) gets child members of the current member from the 2nd to the last. pivot@s transposes rows to columns and performs aggregation on each group of data.
| judith677 |
1,915,098 | Learn Suspense by Building a Suspense-Enabled Library | Suspense (along with concurrent rendering) has been a feature in React since v16.6.0. Despite this, I... | 0 | 2024-07-08T03:32:03 | https://www.bbss.dev/posts/react-learn-suspense/ | react, javascript | ---
title: Learn Suspense by Building a Suspense-Enabled Library
published: true
tags: react, javascript
canonical_url: https://www.bbss.dev/posts/react-learn-suspense/
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zoqblkosiwbgexh5jbh8.png
---
Suspense (along with concurrent rendering) has been a feature in React since v16.6.0. Despite this, I haven’t seen much of it in action beyond React.lazy and limited applications of “suspense-enabled libraries”.
What’s going on?
As of the impending React v19 release, Suspense is still not quite ready for primetime. The story of its APIs and internals still seems incomplete. In fact, the React team seems to think it’s so incomplete that the Suspense API is entirely undocumented. The Suspense documentation insists that the only way of using Suspense is via “Suspense-enabled frameworks”.
I think that purposefully hiding APIs in documentation is silly, but fine! I’ll play their game! Let’s build a Suspense-enabled library, and use it.
We will peel back the curtain of Suspense along the way.
Read more: https://www.bbss.dev/posts/react-learn-suspense/
| vezyank |
1,915,101 | Get 24% Off Nexa N20000 Disposable Vape – 20,000 Puffs of Pure Bliss! | Experience unparalleled vaping satisfaction with the Nexa N20000 Disposable Vape, now at an... | 0 | 2024-07-08T02:27:02 | https://dev.to/vapenear/get-24-off-nexa-n20000-disposable-vape-20000-puffs-of-pure-bliss-1dld | Experience unparalleled vaping satisfaction with the Nexa N20000 Disposable Vape, now at an incredible 24% off at [VapeNear](https://vapenear.com/nexa-n20000-disposable-vape-20000-puffs.html)! Enjoy 20,000 puffs of premium flavor and convenience. Don't miss out on this limited-time offer – elevate your vaping experience today!
 | vapenear | |
1,915,103 | ChatGPT Portugues: A Revolução da IA em Língua Portuguesa | Nos últimos anos, a inteligência artificial (IA) tem transformado a maneira como interagimos com a... | 0 | 2024-07-08T02:30:46 | https://dev.to/chatgpt_portugues_15d9d13/chatgpt-portugues-a-revolucao-da-ia-em-lingua-portuguesa-35n9 | chatgpt | Nos últimos anos, a inteligência artificial (IA) tem transformado a maneira como interagimos com a tecnologia. Um dos exemplos mais notáveis dessa transformação é o ChatGPT, um modelo de linguagem desenvolvido pela OpenAI. Com o crescente interesse e necessidade de conteúdos em português, surge o ChatGPT Portugues, uma plataforma dedicada a oferecer uma experiência personalizada para os falantes dessa língua.
Visite nossa página para mais detalhes: [ChatGPT Portugues](https://chatgptportugues.io/)

**A Importância de uma IA em Português**
A presença de uma IA que compreende e responde em português é crucial para a inclusão digital e para atender às demandas específicas de um público vasto e diversificado. O ChatGPT Portugues é projetado para entender nuances culturais, gírias e expressões regionais, proporcionando respostas mais precisas e contextualmente relevantes. Isso é fundamental para garantir que a interação com a IA seja natural e eficaz.
**Funcionalidades do ChatGPT Portugues**
A plataforma chatgptportugues.io se destaca por sua interface amigável, que facilita o uso mesmo para aqueles que não têm familiaridade com tecnologias avançadas. Além disso, a rapidez nas respostas e a alta segurança dos dados dos usuários são prioridades. Compreender e atender às necessidades dos usuários de forma eficiente é um dos principais objetivos do ChatGPT Portugues.
A segurança é um aspecto essencial, e a chatgptportugues.io se compromete a proteger as informações dos seus usuários. Através de rigorosos protocolos de segurança, a plataforma garante que os dados estejam sempre seguros, permitindo uma experiência de uso tranquila e confiável.
**Aplicações Práticas**
As aplicações do ChatGPT Portugues são inúmeras e abrangem diversos setores. Na educação, por exemplo, a IA pode auxiliar estudantes com dúvidas, fornecer explicações detalhadas sobre temas complexos e até mesmo ajudar na prática de redação. No atendimento ao cliente, as empresas podem utilizar o ChatGPT para oferecer um suporte rápido e eficiente, melhorando a satisfação do cliente.
Além disso, profissionais de diversas áreas, como jornalistas, escritores e pesquisadores, podem utilizar a chatgptportugues.io para obter informações, gerar ideias e até mesmo revisar textos. A versatilidade do ChatGPT torna-o uma ferramenta valiosa para quem busca otimizar seu trabalho e obter resultados de alta qualidade.
**O Futuro do ChatGPT Portugues**
O futuro do ChatGPT Portugues é promissor. Com constantes atualizações e melhorias, a plataforma tende a se tornar ainda mais eficiente e abrangente. A equipe por trás da chatgptportugues.io está comprometida em ouvir o feedback dos usuários e implementar mudanças que tornem a experiência cada vez melhor.
À medida que a tecnologia avança, espera-se que o ChatGPT se torne uma presença constante no dia a dia dos falantes de português, auxiliando em tarefas cotidianas, profissionais e acadêmicas. A acessibilidade e a precisão das respostas fazem do ChatGPT Portugues uma ferramenta indispensável para quem busca inovação e praticidade.
**Conclusão**
Em resumo, o ChatGPT Portugues é uma inovação que veio para transformar a maneira como interagimos com a inteligência artificial em nossa língua nativa. Através da plataforma chatgptportugues.io, os usuários têm acesso a uma ferramenta poderosa, segura e eficiente, que atende às suas necessidades de maneira personalizada e precisa. Com um futuro promissor, o ChatGPT Portugues promete continuar evoluindo e se tornando cada vez mais relevante para os falantes de português em todo o mundo.
Contact information
Company: ChatGPT Português
Address: Av. Dom Dinis 16, Vila Real, Portugal
State: Vila Real
City: Vila Real
Country: Portugal
Postal code: 5000 – 600
Phone: +351 920534472
Website: https://chatgptportugues.io/
Email: chatgptportugues.io@gmail.com
Google Map: Av. Dom Dinis 16, Vila Real, Portugal | chatgpt_portugues_15d9d13 |
1,915,104 | Transparent LED display: Leading the new modern vision | Introduction With its unique visual effects and application flexibility, transparent LED display is... | 0 | 2024-07-08T02:34:07 | https://dev.to/sostrondylan/transparent-led-display-leading-the-new-modern-vision-245o | transparent, led, display | Introduction
With its unique visual effects and application flexibility, [transparent LED display](https://sostron.com/products/crystal-transparent-led-screen/) is gradually changing our perception of traditional display technology. This article will explore the innovative application of transparent LED display in different application scenarios and how it can bring revolutionary visual experience to commercial environments and public spaces.

Application scenario analysis
Shopping malls
In shopping malls, the integration of transparent LED display and modern architectural aesthetics provides consumers with a new shopping experience. They are usually installed on large glass partitions or elevator shafts in shopping malls, which not only enhances the modern sense of shopping malls, but also provides a striking advertising platform for brands. [Provide you with LED transparent screen solutions. ](https://sostron.com/led-transparent-screen-solution/)

Chain franchise stores
Personalized store brand image is essential to attract customers. With its unique design and dynamic advertising short films, transparent LED display replaces traditional store wall advertising, bringing higher attention and customer traffic to stores. [7 advantages of using LED transparent screen rental for retail display. ](https://sostron.com/7-advantages-of-using-led-transparent-screen-rental/)
Technology Exhibition Hall
Technology Exhibition Hall is an important place for spreading scientific and technological knowledge. The special-shaped customization capability of transparent LED display makes it an ideal choice for displaying new technological effects, allowing visitors to intuitively feel the wonder and mystery of technology.

Laminated Glass Window Display
With the rapid development of the retail industry, the application of transparent LED display screens in the fields of facades, laminated glass window display decoration, etc. is becoming increasingly popular. They have brought disruptive changes to retailers and provided a new visual marketing tool. [Here are ten questions about transparent LED window display screens for you. ](https://sostron.com/transparent-led-window-display-ten-questions-answered/)

Architectural Media
With the continuous advancement of LED technology, architectural media technology has also made significant progress. Especially in the application of curtain wall glass buildings, transparent LED display screens such as LED light strip screens and transparent LED skylight screens add dynamic visual effects to buildings.

Application advantages of transparent LED display
Metal curtain wall
The transparent LED display is attached to the main glass keel and perfectly integrated with the curtain wall glass, providing excellent advertising display effect while maintaining the transparency and aesthetics of the building.
Indoor space design
The transparent LED display can be customized with different shapes and designs according to different indoor space requirements, which not only preserves the clean, uncluttered look of the space, but also adds a sense of modernity to the interior decoration. [What is the difference between indoor and outdoor LED walls? ](https://sostron.com/ten-differences-between-indoor-and-outdoor-led-walls/)
Exhibition design
At various exhibitions, such as auto shows and new product launches, the multi-angle display capability of transparent LED displays provides all-round visual support for product promotion.

Window display
The transparent LCD advertising machine hanging on the window has excellent commercial publicity effect and brings new vitality to the window display.
Conclusion
With its innovative design and flexible application, the transparent LED display is becoming the new favorite of modern visual display. Whether in a commercial environment or a public space, it can provide a unique visual experience and an efficient way of disseminating information. With the continuous development of technology, we expect transparent LED displays to play a greater role in future application scenarios.
Thank you for watching. I hope we can solve your problems. Sostron is a professional [LED display manufacturer](https://sostron.com/about-us/). We provide all kinds of displays, display leasing and display solutions around the world. If you want to know: [The quotations for different application scenarios of LED display screens are different.](https://dev.to/sostrondylan/the-quotations-for-different-application-scenarios-of-led-display-screens-are-different-5f8b) Please click read.
Follow me! Take you to know more about led display knowledge.
Contact us on WhatsApp: https://api.whatsapp.com/send?phone=+8613570218702&text=Hello | sostrondylan |
1,915,107 | How to Build a Data Collection System (Fast & Easy) | Build a Data Collection System in 3 Easy Steps In this guide, we outline the steps necessary to... | 0 | 2024-07-08T02:40:41 | https://five.co/blog/how-to-build-a-data-collection-system/ | database, datascience, mysql, development | <!-- wp:heading -->
<h2 class="wp-block-heading">Build a Data Collection System in 3 Easy Steps</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>In this guide, we outline the steps necessary to build and launch a data collection system using Five’s rapid application development environment.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>The process involves:</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li>Creating the <a href="https://www.oracle.com/au/database/what-is-database">database</a></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Adding the form</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Launching the web form</li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>We will also cover securing the web form with logins, authentication, and permissions.</p>
<!-- /wp:paragraph -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":3} -->
<h3 class="wp-block-heading">What is a Data Collection System?</h3>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>A data collection system is an interface used to gather, store, manage, and analyze data. These systems capture relevant information that can be used for decision-making, research, analysis, and reporting. Data collection systems can vary in complexity, from simple web forms to sophisticated software integrated with databases and analytics tools.</p>
<!-- /wp:paragraph -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Key Components of a Data Collection System</h4>
<!-- /wp:heading -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li><strong>Web Forms</strong>: Online forms that users fill out to submit information.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Databases</strong>: Systems such as MySQL, PostgreSQL, or MongoDB that store collected data in an organized manner.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Data Security</strong>: Protecting data from unauthorized access and breaches.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Analytics Tools</strong>: Software that allows users to query the database, generate reports, and visualize data.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Dashboards</strong>: Interactive interfaces that provide real-time insights and trends.</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list -->
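<!-- wp:paragraph -->
<p>To make the first two components concrete, here is a minimal, framework-agnostic sketch of the "web form to database" path. The field names and the in-memory store are illustrative stand-ins, not Five's actual API:</p>
<!-- /wp:paragraph -->

```javascript
// Minimal sketch of the "web form -> database" path of a data collection
// system. Submissions are validated, then appended to `store`, an array
// standing in for a real database table.
const store = [];

function submitForm(fields) {
  // Basic validation: every submission needs a name and a numeric price.
  if (!fields.name || typeof fields.name !== "string") {
    throw new Error("name is required");
  }
  const price = Number(fields.price);
  if (!Number.isFinite(price)) {
    throw new Error("price must be a number");
  }
  const record = {
    id: store.length + 1, // auto-increment style primary key
    name: fields.name,
    price: Math.round(price * 100) / 100, // keep two decimal places
    submittedAt: new Date().toISOString(),
  };
  store.push(record);
  return record;
}
```

<!-- wp:paragraph -->
<p>A real system would replace the array with a database table and put authentication in front of the handler — which is exactly what the steps below set up.</p>
<!-- /wp:paragraph -->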
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Building a Data Collection System with Five</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Creating a data collection system in Five offers a multitude of advantages over traditional form builders, making it ideal for those who need robust, secure, and analyzable data.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>One of the standout features of Five is the ability to create login-protected forms. This ensures that only authorized users can access and submit data, enhancing the security of your data collection system. Traditional form builders often lack these advanced security features, leaving your data vulnerable to unauthorized access.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Unlike traditional form builders, which only store submitted data, Five allows you to directly connect your data collection to a database. This allows you to query your database and generate visual representations of your data, making it easier to identify trends, patterns, and correlations. Most traditional form builders require exporting data to third-party tools for analysis, adding extra steps and potential for errors.</p>
<!-- /wp:paragraph -->
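<!-- wp:paragraph -->
<p>To illustrate what that direct database connection buys you, here is a rough sketch of the kind of aggregation a dashboard runs over collected rows (plain code, not Five's actual query interface; the field names are assumed):</p>
<!-- /wp:paragraph -->

```javascript
// Summarize collected submissions: count and average price per category.
// `rows` stands in for the result of a SELECT over the form's table.
function summarize(rows) {
  const byCategory = {};
  for (const { category, price } of rows) {
    const s = (byCategory[category] ??= { count: 0, total: 0 });
    s.count += 1;
    s.total += price;
  }
  // Turn running totals into averages, ready to feed a chart or dashboard.
  return Object.fromEntries(
    Object.entries(byCategory).map(([c, s]) => [
      c,
      { count: s.count, avgPrice: s.total / s.count },
    ])
  );
}
```

<!-- wp:paragraph -->
<p>With a form builder that only stores submissions, this step usually means a manual export; with a database-backed system, it is a single query.</p>
<!-- /wp:paragraph -->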
<!-- wp:paragraph -->
<p>Data collection systems built with Five are designed for professional-grade data collection, making it ideal for extensive surveys, research projects, or feedback analysis. Five's advanced features ensure that data collection is not only secure but also systematic and scalable.</p>
<!-- /wp:paragraph -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Step 1: Database For Your Data Collection System</h2>
<!-- /wp:heading -->
<!-- wp:tadv/classic-paragraph -->
<div style="background-color: #001524;"><hr style="height: 5px;" />
<pre style="text-align: center; overflow: hidden; white-space: pre-line;"><span style="color: #f1ebda; background-color: #4588d8; font-size: calc(18px + 0.390625vw);"><strong>Build a Data Collection System<br /></strong><span style="font-size: 14pt;">Check out Five's online data collection</span></span></pre>
<p style="text-align: center;"><a href="https://five.co/get-started/" target="_blank" rel="noopener"><button style="background-color: #f8b92b; border: none; color: black; padding: 20px; text-align: center; text-decoration: none; display: inline-block; font-size: 18px; cursor: pointer; margin: 4px 2px; border-radius: 5px;"><strong>Get Instant Access</strong></button><br /></a></p>
<hr style="height: 5px;" /></div>
<!-- /wp:tadv/classic-paragraph -->
<!-- wp:paragraph -->
<p>To begin, <a href="https://five.co/get-started/">get free access to Five</a> and start a new application by navigating to Applications and clicking the yellow plus button.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"align":"center","id":3235,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Yellow-Plus-Button-Create-a-New-Application-1024x649-1.png" alt="" class="wp-image-3235"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Start by creating a database to store the collected data.</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li><strong>Create a New Application</strong><!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Click the yellow plus button.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Name your application (e.g., My First App or Data Collection Form).</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Confirm by clicking the check icon in the upper right corner.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Click on the blue Manage button to enter the development environment.</li>
<!-- /wp:list-item --></ul>
<!-- wp:image {"align":"center","id":3236,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Manage-Your-Application-1024x576-1.png" alt="" class="wp-image-3236"/></figure>
<!-- /wp:image -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li><strong>Create Database Tables</strong><!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Go to Data > Table Wizard, a point-and-click interface for creating database tables.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Name your table descriptively (e.g., Recipes for a recipe collection form).</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Add fields to your table using the plus button, specifying the data types (e.g., text, integer, float).</li>
<!-- /wp:list-item --></ul>
<!-- wp:image {"id":3237,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Table-Wizard-1024x649-1-1.png" alt="" class="wp-image-3237"/></figure>
<!-- /wp:image -->
<!-- wp:paragraph -->
<p>Remember to choose appropriate data and display types to ensure your data is stored and displayed correctly. For example, use Float.2 for prices with two decimal places.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>Save your table by clicking the check icon in the upper right corner. Your MySQL database table is now ready to store data.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"id":3238,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Excel-to-Web-App-Database-Fields-1024x626-1-1.png" alt="" class="wp-image-3238"/></figure>
<!-- /wp:image -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Step 2: Designing the Data Collection Form</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Next, navigate to Visual > Form Wizard in Five.</p>
<!-- /wp:paragraph -->
<!-- wp:list {"ordered":true} -->
<ol><!-- wp:list-item -->
<li><strong>Select Data Source</strong><!-- wp:list -->
<ul><!-- wp:list-item -->
<li>In the Form Wizard’s General section, select the database table you created as the main data source.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>This links your backend (database) with your frontend (form).</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list --></li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li><strong>Create the Form</strong><!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Click the check icon in the upper right corner to finalize the form creation.</li>
<!-- /wp:list-item --></ul>
<!-- /wp:list --></li>
<!-- /wp:list-item --></ol>
<!-- /wp:list -->
<!-- wp:paragraph -->
<p>Your form is now complete and connected to your database.</p>
<!-- /wp:paragraph -->
<!-- wp:image {"align":"center","id":3239,"sizeSlug":"full","linkDestination":"none"} -->
<figure class="wp-block-image aligncenter size-full"><img src="https://five.co/wp-content/uploads/2024/07/Five.Co-Form-Wizard-Creating-a-form-1024x656-4.png" alt="" class="wp-image-3239"/></figure>
<!-- /wp:image -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Step 3: Deploying the Form</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>To deploy your form:</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li><strong>Deploy to Development</strong><!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Click the “Deploy to Development” button in the top right corner. This opens your app in a new browser tab.</li>
<!-- /wp:list-item --></ul>
<!-- wp:paragraph -->
<p>Your prototype web form is now live. To enhance it, consider adding <a href="http://five.co/themes">themes</a> or additional features.</p>
<!-- /wp:paragraph -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading {"level":4} -->
<h4 class="wp-block-heading">Securing Your Data Collection Form: Logins, Authentication, Permissions</h4>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Five’s form creator allows you to quickly build secure online forms with user roles and permissions.</p>
<!-- /wp:paragraph -->
<!-- wp:list -->
<ul><!-- wp:list-item -->
<li><strong>Add User Roles and Logins</strong><!-- wp:list -->
<ul><!-- wp:list-item -->
<li>Turn your application into a <a href="https://help.five.org/2.5/docs/applications/adding-managing-applications/">multi-user</a> app, automatically adding a login screen.</li>
<!-- /wp:list-item -->
<!-- wp:list-item -->
<li>Create user roles with specific permissions. For instance, one role can submit forms while another can view a dashboard summarizing form responses.</li>
<!-- /wp:list-item --></ul>
<!-- wp:paragraph -->
<p>Explore Five’s documentation for more detailed instructions on securing your data collection system.</p>
<!-- /wp:paragraph -->
<!-- wp:separator -->
<hr class="wp-block-separator has-alpha-channel-opacity"/>
<!-- /wp:separator -->
<!-- wp:heading -->
<h2 class="wp-block-heading">Conclusion: Building a Secure Data Collection System</h2>
<!-- /wp:heading -->
<!-- wp:paragraph -->
<p>Building a data collection system with Five’s rapid application development environment offers numerous advantages over traditional form builders. The process involves three key steps: creating the database, designing the form, and launching the web form. Five provides security features, including login protection, authentication, and permissions, ensuring that your data collection system is secure and only accessible to authorized users.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>By using Five, you can directly connect your data collection to a database, enabling efficient data management and real-time analysis through custom charts and visual representations. This capability allows you to easily identify trends, patterns, and correlations, which is often cumbersome and error-prone with traditional form builders that require exporting data to third-party tools.</p>
<!-- /wp:paragraph -->
<!-- wp:paragraph -->
<p>With Five, you can streamline your data collection process, strengthen data security, and use analytical tools to gain insights, making it the superior choice for building a comprehensive and efficient data collection system.</p>
<!-- /wp:paragraph --> | domfive |
1,915,108 | Cost-Optimized Website Design in Binh Phuoc | For businesses in Binh Phuoc, owning an impressive website that is SEO-optimized and... | 0 | 2024-07-08T02:47:07 | https://dev.to/terus_technique/thiet-ke-website-tai-binh-phuoc-toi-uu-chi-phi-377j | website, digitalmarketing, seo, terus |

For businesses in Binh Phuoc, owning an impressive website that is SEO-optimized and capable of bringing in potential customers is absolutely essential.
Terus is confident in its role as a company specializing in website design and development. We have built professional, modern, SEO-optimized websites for many businesses in Binh Phuoc. With many years of experience in this field, Terus understands the key factors behind a successful website, from an intuitive, easy-to-use design to an optimized user experience and higher conversion rates.
One of the standout advantages of Terus's [website design service in Binh Phuoc](https://terusvn.com/thiet-ke-website-tai-hcm/) is its beautiful, professional design. Terus's experienced design team creates [attractive website interfaces that suit each client's brand and industry](https://terusvn.com/thiet-ke-website-tai-hcm/). In addition, SEO optimization is a key factor that Terus always focuses on, helping clients' websites be found easily on search engines and attract quality traffic.
Terus is also committed to providing tailored customer support and a high degree of customization. With a deep understanding of each client's needs and goals, Terus's team of experts designs and deploys a suitable website, and is always ready to support clients throughout operation and updates.
In addition, Terus provides other supporting services such as Digital Marketing, website administration, and software development to meet all the digitalization and IT needs of businesses in Binh Phuoc. With a dedicated team of experts committed to delivering optimal value to clients, Terus has affirmed its position as one of the leading website design providers in Binh Phuoc.
Learn more about [Cost-Optimized Website Design in Binh Phuoc](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-binh-phuoc/)
Services at Terus:
Digital Marketing:
· [Facebook Ads Service](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
· [Google Ads Service](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
· [Comprehensive SEO Service](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/)
Website design:
· [Insight-Driven Website Design Service](https://terusvn.com/thiet-ke-website/dich-vu-thiet-ke-website-chuan-insight-chuyen-nghiep-uy-tin-tai-terus/)
· [Website Design Service](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_technique |
1,915,109 | Attractive Website Design in Binh Thuan | In today's digital era, owning a professional website plays a key role in... | 0 | 2024-07-08T02:49:44 | https://dev.to/terus_technique/thiet-ke-website-tai-binh-thuan-thu-hut-2di6 | website, digitalmarketing, seo, terus |

Trong thời đại số hóa ngày nay, sở hữu một website chuyên nghiệp đóng vai trò then chốt trong việc thúc đẩy sự phát triển của doanh nghiệp. Đặc biệt đối với các công ty tại Bình Thuận, thiết kế website là một giải pháp hiệu quả giúp họ thu hút và tương tác với khách hàng một cách hiệu quả.
Ưu điểm của [dịch vụ thiết kế website tại Bình Thuận](https://terusvn.com/thiet-ke-website-tai-hcm/):
Thể hiện bộ mặt và đại diện cho doanh nghiệp: Website là cửa sổ số hiện diện của công ty, phản ánh rõ nét thương hiệu, sản phẩm/dịch vụ và văn hóa doanh nghiệp. Một website được thiết kế chuyên nghiệp sẽ tạo ấn tượng tích cực, nâng cao niềm tin của khách hàng.
Mạng lưới kinh doanh mở rộng: Có mặt trên online giúp doanh nghiệp tiếp cận và mở rộng phạm vi khách hàng tiềm năng, không bị giới hạn bởi khoảng cách địa lý. Đây là cơ hội vàng để tiếp cận thị trường toàn quốc và quốc tế.
Quảng cáo không giới hạn: Website trở thành công cụ quảng bá thương hiệu và sản phẩm/dịch vụ hiệu quả, tiết kiệm chi phí so với các phương thức truyền thống. Khách hàng có thể tìm hiểu thông tin về doanh nghiệp 24/7 mà không bị hạn chế bởi thời gian làm việc.
Giao tiếp hiệu quả với khách hàng: Tích hợp tính năng chat trực tuyến trên website giúp doanh nghiệp tương tác và phục vụ khách hàng một cách nhanh chóng, tạo trải nghiệm mua sắm tích cực.
Với quy trình bài bản và kinh nghiệm phong phú, Terus cam kết mang lại [dịch vụ thiết kế website chất lượng cao](https://terusvn.com/thiet-ke-website-tai-hcm/), góp phần thúc đẩy sự phát triển bền vững của doanh nghiệp tại Bình Thuận.
Tìm hiểu thêm về [Thiết kế Website Tại Bình Thuận Thu Hút](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-binh-thuan/)
| terus_technique |
1,915,110 | debouncing and throttling in javascript simplified by aryan | These terms are nothing but ways to improve javascript❤ performance. Debounce Debounce is... | 0 | 2024-07-08T02:50:02 | https://dev.to/aryan015/debouncing-and-throttling-in-javascript-simplified-by-aryan-492e | javascript, react, webdev, programming | These terms are nothing but ways to improve JavaScript❤ performance.
## Debounce
Debounce runs your code only after the activity stops. `beware` that it fires only when the cursor stops moving, so it might miss some important intermediate inputs. If you move the cursor again within the delay (one second here), the timer resets.
snippet.js
```js
let interval;
function doSomething(){
clearTimeout(interval); // restart the countdown on every call
interval = setTimeout(function(){
//your code runs only after 1s of silence
},1000)
}
```
## Throttle
Throttle is useful when you want to ensure that a function is called at a limited rate or frequency, without missing any important inputs or events.❤
snippet.js
```js
// It will run at most once per second,
// no matter how often it is called
let isScroll = true;
function doSomething(){
if(isScroll){
isScroll = false;
//your code
setTimeout(function(){
isScroll = true; // reopen the gate after 1s
},1000)
}
}
```
| normal | throttle | debounce |
|--- | --- | --- |
| 1000 | 10 | 1 |
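For reuse, the two patterns above can be wrapped as higher-order functions that work for any handler (a minimal sketch; the helper names `debounce` and `throttle` are mine, not part of the snippets above):

```javascript
// Wrap any function so it runs only after `delay` ms of silence.
function debounce(fn, delay) {
  let timeoutId;
  return function (...args) {
    clearTimeout(timeoutId);            // restart the countdown on every call
    timeoutId = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Wrap any function so it runs at most once per `delay` ms.
function throttle(fn, delay) {
  let ready = true;
  return function (...args) {
    if (!ready) return;                 // still inside the cooldown window
    ready = false;
    fn.apply(this, args);
    setTimeout(() => { ready = true; }, delay);
  };
}
```

With these you can write `window.addEventListener('mousemove', throttle(update, 1000))` instead of hand-rolling the flag and timer in every handler.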
## Do by yourself
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Counter</title>
<!-- Add your CSS or external stylesheets here -->
<link rel="stylesheet" href="styles.css">
</head>
<body>
<!-- Your content goes here (always give parent element an unique id) -->
<main id='main'>
<!-- always try to give semantic elements I have avoided due to less style writes -->
<section id='normal'>
<span style="font-weight:600;">Normal </span><span id='normal-counter'></span>
</section>
<section id='throttle'>
<span style="font-weight:600;">Throttle </span><span id='throttle-counter'></span>
</section>
<section id='debounce'>
<span style="font-weight:600;">Debounce </span><span id='debounce-counter'></span>
</section>
</main>
<!-- Add your JavaScript or external scripts here -->
<script src="script.js" defer></script>
</body>
</html>
```
script.js
```js
const tcounter = document.getElementById("throttle-counter");
const normal = document.getElementById("normal-counter");
const dcounter = document.getElementById("debounce-counter");
tcounter.innerText = '0'
dcounter.innerText = '0'
normal.innerText = '0'
window.addEventListener('mousemove',update)
// normal counter
let ncounter = 0;
// throttle counter
let tcount = 0;
// debounce counter
let dcount = 0;
// note: functions are hoisted
let isScroll = true;
function update(){
// normal function
normalUpdate();
//throttle function
throttleUpdate();
// debounce function
debounceUpdate();
}
function normalUpdate(){
console.log('normal')
normal.innerText = ncounter++;
}
function throttleUpdate(){
console.log('throttle')
if(isScroll){
isScroll = false;
//your code
tcounter.innerText = tcount++
setTimeout(function(){
isScroll = true; // allow the next run after 1s
},1000)
}
}
let interval;
function debounceUpdate(){
console.log('debounce');
clearTimeout(interval);
interval = setTimeout(function(){
//your code
dcounter.innerText = dcount++;
},1000)
}
// you can create a separate module and import them to make the codebase cleaner❤
```
output

[Validate html❤](https://validator.w3.org/)
[video reference](https://www.youtube.com/watch?v=TppUB5WGz8w)
[my linkedin❤](https://www.linkedin.com/in/aryan-khandelwal-779b5723a/) | aryan015 |
1,915,111 | Professional Website Design in Cà Mau | When a business wants to build a website in Cà Mau, correctly identifying its needs is the important first step... | 0 | 2024-07-08T02:52:57 | https://dev.to/terus_technique/thiet-ke-website-tai-ca-mau-chuyen-nghiep-kp5 | website, digitalmarketing, seo, terus |

When a business wants to build a website in Cà Mau, correctly identifying its needs is the important first step. The business needs to clearly understand its business goals, marketing purposes, technology goals, and financial targets in order to set a suitable direction for the website design.
Based on its goals and needs, the business will choose the appropriate type of website in Cà Mau. This may be an informational website, an e-commerce website, a service website, and so on. Each type of website has different features, interfaces, and ways of approaching customers.
When choosing a [website design service in Cà Mau](https://terusvn.com/thiet-ke-website-tai-hcm/), a business should consider advantages such as beautiful, professional design, SEO optimization, customization support, and high adaptability. Terus is confident it is one of the reputable, professional website design providers in Cà Mau.
Terus has a creative, experienced design team and a rigorous website design process that includes: receiving requirements and consulting, designing a demo, finalizing the interface and implementing features, optimizing Insight scores, trial-running the product, and handing over the product.
With a diverse set of templates and a template-based website design service in Cà Mau, Terus can help businesses save time and money while delivering a [professional, modern website suited](https://terusvn.com/thiet-ke-website-tai-hcm/) to their business needs.
Learn more about [Professional Website Design in Cà Mau](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-ca-mau/)
| terus_technique |
1,915,112 | Simple Guide to Learning SEO Without Tech Experience | SEO (Search Engine Optimization) is super important for your website's success. Don't worry if you're... | 0 | 2024-07-08T02:53:16 | https://dev.to/juddiy/simple-guide-to-learning-seo-without-tech-experience-1ign | website, seo, learning | SEO (Search Engine Optimization) is super important for your website's success. Don't worry if you're not a tech guru—you can still grasp and use SEO to boost your site's ranking and visibility in search engines with these easy methods:
1. **Learn the Basics**: Understanding the fundamental concepts of SEO is the first step. You can start by reading simple SEO beginner's guides or tutorials to grasp key terms like "keywords," "ranking," "traffic," and how they impact search engine rankings.
2. **Focus on Content Quality**: Regardless of your role, prioritize the quality of content published on your website. High-quality, valuable content not only attracts more visitors but also enhances search engine rankings. Learn how to craft compelling titles and content, using appropriate keywords and semantic SEO techniques to optimize content.
3. **Market Research and Competitive Analysis**: While you don't need deep technical knowledge, understanding your target market and the strategies of your main competitors is crucial. Conduct market research and analyze competitors' websites to discover potential SEO opportunities and strategies.
4. **Utilize SEO Tools**: There are many user-friendly SEO tools available to help non-technical professionals analyze website performance and make improvement recommendations. For example, [SEO AI](https://seoai.run/) can analyze keyword usage, offer content optimization suggestions, and assess website structure to help you optimize your site for improved search engine rankings.
5. **Collaborate with SEO Experts**: If you encounter challenges while learning and applying SEO, consider collaborating with SEO experts or agencies. They can provide tailored advice and strategies to help optimize your website and enhance its search engine rankings.
By using these methods, non-technical folks can easily understand and apply basic SEO principles. This will help bring more valuable traffic to their websites and achieve significant user growth. | juddiy |
1,915,113 | Website Design in Cao Bằng to Boost Revenue | A professional website design service in Cao Bằng brings many benefits to a business. First... | 0 | 2024-07-08T02:55:54 | https://dev.to/terus_technique/thiet-ke-website-tai-cao-bang-tang-doanh-thu-2nm5 | website, digitalmarketing, seo, terus |

A [professional website design service in Cao Bằng](https://terusvn.com/thiet-ke-website-tai-hcm/) brings many benefits to a business. First, it provides customers with a useful, reliable source of information. Customers can easily access information about the business's products, services, prices, policies, and more. This not only makes the business more approachable but also builds trust and credibility for the brand.
Moreover, owning a website helps the business take care of customers more conveniently. Customers can easily get in touch, place orders, or have their questions answered through the website's features. This improves the customer experience while saving the business time and money.
In addition, the website plays an important role in executing effective marketing strategies. With the support of a website, a business can easily reach potential customers, promote its brand, and increase sales.
However, to achieve these benefits, the business must have its Cao Bằng website designed to SEO standards (search engine optimization). An SEO-standard website improves visibility on search engines, increases organic traffic, and brings in quality traffic that is more likely to convert.
Terus is proud to be a reputable website design company in Cao Bằng that can help businesses achieve the goals above. With a team that produces beautiful, professional designs, together with an SEO optimization service, Terus is committed to delivering impressive, SEO-standard websites that yield strong results for businesses.
With this rigorous process, Terus is committed to delivering the most [comprehensive, professional, SEO-standard website design](https://terusvn.com/thiet-ke-website-tai-hcm/) that satisfies customers in Cao Bằng.
Learn more about [Website Design in Cao Bằng to Boost Revenue](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-cao-bang/)
| terus_technique |
1,915,114 | Stay Updated with PHP/Laravel: Weekly News Summary (01/07/2024–07/07/2024) | Intro: Check out this insightful summary of PHP Laravel Weekly News for the week of July 1st to July... | 0 | 2024-07-08T02:57:34 | https://poovarasu.dev/php-laravel-weekly-news-summary-01-07-2024-to-07-07-2024/ | php, laravel | **Intro**: Check out this insightful summary of PHP Laravel Weekly News for the week of July 1st to July 7th, 2024. Stay updated with the latest developments, releases, and community updates in the PHP Laravel ecosystem.
**Key Points:**
- 🚀 New String Helpers and ServeCommand Improvements in Laravel 11.14: Releases include new string methods like chopStart() and chopEnd().
- 🖥️ Build Custom Admin Panels With Backpack for Laravel: Collection of packages for creating admin panels quickly.
- 🛠️ Laravel Error Solutions on the Default Exception Page: Spatie's package offers solutions for common Laravel errors.
- 💳 Spark Stripe: PayPal Support: Spark Stripe now supports PayPal for new subscriptions.
- 📚 A Guide to PHP Attributes: Explains PHP attributes and their usage.
- 🧪 Lawman: Pest Architecture Testing for Saloon API Integrations: Introduces Pest for API testing.
- 🔗 Specify Allowed URL Schemes in Short URL: Details on Laravel's URL scheme options.
- 🔧 The #[\Override] Attribute in PHP: Discusses PHP's override attribute.
- 🔄 Using whereAny() for cleaner queries in Eloquent: How to use whereAny() in Laravel Eloquent for cleaner data queries.
- 📧 Sending transactional emails using Mailcoach API in an Express.js application: Integration guide for Mailcoach API with Express.js.
- 📨 Filament Email - Email Log for Filament Projects: Plugin to log and resend emails within Laravel projects.
- 🚀 Laravel Slower - Optimize Your DB Queries with AI: Package for optimizing Laravel application database performance.
**Key Takeaway**: The Laravel ecosystem continues to evolve with updates like new string helpers in Laravel 11.14 and tools such as Pest for API testing, enhancing developer efficiency and application reliability. These innovations underscore Laravel's commitment to providing robust solutions for modern web development challenges.
This summary encapsulates the significant updates and releases in the PHP Laravel ecosystem for the specified week. Stay tuned for more updates and insights in the future!
Check out the complete article here [https://poovarasu.dev/php-laravel-weekly-news-summary-01-07-2024-to-07-07-2024/](https://poovarasu.dev/php-laravel-weekly-news-summary-01-07-2024-to-07-07-2024/) | poovarasu |
1,915,115 | Full-Featured Website Design in Đắk Lắk | A professional website brings many benefits to a business. First, it increases... | 0 | 2024-07-08T02:59:12 | https://dev.to/terus_technique/thiet-ke-website-tai-dak-lak-day-du-chuc-nang-2c5 | website, digitalmarketing, seo, terus |

A professional website brings many benefits to a business.
First, it increases the professionalism and credibility of the brand, affirming the business's standing in the eyes of customers.
Second, the website becomes an effective channel for reaching and interacting with customers, delivering product and service information quickly.
Third, the website is also an effective marketing tool, helping expand the market and reach potential customers. Finally, a website can automate business processes, raising productivity and operational efficiency.
Terus is proud to provide a [professional website design service](https://terusvn.com/thiet-ke-website-tai-hcm/) in Đắk Lắk with many outstanding strengths.
First, websites designed by Terus have eye-catching interfaces that clearly express the business's style and brand.
Second, the websites are SEO-optimized, improving visibility on search engines and attracting large volumes of traffic.
Third, Terus's team of experts always supports customers attentively, answering every question and guiding effective use. Finally, the websites are highly customizable: content can easily be updated and features adjusted to the customer's needs.
With these outstanding strengths and a methodical design process, Terus is proud to be a reputable, professional [website design provider in Đắk Lắk](https://terusvn.com/thiet-ke-website-tai-hcm/). Businesses in Đắk Lắk can feel completely at ease choosing Terus to own a professional website that attracts potential customers and drives business growth.
Learn more about [Full-Featured Website Design in Đắk Lắk](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-dak-lak/)
| terus_technique |
1,915,116 | WebCheck: Find out what hackers know about your site | There are times when you would like to know what all these hackers know about your site that you may... | 0 | 2024-07-08T03:00:22 | https://dev.to/tkouleris/webcheck-find-out-what-hackers-know-about-your-site-1elj | web, security | There are times when you would like to know what all these hackers know about your site that you may be unaware of. So you open 5-6 tools on your computer, you open sites in your browser that can help you, but finally you lose yourself between applications and data which you try to put in a logical order.
But don't be afraid, developers from all over the world have found the solution. WebCheck the application that will help you find out what hackers know about your page, gathered in one place.
Web page: [https://web-check.xyz/](https://web-check.xyz/)
Source Code: [https://github.com/Lissy93/web-check](https://github.com/Lissy93/web-check) | tkouleris |
1,915,117 | Stay Updated with Python/FastAPI/Django: Weekly News Summary (01/07/2024–07/07/2024) | Intro: Check out this insightful summary of Python/FastAPI/Django Weekly News for the week of July... | 0 | 2024-07-09T03:00:00 | https://poovarasu.dev/python-fastapi-django-weekly-news-summary-01-07-2024-to-07-07-2024/ | python, fastapi, django, flask | **Intro:** Check out this insightful summary of Python/FastAPI/Django Weekly News for the week of July 1st to July 7th, 2024. Stay updated with the latest developments, releases, and community updates in the Python ecosystem.
**Key Points:**
🐍 Python News Roundup: NumPy and Polars release new major versions, impacting the data science ecosystem and Python's future versions.
🧮 Build a Calculator with Satellite Data and explore Best Practices in PyCoders Issue #636.
🔄 Reusable Components in Django using Stimulus and Tailwind CSS highlighted by Michael Yin.
🌐 Developing GraphQL APIs in Django with Strawberry by Oluwole Majiyagbe.
🚀 Understanding FastAPI with Starlette and ASGI by Rafael de Oliveira Marques.
📝 Explaining Decorators in Django for Beginners by Ismail Ibrahim.
📊 Python's production usage compared between Go and Python by Abanoub Hanna.
📚 Guide on scraping Amazon using Python by Ione R. Garza.
🛠️ Build a Dynamic Blog with Flask and HTMX by 3a5abi 🥷.
💳 Adding Payment functionality to Django apps with Stripe by Paul.
**Key Takeaway:** This article reflects ongoing advancements in Python's ecosystem, including significant updates to core libraries like NumPy and discussions on language evolution. It underscores Python's versatility across web development, data scraping, API development, and more, showcasing its robust adoption in various domains of software engineering and data science.
This summary offers a concise overview of recent advancements in the Python/FastAPI/Django framework, providing valuable insights for developers and enthusiasts alike. Explore the full post for more in-depth coverage and stay updated on the latest in Python/FastAPI/Django development.
Check out the complete article here https://poovarasu.dev/python-fastapi-django-weekly-news-summary-01-07-2024-to-07-07-2024/ | poovarasu |
1,915,118 | Reputable, Professional Website Design in Đắk Nông | Why businesses need website design in Đắk Nông: increasing the professionalism and credibility of... | 0 | 2024-07-08T03:02:07 | https://dev.to/terus_technique/thiet-ke-website-tai-dak-nong-chuyen-nghiep-uy-tin-bjj | website, digitalmarketing, seo, terus |

Why businesses need website design in Đắk Nông:
Increase brand professionalism and credibility: A professional website with an attractive design and quality content makes a good impression on customers and affirms the business's standing and reputation.
Expand customer reach: A website can help businesses in Đắk Nông reach customers nationwide, even globally, beyond traditional geographic boundaries.
Improve business performance: With a well-designed, well-run website, a business can cut costs and raise productivity and operational efficiency.
Improve the customer experience: A user-friendly, easy-to-use website that provides complete information increases customer satisfaction and attachment to the business.
Advantages of using Terus's [website design service in Đắk Nông](https://terusvn.com/thiet-ke-website-tai-hcm/):
Beautiful, professional design: Terus's talented design team creates impressive websites that suit the business's brand and industry.
SEO optimization: Websites designed by Terus are all SEO-optimized, helping businesses appear easily in search results.
Attentive customer support: Terus is committed to attentive customer care, always ready to assist and answer every customer question.
High customizability: Websites designed by Terus are built on a flexible platform, allowing businesses to easily update and change content and features as business needs evolve.
Terus is a reputable, professional website design provider in Đắk Nông. With many years of industry experience, Terus has designed hundreds of websites for businesses and organizations across many fields. All websites designed by Terus follow SEO standards, ensuring an optimized user experience and better customer reach.
With Terus's [professional, reputable SEO-standard website design service](https://terusvn.com/thiet-ke-website-tai-hcm/), businesses in Đắk Nông can build a strong online presence, attract potential customers, and improve business performance. Contact Terus today for consulting and a website design service suited to your business's needs.
Learn more about [Reputable, Professional Website Design in Đắk Nông](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-dak-nong/)
| terus_technique |
1,915,119 | Unique, Engaging Website Design in Điện Biên | Why businesses need website design in Điện Biên. Improve business performance:... | 0 | 2024-07-08T03:04:34 | https://dev.to/terus_technique/thiet-ke-website-tai-dien-bien-doc-dao-thu-hut-21ih | website, digitalmarketing, seo, terus |

Why businesses need website design in Điện Biên
Improve business performance: A professional website is not only an effective promotion and marketing tool but also a channel for the business to interact with and serve customers at its best. Through the website, a business can provide complete information about products and services and publish news and promotions, increasing customer trust and loyalty.
Expand the scope of operations: A professional website helps a business break out of geographic limits, reach distant customers, and expand into potential markets. Especially for businesses in Điện Biên, owning a professional website will help them reach the whole country and even international markets.
Increase reputation and brand: In the digital age, customers often search for and evaluate a business through its website. A professional, modern website therefore makes a good impression and raises the business's reputation and brand in customers' eyes.
Improve the customer experience: A professional website not only helps the business convey its message effectively but also gives customers a great experience through a friendly, easy-to-use interface and convenient features.
Terus is a leading provider of [website design and development in Điện Biên](https://terusvn.com/thiet-ke-website-tai-hcm/). With many years of experience, a team of seasoned experts, and modern technology, Terus is committed to delivering high-quality websites that meet every business need.
In addition, Terus also offers [standard, professionally designed website templates](https://terusvn.com/thiet-ke-website-tai-hcm/) suited to many different industries. Customers can browse and choose a favorite template for Terus to implement and customize to their needs.
Learn more about [Unique, Engaging Website Design in Điện Biên](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-dien-bien/)
| terus_technique |
1,915,121 | Meet GPXSlot: A Trusted Online Slot Platform | In today's digital era, online gambling has become an inseparable part of the lives of... | 0 | 2024-07-08T03:07:20 | https://dev.to/gpxslot/mengenal-gpxslot-platform-slot-online-terpercaya-1lnp | gpxslot, linkaltgpxslot, slotgacor | In today's digital era, online gambling has become an inseparable part of the lives of many people around the world. One popular form of online gambling is slot games. As technology develops, more and more online gambling platforms have appeared, one of which is [GPXSlot](https://gpxslot104.com).
## What Is GPXSlot?
GPXSlot is an online gambling platform specializing in slot games. It offers a wide variety of attractive slot games, along with modern features to maximize its users' playing experience. The platform is known for its reliability and security in providing a fair, transparent playing environment for its users.
## GPXSlot's Strengths
A wide selection of slot games: GPXSlot provides hundreds of slot titles from leading providers in the gambling industry. This ensures that players have many options to match their preferences.
Security and reliability: Data security and financial transactions are GPXSlot's top priority. It uses the latest encryption technology to protect users' personal information and to ensure every transaction runs smoothly and safely.
Bonuses and promotions: Like most online gambling platforms, GPXSlot offers various bonuses and promotions to its players. From welcome bonuses to loyalty bonuses, it aims to add value for every one of its users.
Professional customer support: GPXSlot's support team is ready to help users with any issue that arises. It is available through multiple communication channels, such as live chat, email, and phone, to ensure a smooth playing experience for everyone.
Device compatibility: GPXSlot is designed to work well on a range of devices, including desktops, tablets, and smartphones. This lets players enjoy their favorite games wherever and whenever they want.
## How to Join GPXSlot
Joining GPXSlot is relatively easy. Prospective members only need to register an account, which generally involves a quick and simple process. After signing up, users can make a deposit using the various payment methods provided and start enjoying the slot games the platform offers.
## Conclusion
GPXSlot is a trusted online gambling platform that offers a fun, safe slot-playing experience. With its focus on security, a wide range of games, and responsive customer service, GPXSlot is a good choice for fans of online slot games. For those seeking a reliable and appealing online gambling platform, GPXSlot is worth considering as a top choice. | gpxslot |
1,915,122 | Thiết kế Website Tại Đồng Tháp Chuẩn UI/ UX | Trong thời đại số hóa như hiện nay, việc sở hữu một website chuyên nghiệp đóng vai trò rất quan... | 0 | 2024-07-08T03:07:42 | https://dev.to/terus_technique/thiet-ke-website-tai-dong-thap-chuan-ui-ux-2l1g | website, digitalmarketing, seo, terus |

In today's digital era, owning a professional website plays a very important role for businesses in Đồng Tháp. A website can help a business promote its products and services, increase its competitiveness, reach potential customers, and provide detailed information about the business.
In particular, for businesses in tourism and agriculture, Đồng Tháp's two key economic sectors, [building a professional website to UI/UX standards](https://terusvn.com/thiet-ke-website-tai-hcm/) becomes all the more necessary to attract tourists and sell agricultural products.
When designing a website, businesses in Đồng Tháp should keep the following factors in mind:
Professionalism and aesthetics: The design must present the business's brand and image professionally, attracting and impressing users at first glance.
Features and user experience: The website should offer suitable, easy-to-use features that give visitors a good experience.
Compatibility: The website must display well on many different devices, such as computers, mobile phones, and tablets.
Safety and security: The website should be built on a secure platform that protects users' information.
Localization: The content should reflect the cultural and geographical character of Đồng Tháp, making it more appealing and relevant to local users.
Owning a professional website brings businesses in Đồng Tháp many benefits, including:
Greater online presence and brand recognition.
An effective channel for reaching and interacting with customers.
More opportunities to reach potential customers and expand into new markets.
Higher credibility and trust in the eyes of customers.
Improved business processes and operational efficiency.
Savings on traditional advertising and marketing costs.
Come to Terus, a leading provider of [website design and development in Đồng Tháp](https://terusvn.com/thiet-ke-website-tai-hcm/). With many years of experience, a team of seasoned experts, and modern technology, Terus is committed to delivering high-quality websites that meet every business need.
Learn more about [Website Design in Đồng Tháp to UI/UX Standards](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-dong-thap/)
Services at Terus:
Digital Marketing:
· [Facebook Ads Service](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
· [Google Ads Service](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
· [Comprehensive SEO Service](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/)
Website design:
· [Insight-Driven Website Design Service](https://terusvn.com/thiet-ke-website/dich-vu-thiet-ke-website-chuan-insight-chuyen-nghiep-uy-tin-tai-terus/)
· [Website Design Service](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_technique |
1,915,123 | Website Design in Gia Lai to 2024 Standards | In today's era, owning a professionally designed, modern website is not just an... | 0 | 2024-07-08T03:10:36 | https://dev.to/terus_technique/thiet-ke-website-tai-gia-lai-tieu-chuan-2024-1efk | website, digitalmarketing, seo, terus |

In today's era, [owning a professionally designed, modern website](https://terusvn.com/thiet-ke-website-tai-hcm/) is not just an option but an urgent need for businesses in Gia Lai. The key benefits a Gia Lai website can bring are:
Establishing a presence: A professional website helps your business build an online presence and brand and increases recognition among customers.
Maximizing outreach opportunities: With a website, a business can reach and communicate with customers anytime, anywhere, expanding the scope of its operations.
Unlimited advertising: A website is an effective channel for promoting products and services, helping a business reach and attract potential customers.
Serving customers effectively: Features such as online ordering and 24/7 customer support improve the customer experience.
A flexible communication medium: A website can integrate social networks, video channels, and blogs to increase interaction and communication effectiveness.
With many years of experience in the field, Terus is proud to bring businesses in Gia Lai professional, first-rate website design solutions:
Exclusive, impressive interfaces: Terus creates unique, attractive website interfaces in Gia Lai that fit the business's brand and industry.
SEO-standard, mobile-standard, responsive: Websites designed by Terus meet SEO standards and optimize the user experience on all mobile devices.
Complete, flexible features: Websites come equipped with all the necessary features, such as online ordering and easy content management.
An easy administration system: Terus provides a simple, friendly website administration system that lets businesses update content with ease.
With these outstanding strengths, Terus is confident it can deliver [high-quality, SEO-standard website design services optimized for user experience](https://terusvn.com/thiet-ke-website-tai-hcm/), contributing to the sustainable growth of local businesses.
Learn more about [Website Design in Gia Lai to 2024 Standards](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-gia-lai/)
Services at Terus:
Digital Marketing:
· [Facebook Ads Service](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
· [Google Ads Service](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
· [Comprehensive SEO Service](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/)
Website design:
· [Insight-Driven Website Design Service](https://terusvn.com/thiet-ke-website/dich-vu-thiet-ke-website-chuan-insight-chuyen-nghiep-uy-tin-tai-terus/)
· [Website Design Service](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_technique |
1,915,124 | Best and Good Solution | Weekly lessons from a Software Engineer: — Working in an Agile Software Development team for a... | 0 | 2024-07-08T03:13:14 | https://dev.to/rodonguyen/best-and-good-solution-236e | Weekly lessons from a Software Engineer:
—
Working in an Agile Software Development team for a while, I realised that the best solution is not always the best thing for business.
The business owner (BO) may not care how well the code is written or if the architecture is well designed for the future.
Sometimes, implementing an ok solution is better because it is simpler, saves time (time is money to BO) and is faster to launch to the customer. This is even more important for startups.
The decision to choose a complex solution must not only be justified by its benefits but also consider the business context. Otherwise, the business may run out of money before it can see the benefits of that “amazing” solution.
So… What can I do as a Software Engineer in the team? When you implement the next feature, evaluate different approaches and open a discussion with relevant stakeholders 😉 | rodonguyen | |
1,915,125 | Mastering Regular Expressions: A Comprehensive Journey | The article is about a comprehensive collection of 8 free programming learning resources from GetVM.io, all focused on the topic of regular expressions. It covers a wide range of topics, including the fundamentals of regular expression matching, efficient algorithms for regular expression matching, building custom regular expression engines, diving deep into the mechanics of regular expressions, implementing regular expressions in functional JavaScript, and even building a regex engine from scratch in Golang and JavaScript. The article provides detailed overviews of each tutorial, along with direct links to the resources, making it an invaluable guide for anyone looking to master the art of regular expressions. | 27,985 | 2024-07-08T03:14:38 | https://dev.to/getvm/mastering-regular-expressions-a-comprehensive-journey-2pgg | getvm, programming, freetutorial, collection |
Are you ready to dive deep into the world of regular expressions and unlock the power of pattern matching? 🔍 This collection of tutorials from GetVM.io offers a comprehensive exploration of the fundamental concepts, practical applications, and advanced techniques behind this essential programming tool.

## A Beginner's Guide to Regular Expression Matching
Start your journey by exploring the [A Regular Expression Matcher](https://getvm.io/tutorials/a-regular-expression-matcher) tutorial, which delves into the history, development, and practical applications of regular expressions. Gain a solid understanding of the basics of pattern recognition and how to leverage this powerful tool in your programming endeavors.
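That tutorial is built around Rob Pike's famously compact matcher (roughly thirty lines of C, described by Brian Kernighan). As a taste of the idea, here is a sketch of that style of backtracking matcher ported to Go; this is my own illustration rather than the tutorial's code, and it supports only literals, `.`, `*`, `^`, and `$`:

```go
package main

import "fmt"

// match reports whether re matches text anywhere.
// Supported syntax: literal chars, '.' (any char),
// '*' (zero or more of the previous char), '^' and '$' anchors.
func match(re, text string) bool {
	if len(re) > 0 && re[0] == '^' {
		return matchHere(re[1:], text)
	}
	for { // must look even at empty text
		if matchHere(re, text) {
			return true
		}
		if len(text) == 0 {
			return false
		}
		text = text[1:]
	}
}

// matchHere reports whether re matches at the beginning of text.
func matchHere(re, text string) bool {
	switch {
	case len(re) == 0:
		return true
	case len(re) >= 2 && re[1] == '*':
		return matchStar(re[0], re[2:], text)
	case re == "$":
		return len(text) == 0
	case len(text) > 0 && (re[0] == '.' || re[0] == text[0]):
		return matchHere(re[1:], text[1:])
	}
	return false
}

// matchStar matches c*re at the beginning of text.
func matchStar(c byte, re, text string) bool {
	for { // '*' matches zero or more instances
		if matchHere(re, text) {
			return true
		}
		if len(text) == 0 || (text[0] != c && c != '.') {
			return false
		}
		text = text[1:]
	}
}

func main() {
	fmt.Println(match("^a.b*c$", "axbbbc")) // true
	fmt.Println(match("b*c$", "abc"))       // true
	fmt.Println(match("^b", "abc"))         // false
}
```

The whole engine is three mutually recursive functions, which is why this matcher is such a popular teaching example.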
## Efficient Algorithms for Regular Expression Matching
Next, dive into the world of [Regular Expression Matching Can Be Simple And Fast](https://getvm.io/tutorials/regular-expression-matching-can-be-simple-and-fast), where you'll learn about efficient algorithms for regular expression matching, including the Thompson NFA approach. Discover how to build high-performance programs that leverage the power of regular expressions.
## Building Custom Regular Expression Engines
For a more hands-on experience, check out the [Build Your Own Regular Expression Engines: Backtracking, NFA, DFA](https://getvm.io/tutorials/build-your-own-regular-expression-engines-backtracking-nfa-dfa) tutorial. This comprehensive guide will teach you how to build custom regular expression engines, covering backtracking, NFA, and DFA approaches, and how to parse regular expressions using Python.
## Diving Deep into Regular Expression Engines
Explore the inner workings of regular expression engines with the [Implementing a Regular Expression Engine](https://getvm.io/tutorials/implementing-a-regular-expression-engine) tutorial. Delve into the world of finite automata, Thompson's Construction, and practical programming techniques to gain a deeper understanding of how these powerful tools function.
## Functional JavaScript and Regular Expressions
Discover the intersection of functional programming and regular expressions with the [How to implement regular expressions in functional javascript using derivatives](https://getvm.io/tutorials/how-to-implement-regular-expressions-in-functional-javascript-using-derivatives) tutorial. Learn how to implement regular expressions in functional JavaScript using derivatives, with practical examples and in-depth explanations.
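The derivative technique itself fits in a few dozen lines. The tutorial works in functional JavaScript; the sketch below shows the same idea in Go as my own illustration (not the tutorial's code): a regex matches a string exactly when the regex obtained by taking the Brzozowski derivative for each successive character is nullable (can match the empty string).

```go
package main

import "fmt"

type re interface{}

type empty struct{}        // matches nothing
type eps struct{}          // matches the empty string
type chr struct{ c byte }  // matches one character
type cat struct{ l, r re } // concatenation
type alt struct{ l, r re } // alternation
type star struct{ r re }   // Kleene star

// nullable reports whether r matches the empty string.
func nullable(r re) bool {
	switch t := r.(type) {
	case eps:
		return true
	case star:
		return true
	case cat:
		return nullable(t.l) && nullable(t.r)
	case alt:
		return nullable(t.l) || nullable(t.r)
	}
	return false // empty and chr
}

// deriv returns the derivative of r with respect to c: a regex
// matching exactly the suffixes s such that r matches c followed by s.
func deriv(r re, c byte) re {
	switch t := r.(type) {
	case chr:
		if t.c == c {
			return eps{}
		}
		return empty{}
	case cat:
		d := cat{deriv(t.l, c), t.r}
		if nullable(t.l) {
			return alt{d, deriv(t.r, c)}
		}
		return d
	case alt:
		return alt{deriv(t.l, c), deriv(t.r, c)}
	case star:
		return cat{deriv(t.r, c), t}
	}
	return empty{} // empty and eps both derive to empty
}

// matches consumes s one character at a time, taking derivatives.
func matches(r re, s string) bool {
	for i := 0; i < len(s); i++ {
		r = deriv(r, s[i])
	}
	return nullable(r)
}

func main() {
	// (a|b)*c — any mix of a's and b's followed by one c
	pat := cat{star{alt{chr{'a'}, chr{'b'}}}, chr{'c'}}
	fmt.Println(matches(pat, "ababc")) // true
	fmt.Println(matches(pat, "abab"))  // false
}
```

The appeal of this approach is that there is no explicit automaton at all: the pattern is rewritten into a new pattern after each character, and acceptance is just a nullability check at the end.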
## Exploring Regex Mechanics Across Programming Languages
Dive into the intricate mechanics of regular expressions with the [How Regexes Work | Comprehensive Guide to Regular Expressions](https://getvm.io/tutorials/how-regexes-work) tutorial. Gain a deep understanding of regex patterns and their practical implementation across programming languages, including C.
## Building a Regex Engine from Scratch in Golang
For a more advanced challenge, check out the [How to build a regex engine from scratch](https://getvm.io/tutorials/how-to-build-a-regex-engine-from-scratch) tutorial, which guides you through the process of building a regex engine from scratch in Golang. Explore parsing, state machines, and practical examples to deepen your understanding of regular expressions.
## A Minimalist Regex Engine in JavaScript
Finally, dive into the [Build a Regex Engine in Less than 40 Lines of Code](https://getvm.io/tutorials/build-a-regex-engine-in-less-than-40-lines-of-code) tutorial, where you'll learn how to implement a rudimentary regex engine in less than 40 lines of JavaScript code. Explore the core syntax, functionality, and underlying logic of regular expressions in a concise and practical way.
Embark on this comprehensive journey and master the art of regular expressions! 🎉 Whether you're a beginner or an experienced programmer, these tutorials from GetVM.io will equip you with the knowledge and skills to harness the power of this essential programming tool.
## Enhance Your Learning Experience with GetVM Playground
Elevate your regular expression mastery by leveraging the powerful GetVM platform. GetVM is a Google Chrome browser extension that provides an online coding playground, allowing you to seamlessly explore and experiment with the concepts covered in these tutorials. 🚀
With GetVM's interactive Playground, you can dive right into the code, testing and refining your regular expression skills in real-time. No more switching between multiple tabs or applications – GetVM's integrated environment keeps everything at your fingertips, streamlining your learning journey. 💻
Experience the joy of immediate feedback and instant results as you tinker with regular expression patterns, algorithms, and engine implementations. GetVM's Playground empowers you to learn by doing, solidifying your understanding through hands-on practice. 🎉 Unlock your full potential and become a regular expression pro with the help of this invaluable tool.
---
## Want to learn more?
- 🚀 Practice the resources on [GetVM](https://getvm.io)
- 📖 Explore More [Free Resources on GetVM](https://getvm.io/explore)
Join our [Discord](https://discord.gg/XxKAAFWVNu) or tweet us [@GetVM](https://x.com/getvmio) 😄 | getvm |
1,915,126 | Case (II) - KisFlow-Golang Stream Real-Time Computing - Flow Parallel Operation | Github: https://github.com/aceld/kis-flow Document:... | 0 | 2024-07-08T03:15:34 | https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-flow-parallel-operation-364m | go | <img width="150px" src="https://github.com/aceld/kis-flow/assets/7778936/8729d750-897c-4ba3-98b4-c346188d034e" />
Github: https://github.com/aceld/kis-flow
Document: https://github.com/aceld/kis-flow/wiki
---
[Part1-OverView](https://dev.to/aceld/part-1-golang-framework-hands-on-kisflow-streaming-computing-framework-overview-8fh)
[Part2.1-Project Construction / Basic Modules](https://dev.to/aceld/part-2-golang-framework-hands-on-kisflow-streaming-computing-framework-project-construction-basic-modules-cia)
[Part2.2-Project Construction / Basic Modules](https://dev.to/aceld/part-3golang-framework-hands-on-kisflow-stream-computing-framework-project-construction-basic-modules-1epb)
[Part3-Data Stream](https://dev.to/aceld/part-4golang-framework-hands-on-kisflow-stream-computing-framework-data-stream-1mbd)
[Part4-Function Scheduling](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-function-scheduling-4p0h)
[Part5-Connector](https://dev.to/aceld/part-5golang-framework-hands-on-kisflow-stream-computing-framework-connector-hcd)
[Part6-Configuration Import and Export](https://dev.to/aceld/part-6golang-framework-hands-on-kisflow-stream-computing-framework-configuration-import-and-export-47o1)
[Part7-KisFlow Action](https://dev.to/aceld/part-7golang-framework-hands-on-kisflow-stream-computing-framework-kisflow-action-3n05)
[Part8-Cache/Params Data Caching and Data Parameters](https://dev.to/aceld/part-8golang-framework-hands-on-cacheparams-data-caching-and-data-parameters-5df5)
[Part9-Multiple Copies of Flow](https://dev.to/aceld/part-8golang-framework-hands-on-multiple-copies-of-flow-c4k)
[Part10-Prometheus Metrics Statistics](https://dev.to/aceld/part-10golang-framework-hands-on-prometheus-metrics-statistics-22f0)
[Part11-Adaptive Registration of FaaS Parameter Types Based on Reflection](https://dev.to/aceld/part-11golang-framework-hands-on-adaptive-registration-of-faas-parameter-types-based-on-reflection-15i9)
---
[Case1-Quick Start](https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-quick-start-guide-f51)
[Case2-Flow Parallel Operation](https://dev.to/aceld/case-i-kisflow-golang-stream-real-time-computing-flow-parallel-operation-364m)
[Case3-Application of KisFlow in Multi-Goroutines](https://dev.to/aceld/case-iii-kisflow-golang-stream-real-application-of-kisflow-in-multi-goroutines-4m7g)
[Case4-KisFlow in Message Queue (MQ) Applications](https://dev.to/aceld/case-iv-kisflow-golang-stream-real--4k3e)
---
## Download KisFlow Source
```bash
$go get github.com/aceld/kis-flow
```
[KisFlow Developer Documentation](https://github.com/aceld/kis-flow/wiki)
## Source Code Example
https://github.com/aceld/kis-flow-usage/tree/main/8-connector
KisFlow can combine two Flows through a Connector.
Using the combination of the two flows below, this case introduces the `Connector` interface and its usage.
## Data Flow Diagram

### Case Introduction
Assume a student has four attributes:
```bash
Student ID: stu_id
Score 1: score_1
Score 2: score_2
Score 3: score_3
```
`Define Flow1`: CalStuAvgScore-1-2 computes a student's average of Score 1 (score_1) and Score 2 (score_2), producing avg_score_1_2.
`Define Flow2`: CalStuAvgScore-3 computes the average of all three scores from Score 3 (score_3) and avg_score_1_2, the two-score average provided by Flow1.
### Flow1
Flow1 consists of 4 functions:
`V (Function: VerifyStu)` verifies that StuId is valid.
`C (Function: AvgStuScore12)` computes the average of score_1 and score_2.
`S (Function: SaveScoreAvg12)` stores avg_score_1_2 in Redis.
`E (Function: PrintStuAvgScore)` prints the average of score_1 and score_2.
### Flow2
Flow2 consists of 4 functions:
`V (Function: VerifyStu)` verifies that StuId is valid.
`L (Function: LoadScoreAvg12)` loads the student's average of score_1 and score_2 (avg_score_1_2), computed earlier by Flow1.
`C (Function: AvgStuScore3)` computes the overall average from score_3 and the stored two-score average.
`E (Function: PrintStuAvgScore)` prints the average of score_1, score_2, and score_3.
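One piece the article does not show is the Flow-level config that strings these functions together, which `file.ConfigImportYaml("conf/")` also loads from the `conf/` directory. Based on KisFlow's configuration conventions, Flow1's file might look roughly like the sketch below; treat the field names as illustrative and check the KisFlow wiki for the exact schema:

```yaml
kistype: flow
status: 1
flow_name: CalStuAvgScore12
flows:
  - fname: VerifyStu
  - fname: AvgStuScore12
  - fname: SaveScoreAvg12
  - fname: PrintStuAvgScore
```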
> conf/func/func-AvgStuScore-3.yml
```yaml
kistype: func
fname: AvgStuScore3
fmode: Calculate
source:
name: SourceStuScore
must:
- stu_id
```
> conf/func/func-LoadScoreAvg-1-2.yml
```yaml
kistype: func
fname: LoadScoreAvg12
fmode: Load
source:
name: SourceStuScore
must:
- stu_id
option:
cname: Score12Cache
```
## Basic Data Protocol
> stu_proto.go
```go
package main
type StuScore1_2 struct {
StuId int `json:"stu_id"`
Score1 int `json:"score_1"`
Score2 int `json:"score_2"`
}
type StuScoreAvg struct {
StuId int `json:"stu_id"`
AvgScore float64 `json:"avg_score"`
}
type StuScore3 struct {
StuId int `json:"stu_id"`
AvgScore12 float64 `json:"avg_score_1_2"` // score_1, score_2 avg
Score3 int `json:"score_3"`
}
```
### Connector Init
The Connector defined in this project, Score12Cache, is a link resource associated with Redis. This Connector requires an initialization method for establishing a connection when KisFlow starts.
> conn_init.go
```go
package main
import (
"context"
"fmt"
"github.com/aceld/kis-flow/kis"
"github.com/aceld/kis-flow/log"
"github.com/go-redis/redis/v8"
)
// type ConnInit func(conn Connector) error
func InitScore12Cache(connector kis.Connector) error {
fmt.Println("===> Call Connector InitScore12Cache")
// init Redis Conn Client
rdb := redis.NewClient(&redis.Options{
Addr: connector.GetConfig().AddrString, // Redis-Server address
Password: "", // password
DB: 0, // select db
})
// Ping test
pong, err := rdb.Ping(context.Background()).Result()
if err != nil {
log.Logger().ErrorF("Failed to connect to Redis: %v", err)
return err
}
fmt.Println("Connected to Redis:", pong)
// set rdb to connector
connector.SetMetaData("rdb", rdb)
return nil
}
```
Here, the successfully connected Redis instance is stored in the connector's cache variable "rdb."
```go
// set rdb to connector
connector.SetMetaData("rdb", rdb)
```
### FaaS Implementation
#### Function(V): VerifyStu
> faas_stu_verify.go
```go
package main
import (
"context"
"fmt"
"github.com/aceld/kis-flow/kis"
"github.com/aceld/kis-flow/serialize"
)
type VerifyStuIn struct {
serialize.DefaultSerialize
StuId int `json:"stu_id"`
}
func VerifyStu(ctx context.Context, flow kis.Flow, rows []*VerifyStuIn) error {
fmt.Printf("->Call Func VerifyStu\n")
for _, stu := range rows {
// Filter out invalid data
if stu.StuId < 0 || stu.StuId > 999 {
// Terminate the current Flow process, subsequent functions of the current Flow will not be executed
return flow.Next(kis.ActionAbort)
}
}
return flow.Next(kis.ActionDataReuse)
}
```
`VerifyStu()` is used to validate data. If the data does not meet the requirements, the current data flow is terminated. Finally, the data is reused and passed to the next layer through `flow.Next(kis.ActionDataReuse)`.
#### Function(C): AvgStuScore12
> faas_avg_score_1_2.go
```go
package main
import (
"context"
"fmt"
"github.com/aceld/kis-flow/kis"
"github.com/aceld/kis-flow/serialize"
)
type AvgStuScoreIn_1_2 struct {
serialize.DefaultSerialize
StuScore1_2
}
type AvgStuScoreOut_1_2 struct {
serialize.DefaultSerialize
StuScoreAvg
}
func AvgStuScore12(ctx context.Context, flow kis.Flow, rows []*AvgStuScoreIn_1_2) error {
fmt.Printf("->Call Func AvgStuScore12\n")
for _, row := range rows {
out := AvgStuScoreOut_1_2{
StuScoreAvg: StuScoreAvg{
StuId: row.StuId,
AvgScore: float64(row.Score1+row.Score2) / 2,
},
}
// Submit result data
_ = flow.CommitRow(out)
}
return flow.Next()
}
```
`AvgStuScore12()` calculates the average of `score_1` and `score_2`, producing `avg_score`.
#### Function(S): SaveScoreAvg12
> faas_save_score_avg_1_2.go
```go
package main
import (
"context"
"fmt"
"github.com/aceld/kis-flow/kis"
"github.com/aceld/kis-flow/serialize"
"github.com/go-redis/redis/v8"
"strconv"
)
type SaveStuScoreIn struct {
serialize.DefaultSerialize
StuScoreAvg
}
func BatchSetStuScores(ctx context.Context, conn kis.Connector, rows []*SaveStuScoreIn) error {
var rdb *redis.Client
// Get Redis Client
rdb = conn.GetMetaData("rdb").(*redis.Client)
// Set data to redis
pipe := rdb.Pipeline()
for _, score := range rows {
// make key
key := conn.GetConfig().Key + strconv.Itoa(score.StuId)
pipe.HMSet(context.Background(), key, map[string]interface{}{
"avg_score": score.AvgScore,
})
}
_, err := pipe.Exec(ctx)
if err != nil {
return err
}
return nil
}
func SaveScoreAvg12(ctx context.Context, flow kis.Flow, rows []*SaveStuScoreIn) error {
fmt.Printf("->Call Func SaveScoreAvg12\n")
conn, err := flow.GetConnector()
if err != nil {
fmt.Printf("SaveScoreAvg12(): GetConnector err = %s\n", err.Error())
return err
}
if err := BatchSetStuScores(ctx, conn, rows); err != nil {
fmt.Printf("SaveScoreAvg12(): BatchSetStuScores err = %s\n", err.Error())
return err
}
return flow.Next(kis.ActionDataReuse)
}
```
`SaveScoreAvg12()` stores the data in Redis through the bound Connector, using the key configured in the Connector. Finally, the source data is transparently transmitted to the next function.
#### Function(E): PrintStuAvgScore
> faas_stu_score_avg_print.go
```go
package main
import (
"context"
"fmt"
"github.com/aceld/kis-flow/kis"
"github.com/aceld/kis-flow/serialize"
)
type PrintStuAvgScoreIn struct {
serialize.DefaultSerialize
StuId int `json:"stu_id"`
AvgScore float64 `json:"avg_score"`
}
func PrintStuAvgScore(ctx context.Context, flow kis.Flow, rows []*PrintStuAvgScoreIn) error {
fmt.Printf("->Call Func PrintStuAvgScore, in Flow[%s]\n", flow.GetName())
for _, row := range rows {
fmt.Printf("stuid: [%+v], avg score: [%+v]\n", row.StuId, row.AvgScore)
}
return flow.Next()
}
```
`PrintStuAvgScore()` prints the average score of the current student.
#### Function(L): LoadScoreAvg12
> faas_load_score_avg_1_2.go
```go
package main
import (
"context"
"fmt"
"github.com/aceld/kis-flow/kis"
"github.com/aceld/kis-flow/serialize"
"github.com/go-redis/redis/v8"
"strconv"
)
type LoadStuScoreIn struct {
serialize.DefaultSerialize
StuScore3
}
type LoadStuScoreOut struct {
serialize.DefaultSerialize
StuScore3
}
func GetStuScoresByStuId(ctx context.Context, conn kis.Connector, stuId int) (float64, error) {
var rdb *redis.Client
// Get Redis Client
rdb = conn.GetMetaData("rdb").(*redis.Client)
// make key
key := conn.GetConfig().Key + strconv.Itoa(stuId)
// get data from redis
result, err := rdb.HGetAll(ctx, key).Result()
if err != nil {
return 0, err
}
// get value
avgScoreStr, ok := result["avg_score"]
if !ok {
return 0, fmt.Errorf("avg_score not found for stuId: %d", stuId)
}
// parse to float64
avgScore, err := strconv.ParseFloat(avgScoreStr, 64)
if err != nil {
return 0, err
}
return avgScore, nil
}
func LoadScoreAvg12(ctx context.Context, flow kis.Flow, rows []*LoadStuScoreIn) error {
fmt.Printf("->Call Func LoadScoreAvg12\n")
conn, err := flow.GetConnector()
if err != nil {
fmt.Printf("LoadScoreAvg12(): GetConnector err = %s\n", err.Error())
return err
}
for _, row := range rows {
stuScoreAvg1_2, err := GetStuScoresByStuId(ctx, conn, row.StuId)
if err != nil {
fmt.Printf("LoadScoreAvg12(): GetStuScoresByStuId err = %s\n", err.Error())
return err
}
out := LoadStuScoreOut{
StuScore3: StuScore3{
StuId: row.StuId,
Score3: row.Score3,
AvgScore12: stuScoreAvg1_2, // avg score of score1 and score2 (load from redis)
},
}
// commit result
_ = flow.CommitRow(out)
}
return flow.Next()
}
```
`LoadScoreAvg12()` reads the average of `score_1` and `score_2` from Redis, the resource linked by the bound Connector, using the key configured in that Connector. It then passes the upstream source data, together with the freshly loaded average, to the next layer.
#### Function(C): AvgStuScore3
> faas_stu_score_avg_3.go
```go
package main
import (
"context"
"fmt"
"github.com/aceld/kis-flow/kis"
"github.com/aceld/kis-flow/serialize"
)
type AvgStuScore3In struct {
serialize.DefaultSerialize
StuScore3
}
type AvgStuScore3Out struct {
serialize.DefaultSerialize
StuScoreAvg
}
func AvgStuScore3(ctx context.Context, flow kis.Flow, rows []*AvgStuScore3In) error {
fmt.Printf("->Call Func AvgStuScore3\n")
for _, row := range rows {
out := AvgStuScore3Out{
StuScoreAvg: StuScoreAvg{
StuId: row.StuId,
AvgScore: (float64(row.Score3) + row.AvgScore12*2) / 3,
},
}
// Submit result data
_ = flow.CommitRow(out)
}
return flow.Next()
}
```
`AvgStuScore3()` computes the overall average of the three scores by combining `score_3` with the stored average of `score_1` and `score_2` (weighted by two), producing the final `avg_score`.
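As a quick sanity check on this arithmetic, the stored two-score average has to be weighted by two before folding in `score_3`. A tiny standalone sketch (my own, mirroring the formula in `AvgStuScore3`) reproduces the numbers seen in the execution results below:

```go
package main

import "fmt"

// avg3 reconstructs the three-score average from the stored
// two-score average, as AvgStuScore3 does:
// avg3 = (score3 + avg12*2) / 3.
func avg3(score3 int, avg12 float64) float64 {
	return (float64(score3) + avg12*2) / 3
}

func main() {
	// Student 101: score_1=100, score_2=90 → avg12 = 95; score_3 = 80
	fmt.Println(avg3(80, 95)) // 90
	// Student 102: score_1=100, score_2=80 → avg12 = 90; score_3 = 70
	fmt.Println(avg3(70, 90)) // 83.33333333333333
}
```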
### Register FaaS & CaaSInit/CaaS (Register Function/Connector)
> main.go
```go
func init() {
// Register functions
kis.Pool().FaaS("VerifyStu", VerifyStu)
kis.Pool().FaaS("AvgStuScore12", AvgStuScore12)
kis.Pool().FaaS("SaveScoreAvg12", SaveScoreAvg12)
kis.Pool().FaaS("PrintStuAvgScore", PrintStuAvgScore)
kis.Pool().FaaS("LoadScoreAvg12", LoadScoreAvg12)
kis.Pool().FaaS("AvgStuScore3", AvgStuScore3)
// Register connectors
kis.Pool().CaaSInit("Score12Cache", InitScore12Cache)
}
```
### Main Process
> main.go
```go
package main
import (
"context"
"github.com/aceld/kis-flow/file"
"github.com/aceld/kis-flow/kis"
"sync"
)
func RunFlowCalStuAvgScore12(ctx context.Context, flow kis.Flow) error {
// Commit data
_ = flow.CommitRow(`{"stu_id":101, "score_1":100, "score_2":90}`)
_ = flow.CommitRow(`{"stu_id":102, "score_1":100, "score_2":80}`)
// Run the flow
if err := flow.Run(ctx); err != nil {
return err
}
return nil
}
func RunFlowCalStuAvgScore3(ctx context.Context, flow kis.Flow) error {
// Commit data
_ = flow.CommitRow(`{"stu_id":101, "score_3": 80}`)
_ = flow.CommitRow(`{"stu_id":102, "score_3": 70}`)
// Run the flow
if err := flow.Run(ctx); err != nil {
return err
}
return nil
}
func main() {
ctx := context.Background()
// Load Configuration from file
if err := file.ConfigImportYaml("conf/"); err != nil {
panic(err)
}
var wg sync.WaitGroup
wg.Add(2)
go func() {
// Run flow1 concurrently
defer wg.Done()
flow1 := kis.Pool().GetFlow("CalStuAvgScore12")
if flow1 == nil {
panic("flow1 is nil")
}
if err := RunFlowCalStuAvgScore12(ctx, flow1); err != nil {
panic(err)
}
}()
go func() {
// Run flow2 concurrently
defer wg.Done()
flow2 := kis.Pool().GetFlow("CalStuAvgScore3")
if flow2 == nil {
panic("flow2 is nil")
}
if err := RunFlowCalStuAvgScore3(ctx, flow2); err != nil {
panic(err)
}
}()
wg.Wait()
return
}
```
Two Goroutines are launched concurrently to execute `Flow1` and `Flow2`, calculating the final average scores for student `101` and student `102`.
### Execution Results
```bash
===> Call Connector InitScore12Cache
Connected to Redis: PONG
Add FlowRouter FlowName=CalStuAvgScore12
===> Call Connector InitScore12Cache
Connected to Redis: PONG
Add FlowRouter FlowName=CalStuAvgScore3
->Call Func VerifyStu
->Call Func VerifyStu
->Call Func AvgStuScore12
->Call Func LoadScoreAvg12
->Call Func SaveScoreAvg12
->Call Func PrintStuAvgScore, in Flow[CalStuAvgScore12]
stuid: [101], avg score: [95]
stuid: [102], avg score: [90]
->Call Func AvgStuScore3
->Call Func PrintStuAvgScore, in Flow[CalStuAvgScore3]
stuid: [101], avg score: [90]
stuid: [102], avg score: [83.33333333333333]
```
In `Flow[CalStuAvgScore3]`, we observe the final computed average scores for scores 1, 2, and 3.
---
Author: Aceld
GitHub: https://github.com/aceld
KisFlow Open Source Project Address: https://github.com/aceld/kis-flow
Document: https://github.com/aceld/kis-flow/wiki
---
| aceld |
1,915,127 | Beautiful Website Design in Hà Giang | Benefits of SEO-standard website design in Hà Giang A bridge between the company and customers: A... | 0 | 2024-07-08T03:16:09 | https://dev.to/terus_technique/thiet-ke-website-tai-ha-giang-dep-mat-34po | website, digitalmarketing, seo, terus |

Benefits of SEO-standard website design in Hà Giang
A bridge between the company and its customers: A professional website is an effective bridge that helps customers easily find, reach, and interact with your business.
A free, sustainable advertising channel: Your website becomes an effective advertising channel that promotes the business's brand, products, and services sustainably and at no cost.
No limits on sales time or space: With a website, a business can sell 24/7, expanding its reach beyond Hà Giang to the whole country and even the world.
Competing with rivals: A professional website helps your business stand out and attract customers more effectively than its competitors.
Effective communication and sales: A website lets the business communicate and interact with customers professionally and build trust, increasing conversion rates and sales.
Website design in Hà Giang by Terus: what do you get?
A beautiful, exclusive interface for your business: With a creative design team, Terus gives your business a unique website interface that attracts customers at first glance.
SEO-standard, mobile-standard, responsive: Your website is designed to SEO standards so it is easy to find, displays well on mobile devices, and delivers an optimal user experience.
Full-featured design: Terus designs your website with all the necessary features, from an introduction section to products/services, news, and contact pages, enhancing the user experience.
An easy-to-use admin system: You receive a friendly website administration system that makes it easy to update content and manage the site effectively.
Terus is proud to be a [professional, reputable website design provider in Hà Giang](https://terusvn.com/thiet-ke-website-tai-hcm/) with many years of experience in the field. We have successfully designed hundreds of websites for businesses in Hà Giang and across the country, meeting every customer need.
With a rigorous process and years of experience, Terus is committed to bringing businesses in Hà Giang [professional, SEO-standard, customer-insight-driven website design services](https://terusvn.com/thiet-ke-website-tai-hcm/) that help drive business growth.
Learn more about [Beautiful Website Design in Hà Giang](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-ha-giang/)
Services at Terus:
Digital Marketing:
· [Facebook Ads Service](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
· [Google Ads Service](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
· [Comprehensive SEO Service](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/)
Website design:
· [Insight-Driven Website Design Service](https://terusvn.com/thiet-ke-website/dich-vu-thiet-ke-website-chuan-insight-chuyen-nghiep-uy-tin-tai-terus/)
· [Website Design Service](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_technique |
1,915,161 | Leetcode Day 7: Remove Duplicated from Sorted Array Explained | The problem is as follows: Given an integer array nums sorted in non-decreasing order, remove the... | 0 | 2024-07-08T03:25:02 | https://dev.to/simona-cancian/leetcode-day-7-remove-duplicated-from-sorted-array-explained-3oe | leetcode, python, beginners, codenewbie | **The problem is as follows:**
Given an integer array `nums` sorted in non-decreasing order, remove the duplicates in-place such that each unique element appears only once. The relative order of the elements should be kept the same. Then return _the number of unique elements in `nums`_.
Consider the number of unique elements of `nums` to be `k`, to get accepted, you need to do the following things:
- Change the array `nums` such that the first `k` elements of `nums` contain the unique elements in the order they were present in `nums` initially. The remaining elements of `nums` are not important, nor is the size of `nums`.
- Return `k`.
Custom Judge:
The judge will test your solution with the following code:
```
int[] nums = [...]; // Input array
int[] expectedNums = [...]; // The expected answer with correct length
int k = removeDuplicates(nums); // Calls your implementation
assert k == expectedNums.length;
for (int i = 0; i < k; i++) {
    assert nums[i] == expectedNums[i];
}
```
If all assertions pass, then your solution will be accepted.
Example 1:
```
Input: nums = [1,1,2]
Output: 2, nums = [1,2,_]
Explanation: Your function should return k = 2, with the first two elements of nums being 1 and 2 respectively.
It does not matter what you leave beyond the returned k (hence they are underscores).
```
Example 2:
```
Input: nums = [0,0,1,1,1,2,2,3,3,4]
Output: 5, nums = [0,1,2,3,4,_,_,_,_,_]
Explanation: Your function should return k = 5, with the first five elements of nums being 0, 1, 2, 3, and 4 respectively.
It does not matter what you leave beyond the returned k (hence they are underscores).
```
**Here is how I solved it:**
- First, initialize a pointer `k` and set it to 0. This pointer will keep track of the position of the last unique element in the array.
```
class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        # Initialize a pointer 'k' and set it to 0
        k = 0
```
- Loop through `nums` array from the second element (index 1). The first element is always unique, so we can skip it for comparison purposes.
- Check for duplicates: test whether the current element `nums[i]` is different from the last unique element `nums[k]`.
- If it is, we have found a new unique element: increment `k` to the next write position and copy the current element `nums[i]` into `nums[k]`.
```
        for i in range(1, len(nums)):
            if nums[i] != nums[k]:
                k += 1
                nums[k] = nums[i]
```
- After the loop, `k` will be the index of the last unique element, so the total number of unique elements is `k + 1`. Return `k + 1` because `k` starts at 0.
```
        return k + 1
```
**Here is the completed solution:**
```
class Solution:
    def removeDuplicates(self, nums: List[int]) -> int:
        k = 0
        for i in range(1, len(nums)):
            if nums[i] != nums[k]:
                k += 1
                nums[k] = nums[i]
        return k + 1
```
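To sanity-check the algorithm outside the LeetCode judge, here is the same two-pointer logic as a standalone function (a plain function rather than the `Solution` class), run against Example 2:

```python
def remove_duplicates(nums):
    # Two pointers: k marks the last unique element written so far,
    # i scans ahead looking for the next new value.
    k = 0
    for i in range(1, len(nums)):
        if nums[i] != nums[k]:
            k += 1
            nums[k] = nums[i]
    return k + 1

nums = [0, 0, 1, 1, 1, 2, 2, 3, 3, 4]
k = remove_duplicates(nums)
assert k == 5
assert nums[:k] == [0, 1, 2, 3, 4]
```

The slice `nums[:k]` holds the unique prefix; everything past index `k - 1` is leftover data the judge ignores.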
| simona-cancian |
1,915,162 | How to Install MySQL on Linux/UNIX and Windows | MySQL is one of the most popular open-source relational database management systems (RDBMS). It is developed, distributed, and supported by Oracle Corporation. MySQL is used by many database-driven web applications, including Drupal, Joomla, phpBB, and WordPress. It is also used by many popular websites, including Facebook, Twitter, Flickr, and YouTube. | 27,755 | 2024-07-08T03:25:24 | https://dev.to/labex/how-to-install-mysql-on-linuxunix-and-windows-nfm | mysql, coding, programming, tutorial |
## Introduction

This article covers the following tech skills:

MySQL is one of the most popular open-source relational database management systems (RDBMS). It is developed, distributed, and supported by Oracle Corporation. MySQL is used by many database-driven web applications, including Drupal, Joomla, phpBB, and WordPress. It is also used by many popular websites, including Facebook, Twitter, Flickr, and YouTube.
In this lab, we will learn how to install MySQL on Linux/UNIX and Windows. We will also learn how to start and stop the MySQL server, and how to use the MySQL client to connect to the server.
## Install MySQL on Linux/UNIX
MySQL is mostly installed as part of LAMP (Linux, Apache, MySQL, PHP/Python/Perl) stack. It uses SQL (Structured Query Language) for data management.
Installing MySQL on Ubuntu is very simple. **We have already installed MySQL for you.** Below, we list everything you would need to do to install MySQL server on Ubuntu Linux yourself. The exact steps may vary slightly depending on the version. **You can simply skip this chapter and start with the next part; it doesn't affect anything.** After the installation, we will be able to start MySQL from the Linux terminal.
It follows the steps below:
- Update the package index.
- Install MySQL-Server.
- Execute the included security script.
At this point, your Ubuntu system should be up and running. Launch the terminal from the launcher. Run the following command on the terminal to update the package index:
```shell
sudo apt-get update
```
Next, run the following command to install the `mysql-server`:
```shell
sudo apt-get install mysql-server -y
```
Wait for the installation to complete. During the installation, you may be prompted to set a password for the MySQL root user; if so, enter the password and confirm it. Once the installation is complete, the MySQL service usually starts automatically. You can start it (or confirm it starts cleanly) by running the following command:
```shell
sudo service mysql start
```
```text
* Starting MySQL database server [ OK ]
```
Then, run the following command to initiate the configuration:
```bash
mysql_secure_installation
```
You will then be taken through a sequence of steps, for each of which you are expected to type "Y" for "yes" or any other key for "no". If you need additional security measures beyond password protection, type "Y"; otherwise, type any other key and then hit the enter key to proceed. Do this for every step; you are done when the terminal prompt is returned to you.
Congratulations, you have just set MySQL on your Ubuntu Linux!
## Test MySQL
Now that you have installed MySQL on your system, it should be up and running. To confirm this, run the following command on the terminal:
```shell
sudo service mysql status
```
The output should be similar to the following:
```text
* /usr/bin/mysqladmin Ver 8.0.34-0ubuntu0.22.04.1 for Linux on x86_64 ((Ubuntu))
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Server version 5.5.5-10.6.12-MariaDB-0ubuntu0.22.04.1
Protocol version 10
Connection Localhost via UNIX socket
UNIX socket /var/run/mysqld/mysqld.sock
Uptime: 5 min 15 sec
Threads: 2 Questions: 74 Slow queries: 0 Opens: 33 Open tables: 26 Queries per second avg: 0.234
```
The "Threads" line is an indication that MySQL has been installed and it is running.
After a successful installation of MySQL, the initialization of the base tables will have been done, and the server will be up and running.
The mysqladmin binary can help us know the version of MySQL that has been installed on our system. The version can be checked by running the following command:
```shell
mysqladmin --version
```
It should give the following result:
```text
mysqladmin Ver 8.0.34-0ubuntu0.22.04.1 for Linux on x86_64 ((Ubuntu))
```
As shown in the above output, I have installed MySQL 8.0.34. If you don't get such output, then a problem must have occurred during the installation process. To solve the issue, repeat all the steps above in order to install MySQL on your system.
## Access MySQL by Shell
Now that MySQL is ready on your system, you will need to access it and perform various tasks including creating databases, creating tables, inserting data etc. You can access the MySQL shell from the Linux terminal by running the following command:
```shell
mysql -uroot -p
```
The "-u" option is for the username, which in our case is the root, while the "-p" option is for the password.
You will be prompted to enter the password for the user. Note that we are logging in as the root user, so provide the root password that you created during the installation of MySQL. **If you did not set a password, then just press the enter key.**
You will then be presented with the MySQL prompt from which you can execute your SQL commands:
```text
Welcome to the MySQL monitor. Commands end with ; or \g.
Your MySQL connection id is 43
Server version: 5.5.5-10.6.12-MariaDB-0ubuntu0.22.04.1 Ubuntu 22.04
Copyright (c) 2000, 2023, Oracle and/or its affiliates.
Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql>
```
Note that in the above example, we have logged in as the root user. However, you can use the same approach to log in as any other user, provided you have an active account in the DBMS.
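For illustration, here are a few standard SQL statements you could run at the `mysql>` prompt once logged in (the `demo_db` database and `users` table names are made up for this example):

```sql
-- Create a database and switch to it
CREATE DATABASE demo_db;
USE demo_db;

-- Create a simple table, insert a row, and read it back
CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(50) NOT NULL
);
INSERT INTO users (name) VALUES ('labby');
SELECT * FROM users;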
After a successful login, a user can perform all activities for which they have been granted privileges. The root user is the top-level user with all the privileges a user can have on the DBMS. To close the connection to the MySQL database, run the "exit" command at the MySQL prompt as follows:
```text
mysql> exit
```
This will close the connection to the database, and you will see the prompt for your Ubuntu system.
Whenever you need to clear the terminal, just press the "Ctrl + L" keys and everything will be cleared from the terminal.
## Summary
Congratulations on starting the MySQL journey with us! There are many people just like you. Stop by the discussion forum to see what others have been talking about. In the next lab, we will walk you through MySQL database management. Take some time to digest today's lesson and sit tight, because we're moving to the next milestone soon.
---
## Want to learn more?
- 🚀 Practice [Installation of MySQL](https://labex.io/tutorials/linux-installation-of-mysql-178583)
- 🌳 Learn the latest [MySQL Skill Trees](https://labex.io/skilltrees/mysql)
- 📖 Read More [MySQL Tutorials](https://labex.io/tutorials/category/mysql)
Join our [Discord](https://discord.gg/J6k3u69nU6) or tweet us [@WeAreLabEx](https://twitter.com/WeAreLabEx) 😄 | labby |
1,915,163 | OAuth in SPA / Mobile applications (PKCE extension) | In many OAuth / OpenID Connect implementations we come across confidential clients... | 0 | 2024-07-08T03:25:46 | https://dev.to/erick_tmr/oauth-em-aplicacoes-spa-mobile-pkce-extension-iao | security, webdev, mobile, oauth | In many OAuth / OpenID Connect implementations we come across confidential clients (registered clients that hold a `client_id` and `client_secret` pair); however, in client-side applications such as SPAs and mobile apps, it is impossible to guarantee the confidentiality of the `client_secret`. A secure way to obtain the access token through one of the OAuth flows therefore becomes necessary.
To solve this public-client problem, [RFC 7636, Proof Key for Code Exchange by OAuth Public Clients](https://datatracker.ietf.org/doc/html/rfc7636) comes into play, adding an extension to the authorization code flow that makes the communication more secure. It is this RFC that I want to dig into a bit in this post.
## What problem does PKCE try to mitigate?
First of all: the acronym PKCE is pronounced "pixy", haha.
PKCE mitigates a well-known man-in-the-middle attack, used mainly against mobile applications. The diagram below illustrates this kind of attack.

Briefly:
1. The client application starts the `authorization code` flow using a browser
2. The browser sends the request to the authorization server's /authorize endpoint
3. The browser receives back the server's response containing the `authorization code` (code)
4. A malicious app installed on the client device intercepts the response that would be forwarded from the browser to the client application (usually via a custom URI scheme)
5. The malicious app, in possession of the `authorization code`, completes the flow by exchanging the code for the access token, gaining access to the protected resources the user authorized.
This way, a malicious app that has infected the client's device can obtain an access token by intercepting the code in the response of the request to the /authorize endpoint.
An important point to highlight is that, given that public clients cannot store their credentials securely, the authorization code flow allows them to obtain the access token with nothing more than possession of the code and the client application's client_id. This kind of attack would be much less effective if it were possible to authenticate the client application with its client_secret.
### A note on custom URI schemes
For those not from the mobile or front-end world, a custom URI scheme is a way to create a custom protocol for your application, just like http:// or https://, something like myapp://.
That way, when a user clicks or is redirected to a URL that uses this protocol, your application is launched with the parameters attached to the URL.
To learn more, I recommend this excellent short article: [Launching Applications from the Web: The Power of Custom URL Schemes](https://alirezahamid.medium.com/launching-applications-from-the-web-the-power-of-custom-url-schemes-4d9fa3e6cdbe).
## How does PKCE mitigate this kind of attack?
PKCE mitigates this kind of attack by introducing some additional parameters into the requests involved in the authorization code flow.
The diagram below illustrates this new dynamic:

Explaining the new flow:
1. The client application creates a `code_verifier`, a random string with high cryptographic entropy
2. The client application derives the `code_challenge` from the code_verifier by applying an algorithm such as SHA256, this algorithm being called the `code_challenge_method`
3. The client application sends the `code_challenge` and the `code_challenge_method`, along with the other parameters, in the request to /authorize
4. The authorization server stores the code_challenge and the code_challenge_method, associating them with the authorization code issued in the /authorize response
5. In the request to /token, the client application sends, along with the code, the `code_verifier` generated in step 1, plus the other required parameters
6. The authorization server applies the transformations to the code_verifier based on the code_challenge and code_challenge_method associated with the received authorization code
7. If the verification succeeds, the authorization server returns the access token to the client application
So, with the introduction of these new parameters, even if a malicious app manages to intercept the authorization code, it will not be able to exchange the code for an access token at the /token endpoint, because it does not know the `code_verifier` and the `code_challenge_method` used in the communication with the authorization server.
## Security considerations
The PKCE extension relies on the assumption that the code verifier can be kept secret from the attacker (the malicious app) and that it is practically impossible for the attacker to guess or derive it.
For the code verifier to reach a satisfactory level of cryptographic randomness, using at least 256 bits of entropy is suggested.
Personally, I suggest using trusted open-source libraries to generate the code_challenge and verifier, since these are well-known standards. Drawing on the experience and power of open source is always welcome.
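As a reference point, here is a minimal Python sketch of what such a library does under the hood, following RFC 7636: a 32-byte (256-bit) random `code_verifier` encoded as base64url, and a `code_challenge` derived from it with the S256 method (SHA-256, then base64url without padding):

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    # 32 random bytes = 256 bits of entropy, base64url-encoded without padding
    code_verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode("ascii")
    # S256 method: code_challenge = BASE64URL(SHA256(ASCII(code_verifier)))
    digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
    code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
    return code_verifier, code_challenge

verifier, challenge = make_pkce_pair()
print(len(verifier), len(challenge))  # both are 43 characters
```

The authorization server repeats the same SHA-256 + base64url step over the verifier it receives at /token and compares the result with the stored challenge.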
## Wrapping up
PKCE solves the problem of public clients, such as SPAs and mobile apps, obtaining the access token securely, letting them enjoy the benefits of OAuth without compromising security, so always remember to use this extension when implementing public OAuth clients.
To learn more about OAuth and OpenID Connect, here is my collection of posts on the topic. | erick_tmr |
1,915,164 | How We Hired a Newcomer for the Designer Position | Our company, ABC Design, has always been famous for finding the most talented and creative people.... | 0 | 2024-07-08T03:28:53 | https://dev.to/abcsultan/kak-my-prinimali-novichka-na-dolzhnost-dizainiera-3l4k | Our company, ABC Design, has always been famous for finding the most talented and creative people. If you would like to check out our services: https://abc-design.uz/. But one day we decided to hire a new designer. And not just any designer, but a true genius! We held several interviews, reviewed tons of portfolios, and finally found the one: Ivan.
Ivan's first day began with him showing up at the office in a superhero costume. Yes, a genuine Spider-Man suit! We were, of course, a little surprised, but figured it might be part of his creative approach.
Ivan's first task was simple: draw a logo for a new project. He sat down at the computer, but instead of opening Photoshop he pulled out a huge roll of paper and started drawing on the office floor. All his colleagues watched with interest, and some even started placing bets on what would come of it.
A couple of hours later Ivan finished his work and proudly presented the result to us. It was a huge graffiti-style logo that took up the entire room! Asked why he hadn't used the computer, Ivan replied: "A graphics editor? Well, I've got markers!"
We all laughed and realized we had a true creative in front of us. From that moment Ivan became our legend, and the client liked his graffiti logo so much that he asked to have it painted on the wall of his office.
Since then our company has had a tradition: on their first day, every new designer must bring something unusual and creative. And all thanks to Ivan, our Spider-Man with markers!
| abcsultan | |
1,915,165 | Open-Source Website Directory System AigoTools, Deploy Your Website Directory Site with One Click! | I have open-sourced a website directory system on GitHub that can automatically generate website... | 0 | 2024-07-08T03:37:29 | https://dev.to/someu/open-source-website-directory-system-aigotools-deploy-your-website-directory-site-with-one-click-235l | opensource, nextjs, nestjs | I have open-sourced a website directory system on GitHub that can automatically generate website information — AigoTools. By simply inputting the address of the websites to be included, the system can automatically take screenshots of the websites, crawl the website information, and process this information through an LLM. With this system, you can easily deploy a navigation site with 10,000+ URLs.
**GitHub:** [https://github.com/someu/aigotools](https://github.com/someu/aigotools)
{% embed https://youtu.be/P2BBFj5vxV0?feature=shared %}
## Core Features
1. Site Management
2. Automatic Site Information Collection
3. User Management (Clerk)
4. Internationalization
5. Dark/Light Theme Switching
6. SEO Optimization
7. Multiple Image Storage Solutions
8. Open Source UI Design Drafts
## Ideas
1. The project is developed based on Next.js and NestJS. The navigation site main body and the crawling service are separated. If you don't need the crawling service, you can directly deploy the main body of the navigation site on Vercel, making it very convenient and fast.
2. Website information processing based on large models. This project uses Jina to read website information and OpenAI to summarize and automatically categorize website information. There are prompts for information summarization and categorization in the project, using the GPT-4 model to summarize website information.
3. The crawling service is based on Bull.js for queue management, easily handling crawling tasks for tens of thousands of navigation sites.
4. The project's UI is simple, and the author has also open-sourced the design drafts. We can make adjustments and develop our own site based on these design drafts.
## Project Links
**GitHub**: [https://github.com/someu/aigotools](https://github.com/someu/aigotools)
**Demo Site**: [https://www.aigotools.com/](https://www.aigotools.com/)
| someu |
1,915,166 | Dynamic Typing vs. Static Typing: Which is Better? | Introduction In programming languages, typing systems dictate how variables are defined and used. Two... | 0 | 2024-07-08T03:32:41 | https://dev.to/test_automation/dynamic-typing-vs-static-typing-which-is-better-553a | **Introduction**
In programming languages, typing systems dictate how variables are defined and used. Two prominent typing systems are Dynamic Typing and Static Typing. Each approach has distinct advantages and considerations, depending on the requirements of the project and the preferences of the development team. In this blog post, we'll compare Dynamic Typing and Static Typing to understand their strengths and when each might be preferred.
**Dynamic Typing**
Dynamic Typing allows variables to change types as the program runs. Languages like Python and JavaScript employ dynamic typing, where variables are not bound to a specific data type at compile-time but are resolved during runtime. This flexibility simplifies rapid prototyping and accommodates dynamic environments where data types may vary. Dynamic Typing can lead to shorter development cycles and quicker iteration times, as developers can focus less on type declarations and more on functional implementation.
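A minimal Python sketch of what this means in practice: the type belongs to the value, not the variable, so the same name can be rebound freely at runtime:

```python
x = 42
assert isinstance(x, int)

x = "hello"  # no error: the name x is simply rebound to a str value
assert isinstance(x, str)

def describe(value):
    # The same function body works for any type that supports len()
    return f"{type(value).__name__} of length {len(value)}"

print(describe("abc"))      # str of length 3
print(describe([1, 2, 3]))  # list of length 3
```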
**Static Typing**
Static Typing requires variables to be explicitly declared with their data types at compile-time. Languages like Java, C++, and TypeScript utilize static typing, enforcing type safety and catching errors early in the development process. Static Typing promotes robustness and reliability by reducing runtime errors related to type mismatches. Additionally, IDEs and compilers can provide more comprehensive code analysis, autocomplete suggestions, and refactoring tools, enhancing developer productivity and code maintainability.
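Python can also illustrate the static side of the trade-off: with type hints, an external checker such as mypy flags type mismatches before the program runs, while plain CPython ignores the annotations at runtime. A small sketch:

```python
def add(a: int, b: int) -> int:
    return a + b

assert add(1, 2) == 3

# A static checker (e.g. mypy) would reject the call below at analysis time
# because the arguments are strings, not ints. CPython itself does not
# enforce the hints, so at runtime it "works" by concatenating:
assert add("1", "2") == "12"
```

This is exactly the class of error that a statically typed language catches at compile time rather than in production.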
**Conclusion**
The choice between Dynamic Typing and Static Typing depends on factors such as project complexity, team expertise, and performance requirements. Dynamic Typing offers flexibility and agility in dynamic environments, while Static Typing provides early error detection and improved code safety. Understanding the trade-offs between these typing systems enables developers to make informed decisions based on the specific needs and goals of their projects. | test_automation | |
1,915,167 | 3D Venn Diagram Chess Game | Check out this Pen I made! | 0 | 2024-07-08T03:34:29 | https://dev.to/dan52242644dan/3d-venn-diagram-chess-game-pea | codepen, javascript, css, html | Check out this Pen I made!
{% codepen https://codepen.io/Dancodepen-io/pen/mdZdWLZ %} | dan52242644dan |
1,915,169 | Interface-Optimized Website Design in Hanoi | Hanoi is regarded as a fast-growing city that attracts many sources of investment. To get ahead... | 0 | 2024-07-08T03:41:28 | https://dev.to/terus_technique/thiet-ke-website-tai-ha-noi-toi-uu-giao-dien-27k4 | website, digitalmarketing, seo, terus |

Hanoi is regarded as a fast-growing city that attracts many sources of investment. To get ahead and create business value for companies large and small, you need a website of your own.
The major benefits that a website design service in Hanoi brings are:
Establishing a position and foothold in the Hanoi website market
Easily building business and online marketing strategies
Attracting and finding potential customers
Why choose Terus for professional website design in Hanoi:
First, Terus uses the latest, most modern technology to build websites for its customers. Features and tools are fully integrated to support effective website optimization and management. Multi-layer security is applied to keep customers' websites safe and secure.
In addition, Terus offers a variety of website design packages in Hanoi, so customers can choose what fits their needs and budget. Terus's team of experts always puts quality and customer satisfaction first, committed to delivering a complete, polished service.
Terus also provides after-sales service and lifetime customer support, ensuring the website always runs optimally and meets the customer's needs throughout its use.
With outstanding strengths in quality, technology, after-sales service, and a rigorous design process, Terus is proud to be one of the most reputable and professional website design providers in Hanoi today.
Choosing Terus as your website design partner in Hanoi helps your business save costs, improve business performance, and make a good impression on customers. With an experienced team of experts, a professional workflow, and dedicated after-sales service, Terus is committed to delivering a complete website design solution in Hanoi that meets every customer requirement.
Learn more about [Interface-Optimized Website Design in Hanoi](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-ha-noi/)
Services at Terus:
Digital Marketing:
· [Facebook Ads Service](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
· [Google Ads Service](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
· [Comprehensive SEO Service](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/)
Website design:
· [Insight-driven Website Design Service](https://terusvn.com/thiet-ke-website/dich-vu-thiet-ke-website-chuan-insight-chuyen-nghiep-uy-tin-tai-terus/)
· [Website Design Service](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_technique |
1,915,170 | Elevate Loyalty: Minting Polygon-Based Rewards with Wix Backend Events | This is a submission for the Wix Studio Challenge . What I Built I've developed an... | 0 | 2024-07-08T03:41:32 | https://dev.to/ubinix_warun/elevate-loyalty-minting-polygon-based-rewards-with-wix-backend-events-k2l | devchallenge, wixstudiochallenge, webdev, javascript | *This is a submission for the [Wix Studio Challenge ](https://dev.to/challenges/wix).*
## What I Built
I've developed an innovative loyalty program platform using Wix Studio and Velo (Service Plugins). This platform incentivizes user engagement and fosters brand loyalty through:
- Polygon-Based Rewards: Users earn loyalty points in the form of Polygon tokens, offering them real value and the ability to participate in the wider Web3 ecosystem.
- Event-Driven Minting: Points are minted automatically when users complete actions like making purchases, submitting reviews, or referring friends.
- Gamified Experience: The platform encourages participation with a tiered rewards system and exciting redemption options.
## Demo
https://ubinix5warun.wixstudio.io/wix-x-devto-airdrop
## Development Journey
Purpose of onCheckoutCompleted()
This event is your golden ticket to trigger actions right after a customer successfully completes a purchase on your Wix store. It's a crucial point to:
- Award Loyalty Points: Calculate and mint loyalty tokens on the Polygon blockchain based on the purchase amount or specific items bought.
- Update User Balances: Increase the customer's loyalty point balance in your Wix database.
- Trigger Personalized Messages: Send a thank-you email or notification, potentially including the newly earned loyalty point amount.
- Analyze Purchase Data: Gather data for insights into purchasing trends, which can help refine your loyalty program.
Code Example (Velo - Backend):
```
import wixData from 'wix-data';
import { mintLoyaltyTokens } from 'backend/polygon-api'; // Your custom Polygon interaction module

export function wixEcom_onCheckoutCompleted(event) {
  const { checkout } = event;
  const customerId = checkout.buyerInfo.memberId;
  const totalAmount = checkout.totals.total;

  // 1. Calculate Loyalty Points (Example logic: 1 point per $10 spent)
  const pointsEarned = Math.floor(totalAmount / 10);

  // 2. Mint Loyalty Tokens on Polygon (Assuming you have this function)
  mintLoyaltyTokens(customerId, pointsEarned);

  // 3. Update User's Loyalty Points in Database
  wixData.query("Members")
    .eq("_id", customerId)
    .find()
    .then((results) => {
      const member = results.items[0];
      member.loyaltyPoints += pointsEarned;
      wixData.update("Members", member);
    });

  // 4. (Optional) Trigger Personalized Email/Notification
  // ... (using Wix Automations or a 3rd-party service)
}
```
How to Set Up
- Backend Function: Create the wixEcom_onCheckoutCompleted function in a backend module (e.g., backend/events.js).
- Event Registration: Register this function as an event handler in your site's backend (Wix dashboard -> Velo -> Backend).
- Polygon Interaction: Develop the mintLoyaltyTokens function (or equivalent) to interact with your Polygon smart contract and mint the loyalty tokens.
- Database: Ensure you have a "Members" collection in your Wix database to store user loyalty points. | ubinix_warun |
1,915,171 | Round Two: Enhancing the Ollama Cluster | Re-cap Just over three weeks ago I wrote a post titled: Setting Up an Ollama + Open-WebUI... | 0 | 2024-07-08T03:43:01 | https://www.saltyoldgeek.com/posts/ollama-cluster-part-ii/ | ollama, openwebui, hardware, troubleshooting | ## Re-cap
Just over three weeks ago I wrote a post titled [Setting Up an Ollama + Open-WebUI Cluster](https://www.saltyoldgeek.com/posts/ollama-cluster-part-i/?utm_source=internal), where I went over my first experiment in creating an entry-level node for what was to become an Ollama cluster. In short, I could see the card, but performance was negligible. Building on what I had previously accomplished, it was time to bump things up.
## Round Two Setup
Using the same Nvidia Quadro K620 and USB-C power supply, the only items left were the adapter and the host machine. This time it was a Lenovo M700 Tiny and the [ADT-Link M.2 NGFF NVMe Key M Extender Cable](https://www.amazon.com/dp/B07YDH8KW9). Originally this was going to go into the Wi-Fi card slot on the motherboard; what I didn't think to check was that while the NVMe drive was M-keyed, the Wi-Fi slot was A+E-keyed. OK, time to try the NVMe slot instead and temporarily move the drive to an external NVMe-to-USB adapter. The drive booted with no problem; however, the NVIDIA card was not recognized at all. It did spin its fan as expected when powered on, but no dice in `lspci`.
## Adapt to M.2 A+E Key?
Sadly, after much research, it turned out that any adapter for the A+E key slot would only support PCIe x1, which is not worth the effort. There is a possible solution that might still work for this task, but that's another post. Keep watching for updates, and until next time, fair winds and following seas. | blacknight318
1,915,173 | Top 6 video downloader: Use yt-dlp for pornhub video downloader | In this article, we will explore some top-rated Pornhub video downloader options for Windows, Mac,... | 0 | 2024-07-08T03:44:23 | https://dev.to/cris_dejohn_ec017df54c2b4/top-6-video-downloader-use-yt-dlp-for-pornhub-video-downloader-cl | python, video, downloader | In this article, we will explore some top-rated [Pornhub video downloader](https://go2keep.com/) options for Windows, Mac, Android and direct browser use. We'll look at their key features, supported formats and ideal scenarios for each. Guidelines around legal personal downloading will also be covered. Finally, various practical benefits of saving videos locally will be discussed, from educational purposes to data conservation.
Whether you need a simple solution to grab the occasional clip or a full-fledged application to orchestrate bulk library migrations, there are downloader alternatives suitable for every Pornhub enthusiast. By understanding your personal needs and the legal landscape, you can confidently unlock the potential of offline mobile access to the extensive treasure trove of educational, documentary and entertainment videos hosted on Pornhub.
## Reliable Pornhub Video Downloaders for Windows, Mac, Android and Online
Whether you need a desktop application or web-based software, here are six tried and tested options for downloading Pornhub videos:
## 1. yt-dlp
Usability: 5/5
Size: 101MB
Rating: 4.6/5
Available formats: MP4, MP3, WEBM
Price: Free (open source)
yt-dlp stands out as one of the most full-featured Pornhub downloaders. An actively maintained, open-source fork of youtube-dl, it simplifies batch downloading from URL lists and makes repetitive saves easy to script for hands-free operation. With flexible format selection, users can retrieve videos in their preferred codecs. Backed by a large community, yt-dlp is consistently updated to stay compatible with the latest websites. Its simple yet powerful command-line interface allows for smooth bulk downloading, ideal for building large content libraries.
## 2. Go2Keep
Usability: 4/5
Size: Web-based
Rating: 4.3/5
Available formats: MP4, WebM, MP3
Price: Free
[Go2Keep](https://go2keep.com/) serves as a quick and lightweight option without software installation. Simply copy-paste the Pornhub link URL and the downloader automatically detects available formats for the video. Downloads begin instantly, and users can resume incomplete ones seamlessly. The website interface provides a clean and hassle-free experience for saving individual videos or entire playlists with just a few clicks.
## 3. Y2mate Pornhub Downloader
Usability: 5/5
Size: Web-based
Rating: 4.7/5
Formats: MP4, MP3, WebM
Price: Free
Featuring an intuitive design and instant downloads, Y2mate excels at fulfilling rapid one-time saves from Pornhub. Users waste no time browsing or configuring complicated settings. Simply enter the video URL and the direct download links populate for the supported high quality formats. With no software to install and a seamless browser-based workflow, Y2mate provides unbeatable convenience for on-the-go video snatching.
## 4. Savido
Usability: 4/5
Size: Web-based
Rating: 4.4/5
Formats: MP3, MP4, FLV
Price: Free
Savido makes batch downloading playlists from Pornhub a breeze. Copy all the video URLs you want, select the format, and with one click the downloader takes care of the rest. Users can continue other tasks while Savido works seamlessly in the background. It also allows converting Pornhub videos to MP3 audio for listening on the go.
## 5. ByClick Pornhub Video Downloader
Usability: 4/5
Size: 39MB
Rating: 4.5/5
Formats: MP4, WEBM, MP3
Price: Free
ByClick provides an intuitive desktop solution for Windows. Its built-in browser lets users easily copy URLs from Pornhub and paste them into the downloader. Videos and playlists download automatically at optimized quality levels up to crisp 4K. Users stay in full control with options for custom formats and playback on any device.
## 6. Allavsoft
Usability: 4/5
Size: 27MB
Rating: 4.4/5
Formats: MP4, MP3, WEBM
Price: Free (with optional $15 Pro version)
As a simple one-click downloader, Allavsoft proves efficient at grabbing videos from Pornhub. Powerful options include the ability to drag and drop URLs for batch downloads. Allavsoft provides reliable software updates and helpful customer assistance. The clean interface remains uncluttered for ease of discovering new content.
In conclusion, these reliable Pornhub downloaders vary in features but effectively serve different user needs through desktop, web and mobile-based solutions.
## Downloading Pornhub Videos Legally
In general, it is fair use to download Pornhub videos for private personal viewing through downloader tools as long as it is not done excessively or for commercial redistribution purposes. However, the legality can still be blurry in some cases depending on factors like the video type and owner permission.
It is safest and most considerate to only download content when absolutely necessary for non-public consumption such as educational research or archiving temporarily unavailable videos. If you plan to modify or reuse portions of copyrighted works, it's best to first seek explicit consent from the creators. Under no circumstances should downloaded videos be uploaded again or monetized without authorization.
Always be mindful that Pornhub's terms reserve the right for content partners to disallow downloads of specific files through automated detection systems. While rare, it's still possible some downloads may be blocked. Overall, personal educational downloads for private offline viewing pose minimum legal risk if the standard copyright rules are followed.
## Reasons to Download Pornhub Videos
Beyond simply watching favorites offline, there are many productive reasons why people utilize video downloaders:
- Educational research or referencing - Students can back up instructional tutorials and lectures for extended learning later.
- Archival of rare or valuable videos - Downloads help preserve vulnerable content at risk of deletion by the original uploaders.
- Device compatibility - Converting formats through downloaders allows optimized playback on a variety of mobile phones, laptops, and media players.
- Internet outages - Saving videos locally ensures uninterrupted access during periods with no connectivity, like plane flights or remote areas.
- Travel data limits - Downloading before trips saves cellular data that streaming could otherwise consume beyond monthly caps.
- Enjoyment on multiple screens - Families can collectively view shared media downloaded to a home PC or entertainment system.
## Conclusion
In conclusion, Pornhub downloaders provide immense value for both personal and professional use cases through their convenient features. By respecting copyright guidelines, users can leverage these legitimate tools to freely curate, archive and enjoy audiovisual content from the world's largest public media library. Whether for study, work, or pleasure, downloading extends Pornhub's expansive reach directly to any device, anywhere, offline or in areas with limited connectivity. With a trusted downloader, the full potential of this digital video universe becomes seamlessly accessible.
| cris_dejohn_ec017df54c2b4 |
1,915,174 | The React useRef Hook: Not Just for DOM Elements | In this post, we'll cover what the useRef hook is, some examples of how it can be used, and when you... | 26,157 | 2024-07-08T04:38:17 | https://dev.to/opensauced/the-react-useref-hook-not-just-for-html-elements-3cf3 | react, javascript, webdev | In this post, we'll cover what the `useRef` hook is, some examples of how it can be used, and when you shouldn't use it.
## What is useRef?
The `useRef` hook creates a reference object that holds a mutable value, stored in its [current](https://react.dev/reference/react/useRef#referencing-a-value-with-a-ref) property. This value can be anything from a DOM element to a plain object. Unlike component state managed via, say, the [useState](https://react.dev/reference/react/useState) hook, changes to a reference object created with `useRef` won't trigger a re-render of your component, improving performance.
## Examples
### Referencing a DOM element using the useRef Hook
In React, state manages data that can trigger re-renders. But what if you need a way to directly access [document object model](https://developer.mozilla.org/en-US/docs/Web/API/Document_Object_Model/Introduction) (DOM) elements that shouldn't cause re-renders? That's where the [useRef](https://react.dev/reference/react/useRef) hook comes in.
Typically, you'd do something like this.
```typescript
import { useEffect, useRef } from "react";
export const SomeComponent = () => {
const firstNameInputRef = useRef<HTMLInputElement>(null);
// for plain JavaScript change the above line to
// const firstNameInputRef = useRef(null);
useEffect(() => {
firstNameInputRef.current?.focus();
}, [firstNameInputRef.current]);
return (
<form>
<label>
First Name:
<input type="text" ref={firstNameInputRef}/>
</label>
</form>
);
}
```
1. We create a variable named `firstNameInputRef` using `useRef` to reference the DOM element (initially null) and use `useEffect` to focus the input element on the initial render.
1. Inside `useEffect`, we check if `firstNameInputRef.current` exists (it will be the actual DOM element after the initial render). If it does, we call `focus()` to set focus on the input.
1. The dependency array `[firstNameInputRef.current]` ensures `useEffect` only runs once when the reference changes (i.e., after the initial render).
### Referencing a non-DOM element using the useRef Hook
Recently, I was working on Open Sauced's [StarSearch](https://opensauced.pizza/blog/open-source-insights-with-starsearch), a Copilot for git history feature we released at the end of May 2024. You can read more about StarSearch in the blog post below.
{% embed https://dev.to/bekahhw/introducing-starsearch-unlock-the-copilot-for-git-history-5ddb %}
The ask was to be able to start a new StarSearch conversation. To do so, I had to stop the current conversation. If you've worked with the [OpenAI](https://openai.com/index/openai-api/) API or similar APIs, they typically return a [ReadableStream](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStream) as a response.
A ReadableStream is a web API that allows data to be read in chunks as it becomes available, enabling efficient processing of large or real-time data sets. In the context of API responses, this means we can start handling the data immediately, without waiting for the entire response to complete.
I initially had this feature working, but ran into issues if a response was already streaming. The solution: create a reference to the readable stream via the `useRef` hook, and when a new conversation is started, cancel the one in progress. You can see these changes in the pull request (PR) below.
{%embed https://github.com/open-sauced/app/pull/3637 %}
So now, if someone presses the _Create a New Conversation_ button, I cancel the current streaming response from StarSearch, e.g.
```typescript
const streamRef = useRef<ReadableStreamDefaultReader<string>>();
// for plain JavaScript change the above line to
// const streamRef = useRef();
...
const onNewChat = () => {
streamRef.current?.cancel();
...
};
...
```
1. We create a variable named `streamRef` using `useRef` to hold a reference to the current [ReadableStreamDefaultReader](https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamDefaultReader).
1. The `onNewChat` function checks if `streamRef.current` exists (meaning a stream is ongoing).
1. If a stream exists, we call `cancel()` on `streamRef.current` to stop it before starting a new conversation.
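The cancel-before-restart pattern described above can be sketched outside React with plain `ReadableStream` APIs. This is a minimal, framework-free sketch: `startConversation` and the module-level `currentReader` variable are hypothetical stand-ins for the hook's handler and `streamRef.current`.

```typescript
// Minimal sketch of the cancel-the-previous-stream pattern, framework-free.
// `currentReader` plays the role of `streamRef.current` from the hook example.
let currentReader: ReadableStreamDefaultReader<string> | undefined;

async function startConversation(
  stream: ReadableStream<string>
): Promise<string[]> {
  // Stop any stream still in flight before starting the new one.
  await currentReader?.cancel();
  currentReader = stream.getReader();

  const chunks: string[] = [];
  while (true) {
    const { value, done } = await currentReader.read();
    if (done) break;
    chunks.push(value);
  }
  return chunks;
}
```

Calling `startConversation` a second time while a stream is still being read cancels the first reader, which is exactly what pressing _Create a New Conversation_ does in the component.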
## Wrapping Up
`useRef` was the perfect solution for my use case. Maybe you'll find the `useRef` hook useful for something other than referencing a DOM element as well.
You can store almost anything in a reference object via the `useRef` hook, and it won't cause re-renders in your component. If you're persisting component state, opt for `useState` or other hooks like `useReducer` so that the component does re-render.
For further reading on the `useRef` hook, I highly recommend checking out the React documentation for the [useRef hook](https://react.dev/reference/react/useRef).
Stay saucy peeps!
If you would like to know more about my work in open source, [follow me on OpenSauced](https://oss.fyi/nickytonline).
| nickytonline |
1,915,175 | how to overcome 413 error code in | I am using React and Express for my application. I am uploading a camera file using the JS API and then... | 0 | 2024-07-08T03:47:30 | https://dev.to/mohammed_mubeen/how-to-overcome-413-error-code-in-3gmi | | I am using React and Express for my application. I am uploading a camera file using the JS API and then uploading PDF and PNG files in a single payload object, but I get a 413 error. How can I overcome this issue? | mohammed_mubeen |
1,915,178 | Website Design in Hà Tĩnh Suitable for Every Industry | Today, the Internet covers every field. Searching for information on the Internet or buying, selling, and doing bus... | 0 | 2024-07-08T03:51:49 | https://dev.to/terus_technique/thiet-ke-website-tai-ha-tinh-phu-hop-moi-nganh-nghe-4bab | website, digitalmarketing, seo, terus |

Today, the Internet covers every field. Searching for information online or buying and selling online is no longer unfamiliar to anyone, not only in Hà Tĩnh but everywhere else. Businesses that want to survive and grow need ways to adapt and change with customer trends.
And a website in Hà Tĩnh is one of the most important marketing tools of the Industry 4.0 era. Creating and developing a dedicated website for your business not only helps more people learn about your brand but also makes it easy to improve operational efficiency.
A website not only establishes a business's presence on the internet, but also brings many other benefits, such as maximizing opportunities to reach customers, promoting the brand without limits, serving customers effectively, and acting as a flexible communication channel.
Terus, with many years of experience in website design, is committed to delivering [professional website design solutions built to SEO standards and customer insight](https://terusvn.com/thiet-ke-website-tai-hcm/) to businesses in Hà Tĩnh. What will a business receive when partnering with Terus?
First, the business gets a beautiful, exclusive website interface that matches its brand. The website is designed to SEO and mobile standards and is responsive, ensuring an optimal user experience on every device. In addition, the website comes fully equipped with the necessary features, along with a simple, easy-to-use administration system.
With its experience, team of experts, and professional website design process, Terus is committed to delivering optimal website solutions in Hà Tĩnh, contributing to the growth of local businesses. If you are looking for a [reputable website design company in Hà Tĩnh](https://terusvn.com/thiet-ke-website-tai-hcm/), contact Terus for consultation and support.
Learn more about [Website Design in Hà Tĩnh Suitable for Every Industry](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-ha-tinh/)
Services at Terus:
Digital Marketing:
· [Facebook Ads Service](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
· [Google Ads Service](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
· [Comprehensive SEO Service](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/)
Website design:
· [Insight-Standard Website Design Service](https://terusvn.com/thiet-ke-website/dich-vu-thiet-ke-website-chuan-insight-chuyen-nghiep-uy-tin-tai-terus/)
· [Website Design Service](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_technique |
1,915,180 | Self-Hosting Perplexica and Ollama | Perplexica and Ollama Setup Are you in the self-hosted camp, enjoying Ollama, and... | 0 | 2024-07-08T03:57:59 | https://www.saltyoldgeek.com/posts/perplexica-with-ollama/ | ollama, perplexica, perplexityai | ## Perplexica and Ollama Setup
Are you in the self-hosted camp, enjoying Ollama, and wondering when we'd have something like Perplexity AI but local, and maybe a bit more secure? I had been keeping an eye out when I came across an article on [MARKTECHPOST](https://www.marktechpost.com/2024/06/09/perplexica-the-open-source-solution-replicating-billion-dollar-perplexity-for-ai-search-tools/) about [Perplexica](https://github.com/ItzCrazyKns/Perplexica). So I decided to take a crack at it. There were a few issues I encountered that we'll work around in the Perplexica setup; aside from the configuration, there was a property in the code that we need to address. Let's dive in.
## Ollama Install and Setup
To begin with Ollama, follow these steps:
1. Run the installation script using
```bash
curl -fsSL https://ollama.com/install.sh | sh
```
2. Pull the latest version of Llama3 using
```bash
ollama pull llama3:latest
```
3. Pull the latest version of Nomic-Embed-Text using
```bash
ollama pull nomic-embed-text:latest
```
4. Edit the Ollama service file by running `sudo systemctl edit ollama.service` and adding the following lines
```text
[Service]
Environment="OLLAMA_HOST=0.0.0.0"
```
5. Reload the systemd daemon using
```bash
sudo systemctl daemon-reload
```
6. Restart the Ollama service using
```bash
sudo systemctl restart ollama
```
## Perplexica Setup
To set up Perplexica, follow these steps:
1. Clone the Perplexica repository using
```bash
git clone https://github.com/ItzCrazyKns/Perplexica.git
```
2. Copy the sample configuration file to a new file named config.toml using
```bash
cp sample.config.toml config.toml
```
3. Open config.toml in a text editor (such as nano) and make the following changes:
Change the OLLAMA line to:
```toml
OLLAMA = "http://server-ip:11434"
```
Comment out the server for SEARXNG, then press CTRL+X to exit and Y to save
4. Open `ui/components/theme/Switcher.tsx` in a text editor (such as nano) and make the following changes to line 10
```text
const ThemeSwitcher = ({ className, size }: { className?: string; size?: number }) => {
```
Then press CTRL+X, then Y to save the file
5. Open `docker-compose.yml` in a text editor (such as nano) and make the following changes
```text
SEARXNG_API_URL=http://server-ip:4000
NEXT_PUBLIC_API_URL=http://server-ip:3001/api
NEXT_PUBLIC_WS_URL=ws://server-ip:3001
```
6. Build and start the Perplexica container using
```bash
docker compose up -d --build
```
7. Access Perplexica by visiting `http://server-ip:3000` in your web browser
That's it! With these steps, you should be able to set up both Perplexica and Ollama on your system. If you found this helpful, please share this post, donate to my Buymeacoffee, or clap if you're reading this on Medium. Till next time, fair winds and following seas!
| blacknight318 |
1,915,181 | Rendering Modes Explained | In this article, we will explore the different rendering modes for a web application, commonly seen... | 0 | 2024-07-10T14:02:04 | https://dev.to/andresilva-cc/rendering-modes-explained-2711 | webdev, javascript, frontend | In this article, we will explore the different **rendering modes** for a web application, commonly seen in meta-frameworks like **Next.js** and **Nuxt**. Understanding these modes is **crucial** for developers seeking to **optimize performance** and **user experience**.
For demonstration purposes, we will use a project I've developed in Nuxt 3, which makes it easier to see the differences between the rendering modes.
## Client-Side Rendering (CSR)
Modern JavaScript libraries and frameworks like **Vue.js** and **React.js** brought many improvements to the front-end ecosystem, such as **easier componentization** and a **reactivity system**, but they also introduced their own challenges.
When we talk about **Client-Side Rendering** **(CSR)**, it's how those frameworks typically operate **by default**. Rendering occurs entirely on the **client-side**, within the browser. This means that the server delivers a **blank page** and then the **JavaScript** needs to be downloaded and executed to **render the UI** to the user. Consequently, there's a period during which the user sees **no content**, which can vary based on the user's network speed and hardware.
In addition to the **poor user experience** inherent in this rendering mode, **SEO** is also impacted because web crawlers must **wait for the page to be fully rendered** before they can index it.
This mode is also commonly referred to as **SPA (Single Page Application)**.
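To make the "blank page" concrete, here is a minimal sketch of what a CSR response looks like before and after the JavaScript runs. The `mountApp` helper and the markup are hypothetical stand-ins for a framework's bootstrapping step (something like `createApp(...).mount('#app')` in Vue).

```typescript
// What a CSR server actually returns: an empty shell with no content.
const serverResponse = `<div id="app"></div>`;

// Stand-in for the framework bootstrapping that runs in the browser;
// only after this executes does the user see any content.
function mountApp(shell: string): string {
  return shell.replace(
    `<div id="app"></div>`,
    `<div id="app"><h1>Hello from the client</h1></div>`
  );
}
```

Everything between receiving `serverResponse` and `mountApp` finishing is the window where the user stares at a blank page.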

**Pros:**
- Lesser complexity
- Can be hosted on a static server
**Cons:**
- Blank page while JavaScript is not executed
- Bad for SEO
### Demonstration
{% embed https://www.youtube.com/watch?v=wYogjDe9YoA %}
## Server-Side Rendering (SSR)
The meta-frameworks were created to mainly solve that problem. When we talk about “meta-framework”, it usually means a framework that is built on top of another, providing further abstraction and tools. For example, **Next.js** is a meta-framework for **React.js** and **Nuxt** is a meta-framework for **Vue.js**.
How they solve that problem is by introducing **Server-Side Rendering (SSR)**. When the browser requests a page from the server, instead of the server responding with a **blank page**, it runs the framework on the **server-side** to **render the page**. This means that the user doesn't see a blank page anymore, but a **fully-rendered page** without executing any JavaScript on the browser.
Of course, that page is completely static, so there's **no interactivity**. The process of turning the static page into an interactive page is called **hydration**. Basically, what this means is that the framework is run again on the browser side, so it **binds all the** **event listeners** in the DOM and makes the page **interactive**.
An important point to understand is that usually, our **API calls** need to be made on the **server** so that the data is present to **render the page**. The meta-frameworks usually introduce some sort of **hook or function** that makes all those requests run on the server and not be duplicated in the client.
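As a rough illustration of that idea, a data hook can record server-fetched results in a payload shipped to the client, so hydration reuses them instead of refetching. This is a hedged sketch: `useAsyncDataSketch` and the `payload` object are hypothetical simplifications of what hooks like Nuxt's `useAsyncData` do.

```typescript
// Serialized state the server would embed in the HTML for the client to reuse.
const payload: Record<string, unknown> = {};

async function useAsyncDataSketch<T>(
  key: string,
  fetcher: () => Promise<T>
): Promise<T> {
  // If the server already fetched this key, reuse it instead of refetching.
  if (key in payload) return payload[key] as T;
  const data = await fetcher();
  payload[key] = data;
  return data;
}
```

Called once during server rendering and again during hydration with the same key, the fetcher itself only runs once.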

**Pros:**
- Good for SEO
- Page content available immediately
**Cons:**
- Greater complexity
- Needs a server with support
- Bigger processing cost
### Demonstration
{% embed https://www.youtube.com/watch?v=KrMVfwJ9MfM %}
## Static Site Generation (SSG)
Another feature that the meta-frameworks introduced is **Static Site Generation (SSG)**. **Static** means that the page doesn't depend on any dynamic data.
For example, imagine a **profile page**. A profile page should not be static because the content of the page is different based on the user details. On the other hand, a page that describes the **terms and conditions** of your application is always going to be the same, because it doesn't have any dynamic data that needs to be fetched on the server-side.
For those cases, it's a good option to opt for **SSG**. Those pages are going to be **rendered at** **build time** and will not change, so the server uses **fewer resources** to deliver those pages to the client.
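Conceptually, SSG boils down to rendering every static route once at build time and saving the result. A minimal sketch, with `generateStaticSite` as a hypothetical helper; a real build would write the HTML out to disk instead of returning it.

```typescript
// Render each static route exactly once at build time.
function generateStaticSite(
  routes: Record<string, () => string>
): Map<string, string> {
  const pages = new Map<string, string>();
  for (const [path, render] of Object.entries(routes)) {
    pages.set(path, render()); // a real build writes this HTML to a file
  }
  return pages;
}
```

At request time the server then only serves the prebuilt files, which is why SSG pages are so cheap to deliver.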

**Pros:**
- Good for SEO
- Page content available immediately
- Can be hosted on a static server
**Cons:**
- Greater complexity
- Limitation with dynamic route parameters
- Dynamic content not updated after build
### Demonstration
{% embed https://www.youtube.com/watch?v=Uh7mBQL0FIE %}
## Incremental Static Regeneration (ISR)
The last option and the most recent one introduced by meta-frameworks is **Incremental Static Regeneration (ISR)**. One easy way to describe it is that it is a mix of **SSR** and **SSG** because it **renders the page**, **caches it**, and **revalidates it** after a specific time interval.
For example, imagine a page with the most recent blog posts. New posts aren't created every second, so it makes sense to **render the page**, **cache it**, and after 1 minute **re-render the page**. While the **TTL (Time To Live)** doesn't expire, the server will deliver the cached page.
When using **ISR**, it's up to you to define the TTL of the page, so choose a setting that makes sense for the page in question.
Other terms that may be used are: **Incremental Static Generation (ISG)** and **Stale-While-Revalidate (SWR)**.
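The render-cache-revalidate cycle can be sketched as a tiny TTL cache. This is a hedged sketch: `makeIsrCache` is hypothetical, and real frameworks typically revalidate in the background (serving the stale page while re-rendering) rather than on the request itself.

```typescript
type CacheEntry = { html: string; renderedAt: number };

// Serve a cached render until the TTL expires, then re-render.
function makeIsrCache(render: () => string, ttlMs: number) {
  let entry: CacheEntry | undefined;
  return function getPage(now: number = Date.now()): string {
    if (!entry || now - entry.renderedAt >= ttlMs) {
      entry = { html: render(), renderedAt: now };
    }
    return entry.html;
  };
}
```

With a 60-second TTL, requests within the minute hit the cache, and the first request after it triggers a fresh render, matching the blog-posts example above.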

**Pros:**
- Good for SEO
- Page content available immediately
- Smaller processing cost when compared to SSR
**Cons:**
- Greater complexity
- Needs a server with support
### Demonstration
{% embed https://www.youtube.com/watch?v=iNC9fqek8wk %}
## Project
As I've mentioned at the beginning of the article, this is the project I've used to demonstrate the rendering modes:
**Website:** https://renderingmodes.andresilva.cc/
**Repository:** https://github.com/andresilva-cc/demo-nuxt3-rendering-modes | andresilva-cc |
1,915,182 | 4 Essential UI Design Tips That Will Transform Your Interfaces | Starting from scratch Starting a new design completely from the ground up can be a... | 0 | 2024-07-08T03:59:56 | https://dev.to/syedumaircodes/4-essential-ui-design-tips-that-will-transform-your-interfaces-5e5p | beginners, ux, ui, productivity | ## Starting from scratch
Starting a new design completely from the ground up can be a challenging task if you don't have the complete requirements or haven't decided on the features that your product will have.
Figure out the main components of your product first and then get to designing. If you spend time designing a cool navigation component but later find that the product doesn't need one, your effort will be wasted.
You can always use that component in another design, but you designed something that wasn't needed, and that can be quite a blunder when you're working with tight deadlines.
> Focusing on the main features and components will help you save time and also get your work done faster.
## Using a Design System
If you have a pre-defined color scheme or a design system that you will use for the aesthetics of your design then start using it from the beginning so that you can incorporate it into your design easily.
If not, don't stress over it. Start your design with black and white colors and then add colors and accents at the end so that you don't waste time experimenting with colors while your design isn't even finished.
Creating designs in monotone or grayscale will allow you to experiment with the spacing, contrast and sizing of the design, and you'll end up with a cleaner design that you can add color to whenever you want.
## Ship what's necessary
There are a ton of features you can add to your product, whether or not they are actually needed.
At the start, add only the necessary, core components, then listen to user feedback about which features they need and how they need them, and then design and add those features.
Doing so will keep your product lean and make users happy, since you're only shipping the features they need and not unnecessary ones that drive users away.
## Design Accordingly
The vibe of your designs should match the main user demographic. If you're making a corporate or finance product, the vibe should be more professional, with simple colors and fonts and to-the-point components that keep the design in line with the company or business.
On the other hand, a product for a startup or a product that focuses on the youth will have a totally different vibe to it. It will have wild design, large-size typography, animations, fun little interactions, and other interactive elements that catch the attention of the viewer.
---
Which of these tips resonated with you the most? Share your thoughts in the comments and let me know how you plan to apply it in your next UI design project. Don't forget to like and share if you found this helpful! | syedumaircodes |
1,915,183 | Professional Website Design in Hải Dương | Professional, modern website design services are not only a tool for promoting a bran... | 0 | 2024-07-08T04:00:04 | https://dev.to/terus_technique/thiet-ke-website-tai-hai-duong-chuyen-nghiep-2koj | website, digitalmarketing, seo, terus |

[Professional and modern website design services](https://terusvn.com/thiet-ke-website-tai-hcm/) are not only a tool for promoting a brand, products, and services, but also a channel for reaching potential customers and increasing engagement. For businesses in Hải Dương, investing in building a reputable, optimized website brings many benefits, such as:
Establishing an online presence and increasing the professionalism and credibility of the brand.
Making the most of opportunities to reach and attract new customers online.
Providing an unlimited advertising and marketing channel that saves costs compared to traditional media.
Improving the customer experience through convenient features and effective online support services.
Becoming a flexible communication channel that helps the business easily update information and interact with customers.
With the website design service in Hải Dương provided by Terus, what will businesses receive?
An exclusive website interface in the business's own style, making a good first impression on customers.
SEO standards and optimized display on mobile devices, meeting modern standards.
All the features needed for business operations, management, and customer interaction.
An easy-to-use website administration system that lets the business easily update and manage content.
Terus is confident in being one of the leading companies in website design and development in Hải Dương. With a team of experienced experts, Terus is committed to delivering [optimal website design solutions that meet every need](https://terusvn.com/thiet-ke-website-tai-hcm/) of the business.
Learn more about [Professional Website Design in Hải Dương](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-hai-duong/)
Services at Terus:
Digital Marketing:
· [Facebook Ads Service](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
· [Google Ads Service](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
· [Comprehensive SEO Service](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/)
Website design:
· [Insight-Standard Website Design Service](https://terusvn.com/thiet-ke-website/dich-vu-thiet-ke-website-chuan-insight-chuyen-nghiep-uy-tin-tai-terus/)
· [Website Design Service](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_technique |
1,915,184 | High-Quality Website Design in Hải Phòng | Benefits of SEO-standard website design in Hải Phòng. A bridge between the company and customers: A... | 0 | 2024-07-08T04:02:21 | https://dev.to/terus_technique/thiet-ke-website-tai-hai-phong-chat-luong-cao-1d1o | website, digitalmarketing, seo, terus |

Benefits of SEO-standard website design in Hải Phòng
A bridge between the company and customers: A professional website is an effective bridge that helps customers easily find, reach, and interact with your business.
A free, sustainable advertising channel: Your website becomes an effective advertising channel, promoting your business's brand, products, and services sustainably and at no cost.
No limits on selling time or location: With a website, a business can sell 24/7 and expand its reach beyond Hải Phòng to the whole country and even globally.
Competing with rivals: A professional website helps your business stand out and attract customers more effectively than its competitors.
Effective communication and sales: A website helps a business communicate and interact with customers professionally and build trust, thereby increasing conversion rates and sales.
Website design in Hải Phòng by Terus - what will you get?
A beautiful, exclusive interface for your business: With a creative design team, Terus gives your business a unique website interface that attracts customers from the very first glance.
SEO-standard, mobile-standard, responsive: Your website is designed to SEO standards for easy discovery and display on mobile devices, delivering an optimal user experience.
A fully featured design: Terus designs your website with all the necessary features, from the introduction, products/services, and news sections to the contact page, improving the user experience.
An easy-to-use admin system: You receive a friendly website administration system that lets you easily update content and manage the website effectively.
Terus is proud to be a [professional, reputable website design provider in Hải Phòng](https://terusvn.com/thiet-ke-website-tai-hcm/), with many years of experience in this field. We have successfully designed hundreds of websites for businesses in Hà Giang and across the country, meeting every customer need.
With a rigorous process and years of experience, Terus is committed to bringing businesses in Hải Phòng [professional, cost-optimized website services](https://terusvn.com/thiet-ke-website-tai-hcm/), contributing to business growth.
Learn more about [Eye-Catching Website Design in Hải Phòng](https://terusvn.com/thiet-ke-website/thiet-ke-website-tai-hai-phong/)
Các dịch vụ tại Terus:
Digital Marketing:
· [Dịch vụ Facebook Ads](https://terusvn.com/digital-marketing/dich-vu-facebook-ads-tai-terus/)
· [Dịch vụ Google Ads](https://terusvn.com/digital-marketing/dich-vu-quang-cao-google-tai-terus/)
· [Dịch vụ SEO Tổng Thể](https://terusvn.com/seo/dich-vu-seo-tong-the-uy-tin-hieu-qua-tai-terus/)
Thiết kế website:
· [Dịch vụ Thiết kế website chuẩn Insight](https://terusvn.com/thiet-ke-website/dich-vu-thiet-ke-website-chuan-insight-chuyen-nghiep-uy-tin-tai-terus/)
· [Dịch vụ Thiết kế website](https://terusvn.com/thiet-ke-website-tai-hcm/) | terus_technique |
1,915,185 | What is SEO Offpage | SEO Offpage, also known as SEO Offsite, is a technique for optimizing factors outside a website to... | 0 | 2024-07-08T04:06:47 | https://dev.to/khoahocseoimta1/seo-offpage-la-gi-3ckj | khoahocseoimta, daotaoseoimta, daotaoseotphcm, seooffpage | SEO Offpage, also known as SEO Offsite, is a technique for optimizing off-site factors to improve rankings on search engine results pages (SERP). The main factors include:
Building external links (backlinks): Creating quality links from reputable websites pointing to your site, boosting its credibility and authority.
Developing a social presence (social entity): Leveraging social media platforms to interact and build relationships with potential customers, generating organic traffic and interest in the brand.
Social bookmarking: Using social bookmarking sites to mark and share important content, increasing the site's visibility and engagement.
Implementing an SEO Offpage strategy brings several important benefits to a website:
Improved EEAT (Expertise, Authoritativeness, Trustworthiness): Affirms the site's expertise, authority, and trustworthiness in the eyes of search engines.
Increased traffic: Attracts high-quality traffic from external links and social networks.
More conversions: Turns traffic into real customers, improving business results.
Greater brand awareness: Extends the brand's reach and recognition through off-site activities.
SEO Offpage is an indispensable part of an overall SEO strategy, ensuring that a website is not only technically optimized but also has a strong, reputable presence in the online community.
See an overview of SEO Offpage in this article: [SEO Offpage là gì](https://imta.edu.vn/seo-offpage-la-gi/)
Contact information
IMTA Co., Ltd.
Phone: 028 2269 9899
Email: info@imta.edu.vn
Address: Charmington La Pointe Building, 181 Cao Thang (extension), Ward 12, District 10, Ho Chi Minh City, Vietnam
Website: [https://imta.edu.vn/](https://imta.edu.vn/)
SEO course: [https://imta.edu.vn/khoa-hoc-seo-website/](https://imta.edu.vn/khoa-hoc-seo-website/)
#khoahocseoimta #daotaoseoimta #daotaoseotphcm
Social:
[https://khoahocseowebsite.bravesites.com/entries/general/seo-off-page-la-gi](https://khoahocseowebsite.bravesites.com/entries/general/seo-off-page-la-gi)
[https://diigo.com/0wseyj](https://diigo.com/0wseyj)
[https://www.4shared.com/s/fhTVrw5Jdge](https://www.4shared.com/s/fhTVrw5Jdge)
[https://online.pubhtml5.com/timwy/dnma/](https://online.pubhtml5.com/timwy/dnma/)
[https://diendannhansu.com/threads/seo-offpage-la-gi.474075/](https://diendannhansu.com/threads/seo-offpage-la-gi.474075/)
[https://telegra.ph/SEO-Offpage-la-gi-07-08](https://telegra.ph/SEO-Offpage-la-gi-07-08) | khoahocseoimta1 |
1,915,186 | Developing a GROWI Plug-in (Template Edition) | Explain the procedure for developing a GROWI plug-in. This is as much as we know, as we have not yet grasped the whole process, but please use it as a reference during development. | 0 | 2024-07-08T04:07:38 | https://dev.to/goofmint/developing-a-growi-plug-in-template-edition-40jp | opensource, growi, markdown | ---
title: Developing a GROWI Plug-in (Template Edition)
published: true
description: Explain the procedure for developing a GROWI plug-in. This is as much as we know, as we have not yet grasped the whole process, but please use it as a reference during development.
tags:
- OSS
- GROWI
- Markdown
# cover_image: https://direct_url_to_image.jpg
# Use a ratio of 100:42 for best results.
# published_at: 2024-06-21 05:04 +0000
---
[GROWI](https://growi.org/ja/), an open source in-house wiki, has a plug-in feature. It can be used to display your own data or to customize the display.
In this article, I will explain the procedure for developing a GROWI plug-in. This is as much as we know, as we have not yet grasped the whole process, but please use it as a reference during development.
## Types of plug-ins
There are three types of GROWI plug-ins:
- script
- theme
- template
This time, we will focus on templates.
## Notes
GROWI also has a built-in [page template function](https://docs.growi.org/ja/guide/features/template.html#%E9%9A%8E%E5%B1%A4%E3%81%AB%E5%AF%BE%E3%81%97%E3%81%A6%E3%83%86%E3%83%B3%E3%83%95%E3%82%9A%E3%83%AC%E3%83%BC%E3%83%88%E3%82%92%E9%81%A9%E7%94%A8%E3%81%99%E3%82%8B%E6%96%B9%E6%B3%95). The templates discussed in this article, by contrast, are templates that can be used when creating any page, and they can be published and shared online as plugins.
## Templates
We have created a template that can be used to create plug-ins. This is the basis for the explanation.
[goofmint/growi-plugin-templates-template](https://github.com/goofmint/growi-plugin-templates-template)
## Plugin configuration
No coding knowledge is required for the template plugin. You can edit it under the `dist` folder. The structure is as follows.
```
% tree .
.
└── example
    ├── en_US
    │   ├── meta.json
    │   └── template.md
    └── ja_JP
        ├── meta.json
        └── template.md

3 directories, 4 files
```
## Create a new template
Copy or rename the `example` folder. You can name it anything you like, but make sure it is easy to understand for later maintenance. You can create multiple templates.
## Rename the package
The package name is defined in `package.json`. Change it first.
```js
{
  "name": "growi-plugin-templates-for-template", // fix here
  "version": "1.0.0",
  "description": "GROWI template plugin for template", // fix here
  // omitted
}
```
## Fix per locale
GROWI supports Japanese and English by default. There are `en_US` and `ja_JP` folders for each locale. To add other languages, modify `package.json`. Currently, GROWI also supports `zh_CN` and `fr_FR`.
```javascript
{
  // omitted
  "growiPlugin": {
    "schemaVersion": "4",
    "types": [
      "template"
    ],
    "locales": [
      "en_US", "ja_JP" // add here
    ]
  }
}
```
## Fix the metafile
In the metafile `meta.json`, set the name of the template listing.
```js
{
  "title": "Example title" // fix here
}
```
## Modify template
The content of the template is `template.md`. Feel free to edit the content.
```markdown
## Example template
Describe the contents of a great template here!
```
You are now free to create your own template.
## Create template content
Template content is actually easier to write in GROWI itself. Check that the rendered result looks right, then copy it into `template.md`.

## Using the template
Here are the steps to use the template you have created.
### Notes
The template must be published as a Git repository.
### Installation
Please enter the URL of the Git repository.

### Use
To use, go to the Edit Page screen and click the file icon at the bottom.

A list of templates will be displayed, from which you can select the template you wish to use. You can also specify the locale.

## Summary
By using template plug-ins, you can quickly create pages with a common structure. A comprehensive page template also helps you avoid omissions in documentation and overlooked considerations.
Please make use of templates.
[GROWI, an OSS development wiki tool | comfortable information sharing for all](https://growi.org/ja/) | goofmint |
1,915,187 | Demystifying Volatility: Calculating the Relative Volatility Index (RVI) for Crypto Trading | Volatility – the lifeblood of any crypto market – can be a double-edged sword. While it presents... | 0 | 2024-07-08T04:10:09 | https://dev.to/epakconsultant/demystifying-volatility-calculating-the-relative-volatility-index-rvi-for-crypto-trading-5ea7 | trading | Volatility – the lifeblood of any crypto market – can be a double-edged sword. While it presents opportunities for profit, it also carries significant risk. The Relative Volatility Index (RVI) emerges as a valuable tool for crypto traders, offering insights into a cryptocurrency's relative volatility and potentially guiding entry and exit points.
Unveiling the RVI: A Glimpse into Volatility
The RVI is a technical indicator that ranges from 0 to 100, with higher values signifying higher relative volatility and vice versa. It essentially compares the dispersion (standard deviation) of price gains (up closes) to the dispersion of price losses (down closes) over a chosen timeframe.
Here's the core formula for calculating RVI:
RVI = 100 - 100 / ( 1 + ( Standard Deviation of Up Closes / Standard Deviation of Down Closes ) )
Let's break it down step-by-step:
1. Identify Up and Down Closes: Over your chosen timeframe (e.g., 14 days), determine the closing prices that were higher than the previous day's close (up closes) and those that were lower (down closes).
2. Calculate Standard Deviation: Calculate the standard deviation for both the up closes and down closes. Standard deviation measures the dispersion of data points from the average.
3. Relative Volatility Ratio: Divide the standard deviation of up closes by the standard deviation of down closes. This ratio reflects the relative strength of upward and downward price movements.
4. Normalize the Value: Finally, apply the formula to convert the ratio into an RVI value between 0 and 100.
Pro Tip: Many trading platforms offer built-in RVI indicators, automating the calculations and displaying the RVI value on your charts.
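If you want to compute the value yourself, the four steps above can be sketched in a few lines of Python. This is a minimal sketch of the article's simplified RVI formula, not a library implementation; the function name, the population standard deviation, the 14-period default, and the fallback values when there are no varied down closes are all assumptions.

```python
import math

def stdev(values):
    # Population standard deviation; returns 0 for fewer than two values.
    if len(values) < 2:
        return 0.0
    mean = sum(values) / len(values)
    return math.sqrt(sum((v - mean) ** 2 for v in values) / len(values))

def rvi(closes, period=14):
    # RVI over the last `period` price changes, following
    # RVI = 100 - 100 / (1 + sd_up / sd_down).
    if len(closes) < period + 1:
        raise ValueError("need at least period + 1 closing prices")
    window = closes[-(period + 1):]
    # Step 1: split price changes into up closes and down closes
    ups = [b - a for a, b in zip(window, window[1:]) if b > a]
    downs = [a - b for a, b in zip(window, window[1:]) if b < a]
    # Step 2: standard deviation of each group
    sd_up, sd_down = stdev(ups), stdev(downs)
    # Steps 3-4: ratio, then normalize to 0..100 (assumed fallbacks when sd_down is 0)
    if sd_down == 0:
        return 100.0 if sd_up > 0 else 50.0
    return 100 - 100 / (1 + sd_up / sd_down)
```

A window containing only varied up closes saturates at 100, matching the reading that higher values signify higher relative volatility on the upside.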
[Demystifying Candlesticks: Unveiling the Power of Heikin Ashi for Trading Success: Heikin Ashi Mastery](https://www.amazon.com/dp/B0CVNPBTMK)
Interpreting the RVI for Trading Decisions:
- RVI Above 50: Generally indicates a period of higher relative volatility, potentially suggesting increased buying or selling pressure. This could be an opportunity to enter a trade in the direction of the prevailing trend, but caution is advised due to the heightened risk.
- RVI Below 50: Suggests a period of lower relative volatility, potentially signifying consolidation or a lack of clear direction. While this might not present immediate high-probability trading opportunities, it can be a good time to refine your entry strategy for the next breakout.
- RVI at Extremes: Extreme highs (above 80) or lows (below 20) can indicate overbought or oversold conditions, respectively. However, be cautious of false signals, especially in highly volatile markets like crypto.
Remember: The RVI is just one tool in your trading toolbox. Consider combining it with other technical indicators like price action analysis or momentum indicators (e.g., StochRSI) for a more comprehensive understanding of market conditions.
Combining RVI with StochRSI for Enhanced Analysis:
The StochRSI (Stochastic Relative Strength Index) is another popular indicator that measures price momentum. Here's how you can combine them:
- RVI for Volatility: Use the RVI to gauge the overall relative volatility of the market.
- StochRSI for Momentum: Utilize the StochRSI to identify potential overbought or oversold conditions within the established volatility range identified by the RVI.
For example, an RVI above 50 coupled with a high StochRSI reading might suggest a potential shorting opportunity (selling high) due to the combination of increased volatility and an overbought signal.
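That rule of thumb can be sketched as a tiny decision helper. The thresholds (RVI 50, StochRSI 0.8/0.2) and the mirrored "potential long" branch are illustrative assumptions, not trading advice:

```python
def combined_signal(rvi_value, stoch_rsi, high_vol=50, overbought=0.8, oversold=0.2):
    # High relative volatility plus an overbought StochRSI hints at a short;
    # the oversold case is an assumed, symmetric long hint.
    if rvi_value > high_vol and stoch_rsi >= overbought:
        return "potential short"
    if rvi_value > high_vol and stoch_rsi <= oversold:
        return "potential long"
    return "no signal"
```

In quiet markets (RVI at or below the threshold) the helper stays out entirely, reflecting the "lower volatility, refine your entry" guidance above.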
Remember: Always conduct your own research and implement proper risk management strategies before making any trading decisions.
| epakconsultant |
1,915,188 | How to add Stripe payment functionality to Next.js App | How to add Stripe payment functionality using the Next.js App Router: 1. Set Up Your... | 0 | 2024-07-08T04:12:59 | https://article.shade.cool/p/34 | webdev, javascript, beginners, programming | How to add Stripe payment functionality using the Next.js App Router:
{% youtube https://youtu.be/cE4PzMonikc?si=Xr2sxJt53lMpyjD4 %}
### 1. **Set Up Your Stripe Account**
- Sign up at [stripe.com](https://stripe.com).
- Complete the setup process including adding business details and bank account.
### 2. **Install Stripe Packages**
- Install the required Stripe packages in your Next.js project:
```bash
npm install stripe @stripe/stripe-js @stripe/react-stripe-js
```
### 3. **Create a Server-Side Route for Payment Intent**
- Use the Next.js App Router to create an API route for generating a payment intent.
```javascript
// app/api/create-payment-intent/route.js
import Stripe from 'stripe';
const stripe = new Stripe(process.env.STRIPE_SECRET_KEY);
export async function POST(request) {
const { amount } = await request.json();
try {
const paymentIntent = await stripe.paymentIntents.create({
amount,
currency: 'usd',
});
return new Response(JSON.stringify({ clientSecret: paymentIntent.client_secret }), {
status: 200,
headers: {
'Content-Type': 'application/json',
},
});
} catch (error) {
return new Response(JSON.stringify({ error: error.message }), {
status: 500,
headers: {
'Content-Type': 'application/json',
},
});
}
}
```
### 4. **Create a Client-Side Payment Form**
- Set up a client-side form to collect payment information using Stripe Elements.
```javascript
// app/page.js
'use client'; // required in the App Router: this component uses React hooks

import { useState, useEffect } from 'react';
import { loadStripe } from '@stripe/stripe-js';
import { Elements, CardElement, useStripe, useElements } from '@stripe/react-stripe-js';
const stripePromise = loadStripe(process.env.NEXT_PUBLIC_STRIPE_PUBLISHABLE_KEY);
function CheckoutForm({ clientSecret }) {
const stripe = useStripe();
const elements = useElements();
const [error, setError] = useState(null);
const [success, setSuccess] = useState(null);
const handleSubmit = async (event) => {
event.preventDefault();
const { error, paymentIntent } = await stripe.confirmCardPayment(clientSecret, {
payment_method: {
card: elements.getElement(CardElement),
},
});
if (error) {
setError(error.message);
} else if (paymentIntent.status === 'succeeded') {
setSuccess('Payment succeeded!');
}
};
return (
<form onSubmit={handleSubmit}>
<CardElement />
<button type="submit" disabled={!stripe}>Pay</button>
{error && <div>{error}</div>}
{success && <div>{success}</div>}
</form>
);
}
export default function Home() {
const [clientSecret, setClientSecret] = useState('');
useEffect(() => {
fetch('/api/create-payment-intent', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ amount: 5000 }), // amount in cents
})
.then((res) => res.json())
.then((data) => setClientSecret(data.clientSecret));
}, []);
return (
clientSecret && (
<Elements stripe={stripePromise} options={{ clientSecret }}>
<CheckoutForm clientSecret={clientSecret} />
</Elements>
)
);
}
```
### 5. **Testing and Deployment**
- Use Stripe's test card numbers to test the integration. For example, `4242 4242 4242 4242` is a valid test card number.
- Once testing is complete, deploy your Next.js application and switch Stripe from test mode to live mode in the dashboard.
By following these steps, you can easily integrate Stripe payments into your Next.js application using the App Router. If you need further customization or encounter any issues, refer to Stripe's detailed [documentation](https://stripe.com/docs) and API references. | sh20raj |
1,915,189 | Navigating Crypto's Choppy Waters: Understanding Risk with Historical Volatility (HV) | The cryptocurrency market, with its dynamic price swings, can be both thrilling and treacherous.... | 0 | 2024-07-08T04:13:04 | https://dev.to/epakconsultant/navigating-cryptos-choppy-waters-understanding-risk-with-historical-volatility-hv-1505 | trading | The cryptocurrency market, with its dynamic price swings, can be both thrilling and treacherous. Investors and traders rely on various tools to navigate this volatility, and Historical Volatility (HV) emerges as a crucial metric for risk assessment. This article dives into HV, how it compares to the Relative Volatility Index (RVI), and its application in making informed crypto trading decisions.
Demystifying Historical Volatility (HV): A Look Back Informs the Future
HV measures the price dispersion of an asset over a specific historical period. It essentially quantifies how much the price has fluctuated on average within that timeframe. A higher HV indicates greater price swings, suggesting potentially higher risk and reward.
[Deciphering Market Movements: Understanding Breakouts, Breakdowns, Uptrends, Downtrends, and Sideways Trends in Trading: Mastering Market Patterns](https://www.amazon.com/dp/B0CW1KS28G)
Here's a key distinction between HV and RVI:
- HV: Focuses solely on past price movements, providing a historical perspective on volatility.
- RVI: Compares the average price gains to average price losses, offering a snapshot of relative volatility in relation to the recent past.
Unveiling the Formula: Calculating Historical Volatility
Calculating HV involves a few steps:
- Gather Closing Prices: Collect historical closing prices for the chosen cryptocurrency over your desired timeframe (e.g., daily closing prices for the past year).
- Calculate Daily Log Returns: For each day, compute the logarithmic return, i.e. the natural logarithm of the current day's closing price divided by the previous day's closing price, ln(current close / previous close).
- Annualize the Returns: As daily returns represent short-term fluctuations, we need to annualize them to reflect a yearly perspective. Multiply the standard deviation of the daily log returns by the square root of the number of trading days in a year (approximately 252).
Pro Tip: Many trading platforms offer built-in HV indicators that automate these calculations and display the HV value on your charts.
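For concreteness, the three steps can be sketched in Python. This is a minimal sketch, not a standard implementation; the function name, the sample (n-1) standard deviation, and the 252 trading-day convention are assumptions.

```python
import math

def historical_volatility(closes, trading_days=252):
    # Annualized historical volatility from a series of daily closing prices.
    if len(closes) < 3:
        raise ValueError("need at least three closing prices")
    # Step 2: daily log returns ln(P_t / P_{t-1})
    log_returns = [math.log(curr / prev) for prev, curr in zip(closes, closes[1:])]
    # Sample standard deviation of the daily log returns
    mean = sum(log_returns) / len(log_returns)
    variance = sum((r - mean) ** 2 for r in log_returns) / (len(log_returns) - 1)
    # Step 3: annualize by the square root of trading days per year
    return math.sqrt(variance) * math.sqrt(trading_days)
```

A flat price series yields 0, while a choppy series yields a positive annualized figure that can be read against the rough HV bands discussed in the interpretation section.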
Interpreting HV for Risk Assessment:
- High HV (above 50%): Suggests a potentially risky investment with significant price swings. While this can present opportunities for high returns, it also carries a greater chance of substantial losses.
- Medium HV (20-50%): Indicates a market with moderate volatility, offering a balance between risk and reward.
- Low HV (below 20%): Suggests a relatively stable market with smaller price fluctuations. While this translates to lower risk, potential returns might also be limited.
Remember: HV is a historical measure and doesn't predict future volatility. Combine it with other technical and fundamental analysis to make informed choices.
Making Trading Decisions Based on HV:
- High HV: May indicate a market ripe for trend-following strategies if you have a high-risk tolerance. However, be prepared for potential losses during pullbacks.
- Medium HV: Offers opportunities for both trend-following and mean reversion strategies (buying low and selling high) depending on the prevailing market sentiment.
- Low HV: Might be suitable for long-term investors seeking capital appreciation with lower risk, but be patient as price movements might be slower.
Comparing HV Across Cryptocurrencies:
Analyzing HV across different cryptocurrencies can help you diversify your portfolio based on risk tolerance. For instance, you might choose to hold a mix of high-risk, high-reward altcoins with a lower HV stablecoin to balance your portfolio's overall volatility.
Remember: Always conduct your own research and implement proper risk management strategies before making any crypto investments.
By understanding HV and incorporating it into your trading strategy, you can gain valuable insights into the inherent risk associated with cryptocurrencies and navigate the ever-changing market landscape with greater confidence.
| epakconsultant |
1,915,190 | Python | Python is an high-level interpreter and object oriented programming language used for various... | 0 | 2024-07-08T04:15:03 | https://dev.to/arokya_naresh_178a488116e/python-2d7e | python, beginners | Python is an high-level interpreter and object oriented programming language used for various applications
It have standard library
| arokya_naresh_178a488116e |